July 2011, 5(3): 593-608. doi: 10.3934/jmd.2011.5.593
Bernoulli equilibrium states for surface diffeomorphisms
Omri M. Sarig
Faculty of Mathematics and Computer Science, The Weizmann Institute of Science, POB 26, Rehovot, Israel
Received May 2011; Revised July 2011; Published November 2011
Suppose $f\colon M\to M$ is a $C^{1+\alpha}$ $(\alpha>0)$ diffeomorphism on a compact smooth orientable manifold $M$ of dimension 2, and let $\mu_\Psi$ be an equilibrium measure for a Hölder-continuous potential $\Psi\colon M\to \mathbb R$. We show that if $\mu_\Psi$ has positive measure-theoretic entropy, then $f$ is measure-theoretically isomorphic mod $\mu_\Psi$ to the product of a Bernoulli scheme and a finite rotation.
Keywords: countable Markov partitions, surface diffeomorphisms, Bernoulli, equilibrium measures.
Mathematics Subject Classification: Primary: 37D35; Secondary: 37D2.
Citation: Omri M. Sarig. Bernoulli equilibrium states for surface diffeomorphisms. Journal of Modern Dynamics, 2011, 5 (3) : 593-608. doi: 10.3934/jmd.2011.5.593
Session J12: Focus Session: AFM in Studying Cell Mechanics and Biointerfaces
Sponsoring Units: DBIO
Chair: Igor Sokolov, Tufts University
J12.00001: Mechanical properties of metastatic breast cancer cells invading into collagen I matrices
Invited Speaker: Robert Ros
Mechanical interactions between cells and the extracellular matrix (ECM) are critical to the metastasis of cancer cells. To investigate the mechanical interplay between the cells and ECM during invasion, we created thin bovine collagen I hydrogels ranging from 0.1-5 kPa in Young's modulus that were seeded with highly metastatic MDA-MB-231 breast cancer cells. Significant population fractions invaded the matrices either partially or fully within 24 h. We then combined confocal fluorescence microscopy and indentation with an atomic force microscope to determine the Young's moduli of individual embedded cells and the pericellular matrix using novel analysis methods for heterogeneous samples. In partially embedded cells, we observe a statistically significant correlation between the degree of invasion and the Young's modulus, which was up to an order of magnitude greater than that of the same cells measured in 2D. ROCK inhibition returned the cells' Young's moduli to values similar to 2D and diminished but did not abrogate invasion. This provides evidence that Rho/ROCK-dependent acto-myosin contractility is employed for matrix reorganization during initial invasion, and suggests the observed cell stiffening is due to an attendant increase in actin stress fibers.
J12.00002: Mechanical Properties of Human Cells Change during Neoplastic Processes
Martin Guthold, Xinyi Guo, Keith Bonin, Karin Scarpinato
Using an AFM with a spherical probe of 5.3 $\mu$m, we determined mechanical properties of individual human mammary epithelial cells that have progressed through four stages of neoplastic transformation: normal, immortal, tumorigenic, and metastatic. Measurements on cells in all four stages were taken over both the nucleus and the cytoplasm. Moreover, the measurements were made for cells outside of a colony (isolated), on the periphery of a colony, and inside a colony. By fitting the AFM force vs. indentation curves to a Hertz model, we determined the Young's modulus, E. We found a distinct contrast in the influence a cell's colony environment has on its stiffness depending on whether the cells are normal or cancer cells. We also found that cells become softer as they advance to the tumorigenic stage and then stiffen somewhat in the final step to metastatic cells. For cells averaged over all locations the stiffness values of the nuclear region for normal, immortal, tumorigenic, and metastatic cells were (mean +/- sem) 880 +/- 50, 940 +/- 50, 400 +/- 20, and 600 +/- 20 Pa respectively. Cytoplasmic regions followed a similar trend. These results point to a complex picture of the mechanical changes that occur as cells undergo neoplastic transformation.
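As a rough illustration of the kind of analysis described in this abstract, the sketch below fits a Hertz model for a spherical indenter to a force-indentation curve; the probe radius, Poisson's ratio and the synthetic data are illustrative assumptions rather than values taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 2.65e-6   # assumed probe radius in m (half of a 5.3 um sphere diameter)
NU = 0.5      # assumed Poisson's ratio for a cell (treated as incompressible)

def hertz_sphere(delta, E):
    """Hertz force (N) for a spherical indenter at indentation depth delta (m)."""
    return (4.0 / 3.0) * (E / (1.0 - NU ** 2)) * np.sqrt(R) * delta ** 1.5

# Synthetic force-indentation data standing in for an AFM approach curve.
delta = np.linspace(0.0, 500e-9, 100)              # 0-500 nm indentation
force = hertz_sphere(delta, 880.0)                 # "true" modulus of 880 Pa
force += np.random.normal(0.0, 2e-12, force.size)  # piconewton-level noise

E_fit, _ = curve_fit(hertz_sphere, delta, force, p0=[500.0])
print(f"Fitted Young's modulus: {E_fit[0]:.0f} Pa")
```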
J12.00003: Causes of retrograde flow in fish keratocytes
Thomas Fuhs, Michael Goegler, Claudia A. Brunner, Charles W. Wolgemuth, Josef A. Kaes
Confronting motile cells with AFM-cantilevers serving as obstacles and doubling as force sensors we tested the limits of the driving actin and myosin machinery. We could directly measure the force necessary to stop actin polymerization as well as the force present in the retrograde actin flow. Combined with detailed measurements of the retrograde flow velocity and specific manipulation of actin and myosin we found that actin polymerization and myosin contractility are not enough to explain the cell's behavior. We show that ever-present depolymerization forces, a direct entropic consequence of actin filament recycling, are sufficient to fill this gap, even under heavy loads.
J12.00004: High-resolution elasticity maps and cytoskeletal dynamics of neurons measured by combined fluorescence and atomic force microscopy
Invited Speaker: Cristian Staii
Detailed knowledge of mechanical parameters such as cell elasticity, stiffness of the growth substrate, or traction stresses generated during axonal extensions is essential for understanding the mechanisms that control neuronal growth. Here I present results obtained in my research group, which combine Atomic Force Microscopy and Fluorescence Microscopy measurements to produce systematic, high-resolution elasticity maps for different types of live neuronal cells cultured on glass or biopolymer-based substrates. We measure how the stiffness of neurons changes both during neurite outgrowth and upon chemical modification (disruption of the cytoskeleton) of the cell. We find a reversible local stiffening of the cell during growth, and show that the increase in local elastic modulus is primarily due to the formation of microtubules in the cell soma. We also report a reversible shift in the elastic modulus of the cortical neurons' cytoskeleton with temperature, from tubulin dominated regions at 37$^{\circ}$C to actin dominated regions at 25$^{\circ}$C. We demonstrate that the dominant mechanism by which the elasticity of the neuronal soma changes in response to temperature is the contractile stiffening of the actin component of the cytoskeleton induced by the activity of myosin II motors.
J12.00005: Analysis of Load Rate Dependence of Neuronal Soma Using Atomic Force Microscopy
Elise Spedden, Maxim Dokukin, Igor Sokolov, Cristian Staii
Surfaces of biological cells are covered with a layer of molecules (glycocalyx) and membrane protrusions (microvilli and microridges). This so-called ``brush'' layer plays a distinct role in the measured elastic modulus of cells. We utilize atomic force microscopy (AFM) to study mechanical properties of the soma and brush layer of live rat cortical neurons. The elastic moduli of the soma and brush are measured for cells indented at different AFM probe loading rates, ranging from 1-10 $\mu$m/s. The cells were studied at both 37 $^{\circ}$C (near-physiological temperature at which microtubules dominate high stiffness regions in the soma) and at 25 $^{\circ}$C (reduced temperature state at which actin components dominate high stiffness regions in the soma). If one uses a model with no brush taken into account, the derived elastic modulus shows a rate dependence similar to the one reported previously in the literature. Using the model with brush, we observed no statistically significant rate dependence of the elastic modulus of the soma, whereas the effective brush length demonstrates strong rate dependence. These measurements yield insight into the mechanical reaction of living neurons to externally applied stresses.
J12.00006: If mechanics of cells can be described by elastic modulus in AFM indentation experiments?
Igor Sokolov, Maxim Dokukin, Nataliia Guz, Vivekanand Kalaparthi
We study the question of whether cells, being highly heterogeneous objects, can be described with an elastic modulus (the Young's modulus) in a self-consistent way. We analyze the elastic modulus using indentation done with AFM of human cervical epithelial cells. Both sharp (cone) and dull AFM probes were used. The indentation data collected were processed through different elastic models. The cell was considered as a homogeneous elastic medium which had either a smooth spherical boundary (Hertz/Sneddon models) or the boundary covered with a layer of glycocalyx and membrane protrusions (``brush'' models). Validity of these approximations was investigated. Specifically, we tested the independence of the elastic modulus of the indentation depth, which is assumed in these models. We demonstrate that only one model shows consistency with treating cells as a homogeneous elastic medium: the brush model when processing the indentation data collected with the dull probe. The elastic modulus demonstrates strong depth dependence in the other three models. We conclude that it is possible to describe the elastic properties of the cell body by means of an effective elastic modulus in a self-consistent way when using the brush model to analyze data collected with a dull AFM probe.
J12.00007: High spatiotemporal resolution imaging of mechanical processes in live cells using T- shaped cantilevers
Nicola Mandriota, Ozgur Sahin
Mechanical properties of cells are paramount regulators of a plethora of physiological processes, such as cell adhesion, motility and proliferation. Yet, their knowledge is currently hampered by the lack of techniques with sufficient spatiotemporal resolution to monitor the dynamics of such biological processes. We introduce an atomic force microscopy-based imaging platform built on newly-designed cantilevers with increased force sensitivity and minimized viscous drag. This allows us to uncover mechanical properties of a wide variety of living cells - including fibroblasts, neurons and Human Umbilical Vein Endothelial Cells - with an unprecedented spatiotemporal resolution. Our mechanical maps approach 50 nm resolution and monitor cellular features within a minute's timescale. To identify the counterparts of our mechanical maps' features we perform simultaneous fluorescence microscopy and recognize cytoskeletal elements as the main molecular contributors of cellular stiffness at the nanoscale. Furthermore, the enhanced resolution and speed of our method allows the recognition of dynamic changes in the mechanics of fine cellular structures, which occurred independently of changes within optical images of fluorescently-labeled actin.
J12.00008: Poking vesicles in silico
Ben Barlow, Martin Bertrand, Bela Joos
The Atomic Force Microscope (AFM) is used to poke cells and study their mechanical properties. Using Coarse-Grained Molecular Dynamics simulations, we study the deformation and relaxation of lipid bilayer vesicles, when poked with a constant force. The relaxation time, equilibrium area expansion, and surface tension of the vesicle membrane are studied over a range of applied forces. The relaxation time exhibits a strong force-dependence. Our force-compression curves show a strong similarity with results from a recent experiment by Schafer et al. (Langmuir, 2013). They used an AFM to ``poke'' adherent giant liposomes with constant nanonewton forces and observed the resulting deformation with a Laser Scanning Confocal Microscope. Results of such experiments, whether on vesicles or cells, are often interpreted in terms of dashpots and springs. This simple approach used to describe the response of a whole cell ---complete with cytoskeleton, organelles etc.--- can be problematic when trying to measure the contribution of a single cell component. Our modeling is a first step in a ``bottom-up'' approach where we investigate the viscoelastic properties of an in silico cell prototype with constituents added step by step.
J12.00009: Morphology And Local Mechanical Properties Of A Block Copolymer Cell Substrate
Craig Wall, Ivan Yermolenko, G. Rajesh Krishnan, Debanjan Sarkar, John Alexander
Atomic force microscopy (AFM) was applied for the characterization of morphology and mechanical properties of a block copolymer coating designed for biomaterials applications. The material is a block-copolymer with poly(ethylene glycol) as one block and a peptide as the second block, which are connected through urethane bonds. The AFM images obtained in amplitude modulation mode revealed the morphology is characterized by micron-scale sheaf-like structures embedded in a more homogeneous and, presumably, amorphous matrix. The self-assembly of the peptide segments is responsible for the formation of the ordered sheaf structures and this phenomenon was common for different variations of the components. Maps of elastic modulus and work of adhesion of the block copolymer, which also differentiate the matrix and ordered regions, were obtained with Hybrid mode at different tip-force levels. The quantitative estimates show that the elastic modulus varies in the MPa range and the work of adhesion in the hundreds of mJ/m$^{\mathrm{2}}$ range. These data are compared with AFM-based nanoindentation that was performed at a higher tip-force level. The results indicate that the material surface is more complicated and they suggest in-depth morphology variations. A tentative model of the structural organization is proposed.
J12.00010: Toolkit for the Automated Characterization of Optical Trapping Forces on Microscopic Particles
Joseph Glaser, David Hoeprich, Andrew Resnick
Optical traps have been in use in microbiological studies for the past 40 years to obtain noninvasive control of microscopic particles. However, the magnitude of the applied forces is often unknown. Therefore, we have developed an automated data acquisition and processing system which characterizes trap properties for known particle geometries. Extensive experiments and measurements utilizing well-characterized objects were performed and compared to literature to confirm the system's performance. This system will enable the future analysis of a trapped primary cilium, a slender rod-shaped organelle with aspect ratio L/R > 30, where `L' is the cilium length and `R' the cilium diameter. The trapping of cilia is of primary importance, as it will lead to the precise measurements of mechanical properties of the organelle and its significance to the epithelial cell.
J12.00011: A Simplified Model for the Optical Force exerted on a Vertically Oriented Cilium by an Optical Trap and the Resulting Deformation
Ian Lofgren, Andrew Resnick
Eukaryotic cilia are essentially whiplike structures extending from the cell body. Although their existence has been long known, their mechanical and functional properties are poorly understood. Optical traps are a non-contact method of applying a localized force to microscopic objects and an ideal tool for the study of ciliary mechanics. Starting with the discrete dipole approximation, a common means of calculating the optical force on an object that is not spherical, we tackle the problem of the optical force on a cilium. Treating the cilium as a homogeneous nonmagnetic cylinder and the electric field of the laser beam as linearly polarized results in a force applied in the direction of polarization. The force density in the polarization direction is derived from the force on an individual dipole within the cilium, which can be integrated over the volume of the cilium in order to find the total force. Utilizing Euler--Bernoulli beam theory, we integrate the force density over a cross section of the cilium and numerically solve a fourth order differential equation to obtain the final deformation of the cilium. This prediction will later be compared with experimental results to infer the mechanical stiffness of the cilium.
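A minimal numerical sketch of the final step this abstract describes, i.e. integrating a distributed optical force density along the cilium and solving the Euler--Bernoulli relation for a clamped cantilever by successive quadrature; every parameter value and the force profile itself are illustrative assumptions, not results from the work.

```python
import numpy as np

# Illustrative cilium and beam parameters (assumptions, not measured values).
L_CIL = 8e-6                            # cilium length (m)
R_CIL = 0.125e-6                        # cilium radius (m)
E_MOD = 1e6                             # assumed flexural Young's modulus (Pa)
EI = E_MOD * np.pi * R_CIL ** 4 / 4.0   # bending stiffness of a solid cylinder

x = np.linspace(0.0, L_CIL, 1000)
# Assumed force density along the polarization direction (N/m).
q = 1e-7 * np.exp(-((x - 0.6 * L_CIL) / (0.15 * L_CIL)) ** 2)

def cumtrapz(y, x):
    """Cumulative trapezoidal integral of y(x), starting from 0."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

# Shear and bending moment: integrate from the free tip, where both vanish.
V = cumtrapz(q[::-1], x)[::-1]        # V(x) = integral_x^L q ds
M = cumtrapz(V[::-1], x)[::-1]        # M(x) = integral_x^L V ds
# Slope and deflection: integrate from the clamped base, where both vanish.
theta = cumtrapz(M / EI, x)           # beam relation w''(x) = M(x)/EI
w = cumtrapz(theta, x)
print(f"Predicted tip deflection: {w[-1] * 1e9:.1f} nm")
```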
Univariate Data
Shape of data
Recognising the centre of data
Recognising the spread of data
Recognising the shape of data
Mean, median, mode and range (combined set)
Centre or Spread?
Quartiles, Deciles and Percentiles
5 Number Summary
Level 6 - NCEA Level 1
We've already learnt about three measures of central tendency: mean, median and mode. We've also learnt about the range, which is a measure of a data's spread. This chapter is a refresher of all these concepts.
Let's see how much you remember!
The mean is the average of all the scores.
You calculate the mean by adding up all the scores, then dividing the total by the number of scores.
Find the mean of the following scores:
$-14$, $0$, $-2$, $-18$, $-8$, $0$, $-15$, $-1$
Think: We need to add up the scores and divide it by the number of scores.
$\frac{-14+0+\left(-2\right)+\left(-18\right)+\left(-8\right)+0+\left(-15\right)+\left(-1\right)}{8}=\frac{-58}{8}=-7.25$
The median is the middle score in a data set.
There are two ways you can find the median:
Write the numbers in the data set in ascending order, then find the middle score by crossing out a number at each end until you are left with one in the middle.
Calculate what score would be in the middle using the formula $\text{middle term}=\frac{n+1}{2}$, then count up in ascending order until you reach the score that is that term.
Given the following set of scores:
$65.2$, $64.3$, $71.6$, $63.2$, $45.2$, $62.2$, $46.8$, $58.7$
A) Sort the scores in ascending order
Think: Ascending means lowest to highest.
Do: $45.2, 46.8, 58.7, 62.2, 63.2, 64.3, 65.2, 71.6$
B) Calculate the median, writing your answer as a decimal.
Think: Which term will be in the middle?
$\text{Middle term}=\frac{n+1}{2}=\frac{8+1}{2}=4.5$
This means that the median lies between the fourth and fifth scores.
$\frac{62.2+63.2}{2}=62.7$
The median is $62.7$.
The mode is the most frequently occurring score.
To find the mode, just count which score you see most frequently in your data set.
Find the mode of the following set of scores:
$2$, $2$, $6$, $7$, $7$, $7$, $7$, $11$, $11$, $11$, $13$, $13$, $16$, $16$
Think: How many of each score are there?
Do: 2, 2, 6, 7, 7, 7, 7, 11, 11, 11, 13, 13, 16, 16
$7$ is the most frequently occurring score, so the mode is $7$.
The range is the difference between the highest score and the lowest score.
To calculate the range, you need to subtract the lowest score from the highest score.
Find the range of the following set of scores: $10, 7, 2, 14, 13, 15, 11, 4$.
Think: What are the highest and lowest scores in this set?
Do: $15-2=13$
The range is $13$.
Calculating the mean, median, mode & range from a graph
We can also use the data from graphs to calculate the mean, median, mode and range using the same processes as we have learnt about in the sections above. Let's go through them by looking at an example!
More Worked Examples
The stem and leaf plot shows the number of hours students spent studying for a science exam.
From the data in the stem and leaf plot, find (to two decimal places if necessary) the:
mean.
median.
range.
The frequency table below shows the resting heart rate of some people taking part in a study.
Complete the table:
Class | Class Centre ($x$) | Frequency ($f$) | $fx$
$30$-$39$ | | $13$ |
Determine an estimate for the mean resting heart rate. Leave your answer to two decimal places if necessary.
Plan and conduct investigations using the statistical enquiry cycle: (A) justifying the variables and measures used; (B) managing sources of variation, including through the use of random sampling; (C) identifying and communicating features in context (trends, relationships between variables, and differences within and between distributions), using multiple displays; (D) making informal inferences about populations from sample data; (E) justifying findings, using displays and measures.
Investigate a given multivariate data set using the statistical enquiry cycle
Survey | Open | Published: 29 March 2019
Adversarial attack and defense in reinforcement learning-from AI security view
Tong Chen (ORCID: orcid.org/0000-0001-6042-0160), Jiqiang Liu, Yingxiao Xiang, Wenjia Niu, Endong Tong & Zhen Han
Cybersecurity, volume 2, Article number: 11 (2019)
Reinforcement learning is a core technology for modern artificial intelligence, and it has become a workhorse for AI applications ranging from Atari games to the Connected and Automated Vehicle System (CAV). Therefore, a reliable RL system is the foundation for security-critical applications in AI, and its security has attracted a concern that is more critical than ever. However, recent studies have discovered that an interesting attack mode, the adversarial attack, is also effective when targeting neural network policies in the context of reinforcement learning, which has inspired innovative research in this direction. Hence, in this paper, we give the very first attempt to conduct a comprehensive survey on adversarial attacks in reinforcement learning under the AI security view. Moreover, we give a brief introduction to the most representative defense technologies against existing adversarial attacks.
Artificial intelligence (AI) is providing major breakthroughs in solving problems that have withstood many previous attempts: natural language understanding, speech recognition, image understanding and so on. The latest studies (He et al. 2016) show that the correct rate of image understanding can reach 95% under certain conditions, meanwhile the success rate of speech recognition can reach 97% (Xiong et al. 2016).
Reinforcement learning (RL) is one of the main techniques that can realize artificial intelligence (AI), which is currently being used to decipher hard scientific problems at an unprecedented scale.
To summarize, the research on reinforcement learning under artificial intelligence is mainly focused on the following fields. In terms of autonomous driving (Shalev-Shwartz et al. 2016; Ohn-Bar and Trivedi 2016), Shai et al. applied deep reinforcement learning to the problem of forming long term driving strategies (Shalev-Shwartz et al. 2016), and solved two major challenges in self driving. In the aspect of game play (Liang et al. 2016), Silver et al. (2016) introduced a new approach to computer Go which can evaluate board positions, and select the best moves with reinforcement learning from games of self-play. Meanwhile, for Atari games, Mnih et al. (2013) presented the first deep learning model to learn control policies directly from high-dimensional sensory input using reinforcement learning. Moreover, Liang et al. (Guo et al. 2014) also built a better real-time Atari game playing agent with DQN. In the field of control systems, Zhang et al. (2018) proposed a novel load shedding scheme against voltage instability with deep reinforcement learning (DRL). Bougiouklis et al. (2018) presented a system for calculating the optimum velocities and the trajectories of an electric vehicle for a specific route. In addition, in the domain of robot applications (Goodall and El-Sheimy 2017; Martínez-Tenor et al. 2018), Zhu et al. (2017) applied their model to the task of target-driven visual navigation, and Yang et al. (2018) presented a soft artificial muscle driven robot mimicking cuttlefish with a fully integrated on-board system.
In addition, reinforcement learning is also an important technique for the Connected and Automated Vehicle System (CAV), which has been a hot topic in recent years. Meanwhile, the security research in this direction has attracted numerous concerns (Chen et al. 2018a; Jia et al. 2017). Chen et al. performed the first security analysis on next-generation Connected Vehicle (CV) based transportation systems, and pointed out that the current signal control algorithm design and implementation choices are highly vulnerable to data spoofing attacks from even a single attack vehicle. Therefore, how to build a reliable and secure reinforcement learning system to support the security-critical applications in AI has become a concern that is more critical than ever.
However, the weaknesses of reinforcement learning are gradually exposed which can be exploited by attackers. Huang et al. (2017) firstly discovered that neural network policies in the context of reinforcement learning are vulnerable to "Adversarial Attacks" in the form of adding tiny perturbations to inputs, which can lead a model to give wrong results. Regardless of the learned task or training algorithm, they observed a significant drop in performance, even with very small adversarial perturbations which are invisible to human. Even worse, they found that the cross-dataset transferability property (proposed by Szegedy et al. (2013)) also holds in reinforcement learning applications, so long as both policies have been trained to solve the same task. Such discoveries have attracted public interests in the research of adversarial attacks and their corresponding defense technologies in the context of reinforcement learning.
After Huang et al. (2017), a lot of works have focused on the issue of adversarial attack in the field of reinforcement learning (e.g., Fig. 1). For instance, in the field of Atari games, Lin et al. (2017) proposed a "strategically-timed attack" whose adversarial example at each time step is computed independently of the adversarial examples at other time steps, instead of attacking a deep RL agent at every time step (see "Black-box attack" section). Moreover, in terms of automatic path planning, Liu et al. (2017), Xiang et al. (2018), Bai et al. (2018) and Chen et al. (2018b) all proposed methods which can perform adversarial attacks on reinforcement learning algorithms (VIN (Tamar et al. 2016), Q-Learning (Watkins and Dayan 1992), DQN (Mnih et al. 2013), A3C (Mnih et al. 2016)) under automatic path planning tasks (see "Defense technology against adversarial attack" section).
Examples of adversarial attacks on reinforcement learning. The first row shows examples of adversarial attack in the field of Atari games. The first image denotes the original clean game background, while the others show the perturbed game backgrounds, which can be called "adversarial examples". Huang et al. (2017) found that the adversarial examples which are invisible to human have a significant impact on the game result. Moreover, the second row shows examples of adversarial attack in the domain of automatic path planning. As in the first row, the first image represents the original pathfinding map, and the remaining two images denote the adversarial examples generated by adding noise. Chen et al. (2018b) found that the trained agent could not find its way correctly under such adversarial examples
In view of the extensive and valuable applications of reinforcement learning in modern artificial intelligence (AI), and the critical role of reinforcement learning in AI security, we hope to inspire innovative research in this adversarial field.
The main contributions of this paper can be concluded as follows:
We give the very first attempt to conduct a comprehensive and in-depth survey on the literature of adversarial research in the context of reinforcement learning from the AI security view.
We make a comparative analysis of the characteristics of adversarial attack mechanisms and defense technologies respectively, to compare the specific scenarios and advantages/disadvantages of the existing methods, and in addition give a prospect for future work directions.
The structure of this paper is organized as follows. In the "Preliminaries" section, we first give a description of the common terms related to adversarial attack under reinforcement learning, and briefly introduce the most representative RL algorithms. The "Adversarial attack in reinforcement learning" section reviews the related research on adversarial attack in the context of reinforcement learning. The defense technologies against adversarial attack in the context of reinforcement learning are discussed in the "Defense technology against adversarial attack" section. Finally, we draw conclusions and discussion in the "Conclusion and discussion" section.
In this section, we give explanations of the common terms related to adversarial attack in the field of reinforcement learning. In addition, we also briefly introduce the most representative reinforcement learning algorithms, and compare these algorithms in terms of approach type, learning type, and application scenarios, so as to facilitate readers' understanding of the content in the following sections.
Common terms definitions
Reinforcement Learning: is an important branch of machine learning, which contains two basic elements, state and action. Performing a certain action under a certain state, what the agent needs to do is to continuously explore and learn, so as to obtain a good strategy.
Adversarial Example: Deceiving AI system which can lead them make mistakes. The general form of adversarial examples is the information carrier (such as image, voice or txt) with small perturbations added, which can remain imperceptible to human vision system.
Implicit Adversarial Example: is a modified version of a clean information carrier, which is generated by adding human-invisible perturbations to the global information at the pixel level to confuse/fool a machine learning technique.
Dominant Adversarial Example: is a modified version of a clean map, which is generated by adding physical-level obstacles to change the local information to confuse/fool A3C path finding.
Adversarial Attack: Attacking an artificial intelligence (AI) system by utilizing adversarial examples. Adversarial attacks can generally be classified into two categories:
Misclassification attacks: aiming at generating adversarial examples which can be misclassified by the target network.
Targeted attacks: aiming at generating adversarial examples which the target network misclassifies into an arbitrary label designated by the adversary.
Perturbation: The noise added on the original clean information carriers (such as image, voice or txt), which can make them to be adversarial examples.
Adversary: The agent who attack AI system with adversarial examples. However, in some cases, it also refer to adversarial example itself (Akhtar and Mian 2018).
Black-Box Attack: The attacker has no idea of the details related to training algorithm and corresponding parameters of the model. However, the attacker can still interact with the model system, for instance, by passing in arbitrary input to observe changes in output, so as to achieve the purpose of attack. In some work (Huang et al. 2017), for black-box attack, authors assume that the adversary has access to the training environment (e.g., the simulator) but not the random initialization of the target policy, and additionally may not know what the learning algorithm is.
White-Box Attack: The attacker has access to the details related to training algorithm and corresponding parameters of the model. Attacker can interact with the target model in the process of generating adversarial attack data.
Threat Model: Finding potential system threats to establish an adversarial policy, so as to achieve the establishment of a secure system (Swiderski and Snyder 2004). In the context of adversarial research, the threat model considers adversaries capable of introducing small perturbations to the raw input of the policy.
Transferability: an adversarial example designed to be misclassified by one model is often misclassified by other models trained to solve the same task (Szegedy et al. 2013).
Target Agent: The target subject attacked by adversarial examples, usually can be a network model trained by reinforcement learning policy, which can detect whether adversarial examples can attack successfully.
Representative reinforcement learning algorithms
In this section, we list the most representative reinforcement learning algorithms, and make a comparison among them, which can be shown in Table 1, where "value-based" denotes that the reinforcement learning algorithm calculates the expected reward of each action and takes it as the basis for selecting actions. Meanwhile, the learning strategy for "value-based" reinforcement learning is constant; in other words, under a certain state the action will be fixed.
Table 1 The comparison of the most representative reinforcement learning algorithms
While the "policy-based" represented that the reinforcement learning algorithm trains a probability distribution by strategy sampling, and enhances the probability of selecting actions with high reward value. This kind of reinforcement learning algorithm will learn different strategies, in other words, the probability of taking one action under the certain state is constantly adjusted.
Q-Learning
Q-Learning is a classical algorithm for reinforcement learning, which was proposed early and has been used widely. Q-Learning was first proposed by C. Watkins (Watkins and Dayan 1992) in his doctoral dissertation Learning from delayed rewards in 1989. It is actually a variant of the Markov Decision Process (MDP) (Markov 1907). The idea of Q-Learning is based on value iteration, which can be summarized as follows: the agent perceives surrounding information from the environment and selects appropriate actions to change the state of the environment according to its own policy, and obtains corresponding rewards and penalties to correct the strategy. Q-Learning proposes a method to update the Q-value, which can be written as $Q(S_{t},A_{t})\leftarrow Q(S_{t},A_{t})+\alpha\left(R_{t+1}+\lambda \max_{a}Q(S_{t+1},a)-Q(S_{t},A_{t})\right)$. Throughout the continuous iteration and learning process, the agent tries to maximize the rewards it receives and finds the best path to the goal, and the Q matrix can be obtained. Q is an action utility function that evaluates the strengths and weaknesses of actions in a particular state and can be interpreted as the brain of an intelligent agent.
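A minimal tabular sketch of the update rule quoted above (with gamma playing the role of the discount factor λ); the state/action sizes, hyper-parameters and exploration scheme are placeholder choices.

```python
import numpy as np

n_states, n_actions = 16, 4
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    """Q(S_t,A_t) <- Q(S_t,A_t) + alpha * (R_{t+1} + gamma * max_a Q(S_{t+1},a) - Q(S_t,A_t))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

def choose_action(s):
    """Epsilon-greedy selection over the current Q-table."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[s]))
```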
Deep Q-Network (DQN)
DQN is the first deep reinforcement learning algorithm, proposed by Google DeepMind in 2013 (Mnih et al. 2013) and further improved in 2015 (Mnih et al. 2015). DeepMind applied DQN to Atari games, which is different from the previous practice, utilizing the video information as input and playing games against humans. In this paper, the authors gave the very first attempt to introduce the concept of Deep Reinforcement Learning, and it has attracted public attention in this direction. For DQN, as the output of the value network is the Q-value, if the target Q-value can be constructed, the loss function can be obtained by the Mean-Square Error (MSE). However, the inputs of the value network are the state S, action A, and feedback reward R. Therefore, how to calculate the target Q-value correctly is the key problem in the context of DQN.
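A PyTorch sketch of how the target Q-value and MSE loss described above are commonly assembled; the two networks and the replay-buffer batch are assumed to be defined elsewhere.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """MSE between Q(s, a) and the TD target r + gamma * max_a' Q_target(s', a')."""
    s, a, r, s_next, done = batch                          # tensors from a replay buffer (assumed)
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q-value of the action actually taken
    with torch.no_grad():                                  # the target network is held fixed here
        q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * q_next
    return F.mse_loss(q_sa, target)
```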
Value Iterative Network (VIN)
Tamar et al. (2016) proposed the value iteration network, a fully differentiable CNN planning module for approximate value iteration algorithms that can be used for learning to plan, such as the strategies in reinforcement learning. This paper mainly solved the problem of the weak generalization ability of deep reinforcement learning. There is a special value iteration network structure in VIN (Touretzky et al. 1996). The novel method proposed in this work not only uses a neural network to learn a direct mapping from state to decision, but also embeds the traditional planning algorithm into the neural network, so that the neural network can learn how to act under the current environment and use long-term planning-assisted neural networks to give a better decision.
Asynchronous Advantage Actor-Critic Algorithm (A3C)
The A3C algorithm is a deep reinforcement learning algorithm proposed by DeepMind in 2016 (Mnih et al. 2016). A3C completely utilizes the Actor-Critic framework and introduces the idea of asynchronous training, which improves the performance and speeds up the whole training process. If an action is considered to be bad, the probability of selecting this action will be reduced. Through iterative training, A3C constantly adjusts the neural network to find the best action-selection policy.
Trust Region Policy Optimization (TRPO)
TRPO was proposed by J. Schulman in 2015 (Schulman et al. 2015); it is a kind of stochastic policy search method. TRPO solves the problem of step selection for the gradient update, and gives a monotonic policy improvement method. For each training iteration, whole-trajectory rollouts of a stochastic policy are used to calculate the update to the policy parameters θ, while controlling the change in policy as measured by the KL divergence between the old and the new policies.
UNREAL
The UNREAL algorithm is the latest deep reinforcement learning algorithm proposed by DeepMind in 2016 (Jaderberg et al. 2016). Based on the A3C algorithm, the performance and training process of this algorithm are further improved. The experimental results show that UNREAL reaches 8.8 times human performance on Atari, and 87% of human level in the first-person 3D maze environment Labyrinth. For UNREAL, there are two types of auxiliary tasks. The first one is the control task, including pixel control and hidden layer activation control. The other one is the reward prediction task: as in many scenarios the feedback r is not always available, allowing the neural network to predict the feedback value will give it a better ability to express. The UNREAL algorithm uses historical continuous multi-frame image input to predict the next-step feedback value as a training target and uses history information to additionally increase the value iteration task.
Adversarial attack in reinforcement learning
In this section, we discuss the related research of adversarial attack in the field of reinforcement learning. The reviewed literatures mainly conduct the adversarial research on specific application scenarios, and generate adversarial examples by adding perturbations to the information carrier, so as to realize the adversarial attack on reinforcement learning system.
We organize the review mainly according to chronological order. Meanwhile, in order to make readers understand the core technical concepts of the surveyed works, we go into the technical details of important methods and representative technologies by referring to the original papers. In part 3.1, we discuss the related works of adversarial attack against reinforcement learning systems in the domain of white-box attacking. In terms of black-box attacking, the design of adversarial attack against the target model is shown in part 3.2. Meanwhile, we analyze the availability and contribution of adversarial attack research in the above two fields. Additionally, we also give a summary of the attributes of the adversarial attack methods discussed in this section in part 3.3.
White-box attack
Fast gradient sign method (FGSM)
Huang et al. (2017) first showed that adversarial attacks are also effective when targeting neural network policies in reinforcement learning system. Meanwhile, for this work, the adversary attacks a deep RL agent at every time step, by perturbing each image the agent observes.
The main contributions for Huang et al. (2017) can be concluded as the following two aspects:
They gave the very first attempt to prove that reinforcement learning systems are vulnerable to adversarial attack, and the traditional generation algorithms designed for adversarial examples still can be utilized to attack under such scenario.
The authors creatively verified how the effectiveness of adversarial examples is impacted by the deep RL algorithm used to learn the policy.
Figure 2 shows the adversarial attack on the Pong game trained with DQN; we can see that after adding a small perturbation to the original clean game background, the trained agent cannot make a correct judgment according to the motion direction of the ball. Note that the adversarial examples are calculated by the fast gradient sign method (FGSM) (Goodfellow et al. 2014a).
Examples of adversarial attacks on the Pong policy trained with DQN (Huang et al. 2017). The first row: computing adversarial perturbations by the fast gradient sign method (FGSM) (Goodfellow et al. 2014a) with an ℘∞-norm constraint. The trained agent, which should have taken the "down" action, took the "noop" action instead under adversarial attack. The second row: the authors utilized FGSM with an ℘1-norm constraint to compute the adversarial perturbations. The trained agent could not take the action correctly: it should have moved up, but took the "down" action after interference. Videos are available at http://r11.berkeley.edu/adversarial
FGSM expects the classifier can assign the same class to the real example x and the adversarial example $\tilde {x}$ with a small enough perturbation η which can be concluded as
$$\eta=\epsilon\, \mathrm{sign}(\omega), \qquad \| \eta \|_{\infty} < \epsilon $$
where ω denotes a weight vector, since this perturbation maximizes the change in output for the adversarial example $\tilde {x}$, $\omega ^{T} \tilde {x} =\omega ^{T} x + \omega ^{T} \eta $.
Moreover, under image classification network with parameters θ, model input x, targets related to input y, and cost function J(θ,x,y). Linearizing the cost function to obtain an optimal max-norm constrained perturbation which can be concluded as
$$\eta = \epsilon\, \mathrm{sign}(\nabla_{x} J(\theta,x,y)) $$
In addition, authors also proved that policies trained with reinforcement learning are vulnerable to the adversarial attack. However, among the RL algorithms tested in this paper (DQN, TRPO (Schulman et al. 2015), and A3C), TRPO and A3C seem to be more resistant to adversarial attack.
Under the domain of Atari games, the authors showed that adding human-invisible noise to the original clean game background can make the game unable to work properly, and realize adversarial attack successfully. Huang et al. (2017) gave a new attempt to conduct adversarial research under the scenario of reinforcement learning, and this work proved that the adversarial attack still exists in the domain of reinforcement learning. Moreover, FGSM motivated a series of related research works: Miyato et al. (2018) proposed a closely related mechanism to compute the perturbation for a given image, and Kurakin et al. (2016) named this algorithm "Fast Gradient L2" and also proposed an alternative of using ℓ∞ for normalization, which is named "Fast Gradient L∞".
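A sketch of the gradient-sign perturbation η = ε·sign(∇x J(θ,x,y)) above, applied to the observation fed to a policy network; the network, loss function and label (e.g. the action the unperturbed policy would choose) are placeholders.

```python
import torch

def fgsm_perturb(policy_net, loss_fn, x, y, epsilon=0.01):
    """Return x + epsilon * sign(grad_x J(theta, x, y)), a max-norm constrained adversarial input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(policy_net(x_adv), y)   # J(theta, x, y), e.g. cross-entropy against the chosen action
    loss.backward()
    eta = epsilon * x_adv.grad.sign()      # perturbation eta with ||eta||_inf <= epsilon
    return (x_adv + eta).detach()
```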
Start point-based adversarial attack on Q-learning (SPA)
Xiang et al. (2018) focused on the adversarial example-based attack on a representative reinforcement learning named Q-learning in automatic path finding. They proposed a probabilistic output model based on the influence factors and the corresponding weights to predict the adversarial examples under such scenario.
Based on four factors, including the energy point gravitation, the key point gravitation, the path gravitation, and the included angle, a natural linear model is constructed to fit these factors, with the weight parameters computed based on principal component analysis (PCA) (Wold et al. 1987).
The main contribution of Xiang et al. is that they built a model which can generate the corresponding probabilistic outputs for certain input points, where the probabilistic output of their model refers to the possibility of interference caused by an interference point on the path of agent pathfinding.
Xiang et al. proposed 4 factors to determine whether a perturbation can impact the final result of the agent's path planning, which can be summarized as follows:
Factor 1 (the energy point gravitation): $e_{ic} = k_{c} + i\cdot d^{\prime}\cdot \frac{k^{\prime}_{c} - k_{c}}{\sqrt{(k^{\prime}_{c} - k_{c})^{2}+(k^{\prime}_{r} - k_{r})^{2}}}$, $\quad e_{ir} = k_{r} + i\cdot d^{\prime}\cdot \sqrt{1-\left(\frac{k^{\prime}_{c} - k_{c}}{\sqrt{(k^{\prime}_{c} - k_{c})^{2}+(k^{\prime}_{r} - k_{r})^{2}}}\right)^{2}}$
Factor 2 (the key point gravitation): $d_{1i}=|a_{ic}-k_{c}|+|a_{ir}-k_{r}|$, where $(k_{c},k_{r})=k$, $(a_{ic},a_{ir})=a_{i}\in A$
Factor 3 (the path gravitation): $d_{2i}= \min \{d_{2}\mid d_{2} = |a_{ic}-z_{jc}|+|a_{ir}-z_{jr}|,\ z_{j} \in Z_{1}\}$, where $(z_{jc},z_{jr})=z_{j}$, $(a_{ic},a_{ir})=a_{i} \in A$
Factor 4 (the included angle): $\boldsymbol{v_{ka}}=(a_{ic}-k_{c},a_{ir}-k_{r})$, $\boldsymbol{v_{kt}}=(t_{c}-k_{c},t_{r}-k_{r})$, $\cos \theta_{i} = \boldsymbol{v_{ka}} \cdot \boldsymbol{v_{kt}} / (|\boldsymbol{v_{ka}}||\boldsymbol{v_{kt}}|)$, $\theta_{i} = \arccos(\cos \theta_{i})$
For Factor 1 can be named as the energy point gravitation, which denotes that it is more successful if the adversarial point k is the point on the key vector v. Factor 2 is the key point gravitation, which represents that the closer adversarial point is to the key point k, the more likely it is to cause interference. Factor 3 can be called as the path gravitation, which denotes that the closer adversarial point is to the initial path Z1, the more possible it is to bring about obstruct. Meanwhile, factor 4 can be concluded as the included angle, which represents that the angle θ between the vector from the point k to the adversarial point ai and the vector from the key point to the goal t.
Therefore, the probability for each adversarial point ai can be concluded as
$$p_{a_{i}}=\sum_{j=1}^{4}p_{ja_{i}}=\omega_{1}\cdot a_{ie}+\omega_{2}\cdot d^{\prime}_{1i}+\omega_{3}\cdot d^{\prime}_{2i}+\omega_{4}\cdot \theta^{\prime}_{i} $$
where $\omega_{i}$ denotes the weight for each factor respectively. The $p_{a_{i}}$ for each point is stored, and the top 10 are selected as the adversarial points.
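A sketch of how the weighted sum defining $p_{a_{i}}$ above and the top-10 selection could be implemented; the per-point factor values and the PCA-derived weights are placeholder inputs, and the column normalization is an added assumption.

```python
import numpy as np

def rank_adversarial_points(factors, weights, top_k=10):
    """factors: (n_points, 4) array of the four factor values per candidate point.
    weights: (4,) PCA-derived weights. Returns indices of the top_k scoring candidates."""
    factors = np.asarray(factors, dtype=float)
    spread = np.ptp(factors, axis=0)
    spread[spread == 0] = 1.0
    normed = (factors - factors.min(axis=0)) / spread      # scale each factor to [0, 1]
    scores = normed @ np.asarray(weights, dtype=float)     # p_{a_i} = sum_j w_j * factor_j
    return np.argsort(scores)[::-1][:top_k]
```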
For this work, adversarial examples were found successfully for the first time on Q-learning in path finding, and their model can make a satisfactory prediction (e.g., Fig. 3). Under a guaranteed recall, the precision of the proposed model can reach 70% with the proper parameter setting. Adding small obstacle points to the original clean map can interfere with the agent's path finding. However, the experimental map size for this work is 28×28, and there is no additional verification for a larger maze map, which can be considered in future works. Nevertheless, Xiang et al. paid attention to the adversarial attack problem in automatic path finding under the scenario of reinforcement learning. Meanwhile, this work owns practical significance, as the objective of this study is Q-learning, which is the most widely used and representative reinforcement learning algorithm.
An illustration of the interference effect before and after adding adversarial points when the path size is 2. We show two types of maps here, where (a) denotes the first type, and (b), (c) all belong to the second category
White-box based adversarial attack on DQN (WBA)
Based on the SPA algorithm introduced above, Bai et al. (2018) first used DQN to find the optimal path, and analyzed the rules of DQN pathfinding. They proposed a method that can effectively find vulnerable points towards White-Box Q-table variation in DQN pathfinding training. Meanwhile, they built a simulation environment as a basic experiment platform to test their method.
Moreover, they classified two types of vulnerable points.
The vulnerable point is most likely on the boundary line. Moreover, the smaller ΔQ (the Q-value difference between the right and downward directions) is, the more likely the point is to be a vulnerable point.
For this characteristic of vulnerable points, they proposed a method to detect adversarial examples. Let P denote the set of points on the map, P={P1,P2,...,Pn}, and each point Pi obtains four Q-values Dij=(Qi1,Qi2,Qi3,Qi4), which indicate up, down, right, and left respectively. Meanwhile, the direction with the max Q-value, f(Pi)={j| maxjQij}, is selected, and whether point Pi is on the boundary line is determined by
$$\varphi(P_{i})=OR\left(f(P_{i})\neq f(P_{i1}),\ f(P_{i})\neq f(P_{i2}),\ f(P_{i})\neq f(P_{i3}),\ f(P_{i})\neq f(P_{i4})\right) $$
where Pij={Pi1,Pi2,Pi3,Pi4} is the set of adjoining points in the four directions of Pi, and A={a1,a2,...,an} represents the points on the boundary line. The Q-value difference ΔQ=|Qi2−Qi3| is calculated, and ΔQ is sorted in ascending order to construct B={b1,b2,...,bn}. They took the first 3% of the list as the smallest ΔQ-value points. Finally, the set of suspected adversarial examples is obtained, which can be written as $X=\{x_{1},x_{2},...,x_{n}\}$, $X=A\bigcap B$.
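A sketch of the boundary-line and ΔQ screening described above, assuming the Q-table is stored as an H x W grid with four action values (up, down, right, left) per cell.

```python
import numpy as np

def suspected_points(Q, frac=0.03):
    """Q: array of shape (H, W, 4). Returns a boolean mask of suspected vulnerable points."""
    best = Q.argmax(axis=-1)                    # f(P_i): direction with the max Q-value
    boundary = np.zeros(best.shape, dtype=bool)
    # A point lies on the boundary line if any 4-neighbour prefers a different direction.
    boundary[:-1, :] |= best[:-1, :] != best[1:, :]
    boundary[1:, :]  |= best[1:, :]  != best[:-1, :]
    boundary[:, :-1] |= best[:, :-1] != best[:, 1:]
    boundary[:, 1:]  |= best[:, 1:]  != best[:, :-1]
    dQ = np.abs(Q[..., 1] - Q[..., 2])          # |Q_down - Q_right|, following the paper's ordering
    cutoff = np.quantile(dQ, frac)              # keep roughly the smallest 3% of dQ values
    return boundary & (dQ <= cutoff)            # X = A intersect B
```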
The other type of vulnerable point can be summarized as follows:
Adversarial examples are related to the gradient of maximum Q-value for each point on the path.
Bai et al. found that when the Q-values of two consecutive points fluctuate greatly, their gradient is larger and the points are more vulnerable to attack.
They also found that the larger the angle between two adjacent line segments is, the greater the slope of the corresponding straight line. Letting the angle between the direction vectors of the two lines be $\theta\ \left(0<\theta <\frac {\pi }{2}\right)$, it can be computed as
$$\cos \theta=\frac{|s_{1}\cdot s_{2}|}{|s_{1}||s_{2}|}=\frac{|m_{1}m_{2}+n_{1}n_{2}+p_{1}p_{2}|}{\sqrt{m_{1}^{2}+n_{1}^{2}+p_{1}^{2}}\sqrt{m_{2}^{2}+n_{2}^{2}+p_{2}^{2}}}$$
where s1=(m1,n1,p1) and s2=(m2,n2,p2) are the direction vectors of lines L1 and L2. Finally, the points whose adjacent path segments form the largest 1% of angles are taken as suspected interference points.
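A minimal sketch of the angle computation and the top-1% selection is given below; building the direction vectors from consecutive path points is an assumed convention, not necessarily how Bai et al. construct L1 and L2.

```python
import numpy as np

def cos_angle(s1, s2):
    """cos(theta) between two direction vectors, as in the formula above."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    return abs(np.dot(s1, s2)) / (np.linalg.norm(s1) * np.linalg.norm(s2))

def largest_angle_points(path, frac=0.01):
    """Flag path indices whose adjacent segments form the largest angles."""
    path = np.asarray(path, float)
    segs = np.diff(path, axis=0)                       # direction vectors
    angles = [np.arccos(np.clip(cos_angle(segs[i], segs[i + 1]), 0, 1))
              for i in range(len(segs) - 1)]
    k = max(1, int(frac * len(angles)))
    return np.argsort(angles)[::-1][:k] + 1            # indices into `path`

print(largest_angle_points(np.random.default_rng(2).random((200, 3))))
```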
For WBA, the authors successfully found adversarial examples, and the supervised method they proposed is effective, as shown in detail in Table 2. However, the accuracy rate decreases as the number of training episodes increases; in other words, when training runs long enough, the path still converges despite the interference points, although training efficiency is reduced.
Table 2 Features of adversarial perturbations against the same original clean map, showing how different characteristics affect the interference caused by an adversarial example
Similar to the work of Xiang et al., the maps used in the experiments are 16×16 and 17×17 in size, which is too small to verify the proposed adversarial attack method thoroughly. It is recommended that the attack method be verified on maps of different sizes, which would better demonstrate the effectiveness of the proposed method.
Common dominant adversarial examples generation method (CDG)
Chen et al. (2018b) showed that dominant adversarial examples are effective when targeting A3C path finding, and designed a Common Dominant Adversarial Examples Generation Method (CDG) to generate dominant adversarial examples for any given map.
Figure 4 shows dominant adversarial examples for the original map that attack successfully. Chen et al. found that the value gradient rises fastest along the perturbation band of a dominant adversarial example; they therefore call this band the "gradient band". Adding obstacles on the cross section of the gradient band can successfully perturb the agent's path finding. The generation rule for dominant adversarial examples is defined as follows:
The first line shows dominant adversarial examples for the original map. The first picture denotes the original map under attack, and the three columns on the right are dominant adversarial examples that attack successfully; the red dotted lines represent the perturbation band. The second line shows the direction in which the value gradient rises fastest. Comparing the dominant adversarial examples with the contour graph shows that the value gradient rises fastest on the perturbation band
Generation Rule: Adding "baffle-like" obstacles to the cross section of the gradient band, in which the value gradient rises fastest, can impact A3C path finding.
Moreover, in order to calculate the Gradient Band more accurately, the authors considered two situations that depend on the original map and the gradient function: obstacles may exist on both sides of the gradient function, or only on one side of it.
A. Case 1: Obstacles exist on both sides of the gradient function.
In this case, obstacles exist on both sides of the gradient curve, so all coordinate points in $Obstacle = \{(O_{x_{1}},O_{y_{1}}),(O_{x_{2}},O_{y_{2}}),\cdots,(O_{x_{n}},O_{y_{n}})\}$ must be traversed to find the two points nearest to the gradient curve in the upper and lower parts, respectively. The Gradient Band function FGB(x,y) in this case can be written as:
$$\left\{\begin{aligned} f(x,y)_{upper}&=y-(U + a_{0}+a_{1}x+\dots+a_{k}x^{k})\\ f(x,y)_{lower}&=y-(L + a_{0}+a_{1}x+\dots+a_{k}x^{k})\\ X_{L} &< x < X_{max},\quad Y_{L} < y < Y_{max} \end{aligned}\right.$$
where f(x,y)upper and f(x,y)lower denote the upper and lower bound functions, Xmax and Ymax denote the boundary values of the map, and (XL,0) and (0,YL) are the intersection points of f(x,y)lower with the coordinate axes.
B. Case 2: Obstacles exist on one side of the gradient function.
In this case, the distance between the obstacle edge points and the gradient function is calculated as in Case 1. However, since obstacles exist on only one side of the gradient curve, only one of the bound functions of the Gradient Band can be obtained. The Gradient Band function FGB(x,y) can therefore be written as:
$$\left\{\begin{aligned} f(x,y)_{upper}&=\min\{f(X_{max},0),f(0,Y_{max})\}\\ f(x,y)_{lower}&=y-(L + a_{0}+a_{1}x+\dots+a_{k}x^{k})\\ X_{L} &< x < X_{max},\quad Y_{L} < y < Y_{max} \end{aligned}\right.$$
Finally, setting Y=[1,2,...,Ymax] and X=[1,2,...,Xmax], the obstacle function set $O_{baffle}=\{F_{Y_{1}},...,F_{X_{1}},...\}$ is generated.
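To make the band construction concrete, the following sketch tests whether a grid point lies inside a Gradient Band bounded by a fitted polynomial shifted by U and L, and places one hypothetical vertical "baffle" across it. The polynomial coefficients, offsets, and map bounds in the example are placeholders, not values from the paper.

```python
import numpy as np

def in_gradient_band(x, y, coeffs, U, L, x_range, y_range):
    """True if (x, y) lies inside the Gradient Band bounded by the fitted
    polynomial shifted by U (upper) and L (lower).

    coeffs : polynomial coefficients a_0..a_k of the fitted gradient curve.
    """
    poly = np.polyval(coeffs[::-1], x)     # a_0 + a_1*x + ... + a_k*x^k
    upper, lower = poly + U, poly + L
    X_L, X_max = x_range
    Y_L, Y_max = y_range
    return (X_L < x < X_max) and (Y_L < y < Y_max) and (lower < y < upper)

def vertical_baffle(x0, coeffs, U, L, x_range, y_range, y_max):
    """Hypothetical 'baffle-like' obstacle: one vertical cross-section of the band."""
    return [(x0, y) for y in range(1, y_max + 1)
            if in_gradient_band(x0, y, coeffs, U, L, x_range, y_range)]

print(vertical_baffle(12, coeffs=[1.0, 0.8, 0.01], U=4, L=-4,
                      x_range=(0, 30), y_range=(0, 30), y_max=30))
```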
In this paper, the lowest generation precision of the CDG algorithm is 91.91% (e.g., Fig. 5), which shows that the proposed method can generate common dominant adversarial examples against A3C path finding with high confidence.
Samples of dominant adversarial examples. The first column is the original clean map for path finding. The columns on the right are samples of dominant adversarial examples generated by the proposed CDG algorithm, and (a), (b), (c), (d) represent four different samples
This paper showed that the generation accuracy of the CDG algorithm for adversarial examples is relatively high: adding small physical obstacles to the original clean map interferes with the path finding of the A3C agent. Compared with other works in this field, the experiments in Chen's work cover 10 map sizes (10×10, 20×20, 30×30, 40×40, 50×50, 60×60, 70×70, 80×80, 90×90, and 100×100), which makes it possible to better verify the effectiveness of the proposed CDG algorithm.
Black-box attack
Policy induction attack (PIA)
Behzadan and Munir (2017) also discovered that DQN-based policies are vulnerable to adversarial perturbations, and verified that the transferability of adversarial examples (proposed by Szegedy et al. (2013)) across different DQN models does exist.
Based on this vulnerability of DQN, they proposed a new type of adversarial attack named the policy induction attack. Their threat model assumes the adversary has only limited prior information: the reward function R and an estimate of the update frequency of the target network. In other words, the adversary is not aware of the target's network architecture or its parameters at every time step, so adversarial examples must be generated with black-box techniques (Papernot et al. 2016c).
At every time step, the adversary computes a perturbation vector $\hat {\delta }_{t+1}$ for the next state st+1 such that $\max _{a^{\prime }}\hat {Q}(s_{t+1}+\hat {\delta }_{t+1},a^{\prime };\theta ^{-}_{t})$ is attained at $a^{\prime }=\pi ^{*}_{adv}(s_{t+1})$. The whole policy induction attack consists of two phases, initialization and exploitation.
The initialization phase must be completed before the target starts interacting with the environment. Specifically, this phase consists of the following steps:
Training a DQN policy based on the adversary's reward function r′ to obtain an adversarial policy $\pi ^{*}_{adv}$.
Creating a replica of the target's DQN and initializing it with random parameters.
The exploitation phase carries out the adversarial attack operations (e.g., crafting adversarial inputs) and constitutes the attack life cycle shown in Fig. 6. The cycle is initialized with the first observation of the environment and runs in step with the operation of the target agent.
The exploitation cycle of the policy induction attack (Behzadan and Munir 2017). In the first phase, the adversary observes the current state and the transitions in the environment. The adversary then estimates the optimal action to select based on the adversarial policy. In the next phase, the adversary applies the perturbation to the target's input. Finally, the adversary waits for the action selected by the agent
In the context of policy induction attacks, this paper conjectured that the temporal features of the training process may be utilized to provide protection mechanisms. However, an analytical treatment of the problem that establishes the relationship of the model parameters would provide deeper insight and guidelines for designing more secure deep reinforcement learning architectures.
Specific time-step attack
The uniform attack strategy (e.g., Huang et al. (2017)) can be regarded as a direct extension of adversarial attacks on DNN-based classification systems, since the adversarial example at each time step is computed independently of the adversarial examples at other time steps. However, such a tactic does not consider the uniqueness of the RL problem.
Lin et al. (2017) proposed two tactics of adversarial attack specific to the reinforcement learning setting, namely the strategically-timed attack and the enchanting attack.
Strategically-Timed Attack (STA)
Because the reward signal in many RL problems is sparse, an adversary need not attack the RL agent at every time step. The strategically-timed attack exploits this characteristic and attacks only a selected subset of time steps. Its core idea is that the adversary can minimize the expected accumulated reward of the target agent by attacking fewer than Γ≪L time steps, which can be formulated as an optimization problem
$$\begin{aligned} \min_{b_{1},\dots,b_{L},\delta_{1},\dots,\delta_{L}}\ & R_{1}(\bar{s}_{1},\dots,\bar{s}_{L})\\ \bar{s}_{t}&=s_{t}+b_{t}\delta_{t}\quad \text{for all } t=1,\dots,L\\ b_{t}&\in\{0,1\}\quad \text{for all } t=1,\dots,L\\ \sum_{t} b_{t} &\leq \Gamma \end{aligned}$$
where s1,...,sL denotes the sequence of observations or states, δ1,...,δL is the sequence of perturbations, R1 represents the expected return from the first time step, b1,...,bL indicate when an adversarial example is applied, and Γ is a constant limiting the total number of attacks.
However, the optimization problem above is a mixed integer programming problem, which is difficult to solve. The authors therefore proposed a heuristic algorithm based on a relative action preference function c, which computes the preference of the agent for its most preferred action over its least preferred action in the current state (similar to Farahmand (2011)).
For policy gradient-based methods such as the A3C algorithm, Lin et al. defined the function c as
$$c(s_{t})=\max_{a_{t}}\pi(s_{t},a_{t})-\min_{a_{t}}\pi(s_{t},a_{t})$$
where st denotes the state at time step t, and at denotes the action at time step t, and π is the policy network which maps the state-action pair (st,at) to a probability.
Meanwhile, for value-based methods such as DQN, the function c can be defined as
$$c(s_{t})=\max_{a_{t}}\frac{e^{Q(s_{t},a_{t})/T}}{\sum_{a_{k}}e^{Q(s_{t},a_{k})/T}}-\min_{a_{t}}\frac{e^{Q(s_{t},a_{t})/T}}{\sum_{a_{k}}e^{Q(s_{t},a_{k})/T}}$$
where Q denotes the Q-values of actions, and T denotes the temperature constant.
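A compact sketch of the relative action preference function and the resulting attack trigger is shown below; the softmax shift for numerical stability and the 0.8 threshold are illustrative choices, not values specified by Lin et al.

```python
import numpy as np

def c_policy(pi_probs):
    """Preference gap for a policy network output (action probabilities)."""
    return float(np.max(pi_probs) - np.min(pi_probs))

def c_value(q_values, T=1.0):
    """Preference gap for a value-based agent via a softmax with temperature T."""
    z = np.exp((q_values - np.max(q_values)) / T)   # shift for numerical stability
    p = z / z.sum()
    return float(p.max() - p.min())

def should_attack(score, threshold=0.8):
    """Strategically-timed rule: perturb only when the agent strongly
    prefers one action (hypothetical threshold)."""
    return score > threshold

print(should_attack(c_policy([0.05, 0.05, 0.9])))
print(should_attack(c_value(np.array([1.0, 1.2, 5.0]), T=1.0)))
```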
Enchanting Attack (EA)
The purpose of the enchanting attack is to push the RL agent into a desired state sg after H steps, starting from the current state st at time step t. Under this approach, the adversary needs to craft a series of adversarial examples st+1+δt+1,...,st+H+δt+H, so this tactic is more difficult than the strategically-timed attack.
The first hypothesis assumes full control of the target agent, i.e., the ability to make it take any action at any time step. Under this condition, the problem reduces to planning an action sequence that drives the agent from state st to the target state sg. For the second hypothesis, Lin et al. craft an adversarial example st+δt that lures the target agent into taking the first action of the planned sequence, using the method proposed by Carlini and Wagner (2017). After the agent observes the adversarial example and takes the first designed action, the environment returns a new state st+1, and adversarial examples are built iteratively in this way. The attack flow of the enchanting attack is shown in Fig. 7.
Attack flow of the enchanting attack (Lin et al. 2017). Starting from the original state st, the processing flow is: 1) action sequence planning; 2) generating adversarial examples with target actions; 3) the agent takes actions under the adversarial example; 4) the environment returns the next state st+1. The adversary uses the prediction model to attack the target agent from the initial state st
In this work, the strategically-timed attack achieves the same effect as the traditional method (Huang et al. 2017) while reducing the total number of attacked time steps. Moreover, the enchanting attack lures the target agent into taking a planned action sequence, which suggests a new research direction for follow-up studies. Videos are available at http://yclin.me/adversarial_attack_RL/.
Adversarial attack on VIN (AVI)
The main contribution of Liu et al. (2017) is a method for detecting potential attacks that can obstruct the effectiveness of VIN. They built a 2D navigation task to demonstrate VIN, studied how to add obstacles to effectively affect VIN's performance, and proposed a general method suitable for different kinds of environments.
Their threat model assumes that the entire environment (including obstacles, starting point, and destination) is available, that the robot is known to be trained by VIN, and that it is easy to obtain both the VIN planning path and the theoretical path. Based on this threat model, they summarized three rules for effectively obstructing VIN.
Rule 1: The farther an added obstacle is from the VIN planning path, the less it disturbs the path.
Such rule can be formulated as:
$$v_{1y_{k}}=\omega_{1} \min\left\{d_{1}\,\middle|\,d_{1}=\sqrt{(x_{r}-y_{kr})^{2}+(x_{c}-y_{kc})^{2}},\ (x_{r},x_{c})=x\in X,\ (y_{kr},y_{kc})=y_{k}\in Y\right\}$$
where (xr,xc) is the coordinate of x, (ykr,ykc) is the coordinate of yk, and ω1 is the weight of v1.
Rule 2: Adding obstacles around the turning points of the path is most likely to succeed.
$$v_{2y_{k}}=\omega_{2}\min \left\{d_{2}\,\middle|\,d_{2}=\max(|t_{r}-y_{kr}|,|t_{c}-y_{kc}|),\ (t_{r},t_{c})=t\in T,\ (y_{kr},y_{kc})=y_{k}\in Y\right\}$$
where (tr,tc) denotes the coordinate of t, (ykr,ykc) represents the coordinate of yk, and ω2 is the weight of v2. The formula considers the Chebyshev distance from yk to the nearest turning point and uses the weight ω2 to control the attenuation of v2.
Rule 3: The closer the added obstacle is to the destination, the less likely it is to change the path.
Here (xnr,xnc) is the coordinate of xn, (ykr,ykc) denotes the coordinate of yk, and ω3 is the weight of v3. The formula can be written as:
$$v_{3y_{k}}=\omega_{3}\max(|x_{nr}-y_{kr}|,|x_{nc}-y_{kc}|),\quad (x_{nr},x_{nc})=x_{n},\ (y_{kr},y_{kc})=y_{k}\in Y$$
This formula considers the Chebyshev distance from yk to the destination and uses the weight ω3 to control the attenuation of v3.
The value v is calculated for each available point according to the three rules, and the values are sorted to pick out the most valuable points $S=\{y \mid v_{y_{k}}\in \max_{i}V,\ y\in Y\}$, where $V=\{v_{y_{1}},v_{y_{2}},...,v_{y_{k}}\}$.
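The sketch below mirrors the three rules by scoring each candidate obstacle position and returning the highest-valued ones; the weights, the top-k cut-off, and the convention that higher combined scores are preferred follow the excerpt above but should be treated as assumptions, since the sign conventions of ω1–ω3 are not fully specified here.

```python
import numpy as np

def chebyshev(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def score_candidates(candidates, path, turns, dest, w=(1.0, 1.0, 1.0), top_k=5):
    """Score each candidate obstacle position y_k with the three AVI rules and
    return the top_k highest-valued positions (weights are placeholders)."""
    scores = []
    for y in candidates:
        v1 = w[0] * min(np.hypot(x[0] - y[0], x[1] - y[1]) for x in path)
        v2 = w[1] * min(chebyshev(t, y) for t in turns)
        v3 = w[2] * chebyshev(dest, y)
        scores.append(v1 + v2 + v3)
    order = np.argsort(scores)[::-1][:top_k]
    return [candidates[i] for i in order]

path = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(score_candidates([(1, 1), (3, 3), (0, 2)], path,
                       turns=[(2, 0)], dest=(2, 2)))
```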
Liu's method performs well at automatically finding vulnerable points of VIN and thus obstructing the navigation task, as shown in Fig. 8.
Examples of successful adversarial attacks. The examples show that the proposed method is able to find vulnerabilities in VIN pathfinding and thus interfere with the agent's automatic pathfinding. a Sample of testing set. b Available Obstacle 1. c Available Obstacle 2. d Available Obstacle 3. e Available Obstacle 4. f Available Obstacle 5
However, this work does not analyze the successful adversarial attacks at the algorithmic level, but instead summarizes generation rules from successful black-box adversarial examples. Moreover, similar to the work of Xiang et al. and Bai et al., the map size is quite limited: only sizes up to 28×28 have been verified experimentally, which is not enough to establish the accuracy of the proposed method.
Summary for adversarial attack in reinforcement learning
We summarize the attributes of the adversarial attack methods described above in Table 3.
Table 3 Summary for the attributes of diverse attacking method
FGSM (Goodfellow et al. 2014a), SPA (Xiang et al. 2018), WBA (Bai et al. 2018), and CDG (Chen et al. 2018b) are white-box attacks, in which the adversary has access to the details of the training algorithm and the corresponding parameters of the target model. In contrast, PIA (Behzadan and Munir 2017), STA (Lin et al. 2017), EA (Lin et al. 2017), and AVI (Liu et al. 2017) are black-box attacks, in which the adversary does not know these details; in the threat models discussed in these papers, the adversary is assumed to have access to the training environment but not to the random initialization of the target policy, and additionally does not know what the learning algorithm is.
For the white-box attack policies, we summarize the parameters they use. SPA, WBA, CDG, PIA, and AVI each target a specific algorithm, whereas FGSM, STA, and EA are not tied to a single reinforcement learning algorithm; in this sense, the latter attack methods are more universally applicable.
Moreover, the learning modes of these adversarial attack methods differ: FGSM, SPA, WBA, CDG, and AVI are "one-shot" attacks, while PIA, STA, and EA are "iterative". All of the attack methods introduced here can generate adversarial examples that succeed with relatively high confidence. The application scenario of FGSM, PIA, STA, and EA is Atari games, while SPA, WBA, CDG, and AVI all target path planning. We also provide a statistical analysis of the attack results of the algorithms discussed above.
Defense technology against adversarial attack
Since Szegedy et al. (2013) introduced adversarial examples in 2013, many researchers have investigated approaches to defend against them. In this section, we briefly discuss some representative attempts to resist adversarial examples, divided into three parts: modifying the input, modifying the objective function, and modifying the network structure.
Modifying input
Adversarial training and its variants
Adversarial training
Adversarial training is one of the most common strategies in the literature for improving the robustness of neural networks: by continuously feeding new types of adversarial examples into training, the network's robustness is progressively improved. Goodfellow et al. (2014b) developed a method for generating adversarial examples (FGSM, see (Goodfellow et al. 2014a)) and proposed adversarial training with examples generated by this attack method; the adversarial examples are constantly updated during training so that the classification model can resist them. However, Moosavi-Dezfooli et al. (2017) pointed out that no matter how many adversarial examples are added, new adversarial examples can still fool the trained networks. Since then, by combining adversarial examples with other methods, researchers have produced better defenses in several recent works.
Ensemble adversarial training
Ensemble adversarial training trains a network using one-step adversarial examples generated from several pre-trained vanilla networks. A model trained by standard adversarial training can defend against weak perturbations but not against strong ones. Based on this, Tramèr et al. (2017) introduced ensemble adversarial training, which augments the training data with perturbations transferred from other static pre-trained models; this approach separates the generation of adversarial examples from the model being trained, while drawing an explicit connection with robustness to black-box adversaries. Models trained with ensemble adversarial training show strong robustness to black-box attacks on ImageNet.
Cascade adversarial training
For unknown iterative attacks, Na et al. (2018) proposed cascade adversarial training: the network is trained on adversarial images generated from an already-defended network trained against iterative attacks, together with one-step adversarial images from the network being trained. At the same time, the authors regularize training with a unified embedding so that the convolution filters gradually learn to ignore pixel-level perturbations. Cascade adversarial training is shown in Fig. 9.
Principled adversarial training
The structure of cascade adversarial training
From the perspective of distributionally robust optimization, Sinha et al. (2018) provided a principled adversarial training procedure that guarantees the performance of neural networks under adversarial data perturbation. Using a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserstein ball, the authors provide a training procedure that augments model parameter updates with worst-case perturbations of the training data.
Gradient Band-based Adversarial Training
Chen et al. (2018b) proposed a gradient band-based generalized attack immune model, shown in Fig. 10, which mainly consists of a Generation Module, a Validation Module, and an Adversarial Training Module.
Architecture for the gradient band-based generalized attack immune model
For the original clean map, the Generation Module generates dominant adversarial examples using the Common Dominant Adversarial Examples Generation Method (CDG) (see Section 3.2.4). The Validation Module runs the well-trained A3C agent on the original clean map and calculates Fattack for each example based on the attack success criteria proposed in the paper. The Adversarial Training Module then uses a single successfully attacking example for adversarial training and obtains a newly trained A3C agentnew that finally achieves "1:N" attack immunity.
Data randomization
In 2017, Xie et al. (2017) found that introducing random resizing of the training images can reduce the strength of attacks. They further proposed (Xie et al. 2018) to use randomization at inference time to mitigate the effects of adversarial attacks: a random resize layer and a random padding layer are added before the classification network. Their experiments demonstrate that the proposed randomization method is very effective at resisting both one-step and iterative attacks.
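A toy version of the randomization idea (random resizing followed by random zero-padding) on a single-channel array is sketched below; nearest-neighbour resizing, the output size, and operating outside the network are simplifications of Xie et al.'s method, which works on RGB inputs inside the classification pipeline.

```python
import numpy as np

def random_resize_pad(img, out_size=38, rng=None):
    """Randomly resize a square image (nearest-neighbour) and zero-pad it
    to out_size x out_size, mimicking the randomization defense idea."""
    rng = rng or np.random.default_rng()
    h = img.shape[0]
    new_h = int(rng.integers(h, out_size))             # random target size
    idx = np.arange(new_h) * h // new_h                # nearest-neighbour rows/cols
    resized = img[idx][:, idx]
    top = int(rng.integers(0, out_size - new_h + 1))   # random padding offsets
    left = int(rng.integers(0, out_size - new_h + 1))
    out = np.zeros((out_size, out_size), dtype=img.dtype)
    out[top:top + new_h, left:left + new_h] = resized
    return out

print(random_resize_pad(np.ones((32, 32)), out_size=38).shape)
```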
Input transformations
Guo et al. (2018) proposed strategies to defend against adversarial examples by transforming the inputs before feeding them to the image-classification system. The input transformations include bit-depth reduction, JPEG compression, total variance minimization, and image quilting. The authors showed that total variance minimization and image quilting are particularly effective defenses on ImageNet.
Input gradient regularization
Ross and Doshi-Velez (2017) first exploited input gradient regularization (Drucker and Le Cun 1992) to improve adversarial robustness. This defense trains differentiable models while penalizing the degree to which small changes in the inputs can alter model predictions. The work showed that training with gradient regularization strengthens robustness to adversarial perturbations, and that combining gradient regularization with adversarial training gives even greater robustness, although the computational cost is high.
Modifying the objective function
Adding stability term
Zheng et al. (2016) conducted stability training by adding a stability term to the objective function to encourage the DNN to generate similar outputs for variously perturbed versions of an image. The perturbed copy I′ of the input image I is generated with Gaussian noise ε, and the final loss L consists of the task objective L0 and the stability loss Lstability.
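The combined loss can be sketched as follows for an arbitrary placeholder model f; the cross-entropy task loss, the noise level σ, and the weight α are assumptions for illustration rather than the settings of Zheng et al.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def stability_training_loss(f, I, y, alpha=0.01, sigma=0.05, rng=None):
    """L = L_task(f(I), y) + alpha * ||f(I) - f(I')||^2 with I' = I + eps,
    eps ~ N(0, sigma^2). `f` is any model mapping an image to class scores."""
    rng = rng or np.random.default_rng()
    I_prime = I + rng.normal(0.0, sigma, size=I.shape)
    out, out_prime = f(I), f(I_prime)
    task = -np.log(softmax(out)[y])                 # cross-entropy on clean input
    stability = np.sum((out - out_prime) ** 2)      # output consistency penalty
    return task + alpha * stability

# toy model: a fixed linear map from a flattened 8x8 "image" to 3 classes
W = np.random.default_rng(3).normal(size=(3, 64))
f = lambda img: W @ img.ravel()
print(stability_training_loss(f, np.random.default_rng(4).random((8, 8)), y=1))
```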
Adding regularization term
Yan et al. (2018) append a regularization term based on adversarial perturbations to the objective function in a training recipe called "deep defense". Specifically, the authors optimize a joint objective consisting of the original objective term and a scaled ∥Δx∥p regularization term. Given a training set {(xk,yk)} and a parameterized function f, with W collecting the learnable parameters of f, the new objective function is optimized as below:
$$\min_{W} \sum_{k}L(y_{k},f(x_{k};W))+\lambda \sum_{k}R\left(-\frac{\|\Delta_{x_{k}}\|_{p}}{\|x_{k}\|_{p}}\right)$$
By combining an adversarial perturbation-based regularization with the classification objective function, the training model can learn to defend against adversarial attacks directly and accurately.
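A minimal sketch of evaluating this joint objective from precomputed quantities is given below; treating the per-example losses and perturbation norms as inputs, and using exp as the monotonic penalty R, are simplifying assumptions, not details fixed by Yan et al.

```python
import numpy as np

def deep_defense_objective(losses, delta_norms, x_norms, lam=1.0, R=np.exp):
    """Combined objective: sum_k L_k + lam * sum_k R(-||Delta_x_k|| / ||x_k||).

    losses      : per-example task losses L(y_k, f(x_k; W)).
    delta_norms : p-norms of each example's adversarial perturbation Delta_x_k.
    x_norms     : p-norms of the examples themselves.
    R           : monotonically increasing penalty function (exp is a common,
                  but here hypothetical, choice).
    """
    losses = np.asarray(losses, float)
    ratio = np.asarray(delta_norms, float) / np.asarray(x_norms, float)
    return losses.sum() + lam * R(-ratio).sum()

print(deep_defense_objective(losses=[0.3, 0.7], delta_norms=[0.1, 0.4],
                             x_norms=[5.0, 4.0], lam=2.0))
```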
Dynamic quantized activation function
Rakin et al. (2018) first explored quantization of activation functions, proposing adaptive quantization techniques for the activations so that the network can be trained to resist adversarial examples. They showed that the proposed Dynamic Quantized Activation (DQA) method greatly improves the robustness of DNNs under white-box attacks such as FGSM (Goodfellow et al. 2014a), PGD (Madry et al. 2017), and C&W (Carlini and Wagner 2017) on the MNIST and CIFAR-10 datasets. In this approach, the quantized activation functions are integrated into an adversarial training method in which the model learns parameters γ (the parameters of the DNN) that minimize the risk R(x,y)∼L[J(γ,x,y)]. On this basis, given an input image x and an adversarial example x+ε, the work aims to minimize the following objective to enhance robustness
$$\min R_{(x,y)\sim L}\left[\max J([\gamma,T],x+\epsilon,y)\right]$$
where T:=[t1,t2,...,tm−1] is a new set of learnable parameters. For an n-bit quantized activation function, the quantization has 2n−1 threshold values T; letting m=2n−1 and sgn denote the sign function, the m-level quantization function is as follows:
$$\begin{aligned} f(x)=0.5 \times \Big[&\operatorname{sgn}(x-t_{m-1})+ \sum_{i=m-1}^{m/2+1}t_{i}\big(\operatorname{sgn}(t_{i}-x)+\operatorname{sgn}(x-t_{i-1})\big)\\ &+ \sum_{i=m/2}^{2}t_{i-1}\big(\operatorname{sgn}(t_{i}-x)+\operatorname{sgn}(x-t_{i-1})\big)-\operatorname{sgn}(t_{1}-x)\Big] \end{aligned}$$
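The quantization formula above can be transcribed almost directly into NumPy, assuming the thresholds are sorted and 1-indexed as t1,...,tm−1; the threshold values in the usage line are arbitrary.

```python
import numpy as np

def quantized_activation(x, thresholds):
    """m-level quantized activation built from sign functions, following the
    formula above. `thresholds` is the sorted list T = [t_1, ..., t_{m-1}]."""
    t = np.sort(np.asarray(thresholds, float))
    m = len(t) + 1
    x = np.asarray(x, float)
    out = np.sign(x - t[m - 2])                      # sgn(x - t_{m-1})
    for i in range(m - 1, m // 2, -1):               # i = m-1, ..., m/2+1
        out = out + t[i - 1] * (np.sign(t[i - 1] - x) + np.sign(x - t[i - 2]))
    for i in range(m // 2, 1, -1):                   # i = m/2, ..., 2
        out = out + t[i - 2] * (np.sign(t[i - 1] - x) + np.sign(x - t[i - 2]))
    return 0.5 * (out - np.sign(t[0] - x))

print(quantized_activation(np.linspace(-1, 1, 9), thresholds=[-0.5, 0.0, 0.5]))
```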
Stochastic activation pruning
Inspired by game theory, Dhillon et al. (2018) proposed a mixed strategy, Stochastic Activation Pruning (SAP), for adversarial defense. The method prunes a random subset of activations (preferentially those with smaller magnitude) and scales up the survivors to compensate; applying SAP to pretrained networks without any additional training already provides robustness against adversarial examples. The authors showed that combining SAP with adversarial training brings further benefits. In particular, their experiments demonstrate that SAP can effectively defend against adversarial examples in reinforcement learning.
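A rough NumPy sketch of the SAP idea, sampling activations with probability proportional to their magnitude and rescaling survivors by their keep probability, is shown below; the number of draws (set via keep_frac) is a hypothetical parameter.

```python
import numpy as np

def stochastic_activation_pruning(h, keep_frac=0.5, rng=None):
    """Sample a subset of activations with probability proportional to their
    magnitude, zero out the rest, and rescale survivors so the layer's
    expected output is preserved (inverse-propensity weighting)."""
    rng = rng or np.random.default_rng()
    h = np.asarray(h, float)
    mag = np.abs(h)
    p = mag / mag.sum() if mag.sum() > 0 else np.full(h.shape, 1.0 / h.size)
    k = max(1, int(keep_frac * h.size))       # number of sampling draws
    keep_prob = 1.0 - (1.0 - p) ** k          # prob. a unit survives k draws
    mask = rng.random(h.shape) < keep_prob
    return np.where(mask, h / np.maximum(keep_prob, 1e-12), 0.0)

print(stochastic_activation_pruning(np.array([0.1, -2.0, 0.5, 3.0])))
```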
Modifying the network structure
Defensive distillation
Papernot et al. (2016a) proposed the defensive distillation mechanism for training networks to resist adversarial attacks. Defensive distillation trains models to output probabilities over classes rather than hard class decisions; the probabilities are provided by an earlier model trained on the same task using hard class labels. Papernot et al. showed that defensive distillation can resist small-perturbation adversarial attacks by training the network against the L-BFGS (Szegedy et al. 2013) and FGSM (Goodfellow et al. 2014a) attacks. Unfortunately, defensive distillation is only applicable to DNN models based on energy probability distributions. Carlini and Wagner proved that defensive distillation is ineffective (Carlini and Wagner 2016) and introduced a method for constructing adversarial examples (Carlini and Wagner 2017) that is not affected by various defense methods, including defensive distillation.
High-level representation guided denoiser
Liao et al. (2018) proposed the high-level representation guided denoiser (HGD) to defend against adversarial examples in image classification. The main idea is to train a neural-network-based denoiser that removes adversarial perturbations before inputs are sent to the target model. Liao et al. use a denoising U-net (Ronneberger et al. 2015) (DUNET) as the denoising model. Compared with a denoising autoencoder (DAE) (Vincent et al. 2008), DUNET has direct connections between encoder and decoder layers of the same resolution, so the network only needs to learn how to remove noise rather than how to reconstruct the whole image. Instead of a pixel-level reconstruction loss, the authors use the difference between the top-level outputs of the target model on original and adversarial examples as the loss function guiding the training of the image denoiser. The proposed HGD generalizes well and makes the target model more robust against both white-box and black-box attacks.
Add detector subnetwork
Metzen et al. (2017) proposed augmenting deep neural networks with a detector subnetwork trained on a binary classification task that distinguishes real data from data containing adversarial perturbations. Considering that the detector itself can be attacked, they proposed dynamic adversary training, which introduces a novel adversary that aims to fool both the classifier and the detector, and trains the detector to counteract this adversary. The experimental results show that the dynamic detector is robust, with detectability above 70% on the CIFAR10 dataset (Krizhevsky and Hinton 2009).
Multi-model-based defense
Srisakaokul et al. (2018) explored a novel defense approach, MULDEF, based on the principle of diversity. MULDEF first constructs a family of models by combining the seed model (the target model) with additional models constructed from it; the models in the family are complementary to each other to achieve robustness diversity, i.e., the adversarial examples of one model are usually not adversarial examples of the other models in the family. The method then randomly selects one model from the family to apply to a given input example; the randomness of the selection reduces the attack success rate. The evaluation results show that MULDEF increases the adversarial accuracy of the target model by about 35-50% in the white-box scenario and 2-10% in the black-box scenario.
PixelDefend
Song et al. (2017) proposed PixelDefend, which utilizes generative models to defend against adversarial examples. The authors showed that adversarial examples mainly lie in low-probability regions of the training distribution, regardless of the attack type and target model. Moreover, they found that neural density models excel at detecting imperceptible adversarial perturbations; based on this discovery, PixelDefend purifies a perturbed image back towards the training distribution. The authors also note that PixelDefend can be combined with other model-specific defenses. Experimental results (e.g., Fig. 11) showed that PixelDefend greatly improves the resilience of a variety of state-of-the-art defense methods.
Defense-GAN
Example of PixelDefend (Song et al. 2017). The first image denotes the original clean image in CIFAR-10 (Krizhevsky et al. 2014), and the remaining pictures represent adversarial examples produced by the attack methods indicated above each example, with the predicted label shown at the bottom. The second line shows the corresponding purified images
Samangouei et al. (2018) made the first attempt to construct a defense against adversarial attacks based on GANs (Radford et al. 2015). Their Defense-GAN uses a generative model to improve robustness against both black-box and white-box attacks. Any classification model can use Defense-GAN without changing the classifier structure or the training process, and Defense-GAN can serve as a defense against any adversarial attack, since it does not assume knowledge of the process used to generate the adversarial examples. The experimental results showed that Defense-GAN is effective against different adversarial attacks and can improve on existing defense technologies.
Discriminative model
Since it is not guaranteed that adversarial examples generated as in Liu et al. (2017) will successfully obstruct VIN path planning, Wang et al. explored a fast approach to automatically identify VIN adversarial examples. To determine whether an attack is successful, they compare the two paths on a pair of maps, the normal map and the adversarial map. By visualizing the pair of paths in a path image, they transform the different attack results into different categories of path images. They analyze the possible scenarios of adversarial maps and define categories for the predicted path pairs, dividing the results into four classes: the unreached path (UrP) class, the fork path (FP) class, the detour path (DP) class, and the unchanged path (UcP) class. Based on these category definitions, they implement a training-based identification method combining path feature comparison and path image classification.
In this method, UrP and UcP are identified through path feature comparison, while DP and FP are identified through path image classification. The experimental results showed that this method achieves high accuracy and faster identification than manual observation (e.g., Fig. 12).
Four categories of VIN adversarial maps. The first line denotes the original maps, the second line represents the generated adversarial examples, and the third line shows the extracted path images. a The UrP. b The FP. c The DP. d The UcP
Characterizing adversarial subspaces
Ma et al. (2018) made the first attempt to explain the extent to which adversarial perturbations affect the Local Intrinsic Dimensionality (LID) (Houle 2017) characteristics of adversarial regions. They showed empirically that LID characteristics can facilitate the detection of adversarial examples generated by several state-of-the-art attacks. For the five attack strategies (FGSM (Goodfellow et al. 2014a), BIM-a (Saad 2003), BIM-b (Saad 2003), JSMA (Papernot et al. 2016b), and Opt) evaluated on three benchmark datasets (MNIST (LeCun et al. 2010), CIFAR-10 (Krizhevsky et al. 2014), and SVHN (Netzer et al. 2011)), the LID-based method outperforms most state-of-the-art detection methods.
Ma et al. argue that their analysis of the LID characteristics of adversarial regions not only motivates new directions for effective adversarial defense, but also poses new challenges for the development of adversarial attacks, and helps us better understand the vulnerabilities of DNNs (LeCun et al. 1989).
Conclusion and discussion
In this paper, we conduct, to our knowledge, the first comprehensive survey on adversarial attacks against reinforcement learning in the context of AI security. Reinforcement learning is a workhorse for AI applications ranging from Atari games to Connected and Automated Vehicle Systems (CAV); hence, how to build a reliable reinforcement learning system to support security-critical applications in AI has become more pressing than ever. Huang et al. (2017) discovered that adversarial attacks are also effective against neural networks trained by reinforcement learning, which has inspired innovative research in this direction. Our work reviews these contributions, focusing on the most influential and interesting works in the field. We give a comprehensive introduction to the literature on adversarial attacks in various reinforcement learning applications, and briefly analyze the most valuable defense technologies against existing adversarial attacks (Table 4).
Table 4 Different attacks targeted by different defense technologies
Although RL systems do suffer from the security vulnerability of adversarial attacks, our survey of existing attack technologies shows that complete black-box attacks are rare (a complete black-box attack means that the adversary has no knowledge of the target model and cannot interact with the target agent at all), which makes it very difficult for adversaries to attack reinforcement learning systems in practice. Moreover, given the high level of activity in this research direction, it can be expected that a largely reliable reinforcement learning system will become available to support security-critical applications in AI.
Akhtar, N, Mian A (2018) Threat of adversarial attacks on deep learning in computer vision: A survey. arXiv preprint arXiv:1801.00553.
Bai, X, Niu W, Liu J, Gao X, Xiang Y, Liu J (2018) Adversarial Examples Construction Towards White-Box Q Table Variation in DQN Pathfinding Training In: 2018 IEEE Third International Conference on Data Science in Cyberspace (DSC), 781–787.. IEEE.
Behzadan, V, Munir A (2017) Vulnerability of deep reinforcement learning to policy induction attacks In: International Conference on Machine Learning and Data Mining in Pattern Recognition, 262–275.. Springer, Cham.
Bougiouklis, A, Korkofigkas A, Stamou G (2018) Improving Fuel Economy with LSTM Networks and Reinforcement Learning In: International Conference on Artificial Neural Networks, 230–239.. Springer, Cham.
Carlini, N, Wagner D (2016) Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311.
Carlini, N, Wagner D (2017) Towards evaluating the robustness of neural networks In: 2017 IEEE Symposium on Security and Privacy (SP), 39–57.. IEEE.
Chen, QA, Yin Y, Feng Y, Mao ZM, Liu HX (2018a) Exposing Congestion Attack on Emerging Connected Vehicle based Traffic Signal Control In: Network and Distributed Systems Security (NDSS) Symposium.
Chen, T, Niu W, Xiang Y, Bai X, Liu J, Han Z, Li G (2018b) Gradient band-based adversarial training for generalized attack immunity of a3c path finding. arXiv preprint arXiv:1807.06752.
Dhillon, GS, Azizzadenesheli K, Bernstein JD, Kossaifi J, Khanna A, Lipton ZC, Anandkumar A (2018) Stochastic activation pruning for robust adversarial defense In: International Conference on Learning Representations. https://openreview.net/forum?id=H1uR4GZRZ.
Drucker, H, Le Cun Y (1992) Improving generalization performance using double backpropagation. IEEE Trans Neural Netw 3(6):991–997.
Farahmand, AM (2011) Action-gap phenomenon in reinforcement learning In: Advances in Neural Information Processing Systems, 172–180.
Goodall, C, El-Sheimy N (2017) System and method for intelligent tuning of Kalman filters for INS/GPS navigation applications: U.S. Patent No. 9,593,952. Washington, DC: U.S. Patent and Trademark Office.
Goodfellow, IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Goodfellow, IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. CoRR abs/1412.6572. 1412.6572.
Guo, C, Rana M, Cisse M, van der Maaten L (2018) Countering adversarial images using input transformations In: International Conference on Learning Representations. https://openreview.net/forum?id=SyJ7ClWCb.
Guo, X, Singh S, Lee H, Lewis RL, Wang X (2014) Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning In: Advances in neural information processing systems, 3338–3346.
He, K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition In: Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
Houle, ME (2017) Local intrinsic dimensionality I: an extreme-value-theoretic foundation for similarity applications In: International Conference on Similarity Search and Applications, 64–79.. Springer, Cham.
Huang, S, Papernot N, Goodfellow I, Duan Y, Abbeel P (2017) Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284.
Jaderberg, M, Mnih V, Czarnecki WM, Schaul T, Leibo JZ, Silver D, Kavukcuoglu K (2016) Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397.
Jia, YJ, Zhao D, Chen QA, Mao ZM (2017) Towards secure and safe appified automated vehicles In: 2017 IEEE Intelligent Vehicles Symposium (IV), 705–711.. IEEE.
Krizhevsky, A., Hinton G. (2009) Learning multiple layers of features from tiny images. Technical report, University of Toronto 1(4):7.
Krizhevsky, A, Nair V, Hinton G (2014) The cifar-10 dataset. online: http://www.cs.toronto.edu/kriz/cifar.html.
Kurakin, A, Goodfellow I, Bengio S (2016) Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236.
LeCun, Y, Cortes C, Burges C (2010) Mnist handwritten digit database 2. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist.
LeCun, Y, Boser B, Denker JS, Henderson D, Howard RE, Hubbard W, Jackel LD (1989) Backpropagation applied to handwritten zip code recognition. Neural Comput 1(4):541–551.
Liang, Y, Machado MC, Talvitie E, Bowling M (2016) State of the art control of atari games using shallow reinforcement learning In: Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, 485–493.. International Foundation for Autonomous Agents and Multiagent Systems.
Liao, F, Liang M, Dong Y, Pang T, Hu X, Zhu J (2018) Defense against adversarial attacks using high-level representation guided denoiser In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1778–1787.
Lin, Y-C, Hong Z-W, Liao Y-H, Shih M-L, Liu M-Y, Sun M (2017) Tactics of adversarial attack on deep reinforcement learning agents. arXiv preprint arXiv:1703.06748.
Liu, J, Niu W, Liu J, Zhao J, Chen T, Yang Y, Xiang Y, Han L (2017) A Method to Effectively Detect Vulnerabilities on Path Planning of VIN In: International Conference on Information and Communications Security, 374–384.. Springer, Cham.
Ma, X, Li B, Wang Y, Erfani SM, Wijewickrema S, Houle ME, Schoenebeck G, Song D, Bailey J (2018) Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613.
Madry, A, Makelov A, Schmidt L, Tsipras D, Vladu A (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
Markov, A (1907) Investigation of a remarkable case of dependent trials. Izv Ros Akad Nauk 1.
Martínez-Tenor, Á, Cruz-Martín A, Fernández-Madrigal JA (2018) Teaching machine learning in robotics interactively: the case of reinforcement learning with Lego Mindstorms. Interact Learn Environ:1–14.
Metzen, JH, Genewein T, Fischer V, Bischoff B (2017) On detecting adversarial perturbations. CoRR abs/1702.04267. 1702.04267.
Miyato, T, Maeda SI, Ishii S, Koyama M (2018) Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Trans Pattern Anal Mach Intell PP(99):1.
Mnih, V, Kavukcuoglu K, Silver D, Graves A, Antonoglou I, Wierstra D, Riedmiller M (2013) Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
Mnih, V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G, et al (2015) Human-level control through deep reinforcement learning. Nature 518(7540):529.
Mnih, V, Badia AP, Mirza M, Graves A, Lillicrap T, Harley T, Silver D, Kavukcuoglu K (2016) Asynchronous methods for deep reinforcement learning In: International conference on machine learning, 1928–1937.
Moosavi-Dezfooli, S-M, Fawzi A, Fawzi O, Frossard P (2017) Universal adversarial perturbations. arXiv preprint.
Na, T, Ko JH, Mukhopadhyay S (2018) Cascade adversarial machine learning regularized with a unified embedding. arXiv preprint arXiv:1708.02582.
Netzer, Y, Wang T, Coates A, Bissacco A, Wu B, Ng AY (2011) Reading digits in natural images with unsupervised feature learning.
Ohn-Bar, E, Trivedi MM (2016) Looking at humans in the age of self-driving and highly automated vehicles. IEEE Trans Intell Veh 1(1):90–104.
Papernot, N, McDaniel P, Wu X, Jha S, Swami A (2016a) Distillation as a defense to adversarial perturbations against deep neural networks In: 2016 IEEE Symposium on Security and Privacy (SP), 582–597.. IEEE.
Papernot, N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A (2016b) The limitations of deep learning in adversarial settings In: 2016 IEEE European Symposium on Security and Privacy (EuroS&P), 372–387.. IEEE.
Papernot, N, Mcdaniel P, Goodfellow I, Jha S, Celik ZB, Swami A (2016c) Practical black-box attacks against deep learning systems using adversarial examples. arXiv preprint arXiv:1602.02697 1(2):3.
Radford, A, Metz L, Chintala S (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
Rakin, AS, Yi J, Gong B, Fan D (2018) Defend deep neural networks against adversarial examples via fixed anddynamic quantized activation functions. arXiv preprint arXiv:1807.06714.
Ronneberger, O, Fischer P, Brox T (2015) U-net: Convolutional networks for biomedical image segmentation In: International Conference on Medical image computing and computer-assisted intervention, 234–241.. Springer, Cham.
Ross, AS, Doshi-Velez F (2017) Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. arXiv preprint arXiv:1711.09404.
Saad, Y (2003) Iterative methods for sparse linear systems, vol. 82. siam.
Samangouei, P, Kabkab M, Chellappa R (2018) Defense-gan: Protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605.
Schulman, J, Levine S, Abbeel P, Jordan MI, Moritz P (2015) Trust Region Policy Optimization In: Icml, 1889–1897.
Shalev-Shwartz, S, Shammah S, Shashua A (2016) Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295.
Silver, D, Huang A, Maddison CJ, Guez A, Sifre L, Van Den Driessche G, Schrittwieser J, Antonoglou I, Panneershelvam V, Lanctot M, et al (2016) Mastering the game of go with deep neural networks and tree search. Nature 529(7587):484.
Sinha, A, Namkoong H, Duchi J (2018) Certifiable distributional robustness with principled adversarial training In: International Conference on Learning Representations. https://openreview.net/forum?id=Hk6kPgZA-.
Song, Y, Kim T, Nowozin S, Ermon S, Kushman N (2017) Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766.
Srisakaokul, S, Zhong Z, Zhang Y, Yang W, Xie T (2018) Muldef: Multi-model-based defense against adversarial examples for neural networks. arXiv preprint arXiv:1809.00065.
Swiderski, F, Snyder W (2004) Threat modeling. Microsoft Press.
Szegedy, C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Tamar, A, Wu Y, Thomas G, Levine S, Abbeel P (2016) Value iteration networks In: Advances in Neural Information Processing Systems, 2154–2162.
Touretzky, DS, Mozer MC, Hasselmo ME (eds)1996. Advances in Neural Information Processing Systems 8: Proceedings of the 1995 Conference, vol. 8. Mit Press.
Tramèr, F, Kurakin A, Papernot N, Boneh D, McDaniel PD (2017) Ensemble adversarial training: Attacks and defenses. CoRR abs/1705.07204. 1705.07204.
Vincent, P, Larochelle H, Bengio Y, Manzagol P-A (2008) Extracting and composing robust features with denoising autoencoders In: Proceedings of the 25th international conference on Machine learning, 1096–1103.. ACM.
Watkins, C, Dayan P (1992) Machine learning. Technical Note: Q-Learning 8:279–292.
Wold, S, Esbensen K, Geladi P (1987) Principal component analysis. Chemometrics and intelligent laboratory systems 2(1-3):37–52.
Xiang, Y, Niu W, Liu J, Chen T, Han Z (2018) A PCA-Based Model to Predict Adversarial Examples on Q-Learning of Path Finding In: 2018 IEEE Third International Conference on Data Science in Cyberspace (DSC), 773–780.. IEEE.
Xie, C, Wang J, Zhang Z, Zhou Y, Xie L, Yuille AL (2017) Adversarial examples for semantic segmentation and object detection. CoRR abs/1703.08603. 1703.08603.
Xie, C, Wang J, Zhang Z, Ren Z, Yuille A (2018) Mitigating adversarial effects through randomization In: International Conference on Learning Representations. https://openreview.net/forum?id=Sk9yuql0Z.
Xiong, W, Droppo J, Huang X, Seide F, Seltzer M, Stolcke A, Yu D, Zweig G (2016) Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256.
Yan, Z, Guo Y, Zhang C (2018) Deepdefense: Training deep neural networks with improved robustness. CoRR abs/1803.00404. 1803.00404.
Yang, T, Xiao Y, Zhang Z, Liang Y, Li G, Zhang M, Li S, Wong T-W, Wang Y, Li T, et al (2018) A soft artificial muscle driven robot with reinforcement learning. Sci Rep 8(1):14518.
Zhang, J, Lu C, Fang C, Ling X, Zhang Y (2018) Load Shedding Scheme with Deep Reinforcement Learning to Improve Short-term Voltage Stability In: 2018 IEEE Innovative Smart Grid Technologies-Asia (ISGT Asia), 13–18.. IEEE.
Zheng, S, Song Y, Leung T, Goodfellow I (2016) Improving the robustness of deep neural networks via stability training In: Proceedings of the ieee conference on computer vision and pattern recognition, 4480–4488.
Zhu, Y, Mottaghi R, Kolve E, Lim JJ, Gupta A, Fei-Fei L, Farhadi A (2017) Target-driven visual navigation in indoor scenes using deep reinforcement learning In: 2017 IEEE international conference on robotics and automation (ICRA), 3357–3364.. IEEE.
The authors would like to thank the guidance of Professor Wenjia Niu and Professor Jiqiang Liu. Meanwhile this research is supported by the National Natural Science Foundation of China (No. 61672092), Science and Technology on Information Assurance Laboratory (No. 614200103011711), the Project (No. BMK2017B02-2), Beijing Excellent Talent Training Project, the Fundamental Research Funds for the Central Universities (No. 2017RC016), the Foundation of China Scholarship Council, the Fundamental Research Funds for the Central Universities of China under Grants 2018JBZ103.
This research is supported by the National Natural Science Foundation of China (No. 61672092), Science and Technology on Information Assurance Laboratory (No. 614200103011711), the Project (No. BMK2017B02-2), Beijing Excellent Talent Training Project, the Fundamental Research Funds for the Central Universities (No. 2017RC016), the Foundation of China Scholarship Council, the Fundamental Research Funds for the Central Universities of China under Grants 2018JBZ103.
Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, Beijing Jiaotong University, Beijing, China
Tong Chen
, Jiqiang Liu
, Yingxiao Xiang
, Wenjia Niu
, Endong Tong
& Zhen Han
TC conceived and designed the study. TC and YX wrote the paper. JL, WN, ET, and ZH reviewed and edited the manuscript. All authors read and approved the manuscript.
Correspondence to Wenjia Niu.
Wenjia Niu obtained his Bachelor degree from Beijing Jiaotong University in 2005 and his PhD degree from the Chinese Academy of Sciences in 2010, both in Computer Science. He is currently a professor at Beijing Jiaotong University. His research interests are AI Security, Agents, and Data Mining. He has published more than 50 research papers, including a number of regular papers in well-known international journals and conferences such as KAIS (Elsevier), ICDM, CIKM, and ICSOC, and has published 2 edited books. He serves on the Steering Committee of ATIS (2013-2016) and was the PC Chair of ASONAM C3'2015. He has been a PhD Thesis Examiner for Deakin University and a guest editor for the Chinese Journal of Computers, Enterprise Information Systems, Concurrency and Computation: Practice and Experience, and Future Generation Computer Systems, among others. He is a member of both the IEEE and ACM.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Adversarial attack
Adversarial example | CommonCrawl |
Optoplasmonic characterisation of reversible disulfide interactions at single thiol sites in the attomolar regime
Serge Vincent1,
Sivaraman Subramanian1 &
Frank Vollmer1
Nature Communications volume 11, Article number: 2043 (2020)
Characterization and analytical techniques
A Publisher Correction to this article was published on 09 June 2020
This article has been updated
Probing individual chemical reactions is key to mapping reaction pathways. Trace analysis of sub-kDa reactants and products is obfuscated by labels, however, as reaction kinetics are inevitably perturbed. The thiol-disulfide exchange reaction is of specific interest as it has many applications in nanotechnology and in nature. Redox cycling of single thiols and disulfides has been unresolvable due to a number of technological limitations, such as an inability to discriminate the leaving group. Here, we demonstrate detection of single-molecule thiol-disulfide exchange using a label-free optoplasmonic sensor. We quantify repeated reactions between sub-kDa thiolated species in real time and at concentrations down to 100's of attomolar. A unique sensing modality is featured in our measurements, enabling the observation of single disulfide reaction kinetics and pathways on a plasmonic nanoparticle surface. Our technique paves the way towards characterising molecules in terms of their charge, oxidation state, and chirality via optoplasmonics.
Access to single-molecule reactions to determine the state of participating species and their reaction mechanisms remains a significant technological challenge. The application of fluorescent optical methods to investigate a single molecule's reaction pathway is often non-trivial. Sophisticated fluorescent labelling may not be available, while the temporal resolution is limited by photobleaching and transit times1,2. Monitoring reactions between molecules that weigh less than 1 kDa is further complicated by labels, as adducts can have severely altered reaction kinetics. Non-invasive optical techniques for studying the nanochemistry of single molecules have thus been elusive.
Thiol and disulfide exchange reactions are particularly relevant to the field of nanotechnology3,4. The reversibility of the disulfide bond has, for example, paved the way to realising molecular walkers and motors5,6. Bottom-up thiol self-assembled monolayers have shown potential as building blocks for sensors and nanostructuring7. The precise attachment/detachment of thiolated DNA origami has even extended to the movement of plasmonic nanoparticles (NPs) along an engineered track8. In nature, disulfide bonds are a fulcrum for cell biochemistry. Reactions that form these links usually occur post-translation, stabilising folding and providing structure for a number of proteins9,10,11. The cell regularly controls disulfide bonds between thiol groups, alternately guiding species through reduction and oxidation12. Redox potentials and oxidative stress in this context are reflected in the relative concentrations of thiols and disulfides13.
Thiol/disulfide equilibria can be quantified in bulk, although often at the expense of high kinetic reactivity and the need for fluorescent or absorptive reagents to measure the exchange14. One such approach is an enzymatic recycling assay with 5-thio-2-nitrobenzoic acid absorbers capable of detecting thiols and disulfides down to 100's of picomolar concentrations15. This trades off quenching of thiol oxidation and exchange with the optimisation of reaction rates and the disruption of the thiol/disulfide equilibrium. As a disulfide bridge consists of two sulfur atoms that can interact with a thiolate (i.e. the conjugate base of a thiol), disulfide exchange is fundamentally intricate and the reaction branches for single molecules have yet to be fully characterised in the literature. Distinguishing leaving groups through a sensing element has so far been unachievable.
State-of-the-art sensors capable of transducing single-molecule interactions into optical16,17,18, mechanical19,20,21, electrical22,23,24, or thermal25 signals continue to emerge. Here we employ a label-free optoplasmonic system26 that has the specific advantage of detecting individual disulfide interactions in solution. Due to the hybridisation between an optical whispering-gallery mode (WGM) resonator and localised surface plasmon (LSP) resonance of a NP, perturbations to an LSP are observed through readout of a WGM coupled to it27,28,29. One strategy we propose is to immobilise thiolates on a gold NP surface with a separate functional group. Following selective covalent binding, immobilised thiolates may participate in redox reactions while under non-destructive probing. Reactions between sub-kDa reactants are monitored in real time and at concentrations as low as 100's of attomolar, hence isolating for the disulfide chemistry of single molecules in vitro. Such reactions frequently result in abrupt changes in hybrid LSP-WGM resonance linewidth/lifetime—a surprising phenomenon that was considered unresolvable by WGM mode broadening or splitting30,31,32. We clarify in this study that disulfide linkages to bound thiolate receptors can exclusively affect the hybrid LSP-WGM resonance linewidth, beyond a description via an unresolved mode split. Each linewidth transition per exchange also assigns a status to the leaving group. Our data suggests a sensing modality for inferring kinetics and chains of single disulfide reactions in proximity to a plasmonic NP, paving the way towards assessing molecular charge, oxidation, and chirality states on an integrated platform.
Experimental scheme
A gold NP surface serves as an effective detection area for biomolecular characterisation on an optoplasmonic sensor. Light field localisation and nanoscale mode volumes at the NP hotspots enable sensitivity to surface modification, wherein covalent bonding to the NP restricts the total number of binding sites. Previously, thiol and amine based immobilisation has been explored on our optoplasmonic sensor33. Under particular pH conditions that dictate the molecular charge of the analyte, thiol and amine functional groups were reported to bind to different facets of gold NPs34,35,36,37. For thiols, the binding preference is in the (111) and (100) planes of a gold surface which are present in an ordered crystal lattice. For amines, binding preferentially takes place at uncoordinated gold adatoms. Measurements from33 showed an approximate 2 orders of magnitude larger number of binding sites for thiols compared to amines on gold nanorods (NRs), demonstrating variable selectivity depending on surface regularity. If molecular charge is controlled and the NR surfaces are appropriately deformed, conditions can be reached where molecules containing both amine and thiol groups can predominantly bind onto gold via amine to create recognised thiolates38. These nucleophiles may attack disulfide bonds in molecules that diffuse to them. Reducing agents introduced in solution, such as tris(2-carboxyethyl)phosphine (TCEP), can then reduce bound disulfides and complete a redox cycle. This pathway establishes cyclical reactions near the NP surface to be analysed statistically.
The LSP resonance of a plasmonic NP can be weakly coupled to a WGM resonance of a dielectric microcavity. Through this coupling, molecules that successfully perturb the gold NP surface can be detected as shifts in an LSP-WGM resonance. Light coupled in and out of the hybrid system allows for evaluation of gold NP perturbations, i.e. by laser frequency sweeping across the LSP-WGM resonance and spectrally resolving the resonant lineshape of the transmitted light. In our setup we excite WGMs in a silica microsphere, with diameters in the range of 70–90 µm, using a tuneable external cavity laser with 642-nm central wavelength. The laser beam is focused onto a prism surface to excite WGMs by frustrated total internal reflection. With a sweep frequency of 50 Hz, the transmission spectrum is acquired through photodetection at the output arm every 20 ms and a Lorentzian-shaped resonance is tracked (Fig. 1a, b). The evanescent field at the periphery of the microcavity is subsequently enhanced and localised in the vicinity of bound, LSP-resonant gold NRs. The cetyltrimethylammonium bromide-coated NRs have a 10-nm diameter and 24-nm length with longitudinal LSP resonance at \(\lambda _0 =\) 650 nm. In the event of molecules interacting with the gold NR, the LSP-WGM lineshape position \(\lambda _{{\mathrm{Res}}}\) and/or full width at half maximum \(\kappa\) will vary. Discrete jumps in these parameters may be measured in the time domain and are indicative of molecular bond formation with gold. Groupings of signal fluctuations exceeding 3σ from transient arrival/departure can also arise (Fig. 1c), where σ is the standard deviation of the noise derived from a representative 20-s trace segment. The resonance shifts of these signal packets are compiled for a series of analyte concentrations to confirm Poissonian statistics and first-order reaction rates (Fig. 1d). An extrapolation error exists in Fig. 1d given the chosen concentration range, yet the event rate is most nearly linear with concentration. Despite the negligible scattering and absorption cross-section of a single molecule, the ultrahigh-quality-factor WGM and its back-action on the perturbed LSP act as a channel to sense loss changes intrinsic to or induced by a gold NP antenna. NP absorption spectroscopy by means of optoplasmonics39 provides groundwork for such a modality, as the absorption cross-section change in a NP due to surface reactions may become detectable. We affirm that signal traces can exhibit (1) simultaneous shifts in resonant wavelength, linewidth, and resolved mode splitting30,32 and (2) exclusive linewidth variation when single molecules diffuse within the LSP evanescent field decay length of the NP. Note here that the spectral resolution of our system is set by the laser frequency noise.
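As an illustrative sketch of this event-flagging step (not the acquisition code used in this work; the noise level, 20-ms sampling interval, step height, and persistence criterion below are assumptions chosen for the example), discrete excursions beyond 3σ of the quiet-trace noise can be flagged as follows:

```python
import numpy as np

def flag_events(trace, dt=0.02, noise_window_s=20.0, n_sigma=3.0, min_samples=3):
    """Flag excursions of a resonance trace that exceed n_sigma times the noise
    level (estimated from a quiet leading segment) for at least min_samples
    consecutive points, as a crude stand-in for the 3-sigma event criterion."""
    n_noise = int(noise_window_s / dt)
    baseline = np.median(trace[:n_noise])
    sigma = np.std(trace[:n_noise])
    above = np.abs(trace - baseline) > n_sigma * sigma
    # Keep only excursions that persist for min_samples consecutive points
    events = []
    run_start = None
    for i, flag in enumerate(above):
        if flag and run_start is None:
            run_start = i
        elif not flag and run_start is not None:
            if i - run_start >= min_samples:
                events.append(run_start * dt)
            run_start = None
    if run_start is not None and above.size - run_start >= min_samples:
        events.append(run_start * dt)
    return np.array(events), sigma

# Synthetic trace: 0.5-fm noise with a 10-fm step (binding event) at t = 30 s
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.02)
trace = rng.normal(0.0, 0.5, t.size)
trace[t >= 30.0] += 10.0
events, sigma = flag_events(trace)
print(f"noise sigma = {sigma:.2f} fm; event(s) start at t = {np.round(events, 2)} s")
```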
Fig. 1: Optoplasmonic sensor setup and quantification of adsorbing d-cysteine.
a Scheme for LSP-WGM based sensing. A beam emitted from a tuneable laser source, with central wavelength of 642 nm, is focused onto a prism face to evanescently couple to a microspherical WGM cavity. The WGM excites the LSPs of Au NRs on the cavity surface and the hybrid system's transmission spectrum is acquired at the output arm of the setup. d-cysteine (d-Cys) analytes have carboxyl, thiol, and amine groups. b Sensing through tracking perturbations of the Lorentzian resonance extremum in the transmission spectrum. The resonant wavelength \(\lambda _{{\mathrm{Res}}}\) and linewidth \(\kappa\) that define the quality factor \(Q = \lambda _{{\mathrm{Res}}}/\kappa\) are shown in the subfigure, as is unresolved mode splitting due to scattering. c Single-molecule time-domain signatures with signal value \({\mathrm{\Delta }}\lambda _{{\mathrm{Res}}}\) and duration \({\mathrm{\Delta }}\tau\) from the transit of d-Cys near Au NRs. The solvent used is 0.02% sodium dodecyl sulfate (SDS) in deionised water. d Linear dependence of event frequency on analyte concentration that suggests first-order rates. Events conform to a Poisson process (Supplementary Fig. 1).
Disulfide reaction mechanism and statistical analysis
Loading of the gold NR surface with thiolate linkers requires a set of restrictions on the solvent environment at room temperature. To promote amine-gold bonds, we use a buffer at a pH that is above an aminothiol's logarithmic acid dissociation constants \({\mathrm{pKa}}_{{\mathrm{SH}}}\) and \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\). Within this balance, anionic species with negatively charged \(S^ -\) and neutral \({\mathrm{NH}}_2\) groups will dominate as per the Henderson–Hasselbalch equation40. A molecule must first reach the gold surface by overcoming Debye screening from surface charges41, e.g. from the gold NR's coating and pre-functionalisation of the glass microcavity. Such electrostatic repulsive forces can be reduced by electrolyte ions in substantial excess of the molecules under study. Analogous to raising the melting temperature of DNA from ambient conditions by increasing the salt concentration, the arrival rate of molecules to detection sites plateaus when the salt concentration is on the order of 1 M. Due to indiscriminate attachment of gold NRs onto the glass microcavity in steps preceding single-molecule measurements (Supplementary Fig. 2a), molecules in the medium should also be replenished to account for capture by NRs outside of the WGM's evanescent field (i.e. those that do not contribute to LSP-WGM hybridisation). Overall, these factors necessitate high electrolyte concentrations and recurring injection of analyte into a buffer of pH > \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\) to attain a sufficient reaction rate in the subfemtomolar regime.
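For orientation, the Henderson–Hasselbalch bookkeeping behind these pH choices can be sketched in a few lines; this is an illustrative calculation (not part of the original analysis) that uses the cysteamine pKa values quoted in the next paragraph and the two buffer pH values used later in the experiments.

```python
def deprotonated_fraction(pH, pKa):
    """Henderson-Hasselbalch: fraction of an acidic group in its conjugate
    base form (e.g. S- for a thiol, neutral NH2 for a protonated amine)."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# Cysteamine pKa values quoted in the text
pKa_SH, pKa_NH2 = 8.19, 10.75

for pH in (10.19, 11.09):
    f_thiolate = deprotonated_fraction(pH, pKa_SH)
    f_neutral_amine = deprotonated_fraction(pH, pKa_NH2)
    print(f"pH {pH}: thiolate fraction {f_thiolate:.3f}, "
          f"neutral-amine fraction {f_neutral_amine:.3f}")
```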
The aminothiol linkers of interest for our experiments are chemically simple amino acids or pharmaceuticals with minimal side chains. For chiral studies, d- and l-cysteine (\({\mathrm{pKa}}_{{\mathrm{SH}}}\) = 8.33 and \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\) = 10.7842) are good candidates as they contain a carboxyl group that does not interfere with disulfide reactions. Nevertheless, for simplicity, we began with cysteamine (\({\mathrm{pKa}}_{{\mathrm{SH}}}\) = 8.19 and \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\) = 10.7543) as it is a stable aminothiol that excludes any side chains. The cysteamine's amine group favourably binds to our optoplasmonic sensor in a sodium carbonate–bicarbonate buffer at a pH slightly above 10.75, with 1 M sodium chloride (Fig. 2a, b). Typical signal patterns in Fig. 2a for amine-gold binding are discontinuous steps in both \(\lambda _{{\mathrm{Res}}}\) and \(\kappa\) on the order of 1 to 10 fm, with monotonic redshifts in \(\lambda _{{\mathrm{Res}}}\). Signal magnitude and direction depend on variables such as the position and orientation of the gold NR detector on the microcavity26, the detection site on the NR itself, and the analyte's molecular mass/polarisability44. As time evolves and analyte is steadily supplied, the binding sites become occupied and the event rate decreases (Fig. 2c). These independent shifts in \(\lambda _{{\mathrm{Res}}}\) and \(\kappa\) are collected in Fig. 2d to showcase non-monotonic linewidth narrowing and broadening once single molecules bind. This is an unconventional result as there are equally likely signs for \({\mathrm{\Delta }}\kappa\) without apparent proportionality to \({\mathrm{\Delta }}\lambda _{{\mathrm{Res}}}\). A singlet of an unresolved mode split that would generate the linewidth shift is thus unsubstantiated.
Fig. 2: Single cysteamine binding to gold NRs via amine at subfemtomolar concentration.
a Discrete signals in the LSP-WGM resonance trace from covalent bonding of the \({\mathrm{NH}}_2\) ligands to Au in a basic buffer. b Conceptual diagram of the cysteamine surface reaction. Cysteamine, with its thiol and amine groups, forms an amine-gold bond as indicated by the red arrow. c Exponential decay in cumulative binding step count as the system approaches saturation. In this regime, it is necessary to periodically inject more analyte in solution as scarce analytes are lost to external immobilisation (i.e. from undetected NRs that are not excited by the WGM). d Histograms depicting the resonance shift \({\mathrm{\Delta }}\lambda _{{\mathrm{Res}}}\) and linewidth shift \({\mathrm{\Delta }}\kappa\) for binding events, as well as their related event time separations \({\mathrm{\Delta }}t_1\) and \({\mathrm{\Delta }}t_2\). The \({\mathrm{\Delta }}\kappa\) distribution shows both positive and negative shifts, while \({\mathrm{\Delta }}t_1\) and \({\mathrm{\Delta }}t_2\) distributions are Poissonian.
An added convenience of choosing cysteamine is its comparable diffusion kinetics with respect to N-acetylcysteine (NAC)—a synthetic precursor of cysteine with acetyl protecting group in place of primary amine. We used NAC as a negative control and the response revealed a negligible rate of thiol-gold bond formation at high pH and high concentration (Fig. 3). A lack of step discontinuities within the trace supports amine-gold bonding in the basic buffer and therefore thiol-functionalisation of the gold NRs with cysteamine.
Fig. 3: Background and negative control measurement with NAC at micromolar concentration.
a Resonance and linewidth shift traces exhibiting transient signal above 3σ with rates on the order of 0.1 s−1 over several minutes; however, these persist in the presence and absence of NAC and TCEP in solution. No permanent binding patterns were found during peak tracking. b NAC molecule, with carboxyl, thiol, and (amine-attached) acetyl groups, near a detection site.
pH-dependent disulfide nanochemistry
The charge of single molecules diffusing to the optoplasmonic sensor can lead to a diverse set of reactions and LSP-WGM resonance perturbations. Dimerisation, for instance, is maximised when thiol groups are made nucleophilic through deprotonation at a pH above the \({\mathrm{pKa}}_{{\mathrm{SH}}}\). To circumvent electrostatic repulsion between primary amines, high aminothiol dimerisation and disulfide exchange rates demand a pH greater than the \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\). We therefore investigated these effects by way of pH variation near the \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\). After pre-loading the gold NRs on the glass WGM microcavity with cysteamine in Fig. 4a, we flushed the chamber volume and replaced the surrounding dielectric with sodium carbonate-bicarbonate buffer, 1 M sodium chloride, at pH 10.19 < \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\). Figure 4b highlights signal activity upon addition of a racemic, subfemtomolar mixture of reduced d- and l-cysteine. Transient peaks in the linewidth appear in packets that dissipate (see Fig. 4c) as external capture removes available dl-cysteine. We attribute these peaks to thiolates that fail to form a disulfide bond (Fig. 4d). The Poisson-distributed events for t ≤ 2 min. have a mean rate of 0.01 aM−1 s−1 that surpasses the diffusion-limited expectation (i.e. the dl-cysteine diffusion coefficient \(D \sim 10^{-10}\,{\mathrm{m}}^2\,{\mathrm{s}}^{-1}\) and \(k_{{\mathrm{on}}} \sim 1\,{\mathrm{nM}}^{-1}\,{\mathrm{s}}^{-1}\)45), implying molecular trapping near the gold NR hotspots. Charged molecules are, by analogy to atomic ions41, bound by an electrostatic potential well whose depth is increased in proportion to ionic strength.
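A minimal way to check the Poisson character of such event packets, assuming the event arrival times have already been extracted from the trace, is sketched below; the synthetic rate, concentration, and observation window are illustrative assumptions rather than measured values.

```python
import numpy as np
from scipy import stats

def poisson_check(event_times_s, concentration_aM):
    """Estimate the mean event rate and test whether inter-arrival times
    are consistent with an exponential distribution (Poisson arrivals)."""
    waits = np.diff(np.sort(event_times_s))
    rate_per_s = 1.0 / waits.mean()                 # events per second
    rate_per_aM_s = rate_per_s / concentration_aM   # events per aM per second
    # Kolmogorov-Smirnov test against an exponential with the fitted mean
    ks_stat, p_value = stats.kstest(waits, "expon", args=(0, waits.mean()))
    return rate_per_aM_s, p_value

# Synthetic example: 0.01 events per aM per second at 500 aM over two minutes
rng = np.random.default_rng(1)
true_rate = 0.01 * 500                              # events per second
arrivals = np.cumsum(rng.exponential(1.0 / true_rate, size=200))
rate, p = poisson_check(arrivals[arrivals < 120], concentration_aM=500)
print(f"fitted rate = {rate:.4f} aM^-1 s^-1, KS p-value = {p:.2f}")
```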
Fig. 4: Cysteamine pre-functionalisation and disulfide events from converging dl-cysteine.
a Binding of cysteamine to Au NRs via amine in basic buffer. b Linewidth fluctuations induced by racemic dl-cysteine interacting with immobilised cysteamine thiolates at \({\mathrm{pKa}}_{{\mathrm{SH}}}\) < pH < \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\). TCEP reducing agent is employed here to counteract cysteine oxidation/dimerisation. c Linewidth shift \({\mathrm{\Delta }}\kappa\) and event time duration \({\mathrm{\Delta }}\tau\) histograms extracted from the resonance trace of (b). The mean event rate of the Poisson distributions passed through an inflection point, decreasing from 0.01 aM−1 s−1 to 0.003 aM−1 s−1 within an 8-min interval as the diffusing cysteines were captured. d dl-cysteine and bound cysteamine transiently interacting via their thiol groups.
For proof of principle, we increased the environmental pH to 11.09 and raised the analyte concentration. In this regime we expected sustained reversible disulfide reactions with defined signal states in the resonance trace. The neutral amines of the highly anionic cysteamine and l-cysteine indeed result in binding/unbinding state transitions as in Fig. 5a, with clear linewidth broadening and narrowing steps of roughly equal mean height. The stability of the disulfide reactions is attributed to an order of magnitude rise in hydroxide ion concentration past the \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\) and the event rate is maintained by electrostatic trapping. Since TCEP continually cleaves bound dimers during redox cycling, the monomer or dimer state of the leaving group can also be identified (cf. Supplementary Fig. 4). This trial was repeated in Fig. 5b for a larger molecule, 5,5′-dithiobis-(2-nitrobenzoic acid) (DTNB/Ellman's reagent), which readily underwent disulfide exchange with bound cysteamine linkers. In all cases, reducing agent concentration was adjusted until switching signals in the linewidth were observed. Resolvable dwell times and hence steady diffusion of reducing agent to the detection site were found at high molar excess > 1000.
Fig. 5: Cyclical binding/unbinding and exchange interactions with single mixed disulfides.
a Real-time linewidth step oscillations in the LSP-WGM resonance trace from redox reactions involving individual cysteamine-l-cysteine disulfides at pH > \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\). These bridges are formed between cysteamine linkers and l-cysteine thiolates/disulfides (with neutral amines), then promptly cleaved by excess TCEP. b Linewidth patterns similar to a from individual cysteamine-TNB disulfides. Thiol-disulfide exchange may be triggered by DTNB dimers alone; however, cycling is ensured through reduction with TCEP. TNB has a benzene ring with carboxyl, thiol, and nitro groups. c Apparent resonant wavelength and linewidth signal steps, from thiol-disulfide exchange with DTNB and bound cysteamine, in a resolvable LSP-WGM doublet/split mode.
Some insight into oscillation patterns is provided by the mode split traces of Fig. 5c, the lineshapes for which are discernible when the coupling/scattering rate is larger than the cavity decay rate. The WGM eigenmode degeneracy is lifted here and the resonant wavelength traces for the high-energy and low-energy modes, respectively denoted as \(\lambda _ +\) and \(\lambda _ -\), disclose two separate binding events in time. Such divergence comes from perturbations of two distinct gold NRs lying at different spatial locations along the standing wave formed by counterpropagating WGMs. One NP is excited near a node of a constituent mode and the second lies near an antinode, and then the situation inverts for the other constituent mode. Information in the split mode resonance wavelengths is encoded in the linewidth trace during single peak tracking; a shortcoming that, if corrected by available splitting, offers more robust molecular analysis by correlation to split mode properties and further detection site discrimination. Anomalous linewidth signatures of Fig. 5a, b that exclude resonance wavelength shifting are, however, only superficially explained via mode splitting. In order for the resonant wavelength to stay constant and relative mode splitting to be a contributing factor, either \({\mathrm{\Delta }}\lambda _ +\) = \(- {\mathrm{\Delta }}\lambda _ -\) or the transmission dip depth must oscillate—two features that we have not detected in our recorded split mode traces. For the former to hold true, any heterodyne beat note tied to frequency splitting would have to stably oscillate between two beat frequencies. It is instead conceivable that the combination of LSP-WGM resonance energy invariance and lifetime variance implicates a relationship between the LSP resonance and molecular vibrational modes46. A transition between bound vibrational states that are close in energy and reside in two continuums is possible. With shifts in the electronic resonance-dependent Raman cross-section upon chemical reaction and/or charge transfer, the Raman tensor and hence the optomechanical coupling rate may be decipherable. In this way the charge state of bound cysteamine linkers and their disulfide linkages can influence the optoplasmonic sensor response to grant molecular charge sensitivity47.
Experimental results were presented for single aminothiols binding to gold nanoantennae of an optoplasmonic sensor system at subfemtomolar concentrations. We leveraged these aminothiol linkers (i.e. cysteamine) by way of reaction of their amine groups with gold, followed by repeatable disulfide interactions between the linkers and diffusing thiolates/disulfides incorporating TCEP reducing agent as a counterbalancing reagent. The thiol-functionalisation of gold was reinforced by negative controls performed with thiolated molecules in an equivalent sensor configuration. Statistical analysis of signal patterns at 100's of attomolar concentration revealed finite single-molecule detection due to removal from external adsorption, ligands, or other forms of capture. This recent advance is in part guided by selection of low-complexity analytes and saturation of environmental conditions to suppress Debye screening. Signatures in the linewidth traces were championed throughout our measurements as they were shown to contain leaving group information imprinted onto LSP-WGM resonance perturbations.
Despite the existence of identifiable disulfide interactions from DTNB, d-cysteine, and l-cysteine, a comprehensive theory to describe the underlying optoplasmonic detection mechanism has yet to emerge. Nonetheless, the dwell times and statistical inferences of cyclical single-molecule interactions in this work remain critical in circumventing site heterogeneity and characterising surface-bound thiolates and disulfides. Reactions near the nanoantennae hotspots have demonstrably lower degrees of freedom via spatial constraints and redox cycling. We foresee future refinements to the temporal resolution by locking the laser frequency to the WGM resonance. Our disulfide quantification paradigm ultimately opens avenues for charge transfer observation, including direct implementation of all sensing channels towards pinpointing single molecules and unravelling their nanochemistry.
Sample and microsphere preparation
Chemicals were purchased from Sigma-Aldrich and Thermo Scientific. The principal solvent in which analytes were dissolved was ultrapure water delivered from a Merck Q-POD dispenser. Solutions without NRs were passed through a 0.2 µm Minisart syringe filter and dilutions were performed with Gilson P2L, P20L, and P1000L pipettes. Each microspherical cavity was reflowed from a Corning SMF-28 Ultra, single-mode telecommunications fibre by CO2 laser light absorption. Surface tension during heating yielded a circularly symmetric cavity structure with a smooth dielectric interface. Mechanical stabilisation of the suspended microcavity was provided by prior insertion into a Thorlabs CF126-10 ceramic ferrule, which was then secured to an aluminium holder fixed to a three-axis translation stage. The diffusion-limited sample volume of 300–500 µL was enclosed by a glass window, N-SF 11 prism face, and sandwiched polydimethylsiloxane (PDMS) basin.
Surface chemistry protocol
Once the cavity was submerged in aqueous solution and a coupling geometry was found via alignment, cetyltrimethylammonium bromide-coated gold NRs (diameter = 10 nm, length = 24 nm, and LSPR wavelength = 650 nm) from Nanopartz were deposited onto the microcavity surface. A desirable linewidth change \({\mathrm{\Delta }}\kappa\) accumulated during deposition was roughly 40–60 fm. Microsphere surface functionalisation and passivation are further detailed in Supplementary Methods 2. All aminothiol linkers were bound to the gold NRs in sodium carbonate-bicarbonate buffer at a pH above 10.75 with 1 M of sodium chloride ions. Additionally, washing steps were interspersed throughout each experiment to expel extraneous adsorbents.
Resonance tracking
In experiment, the whispering-gallery mode resonance extremum of our sensor is monitored using a bespoke centroid method41,48
$${\mathrm{First}}\,{\mathrm{Moment}} = \frac{{\mathop {\sum}\nolimits_{i = 1}^n {i[T_{{\mathrm{Threshold}}} - T(i)]} }}{{\mathop {\sum}\nolimits_{i = 1}^n {[T_{{\mathrm{Threshold}}} - T(i)]} }},$$
where \(T_{{\mathrm{Threshold}}}\) is the fixed transmission threshold and \(n\) is the number of points defined to be in the resonant mode. The external cavity laser is swept linearly across an ~8.5 pm wavelength range as driven by a triangular scan waveform, wherein hysteresis is averted by selective recording of the upscan. The transmission spectra are acquired with a sampling rate of 2.5 MHz and bit depth of 14. Given that laser diode emission intensity differs over the wavelength scan, 200 spectra are first averaged prior to coupling. Flattening of the spectrum is then executed and a fixed transmission threshold for peak detection is set. A resonance dip is only recognised if it falls below the transmission threshold and its width exceeds a successive point minimum. If these conditions are satisfied, the time trace of the computed lineshape position and width can be visualised in real time and stored for post-analysis. Many noise sources in the frequency domain are also taken into account during our measurements, e.g. temperature drift (i.e. thermorefractivity and thermoelasticity), mechanical vibrations, laser mode hopping, and nanorod displacement.
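A minimal implementation of this first-moment estimator on a simulated Lorentzian dip is sketched below (illustrative only, not the LabVIEW routine used for acquisition; the scan range, dip depth, linewidth, and noise level are assumed values).

```python
import numpy as np

def centroid_index(transmission, threshold):
    """First-moment (centroid) estimate of the resonance position, using only
    the points of the transmission dip that fall below the threshold."""
    i = np.arange(1, transmission.size + 1)
    w = threshold - transmission
    in_mode = w > 0
    return np.sum(i[in_mode] * w[in_mode]) / np.sum(w[in_mode])

# Simulated Lorentzian dip across an ~8.5 pm scan
wavelength = np.linspace(-4.25, 4.25, 2000)            # pm, relative to scan centre
kappa = 0.5                                            # linewidth (pm)
true_centre = 0.73                                     # pm
transmission = 1.0 - 0.6 / (1.0 + ((wavelength - true_centre) / (kappa / 2)) ** 2)
transmission += np.random.default_rng(2).normal(0, 0.005, wavelength.size)

idx = centroid_index(transmission, threshold=0.9)
estimate = np.interp(idx, np.arange(1, wavelength.size + 1), wavelength)
print(f"true centre {true_centre:.3f} pm, centroid estimate {estimate:.3f} pm")
```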
The data that support the findings of this study are available from the corresponding author upon reasonable request.
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Elson, E. L. Fluorescence correlation spectroscopy: past, present, future. Biophys. J. 101, 2855–2870 (2011).
Lerner, E., Cordes, T., Ingargiola, A., Alhadid, Y., Chung, S., Michalet, X. & Weiss, S., Toward dynamic structural biology: Two decades of single-molecule Förster resonance energy transfer. Science 359, https://doi.org/10.1126/science.aan1133 (2018).
Hillmering, M., Pardon, G., Vastesson, A., Supekar, O., Carlborg, C. F., Brandner, B. D., van der Wijngaart, W. & Haraldsson, T. Off-stoichiometry improves the photostructuring of thiol–enes through diffusion-induced monomer depletion. Microsyst. Nanoeng. 2, https://doi.org/10.1038/micronano.2015.43 (2016).
McBride, M. K., Martinez, A. M., Cox, L., Alim, M., Childress, K., Beiswinger, M., Podgorski, M., Worrell, B. T., Killgore, J. & Bowman, C. N. A readily programmable, fully reversible shape-switching material. Sci. Adv. 4, https://doi.org/10.1126/sciadv.aat4634 (2018).
Pulcu, G. S., Mikhailova, E., Choi, L.-S. & Bayley, H. Continuous observation of the stochastic motion of an individual small-molecule walker. Nat. Nanotechnol. 10, 76–83 (2014).
Kassem, S., van Leeuwen, T., Lubbe, A. S., Wilson, M. R., Feringa, B. L. & Leigh, D. A. Artificial molecular motors. Chem. Soc. Rev. 46, 2592–2621 (2017).
Pensa, E., Cortés, E., Corthey, G., Carro, P., Vericat, C., Fonticelli, M. H., Benı́tez, G., Rubert, A. A. & Salvarezza, R. C. The chemistry of the sulfur–gold interface: in search of a unified model. Acc. Chem. Res. 45, 1183–1192 (2012).
Zhou, C., Duan X. & Liu, N. A plasmonic nanorod that walks on DNA origami. Nat. Commun. 6, https://doi.org/10.1038/ncomms9102 (2015).
Betz, S. F. Disulfide bonds and the stability of globular proteins. Protein Sci. 2, 1551–1558 (1993).
Carl, P., Kwok, C. H., Manderson, G., Speicher, D. W. & Discher, D. E. Forced unfolding modulated by disulfide bonds in the Ig domains of a cell adhesion molecule. Proc. Natl Acad. Sci. 98, 1565–1570 (2001).
Song, J., Yuan, Z., Tan, H., Huber, T. & Burrage, K. Predicting disulfide connectivity from protein sequence using multiple sequence feature vectors and secondary structure. Bioinformatics 23, 3147–3154 (2007).
Winterbourn, C. C. & Hampton, M. B. Thiol chemistry and specificity in redox signaling. Free Radic. Biol. Med. 45, 549–561 (2008).
Fu, X., Cate, S. A., Dominguez, M., Osborn, W., Özpolat, T., Konkle, B. A., Chen, J. & López, J. A. Cysteine Disulfides (Cys-ss-X) as Sensitive Plasma Biomarkers of Oxidative Stress. Sci. Rep. 9, https://doi.org/10.1038/s41598-018-35566-2 (2019).
Winther, J. R. & Thorpe, C. Quantification of thiols and disulfides. Biochim. Biophys. Acta, Gen. Subj. 1840, 838–846 (2014).
Rahman, I., Kode, A. & Biswas, S. K. Assay for quantitative determination of glutathione and glutathione disulfide levels using enzymatic recycling method. Nat. Protoc. 1, 3159–3165 (2006).
Kneipp, K., Wang, Y., Kneipp, H., Perelman, L. T., Itzkan, I., Dasari, R. R. & Feld, M. S. Single molecule detection using surface-enhanced raman scattering (SERS). Phys. Rev. Lett. 78, 1667–1670 (1997).
Nie, S. & Emory, S. R. Probing single molecules and single nanoparticles by surface-enhanced Raman scattering. Science 275, 1102–1106 (1997).
Zijlstra, P., Paulo, P. M. R. & Orrit, M. Optical detection of single non-absorbing molecules using the surface plasmon resonance of a gold nanorod. Nat. Nanotechnol. 7, 379–382 (2012).
Gross, L., Mohn, F., Moll, N., Liljeroth, P. & Meyer, G. The chemical structure of a molecule resolved by atomic force microscopy. Science 325, 1110–1114 (2009).
Hanay, M. S., Kelber, S., Naik, A. K., Chi, D., Hentz, S., Bullard, E. C., Colinet, E., Duraffourg, L. & Roukes, M. L. Single-protein nanomechanical mass spectrometry in real time. Nat. Nanotechnol. 7, 602–608 (2012).
Ndieyira, J. W., Kappeler, N., Logan, S., Cooper, M. A., Abell, C., McKendry, R. A. & Aeppli, G. Surface-stress sensors for rapid and ultrasensitive detection of active free drugs in human serum. Nat. Nanotechnol. 9, 225–232 (2014).
Xu, B. & Tao, N. J. Measurement of single-molecule resistance by repeated formation of molecular junctions. Science 301, 1221–1223 (2003).
Garaj, S., Hubbard, W., Reina, A., Kong, J., Branton, D. & Golovchenko, J. A. Graphene as a subnanometre trans-electrode membrane. Nature 467, 190–193 (2010).
Sorgenfrei, S., Chiu, C.-y, Gonzalez, R. L. Jr., Yu, Y.-J., Kim, P., Nuckolls, C. & Shepard, K. L. Label-free single-molecule detection of DNA-hybridization kinetics with a carbon nanotube field-effect transistor. Nat. Nanotechnol. 6, 126–132 (2011).
Cui, L., Hur, S., Akbar, Z. A., Klöckner, J. C., Jeong, W., Pauly, F., Jang, S.-Y., Reddy, P. & Meyhofer, E. Thermal conductance of single-molecule junctions. Nature 572, 628–633 (2019).
Baaske, M. D., Foreman, M. R. & Vollmer, F. Single-molecule nucleic acid interactions monitored on a label-free microcavity biosensor platform. Nat. Nanotechnol. 9, 933–939 (2014).
Foreman, M. R. & Vollmer, F. Theory of resonance shifts of whispering gallery modes by arbitrary plasmonic nanoparticles. New J. Phys. 15, https://doi.org/10.1088/1367-2630/15/8/083006 (2013).
Foreman, M. R. & Vollmer, F. Level repulsion in hybrid photonic-plasmonic microresonators for enhanced biodetection. Phys. Rev. A 88, https://doi.org/10.1103/PhysRevA.88.023831 (2013).
Klusmann, C., Suryadharma, R. N. S., Oppermann, J., Rockstuhl, C. & Kalt, H. Hybridizing whispering gallery modes and plasmonic resonances in a photonic metadevice for biosensing applications [Invited]. J. Opt. Soc. Am. B 34, D46–D55 (2017).
Zhu, J., Ozdemir, S. K., Xiao, Y.-F., Li, L., He, L., Chen, D.-R. & Yang, L. On-chip single nanoparticle detection and sizing by mode splitting in an ultrahigh-Q microresonator. Nat. Photonics 4, 46–49 (2009).
Shao, L., Jiang, X.-F., Yu, X.-C., Li, B.-B., Clements, W. R., Vollmer, F., Wang, W., Xiao, Y.-F. & Gong, Q. Detection of single nanoparticles and lentiviruses using microcavity resonance broadening. Adv. Mater. 25, 5616–5620 (2013).
Lu, T., Su, T.-T. J., Vahala, K. J. & Fraser, S. E. Split frequency sensing methods and systems. US Patent 8593638 (2013).
Kim, E., Baaske, M. D. & Vollmer, F. In situ observation of single-molecule surface reactions from low to high affinities. Adv. Mater. 28, 9941–9948 (2016).
Leff, D. V., Brandt, L. & Heath, J. R. Synthesis and characterization of hydrophobic, organically-soluble gold nanocrystals functionalized with primary amines. Langmuir 12, 4723–4730 (1996).
Pong, B.-K., Lee, J.-Y. & Trout, B. L. First principles computational study for understanding the interactions between ssDNA and gold nanoparticles: adsorption of methylamine on gold nanoparticulate surfaces. Langmuir 21, 11599–11603 (2005).
Venkataraman, L., Klare, J. E., Tam, I. W., Nuckolls, C., Hybertsen, M. S. & Steigerwald, M. L. Single-molecule circuits with well-defined molecular conductance. Nano Lett. 6, 458–462 (2006).
Kim, Y., Hellmuth, T. J., Bürkle, M., Pauly, F. & Scheer, E. Characteristics of amine-ended and thiol-ended alkane single-molecule junctions revealed by inelastic electron tunneling spectroscopy. ACS Nano 5, 4104–4111 (2011).
Xie, H.-J., Lei, Q.-F. & Fang, W.-J. Intermolecular interactions between gold clusters and selected amino acids cysteine and glycine: a DFT study. J. Mol. Model. 18, 645–652 (2011).
Heylman, K. D., Thakkar, N., Horak, E. H., Quillin, S. C., Cherqui, C., Knapper, K. A., Masiello, D. J. & Goldsmith, R. H. Optical microresonators as single-particle absorption spectrometers. Nat. Photonics 10, 788–795 (2016).
Nelson, J. W. & Creighton, T. E. Reactivity and ionization of the active site cysteine residues of DsbA, a protein required for disulfide bond formation in vivo. Biochemistry 33, 5974–5983 (1994).
Baaske, M. D. & Vollmer, F. Optical observation of single atomic ions interacting with plasmonic nanorods in aqueous solution. Nat. Photonics 10, 733–739 (2016).
O'Neil, M. J. The Merck Index, 15th edn (Royal Society of Chemistry, Cambridge, 2013).
Serjeant, E. P. & Dempsey, B. Ionisation Constants of Organic Acids in Aqueous Solution (Pergamon Press, Oxford/New York, 1979).
Arnold, S., Khoshsima, M., Teraoka, I., Holler, S. & Vollmer, F. Shift of whispering-gallery modes in microspheres by protein adsorption. Opt. Lett. 28, 272–274 (2003).
Jin, W. & Chen, H. A new method of determination of diffusion coefficients using capillary zone electrophoresis (peak-height method). Chromatographia 52, 17–21 (2000).
Roelli, P., Galland, C., Piro, N. & Kippenberg, T. J. Molecular cavity optomechanics as a theory of plasmon-enhanced Raman scattering. Nat. Nanotechnol. 11, 164–169 (2015).
Mauranyapin, N. P., Madsen, L. S., Taylor, M. A., Waleed, M. & Bowen, W. P. Evanescent single-molecule biosensing with quantum-limited precision. Nat. Photonics 11, 477–481 (2017).
Kukanskis, K., Elkind, J., Melendez, J., Murphy, T., Miller, G. & Garner, H. Detection of DNA Hybridization Using the TISPR-1 Surface Plasmon Resonance Biosensor. Anal. Biochem. 274, 7–17 (1999).
The authors acknowledge funding from the University of Exeter, the Engineering and Physical Sciences Research Council (Ref. EP/R031428/1), and from the European Research Council under an H2020-FET open grant (ULTRACHIRAL, ID: 737071). Spectral data was acquired and step signals were evaluated using LabVIEW software developed by M.D. Baaske.
Living Systems Institute, School of Physics, University of Exeter, Exeter, EX4 4QD, UK
Serge Vincent, Sivaraman Subramanian & Frank Vollmer
S.V. designed and performed the experiments, completed the data analysis, and composed the manuscript. S.S. wrote the MATLAB application for transient signal analysis, while F.V. supervised the project and revised the manuscript. All authors discussed and interpreted the results.
Correspondence to Serge Vincent or Frank Vollmer.
The authors declare no competing interests.
Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this manuscript. Peer review reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Peer Review File
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Vincent, S., Subramanian, S. & Vollmer, F. Optoplasmonic characterisation of reversible disulfide interactions at single thiol sites in the attomolar regime. Nat Commun 11, 2043 (2020). https://doi.org/10.1038/s41467-020-15822-8
This article is cited by:

Biosensors and Diagnostics for Fungal Detection. Khalil K. Hussain, Dhara Malavia, Elizabeth M. Johnson, Jennifer Littlechild, C. Peter Winlove, Frank Vollmer & Neil A. R. Gow. Journal of Fungi (2020).

Effective linewidth shifts in single-molecule detection using optical whispering gallery modes. Applied Physics Letters (2020).

Opto-fluidic-plasmonic liquid-metal core microcavity. Qijing Lu, Xiaogang Chen, Xianlin Liu, Junqiang Guo, Shusen Xie, Xiang Wu, Chang-Ling Zou & Chun-Hua Dong.
Thread: TRAPPIST-1 - so much data, it all goes here
2018-Aug-31, 03:03 PM #1
Roger E. Moore
TRAPPIST-1, the only known planetary system with seven terrestrial worlds, packed together like eggs in a box around a red dwarf star. Latest information, more to come no doubt.....
Cometary impactors on the TRAPPIST-1 planets can destroy all planetary atmospheres and rebuild secondary atmospheres on planets f, g, h
Quentin Kral, Mark C. Wyatt, Amaury H.M.J. Triaud, Sebastian Marino, Philippe Thebault, Oliver Shorttle
(Submitted on 14 Feb 2018 (v1), last revised 3 Jul 2018 (this version, v2))
The TRAPPIST-1 system is unique in that it has a chain of seven terrestrial Earth-like planets located close to or in its habitable zone. In this paper, we study the effect of potential cometary impacts on the TRAPPIST-1 planets and how they would affect the primordial atmospheres of these planets. We consider both atmospheric mass loss and volatile delivery with a view to assessing whether any sort of life has a chance to develop. We ran N-body simulations to investigate the orbital evolution of potential impacting comets, to determine which planets are more likely to be impacted and the distributions of impact velocities. We consider three scenarios that could potentially throw comets into the inner region (i.e., within 0.1 au where the seven planets are located) from an (as yet undetected) outer belt similar to the Kuiper belt or an Oort cloud: Planet scattering, the Kozai-Lidov mechanism and Galactic tides. For the different scenarios, we quantify, for each planet, how much atmospheric mass is lost and what mass of volatiles can be delivered over the age of the system depending on the mass scattered out of the outer belt. We find that the resulting high velocity impacts can easily destroy the primordial atmospheres of all seven planets, even if the mass scattered from the outer belt is as low as that of the Kuiper belt. However, we find that the atmospheres of the outermost planets f, g and h can also easily be replenished with cometary volatiles (e.g. ∼ an Earth ocean mass of water could be delivered). These scenarios would thus imply that the atmospheres of these outermost planets could be more massive than those of the innermost planets, and have volatiles-enriched composition.
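Quick back-of-the-envelope check (my own numbers, not from the paper): a comet falling in from a distant belt hits a planet this close to the star at roughly the local escape speed from the star, which is why these impacts are so erosive. The stellar mass, semi-major axis, and planetary escape speed below are assumed round values.

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
AU = 1.496e11          # m

def impact_speed(a_au, m_star_msun, v_esc_planet_kms):
    """Crude impact-speed estimate: a comet dropped from far away reaches the
    local escape speed from the star, then gains the planet's escape speed."""
    v_star = np.sqrt(2 * G * m_star_msun * M_SUN / (a_au * AU))    # m/s
    return np.sqrt(v_star**2 + (v_esc_planet_kms * 1e3)**2) / 1e3  # km/s

# Assumed values: ~0.09 solar-mass star, planet b at ~0.011 au,
# Earth-like planetary escape speed of ~11 km/s
print(f"impact speed at planet b ~ {impact_speed(0.011, 0.09, 11.0):.0f} km/s")
```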
Detectability of biosignatures in anoxic atmospheres with the James Webb Space Telescope: A TRAPPIST-1e case study
Joshua Krissansen-Totton, Ryan Garland, Patrick Irwin, David C. Catling
(Submitted on 25 Aug 2018)
The James Webb Space Telescope (JWST) may be capable of finding biogenic gases in the atmospheres of habitable exoplanets around low mass stars. Considerable attention has been given to the detectability of biogenic oxygen, which could be found using an ozone proxy, but ozone detection with JWST will be extremely challenging, even for the most favorable targets. Here, we investigate the detectability of biosignatures in anoxic atmospheres analogous to those that likely existed on the early Earth. Arguably, such anoxic biosignatures could be more prevalent than oxygen biosignatures if life exists elsewhere. Specifically, we simulate JWST retrievals of TRAPPIST-1e to determine whether the methane plus carbon dioxide disequilibrium biosignature pair is detectable in transit transmission. We find that ~10 transits using the Near InfraRed Spectrograph (NIRSpec) prism instrument may be sufficient to detect carbon dioxide and constrain methane abundances sufficiently well to rule out known, non-biological CH4 production scenarios to ~90% confidence. Furthermore, it might be possible to put an upper limit on carbon monoxide abundances that would help rule out non-biological methane-production scenarios, assuming the surface biosphere would efficiently draw down atmospheric CO. Our results are relatively insensitive to high altitude clouds and instrument noise floor assumptions, although stellar heterogeneity and variability may present challenges.
Predicting the Orbit of TRAPPIST-1i
David Kipping
(Submitted on 27 Jul 2018)
The TRAPPIST-1 system provides an exquisite laboratory for advancing our understanding of exoplanetary atmospheres, compositions, dynamics and architectures. A remarkable aspect of TRAPPIST-1 is that it represents the longest known resonance chain, where all seven planets share near mean motion resonances with their neighbors. Prior to the measurement of 1h's period, Luger et al. (2017) showed that six possible and highly precise periods for 1h were expected, assuming it also participated in the resonant chain. We show here that combining this argument with a Titius-Bode law fit of the inner six worlds narrows the choices down to a single precise postdiction for 1h's period, which is ultimately the correct period. But a successful postdiction is never as convincing as a successful prediction, and so we take the next step and apply this argument to a hypothetical TRAPPIST-1i. We find two possible periods predicted by this argument, either 25.345 or 28.699 days. If successful, this may provide the basis for planet prediction in compact resonant chain systems. If falsified, this would indicate that the argument lacks true predictive power and may not be worthwhile pursuing further in our efforts to build predictive models for planetary systems.
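For anyone who wants to see the resonance chain the postdiction leans on, here is a quick script using commonly quoted TRAPPIST-1 periods (rounded, and taken as assumptions rather than from this paper) to print adjacent period ratios next to the nearest small-integer commensurability.

```python
from fractions import Fraction

# Commonly quoted orbital periods in days (assumed, rounded)
periods = {"b": 1.511, "c": 2.422, "d": 4.049, "e": 6.099,
           "f": 9.206, "g": 12.353, "h": 18.767}

names = list(periods)
for inner, outer in zip(names, names[1:]):
    ratio = periods[outer] / periods[inner]
    # Nearest small-integer commensurability (denominator capped at 8)
    frac = Fraction(ratio).limit_denominator(8)
    print(f"P_{outer}/P_{inner} = {ratio:.3f}  ~  {frac.numerator}:{frac.denominator}")
```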
Interior characterization in multiplanetary systems: TRAPPIST-1
Caroline Dorn, Klaus Mosegaard, Simon L Grimm, Yann Alibert
(Submitted on 6 Aug 2018)
Interior characterization traditionally relies on individual planetary properties, ignoring correlations between different planets of the same system. For multi-planetary systems, planetary data are generally correlated. This is because the differential masses and radii are better constrained than absolute planetary masses and radii. We explore such correlations and data specific to the multiplanetary-system of TRAPPIST-1 and study their value for our understanding of planet interiors. Furthermore, we demonstrate that the rocky interior of planets in a multi-planetary system can be preferentially probed by studying the most dense planet representing a rocky interior analogue. Our methodology includes a Bayesian inference analysis that uses a Markov chain Monte Carlo scheme. Our interior estimates account for the anticipated variability in the compositions and layer thicknesses of core, mantle, water oceans and ice layers, and a gas envelope. Our results show that (1) interior estimates significantly depend on available abundance proxies and (2) the importance of inter-dependent planetary data for interior characterization is comparable to changes in data precision by 30 %. For the interiors of TRAPPIST-1 planets, we find that possible water mass fractions generally range from 0-25 %. The lack of a clear trend of water budgets with orbital period or planet mass challenges possible formation scenarios. While our estimates change relatively little with data precision, they critically depend on data accuracy. If planetary masses varied within ~24 %, interiors would be consistent with uniform (~7 %) or increasing water mass fractions with orbital period (~2-12 %).
Non-detection of Contamination by Stellar Activity in the Spitzer Transit Light Curves of TRAPPIST-1
Brett M. Morris, Eric Agol, Leslie Hebb, Suzanne L. Hawley, Michaël Gillon, Elsa Ducrot, Laetitia Delrez, James Ingalls, Brice-Olivier Demory
We apply the transit light curve self-contamination technique of Morris et al. (2018) to search for the effect of stellar activity on the transits of the ultracool dwarf TRAPPIST-1 with 2018 Spitzer photometry. The self-contamination method fits the transit light curves of planets orbiting spotted stars, allowing the host star to be a source of contaminating positive or negative flux which influences the transit depths but not the ingress/egress durations. We find that none of the planets show statistically significant evidence for self-contamination by bright or dark regions of the stellar photosphere. However, we show that small-scale magnetic activity, analogous in size to the smallest sunspots, could still be lurking in the transit photometry undetected.
Updated Compositional Models of the TRAPPIST-1 Planets
Cayman T. Unterborn, Natalie R. Hinkel, Steven J. Desch
(Submitted on 26 Jun 2018)
After publication of our initial mass-radius-composition models for the TRAPPIST-1 system in Unterborn et al. (2018), the planet masses were updated in Grimm et al. (2018). We had originally adopted the data set of Wang et al., 2017 who reported different densities than the updated values. The differences in observed density change the inferred volatile content of the planets. Grimm et al. (2018) report TRAPPIST-1 b, d, f, g, and h as being consistent with <5 wt% water and TRAPPIST-1 c and e as having largely rocky interiors. Here, we present updated results recalculating water fractions and potential alternative compositions using the Grimm et al., 2018 masses. Overall, we can only reproduce the results of Grimm et al., 2018 of planets b, d and g having small water contents if the cores of these planets are small (<23 wt%). We show that, if the cores for these planets are roughly Earth-sized (33 wt%), significant water fractions up to 40 wt% are possible. We show planets c, e, f, and h can have volatile envelopes between 0-35 wt% that are also consistent with being totally oxidized and lacking an Fe-core entirely. We note here that a pure MgSiO3 planet (Fe/Mg = 0) is not the true lowest density end-member mass-radius curve for determining the probability of a planet containing volatiles. All planets that are rocky likely contain some Fe, either within the core or oxidized in the mantle. We argue the true low density end-member for oxidizing systems is instead a planet with the lowest reasonable Fe/Mg and completely core-less. Using this logic, we assert that planets b, d and g likely must have significant volatile layers because the end-member planet models produce masses too high even when uncertainties in both mass and radius are taken into account.
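The arithmetic underneath these mass-radius arguments is just the bulk density; a minimal sketch with purely illustrative masses and radii (not the published values) is below.

```python
import numpy as np

M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def bulk_density(mass_mearth, radius_rearth):
    """Bulk density in g/cm^3 from mass and radius in Earth units."""
    m = mass_mearth * M_EARTH
    r = radius_rearth * R_EARTH
    return m / (4.0 / 3.0 * np.pi * r**3) / 1000.0

# Illustrative values only (replace with published masses/radii as needed)
for name, m, r in [("example planet 1", 1.02, 0.92), ("example planet 2", 0.77, 1.05)]:
    rho = bulk_density(m, r)
    print(f"{name}: {rho:.2f} g/cm^3 ({rho / bulk_density(1.0, 1.0):.2f} x Earth)")
```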
There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
— Mark Twain, Life on the Mississippi (1883)
A few more recent articles of importance....
Dimensionality and integrals of motion of the Trappist-1 planetary system
Johannes Floß, Hanno Rein, Paul Brumer
(Submitted on 22 Feb 2018 (v1), last revised 18 Apr 2018 (this version, v2))
The number of isolating integrals of motion of the Trappist-1 system - a late M-dwarf orbited by seven Earth-sized planets - was determined numerically, using an adapted version of the correlation dimension method. It was found that over the investigated time-scales of up to 20 000 years the number of isolating integrals of motion is the same as one would find for a system of seven non-interacting planets - despite the fact that the planets in the Trappist-1 system are strongly interacting. Considering perturbed versions of the Trappist-1 system shows that the system may occupy an atypical part of phase-space with high stability. These findings are consistent with earlier studies.
The Impact of Stellar Distances on Habitable Zone Planets
Stephen R. Kane
(Submitted on 1 Jul 2018 (v1), last revised 11 Jul 2018 (this version, v2))
Among the most highly valued of exoplanetary discoveries are those of terrestrial planets found to reside within the Habitable Zone (HZ) of the host star. In particular, those HZ planets with relatively bright host stars will serve as priority targets for characterization observations, such as those involving mass determinations, transmission spectroscopy, and direct imaging. The properties of the star are greatly affected by the distance measurement to the star, and subsequent changes to the luminosity result in revisions to the extent of the HZ and the properties of the planet. This is particularly relevant in the realm of Gaia which has released updated stellar parallaxes for the known exoplanet host stars. Here we provide a generalized formulation of the effect of distance on planetary system properties, including the HZ. We apply this methodology to three known systems and show that the recent Gaia Data Release 2 distances have a modest effect for TRAPPIST-1 but a relatively severe effect for Kepler-186 and LHS 1140.
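The scaling Kane describes is compact enough to sketch directly: at fixed apparent flux the inferred luminosity goes as distance squared, and the HZ boundaries go as the square root of luminosity. The distances in the example are placeholders, not values from the paper.

```python
def hz_rescaling(d_old_pc, d_new_pc):
    """Fractional changes in inferred luminosity and HZ distances when a
    parallax revision changes the adopted stellar distance."""
    lum_factor = (d_new_pc / d_old_pc) ** 2     # L proportional to d^2 at fixed flux
    hz_factor = lum_factor ** 0.5               # d_HZ proportional to sqrt(L)
    return lum_factor, hz_factor

# Placeholder example: a 5% increase in the adopted distance
L_fac, HZ_fac = hz_rescaling(12.0, 12.6)
print(f"luminosity x{L_fac:.3f}, HZ boundaries x{HZ_fac:.3f}")
```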
https://link.springer.com/article/10...63772918060033
Activity of the M8 Dwarf TRAPPIST-1
Dmitrienko, E. S.; Savanov, I. S.
The results of an analysis of observations of the cool (M8) dwarf TRAPPIST-1 obtained on the Kepler Space Telescope (the K2 continuation mission) are presented. TRAPPIST-1 possesses a planetary system containing at least seven planets. In all, the observations consist of 105 584 individual brightness measurements made over a total duration of 79 days. Brightness power spectra computed for TRAPPIST-1 exhibit a peak corresponding to P0 = 3.296 ± 0.007 d. There are also two peaks with lower significances at P1 = 2.908 d and P2 = 2.869 d, which cannot be explained by the presence of differential rotation. The observational material available for TRAPPIST-1 is subdivided into 21 datasets, each covering one stellar rotation period. Each of the individual light curves was used to construct a map of the star's temperature inhomogeneities. On average, the total spotted area of TRAPPIST-1 was S = 5% of the entire visible area. The difference between the angular rotation rates at the equator and at the pole is estimated to be ΔΩ = 0.006. The new results obtained together with data from the literature are used to investigate the properties of this unique star and compare them to the properties of other cool dwarfs. Special attention is paid to the star's evolutionary status (its age). All age estimates for TRAPPIST-1 based on its activity characteristics (rotation, spot coverage, UV and X-ray flux, etc.) indicate that the star is young.
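A periodogram of the sort used to pull out the 3.3-day spot-modulation period can be sketched with scipy's Lomb-Scargle routine; the synthetic light curve below (amplitude, noise level, 30-minute cadence) is purely illustrative.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic K2-like light curve: 79 days at ~30-minute cadence
rng = np.random.default_rng(3)
t = np.arange(0.0, 79.0, 0.0204)                   # days
p_true = 3.296                                     # days (rotation period from the text)
flux = 1.0 + 0.005 * np.sin(2 * np.pi * t / p_true) + rng.normal(0, 0.002, t.size)

periods = np.linspace(1.0, 10.0, 5000)             # trial periods in days
ang_freqs = 2 * np.pi / periods                    # lombscargle expects angular frequencies
power = lombscargle(t, flux - flux.mean(), ang_freqs)
print(f"best period ~ {periods[np.argmax(power)]:.3f} d")
```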
http://iopscience.iop.org/article/10...357/aac104/pdf
The Productivity of Oxygenic Photosynthesis around Cool, M Dwarf Stars
Lehmer, Owen R.; Catling, David C.; Parenteau, Mary N.; Hoehler, Tori M.
In the search for life around cool stars, the presence of atmospheric oxygen is a prominent biosignature, as it may indicate oxygenic photosynthesis (OP) on the planetary surface. On Earth, most oxygenic photosynthesizing organisms (OPOs) use photons between 400 and 750 nm, which have sufficient energy to drive the photosynthetic reaction that generates O2 from H2O and CO2. OPOs around cool stars may evolve similar biological machinery capable of producing oxygen from water. However, in the habitable zones (HZs) of the coolest M dwarf stars, the flux of 400-750 nm photons may be just a few percent that of Earth's. We show that the reduced flux of 400-750 nm photons around M dwarf stars could result in Earth-like planets being growth limited by light, unlike the terrestrial biosphere, which is limited by nutrient availability. We consider stars with photospheric temperatures between 2300 and 4200 K and show that such light-limited worlds could occur at the outer edge of the HZ around TRAPPIST-1-like stars. We find that even if OP can use photons longer than 750 nm, there would still be insufficient energy to sustain the Earth's extant biosphere throughout the HZ of the coolest stars. This is because such stars emit largely in the infrared and near-infrared, which provide sufficient energy to make the planet habitable, but limits the energy available for OP. TRAPPIST-1f and g may fall into this category. Biospheres on such planets, potentially limited by photon availability, may generate small biogenic signals, which could be difficult for future observations to detect.
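The photon-limitation argument is easy to illustrate by integrating a blackbody photon spectrum for a cool photosphere versus the Sun; treating an M8 dwarf as a blackbody is only a rough assumption, so the numbers are indicative rather than the paper's.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def photon_fraction(T, lam_lo=400e-9, lam_hi=750e-9):
    """Fraction of blackbody photons (by number) emitted between lam_lo and lam_hi."""
    lam = np.linspace(100e-9, 10e-6, 200000)       # uniform wavelength grid (m)
    # Photon spectral radiance: Planck B_lambda divided by the photon energy hc/lambda
    n_lam = (2 * c / lam**4) / (np.exp(h * c / (lam * kB * T)) - 1.0)
    band = (lam >= lam_lo) & (lam <= lam_hi)
    # Uniform grid, so the wavelength step cancels in the ratio of sums
    return n_lam[band].sum() / n_lam.sum()

for name, T in [("Sun", 5772.0), ("TRAPPIST-1-like M8 dwarf", 2560.0)]:
    print(f"{name}: {100 * photon_fraction(T):.1f}% of photons emitted in 400-750 nm")
```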
http://iopscience.iop.org/article/10...881/aabee8/pdf
Dynamical Constraints on Nontransiting Planets Orbiting TRAPPIST-1
Jontof-Hutter, Daniel; Truong, Vinh H.; Ford, Eric B.; Robertson, Paul; Terrien, Ryan C.
We derive lower bounds on the orbital distance and inclination of a putative planet beyond the transiting seven planets of TRAPPIST-1, for a range of masses from 0.08 MJup to 3.5 MJup. While the outer architecture of this system will ultimately be constrained by radial velocity measurements over time, we present dynamical constraints from the remarkably coplanar configuration of the seven transiting planets, which is sensitive to modestly inclined perturbers. We find that the observed configuration is unlikely if a Jovian-mass planet inclined by ≥3° to the transiting planet exists within 0.53 au, exceeding any constraints from transit timing variations (TTV) induced in the known planets from an undetected perturber. Our results will inform RV programs targeting TRAPPIST-1, and for near coplanar outer planets, tighter constraints are anticipated for radial velocity (RV) precisions of ≲140 m s−1. At higher inclinations, putative planets are ruled out to greater orbital distances with orbital periods up to a few years.
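To connect the quoted ~140 m/s figure to planet mass and orbit, the standard RV semi-amplitude formula can be evaluated directly; the stellar mass and example orbit below are assumptions for illustration.

```python
import numpy as np

G = 6.674e-11
M_SUN = 1.989e30
M_JUP = 1.898e27
AU = 1.496e11

def rv_semi_amplitude(m_p_mjup, a_au, m_star_msun, inc_deg=90.0, ecc=0.0):
    """Stellar RV semi-amplitude K (m/s) for a planet of mass m_p on an orbit
    of semi-major axis a around a star of mass m_star."""
    m_p = m_p_mjup * M_JUP
    m_s = m_star_msun * M_SUN
    P = 2 * np.pi * np.sqrt((a_au * AU) ** 3 / (G * (m_s + m_p)))   # seconds
    K = ((2 * np.pi * G / P) ** (1 / 3)
         * m_p * np.sin(np.radians(inc_deg))
         / (m_s + m_p) ** (2 / 3)
         / np.sqrt(1 - ecc ** 2))
    return K

# Assumed: 1 M_Jup at 0.53 au around a 0.09 M_Sun star, edge-on, circular
print(f"K ~ {rv_semi_amplitude(1.0, 0.53, 0.09):.0f} m/s")
```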
The nature of the TRAPPIST-1 exoplanets
Simon L. Grimm, Brice-Olivier Demory, Michaël Gillon, Caroline Dorn, Eric Agol, Artem Burdanov, Laetitia Delrez, Marko Sestovic, Amaury H.M.J. Triaud, Martin Turbet, Émeline Bolmont, Anthony Caldas, Julien de Wit, Emmanuël Jehin, Jérémy Leconte, Sean N. Raymond, Valérie Van Grootel, Adam J. Burgasser, Sean Carey, Daniel Fabrycky, Kevin Heng, David M. Hernandez, James G. Ingalls, Susan Lederer, Franck Selsis, Didier Queloz
(Submitted on 5 Feb 2018)
The TRAPPIST-1 system hosts seven Earth-sized, temperate exoplanets orbiting an ultra-cool dwarf star. As such, it represents a remarkable setting to study the formation and evolution of terrestrial planets that formed in the same protoplanetary disk. While the sizes of the TRAPPIST-1 planets are all known to better than 5% precision, their densities have significant uncertainties (between 28% and 95%) because of poor constraints on the planets' masses. Aims. The goal of this paper is to improve our knowledge of the TRAPPIST-1 planetary masses and densities using transit-timing variations (TTV). The complexity of the TTV inversion problem is known to be particularly acute in multi-planetary systems (convergence issues, degeneracies and size of the parameter space), especially for resonant chain systems such as TRAPPIST-1. Methods. To overcome these challenges, we have used a novel method that employs a genetic algorithm coupled to a full N-body integrator that we applied to a set of 284 individual transit timings. This approach enables us to efficiently explore the parameter space and to derive reliable masses and densities from TTVs for all seven planets. Our new masses result in a five- to eight-fold improvement on the planetary density uncertainties, with precisions ranging from 5% to 12%. These updated values provide new insights into the bulk structure of the TRAPPIST-1 planets. We find that TRAPPIST-1c and -1e likely have largely rocky interiors, while planets b, d, f, g, and h require envelopes of volatiles in the form of thick atmospheres, oceans, or ice, in most cases with water mass fractions less than 5%.
TRAPPIST-1e Has a Large Iron Core
Gabrielle Suissa, David Kipping
(Submitted on 26 Apr 2018)
The TRAPPIST-1 system provides an exquisite laboratory for understanding exoplanetary atmospheres and interiors. The planets' mutual gravitational interactions lead to transit timing variations, from which Grimm et al. (2018) recently measured the planetary masses with precisions ranging from 5% to 12%. Using these masses and the <5% radius measurements on each planet, we apply the method described in Suissa et al. (2018) to infer the minimum and maximum CRF (core radius fraction) of each planet. Further, we modify the maximum limit to account for the fact that a light volatile envelope is excluded for planets b through f. Only planet e is found to have a significant probability of having a non-zero minimum CRF, with a 0.7% false-alarm probability that it has no core. Our method further allows us to measure the CRF of planet e to be greater than (49 +/- 7)% but less than (72 +/- 2)%, which is compatible with that of the Earth. TRAPPIST-1e therefore possesses a large iron core similar to the Earth's, in addition to being Earth-sized and located in the temperate zone.
Interior Structures and Tidal Heating in the TRAPPIST-1 Planets
Amy C. Barr, Vera Dobos, László L. Kiss
(Submitted on 15 Dec 2017 (v1), last revised 24 Jan 2018 (this version, v2))
With seven planets, the TRAPPIST-1 system has the largest number of exoplanets discovered in a single system so far. The system is of astrobiological interest, because three of its planets orbit in the habitable zone of the ultracool M dwarf. Assuming the planets are composed of non-compressible iron, rock, and H2O, we determine possible interior structures for each planet. To determine how much tidal heat may be dissipated within each planet, we construct a tidal heat generation model using a single uniform viscosity and rigidity for each planet based on the planet's composition. With the exception of TRAPPIST-1c, all seven of the planets have densities low enough to indicate the presence of significant H2O in some form. Planets b and c experience enough heating from planetary tides to maintain magma oceans in their rock mantles; planet c may have eruptions of silicate magma on its surface, which may be detectable with next-generation instrumentation. Tidal heat fluxes on planets d, e, and f are lower, but are still twenty times higher than Earth's mean heat flow. Planets d and e are the most likely to be habitable. Planet d avoids the runaway greenhouse state if its albedo is ≳ 0.3. Determining the planets' masses to within ∼0.1 to 0.5 Earth masses would confirm or rule out the presence of H2O and/or iron in each planet, and permit detailed models of heat production and transport in each planet. Understanding the geodynamics of ice-rich planets f, g, and h requires more sophisticated modeling that can self-consistently balance heat production and transport in both rock and ice layers.
A very nice, large image of the TRAPPIST-1 planets, as NASA envisions them. The caption with the image is:
This chart shows, on the top row, artist concepts of the seven planets of TRAPPIST-1 with their orbital periods, distances from their star, radii, masses, densities and surface gravity as compared to those of Earth.
Image credit: NASA/JPL-Caltech.
2018-Sep-01, 03:16 PM #5
BigDon
If this was a science fiction story I would smile and nod politely and hope the author didn't go into any more high fantasy...
Time wasted having fun is not time wasted - Lennon
(John, not the other one.)
2018-Sep-02, 02:21 AM #6
Originally Posted by BigDon
In the old days (1970 or so), I would have labeled this as baloney and thrown out the book, because of course it was baloney to have seven Earths stacked on top of each other, etc. I mean, seriously.
The Near-Infrared Transmission Spectra of TRAPPIST-1 Planets b, c, d, e, f, and g and Stellar Contamination in Multi-Epoch Transit Spectra
Zhanbo Zhang, Yifan Zhou, Benjamin V. Rackham, Daniel Apai
(Submitted on 6 Feb 2018 (v1), last revised 31 Aug 2018 (this version, v3))
The seven approximately Earth-sized transiting planets in the TRAPPIST-1 system provide a unique opportunity to explore habitable zone and non-habitable zone small planets within the same system. Its habitable zone exoplanets -- due to their favorable transit depths -- are also worlds for which atmospheric transmission spectroscopy is within reach with the Hubble Space Telescope (HST) and with the James Webb Space Telescope (JWST). We present here an independent reduction and analysis of two HST Wide Field Camera 3 (WFC3) near-infrared transit spectroscopy datasets for six planets (b through g). Utilizing our physically-motivated detector charge trap correction and a custom cosmic ray correction routine, we confirm the general shape of the transmission spectra presented by de Wit et al. (2016, 2018). Our data reduction approach leads to a 25% increase in the usable data and reduces the risk of confusing astrophysical brightness variations (e.g., flares) with instrumental systematics. No prominent absorption features are detected in any individual planet's transmission spectra; by contrast, the combined spectrum of the planets shows a suggestive decrease around 1.4 μm similar to an inverted water absorption feature. Including transit depths from K2, the SPECULOOS-South Observatory, and Spitzer, we find that the complete transmission spectrum is fully consistent with stellar contamination owing to the transit light source effect. These spectra demonstrate how stellar contamination can overwhelm planetary absorption features in low-resolution exoplanet transit spectra obtained by HST and JWST and also highlight the challenges in combining multi-epoch observations for planets around rapidly rotating spotted stars.
The 0.8-4.5 μm broadband transmission spectra of TRAPPIST-1 planets
E. Ducrot, et al.
(Submitted on 3 Jul 2018 (v1), last revised 2 Sep 2018 (this version, v2))
The TRAPPIST-1 planetary system represents an exceptional opportunity for the atmospheric characterization of temperate terrestrial exoplanets with the upcoming James Webb Space Telescope (JWST). Assessing the potential impact of stellar contamination on the planets' transit transmission spectra is an essential precursor step to this characterization. Planetary transits themselves can be used to scan the stellar photosphere and to constrain its heterogeneity through transit depth variations in time and wavelength. In this context, we present our analysis of 169 transits observed in the optical from space with K2 and from the ground with the SPECULOOS and Liverpool telescopes. Combining our measured transit depths with literature results gathered in the mid/near-IR with Spitzer/IRAC and HST/WFC3, we construct the broadband transmission spectra of the TRAPPIST-1 planets over the 0.8-4.5 μm spectral range. While the spectra of planets b, d, and f show some structures at the 200-300 ppm level, the four others are globally flat. Even if we cannot discard their instrumental origins, two scenarios seem to be favored by the data: a stellar photosphere dominated by a few high-latitude giant (cold) spots, or, alternatively, by a few small and hot (3500-4000 K) faculae. In both cases, the stellar contamination of the transit transmission spectra is expected to be less dramatic than predicted in recent papers. Nevertheless, based on our results, stellar contamination can still be of comparable or greater order than planetary atmospheric signals at certain wavelengths. Understanding and correcting the effects of stellar heterogeneity therefore appears essential to prepare the exploration of TRAPPIST-1's planets with JWST.
Originally Posted by Roger E. Moore
I was deeply struck with this while setting up an outreach event using ping-pong balls to show the TRAPPIST-1 planets to scale with their orbits. They would be really obviously big to our unaided eyes. Running some numbers, from the innermost of the 7, the outermost and smallest would still show a distinct visible disk at opposition (about 4.5 arcminutes in diameter). It would look like some of the old magazine paintings of a downright crowded sky.
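A rough check of that 4.5-arcminute figure, using assumed published values for the two orbits and for the radius of planet h (none of these numbers come from this thread), lands in the same place:

```python
# Back-of-the-envelope check of the "~4.5 arcminute" claim above.
# Assumed values: TRAPPIST-1b semi-major axis ~0.0115 au, TRAPPIST-1h ~0.0619 au,
# TRAPPIST-1h radius ~0.755 Earth radii.
import math

AU_KM = 1.495978707e8          # kilometres per astronomical unit
R_EARTH_KM = 6371.0            # mean Earth radius in km

a_b = 0.0115 * AU_KM           # orbit of innermost planet (b), km
a_h = 0.0619 * AU_KM           # orbit of outermost planet (h), km
r_h = 0.755 * R_EARTH_KM       # radius of planet h, km

d = a_h - a_b                                   # separation at opposition, km
theta_rad = 2.0 * math.atan(r_h / d)            # angular diameter in radians
theta_arcmin = math.degrees(theta_rad) * 60.0   # convert to arcminutes

print(f"Angular diameter of h seen from b at opposition: {theta_arcmin:.1f} arcmin")
# Prints roughly 4.4 arcmin, consistent with the ~4.5 arcmin quoted above.
```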
2018-Sep-21, 03:09 PM #10
Most recent paper is not optimistic about colonization conditions on any of the planets. If the temperature is okay, there's no water, there's sulfuric acid in the air, etc. Nonetheless, this is only a collection of simulations and maybe we'll find one of the seven suitable for building suburbs. Don't take your spacesuit off, though. Could be a terraformer's paradise.
Evolved Climates and Observational Discriminants for the TRAPPIST-1 Planetary System
Andrew P. Lincowski, et al. (Submitted on 20 Sep 2018)
The TRAPPIST-1 planetary system provides an unprecedented opportunity to study terrestrial exoplanet evolution with the James Webb Space Telescope (JWST) and ground-based observatories. Since M dwarf planets likely experience extreme volatile loss, the TRAPPIST-1 planets may have highly-evolved, possibly uninhabitable atmospheres. We used a versatile, 1D terrestrial-planet climate model with line-by-line radiative transfer and mixing length convection (VPL Climate) coupled to a terrestrial photochemistry model to simulate environmental states for the TRAPPIST-1 planets. We present equilibrium climates with self-consistent atmospheric compositions, and observational discriminants of post-runaway, desiccated, 10-100 bar O2- and CO2-dominated atmospheres, including interior outgassing, as well as for water-rich compositions. Our simulations show a range of surface temperatures, most of which are not habitable, although an aqua-planet TRAPPIST-1 e could maintain a temperate surface given Earth-like geological outgassing and CO2. We find that a desiccated TRAPPIST-1 h may produce habitable surface temperatures beyond the maximum greenhouse distance. Potential observational discriminants for these atmospheres in transmission and emission spectra are influenced by photochemical processes and aerosol formation, and include collision-induced oxygen absorption (O2-O2), and O3, CO, SO2, H2O, and CH4 absorption features, with transit signals of up to 200 ppm. Our simulated transmission spectra are consistent with K2, HST, and Spitzer observations of the TRAPPIST-1 planets. For several terrestrial atmospheric compositions, we find that TRAPPIST-1 b is unlikely to produce aerosols. These results can inform JWST observation planning and data interpretation for the TRAPPIST-1 system and other M dwarf terrestrial planets.
QUOTES: We have calculated the possible ocean loss and oxygen accumulation for the seven known TRAPPIST-1 planets, modeled potential O2/CO2-dominated and potentially habitable environments, and computed transit transmission and emission spectra. These evolved terrestrial exoplanet spectra are consistent with broad constraints from recent HST and Spitzer data. Our evolutionary modeling suggests that the current environmental states can include the hypothesized desiccated, post-ocean-runaway O2-dominated planets, with at least partial ocean loss persisting out to TRAPPIST-1 h. These O2-dominated atmospheres have unusual temperature structures, with low-altitude stratospheres and no tropospheres, which result in distinctive features in both transmission and emission, including strong collision-induced absorption from O2. Alternatively, if early volatile outgassing (e.g. H2O, SO2, CO2) occurred, as was the case for Earth and Venus, Venus-like atmospheres are possible, and likely stable, throughout and beyond the habitable zone, so the maximum greenhouse limit may not apply for evolved M dwarf planets. If Venus-like, these planets could form sulfuric acid hazes, though we find that TRAPPIST-1 b would be too hot to condense H2SO4 aerosols. From analyzing our simulated spectra, we find that there are observational discriminants for the environments we modeled in both transit and emission, with transit signals up to 200 ppm for TRAPPIST-1 b. Detection of CO2 in all considered compositions may be used to probe for the presence of a terrestrial atmosphere. We find that the detection of water is not a good indicator of a habitable environment, as Venus-like atmospheres exhibit similar spectral features for water, so the detection of low stratospheric water abundance may be a necessary but not sufficient condition for a habitable environment. The discriminants between these environments involve several trace gases. Careful atmospheric modeling that includes photochemistry and realistic interior outgassing is required to predict the diversity of potentially observable spectral features, to interpret future data, and to infer the underlying physical processes producing the observed features. Nevertheless, these discriminants may be used to assess the viability of detecting evolutionary outcomes for the TRAPPIST-1 planets with upcoming observatories, particularly JWST, and this will be assessed in subsequent work. While specifically applied here to the TRAPPIST-1 system, our results may be broadly relevant for other multi-planet M dwarf systems.
TRAPPIST-1's LAW: If it is at all possible for the TRAPPIST-1 system to get any more complicated than it already is, that possibility will immediately reach 100% and a paper will come out to prove it.
Planet-Planet Tides in the TRAPPIST-1 System
Jason T. Wright (Submitted on 21 Sep 2018)
The star TRAPPIST-1 hosts a system of seven transiting, terrestrial exoplanets apparently in a resonant chain, at least some of which are in or near the Habitable Zone. Many have examined the roles of tides in this system, as tidal dissipation of the orbital energy of the planets may be relevant to both the rotational and orbital dynamics of the planets, as well as their habitability. Generally, tides are calculated as being due to the tides raised on the planets by the star, and tides raised on the star by the planets. I write this research note to point out a tidal effect that may be at least as important as the others in the TRAPPIST-1 system and which is so far unremarked upon in the literature: planet-planet tides. Under some reasonable assumptions, I find that for every planet p in the TRAPPIST-1 system there exists some other planet q for which the planet-planet dynamical tidal strain is within an order of magnitude of the stellar eccentricity tidal strain, and that the effects of planet f on planet g are in fact greater than that of the star on planet g. It is thus not obvious that planet-planet tides can be neglected in the TRAPPIST-1 exoplanetary system, especially the tides on planet g due to planet f, if the planets are in synchronous rotation.
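To see roughly where the f-versus-star comparison comes from, here is a numerical sketch of the argument using assumed literature values for the stellar mass, the mass of planet f, the two semi-major axes, and the eccentricity of planet g; none of these inputs are taken from the research note itself, so treat the result as an order-of-magnitude illustration only:

```python
# Rough comparison of the tide raised on TRAPPIST-1g by planet f versus the
# stellar eccentricity tide on g. All inputs are assumed approximate values.

M_EARTH_PER_MSUN = 332946.0

m_star = 0.089 * M_EARTH_PER_MSUN   # stellar mass in Earth masses
m_f = 1.04                          # assumed mass of planet f (Earth masses)
a_f, a_g = 0.0385, 0.0468           # assumed semi-major axes (au)
e_g = 0.003                         # assumed orbital eccentricity of planet g

delta = a_g - a_f                   # closest approach of f and g (au), coplanar circular approximation

# Time-variable tidal strain amplitudes on planet g (common factors of R_g dropped):
planet_planet = m_f / delta**3              # tide raised by planet f at conjunction
stellar_ecc = 3.0 * e_g * m_star / a_g**3   # eccentricity (dynamical) tide from the star

print(f"planet-planet / stellar eccentricity tide ~ {planet_planet / stellar_ecc:.2f}")
# ~0.7 with these inputs, i.e. the two effects are comparable, which is why the
# note argues planet-planet tides cannot be dismissed out of hand.
```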
I note that "Planet-Planet Tides in the TRAPPIST-1 System" has just been updated. If you are interested in this paper you might wish to recheck it to see if anything else has changed.
2018-Oct-15, 01:49 AM #13
Can you see the stars from the TRAPPIST-1 planets? Maybe not.
Limits on Clouds and Hazes for the TRAPPIST-1 Planets
Sarah E. Moran, et al. (Submitted on 11 Oct 2018)
The TRAPPIST-1 planetary system is an excellent candidate for study of the evolution and habitability of M-dwarf planets. Transmission spectroscopy observations performed with the Hubble Space Telescope (HST) suggest the innermost five planets do not possess clear hydrogen atmospheres. Here we reassess these conclusions with recently updated mass constraints and expand the analysis to include limits on metallicity, cloud top pressure, and the strength of haze scattering. We connect recent laboratory results of particle size and production rate for exoplanet hazes to a one-dimensional atmospheric model for TRAPPIST-1 transmission spectra. Doing so, we obtain a physically-based estimate of haze scattering cross sections. We find haze scattering cross sections on the order of 10^-26 to 10^-19 cm^2 are needed in hydrogen-rich atmospheres for TRAPPIST-1 d, e, and f to match the HST data. For TRAPPIST-1 g, we cannot rule out a clear hydrogen-rich atmosphere. We also modeled the effects an opaque cloud deck and substantial heavy element content have on the transmission spectra. We determine that hydrogen-rich atmospheres with high altitude clouds, at pressures of 12 mbar and lower, are consistent with the HST observations for TRAPPIST-1 d and e. For TRAPPIST-1 f and g, we cannot rule out clear hydrogen-rich cases to high confidence. We demonstrate that metallicities of at least 60× solar with tropospheric (0.1 bar) clouds agree with observations. Additionally, we provide estimates of the precision necessary for future observations to disentangle degeneracies in cloud top pressure and metallicity. Our results suggest secondary, volatile-rich atmospheres for the outer TRAPPIST-1 planets d, e, and f.
2018-Oct-29, 07:05 PM #14
One thing not greatly explored for TRAPPIST-1's planets is whether they generate earth tides on each other, and therefore a lot of internal heat and volcanism, as happens on Io.
Constraining the environment and habitability of TRAPPIST-1
Emeline Bolmont (Submitted on 26 Oct 2018)
The planetary system of TRAPPIST-1, discovered in 2016-2017, is a treasure-trove of information. Thanks to a combination of observational techniques, we have estimates of the radii and masses of the seven planets of this very exotic system. With three planets within the traditional Habitable Zone limits, it is one of the best-constrained systems of astrobiological interest. I will review here the theoretical constraints we can put on this system by trying to reconstruct its history: its atmospheric evolution, which depends on the luminosity evolution of the dwarf star, and its tidal dynamical evolution. These constraints can then be used as hypotheses to assess the habitability of the outer planets of the system with a Global Climate Model.
QUOTES: In many aspects, TRAPPIST-1 is comparable to the system of Jupiter and its satellites. The eccentricity of Io is damped by tides and excited by the other satellites (especially by Europa and Ganymede, in mean motion resonance with Io); this leads to a small remnant equilibrium eccentricity of ~ 0.004. This non-zero eccentricity leads to a tidal deformation of the satellite, which is responsible for the observed intense surface activity (tidal heat flux of ~ 3 W/m2, Spencer et al. 2000; intense volcanic activity, Spencer et al. 2007). The exact same situation is true for the planets of TRAPPIST-1. The tidal heat flux for each planet has been evaluated in Luger et al. (2017) and Turbet et al. (2018). In particular, the flux of TRAPPIST-1b is always higher than Io's, and the fluxes of planets c and d are higher than the heat flux of Earth (Pollack et al. 1993; Davies & Davies 2010). Depending on the assumption on the dissipation of the planets, TRAPPIST-1e can experience a tidal heat flux of the order of magnitude of Earth's heat flux. The effect of this tidal heat flux on the internal structure of the planets (Barr et al. 2018) and their climate (Turbet et al. 2018) should be investigated further (see Sylvain Breton's proceeding from this same conference).
Cannot get the original research paper, but there's this news bit on Panspermia within TRAPPIST-1.
https://phys.org/news/2018-10-life-planets-door.html
Sharing life with the planets next door
October 30, 2018 by Starre Vartan, Astrobiology Magazine
Dr. Dimitri Veras, an astrophysicist at the University of Warwick in the UK, and lead author of a new paper on the subject, says that, "Within the last century, [panspermia] has been focused on life transport within the solar system, including Earth." The TRAPPIST-1 system, which is 41 light years away and includes seven planets packed into an orbit smaller than Mercury's, changes this Earth-centric idea. The TRAPPIST-1 sun is an ultra-cool red dwarf, so even though the seven nearby planets orbit closely, they are possibly all still in the habitable zone for life, to varying degrees depending upon the make-up of their atmospheres. That makes them a perfect model for exploring the idea of panspermia, per Hawking, anywhere in the universe.
2018-Nov-13, 02:10 PM #16
Continuing to try to keep up with the TRAPPIST-1 family, looking at stellar flares' effects on the planets, and whether the star is messing up atmospheric evaluations of one of the worlds.
Magnetic Fields on the Flare Star Trappist-1: Consequences for Radius Inflation and Planetary Habitability
D. J. Mullan, J. MacDonald, S. Dieterich, H. Fausey (Submitted on 9 Nov 2018)
We construct evolutionary models of Trappist-1 in which magnetic fields impede the onset of convection according to a physics-based criterion. In the models that best fit all observational constraints, the photospheric fields in Tr-1 are found to be in the range 1450-1700 G. These are weaker by a factor of about 2 than the fields we obtained in previous magnetic models of two other cool dwarfs (GJ65A/B). Our results suggest that Tr-1 possesses a global poloidal field which is some one hundred times stronger than in the Sun. In the context of exoplanets in orbit around Tr-1, the strong poloidal fields on the star may help to protect the planets from the potentially destructive effects of coronal mass ejections. This, in combination with previous arguments about beneficial effects of flare photons in ultraviolet and visible portions of the spectrum, suggests that conditions on Tr-1 are not necessarily harmful to life on a planet in the habitable zone of Tr-1.
Disentangling the planet from the star in late type M dwarfs: A case study of TRAPPIST-1g
Hannah R. Wakeford, et al. (Submitted on 12 Nov 2018)
The atmospheres of late M stars represent a significant challenge in the characterization of any transiting exoplanets due to the presence of strong molecular features in the stellar atmosphere. TRAPPIST-1 is an ultra-cool dwarf, host to seven transiting planets, and contains its own molecular signatures which can potentially be imprinted on planetary transit light curves due to inhomogeneities in the occulted stellar photosphere. We present a case study on TRAPPIST-1g, the largest planet in the system, using a new observation together with previous data, to disentangle the atmospheric transmission of the planet from that of the star. We use the out-of-transit stellar spectra to reconstruct the stellar flux based on one-, two-, and three-temperature components. We find that TRAPPIST-1 is a 0.08 M⊙, 0.117 R⊙, M8V star with a photospheric effective temperature of 2400 K, with ~35% 3000 K spot coverage and a very small fraction, <3%, of ~5800 K hot spots. We calculate a planetary radius for TRAPPIST-1g of Rp = 1.124 R⊕ with a planetary density of ρp = 0.8214 ρ⊕. Based on the stellar reconstruction there are eleven plausible scenarios for the combined stellar photosphere and planet transit geometry; in our analysis we are able to rule out 8 of the 11 scenarios. Using planetary models we evaluate the remaining scenarios with respect to the transmission spectrum of TRAPPIST-1g. We conclude that the planetary transmission spectrum is likely not contaminated by any stellar spectral features, and are able to rule out a clear solar H2/He-dominated atmosphere at greater than 3-sigma.
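As a rough illustration of how a heterogeneous photosphere contaminates a measured transit depth (the "transit light source" effect discussed in this thread), here is a minimal sketch that takes the component temperatures and covering fractions from the abstract above but substitutes blackbodies for real M-dwarf component spectra; everything else is an assumption, and blackbodies exaggerate the contrasts, so the numbers are illustrative only:

```python
# Minimal sketch of the transit light source contamination factor for a
# three-component photosphere (2400 K photosphere, ~35% 3000 K spots,
# <3% ~5800 K hot faculae), with blackbodies as stand-in component spectra.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl_m, temp_k):
    """Blackbody spectral radiance B_lambda(T) in SI units."""
    return (2 * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * KB * temp_k))

wl = np.linspace(1.1e-6, 1.7e-6, 50)        # WFC3/G141-like wavelength grid (m)

T_phot, T_spot, T_fac = 2400.0, 3000.0, 5800.0
f_spot, f_fac = 0.35, 0.03                  # covering fractions (from the abstract)

S_phot, S_spot, S_fac = planck(wl, T_phot), planck(wl, T_spot), planck(wl, T_fac)

# Factor multiplying the true transit depth when the transit chord crosses only
# the quiet photosphere and the heterogeneities sit outside the chord
# (Rackham et al. 2018 form, generalised to two heterogeneity components):
eps = 1.0 / (1.0 - f_spot * (1.0 - S_spot / S_phot) - f_fac * (1.0 - S_fac / S_phot))

print(f"contamination factor over 1.1-1.7 um: {eps.min():.2f} to {eps.max():.2f}")
# Values below 1 here mean the bright unocculted regions dilute the transit depth.
```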
New climate model renders all but one TRAPPIST-1 planet a dud.
https://phys.org/news/2018-11-climat...ntriguing.html
Study brings new climate models of small star TRAPPIST 1's seven intriguing worlds
November 21, 2018 by Peter Kelley, University of Washington
Not all stars are like the sun, so not all planetary systems can be studied with the same expectations. New research from a University of Washington-led team of astronomers gives updated climate models for the seven planets around the star TRAPPIST-1. The work also could help astronomers more effectively study planets around stars unlike our sun, and better use the limited, expensive resources of the James Webb Space Telescope, now expected to launch in 2021.
"We are modeling unfamiliar atmospheres, not just assuming that the things we see in the solar system will look the same way around another star," said Andrew Lincowski, UW doctoral student and lead author of a paper published Nov. 1 in Astrophysical Journal. "We conducted this research to show what these different types of atmospheres could look like." The team found, briefly put, that due to an extremely hot, bright early stellar phase, all seven of the star's worlds may have evolved like Venus, with any early oceans they may have had evaporating and leaving dense, uninhabitable atmospheres. However, one planet, TRAPPIST-1 e, could be an Earthlike ocean world worth further study, as previous research also has indicated.
TRAPPIST-1, 39 light-years or about 235 trillion miles away, is about as small as a star can be and still be a star. A relatively cool "M dwarf" star—the most common type in the universe—it has about 9 percent the mass of the sun and about 12 percent its radius. TRAPPIST-1 has a radius only a little bigger than the planet Jupiter, though it is much greater in mass.
Actual paper
2019-Jan-09, 01:43 PM #18
This recent study indicates that the low-mass star (M8) TRAPPIST-1 has condensates in its upper atmosphere... i.e., "clouds".
Time-resolved image polarimetry of Trappist-1 during planetary transits
P. A. Miles-Páez, M. R. Zapatero Osorio, E. Pallé, S. A. Metchev (Submitted on 7 Jan 2019)
We obtained linear polarization photometry (J-band) and low-resolution spectroscopy (ZJ-bands) of Trappist-1, which is a planetary system formed by an M8-type low-mass star and seven temperate, Earth-sized planets. The photopolarimetric monitoring campaign covered 6.5 h of continuous observations including one full transit of planet Trappist-1d and partial transits of Trappist-1b and e. The spectrophotometric data and the photometric light curve obtained over epochs with no planetary transits indicate that the low-mass star has a very low level of linear polarization compatible with a null value. However, the "in transit" observations reveal an enhanced linear polarization signal with peak values of p* = 0.1% with a confidence level of 3σ, particularly for the full transit of Trappist-1d, thus confirming that the atmosphere of the M8-type star is very likely dusty. Additional observations probing different atmospheric states of Trappist-1 are needed to confirm our findings, as the polarimetric signals involved are low. If confirmed, polarization observations of transiting planetary systems with central ultra-cool dwarfs can become a powerful tool for the characterization of the atmospheres of the host dwarfs and the validation of transiting planet candidates that cannot be corroborated by any other method.
QUOTES: Trappist-1 has an effective temperature Teff = 2516 ± 41 K (Van Grootel et al. 2018); this is low enough for naturally forming liquid and solid condensates in the upper photosphere... These condensates [are] sometimes referred to as "dusty" particles that can be organized into "clouds"
It is possible that the innermost worlds of TRAPPIST-1 interact with their sun in such a way as to produce stellar flares.
Time-variable electromagnetic star-planet interaction: The TRAPPIST-1 system as an exemplary case
Christian Fischer, Joachim Saur (Submitted on 9 Jan 2019)
Exoplanets sufficiently close to their host star can in principle couple electrodynamically to the star. This process is known as electrodynamic star-planet interaction (SPI). The expected emission associated with this coupling is however difficult to observe due to the bright intrinsic stellar emission. Identification of time-variability in the stellar lightcurve is one of the most promising approaches to identify SPI. In this work we therefore systematically investigate various mechanisms and their associated periods, which generate time-variability to aid the search for SPI. We find that the synodic and half the synodic rotation periods of the stars as measured in the rest frames of the orbiting exoplanets are basic periods occurring in SPI. We apply our findings to the example of TRAPPIST-1 with its seven close-in planets, for which we investigate the possibility of SPI and the associated time-variabilities. We show that especially TRAPPIST-1b and c are very likely subject to sub-Alfvénic interaction, a necessary condition for SPI. Both planets are therefore expected to generate Alfvén wings, which can couple to the star. The associated Poynting fluxes are on the order of 10^11 to 10^15 W and thus can hardly be the direct source of currently observable time-variability from TRAPPIST-1. However these Poynting fluxes might trigger flares on the star. We find correlations between the observed flares and the expected planet-induced signals, which could be due to SPI, but our findings are not conclusive and warrant further observations and modelling.
QUOTES: We performed an analysis of TRAPPIST-1's flare time-series as observed by the K2 mission (Luger et al. 2017). Our results hint at a quasi-periodic occurrence of flares with T1c's synodic period of 9.1 d and the stellar rotation period of 3.3 d, but the results are inconclusive.
2019-Feb-12, 02:16 PM #20
Another look at stellar radiation bombardment of TRAPPIST-1 and similar planetary systems.
Stellar energetic particles in the magnetically turbulent habitable zones of TRAPPIST-1-like planetary systems
F. Fraschetti, J. J. Drake, J. D. Alvarado-Gomez, S. P. Moschou, C. Garraffo, O. Cohen (Submitted on 11 Feb 2019)
Planets in close proximity to their parent star, such as those in the habitable zones around M dwarfs, could be subject to particularly high doses of particle radiation. We have carried out test-particle simulations of ~GeV protons to investigate the propagation of energetic particles accelerated by flares or travelling shock waves within the stellar wind and magnetic field of a TRAPPIST-1-like system. Turbulence was simulated with small-scale magnetostatic perturbations with an isotropic power spectrum. We find that only a few percent of particles injected within half a stellar radius from the stellar surface escape, and that the escaping fraction increases strongly with increasing injection radius. Escaping particles are increasingly deflected and focused by the ambient spiralling magnetic field as the superimposed turbulence amplitude is increased. In our TRAPPIST-1-like simulations, regardless of the angular region of injection, particles are strongly focused onto two caps within the fast wind regions and centered on the equatorial planetary orbital plane. Based on a scaling relation between far-UV emission and energetic protons for solar flares applied to M dwarfs, the innermost putative habitable planet, TRAPPIST-1e, is bombarded by a proton flux up to 6 orders of magnitude larger than experienced by the present-day Earth. We note two mechanisms that could strongly limit EP fluxes from active stars: EPs from flares are contained by the stellar magnetic field; and potential CMEs that might generate EPs at larger distances also fail to escape.
Are the TRAPPIST-1 planets habitable from the standpoint of heat and runaway greenhouse effects?
Tidal Heating and the Habitability of the TRAPPIST-1 Exoplanets
Vera Dobos, Amy C. Barr, László L. Kiss (Submitted on 11 Feb 2019)
Context. New estimates of the masses and radii of the seven planets orbiting the ultracool M-dwarf TRAPPIST-1 star permit improved modelling of their compositions, heating by tidal dissipation, and removal of tidal heat by solid-state convection. Aims. Here, we compute the heat flux due to insolation and tidal heating for the inner four planets. Methods. We apply a Maxwell viscoelastic rheology to compute the tidal response of the planets using the volume-weighted average of the viscosities and rigidities of the metal, rock, high-pressure ice and liquid water/ice I layers. Results. We show that TRAPPIST-1d and e can avoid entering a runaway greenhouse state. Planet e is the most likely to support a habitable environment, with Earth-like surface temperatures and possibly liquid water oceans. Planet d also avoids a runaway greenhouse, if its surface reflectance is at least as high as that of the Earth. Planets b and c, closer to the star, have heat fluxes high enough to trigger a runaway greenhouse and support volcanism on the surfaces of their rock layers, rendering them too warm for life. Planets f, g, and h are too far from the star to experience significant tidal heating, and likely have solid ice surfaces with possible subsurface liquid water oceans.
2019-Mar-13, 01:21 AM #21
Imagine the tides if several planets line up at once. Bad day to go to the beach.
Tides between the TRAPPIST-1 planets
Hamish Hay, Isamu Matsuyama (Submitted on 11 Mar 2019)
The TRAPPIST-1 system is sufficiently closely packed that tides raised by one planet on another are significant. We investigate whether this source of tidal heating is comparable to eccentricity tides raised by the star.
2019-May-03, 12:22 PM #22
A bit dense, but fans of the TRAPPIST system might find value in this.
The tidal parameters of TRAPPIST-1 b and c
R. Brasser, A. C. Barr, V. Dobos (Submitted on 1 May 2019)
The TRAPPIST-1 planetary system consists of seven planets within 0.05 au of each other, five of which are in a multi-resonant chain. These resonances suggest the system formed via planet migration; subsequent tidal evolution has damped away most of the initial eccentricities. We used dynamical N-body simulations to estimate how long it takes for the multi-resonant configuration that arises during planet formation to break. From there we use secular theory to pose limits on the tidal parameters of planets b and c. We calibrate our results against multi-layered interior models constructed to fit the masses and radii of the planets, from which the tidal parameters are computed independently. The dynamical simulations show that the planets typically go unstable 30 Myr after their formation. Assuming synchronous rotation throughout, we compute k2/Q ≳ 2 × 10^-4 for planet b and k2/Q ≳ 10^-3 for planet c. Interior models yield (0.075-0.37) × 10^-4 for TRAPPIST-1 b and (0.4-2) × 10^-4 for TRAPPIST-1 c. The agreement between the dynamical and interior models is not too strong, but is still useful to constrain the dynamical history of the system. We suggest that this two-pronged approach could be of further use in other multi-resonant systems if the planets' orbital and interior parameters are sufficiently well known.
No further terrestrial or larger planets detected in the TRAPPIST-1 system. We seem to have found all the bigger ones.
Ground-based follow-up observations of TRAPPIST-1 transits in the near-infrared
A. Y. Burdanov, et al. (Submitted on 15 May 2019)
The TRAPPIST-1 planetary system is a favorable target for the atmospheric characterization of temperate earth-sized exoplanets by means of transmission spectroscopy with the forthcoming James Webb Space Telescope (JWST). A possible obstacle to this technique could come from the photospheric heterogeneity of the host star, which could affect planetary signatures in the transit transmission spectra. To further constrain this possibility, we gathered an extensive photometric data set of 25 TRAPPIST-1 transits observed in the near-IR J band (1.2 μm) with the UKIRT and the AAT, and in the NB2090 band (2.1 μm) with the VLT during the period 2015-2018. In our analysis of these data, we used a special strategy aiming to ensure uniformity in our measurements and robustness in our conclusions. We reach a photometric precision of ∼0.003 (RMS of the residuals), and we detect no significant temporal variations of transit depths of TRAPPIST-1 b, c, e, and g over the period of three years. The few transit depths measured for planets d and f hint towards some level of variability, but more measurements will be required for confirmation. Our depth measurements for planets b and c disagree with the stellar contamination spectra originating from the possible existence of bright spots of temperature 4500 K. We report updated transmission spectra for the six inner planets of the system, which are globally flat for planets b and g, while some structures are seen for planets c, d, e, and f.
2019-Jun-13, 01:58 PM #24
Not a good discovery, if we are talking about the chances for life in the TRAPPIST-1 system.
On The XUV Luminosity Evolution of TRAPPIST-1
David P. Fleming, Rory Barnes, Rodrigo Luger, Jacob T. VanderPlas (Submitted on 12 Jun 2019)
We model the long-term XUV luminosity of TRAPPIST-1 to constrain the evolving high-energy radiation environment experienced by its planetary system. Using Markov Chain Monte Carlo (MCMC), we derive probabilistic constraints for TRAPPIST-1's stellar and XUV evolution that account for observational uncertainties, degeneracies between model parameters, and empirical data of low-mass stars. We constrain TRAPPIST-1's mass to m⋆ = 0.089 ± 0.001 M⊙ and find that its early XUV luminosity likely saturated at log10(LXUV/Lbol) = −3.05 (+0.24/−0.10). From our posterior distributions, we infer that there is a ∼43% chance that TRAPPIST-1 is still in the saturated phase today, suggesting that TRAPPIST-1 has maintained high activity and LXUV/Lbol ≈ 10^−3 for several Gyrs. TRAPPIST-1's planetary system therefore likely experienced a persistent and extreme XUV flux environment, potentially driving significant atmospheric erosion and volatile loss. The inner planets likely received XUV fluxes ∼10^3 − 10^4 × that of the modern Earth during TRAPPIST-1's 1 Gyr-long pre-main sequence phase. Deriving these constraints via MCMC is computationally non-trivial, so scaling our methods to constrain the XUV evolution of a larger number of M dwarfs that harbor terrestrial exoplanets would incur significant computational expenses. We demonstrate that approxposterior, a Python machine learning package for approximate Bayesian inference using Gaussian processes, can efficiently replicate our analysis. We find that it derives constraints that are in good agreement with our MCMC, although it underestimates the uncertainties for two parameters by 30%. approxposterior requires 330× less computational time than traditional MCMC methods in this case, demonstrating its utility in efficient Bayesian inference.
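A quick order-of-magnitude check of the quoted ~10^3-10^4 × modern-Earth XUV fluxes, using assumed values for TRAPPIST-1's present-day bolometric luminosity, a rough modern solar XUV fraction, and planet b's orbital distance (only the saturated L_XUV/L_bol comes from the abstract above; everything else is an assumption):

```python
# Order-of-magnitude estimate of the XUV flux at TRAPPIST-1b relative to Earth.
L_bol_star = 5.5e-4        # TRAPPIST-1 bolometric luminosity, in L_sun (assumed)
xuv_frac_star = 1e-3       # saturated L_XUV / L_bol, from the abstract above
xuv_frac_sun = 1e-6        # rough modern solar XUV fraction (assumed)

a_b = 0.0115               # orbital distance of TRAPPIST-1b, au (assumed)
a_earth = 1.0              # Earth's orbital distance, au

# XUV flux at the planet relative to the modern Earth:
ratio = (L_bol_star * xuv_frac_star) / (1.0 * xuv_frac_sun) * (a_earth / a_b) ** 2
print(f"XUV flux at TRAPPIST-1b ~ {ratio:.0f} x modern Earth")
# ~4e3 with these inputs; during the pre-main-sequence phase, when the star was
# more luminous, this moves toward the upper end of the quoted range.
```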
2019-Jul-07, 06:44 PM #25
No bad news for TRAPPIST-1 worlds' habitability.
Constraining the Radio Emission of TRAPPIST-1
Anna Hughes, Aaron Boley, Rachel Osten, Jacob White (Submitted on 3 Jul 2019)
Exposure to outgoing high energy particle radiation - traceable by radio flux - can erode planetary atmospheres. While our results do not imply that the TRAPPIST-1 planets are suitable for life, we find no evidence that they are overtly unsuitable due to proton fluxes.
Rice life cycle-based global mercury biotransport and human methylmercury exposure
Maodian Liu, Qianru Zhang, Menghan Cheng, Yipeng He, Long Chen, Haoran Zhang, Hanlin Cao, Huizhong Shen, Wei Zhang, Shu Tao & Xuejun Wang
Protecting the environment and enhancing food security are among the world's greatest challenges. Fish consumption is widely considered to be the single significant dietary source of methylmercury. Nevertheless, by synthesizing data from the past six decades and using a variety of models, we find that rice could be a significant global dietary source of human methylmercury exposure, especially in South and Southeast Asia. In 2013, globalization caused 9.9% of human methylmercury exposure via the international rice trade and significantly aggravated rice-derived exposure in Africa (62%), Central Asia (98%) and Europe (42%). In 2016, 180 metric tons of mercury were generated in rice plants, 14-fold greater than that exported from oceans via global fisheries. We suggest that future research should consider both the joint ingestion of rice with fish and the food trade in methylmercury exposure assessments, and anthropogenic biovectors such as crops should be considered in the global mercury cycle.
Mercury (Hg) is a global pollutant and poses health risks to wildlife and humans1. As one of the most toxic forms of Hg, methylmercury (MeHg) can reduce the intelligence quotient (IQ) and cause developmental delays in children and may also result in cardiovascular impairment in adults2,3,4. Although Hg occurs naturally, human activities have altered its global biogeochemical cycle in the environment5,6. Given its long-range transport, efficient bioaccumulation in the food web, and human health impacts, global Hg cycling among various environmental media has been studied over the past several decades7,8,9. Nevertheless, most of these efforts have focused on emissions of Hg to the atmosphere, and few have examined other components, such as vegetation in the terrestrial ecosystem, in detail as well as the impacts of these components on human exposure. Recent evidence suggests that vegetation could play an important role in global Hg cycles10. Thus it would be desirable to identify the impacts of vegetation such as commercial crops, which could be labeled as anthropogenic biovectors, on both the global biogeochemical cycle of Hg and human exposure.
Fish consumption has been considered the single significant dietary source of MeHg in most studies11,12,13. However, this conclusion was recently challenged by studies in some rural areas in China, where elevated rice (Oryza sativa)-derived MeHg levels and low fish ingestion rates were reported14. Saturated agricultural soils, such as rice paddies, have been demonstrated to be potential MeHg production sites15. Although a recent paper found that human MeHg exposure across China was dominated by fish intake (including marine and freshwater products), rice consumption could be a significant dietary source of human MeHg exposure in inland China16. Rice is a staple food for half of the global population. However, MeHg exposure through rice ingestion has received relatively little attention compared to fish, and a global comprehensive evaluation of human MeHg exposure through rice consumption is required14. In addition, globalization induces a geospatial separation between the production and consumption of goods. As a consequence, unprecedented displacements of environmental and social impacts are associated with the international trade of goods17. Nevertheless, in many high-profile cases, the impacts of the regional and global food trades on human Hg exposure have not been suitably evaluated18,19. Unlike with fish, substantial rice residues, defined as the non-edible rice plant parts, are left or burned in the fields after harvest20. Globally, Hg emission from biomass burning contributes significantly to the Hg cycle9, but the contributions of different crop residues via burning have not been quantified. Thus it would be of interest to quantitatively connect global Hg cycles with human health impacts by considering the effects of anthropogenic biovectors such as rice cultivation, which is of considerable global and societal importance.
The main objective of this study is to identify and discuss the role of rice in the Hg exposure continuum (from the environment to people), including its production, residue disposition, regional trade, human MeHg intake, and potential health impacts. We first establish 27 total Hg (THg, including all forms of Hg) and MeHg inventories covering 56 years for different countries and regions, including global rice production, import, export, stock variation, domestic supplies (including food, feed, seed, processing, losses, and other uses) and human intake and potential health impacts, as well as Hg amounts in rice residue yields and those emitted from residue burning. We then evaluate the impacts of the international rice trade on domestic human MeHg exposure. We also identify human MeHg exposure through consumption of rice from different types of Hg-contaminated sites.
Here we find that rice could be a significant global dietary source of human MeHg exposure, especially in South and Southeast Asia, and globalization significantly aggravates the MeHg exposure levels in Africa, Central Asia, and Europe via the international rice trade. In addition, MeHg exposure via the joint ingestion of fish and rice is an emerging health issue in Hg-contaminated areas in Southeast Asia. This novel assessment is motivated by our recognition of the potential importance of anthropogenic biotransport in the global Hg cycle and its impact on human health.
Mercury accumulation in rice plants and human exposure
In this study, we found that rice plants contributed a significant amount to the global anthropogenic biovectors of Hg (Figs. 1 and 2, Supplementary Figs. 1–8 and Supplementary Data 1–3). Globally, 5.3 (4.0–7.0 interquartile range from the Monte Carlo simulation) Mg of THg and 1.8 (1.5–2.2) Mg of MeHg in rice grain were harvested from terrestrial ecosystems in 2016, a substantial increase from 1.3 and 0.46 Mg, respectively, in 1961. In addition, 180 (89–360) Mg of THg and 1.7 (0.71–4.2) Mg of MeHg in rice residues were generated; both these values significantly increased from 69 and 0.66 Mg, respectively, in 1961. In contrast, 13 Mg of THg (including the edible and inedible fractions in seafood) were exported from the ocean via marine fisheries in 201421. The amount of THg that was generated in rice plants (including rice grains and residues) was higher than the amount that was exported from the ocean via global marine fisheries by a factor of 1421. Among 281 countries and territories across the world, India (South Asia) produced the most THg in rice grain and residues (2.1 and 64 Mg, respectively, in 2016), due to its large-scale rice production and the relatively high THg concentration of rice in India22,23,24. Substantial THg was also generated in rice grain and residues, respectively, in China (East Asia, 1.2 and 38 Mg), followed by Bangladesh (South Asia, 0.46 and 15 Mg) and Indonesia (Southeast Asia, 0.41 and 15 Mg, Fig. 1a, j). The above four countries accounted for 75% of the global THg generated by rice cultivation in 2016. Bangladesh had the highest THg production density (3.8 and 120 g km−2 in rice grain and residues, respectively, in 2016), followed by India (0.80 and 25 g km−2), Vietnam (Southeast Asia, 0.71 and 26 g km−2), and the Philippines (Southeast Asia, 0.62 and 20 g km−2), primarily due to the high population densities and the use of rice as a staple food in these countries23.
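A quick arithmetic check of the factor-of-14 comparison, combining the 2016 grain and residue THg figures quoted in this paragraph with the 2014 fisheries export figure (an approximation, since the reference years differ):

$$\frac{180\ \mathrm{Mg} + 5.3\ \mathrm{Mg}}{13\ \mathrm{Mg}} \approx 14$$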
Fig. 1 | Global distribution of mercury generated during the rice life cycle. a, b Amounts of THg and MeHg generated in rice grain in 2016; c, d amounts of MeHg transported by rice export and import in 2016, respectively; e stock variation of MeHg in 2016; f amounts of MeHg supplied as food in 2013; g amounts of MeHg related to processing in 2013; h amounts of MeHg losses during transportation in 2013; i per capita probable weekly intake (PWI) of MeHg in 2013; j, k THg and MeHg sequestered in rice residues in 2016; l THg emitted from rice residue burning in fields in 2016.
Fig. 2 | Temporal trends of mercury generated during the rice life cycle. a, b Amounts of THg and MeHg generated in rice grain from 1961 to 2016; c, d amounts of MeHg transported by rice export and import, respectively, from 1961 to 2016; e stock variation of MeHg from 1961 to 2016; f amounts of MeHg supplied as food from 1961 to 2013; g amounts of MeHg related to processing from 1961 to 2013; h amounts of MeHg losses during transportation from 1961 to 2013; i per capita probable weekly intake (PWI) of MeHg from 1961 to 2013; j, k THg and MeHg sequestered in rice residues from 1961 to 2016; l THg emitted from rice residue burning in fields from 1961 to 2016.
Among the different regions, South, East, and Southeast Asia generated most of the MeHg in rice grain in 2016, i.e., 0.77, 0.45, and 0.42 Mg, respectively, accounting for 91% of the world's total; these values were approximately 3.7-, 3.3-, and 5.4-fold higher than those in 1961 (Fig. 2b). The generation of MeHg in rice grain in South and Southeast Asia underwent a rapid increase during the period from 1961 to 2016, while that in East Asia (mostly contributed by China, Japan, and the Republic of Korea) remained stable after 1997. The generation of MeHg in rice grain in Africa, North America (including the Caribbean), and South America has also increased rapidly in the past six decades, while the MeHg generated in Europe and Oceania substantially decreased in 1988 and 2001, respectively, but increased slowly thereafter.
We found that rice could be a significant dietary source of human MeHg exposure globally, especially in South and Southeast Asia (Fig. 1i and Supplementary Note 1). We determined that 1.4 Mg (1.3–2.0 Mg) of MeHg was domestically supplied as food in 2013, and other rice grain might contribute to significant anthropogenic biotransport of MeHg (Fig. 1c–h), which might decrease (e.g., export to partner countries and losses during transportation) or indirectly increase (e.g., import from producing countries and use as feed for livestock) the risk of human Hg exposure. Globally, the average human MeHg intake rate contributed by rice consumption was 0.057 (0.053–0.080) μg kg−1 week−1 (per capita weekly intake) in 2013 (Fig. 3a). Subsequently, 0.026 (0.012–0.047) points of per-fetus IQ decreases and 11,000 (6200–19,000) deaths from fatal heart attacks were related to the intake of MeHg through rice consumption in 2013. Interestingly, among different countries, we found that inhabitants of Bangladesh faced the highest exposure to MeHg through rice consumption (0.21 μg kg−1 week−1), followed by the Philippines (0.16 μg kg−1 week−1) and Nepal (South Asia, 0.16 μg kg−1 week−1, Fig. 3a), which are all underdeveloped countries. This situation occurred mainly due to the relatively high rice consumption rates in these countries, which were 470, 330, and 240 g day−1, respectively, in 2013; these values were 3.2, 2.3, and 1.6 times the global average, respectively23. Subsequently, decreases of 0.10, 0.084, and 0.082 IQ points per fetus and 610, 510, and 140 deaths from fatal heart attacks in these countries, respectively, were related to the intake of MeHg via rice consumption in 2013 (Supplementary Fig. 8). Among the top 30 countries with high MeHg intake levels, 23% are in Southeast Asia and Africa.
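To make the units behind these per capita intake figures concrete, a minimal sketch of the probable weekly intake (PWI) calculation follows; the rice MeHg concentrations, intake rates, and 60 kg body weight are illustrative assumptions back-solved to land near the quoted figures, not values from the paper's inventories:

```python
# Minimal sketch of the probable weekly intake (PWI) metric, ug MeHg per kg
# body weight per week. All inputs below are assumptions for illustration.

def pwi_mehg(conc_ug_per_kg, intake_g_per_day, body_weight_kg=60.0):
    """Probable weekly intake of MeHg in ug per kg body weight per week."""
    intake_kg_per_week = intake_g_per_day / 1000.0 * 7.0
    return conc_ug_per_kg * intake_kg_per_week / body_weight_kg

# An assumed rice MeHg concentration of ~3.8 ug/kg with the 470 g/day
# consumption rate quoted for Bangladesh gives a PWI near the 0.21 figure:
print(f"Bangladesh-like case: {pwi_mehg(3.8, 470):.2f} ug/kg/week")

# A ~3.3 ug/kg concentration with a ~147 g/day mean intake lands near the
# 0.057 ug/kg/week world average quoted above:
print(f"Global-average-like case: {pwi_mehg(3.3, 147):.3f} ug/kg/week")
```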
Fig. 3 | Human methylmercury intake through rice consumption. a Per capita probable weekly intake (PWI) of THg and MeHg through rice consumption in the top 30 countries and the world average and b PWI of MeHg through consumption of rice from Hg-contaminated regions. Population-weighted world average PWI values of THg and MeHg are shown. More comparisons are presented in Supplementary Figs. 17 and 18. A Africa, NAC North America & Caribbean, S South America, CWA Central & West Asia, EA East Asia, SA South Asia, SEA Southeast Asia, O Oceania. Error bars in figures represent the interquartile range of the confidence intervals.
Owing to the pressure of population growth, the portion of rice supplied as food has increased rapidly in the past six decades (Fig. 2f), especially in some underdeveloped regions. For instance, in Central Asia (also including West Asia) and Africa, MeHg exposure through rice consumption in 2013 was higher than that in 1961, by factors of 3.7 and 2.9, respectively. Southeast, South, and East Asia faced high exposure to MeHg through rice consumption, i.e., 0.11, 0.097, and 0.065 μg kg−1 week−1, respectively, in 2013. Nevertheless, we found that the MeHg intake rate was slowly increasing in Southeast Asia and peaked in 1989 and 1983 in South and East Asia, respectively (Fig. 2i). In parallel, consumption of other food (e.g., pork and poultry) has been increasing gradually23, which might be due to the improvement of living standards in these regions. The MeHg intake rates of inhabitants of North America and Europe through rice consumption increased in general (2.7- and 2.5-fold increases from 1961 to 2013, respectively), while the intake rates remained steady for inhabitants of South America and Oceania in recent decades.
In most regions of the world, inhabitant MeHg exposure through rice consumption would not exceed that from fish, except perhaps in special areas where rice is a staple food and is cultivated in heavily Hg-contaminated soil, e.g., gold and Hg mining areas14,25. Based on 1259 rice Hg measurements in 57 articles from 19 Hg-contaminated areas (Supplementary Data 4), we found that inhabitants of a gold mining area on Lombok Island (Indonesia) potentially faced the highest MeHg exposure risk through rice consumption. Assuming an inhabitant only consumes local rice19, the MeHg intake rate could reach 1.9 (range from 0.94 to 3.4) μg kg−1 week−1 (Fig. 3b), higher than that of the global general population by a factor of 33. Subsequently, 0.76 (0.34–1.5) points of per-fetus IQ decreases were related to the intake of MeHg in this area, higher than the global average by a factor of 29. MeHg intake rates through local rice consumption were also high near a chlor-alkali facility in Ganjam (India) and a gold mining area in Phichit (Thailand, Southeast Asia) (Fig. 3b), and 0.39 (0.19–0.73) and 0.37 (0.18–0.68) points of per-fetus IQ decreases, respectively, were related to the intake of rice MeHg in these areas. Indeed, the consumption rates of marine fish are also high in Indonesia and Thailand, which were 33 and 28 g day−1, respectively, in 2013, higher than the global average consumption rate by factors of 2.8 and 2.3, respectively23. These findings suggest that MeHg exposure through the joint ingestion of fish and rice is an emerging health issue in Hg-contaminated areas in Southeast Asia (Supplementary Note 1).
Impacts of international rice trade and domestic economics
We found that the international rice trade could have significant impacts on human MeHg exposure via rice consumption in Africa, Central Asia, and Europe (Fig. 4). Globally, 9.9% of human MeHg exposure through rice consumption was embodied in the international rice trade in 2013, an increase from 3.4% in 1961. Subsequently, 2.3 × 10⁻³ (1.1 × 10⁻³–3.9 × 10⁻³) points of per-fetus IQ decreases and 710 (420–1100) deaths from fatal heart attacks were related to the international rice trade. The international rice trade aggravated MeHg exposure in Africa, Central Asia, East Asia, and Europe (increases of 62%, 98%, 3.4%, and 42%, respectively, in 2013) and mitigated exposure in North America, South America, South Asia, Southeast Asia, and Oceania (decreases of 19%, 13%, 11%, 12%, and 26%, respectively). Inhabitants of Africa consumed the highest amounts of MeHg from the trade, i.e., 35, 12, 1.5, and 1.3 kg MeHg from South Asia, Southeast Asia, North America, and South America, respectively, in 2013 (Fig. 5a). Although the amounts were lower than those in Africa, inhabitants of Central Asia also consumed 17 and 2.7 kg MeHg from South Asia and North America, respectively, in 2013.
Life cycle of methylmercury generated in rice grain. Components of the life cycle of MeHg generated in rice grain include rice production, import, export, stock variation, and domestic supplies (including food, feed, seed, processing, losses, and other uses). a–i represent different regions of the world, i.e., Africa (a), North America & Caribbean (b), South America (c), Central & West Asia (d), East Asia (e), South Asia (f), Southeast Asia (g), Europe (h), and Oceania (i). The life cycle of THg generated in rice grain in the different regions is included in Supplementary Data 1. Data are for 2013.
Global biotransport of methylmercury through the international rice trade. a shows the major MeHg flows (>0.5 kg yr−1) induced by the international rice trade between the regions. b–e show the top ten partner countries of the major rice MeHg-exporting countries, i.e., India (b), the United States (c), Vietnam (d), and Thailand (e), respectively. Data are for 2013. Error bars represent the interquartile ranges (50% confidence intervals).
Among the different countries, India exported the most MeHg through the international rice trade, i.e., 62 kg in 2013 (Fig. 5b), followed by the United States (23 kg, North America, Fig. 5c), Vietnam (18 kg, Fig. 5d), and Thailand (17 kg, Fig. 5e). The MeHg imports through the rice trade in India, Thailand, and Vietnam were <0.5 kg, and these countries were identified as significant net sources of global MeHg exports. Accordingly, the MeHg exposure through rice consumption for inhabitants of the four major exporters listed above was mitigated by 11%, 54%, 28%, and 24%, respectively. The total MeHg exported by these four countries accounted for 78% of the global exports in 2013. Interestingly, in contrast to other countries, the amount of MeHg generated in rice grain in the United States was not high (Fig. 1b). More than 61% of the MeHg in the United States was exported to other countries in 2013, which might be due to the low rice consumption rate in this country. In 2013, Benin (Africa), Iran (Central Asia), Saudi Arabia (Central Asia), and Senegal (Africa) consumed significant amounts of MeHg from India due to the rice trade (7.6, 6.7, 6.7, and 5.8 kg, respectively), while China also consumed 7.1 kg MeHg from Vietnam, and Mexico (North America) consumed 4.9 kg MeHg from the United States (Fig. 5b–e).
We found that underdeveloped countries might face a relatively high level of MeHg exposure through rice consumption, while developed countries might have a lower level of exposure, based on the significant negative relationship between the MeHg intake rate and the gross domestic product of each country (p < 0.01, Fig. 6a). Overall, the correlation coefficient (R = −0.33) was low because, at the individual level, food choices obviously play a critical role in determining an individual's MeHg exposure and associated risk. In addition to countries that use rice as a staple food, such as those in Asia, the relationship was also significant in other regions, such as America (R = −0.36, p = 0.038) and Oceania (R = −0.81, p < 0.01), but it lacked significance in Africa (R = −0.29, p = 0.055) and was positive in Europe (R = 0.48, p < 0.01). One potential explanation is that many developed countries in Europe and North America use wheat as a staple food, and rice consumed in Europe is mainly imported from other regions (Fig. 4). Another explanation is that, owing to the improvement of living standards in many countries, consumption of meat products has gradually increased and has partly replaced the traditional staple food. For instance, the per capita consumption rate of rice in Japan decreased by 47% in the period from 1961 to 2013, while the per capita consumption rates of meat products such as pork and poultry rapidly increased by factors of 8.5 and 13, respectively, during the same period, and the consumption rate of fish products peaked in 198823. The per capita consumption rates of rice in some countries, such as Brazil (South America) and China, increased initially but then decreased23. Future studies that further examine this initial finding would be desirable.
Potential driving factors of human methylmercury intake. a Relationship of the per capita probable weekly intake (PWI) of MeHg through rice with the gross domestic product; b relationship of the PWI of MeHg through rice with the amount of MeHg in rice grain; c relationship of the PWI of MeHg through rice with the international rice trade. The gross domestic product data were obtained from the World Bank (http://www.worldbank.org/). Sample size (n) = 163. The sizes of the dots in the figure represent the population densities of the countries.
There is no doubt that inhabitants of countries with a high density of rice production face a relatively high per capita MeHg intake level due to rice consumption (R = 0.71, p < 0.01, Fig. 6b). The intake is relatively low in other countries that have a high percentage of rice export and a low rice consumption rate, such as the United States. We further found that rice imports could aggravate domestic human MeHg exposure through rice consumption (R = −0.40, p < 0.01), especially for inhabitants of some major rice-importing countries, such as Cuba (North America), Liberia (Africa), and the United Arab Emirates (Central Asia), where the trade deficits of rice are high (Fig. 6c). For countries that have a relatively high rice consumption rate and rice production density (e.g., Bangladesh, the Philippines, Nepal, and Indonesia), the impacts of MeHg exposure caused by the international rice trade were relatively small. MeHg exposure differs when the rice trade balance shows a surplus. The exposure was low for inhabitants of Paraguay (South America) and Uruguay (South America) in 2013 because most of their rice was exported. Although Vietnam and Thailand are major rice-exporting countries in Asia, their exposure was still high because rice is a staple food in these two countries.
Global biotransport of mercury associated with rice plants
We summarized the global life cycle of Hg associated with rice plants from its production to its final consumption (Fig. 7a). In total, 180 (94–410) Mg of THg and 3.6 (1.8–7.1) Mg of MeHg were generated in rice plants in 2013, a significant increase from 69 and 1.1 Mg, respectively, in 1961. Although the amounts of THg and MeHg generated in rice grain were lower than those in seafood, rice consumption could still be a globally significant exposure source for humans (consumed as food, 4.2 and 1.4 Mg for THg and MeHg, respectively, in 2013) and livestock (supplied as feed, 0.28 and 0.10 Mg), and possibly also for wildlife (losses during transportation, 0.26 and 0.093 Mg). According to the results of material flow analyses among different regions (Fig. 4), in 2013, 16% of MeHg in rice grain was supplied as feed for livestock in Southeast Asia, followed by East Asia (10%) and Europe (5.4%). It is advisable to further investigate whether this pathway would increase aggregate MeHg exposure. In North America, 12% of MeHg in rice grain was supplied as processed commodities and was potentially available for human intake, followed by South America (8.5%) and Southeast Asia (4.3%). Apart from North America, South America, and Europe, rice stocks (and the MeHg they contained) were drawn down in many regions (range: −3.1% to −12%) in 2013, and this rice was potentially used for human consumption (Fig. 1f)23.
Global biotransport of mercury through rice and other crops. a Global biotransport of THg and MeHg through production, trade, supply, and consumption of rice grain and through different management options of rice residues; b global THg generated in crop residues in 2016. In a, fluxes of atmospheric THg emission and deposition refer to the study of Outridge et al.31; the flux of root absorption of THg from soil was calculated based on the rice THg uptake flux synthesized by Kwon et al.67; and the absorption of atmospheric Hg by rice leaves was estimated from the foliage Hg flux reported by Wang et al.68. The global harvested area of rice was obtained from the FAO23.
Substantial amounts of Hg remain in rice residues after harvest. Globally, 41 (20–89) and 0.40 (0.18–0.98) Mg of THg and MeHg, respectively, in rice residues were supplied as feed for livestock (Fig. 7a). Considering the substantial amounts of inorganic Hg sequestered in the stems and leaves of rice plants, this Hg might present a potential risk to livestock and therefore indirectly contribute to human exposure. In addition, 39 (17–75) and 0.38 (0.16–0.95) Mg of THg and MeHg, respectively, were transported back to cropland as fertilizer, which might be a reasonable way to return this portion of Hg to the soil. However, increasing evidence suggests that incorporating crop residues into paddy soils could enhance MeHg accumulation in rice grain26,27. Substantial amounts of rice residues are used for domestic purposes (some potentially as domestic fuel) and as industrial fuel20,28. Therefore, these two pathways are potential THg sources to the atmosphere (up to 50 Mg in 2016, Fig. 7a). Owing to the lack of detailed information regarding the use of rice residues as industrial and domestic fuels in different countries, their total amount and the spatial distribution of the resulting THg emissions to the atmosphere remain unknown.
In this study, we quantified that, in 2016, 11 Mg (5.2–20 Mg) of THg was emitted from rice residue burning in fields worldwide. Nevertheless, this flux might be underestimated; we discuss this issue in more detail later. Overall, India, China, Bangladesh, and Indonesia contributed 71% of the total flux (Fig. 1l). The flux of THg emissions from rice residue burning generally increased in Africa, Central Asia, South Asia, and Southeast Asia (4.8-, 6.6-, 1.6-, and 2.0-fold increases from 1961 to 2016), while it became stable in North America after 1981 and peaked in South America, East Asia, Europe, and Oceania in 1976, 1977, 1987, and 2001, respectively (Fig. 2l). The percentages of rice residue burning were typically high in Africa and Central Asia, especially in the Congo (15% in 2016), Mozambique (15%), and Gambia (14%, Supplementary Fig. 9). Except in Central Asia and Oceania, these percentages have decreased over the past five decades.
We further quantified the fate of THg that was sequestered in rice residues in India, China, Thailand, and the Philippines (Supplementary Fig. 10) based on available survey data for these countries28,29. These four countries contributed 62% of the global THg accumulation in rice residues in 2016 (Fig. 1j). In India, most rice residues were utilized as feed and for thatching. In China, 42% of the THg in rice residues was transported back to cropland as fertilizer. A previous study showed that large amounts of rice residues were burned in the field in Thailand and the Philippines (48% and 95%, respectively28), and 1.4 and 2.5 Mg THg would be subsequently emitted into the atmosphere. However, data from the Food and Agriculture Organization (FAO) showed that 8.5% and 7.3% of rice residues in Thailand and the Philippines, respectively, were burned in the field23. Similar data discrepancies also exist for India (11% and 7.5% from the literature and FAO data, respectively) and China (6.2% and 4.9%, respectively)23,28. Based on the literature, 17% (mass-weighted average) of THg could be released into the atmosphere through rice residue burning in these four countries, which would amount to 20 Mg yr−1, 3-fold higher than the value calculated from the FAO data. In either case, we found that the present contribution of rice residue burning in the field to global THg emissions was limited, compared with the 600 Mg yr−1 of global annual THg emissions from biomass burning30,31.
Motivated by the substantial THg sequestered in rice residues during cultivation, we further quantified that 460 Mg (190–1100 Mg) of THg was globally sequestered in crop residues in 2016 (Fig. 7b), 2.8-fold the amount in 1961. Substantial THg (160 Mg in 2016) was also sequestered in wheat residues. This result suggests that crop residues, especially rice and wheat residues, are important biovectors induced by human activities. The fate of this portion of THg in the terrestrial ecosystem should be considered in the future.
Overall, our analysis showed that rice consumption could be a significant dietary source of MeHg globally, even for inhabitants of Africa, North America, and South America. Unexpectedly, countries in South and Southeast Asia, such as Bangladesh, the Philippines, and Nepal, are primary hotspots of MeHg exposure worldwide due to their high rice consumption. We found that underdeveloped countries face a relatively high level of MeHg exposure through rice consumption, and the contribution of rice consumption might be mitigated by economic growth. However, significant proportions of MeHg impacts are associated with the international rice trade, especially in Africa, Central Asia, and Europe, and this share has risen rapidly to date. We infer that, owing to globalization, MeHg exposure induced by rice imports will continue to increase for the foreseeable future.
Many modeling studies have focused on soil re-emission, assuming that plant litter would become a part of the surface soil during decomposition, and have thus neglected the role of vegetation in the global Hg cycle. Nevertheless, increasing evidence suggests that vegetation plays an important role in connecting the atmospheric and edaphic Hg cycles and emphasizes the importance of seasonal and spatial variability in vegetation uptake of gaseous elemental Hg to the global Hg balance10,32. We found that, at the global scale, crop plants serve as an important anthropogenic biovector and sequester a substantial amount of THg from the atmosphere and pedosphere. This component of THg could be re-emitted to the environment through burning or could be laterally transported and pose risks to other biological systems, but global Hg models have not yet been able to determine its fate. Considering that the leaves of plants sequester at least 1000 Mg of atmospheric Hg in aboveground biomass per year33, the Hg pools of natural vegetation and anthropogenic biovectors related to climate change and land-use change should receive detailed consideration in the global biogeochemical cycle of Hg.
Rice cultivation practices may influence THg cycling and MeHg production in paddy soil. For example, rice plants cultivated at high densities can decrease photo-demethylation of MeHg in soil14,34. Regarding treatments of rice residues in different regions, burning the residues could increase Hg emissions into the atmosphere (Fig. 7a), while residues that degrade in paddy soils could enhance MeHg accumulation in rice grain, especially in Asia26,27. Globally, 95% of rice acreage is cultivated under irrigated conditions35, while >90% of rice is cultivated in Asia23. Freshwater resources are stressed owing to increased water demand in Asia, and alternating wetting and drying cultivation practices have replaced continuous flooding of paddy soil since the 1970s36, which could lead to substantial MeHg pulses after fields are dried and re-flooded37. Nevertheless, an important knowledge gap remains regarding whether the alternating wetting and drying cultivation practices could lead to increased accumulation of MeHg from soil to rice grains, especially in different regions37,38. This is because the impacts of rice cultivation methods on MeHg accumulation in rice grain have received relatively little attention to date. In the present study, we used THg and MeHg concentrations in rice grain and residues in different regions to directly estimate THg and MeHg cycling and human MeHg intake, so differences in rice cultivation methods should not increase the uncertainty of the current results. Nevertheless, as suggested by a previous study, it would be desirable to further investigate the impacts of cultivation practices on MeHg accumulation in rice grain in different regions and to develop separate cultivation practices for Hg-polluted and non-polluted sites14.
This study represents a first attempt to quantitatively evaluate the global THg cycle and inhabitant MeHg exposure continuum through the production and trade of rice, but it has some major limitations and uncertainties. Similar to previous studies16,18, the age and socio-economic status of people were not considered in the MeHg intake modeling because of the difficulties in obtaining such statistical information at broad scales. However, these factors could be important to human MeHg exposure, as shown in some published studies39,40. Age and socio-economic status should be considered in the future, when more statistical data become available. In the present study, Monte Carlo simulation was applied to analyze the robustness of the fluxes of THg and MeHg and subsequent human health impacts, and the interquartile range was used to quantify the uncertainty. The uncertainty of the global and country-level results is provided in Supplementary Figs. 8, 11, and 12. The overall uncertainty of the results in this study is not low, especially for the amounts of THg and MeHg generated in rice residues (range: −51% to 100% and −59% to 140%, respectively) and MeHg-related health impacts (range: −50% to 67% and −39% to 56% for IQ decreases and fatal heart attacks, respectively), which are driven by the relative sparsity and large variability in measured Hg concentrations in rice. For example, for the major rice-producing countries Thailand and Pakistan, the uncertainties in the amounts of THg generated in rice residues ranged from −59% to 140% and from −57% to 130%, respectively. Although India potentially has high IQ decreases and fatal heart attacks associated with MeHg intake through rice consumption, substantial uncertainties existed, i.e., from −56% to 70% and from −49% to 66%, respectively. This is because Hg exposure through rice ingestion has received relatively little attention to date, particularly in geographic regions outside China14. This circumstance makes it difficult to accurately evaluate global Hg exposure and related health impacts through rice consumption, especially in some major rice-producing countries, such as India. It is difficult to estimate Hg accumulation in biota in a time series analysis, and thus we used modeling data to estimate the trend of Hg accumulation in rice plants, following previous studies41,42; this choice might have increased the uncertainty of the results. Hence, our results should be further updated when better estimation methodologies are available in the future, and future investigation of the geographic distribution of rice MeHg concentration in India and other countries is urgently needed.
At the global scale, approximately 98% of the human population could potentially consume rice, and more than half depends on rice as a staple food23, although MeHg exposure through rice consumption is lower than that from fish products in many regions. Indeed, the consumption rates of marine fish are also high in South and Southeast Asia, especially for inhabitants of the Philippines, Malaysia, and Indonesia23. We are concerned about the joint ingestion of rice as a staple food and marine fish as a major protein source in these countries because the inhabitants might be at a higher MeHg risk than the other populations of the world, which is a particular problem for MeHg-susceptible populations, such as pregnant women.
In conclusion, we present the first attempt to quantitatively evaluate the global Hg cycle and inhabitant MeHg exposure continuum via the production, trade, and consumption of rice. Our analysis indicates that a rapid increase in rice cultivation over the past six decades has resulted in substantial amounts of Hg accumulating in rice plants. Rice could be a significant global dietary source of human MeHg exposure, especially in South and Southeast Asia, and globalization causes significant human MeHg exposure in Africa, Central Asia, and Europe via rice consumption attributable to the international rice trade. In addition, MeHg exposure via the joint ingestion of fish and rice is an emerging health issue in Hg-contaminated areas in Southeast Asia. We suggest that future research should consider both the joint ingestion of rice and fish and the food trade in MeHg exposure assessments, especially in Hg-contaminated areas, and that anthropogenic biovectors such as crops should be considered in the global Hg cycle.
Mercury generated in rice production
We first quantified the THg and MeHg generated in rice grain and residues in each country from 1961 to 2016. The annual production of rice grain (paddy) from 1961 to 2016 was determined using statistical data from the FAO23. As of 2018, the FAO database provided free access to rice data for 281 countries and territories and covered all FAO regional groupings from 1961 to 2016. Some data, such as the domestic supply quantity of each country, were available only up to 2013. The FAO data are a pivotal tool for evaluating THg and MeHg generated in rice plants in the present study and have been previously used in a large number of studies worldwide20,43. To ensure that the values were comparable, we used the milled-equivalent rice grain data from the database.
Concentrations of THg and MeHg in rice grain were obtained from peer-reviewed publications, as summarized in Supplementary Data 4. In accordance with a previous study16, all the samples we obtained were collected in fields, from markets that were locally supplied, or were imported from other countries. We excluded any data from the literature where the producing country of the rice was not provided. To identify human MeHg exposure through consumption of rice from different Hg-contaminated areas, THg and MeHg concentration data from rice grown in contaminated areas were also collected in this study. A Monte Carlo simulation was applied to analyze the robustness of the fluxes of THg and MeHg and subsequent human exposure through rice consumption in the present study. To avoid the influence of any extreme values, the median values (50%) of THg and MeHg concentrations were modeled based on the Monte Carlo method. In addition, we did not consider any rice concentration data with a sample size <3 or lacking measurement quality control16.
Globally, Hg measurements for rice grain are relatively scarce. Previous studies have used the relationships between THg and MeHg in fish and rice grain to estimate the concentrations of MeHg in the United States and China16,18. Based on the database we collected from published literature, we used the best fit between all THg and MeHg concentrations of rice grain suggested by the R software (version 3.3.2, R Foundation for Statistical Computing, Vienna, Austria) to model the missing THg or MeHg concentration for rice grain from non-contaminated areas (Eq. (1)) and Hg-contaminated areas (Eq. (2)) (Supplementary Fig. 13):
$${\rm{MeHg}} = \left( {0.80 \pm 1.1} \right) \times {\rm{THg}}^{\left( {0.65 \pm 0.072} \right)},\;R^2 = 0.46,p {\,} < {\,} 0.01^{ \ast\ast }$$
$${\rm{MeHg}} = \left( {0.74 \pm 1.4} \right) \times {\rm{THg}}^{\left( {0.67 \pm 0.091} \right)},R^2 = 0.61,p {\,} < {\,} 0.01^{ \ast\ast }$$
where ±SE is the standard error of the fit, which is considered one of the uncertainties in the model, and R2 is the coefficient of determination of the relationship. Both relationships are statistically significant (p < 0.01). Following the published literature41,44, we restricted observations to the year 2000 and later. We found that global THg and MeHg concentrations in rice grain followed a power function rather than a linear relation.
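The power-law fits in Eqs. (1) and (2) can be reproduced with a standard nonlinear least-squares routine. The sketch below assumes a hypothetical table of paired grain THg and MeHg concentrations (ng g−1); it illustrates the fitting step only and is not the authors' original R script.

```python
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

def power_law(thg, a, b):
    # MeHg = a * THg**b, the functional form reported in Eqs. (1)-(2)
    return a * np.power(thg, b)

# hypothetical input file with one row per rice sample and columns THg, MeHg (ng/g)
data = pd.read_csv("rice_grain_hg_noncontaminated.csv")
thg, mehg = data["THg"].to_numpy(), data["MeHg"].to_numpy()

params, cov = curve_fit(power_law, thg, mehg, p0=[0.8, 0.65])
se = np.sqrt(np.diag(cov))                 # standard errors of the fitted parameters
pred = power_law(thg, *params)
r_squared = 1.0 - np.sum((mehg - pred) ** 2) / np.sum((mehg - mehg.mean()) ** 2)

print(f"MeHg = ({params[0]:.2f} +/- {se[0]:.2f}) * THg^({params[1]:.2f} +/- {se[1]:.3f}), "
      f"R^2 = {r_squared:.2f}")
```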
To convert the point data of THg and MeHg concentrations in rice grain into raster data and to use the data for each country or territory in the world, we applied the kriging interpolation method in this study45. Kriging interpolation is a useful method for estimating the geographical distribution of variables at broad scales, based on the variogram function and spatial structure analysis. Here we applied the ordinary kriging method to depict the spatial distribution of THg and MeHg in rice grain worldwide, and the simulation was performed using ArcGIS version 10.3. The standard errors of the interpolation results were ±8.0% and ±6.0% for THg and MeHg, respectively, and were considered in the uncertainty analysis. We also compared the measurements with our modeling results, and the comparison showed that the method was reasonable (R2 = 0.86 and 0.87 for THg and MeHg, respectively, p < 0.01, Supplementary Fig. 14).
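The interpolation step can be sketched outside ArcGIS as well. The example below uses the pykrige package (an assumption; the paper used ArcGIS 10.3) with a spherical variogram, and the coordinate and concentration arrays are hypothetical placeholders for the compiled sampling sites.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# hypothetical sampling sites: longitude, latitude, grain THg concentration (ng/g)
lon = np.array([116.4, 77.2, 100.5, -95.7, 105.8, 8.6])
lat = np.array([39.9, 28.6, 13.8, 37.1, 21.0, 9.1])
thg = np.array([4.1, 2.3, 3.0, 1.2, 2.8, 3.5])

grid_lon = np.arange(-180.0, 180.0, 1.0)   # 1-degree global grid
grid_lat = np.arange(-60.0, 80.0, 1.0)

ok = OrdinaryKriging(lon, lat, thg, variogram_model="spherical")
thg_grid, kriging_variance = ok.execute("grid", grid_lon, grid_lat)

# the kriging variance can be carried into the Monte Carlo uncertainty analysis
print(thg_grid.shape, float(np.sqrt(kriging_variance).mean()))
```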
Rice residues (mostly stems and leaves) are the inedible parts of the rice plant; most are left or burned in the fields after harvest. Rice residues vary widely in their properties and decomposition rates in different places. Rather than performing direct measurement, researchers prefer to estimate the mass of rice residue yield in each country based on the straw/grain ratio20,46. This ratio for rice is 1.5 on average, and the range of this ratio is large (0.75–2.5), which could increase the uncertainty. Rather than using this ratio, we estimated the mass of rice residues based on the FAO database of the total nitrogen content in rice residues (Mg yr−1) in each country. Based on the total nitrogen content in rice residues in the published literature (see Supplementary Data 5), we determined that the average total nitrogen content of rice residues was 6.5 ± 1.1‰ (average ± standard deviation).
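The conversion from FAO residue nitrogen to residue mass is a single division. The helper below illustrates it; the input value is hypothetical, and the 6.5‰ nitrogen content is the mean reported above.

```python
RESIDUE_N_CONTENT = 6.5e-3   # mean N mass fraction of rice residues (6.5 +/- 1.1 permille)

def residue_mass_from_nitrogen(total_residue_n_mg_yr: float) -> float:
    """Rice residue mass (Mg/yr) from FAO total nitrogen in residues (Mg N/yr)."""
    return total_residue_n_mg_yr / RESIDUE_N_CONTENT

# e.g. a country reporting 1.3e4 Mg N/yr in rice residues -> ~2.0e6 Mg residues/yr
print(f"{residue_mass_from_nitrogen(1.3e4):.2e}")
```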
Researchers have previously suggested that trace metal concentrations in rice grain and residues follow a linear relationship47,48. We examined this relationship for THg and MeHg among rice grain, stem, leaf, and residues (stem:leaf = 3:1)29, based on our dataset (Supplementary Data 6) and the best-fitting relationship obtained with the R software (Supplementary Fig. 15). We found that, in most cases, the relationships of THg and MeHg concentrations among different rice organs followed a power function rather than a linear relation:
$${\rm{Residues}}_{{\rm{THg}}} = \left( {7.5 \pm 0.34} \right) \times {\rm{Grain}}_{{\rm{THg}}} + \left( {34 \pm 38} \right),R^2 = 0.84,p {\,} < {\,} 0.01^{ \ast\ast }$$
$${\rm{Stem}}_{{\rm{THg}}} = \left( {4.3 \pm 1.3} \right) \times {\rm{Grain}}_{{\rm{THg}}}^{(0.92 \pm 0.063)},R^2 = 0.83,p {\,} < {\,} 0.01^{ \ast\ast }$$
$${\rm{Leaf}}_{{\rm{THg}}} = \left( {20 \pm 1.0} \right) \times {\rm{Grain}}_{{\rm{THg}}} + \left( {18 \pm 160} \right),R^2 = 0.91,p {\,} < {\,} 0.01^{ \ast\ast }$$
$${\rm{Leaf}}_{{\rm{THg}}} = \left( {2.1 \pm 1.4} \right) \times {\rm{Stem}}_{{\rm{THg}}}^{(1.2 \pm 0.067)},R^2 = 0.88,p {\,} < {\,} 0.01^{ \ast\ast }$$
$${\rm{Residues}}_{{\rm{MeHg}}} = \left( {0.27 \pm 1.3} \right) \times {\rm{Grain}}_{{\rm{MeHg}}}^{(1.1 \pm 0.10)},R^2 = 0.80,p {\,} < {\,} 0.01^{ \ast\ast }$$
$${\rm{Stem}}_{{\rm{MeHg}}} = \left( {0.31 \pm 1.3} \right) \times {\rm{Grain}}_{{\rm{MeHg}}}^{(0.85 \pm 0.12)},R^2 = 0.76,p {\,} < {\,} 0.01^{ \ast \ast }$$
$${\rm{Leaf}}_{{\rm{MeHg}}} = \left( {0.23 \pm 1.3} \right) \times {\rm{Grain}}_{{\rm{MeHg}}}^{(0.82 \pm 0.13)},R^2 = 0.71,p {\,} < {\,} 0.01^{ \ast\ast }$$
$${\rm{Leaf}}_{{\rm{MeHg}}} = \left( {0.73 \pm 1.1} \right) \times {\rm{Stem}}_{{\rm{MeHg}}}^{(0.95 \pm 0.079)},R^2 = 0.90,p {\,} < {\,} 0.01^{ \ast \ast }$$
All the fitting errors above were considered as the uncertainty of the model. If the literature provided the THg or MeHg concentrations of the stem and leaf of rice but not the bulk THg or MeHg concentration in rice residues, we calculated the ResiduesTHg and ResiduesMeHg as follows29:
$${\rm{Residues}}_{{\rm{THg/MeHg}}} = \frac{{(3 \times {\rm{Stem}}_{{\rm{THg/MeHg}}} + {\rm{Leaf}}_{{\rm{THg/MeHg}}})}}{4}$$
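Equation (11) is a simple 3:1 stem:leaf mass-weighted average. A minimal helper, with hypothetical example concentrations in ng g−1:

```python
def residue_concentration(stem: float, leaf: float) -> float:
    """Bulk residue THg or MeHg concentration from stem and leaf values (3:1 mass ratio)."""
    return (3.0 * stem + leaf) / 4.0

print(residue_concentration(stem=24.0, leaf=80.0))   # -> 38.0 ng/g
```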
We made primary estimates of the amounts of THg sequestered in other crop residues, including major cereals (excluding rice), legumes, oil crops, sugar crops, and tubers. The estimates of global crop residue production were based on the production of different crops and on research information on the straw/grain ratios of different crops (Supplementary Data 7)20. The THg concentrations of corn and wheat residues were obtained from Rothenberg et al. and Wang et al.49,50. THg measurements for other crop residues are very limited. Following the published literature29,46, we set the average THg concentrations (42 ng g−1, range: 1.0–180 ng g−1) for these crops based on the published data49,50,51,52,53.
It is difficult to estimate the Hg accumulation in biota using a time series analysis. Following the published literature41,42, we used regional Hg enrichment factors (relative to the 2010s) in soil simulated from a global box model and estimated the amount of Hg accumulation in rice plants in different regions. In the model, atmospheric Hg deposition (including anthropogenic sources and natural background) is the source of Hg in surface soil, and the trends of Hg concentrations in the soil and air in each region are the same. We compared our results with measurement data from the published literature (Supplementary Fig. 16), and the results showed that the Hg enrichment factors in soil and rice grain had similar trends.
Biotransport of mercury through rice-related processes
Material flow analysis is extensively used as an effective tool to provide a system-oriented view of the interlinked processes of contaminants54. Here we used it to understand the fates of THg and MeHg in rice grain in the environment and to assess the impact of the international rice trade on domestic human MeHg exposure through rice consumption. The annual balance and trade matrix of rice grain in each country from 1961 to 2013 were determined using statistical data from the FAO23. In the calculation of material flow for either THg or MeHg, we considered all anthropogenic processes involving rice grain after harvest and sun-drying, including THg and MeHg in domestic export, import, stock variation, supply as feed or seed, processing, other uses, food, and losses in transportation of rice, with a final step of human exposure in each country. We established the analysis based on the mass balance principle and ensured that the amounts of THg or MeHg in the sources were equal to the amounts in the sinks, as shown below23,54:
$$\begin{array}{c}\mathop {\sum }\limits_{jk} \left[ {{\rm{Production}}_{i,j}(x) - {\rm{Export}}_{i,j}(x) + {\rm{Stock}}\;{\rm{variation}}_{i,j}(x) + {\rm{Import}}_{i,k}(x)} \right]\\ = \mathop {\sum }\limits_j \left[ {{\rm{Feed}}_{i,j}(x) + {\rm{Seed}}_{i,j}(x) + {\rm{Processing}}_{i,j}(x)} \right.\\ \left. { + {\rm{Other}}\;{\rm{uses}}_{i,j}(x) + {\rm{Food}}_{i,j}(x) + {\rm{Losses}}_{i,j}(x)} \right]\end{array}$$
where i represents THg (i = 1, kg yr−1) or MeHg (i = 2, kg yr−1), j represents each country (or reporting country in the international trade), k represents each partner country in the international trade, and (x) is the probabilistic distribution of each variable generated from the Monte Carlo simulation. Following previous studies16,41, we did not consider rice from Hg-contaminated sites in the material flow analysis since previous studies suggested that rice from Hg-contaminated sites was locally consumed14,55, and amounts of rice produced in most Hg-contaminated sites are unknown. For example, researchers found that THg and MeHg concentrations in commercial rice were generally not high in markets across China56. A similar situation was also found in other regions, such as Europe24.
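The mass-balance constraint in Eq. (12) can be checked for every country and Monte Carlo draw before the flows are aggregated. The sketch below uses hypothetical kg yr−1 values and field names; it only illustrates the bookkeeping, not the full trade-matrix calculation.

```python
def mass_balance_residual(flows: dict) -> float:
    """Sources minus sinks of THg or MeHg in rice grain for one country (kg/yr)."""
    sources = (flows["production"] - flows["export"]
               + flows["stock_variation"] + flows["import"])
    sinks = sum(flows[k] for k in
                ("feed", "seed", "processing", "other_uses", "food", "losses"))
    return sources - sinks

example = {"production": 62.0, "export": 48.0, "stock_variation": 1.0, "import": 0.3,
           "feed": 1.2, "seed": 0.4, "processing": 0.6,
           "other_uses": 0.3, "food": 12.4, "losses": 0.4}
assert abs(mass_balance_residual(example)) < 1e-9   # the flows close the balance
```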
To calculate THg emissions from rice residue burning in fields, the amount of THg in the burned rice residues was multiplied by the combustion efficiency using the following equation46:
$${\rm{Emission}}_{{\rm{THg}}}\left( x \right) = \mathop {\sum }\limits_j \left[ {\frac{{R_j}}{M} \times C_{{\rm{THg}},j}\left( x \right) \times E \times 10^{ - 6}} \right]$$
where EmissionTHg(x) is the probabilistic distribution of global THg emissions (kg yr−1) from rice residue burning, Rj is the mass of rice residue burning (Mg yr−1) in country j, M is the moisture content (%) of rice residues (Supplementary Data 8), CTHg,j(x) is the probabilistic distribution of THg concentrations in rice residues in country j, and E is the average combustion efficiency (%) of rice residues46. It is challenging to quantify the fates of THg and MeHg in rice residues apart from burning due to the lack of statistical data. We made primary estimates and classified the amounts of THg and MeHg transported with rice residues as well as those left in the field and discussed associated impacts, based on existing investigation data from India, China, Thailand, and the Philippines28,29. These four countries contributed 62% of the global THg accumulation in rice residues in 2016.
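Equation (13) is reproduced below as written, with the burned residue mass, moisture content, residue THg concentration, and combustion efficiency as inputs; the numerical values and the unit bookkeeping (R in Mg yr−1, C in ng g−1, output in kg yr−1 per the text) are illustrative assumptions, not reported parameters.

```python
def burning_emission_thg(countries, moisture, efficiency):
    """Global THg emission (kg/yr) from in-field rice residue burning, per Eq. (13)."""
    return sum((c["burned_mass"] / moisture) * c["thg_conc"] * efficiency * 1e-6
               for c in countries)

# hypothetical burned residue masses (Mg/yr) and residue THg concentrations (ng/g)
example = [{"burned_mass": 2.0e7, "thg_conc": 30.0},
           {"burned_mass": 5.0e6, "thg_conc": 55.0}]
print(burning_emission_thg(example, moisture=0.10, efficiency=0.89))
```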
Domestic mercury exposure through rice consumption
Probable weekly intake (PWI) values for Hg (including THg and MeHg) were applied to evaluate the exposure through rice consumption of an individual inhabitant in each country. This method has been extensively used to estimate weekly intake of chemical contaminants57. Weekly human exposure to THg and MeHg was calculated based on the known THg and MeHg contents of rice grain supplied as food in each country. The calculation method is described below:
$${\rm{PWI}}_{ij}(x) = {\rm{Food}}_{i,j,k}(x)/P_j/{\rm{BW}}_l/52 \times 10^9$$
where PWIij is the probabilistic distribution of the per capita PWI of THg (i = 1, μg kg−1 week−1) or MeHg (i = 2, μg kg−1 week−1) in country j, Foodi,j,k(x) is the amount of THg (kg yr−1) or MeHg (kg yr−1) in rice grain calculated from Eq. (12) that is supplied as food in country j, Pj is the population (capita) in country j, and BWl is the average body weight (kg) in region l. In the present study, a different average body weight of the human population was used for each continent: Africa = 61 kg, Asia = 58 kg, Europe = 71 kg, North America = 81 kg, Oceania = 74 kg, and South America = 68 kg58. We calculated the global average PWI of THg and MeHg through rice consumption based on the population-weighted method. Because rice from Hg-contaminated sites was locally consumed14,55, we separately identified human MeHg exposure through consumption of rice from different types of Hg-contaminated areas, such as gold- or Hg-mining areas, areas close to smelting facilities, and other significant industrial pollution areas (Supplementary Data 4) based on Eq. (14). If the literature did not report the standard deviation of the concentration data, a 65% uncertainty was assumed41. Owing to lack of rice consumption rates in most of the contaminated areas in the literature, we used the rice consumption rate of the country where a contaminated area was located19.
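Equation (14) reduces to a unit conversion from the national MeHg (or THg) mass supplied as rice food to a per capita weekly intake. A minimal sketch with hypothetical inputs:

```python
def probable_weekly_intake(food_hg_kg_yr: float, population: int,
                           body_weight_kg: float) -> float:
    """Per capita PWI in ug per kg body weight per week, following Eq. (14)."""
    return food_hg_kg_yr / population / body_weight_kg / 52.0 * 1e9

# e.g. ~12 kg MeHg/yr supplied as rice food, 50 million people, 58 kg body weight (Asia)
print(round(probable_weekly_intake(12.0, 50_000_000, 58.0), 3))   # ~0.08 ug/kg/week
```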
Human health impacts associated with methylmercury intake
The health impacts associated with dietary MeHg intake include neurotoxicity and cardiovascular impacts2,3,4. The neurotoxicity impact of MeHg would result in IQ decreases in fetuses, and the impact could persist into adulthood2,59. The association of cardiovascular outcomes and MeHg intake has been proposed for nearly two decades60. Nevertheless, significant uncertainties exist since inconsistent outcomes are still reported. Recently, researchers found that MeHg could diminish the cardiovascular protective effect of omega-3 polyunsaturated fatty acids and increase the risk of cardiovascular disease61. In addition, the United States Environmental Protection Agency has suggested sufficient evidence for the dose–response relationship between cardiovascular impacts and MeHg intake3,18. Following the published literature, we included fatal heart attacks as a MeHg-related health impact in the present study18,62. The IQ decreases and fatal heart attacks associated with MeHg intake were calculated based on the methods of Rice et al.63:
$$\Delta {\rm{IQ}}_j(x) = \gamma \times \lambda \times \beta \times \left[ {\frac{{\Delta {\rm{PWI}}_j(x) \times {\rm{BW}}_l}}{7}} \right]$$
$$\Delta {\rm{CF}}_j\left( x \right) = {\rm{Pf}}_j \times \omega \times \left\{ {1 - {\rm{exp}}\left[ { - \varphi \times \lambda \times \beta \times \frac{{\Delta {\rm{PWI}}_j\left( x \right) \times {\rm{BW}}_l}}{7}} \right]} \right\}$$
where ΔIQj(x) represents the probabilistic distribution of the changes in IQ points in country j; γ, λ, and β are the slopes of the linear relationships linking the child's IQ to the mother's hair MeHg (IQ points per μg Hg g−1 hair), hair MeHg to blood MeHg (μg Hg g−1 hair per μg Hg L−1 blood), and blood MeHg to MeHg intake (μg Hg L−1 blood per μg Hg day−1), respectively. ΔPWIj(x) is the probabilistic distribution of the change in the PWI of MeHg. In Eq. (16), ΔCFj(x) is the probabilistic distribution of the change in the number of deaths from fatal heart attacks associated with MeHg intake in country j; Pfj is the number of deaths due to fatal heart attacks for people aged ≥30 years in country j, which was determined using statistical data from the World Health Organization (WHO). ω is the probability that reflects the uncertainty of the association between the hair Hg level and heart attack risk, which is a probability of one-third for the causal epidemiological associations (i.e., ω = 1) and two-thirds for no causal associations (i.e., ω = 0)63; and φ is the heart attack–hair Hg coefficient that reflects the relationship between hair MeHg levels and fatal heart attack risks (risk per μg Hg g−1 hair). The values for the coefficients γ, λ, β, and φ are based on epidemiologic studies and are taken from the study of Rice et al.63, and associated uncertainties are considered in the present estimate. In the present study, we quantified IQ decreases in fetuses and fatal heart attacks in general populations associated with the intake of MeHg in rice in different countries. We also quantified the IQ decreases in fetuses associated with the intake of MeHg in rice from Hg-contaminated areas. We did not quantify the fatal heart attacks in Hg-contaminated areas since the amounts of rice produced in most of the Hg-contaminated sites are unknown. We included the time lag between MeHg intake and the response of fatal heart attacks in the estimation. The central tendency of the lag is estimated as 6 (range: 2–12) years63. Thus it is acceptable for the Pf data collected in the year 2016, the newest data published by WHO, to represent the impacts of MeHg intake in 2013.
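Equations (15) and (16) can be written as two small functions. The dose-response coefficients γ, λ, β, φ and the causality weight ω must be drawn from the distributions in Rice et al.63; the numbers below are placeholders for illustration, not the published values.

```python
import math

def iq_decrease(d_pwi, body_weight, gamma, lam, beta):
    """IQ points lost per fetus for a change d_pwi (ug/kg bw/week) in MeHg intake (Eq. 15)."""
    daily_intake = d_pwi * body_weight / 7.0           # ug Hg per day
    return gamma * lam * beta * daily_intake

def fatal_heart_attacks(d_pwi, body_weight, deaths_over_30, omega, phi, lam, beta):
    """Additional fatal heart attacks attributable to the intake change (Eq. 16)."""
    daily_intake = d_pwi * body_weight / 7.0
    return deaths_over_30 * omega * (1.0 - math.exp(-phi * lam * beta * daily_intake))

# placeholder coefficients (NOT the values of Rice et al.); one deterministic draw
print(iq_decrease(0.1, 58.0, gamma=0.3, lam=0.2, beta=0.06))
print(fatal_heart_attacks(0.1, 58.0, deaths_over_30=1.0e5,
                          omega=1.0 / 3.0, phi=0.07, lam=0.2, beta=0.06))
```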
Uncertainty analysis
As described above, Monte Carlo simulation was applied to analyze the robustness of the fluxes of THg and MeHg and subsequent human health impacts, and we ran the models 10,000 times in accordance with previous studies45. The distribution of THg and MeHg concentrations in rice grain was confirmed to be log-normal, as found previously16. Because the FAO database includes official, semi-official, estimated, and calculated data23, uniform distributions with a fixed coefficient of deviation of 30% were set in the simulation64,65. Probabilistic distribution of coefficients in the health impact assessments refers to the study of Rice et al.63. Median values and 50% confidence intervals (interquartile range, range from 25% to 75%) of the results were generated in MATLAB (version R2017a) to quantify the uncertainties66. Significance was determined at the p < 0.05 and <0.01 levels.
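The uncertainty propagation can be sketched as follows: log-normal draws for the rice Hg concentration, uniform ±30% draws for the FAO mass data, 10,000 iterations, and the median with the interquartile range as the reported statistic. All numerical inputs below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# log-normal grain MeHg concentration (ng/g); parameters are illustrative only
conc = rng.lognormal(mean=np.log(2.5), sigma=0.6, size=N)
# FAO rice-as-food supply (Mg/yr) with a uniform +/-30% spread around a nominal value
supply = rng.uniform(0.7, 1.3, size=N) * 2.0e7

flux_kg = conc * supply * 1e-6      # ng/g x Mg -> kg of MeHg supplied as food per year

median = np.median(flux_kg)
q25, q75 = np.percentile(flux_kg, [25, 75])
print(f"MeHg in rice supplied as food: {median:.0f} kg/yr (IQR {q25:.0f}-{q75:.0f})")
```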
The authors declare that all data supporting the present study are available at the FAO website (http://www.fao.org), the World Bank website (http://www.worldbank.org/), the WHO website (https://www.who.int/), or within the article and the Supplementary Information. All data generated during this study, including source data underlying figures, are included in the Supplementary Dataset (Supplementary Data 1–19).
Code availability
All computer codes generated during this study are available from the corresponding authors upon reasonable request.
Driscoll, C. T., Mason, R. P., Chan, H. M., Jacob, D. J. & Pirrone, N. Mercury as a global pollutant: sources, pathways, and effects. Environ. Sci. Technol. 47, 4967–4983 (2013).
Grandjean, P. et al. Cognitive deficit in 7-year-old children with prenatal exposure to methylmercury. Neurotoxicol. Teratol. 19, 417–428 (1997).
Roman, H. A. et al. Evaluation of the cardiovascular effects of methylmercury exposures: current evidence supports development of a dose–response function for regulatory benefits analysis. Environ. Health Perspect. 119, 607–614 (2011).
Clarkson, T. W., Magos, L. & Myers, G. J. The toxicology of mercury—current exposures and clinical manifestations. N. Engl. J. Med. 349, 1731–1737 (2003).
Mason, R. P., Fitzgerald, W. F. & Morel, F. M. The biogeochemical cycling of elemental mercury: anthropogenic influences. Geochim. Cosmochim. Acta 58, 3191–3198 (1994).
Obrist, D. et al. A review of global environmental mercury processes in response to human and natural perturbations: changes of emissions, climate, and land use. Ambio 47, 116–140 (2018).
Nriagu, J. O. & Pacyna, J. M. Quantitative assessment of worldwide contamination of air, water and soils by trace metals. Nature 333, 134–139 (1988).
Streets, D. G. et al. Total mercury released to the environment by human activities. Environ. Sci. Technol. 51, 5969–5977 (2017).
Pirrone, N. et al. Global mercury emissions to the atmosphere from anthropogenic and natural sources. Atmos. Chem. Phys. 10, 5951–5964 (2010).
Jiskra, M. et al. A vegetation control on seasonal variations in global atmospheric mercury concentrations. Nat. Geosci. 11, 244–250 (2018).
FDA. Total Diet Study Statistics on Element Results, Revision 1, 1991–1998 (Food and Drug Administration (FDA), Washington, DC, 2000).
MacIntosh, D. L., Spengler, J. D., Ozkaynak, H., Tsai, L.-h & Ryan, P. B. Dietary exposures to selected metals and pesticides. Environ. Health Perspect. 104, 202–209 (1996).
Sunderland, E. M., Li, M. & Bullard, K. Decadal changes in the edible supply of seafood and methylmercury exposure in the United States. Environ. Health Perspect. 126, https://doi.org/10.1289/EHP2644 (2018).
Rothenberg, S. E., Windham-Myers, L. & Creswell, J. E. Rice methylmercury exposure and mitigation: a comprehensive review. Environ. Res. 133, 407–423 (2014).
Podar, M. et al. Global prevalence and distribution of genes and microorganisms involved in mercury methylation. Sci. Adv. https://doi.org/10.1126/sciadv.1500675 (2015).
Liu, M. et al. Impacts of farmed fish consumption and food trade on methylmercury exposure in China. Environ. Int. 120, 333–344 (2018).
Wiedmann, T. & Lenzen, M. Environmental and social footprints of international trade. Nat. Geosci. 11, 314–321 (2018).
Giang, A. & Selin, N. E. Benefits of mercury controls for the United States. Proc. Natl Acad. Sci. 113, 286–291 (2016).
Zhang, H., Feng, X., Larssen, T., Qiu, G. & Vogt, R. D. In inland China, rice, rather than fish, is the major pathway for methylmercury exposure. Environ. Health Perspect. 118, 1183–1188 (2010).
Lal, R. World crop residues production and implications of its use as a biofuel. Environ. Int. 31, 575–584 (2005).
Lavoie, R. A., Bouffard, A., Maranger, R. & Amyot, M. Mercury transport and human exposure from global marine fisheries. Sci. Rep. https://doi.org/10.1038/s41598-41018-24938-41593 (2018).
Al-Saleh, I. & Abduljabbar, M. Heavy metals (lead, cadmium, methylmercury, arsenic) in commonly imported rice grains (Oryza sativa) sold in Saudi Arabia and their potential health risk. Int. J. Hyg. Environ. Health 220, 1168–1178 (2017).
FAO. Food and agriculture data. Fisheries and Aquaculture Department (FAO) web site: www.fao.org/home/en (2018).
Brombach, C.-C. et al. Methylmercury varies more than one order of magnitude in commercial European rice. Food Chem. 214, 360–365 (2017).
Basu, N. et al. A state-of-the-science review of mercury biomarkers in human populations worldwide between 2000 and 2018. Environ. Health Perspect. 126, https://doi.org/10.1289/EHP3904. (2018).
Windham-Myers, L. et al. Mercury cycling in agricultural and managed wetlands of California, USA: seasonal influences of vegetation on mercury methylation, storage, and transport. Sci. Total Environ. 484, 308–318 (2014).
Zhu, H., Zhong, H. & Wu, J. Incorporating rice residues into paddy soils affects methylmercury accumulation in rice. Chemosphere 152, 259–264 (2016).
Gadde, B., Menke, C. & Wassmann, R. Rice straw as a renewable energy source in India, Thailand, and the Philippines: overall potential and limitations for energy contribution and greenhouse gas mitigation. Biomass Bioenerg. 33, 1532–1546 (2009).
Huang, X. et al. Mercury emissions from biomass burning in China. Environ. Sci. Technol. 45, 9442–9448 (2011).
Friedli, H., Arellano, A., Cinnirella, S. & Pirrone, N. Initial estimates of mercury emissions to the atmosphere from global biomass burning. Environ. Sci. Technol. 43, 3507–3513 (2009).
Outridge, P. M., Mason, R., Wang, F., Guerrero, S. & Heimbürger-Boavida, L. Updated global and oceanic mercury budgets for the United Nations Global Mercury Assessment 2018. Environ. Sci. Technol. 52, 11466–11477 (2018).
Obrist, D. et al. Tundra uptake of atmospheric elemental mercury drives Arctic mercury pollution. Nature 547, 201–204 (2017).
Obrist, D. Atmospheric mercury pollution due to losses of terrestrial carbon pools? Biogeochemistry 85, 119–123 (2007).
Alpers, C. N. et al. Mercury cycling in agricultural and managed wetlands, Yolo Bypass, California: spatial and seasonal variations in water quality. Sci. Total Environ. 484, 276–287 (2014).
Kirk, G. The Biogeochemistry of Submerged Soils (John Wiley & Sons, 2004).
Bouman, B. & Tuong, T. P. Field water management to save water and increase its productivity in irrigated lowland rice. Agric. Water Manag. 49, 11–30 (2001).
Marvin-DiPasquale, M., Agee, J., Bouse, R. & Jaffe, B. Microbial cycling of mercury in contaminated pelagic and wetland sediments of San Pablo Bay, California. Environ. Geol. 43, 260–267 (2003).
Rothenberg, S. E. et al. Characterization of mercury species in brown and white rice (Oryza sativa L.) grown in water-saving paddies. Environ. Pollut. 159, 1283–1289 (2011).
Mahaffey, K. R., Clickner, R. P. & Jeffries, R. A. Adult women's blood mercury concentrations vary regionally in the United States: association with patterns of fish consumption (NHANES 1999–2004). Environ. Health Perspect. 117, 47–53 (2008).
Karagas, M. R. et al. Evidence on the human health effects of low-level methylmercury exposure. Environ. Health Perspect. 120, 799–806 (2012).
Amos, H. M. et al. Global biogeochemical implications of mercury discharges from rivers and sediment burial. Environ. Sci. Technol. 48, 9514–9522 (2014).
Chen, L. et al. Historical and future trends in global source-receptor relationships of mercury. Sci. Total Environ. 610, 24–31 (2018).
Muthayya, S., Sugimoto, J. D., Montgomery, S. & Maberly, G. F. An overview of global rice production, supply, trade, and consumption. Ann. NY Acad. Sci. 1324, 7–14 (2014).
Liu, M. et al. Mercury export from mainland China to adjacent seas and its influence on the marine mercury balance. Environ. Sci. Technol. 50, 6224–6232 (2016).
Liu, M. et al. Impact of water-induced soil erosion on the terrestrial transport and atmospheric emission of mercury in China. Environ. Sci. Technol. 52, 6945–6956 (2018).
Streets, D., Yarber, K., Woo, J. H. & Carmichael, G. Biomass burning in Asia: annual and seasonal estimates and atmospheric emissions. Glob. Biogeochem. Cycle 17, 10-11-20 (2003).
Jung, M. C. & Thornton, I. Environmental contamination and seasonal variation of metals in soils, plants and waters in the paddy fields around a Pb-Zn mine in Korea. Sci. Total Environ. 198, 105–121 (1997).
Wang, D., Wei, Z., Tang, S. & Qi, Z. Distribution of selenium and cadmium in soil-rice system of selenium-rich area in Hainan, China. Pak. J. Pharm. Sci. 27, 1633–1639 (2014).
Rothenberg, S., Du, X., Zhu, Y.-G. & Jay, J. The impact of sewage irrigation on the uptake of mercury in corn plants (Zea mays) from suburban Beijing. Environ. Pollut. 149, 246–251 (2007).
Wang, S. et al. Accumulation, transfer, and potential sources of mercury in the soil-wheat system under field conditions over the Loess Plateau, northwest China. Sci. Total Environ. 568, 245–252 (2016).
Obrist, D., Moosmüller, H., Schürmann, R., Chen, L.-W. A. & Kreidenweis, S. M. Particulate-phase and gaseous elemental mercury emissions during biomass combustion: controlling factors and correlation with particulate matter emissions. Environ. Sci. Technol. 42, 721–727 (2007).
Friedli, H., Radke, L., Prescott, R., Hobbs, P. & Sinha, P. Mercury emissions from the August 2001 wildfires in Washington State and an agricultural waste fire in Oregon and atmospheric mercury budget estimates. Glob. Biogeochem. Cycle 17, 8-1-8 (2003).
Liu, W., Shen, L., Liu, J., Wang, Y. & Li, S. Uptake of toxic heavy metals by rice (Oryza sativa L.) cultivated in the agricultural soil near Zhengzhou City, People's Republic of China. Bull. Environ. Contam. Toxicol. 79, 209–213 (2007).
Allesch, A. & Brunner, P. H. Material flow analysis as a tool to improve waste management systems: the case of Austria. Environ. Sci. Technol. 51, 540–551 (2016).
Han, J. et al. Health risk assessment of inorganic mercury and methylmercury via rice consumption in the urban city of Guiyang, Southwest China. Int. J. Environ. Res. Public Health 16, 216–222 (2019).
Zhao, H. et al. Mercury contents in rice and potential health risks across China. Environ. Int. 126, 406–412 (2019).
WHO. Guidance for Identifying Populations at Risk from Mercury Exposure (World Health Organization (WHO), 2008).
Walpole, S. C. et al. The weight of nations: an estimation of adult human biomass. BMC Public Health 12, 439–445 (2012).
Debes, F., Weihe, P. & Grandjean, P. Cognitive deficits at age 22 years associated with prenatal exposure to methylmercury. Cortex 74, 358–369 (2016).
Ha, E. et al. Current progress on understanding the impact of mercury on human health. Environ. Res. 152, 419–433 (2017).
Hu, X. F., Laird, B. D. & Chan, H. M. Mercury diminishes the cardiovascular protective effect of omega-3 polyunsaturated fatty acids in the modern diet of Inuit in Canada. Environ. Res. 152, 470–477 (2017).
Chen, L. et al. Trans-provincial health impacts of atmospheric mercury emissions in China. Nat. Commun. https://doi.org/10.1038/s41467-41019-09080-41466 (2019).
Rice, G. E., Hammitt, J. K. & Evans, J. S. A probabilistic characterization of the health benefits of reducing methyl mercury intake in the United States. Environ. Sci. Technol. 44, 5216–5224 (2010).
AMAP/UNEP. Technical Background Report for the Global Mercury Assessment 2013. pp vi-236 (Arctic Monitoring and Assessment Programme/United Nations Environment Programme, AMAP/UNEP, 2013).
Liu, M. et al. Mercury release to aquatic environments from anthropogenic sources in China from 2001 to 2012. Environ. Sci. Technol. 50, 8169–8177 (2016).
Shen, H. et al. Global atmospheric emissions of polycyclic aromatic hydrocarbons from 1960 to 2008 and future predictions. Environ. Sci. Technol. 47, 6415–6424 (2013).
Kwon, S., Selin, N., Giang, A., Karplus, V. & Zhang, D. Present and future mercury concentrations in Chinese rice: insights from modeling. Glob. Biogeochem. Cycle 32, 437–462 (2018).
Wang, X. et al. Emission-dominated gas exchange of elemental mercury vapor over natural surfaces in China. Atmos. Chem. Phys. 16, 11125–11143 (2016).
We very much appreciate the editor's and reviewers' insightful comments and suggestions on the paper. This work was funded by the National Natural Science Foundation of China (Nos. 41630748, 41977311, 41821005, 41571484, 41571130010, and 41671492). L.C. thanks the China Postdoctoral Science Foundation Grant (2017M611492).
Present address (Maodian Liu): School of Forestry and Environmental Studies, Yale University, New Haven, CT, 06511, USA
Ministry of Education Laboratory of Earth Surface Processes, College of Urban and Environmental Sciences, Peking University, 100871, Beijing, China: Maodian Liu, Qianru Zhang, Menghan Cheng, Haoran Zhang, Shu Tao & Xuejun Wang
Department of Marine Sciences, University of Connecticut, 1080 Shennecossett Road, Groton, CT, 06340, USA: Yipeng He
Key Laboratory of Geographic Information Science (Ministry of Education), East China Normal University, 200241, Shanghai, China: Long Chen
Center for Industrial Ecology, School of Forestry and Environmental Studies, Yale University, New Haven, CT, 06511, USA: Haoran Zhang
Finance Department, Guanghua School of Management, Peking University, 100871, Beijing, China: Hanlin Cao
School of Civil and Environmental Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA: Huizhong Shen
School of Environment and Natural Resources, Renmin University of China, 100872, Beijing, China: Wei Zhang
M.L., X.W., and Q.Z. designed the research. M.L. and X.W. led the writing of the paper. M.L., Q.Z., M.C., Y.H., L.C., H.Z., and H.C. analyzed the data and generated the models. H.S., L.C., W.Z., S.T., and X.W. contributed to the interpretation of the results and edited the paper. All authors participated in the discussion and writing of this article.
Correspondence to Xuejun Wang.
Peer review information Nature Communications thanks Milena Horvat and the other anonymous reviewer(s) for their contribution to the peer review of this work.
Liu, M., Zhang, Q., Cheng, M. et al. Rice life cycle-based global mercury biotransport and human methylmercury exposure. Nat Commun 10, 5164 (2019) doi:10.1038/s41467-019-13221-2
Use one_body_integrals to know which orbitals to freeze in ElectronicStructureProblem
In exercise 5 of this year's IBM Quantum Challenge, you need to use the FreezeCoreTransformer (along with two_qubit_reduction and z2symmetry_reduction) to reduce the number of qubits to 4 and achieve a cost of 3. I managed to figure out that the optimal array to pass to the remove_orbitals parameter was [3,4]; however, I did this by experimenting with different arrays.
In the Qiskit Slack, I saw that the one-body integrals of the QMolecule are supposed to give you insight into which orbitals to freeze. However, they didn't explain how to use them to figure this out.
The molecule and one-body integrals I am working with are the following.
import numpy as np
from sympy import Matrix
from qiskit_nature.drivers import PySCFDriver  # import path may differ between qiskit_nature versions
molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
Matrix(np.round(qmolecule.one_body_integrals, 10))
$$ \displaystyle \left[\begin{array}{cccccccccccc}-4.7385372413 & 0.1075391382 & 0.1675852953 & 0.0 & 0.0 & -0.0302628413 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0\\0.1075391382 & -1.5131757719 & 0.0343466943 & 0.0 & 0.0 & -0.0680291694 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0\\0.1675852953 & 0.0343466943 & -1.1291622926 & 0.0 & 0.0 & 0.031432226 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0\\0.0 & 0.0 & 0.0 & -1.1407709359 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0\\0.0 & 0.0 & 0.0 & 0.0 & -1.1407709359 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0\\-0.0302628413 & -0.0680291694 & 0.031432226 & 0.0 & 0.0 & -0.9418187042 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0\\0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & -4.7385372413 & 0.1075391382 & 0.1675852953 & 0.0 & 0.0 & -0.0302628413\\0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.1075391382 & -1.5131757719 & 0.0343466943 & 0.0 & 0.0 & -0.0680291694\\0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.1675852953 & 0.0343466943 & -1.1291622926 & 0.0 & 0.0 & 0.031432226\\0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & -1.1407709359 & 0.0 & 0.0\\0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & -1.1407709359 & 0.0\\0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & -0.0302628413 & -0.0680291694 & 0.031432226 & 0.0 & 0.0 & -0.9418187042\end{array}\right] $$
How am I supposed to interpret this matrix to know which orbitals to freeze?
programming qiskit vqe chemistry
glS♦
epelaaez
Lab 8 explains exactly how to do this for LiH
For more information, check out Introduction to Quantum Computing and Quantum Hardware
Lecture 22 Quantum Chemistry I: Obtaining the Qubit Hamiltonian for H2 and LiH Part 1
João Galego
$\begingroup$ Thanks for your response. Looking at the lab lecture, I'm not sure I actually used the FreezeCoreTransformer correctly. Do you know if I'm doing it right? Or how should I use it taking into consideration the one body integral? $\endgroup$
– epelaaez
$\begingroup$ For context, I'm using the transformer as: problem = ElectronicStructureProblem(driver, [FreezeCoreTransformer(remove_orbitals=[3,4])]) . I guess I'm missing the part in which the guy in the lab uses freeze=[0,6] in the video. However, my understanding is that this is done when freeze_core=True (the default); is this right? $\endgroup$
$\begingroup$ Keep in mind that the lecturer uses qiskit.chemistry and not qiskit_nature - similar methods, slightly different behavior. Also, read the documentation on FreezeCoreTransformer: 1) when freeze_core is enabled, the core orbitals listed in the QMolecule are made inactive and removed; 2) additionally, unoccupied molecular orbitals can be removed via a list of indices passed to remove_orbitals. $\endgroup$
– João Galego
$\begingroup$ True, I hadn't considered that part. Anyways, I think I've figured out how it works. Thanks for your help! $\endgroup$
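To tie the comments above together, here is a minimal sketch of the reduction being discussed. It is only an illustration: the import paths assume the qiskit_nature 0.1.x layout used at the time of the challenge (they moved in later releases), and the [3, 4] indices are the ones found experimentally in the question.

from qiskit_nature.drivers import PySCFDriver
from qiskit_nature.transformers import FreezeCoreTransformer
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem

driver = PySCFDriver(atom='Li 0.0 0.0 0.0; H 0.0 0.0 1.5474')

# freeze_core=True (the default) makes the core orbitals listed in the QMolecule
# inactive and removes them (the Li 1s core here); remove_orbitals additionally
# drops the unoccupied orbitals whose indices are passed. Indices 3 and 4 are the
# two degenerate orbitals (diagonal value -1.1407709359) whose rows and columns in
# one_body_integrals show no off-diagonal coupling to the other orbitals.
transformer = FreezeCoreTransformer(freeze_core=True, remove_orbitals=[3, 4])
problem = ElectronicStructureProblem(driver, [transformer])
main_op = problem.second_q_ops()[0]  # electronic-energy operator after the reduction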
Gd-DTPA Adsorption on Chitosan/Magnetite Nanocomposites
Ie. V. Pylypchuk ORCID: orcid.org/0000-0001-5467-28391,
D. Kołodyńska2,
M. Kozioł2 &
P. P. Gorbyk1
The synthesis of the chitosan/magnetite nanocomposites is presented. Composites were prepared by co-precipitation of iron(II) and iron(III) salts by aqueous ammonia in the 0.1 % chitosan solution. It was shown that magnetite synthesis in the chitosan medium does not affect the magnetite crystal structure. The thermal analysis data showed 4.6 % of mass concentration of chitosan in the hybrid chitosan/magnetite composite. In the concentration range of initial Gd-DTPA solution up to 0.4 mmol/L, addition of chitosan to magnetite increases the adsorption capacity and affinity to Gd-DTPA complex. The Langmuir and Freundlich adsorption models were applied to describe adsorption processes. Nanocomposites were characterized by scanning electron microscopy (SEM), differential thermal analysis (DTA), Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), and specific surface area determination (ASAP) methods.
Increasing interest in multifunctional nanomaterials for biomedical application necessitates detailed investigations of processes occurring at the nanolevel. Development of such nanomaterials brings scientists closer to realization of nanorobot concept—targeted drug delivery, cell recognition, and complex therapy and diagnostics. Creation of hybrid biopolymer/mineral nanomaterials can lead to development of novel nanocomposites that are sensitive to pH, temperature, magnetic field, and other physicochemical actions [1].
Nanocomposites based on magnetite (Fe3O4) are widely used for magnetic resonance imaging (MRI) and targeted drug delivery [2–5]. A promising trend in the use of magnetic materials with a developed surface is the preparation of magnetosensitive nanocomposites with natural biopolymers (e.g., chitosan). The combination of chitosan and magnetite properties opens the way to the creation of new effective pH-controllable drug delivery and release systems with high biocompatibility. Chitosan-inorganic mineral composites have attracted researchers' attention due to their good adsorption properties, fast adsorption kinetics, and some technical advantages in handling them. Previous studies have demonstrated the great potential of chitosan-inorganic mineral composites [1–6].
There are a number of articles reporting on synthesis and properties of magnetite/chitosan nanomaterials. For instance, hydrogel, chitosan (CS) cross-linked carboxymethyl-β-cyclodextrin (CM-β-CD) polymer modified Fe3O4 magnetic nanoparticles were synthesized in [7]. Magnetic chitosan beads were synthesized by incorporating N,O-carboxymethyl chitosan-coated magnetic nanoparticles (NOCC-MNPs) into chitosan-citrate gel beads (CCGBs) for adsorbing Cu(II) ions. The maximal adsorption capacity as estimated by the Langmuir model was 294.11 mg/g [7]. A magnetic composite material composed of nanomagnetite (NMT), heulandite (HE), and cross-linked chitosan was prepared and used as an adsorbent for methylene blue (MB) and methyl orange (MO). The adsorption of MB and MO followed the pseudo-second-order kinetics, and the maximum adsorption capacities were 45.1 and 149.2 mg/g at pH 5.5, respectively [8]. The authors developed a novel chitosan/Al2O3/magnetic iron oxide nanoparticle composite acting as an adsorbent for removing MO, a model anionic dye, from aqueous solution. The adsorption isotherm was well described by the Langmuir model and showed a high MO adsorption capacity (1.27 mmol/g, i.e., 417 mg/g at 25 °C) [9].
Biofunctionalized chitosan@Fe3O4 nanoparticles were synthesized by combining Fe3O4 and CS chemically modified with PEG and lactobionic acid in one step [10]. A novel pyridinium-diethylenetriamine magnetic chitosan (PDFMC) was prepared and used for magnetic separation of Cr(VI) from aqueous solution. The PDFMC worked well on removal of Cr(VI) in any condition of acidic, neutral, and basic solutions with the capacity (q max) of 176 mg/L (at pH 3, acidic), 124 mg/L (pH 6, near-neutral), and 86 mg/L (pH 9, basic) based on the Langmuir isotherm model [11]. Cellulose grafted to nanomagnetites was found to be an efficient biopolymer composite for catalysis of Friedel-Crafts reaction between isatins and indoles, leading to selective synthesis of 3-hydroxy-3- indolylindolin-2-ones [12]. As another example, magnetic nanoparticles double-coated with different concentrations of dextran sulfate or reduced dextran and chitosan solutions were formed by layer-by-layer deposition. Chitosan-coated magnetic nanoparticles have been synthesized and developed as a highly efficient nano-adsorbent for the removal of Hg2+ ions from industrial aqueous and oily samples. The results confirmed formation of narrow-dispersed nanoparticles with a mean average diameter of about 10 nm [13].
Chitosan microspheres are the most widely studied drug delivery systems for the controlled release of drugs, such as antibiotics, antihypertensive agents, anticancer agents, proteins, peptide drugs, and vaccines [14]. In this respect, composites containing chitosan are widely used as a drug carrier for gadolinium and its biocompatible complexes (e.g., Gd-diethylenetriaminepentaacetic (DTPA), Gd-DOTA). Such substances find application as contrast agent in MRI due to paramagnetic properties of Gd3+ [15]. The conjugates of the complexes of Gd-DTPA with low-molecular-weight chitosan are described in [16–22]. In addition, gadolinium, due to its nucleus property to capture thermal neutron with release of γ-quantum's and Auger electrons, can be used in the neutron capture therapy (NCT) of cancer [22–31].
Unfortunately, there is no relevant information about Gd-DTPA adsorption on magnetite-chitosan composites. The main goal of this article is to obtain magnetite-chitosan nanocomposites and obtain information with respect to the Gd-DTPA adsorption on these materials.
Development of hybrid magnetic chitosan nanocomposites and peculiarities of Gd-DTPA adsorption on these nanocomposites is an important task for further development of composites for biomedical destination with a wide range of functions—targeted drug delivery, pH-controllable release, and chemo-, immuno-, and radiotherapy as well as diagnostic agents.
Experimental Part
All reagents were of analytical grade and used without further purification. Demineralized water was used for preparation of all sample solutions (Hydrolab).
Chitosan, Sigma-Aldrich, with a molecular weight from 190,000 to 370,000 Da, degree of deacetylation not less than 75 %, and solubility of 10 mg/mL.
Gd-DTPA was prepared by dissolution of two equivalents of Gd2O3 in 0.04 M DTPA to obtain 0.04 M Gd-DTPA solution. Obtained solution was adjusted to pH = 7.26 by NaOH.
Differential Thermal Analysis
Thermal behavior of magnetite and its nanocomposites was determined by thermogravimetric analysis (TGA) using Q50 TGA instrument. TGA measurements of 4–25-mg samples were carried out at 10 °C/min heating rate in the range of 25–1000 °C under nitrogen atmosphere with a flow rate of 50 cm3/min.
Surface Area and Average Pore Diameter (ASAP) Measurements
Specific surface areas and pore volumes were determined from the low-temperature nitrogen adsorption data (automatic sorption analyzer ASAP 2020, Micromeritics, USA). Before measurements, the samples were outgassed at 60 °C.
Carbon, Hydrogen, and Nitrogen Analysis
Elemental analysis of chitosan-silica composite was carried out by using the CHN/O analyzer (Perkin Elmer, Series II CHNS/O Analyzer 2400). The analysis was made at the combustion temperature of 925 °C and the reduction temperature of 640 °C.
Surface Morphology Analysis
The surface morphology of chitosan-magnetite composite was observed by using a scanning electron microscope (SEM, LEO 1430VP, Carl Zeiss, Germany).
The Fourier Transform Infrared Spectra
The Fourier transform infrared spectra were registered using a Cary 630 ATR-FTIR instrument (Agilent Technologies) by the attenuated total internal reflection technique.
The pH of the Point of Zero Charge pHpzc
The pH of the point of zero charge pHpzc was measured using the pH drift method. The pH of the sorbent in the 0.01 M NaCl solution was adjusted between 2 and 12 by adding 0.01 M NaOH and 0.01 M HCl. To 50 cm3 of the solution, 0.2 g of the adsorbent was added, and after 24 h, the final pH was measured.
Magnetite Synthesis
In 1 L of deionized water, 24 g of ferrous chloride (FeCl2) and 48 g of ferric chloride (FeCl3) were dissolved. This solution was added dropwise to 250 mL of ammonia solution (NH4OH, 25 % in water). The black precipitate was collected and washed several times with distilled water until pH = 7.
The synthesis of magnetite was carried out by the co-precipitation of iron salts according to the reaction
$$ {\mathrm{Fe}}^{+2} + 2{\mathrm{Fe}}^{+3} + 8{\mathrm{NH}}_4\mathrm{O}\mathrm{H}\ \to\ {\mathrm{Fe}}_3{\mathrm{O}}_4 + 4{\mathrm{H}}_2\mathrm{O} + {{8\mathrm{N}\mathrm{H}}_4}^{+} $$
Chitosan/Magnetite Nanocomposite Synthesis
Chitosan/magnetite nanocomposites were made by co-precipitation method. Chitosan solution was obtained by dissolving 0.5 g of chitosan (low Mw) in 50 cm3 of 1 % CH3COOH. The obtained solution was mixed with 0.5 L solution of Fe2+ and Fe3+ (24 g FeCl3·6H2O + FeSO4·2H2O) and left stirring overnight at 40 °C. The resulting solution was added dropwise to 150 mL of 25 % NH4OH solution in water (scheme in Fig. 1). The precipitate was collected by a permanent magnet and washed by doubly distilled water several times up to pH = 7. The obtained composite was dried in air at 60 °C. Ten grams of black powder was obtained.
Scheme of chitosan/magnetite nanocomposite synthesis
There are many ways to obtain chitosan-magnetite nanocomposites. For instance, magnetic nanoparticles with an average crystallite size of 21.8 nm were covered in a core/shell type; magnetite/silica, magnetite/chitosan, and a double-shell magnetite/silica/chitosan were developed for attaching an antineoplastic drug [32]. Chitosan-coated magnetite nanocomposites (Fe3O4/CS) were prepared under different external magnetic fields by the co-precipitation method [33]. A chitosan-based hydrogel, graft-copolymerized with methylenebisacrylamide and poly(acrylic acid) (i.e., CS-co-MMB-co-PAA), was employed in the studies on the adsorption kinetics of Pb(II), Cd(II), and Cu(II) ions in aqueous solution [34].
Various magnetic films of chitosan and the synthesized magnetite nanopowders containing different concentrations of the latter were prepared by the ultrasonication route [35].
The use of the biopolymer chitosan as a template for the preparation of magnetite and magnetite/silver core/shell nanoparticle systems, following a two-step procedure of magnetite nanoparticles in situ precipitation and subsequent silver ion reduction, is discussed in [36].
A magnetic nanoparticle drug carrier for the controlled drug release that responds to the change in external temperature or pH was described in [37], with characteristics of longer circulation time and reduced side effects. The novel nanocarrier is characterized by a functionalized magnetite (Fe3O4) core that is conjugated with drug via acid-labile hydrazone bond and encapsulated by the thermosensitive smart polymer, chitosan-g-poly(N-isopropylacrylamide-co-N,N-dimethylacrylamide) [chitosan-g-poly(NIPAAm-co-DMAAm)]. The polyelectrolyte complex (PEC) effect between hyaluronic acid (HA) and chitosan was explored to recover HA from the fermentation broth. Chitosan was conjugated with the magnetic nanoparticles by the co-precipitation method to facilitate its recovery [38]. Chitosan/magnetite nanocomposite was synthesized induced by magnetic field via in situ hybridization under ambient conditions. The saturated magnetization (Ms) of nanomagnetite in chitosan was 50.54 emu/g, which is as high as 54 % of bulk magnetite [39].
In this work, we used a method based on co-precipitation of Fe2+/Fe3+ salts by aqueous ammonia in 0.1 % chitosan solution.
It is well known that chitosan is soluble in water in acidic media (pH = 2–6). At this pH, chitosan swells and its chains undergo deploying due to electrostatic repulsion of positively charged –NH3 + groups.
Iron salts, after mixing with chitosan solution, surrounding the chitosan molecule form complexes with its amino groups [40]. The addition of ammonia affects the charge of chitosan molecule, resulting in shrinkage of chitosan and precipitation of Fe3O4, and leads to the formation of chitosan-magnetite aggregates. The scheme of synthesis is presented in Fig. 1.
The obtained magnetite and composite powders were analyzed by SEM. In the SEM images of magnetite particles (Fig. 2a, b), we can observe that they are of spherical shape and their size varies from 20 to 40 nm. Nanocrystals form aggregates up to 200 nm, which can assemble in large aggregates. The specific surface area, calculated by the BET method, is equal to 132 m2/g.
SEM images of magnetite (a, b) and magnetite/chitosan composites (c, d)
The chitosan/magnetite nanocomposite (Fig. 2c, d) has a similar, although not identical, morphology. It exhibits a rough and irregular surface, which is common for hybrid materials. Compared to unmodified magnetite (Fig. 2a, b), the surface of the composite particles and the pores between them appear to be filled with polymer species. Filling of the magnetite particle aggregates with the polymer causes a decrease in the specific surface area of the composite (101 m2/g). The influence of the chitosan addition on the specific surface area of the composite is discussed further below.
Specific Surface Area Determination
The isotherm plots were used to calculate the specific surface area and the average pore diameter of the magnetite and chitosan/magnetite nanocomposite.
The pore size and volume analysis was made using two models. The first model is based on the Barrett-Joyner-Halenda (BJH) method implemented in the manufacturer's software. The second one takes into account slit/cylindrical pores and voids between spherical particles with the self-consistent regularization (SCV/SCR) [41–44]. According to the calculations based on the BJH method (Fig. 3a, b), mesopores of about 10 nm prevail in magnetite. In the case of the magnetite/chitosan composites, the pore size distribution is close to that of unmodified magnetite.
Pore size distribution for magnetite (a) and magnetite/chitosan composites (b) calculated by the BJH analysis method
According to the results of ASAP analysis of the chitosan/magnetite composites compared to unmodified magnetite, the BET surface area of composite decreased after synthesis of magnetite in the chitosan medium (Table 1).
Table 1 The BET and the Langmuir surface areas of Fe3O4 and Chitosan/Fe3O4 composites
Pore size distribution calculated using the SCV/SCR procedure (integral adsorption equations based on a complex model with slit-shaped and cylindrical pores and voids between spherical nonporous particles packed in random aggregates) is presented in Fig. 4. It was found that all obtained samples of chitosan-magnetite composite have an average pore diameter up to 10 nm and can be defined as mesopores. The presence of mesopores for all obtained samples of the composites was confirmed by the diagrams of pore size distribution (Fig. 4), which was obtained by the adsorption branch of the isotherm using the SCV/SCR method.
Pore size distribution for magnetite (a) and magnetite/chitosan composites (b) calculated by the SCV/SCR analysis method (А—summ of pore impact, B—slit-shaped pores, C—cylindrical pores, D—voids between spherical particles)
For magnetite, the contribution of cylindrical pores calculated by the SCV/SCR method gives 62 %, contribution of pores between particles is 22 %, and slit-like pores 15.9 %. For chitosan/magnetite nanocomposite, contribution of cylindrical pores is equal to 56.3 %, contribution of pores between particles is 38.47 %, and slit-like pores 5.2 %. Changes in pore distribution can be caused by polymer filling in the cylindrical pores and redistribution in favor of the pores between particles.
FTIR Analysis
FTIR spectrum confirms the presence of chitosan in composite and is presented in Fig. 5.
FTIR spectra of chitosan/magnetite composite
In the FTIR spectrum of chitosan/magnetite, the broad adsorption band from 3600 to 3100 cm−1 corresponds to the stretching vibrations of O–H hydroxyl groups. The bands at 2919 and 2846 cm−1 were attributed to the asymmetric and symmetric CH2 stretching vibrations of chitosan, respectively. The adsorption band at 1657 cm−1 can be associated with adsorbed water molecules. The band at 1561 cm−1 corresponds to the deformation vibrations of –NH2; the bands at 1415 and 1310 cm−1 correspond to C–H bending vibrations, 1310 cm−1 to asymmetric C–O–C stretching vibrations, and 1080 cm−1 to the C–O stretching vibration of CH–OH. The magnetite adsorption band can be observed at 581 cm−1.
The thermal stability of the magnetite and magnetite/chitosan composite is presented in Fig. 6a, b, respectively.
Thermograms of magnetite (a) and magnetite/chitosan (b)
The low temperature weight loss from 25 up to 190 °C (endothermic peak) in the magnetite and hybrid composite corresponds to evaporation of physically adsorbed water. Chitosan started to undergo thermal destruction at 190 °C. The temperature range from 227 to 286 °C (exothermic process) could be assigned to chitosan oxidation (oxidation of –NH2 and –CH2OH groups).
Decomposition of the polymer chain in magnetite/chitosan starts at 190 °C and is practically over at 410–420 °C. In the temperature range from 289 to 775 °C, a descending endothermic curve is observed, which can be explained by chitosan destruction (decarboxylation of oxidized –CH2OH groups, macromolecule chain breaks, release of nitrogen oxides from oxidized amino groups, etc.). In the case of chitosan/magnetite, the onset of the chitosan decomposition is shifted by up to 20 °C.
The weight loss at the 190–800 °C temperature range for magnetite/chitosan composite was 4.6 % [45].
For the obtained composites, the total carbon content (CHN elemental analysis) as well as carbon concentration on the composite surface (EDAX analysis) was investigated (Table 2). As expected, carbon concentration on the surface of composite is higher than the total C content. The CHN elemental analysis data is in good agreement with the chitosan content in the composite obtained by the thermogravimetric analysis.
Table 2 Comparison of the chitosan/magnetite and magnetite element composition
XRD Analysis
According to the XRD analysis, the main phase in both magnetite and chitosan/composite is iron oxide Fe3O4 (Fig. 7). No difference in crystal structure was observed.
XRD patterns for magnetite (curve a), and magnetite/chitosan composites (curve b)
Adsorption of Gd-DTPA
Adsorption of Gd-DTPA ions on the surface of the nanocomposite was calculated by the formula q e = (C 0 − C eq)V/m, where C 0 and C eq are the initial and equilibrium concentrations of ions in solution (mg/mL), V is the volume of the solution (cm3), and m is the mass of the adsorbent (g).
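As a simple numerical illustration of this formula (the values below are placeholders chosen for the example, not experimental data from this work), the calculation can be written as:

def adsorption_capacity(c0, ceq, volume, mass):
    # q_e = (C_0 - C_eq) * V / m; concentrations in mmol/L, volume in L, mass in g
    return (c0 - ceq) * volume / mass

# Hypothetical batch: 20 mL of a 0.4 mmol/L Gd-DTPA solution, 0.02 g of composite,
# and an equilibrium concentration of 0.1 mmol/L.
q_e = adsorption_capacity(0.4, 0.1, 0.020, 0.020)
print(q_e)  # 0.30 mmol/g for these placeholder values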
Adsorption isotherm of Gd-DTPA complex on the magnetite and chitosan-magnetite composite at point of zero charge (pH 7.23) is presented in Fig. 8.
Adsorption isotherm of Gd-DTPA complex on the magnetite and chitosan-magnetite composite (pH 7.23)
Surface adsorption on chitosan-magnetite composite is due to the presence of Fe–OH groups at the surface of iron oxides. These groups attain negative or positive charge by dissociation
$$ \equiv \mathrm{FeOH}\to \equiv {\mathrm{FeO}}^{-}+{\mathrm{H}}^{+} $$
or association of protons
$$ \equiv \mathrm{FeOH}+{\mathrm{H}}^{+}\to \equiv {{\mathrm{FeOH}}_2}^{+} $$
Therefore, the surface charging is pH dependent. From the literature, it is well known that the pzc varies with the particle concentration and the ionic strength of the medium. The magnetite surface is positively charged up to pH ≈ 6.8. Therefore, at lower pH, the negatively charged Gd-DTPA complexes can be adsorbed. At higher pH, their repulsion from the negative sites at the magnetite surface reduces the adsorbed amount.
The Langmuir and Freundlich isotherm models were applied to obtain data of Gd-DTPA adsorption mechanism on nanocomposite surface.
The Langmuir model is based on the assumption that maximum adsorption occurs when a saturated monolayer of solute molecules is present on the adsorbent surface, the energy of adsorption is constant, and there is no migration of adsorbate molecules in the surface plane [45].
The essential characteristic of the Langmuir isotherm can be expressed by the dimensionless separation factor R L, which indicates whether the adsorption is favorable: if R L < 1, the process is favorable, if R L = 1, linear, and if R L > 1, unfavorable. As can be seen, the adsorption of Gd-DTPA on the chitosan/magnetite composite surface is favorable [46].
The Freundlich equation is empirical and applicable to adsorption on heterogeneous surfaces as well as to multilayer adsorption. As seen from Table 3, the Freundlich isotherm correlation coefficient R 2 = 0.837 confirms an appropriate fit to this model. According to this model, the maximal adsorption capacity of the Gd-DTPA complex on the magnetite/chitosan composite is 0.59 mmol/g.
Table 3 Langmuir and Freundlich parameters for adsorption of Gd-DTPA complex on the magnetite and chitosan-magnetite composite
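For readers who want to reproduce this kind of isotherm analysis, the short sketch below fits both models to equilibrium data with SciPy; the arrays are placeholder values, not the data of this study, and the separation factor is computed from the standard definition R L = 1/(1 + K L·C 0), which is assumed here rather than taken from the text.

import numpy as np
from scipy.optimize import curve_fit

# Placeholder equilibrium data (C_eq in mmol/L, q_eq in mmol/g) -- not from this study.
c_eq = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.2])
q_eq = np.array([0.10, 0.17, 0.27, 0.38, 0.50, 0.56])

def langmuir(c, q_max, k_l):
    return q_max * k_l * c / (1.0 + k_l * c)

def freundlich(c, k_f, n):
    return k_f * c ** (1.0 / n)

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q_eq, p0=[0.6, 5.0])
(k_f, n), _ = curve_fit(freundlich, c_eq, q_eq, p0=[0.5, 2.0])

r_l = 1.0 / (1.0 + k_l * 0.4)  # separation factor at C_0 = 0.4 mmol/L
print(q_max, k_l, k_f, n, r_l)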
In the concentration range from 0.4 to 1.2 mmol/L, the adsorption capacity of the composite is not much higher than that of pure magnetite. In the concentration range of the initial solution up to 0.4 mmol/L, the addition of chitosan increased the adsorption capacity and the affinity to the Gd-DTPA complex. This is very important for loading micro quantities of Gd-DTPA for possible use in medicine.
Since there is no directly comparable literature information about Gd-DTPA adsorption on magnetite-chitosan composites, the adsorption capacities of the obtained composites can be compared with those of other magnetite-based composites. For example, a composite adsorbent prepared by entrapping cross-linked chitosan and nanomagnetite on the heulandite surface was used to remove Cu(II) and As(V) from aqueous solution. The composite gave maximum equilibrium uptakes of Cu(II) and As(V) of 17.2 mg/g (0.26875 mmol/g) and 5.9 mg/g (0.07878 mmol/g) in the initial concentration ranges of 16–656 and 17–336 mg/L, respectively [47]. In [48], chitosan/magnetite nanocomposite beads were prepared by a simple and effective process. The maximum adsorption capacities for Pb(II) and Ni(II), which occurred at pH 6 at room temperature, were as high as 63.33 mg/g (0.30564 mmol/g) and 52.55 mg/g (0.8952 mmol/g), respectively, according to the Langmuir isotherm model [48].
A comparison of the results obtained in this article (about 0.6 mmol of Gd-DTPA per gram of composite) with those of other authors indicates that the obtained nanocomposites are promising for the adsorption of Gd-DTPA complexes.
Chitosan/magnetite nanocomposites were synthesized. It was shown that the magnetite synthesis in the chitosan medium does not affect the magnetite crystal structure. The TGA data showed a chitosan mass concentration of 4.6 % in the hybrid chitosan/magnetite composite. Despite the low chitosan content, the adsorption of Gd-DTPA on the hybrid chitosan/magnetite composite in the micro quantity region is enhanced compared with that on magnetite. In general, the increase of Gd-DTPA adsorption on the chitosan/magnetite composite, compared to unmodified magnetite, can be explained by the presence of chitosan. The interaction between chitosan and the Gd-DTPA molecule can be related to the organic nature of these molecules coupled with electrostatic interactions. The good correlation with the Freundlich adsorption model can be explained by the hybrid (organic-inorganic) nature of the adsorbent.
Budnyak TM, Pylypchuk IV, Tertykh VA, Yanovska ES, Kolodynska D (2015) Synthesis and adsorption properties of chitosan-silica nanocomposite prepared by sol-gel method. Nanoscale Res Lett 10(1):1–10
Gorbyk PP, Dubrovin I V, Petranovska AL, Abramov M V, Usov DG, Storozhuk LP, Turanska SP, Turelyk MP, Chekhun VF, Lukyanova NY, Shpak AP, Korduban OM (2009) Chemical construction of polyfunctional nanocomposites and nanorobots for medico-biological applications. Nanomaterials and Supramolecular Structures. Physics, Chemistry, and Applications. Nederlands: Springer. AP Shpak, PP Gorbyk (eds.) p. 63–78
Pylypchuk IV, Petranovska AL, Gorbyk PP, Korduban OM, Rogovtsov AA, Shevchenko YB (2014) Gadolinium and boron containing nanocomposites based on magnetite. Metallofiz i Noveishie Tekhnologii 36(6):767–777
Petranovska AL, Kusyak AP, Pylypchuk IV, Gorbyk PP (2015) Adsorption of doxorubicin by fumed silica and magnetite/siloxane nanocomposites (in Ukrainian). Him Fiz ta Tehnol Poverhni 6(4):481–488
Pylypchuk IV, Zubchuk YO, Petranovskaya AL, Turanska SP, Gorbyk PP (2015) Synthesis and properties of Fe3O4/hydroxyapatite/pamidronic acid/diethylenetriaminepentaacetic acid/Gd3+ nanocomposites (in Ukrainian). Him Fiz Tehnol Poverhni 6(3):326–335. doi:10.15407/hftp06.03.326
Budnyak T, Tertykh V, Yanovska E (2014) Chitosan immobilized on silica surface for wastewater treatment. Mater Sci (Medžiagotyra) 20(Suppl 2):177–82
Mi F, Wu S, Chen Y (2015) Combination of carboxymethyl chitosan-coated magnetic nanoparticles and chitosan-citrate complex gel beads as a novel magnetic adsorbent. Carbohydr Polym 131:255–263
Cho D, Jeon B, Chon C, Schwartz FW, Jeong Y, Song H (2015) Magnetic chitosan composite for adsorption of cationic and anionic dyes in aqueous solution. J Ind Eng Chem 28:60–66
Tanhaei B, Ayati A, Lahtinen M, Sillanpää M (2015) Preparation and characterization of a novel chitosan/Al 2 O 3/magnetite nanoparticles composite adsorbent for kinetic, thermodynamic and isotherm studies of Methyl Orange adsorption. Chem Eng J 259:1–10
Song X, Luo X, Zhang Q, Zhu A, Ji L, Yan C (2015) Preparation and characterization of biofunctionalized chitosan/Fe 3 O 4 magnetic nanoparticles for application in liver magnetic resonance imaging. J Magn Magn Mater 388:116–122
Candra S, Sakti W, Narita Y, Sasaki T, Tanaka S (2015) A novel pyridinium functionalized magnetic chitosan with pH-independent and rapid adsorption kinetics for magnetic separation of Cr (VI). J Environ Chem Eng 3(3):1953–1961
Rad-moghadam K, Dehghan N (2014) Application of cellulose/chitosan grafted nano-magnetites as efficient and recyclable catalysts for selective synthesis of. J Mol Catal A Chem 392:97–104
Barbosa-barros L, García-jimeno S, Estelrich J (2014) Formation and characterization of biobased magnetic nanoparticles double coated with dextran and chitosan by layer-by-layer deposition. Colloids Surf A Physicochem Eng Asp 450:121–129
Sinha VR, Singla AK, Wadhawan S, Kaushik R, Kumria R, Bansal K, Dhawan S (2004) Chitosan microspheres as a potential carrier for drugs. Int J Pharm 274(1-2):1–33
Darras V, Nelea M, Winnik FM, Buschmann MD (2010) Chitosan modified with gadolinium diethylenetriaminepentaacetic acid for magnetic resonance imaging of DNA/chitosan nanoparticles. Carbohydr Polym 80(4):1137–1146
Huang Y, Cao B, Yang X, Zhang Q, Han X, Guo Z (2013) Gd complexes of diethylenetriaminepentaacetic acid conjugates of low-molecular-weight chitosan oligosaccharide as a new liver-specific MRI contrast agent. Magn Reson Imaging 31(4):604–609
Smith DR, Lorey DR, Chandra S (2004) Subcellular SIMS imaging of gadolinium isotopes in human glioblastoma cells treated with a gadolinium containing MRI agent. Appl Surf Sci 231-232:457–61
Takahashi K, Nakamura H, Furumoto S, Yamamoto K, Fukuda H, Matsumura A, Yamamoto Y (2005) Synthesis and in vivo biodistribution of BPA-Gd-DTPA complex as a potential MRI contrast carrier for neutron capture therapy. Bioorg Med Chem 13(3):735–743
Pinto Reis C, Neufeld RJ, Ribeiro AJ, Veiga F (2006) Nanoencapsulation I. Methods for preparation of drug-loaded polymeric nanoparticles. Nanomedicine 2(1):8–21
Fujimoto T, Ichikawa H, Akisue T, Fujita I, Kishimoto K, Hara H, Imabori M, Kawamitsu H, Sharma P, Brown SC, Moudgil BM, Fujii M, Yamamoto T, Kurosaka M, Fukumori Y (2009) Accumulation of MRI contrast agents in malignant fibrous histiocytoma for gadolinium neutron capture therapy. Appl Radiat Isot 67(7-8 Suppl):S355–S358
Saha TK, Ichikawa H, Fukumori Y (2006) Gadolinium diethylenetriaminopentaacetic acid-loaded chitosan microspheres for gadolinium neutron-capture therapy. Carbohydr Res 341(17):2835–2841
Sharma P, Brown SC, Walter G, Santra S, Scott E, Ichikawa H, Fukumori Y, Moudgil BM (2007) Gd nanoparticulates: from magnetic resonance imaging to neutron capture therapy. Adv Powder Technol 18(6):663–698
Brannon-Peppas L, Blanchette JO (2004) Nanoparticle and targeted systems for cancer therapy. Adv Drug Deliv Rev 56(11):1649–1659
Cerullo N, Bufalino D, Daquino G (2009) Progress in the use of gadolinium for NCT. Appl Radiat Isot 67(7-8 Suppl):S157–S160
Fukumori Y, Ichikawa H (2006) Nanoparticles for cancer therapy and diagnosis. Adv Powder Technol 17(1):1–28
Sauerwein WAG, Moss R, Wittig A, Nakagawa Y (eds) (2012) Neutron capture therapy: principles and applications. Springer, Germany
Hawthorne M, Shelly K, Wiersema R (2001) Frontiers in neutron capture therapy. Springer US, Boston
Kobayashi H, Kawamoto S, Bernardo M, Brechbiel MW, Knopp MV, Choyke PL (2006) Delivery of gadolinium-labeled nanoparticles to the sentinel lymph node: comparison of the sentinel node visualization and estimations of intra-nodal gadolinium concentration by the magnetic resonance imaging. J Control Release 111(3):343–351
Le UM, Cui Z (2006) Long-circulating gadolinium-encapsulated liposomes for potential application in tumor neutron capture therapy. Int J Pharm 312(1-2):105–112
Magda D, Miller RA (2006) Motexafin gadolinium: a novel redox active drug for cancer therapy. Semin Cancer Biol 16(6):466–476
Shikata F, Tokumitsu H, Ichikawa H, Fukumori Y (2002) In vitro cellular accumulation of gadolinium incorporated into chitosan nanoparticles designed for neutron-capture therapy of cancer. Eur J Pharm Biopharm 53(1):57–63
Escobar EV, Martínez CA, Rodríguez CA, Castro JS, Quevedo MA, García-casillas PE (2012) Adherence of paclitaxel drug in magnetite chitosan nanoparticles. J Alloys Compd 536:S441–S444
Zhang W, Jia S, Wu Q, Wu S, Ran J, Liu Y, Hou J (2012) Studies of the magnetic field intensity on the synthesis of chitosan-coated magnetite nanocomposites by co-precipitation method. Mater Sci Eng C 32(2):381–384
Paulino AT, Bel LA, Kubota LT, Muniz EC, Almeida VC, Tambourgi EB (2011) Effect of magnetite on the adsorption behavior of Pb (II), Cd (II), and Cu (II) in chitosan-based hydrogels. Desalination 275:187–196
Bhatt AS, Ã DKB, Santosh MS (2010) Electrical and magnetic properties of chitosan-magnetite nanocomposites. Phys B Phys Condens Matter 405(8):2078–2082
Ortiz U, Garza-navarro M, Torres-castro A, Gonza V, de Rosa E (2010) Magnetite and magnetite/silver core/shell nanoparticles with diluted magnet-like behavior. J Solid State Chem 183:99–104
Yuan Q, Venkatasubramanian R, Hein S, Misra RDK (2008) A stimulus-responsive magnetic nanoparticle drug carrier: magnetite encapsulated by chitosan-grafted-copolymer. Acta Biomaterialia 4:1024–1037
Yang P, Lee C (2007) Hyaluronic acid interaction with chitosan-conjugated magnetite particles and its purification. Biochem Eng J 33:284–289
Li B, Jia D, Zhou Y, Hu Q, Cai W (2006) In situ hybridization to chitosan/magnetite nanocomposite induced by the magnetic field. J Magn Magn Mater 306:223–227
Nieto JM, Peniche-Covas C, Del Bosque J (1992) Preparation and characterization of a chitosan-Fe (III) complex. Carbohydr Polym 18(3):221–224
Tóth A, Voitko KV, Bakalinska O, Prykhod'Ko GP, Bertóti I, Martínez-Alonso A, Tascón JMD, Gun'Ko VM, László K (2012) Morphology and adsorption properties of chemically modified MWCNT probed by nitrogen, n-propane and water vapor. Carbon N Y 50(2):577–585
Gun'ko VM (2014) Composite materials: textural characteristics. Appl Surf Sci 307:444–454
Chemistry E (2000) Consideration of the multicomponent nature of adsorbents during analysis of their structural and energy parameters. Theor Exp Chem 36(6): 349–353.
Nguyen C, Do DD (1999) New method for the characterization of porous materials. Langmuir 15(10):3608–3615
Wiśniewska M, Chibowski S, Urban T, Sternik D (2010) Investigation of the alumina properties with adsorbed polyvinyl alcohol. J Therm Anal Calorim 103(1):329–337
Adamczuk A, Kołodyńska D (2015) Equilibrium, thermodynamic and kinetic studies on removal of chromium, copper, zinc and arsenic from aqueous solutions onto fly ash coated by chitosan. Chem Eng J 274:200–212
Cho DW, Jeon BH, Chon CM, Kim Y, Schwartz FW, Lee ES, Song H (2012) A novel chitosan/clay/magnetite composite for adsorption of Cu(II) and As(V). Chem Eng J 200-202:654–62
Vinh H, Dai L, Ngoc T (2010) Preparation of chitosan/magnetite composite beads and their application for removal of Pb (II) and Ni (II) from aqueous solution. Mater Sci Eng C 30(2):304–310
The authors are thankful to Prof. Vladimir M. Gun'ko, Chuiko Institute of Surface Chemistry of the National Academy of Sciences of Ukraine, for his kind assistance with the calculation of the pore size distribution.
This research was funded by the International Visegrad Fund (Visegrad/V4EaP Scholarship No 51500518).
Publication based on the research was provided by the grant support of the State Fund for Fundamental Research (project No. 61/100-2015).
Nanomaterials Department, Chuiko Institute of Surface Chemistry of the National Academy of Sciences of Ukraine, 17 General Naumov Str., 03164, Kyiv, Ukraine
Ie. V. Pylypchuk & P. P. Gorbyk
Department of Inorganic Chemistry, Faculty of Chemistry, Maria Curie Skłodowska University, M. Curie Skłodowska Sq. 2, 20-031, Lublin, Poland
D. Kołodyńska & M. Kozioł
Ie. V. Pylypchuk
D. Kołodyńska
M. Kozioł
P. P. Gorbyk
Correspondence to Ie. V. Pylypchuk.
IVP carried out chemical experiments and drafted the manuscript. DK coordinated analytical part and significantly edited the manuscript. MK carried out the DTG investigations. PPG conceived of the study and helped to draft the manuscript. All authors read and approved the final manuscript.
Pylypchuk, I.V., Kołodyńska, D., Kozioł, M. et al. Gd-DTPA Adsorption on Chitosan/Magnetite Nanocomposites. Nanoscale Res Lett 11, 168 (2016). https://doi.org/10.1186/s11671-016-1363-3
Gd-DTPA adsorption
Neutron capture therapy
Hybrid nanocomposites
The potential of quantum annealing for rapid solution structure identification
Yuchen Pang ORCID: orcid.org/0000-0002-4532-70531,
Carleton Coffrin ORCID: orcid.org/0000-0003-3238-16992,
Andrey Y. Lokhov ORCID: orcid.org/0000-0003-3269-72632 &
Marc Vuffray ORCID: orcid.org/0000-0001-7999-98972
Constraints (2020)
The recent emergence of novel computational devices, such as quantum computers, coherent Ising machines, and digital annealers presents new opportunities for hardware-accelerated hybrid optimization algorithms. Unfortunately, demonstrations of unquestionable performance gains leveraging novel hardware platforms have faced significant obstacles. One key challenge is understanding the algorithmic properties that distinguish such devices from established optimization approaches. Through the careful design of contrived optimization tasks, this work provides new insights into the computation properties of quantum annealing and suggests that this model has the potential to quickly identify the structure of high-quality solutions. A meticulous comparison to a variety of algorithms spanning both complete and local search suggests that quantum annealing's performance on the proposed optimization tasks is distinct. This result provides new insights into the time scales and types of optimization problems where quantum annealing has the potential to provide notable performance gains over established optimization algorithms and suggests the development of hybrid algorithms that combine the best features of quantum annealing and state-of-the-art classical approaches.
As the challenge of scaling traditional transistor-based Central Processing Unit (CPU) technology continues to increase, experimental physicists and high-tech companies have begun to explore radically different computational technologies, such as quantum computers [14, 41, 62], quantum annealers [43, 45] and coherent Ising machines [40, 47, 59]. The goal of all of these technologies is to leverage the dynamical evolution of a physical system to perform a computation that is challenging to emulate using traditional CPU technology, the most notable example being the simulation of quantum physics [29]. Despite their entirely disparate physical implementations, optimization of quadratic functions over binary variables (e.g., the Quadratic Unconstrained Binary Optimization (QUBO) and Ising models [13]) has emerged as a challenging computational task that a wide variety of novel hardware platforms can address. As these technologies mature, it may be possible for this specialized hardware to rapidly solve challenging combinatorial problems, such as Max-Cut [38] or Max-Clique [53], and preliminary studies have suggested that some classes of Constraint Satisfaction Problems can be effectively encoded in such devices because of their combinatorial structure [8, 9, 67, 72].
At this time, understanding the computational advantage that these hardware platforms may bring to established optimization algorithms remains an open question. For example, it is unclear if the primary benefit will be dramatically reduced runtimes due to highly specialized hardware implementations [31, 76, 77] or if the behavior of the underlying analog computational model will bring intrinsic algorithmic advantages [3, 26]. A compelling example is gate-based quantum computation (QC), where a significant body of theoretical work has found key computational advantages that exploit quantum properties [18, 34, 71]. Indeed, such advantages have recently been demonstrated on quantum computing hardware for the first time [5]. Highlighting similar advantages on other computational platforms, both in theory and in practice, remains a central challenge for novel physics-inspired computing models [36, 46, 51].
Focusing on quantum annealing (QA), this work provides new insights into the properties of this computing model and identifies problem structures where it can provide a computational advantage over a broad range of established solution methods. The central contribution of this work is the analysis of tricky optimization problems (i.e., Biased Ferromagnets, Frustrated Biased Ferromagnets, and Corrupted Biased Ferromagnets) that are challenging for established optimization approaches but are easy for QA hardware, such as D-Wave's 2000Q platform. This result suggests that there are classes of optimization problems where QA can effectively identify global solution structure while established heuristics struggle to escape local minima. Two auxiliary contributions that resulted from this pursuit are the identification of the Corrupted Biased Ferromagnet problem, which appears to be a useful benchmark problem beyond this particular study, and a demonstration of the most significant performance gains of a quantum annealing platform over established state-of-the-art alternatives, to the best of our knowledge.
This work begins with a brief introduction to both the mathematical foundations of the Ising model, Section 2, and quantum annealing, Section 3. It then reviews a variety of algorithms that can be used to solve such models in Section 4. The primary result of the paper is presented in carefully designed structure detection experiments in Section 5. Open challenges relating to developing hybrid algorithms are discussed in Section 6, and Section 7 concludes the paper.
A brief introduction to Ising models
This section introduces the notations of the paper and provides a brief introduction to Ising models, a core mathematical abstraction of QA. The Ising model refers to the class of graphical models where the nodes, \({\mathcal {N}} = \left \{1,\dots , N\right \}\), represent spin variables (i.e., \(\sigma _{i} \in \{-1,1\} ~\forall i \in {\mathcal {N}}\)), and the edges, \({\mathcal {E}} \subseteq {\mathcal {N}} \times {\mathcal {N}}\), represent pairwise interactions of spin variables (i.e., \(\sigma _{i} \sigma _{j} ~\forall i,j \in {\mathcal {E}}\)). A local field \(\boldsymbol {h}_{i} ~\forall i \in {\mathcal {N}}\) is specified for each node, and an interaction strength \(\boldsymbol {J}_{ij} ~\forall i,j \in {\mathcal {E}}\) is specified for each edge. The energy of the Ising model is then defined as:
$$ \begin{array}{@{}rcl@{}} E(\sigma) &= \underset{i,j \in {\mathcal{E}}}{\sum} \boldsymbol{J}_{ij} \sigma_{i} \sigma_{j} + \underset{i \in {\mathcal{N}}}{\sum} \boldsymbol{h}_{i} \sigma_{i} \end{array} $$
Originally introduced in statistical physics as a model for describing phase transitions in ferromagnetic materials [32], the Ising model is currently used in numerous and diverse application fields such as neuroscience [39, 68], bio-polymers [63], gene regulatory networks [55], image segmentation [64], statistical learning [52, 74, 75], and sociology [25].
This work focuses on finding the lowest possible energy of the Ising model, known as a ground state, that is, finding the globally optimal solution of the following discrete optimization problem:
$$ \begin{array}{@{}rcl@{}} && \min: E(\sigma)\\ && \text{s.t.: } \sigma_{i} \in \{-1, 1\} ~\forall i \in {\mathcal{N}} \end{array} $$
The coupling parameters of Ising models are categorized into two groups based on their sign: the ferromagnetic interactions Jij < 0, which encourage neighboring spins to take the same value, i.e., σiσj = 1, and anti-ferromagnetic interactions Jij > 0, which encourage neighboring spins to take opposite values, i.e., σiσj = − 1.
The notion of frustration is central to the study of Ising models and refers to any instance of (2) where the optimal solution does not achieve the minimum of all local interactions [19]. Namely, the optimal solution of a frustrated Ising model, σ∗, satisfies the following property:
$$ \begin{array}{@{}rcl@{}} E(\sigma^{*}) > \underset{i,j \in {\mathcal{E}}}{\sum} - |\boldsymbol{J}_{ij}| - \underset{i \in {\mathcal{N}}}{\sum} |\boldsymbol{h}_{i}| \end{array} $$
Gauge Transformations
A valuable property of the Ising model is the gauge transformation, which characterizes an equivalence class of Ising models. Consider the optimal solution of Ising model S, σs. One can construct a new Ising model T where the optimal solution is the target state σt by applying the following parameter transformation:
$$ \begin{array}{@{}rcl@{}} \boldsymbol{J}^{t}_{ij} &=& \boldsymbol{J}_{ij}^{s} {\boldsymbol{\sigma}_{i}^{s}} {\boldsymbol{\sigma}_{j}^{s}} {\boldsymbol{\sigma}_{i}^{t}} {\boldsymbol{\sigma}_{j}^{t}} ~\forall i,j \in {\mathcal{E}} \end{array} $$
(4a)
$$ \begin{array}{@{}rcl@{}} {\boldsymbol{h}_{i}^{t}} &=& {\boldsymbol{h}_{i}^{s}} {\boldsymbol{\sigma}_{i}^{s}} {\boldsymbol{\sigma}_{i}^{t}} ~\forall i \in {\mathcal{N}} \end{array} $$
(4b)
This S-to-T manipulation is referred to as a gauge transformation. Using this property, one can consider the class of Ising models where the optimal solution is \(\sigma _{i} = -1 ~\forall i \in {\mathcal {N}}\) or any arbitrary vector of − 1, 1 values without loss of generality.
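As a concrete illustration, the following minimal Python sketch (not code from the paper) applies the transformation in (4a)–(4b) to a toy instance and checks that the energies of the corresponding states agree:

def gauge_transform(J, h, sigma_s, sigma_t):
    # Map an Ising model whose optimum is sigma_s to one whose optimum is sigma_t,
    # following (4a)-(4b). J: {(i, j): coupling}, h: {i: field}, sigma_*: {i: +/-1}.
    J_t = {(i, j): Jij * sigma_s[i] * sigma_s[j] * sigma_t[i] * sigma_t[j]
           for (i, j), Jij in J.items()}
    h_t = {i: hi * sigma_s[i] * sigma_t[i] for i, hi in h.items()}
    return J_t, h_t

def energy(J, h, sigma):
    return (sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())
            + sum(hi * sigma[i] for i, hi in h.items()))

# Two-spin ferromagnet with optimum (+1, +1), mapped so that the optimum becomes (+1, -1).
J, h = {(0, 1): -1.0}, {0: -0.5, 1: -0.5}
J_t, h_t = gauge_transform(J, h, {0: 1, 1: 1}, {0: 1, 1: -1})
assert energy(J, h, {0: 1, 1: 1}) == energy(J_t, h_t, {0: 1, 1: -1})  # both equal -2.0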
Classes of Ising Models
Ising models are often categorized by the properties of their optimal solutions with two notable categories being Ferromagnets (FM) and Spin glasses. Ferromagnetic Ising models are unfrustrated models possessing one or two optimal solutions. The traditional FM model is obtained by setting Jij = − 1, hi = 0. The optimal solutions have a structure with all spins pointing in the same direction, i.e., σi = 1 or σi = − 1, which mimics the behavior of physical magnets at low temperatures. In contrast to FMs, Spin glasses are highly frustrated systems that exhibit an intricate geometry of optimal solutions that tend to take the form of a hierarchy of isosceles sets [61]. Spin glasses are challenging for greedy and local search algorithms [7] due to the nature of their energy landscape [24, 60]. A typical Spin glass instance can be achieved using random interaction graphs with P(Jij = − 1) = 0.5, P(Jij = 1) = 0.5, and hi = 0.
Bijection of Ising and Boolean Optimization
It is valuable to observe that there is a bijection between Ising optimization (i.e., σ ∈ {− 1, 1}) and Boolean optimization (i.e., x ∈ {0,1}). The transformation of σ-to-x is given by:
$$ \begin{array}{@{}rcl@{}} \sigma_{i} &=& 2x_{i} - 1 ~\forall i \in {\mathcal{N}} \end{array} $$
$$ \begin{array}{@{}rcl@{}} \sigma_{i}\sigma_{j} &=& 4x_{i}x_{j} - 2x_{i} - 2x_{j} + 1 ~\forall i,j \in {\mathcal{E}} \end{array} $$
and the inverse x-to-σ is given by:
$$ \begin{array}{@{}rcl@{}} x_{i} &=& \frac{\sigma_{i} + 1}{2} ~\forall i \in {\mathcal{N}} \end{array} $$
$$ \begin{array}{@{}rcl@{}} x_{i} x_{j} &=& \frac{\sigma_{i} \sigma_{j} + \sigma_{i} + \sigma_{j} + 1}{4} ~\forall i,j \in {\mathcal{E}} \end{array} $$
Consequently, any results from solving Ising models are also immediately applicable to the class of optimization problems referred to as Pseudo-Boolean Optimization or Quadratic Unconstrained Binary Optimization (QUBO):
$$ \begin{array}{@{}rcl@{}} && \min: \underset{i,j \in {\mathcal{E}}}{\sum} \boldsymbol{c}_{ij} x_{i} x_{j} + \underset{i \in {\mathcal{N}}}{\sum} \boldsymbol{c}_{i} x_{i} + \boldsymbol{c} \end{array} $$
$$ \begin{array}{@{}rcl@{}} && \text{s.t.: } x_{i} \in \{0, 1\} ~\forall i \in {\mathcal{N}} \end{array} $$
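As a bookkeeping aid, the following small helper (not from the paper) applies the σ-to-x substitution above to produce the coefficients of the QUBO objective in (7):

def ising_to_qubo(J, h):
    # Substitute sigma_i = 2 x_i - 1 into E(sigma) and collect the QUBO
    # coefficients (c_quad, c_lin, c_const) so that both objectives agree
    # on corresponding states.
    c_quad = {e: 4.0 * Jij for e, Jij in J.items()}
    c_lin = {i: 2.0 * hi for i, hi in h.items()}
    c_const = sum(J.values()) - sum(h.values())
    for (i, j), Jij in J.items():
        c_lin[i] = c_lin.get(i, 0.0) - 2.0 * Jij
        c_lin[j] = c_lin.get(j, 0.0) - 2.0 * Jij
    return c_quad, c_lin, c_const

# Quick check on a 2-spin instance: sigma = (-1, +1) corresponds to x = (0, 1).
J, h = {(0, 1): 1.0}, {0: 0.5}
c_quad, c_lin, c_const = ising_to_qubo(J, h)
assert 1.0 * (-1) * (+1) + 0.5 * (-1) == c_lin[1] * 1 + c_const  # both equal -1.5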
In contrast to gate-based QC, which is Turing complete, QA specializes in optimizing Ising models. The next section provides a brief introduction of how quantum mechanics are leveraged by QA to perform Ising model optimization.
Foundations of quantum annealing
Quantum annealing is an analog computing technique for minimizing discrete or continuous functions that takes advantage of the exotic properties of quantum systems. This technique is particularly well-suited for finding optimal solutions of Ising models and has drawn significant interest due to hardware realizations via controllable quantum dynamical systems [43]. Quantum annealing is composed of two key elements: leveraging quantum state to lift the minimization problem into an exponentially larger space, and slowly interpolating (i.e., annealing) between an initial easy problem and the target problem. The quantum lifting begins by introducing for each spin σi ∈ {− 1, 1} a \(2^{N} \times 2^{N}\) dimensional matrix \(\widehat {\sigma }_{i}\) expressible as a Kronecker product of N matrices of dimension 2 × 2:
$$ \begin{array}{@{}rcl@{}} \widehat{\sigma}_{i} = \underbrace{\left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \mathop{\otimes} {\cdots} \mathop{\otimes} \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right)}_{\text{1 to $i-1$}} \mathop{\otimes} \underbrace{\left( \begin{array}{ll} 1 & ~ ~ ~ 0 \\ 0 & -1 \end{array}\right)}_{\text{$i^{\text{th}}$ term}} \mathop{\otimes} \underbrace{\left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \mathop{\otimes} {\cdots} \mathop{\otimes} \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right)}_{\text{$i+1$ to \textit{N}}} \end{array} $$
In this lifted representation, the value of a spin σi is identified with the two possible eigenvalues 1 and − 1 of the matrix \(\widehat {\sigma }_{i}\). The quantum counterpart of the energy function defined in (1) is the 2N × 2N matrix obtained by substituting spins with the \(\widehat {\sigma }\) matrices in the algebraic expression of the energy:
$$ \begin{array}{@{}rcl@{}} & \widehat{E} = \underset{i,j \in {\mathcal{E}}}{\sum} \boldsymbol{J}_{ij} \widehat{\sigma}_{i} \widehat{\sigma}_{j} + \underset{i \in {\mathcal{N}}}{\sum} \boldsymbol{h}_{i} \widehat{\sigma}_{i} \end{array} $$
Notice that the eigenvalues of the matrix in (9) are the \(2^{N}\) possible energy values obtained by evaluating the energy E(σ) from (1) for all possible configurations of spins. This implies that finding the lowest eigenvalue of \(\widehat {E}\) is tantamount to solving the minimization problem in (2). This lifting is clearly impractical from the classical computing context as it transforms a minimization problem over \(2^{N}\) configurations into computing the minimum eigenvalue of a \(2^{N} \times 2^{N}\) matrix. The key motivation for this approach is that it is possible to construct quantum systems with only N quantum bits that attempt to find the minimum eigenvalue of this matrix.
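To make the lifting concrete, the small NumPy sketch below (purely illustrative, and feasible only for a handful of spins) builds the lifted matrices for a toy instance and confirms that the smallest eigenvalue of (9) equals the classical minimum of (2):

import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def sigma_hat(i, n):
    # Kronecker-product lift of spin i into a 2^n x 2^n diagonal matrix.
    return reduce(np.kron, [Z if k == i else I2 for k in range(n)])

def energy_matrix(J, h, n):
    # The 2^n x 2^n matrix of (9).
    E = np.zeros((2 ** n, 2 ** n))
    for (i, j), Jij in J.items():
        E += Jij * sigma_hat(i, n) @ sigma_hat(j, n)
    for i, hi in h.items():
        E += hi * sigma_hat(i, n)
    return E

# Three-spin ferromagnet with a small field on spin 0.
J, h = {(0, 1): -1.0, (1, 2): -1.0}, {0: 0.1}
E = energy_matrix(J, h, 3)
print(np.min(np.linalg.eigvalsh(E)))  # -2.1, the energy of sigma = (-1, -1, -1)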
The annealing process provides a way of steering a quantum system into the a priori unknown eigenvector that minimizes the energy of (9) [28, 45]. The core idea is to initialize the quantum system at the minimal eigenvector of a simple energy matrix \(\widehat {E}_{0}\), for which an explicit formula is known. After the system is initialized, the energy matrix is interpolated from the easy problem to the target problem slowly over time. Specifically, the energy matrix at a point during the anneal is given by \(\widehat {E}_{a}({\varGamma }) = (1-{\varGamma })\widehat {E}_{0} + {\varGamma } \widehat {E}\), with Γ varying from 0 to 1. When the anneal is complete, Γ = 1 and the interactions in the quantum system are described by the target energy matrix. The annealing time is the physical time taken by the system to evolve from Γ = 0 to Γ = 1. For suitable starting energy matrices \(\widehat {E}_{0}\) and a sufficiently slow annealing time, theoretical results have demonstrated that a quantum system continuously remains at the minimal eigenvector of the interpolating matrix \(\widehat {E}_{a}({\varGamma })\) [3] and therefore achieves the minimum energy (i.e., a global optimum) of the target problem. Realizing this optimality result in practice has proven difficult due to corruption of the quantum system from the external environment. Nevertheless, quantum annealing can serve as a heuristic for finding high-quality solutions to the Ising models, i.e., (2).
Quantum annealing hardware
Interest in the QA model is due in large part to D-Wave Systems, which has developed the first commercially available QA hardware platform [43]. Given the computational challenges of classically simulating QA, this novel computing device represents the only viable method for studying QA at non-trivial scales, e.g., problems with more than 1000 qubits [11, 22]. At the most basic level, the D-Wave platform allows the user to program an Ising model by providing the parameters J,h in (1) and returns a collection of variable assignments from multiple annealing runs, which reflect optimal or near-optimal solutions to the input problem.
This seemingly simple interface is, however, hindered by a variety of constraints imposed by D-Wave's 2000Q hardware implementation. The most notable hardware restriction is the Chimera connectivity graph depicted in Fig. 1, where each edge indicates if the hardware supports a coupling term Jij between a pair of qubits i and j. This sparse graph is a stark contrast to traditional quadratic optimization tools, where it is assumed that every pair of variables can interact.
A 2-by-2 Chimera graph illustrating the variable product limitations of D-Wave's 2000Q processor
The second notable hardware restriction is a limited coefficient programming range. On the D-Wave 2000Q platform the parameters are constrained within the continuous parameter ranges of − 1 ≤Jij ≤ 1 and − 2 ≤hi ≤ 2. At first glance these ranges may not appear to be problematic because the energy function (1) can be rescaled into the hardware's operating range without any loss of generality. However, operational realities of analog computing devices make the parameter values critically important to the overall performance of the hardware. These challenges include: persistent coefficient biases, which are an artifact of hardware slowly drifting out of calibration between re-calibration cycles; programming biases, which introduce some minor errors in the J,h values that were requested; and environmental noise, which disrupts the quantum behavior of the hardware and results in a reduction of solution quality. Overall, these hardware constraints have made the identification of QA-based performance gains notoriously challenging [16, 42, 54, 58, 65].
Despite the practical challenges in using D-Wave's hardware platform, extensive experiments have suggested that QA can outperform some established local search methods (e.g., simulated annealing) on carefully designed Ising models [4, 22, 49]. However, demonstrating an unquestionable computational advantage over state-of-the-art methods on contrived and practical problems remains an open challenge.
Methods for Ising model optimization
The focus of this work is to compare and contrast the behavior of QA to a broad range of established optimization algorithms. To that end, this work considers three core algorithmic categories: (1) complete search methods from the mathematical programming community; (2) local search methods developed by the statistical physics community; and (3) quantum annealing as realized by D-Wave's hardware platform. The comparison includes both state-of-the-art solution methods from the D-Wave benchmarking literature (e.g., Hamze-Freitas-Selby [69], Integer Linear Programming [16]) and simple straw-man approaches (e.g., Greedy, Glauber Dynamics [33], Min-Sum [30, 60]) to highlight the solution quality of minimalist optimization approaches. This section provides high-level descriptions of the algorithms; implementation details are available as open-source software [17, 69].
Complete search
Unconstrained Boolean optimization, as in (7), has been the subject of mathematical programming research for several decades [10, 12]. This work considers the two most canonical formulations based on Integer Quadratic Programming and Integer Linear Programming.
Integer Quadratic Programming (IQP)
This formulation consists of using black-box commercial optimization tools to solve (7) directly. This model was leveraged in some of the first QA benchmarking studies [58] and received some criticism [66]. However, the results presented here suggest that this model has become more competitive due to the steady progress of commercial optimization solvers.
Integer Linear Programming (ILP)
This formulation is a slight variation of the IQP model where the variable products xixj are lifted into a new variable xij and constraints are added to capture the conjunction xij = xi ∧ xj as follows:
$$ \begin{array}{@{}rcl@{}} && \min: \underset{i,j \in {\mathcal{E}}}{\sum} \boldsymbol{c}_{ij} x_{ij} + \underset{i \in {\mathcal{N}}}{\sum} \boldsymbol{c}_{i} x_{i} + \boldsymbol{c} \qquad\qquad \text{(10a)} \\ && \text{s.t.: } x_{ij} \geq x_{i} + x_{j} - 1, ~x_{ij} \leq x_{i}, ~x_{ij} \leq x_{j} ~\forall i,j \in {\mathcal{E}} \qquad \text{(10b)} \\ && x_{i} \in \{0, 1\} ~\forall i \in {\mathcal{N}}, ~x_{ij} \in \{0, 1\} ~\forall i,j \in {\mathcal{E}} \end{array} $$
This formulation was also leveraged in some of the first QA benchmarking studies [20, 66], and [10] suggests it is the best formulation for sparse graphs, such as the D-Wave Chimera graph. However, this work indicates that IQP solvers have improved sufficiently that this conclusion should be revisited.
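For concreteness, the linearized model (10) can be assembled in a few lines with a mixed-integer solver. The sketch below uses the gurobipy API and assumes the QUBO coefficients from (7) are supplied as plain dictionaries; the names c_quad, c_lin and c0 are illustrative and do not come from the paper's reference implementation.

import gurobipy as gp
from gurobipy import GRB

def build_ilp(c_quad, c_lin, c0):
    # c_quad: {(i, j): c_ij}, c_lin: {i: c_i}, c0: constant offset from (7).
    m = gp.Model("ilp")
    x = {i: m.addVar(vtype=GRB.BINARY, name=f"x_{i}") for i in c_lin}
    y = {(i, j): m.addVar(vtype=GRB.BINARY, name=f"x_{i}_{j}") for (i, j) in c_quad}
    # Linearization of the product x_i * x_j (constraints 10b)
    for (i, j) in c_quad:
        m.addConstr(y[i, j] >= x[i] + x[j] - 1)
        m.addConstr(y[i, j] <= x[i])
        m.addConstr(y[i, j] <= x[j])
    # Objective (10a)
    m.setObjective(
        gp.quicksum(c_quad[e] * y[e] for e in c_quad)
        + gp.quicksum(c_lin[i] * x[i] for i in c_lin)
        + c0,
        GRB.MINIMIZE,
    )
    return m, x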
Although complete search algorithms are helpful in the validation of QA hardware [6, 16], it is broadly accepted that local search algorithms are the most appropriate point of computational comparison to QA methods [1]. Given that a comprehensive enumeration of local search methods would be a monumental undertaking, this work focuses on representatives from four distinct algorithmic categories including greedy, message passing, Markov Chain Monte Carlo, and large neighborhood search.
Greedy (GRD)
The first heuristic algorithm considered by this work is a Steepest Coordinate Descent (SCD) greedy initialization approach. This algorithm assigns the variables one-by-one, always taking the assignment that minimizes the objective value. Specifically, the SCD approach begins with unassigned values, i.e., \(\sigma _{i} = 0 ~\forall i \in {\mathcal {N}}\), and then repeatedly applies the following assignment rule until all of the variables have been assigned a value of − 1 or 1:
$$ \begin{array}{@{}rcl@{}} i, v &=& \underset{i \in {\mathcal{N}}, v \in \{-1, 1\}}{\text{argmin}} E(\sigma_{1}, \ldots, \sigma_{i-1}, v, \sigma_{i+1}, \ldots,\sigma_{N}) \end{array} $$
$$ \begin{array}{@{}rcl@{}} \sigma_{i} &=& v \end{array} $$
In each application, ties in the argmin are broken at random, giving rise to a potentially stochastic outcome of the heuristic. Once all of the variables have been assigned, the algorithm is repeated until a runtime limit is reached and only the best solution found is returned. Although this approach is very simple, it can be effective in Ising models with minimal amounts of frustration.
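A minimal sketch of this greedy rule is given below, assuming the instance is stored as a field dictionary h and a symmetric adjacency dictionary of couplings J_adj; these names and the tie-breaking details are illustrative and do not reproduce the reference implementation (grd_scd.jl).

import random

def scd_greedy(h, J_adj, rng=None):
    # h: {i: h_i}; J_adj: {i: {j: J_ij}} symmetric adjacency of couplings.
    rng = rng or random.Random(0)
    sigma = {i: 0 for i in h}            # 0 marks an unassigned spin
    unassigned = set(h)
    while unassigned:
        best = None
        for i in unassigned:
            # Assigning sigma_i = v changes the energy by v * local_field,
            # because unassigned neighbours currently contribute 0.
            local_field = h[i] + sum(J_adj[i][j] * sigma[j] for j in J_adj[i])
            for v in (-1, 1):
                candidate = (v * local_field, rng.random(), i, v)   # random tie-breaking
                if best is None or candidate < best:
                    best = candidate
        _, _, i, v = best
        sigma[i] = v
        unassigned.discard(i)
    return sigma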
Message Passing (MP)
The second algorithm considered by this work is a message-based Min-Sum (MS) algorithm [30, 60], which is an adaptation of the celebrated Belief Propagation algorithm for solving minimization problems on networks. A key property of the MS approach is its ability to identify the global minimum of cost functions with a tree dependency structure between the variables; i.e., if no cycles are formed by the interactions in \(\mathcal {E}\). In the more general case of loopy dependency structures [60], MS provides a heuristic minimization method. It is nevertheless a popular technique favored in communication systems for its low computational cost and notable performance on random tree-like networks [73].
For the optimization model considered here, as in (2), the MS messages, \(\epsilon _{i \rightarrow j}\), are computed iteratively along directed edges \(i \rightarrow j\) and \(j \rightarrow i\) for each edge \((i,j)\in \mathcal {E}\), according to the Min-Sum equations:
$$ \begin{array}{@{}rcl@{}} {\epsilon}_{i \rightarrow j}^{t+1} = \text{SSL}(2\boldsymbol{J}_{ij},2\boldsymbol{h}_{i} + \underset{k \in \mathcal{E}(i) \setminus j}{\sum}{\epsilon}_{k \rightarrow i}^{t} ) \end{array} $$
$$ \begin{array}{@{}rcl@{}} \text{SSL}(x,y) = \min(x,y)-\min(-x,y) -x \end{array} $$
Here, \(\mathcal {E}(i) \setminus j\) denotes the neighbors of i without j and SSL denotes the Symmetric Saturated Linear transfer function. Once a fixed point of the message update (12a) is obtained or a prescribed runtime limit is reached, the MS algorithm outputs a configuration based on the following formula:
$$ \begin{array}{@{}rcl@{}} \sigma_{i} = - \text{sign}\left( 2\boldsymbol{h}_{i} + \underset{k \in \mathcal{E}(i)}{\sum}\epsilon_{k \rightarrow i} \right) \end{array} $$
By convention, if the argument of the sign function is 0, a value of 1 or − 1 is assigned randomly with equal probability.
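The update and decision rules above translate almost directly into code. The sketch below is illustrative only (it assumes the adjacency dictionary is symmetric, runs a fixed number of iterations instead of detecting a fixed point, and breaks ties deterministically), and is not the paper's mp_ms.py implementation.

def ssl(x, y):
    # Symmetric Saturated Linear transfer function: min(x, y) - min(-x, y) - x
    return min(x, y) - min(-x, y) - x

def min_sum(h, J_adj, iterations=100):
    # h: {i: h_i}; J_adj: {i: {j: J_ij}} symmetric, so eps[(k, i)] always exists.
    eps = {(i, j): 0.0 for i in J_adj for j in J_adj[i]}     # messages i -> j
    for _ in range(iterations):
        new_eps = {}
        for (i, j) in eps:
            incoming = sum(eps[(k, i)] for k in J_adj[i] if k != j)
            new_eps[(i, j)] = ssl(2.0 * J_adj[i][j], 2.0 * h[i] + incoming)
        eps = new_eps
    sigma = {}
    for i in J_adj:
        field = 2.0 * h[i] + sum(eps[(k, i)] for k in J_adj[i])
        sigma[i] = 1 if field <= 0 else -1   # -sign(field); a tie at 0 should be broken at random
    return sigma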
Markov Chain Monte Carlo (MCMC)
MCMC algorithms include a wide range of methods to generate samples from complex probability distributions. A natural Markov Chain for the Ising model is given by Glauber dynamics, where the value of each variable is updated according to its conditional probability distribution. Glauber dynamics is often used as a method for producing samples from Ising models at finite temperature [33]. This work considers the so-called Zero Temperature Glauber Dynamics (GD) algorithm, which is the optimization variant of the Glauber dynamics sampling method, and which is also used in physics as a simple model for describing avalanche phenomena in magnetic materials [23]. From the optimization perspective, this approach is a single-variable greedy local search algorithm.
A step t of the GD algorithm consists of checking each variable \(i\in \mathcal {N}\) in a random order and comparing the objective cost of the current configuration σt to that of the configuration with the variable \({{\sigma }_{i}^{t}}\) flipped. If the objective value is lower in the flipped configuration, i.e., \(E(\underline {\sigma }^{t}) > E({{\sigma }_{1}^{t}},\ldots ,-{{\sigma }_{i}^{t}},\ldots ,{{\sigma }_{N}^{t}})\), then the flipped configuration is selected as the new current configuration \(\underline {\sigma }^{t+1} = ({{\sigma }_{1}^{t}},\ldots ,-{{\sigma }_{i}^{t}},\ldots ,{{\sigma }_{N}^{t}})\). When the objective difference is 0, the previous or new configuration is selected randomly with equal probability. If, after visiting all of the variables, no single-variable flip can improve the current assignment, the configuration is identified as a local minimum and the algorithm is restarted with a new randomly generated configuration. This process is repeated until a runtime limit is reached.
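A compact sketch of this zero-temperature dynamics with random restarts is shown below; the sweep budget, data layout, and the omission of best-solution bookkeeping are simplifications for illustration rather than the paper's mcmc_gd.py implementation.

import random

def glauber_descent(h, J_adj, sweeps=1000, rng=None):
    # h: {i: h_i}; J_adj: {i: {j: J_ij}}.  Best-solution tracking is omitted for brevity.
    rng = rng or random.Random(0)
    sigma = {i: rng.choice((-1, 1)) for i in h}
    for _ in range(sweeps):
        improved = False
        order = list(sigma)
        rng.shuffle(order)
        for i in order:
            # Flipping sigma_i changes the energy by -2 * sigma_i * local_field.
            local_field = h[i] + sum(J_adj[i][j] * sigma[j] for j in J_adj[i])
            delta = -2.0 * sigma[i] * local_field
            if delta < 0 or (delta == 0 and rng.random() < 0.5):
                sigma[i] = -sigma[i]
                if delta < 0:
                    improved = True
        if not improved:                 # local minimum reached: random restart
            sigma = {i: rng.choice((-1, 1)) for i in h}
    return sigma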
Large Neighborhood Search (LNS)
The state-of-the-art meta-heuristic for benchmarking D-Wave-based QA algorithms is the Hamze-Freitas-Selby (HFS) algorithm [37, 70]. The core idea of this algorithm is to extract low treewidth subgraphs of the given Ising model and then use dynamic programming to quickly compute the optimal configuration of these subgraphs. This extract and optimize process is repeated until a specified time limit is reached. This approach has demonstrated remarkable results in a variety of benchmarking studies [16, 44, 48, 49, 65]. The notable success of this solver can be attributed to three key factors. First, it is highly specialized to solving Ising models on the Chimera graphs (i.e., Fig. 1), a topological structure that is particularly amenable to low treewidth subgraphs. Second, it leverages integer arithmetic instead of floating point, which provides a significant performance improvement but also leads to notable precision limits. Third, the baseline implementation is a highly optimized C code [69], which runs at near-ideal performance.
Quantum annealing
Extending the theoretical overview from Section 3, the following implementation details are required to leverage the D-Wave 2000Q platform as a reliable optimization tool. The QA algorithm considered here consists of programming the Ising model of interest, repeating the annealing process some number of times (i.e., num_reads), and returning the lowest-energy solution found among all of those replicates. No correction or solution polishing is applied in this solver. As the number of reads increases (e.g., from 10 to 10,000), both the solution quality and the total runtime of the QA algorithm increase. It is important to highlight that the D-Wave platform provides a wide variety of parameters to control the annealing process (e.g., annealing time, qubit offsets, custom annealing schedules, etc.). In the interest of simplicity and reproducibility, this work does not leverage any of those advanced features and it is likely that the results presented here would be further improved by careful utilization of those additional capabilities [2, 50, 56].
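For illustration, this best-of-num_reads strategy can be written compactly with D-Wave's Ocean SDK. The sketch below is hedged: it is not the qa_dwave.py solver used in this work, it assumes the instance is already defined on the QPU's native graph (so no embedding is needed), parameter names follow the Ocean documentation and may differ across SDK versions, and the gauge transformations applied every 100 reads are omitted.

from dwave.system import DWaveSampler

def qa_best_of_reads(h, J, num_reads=1000):
    # h: {i: h_i}, J: {(i, j): J_ij} defined directly on the QPU's working graph.
    sampler = DWaveSampler()                       # connects to the configured QPU solver
    sampleset = sampler.sample_ising(h, J,
                                     num_reads=num_reads,
                                     annealing_time=5)  # 5-microsecond anneals
    best = sampleset.first                         # lowest-energy sample across all reads
    return dict(best.sample), best.energy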
Note that all of the problems considered in this work have been generated to meet the implementation requirements discussed in Section 3.1 for a specific D-Wave chip deployed at Los Alamos National Laboratory. Consequently, no problem transformations are required to run the instances on the target hardware platform. Most notably, no embedding or rescaling is required. This approach is standard practice in QA evaluation studies and the arguments for it are discussed at length in [15, 16].
Structure detection experiments
This section presents the primary result of this work. Specifically, it analyzes three crafted optimization problems of increasing complexity—the Biased Ferromagnet, Frustrated Biased Ferromagnet, and Corrupted Biased Ferromagnet—all of which highlight the potential for QA to quickly identify the global structural properties of these problems. The algorithm performance analysis focuses on two key metrics: solution quality over time (i.e., performance profile) and the minimum Hamming distance to any optimal solution over time. The Hamming distance metric is particularly informative in this study as the problems have been designed to have local minima that are very close to the global optimum in terms of objective value, but are very distant in terms of Hamming distance. The core finding is that QA produces solutions that are close to global optimality, both in terms of objective value and Hamming distance.
Problem generation
All problems considered in this work are defined by simple probabilistic graphical models and are generated on a specific D-Wave hardware graph. To avoid bias towards one particular random instance, 100 instances are generated and the mean over this collection of instances is presented. Additionally, a random gauge transformation is applied to every instance to obfuscate the optimal solution and mitigate artifacts from the choice of initial condition in each solution approach.
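The gauge (spin-reversal) transformation mentioned above has a compact form: drawing s_i uniformly from {−1, +1} and mapping h_i → s_i h_i and J_ij → s_i s_j J_ij leaves the energy landscape unchanged up to the relabeling σ_i → s_i σ_i. A minimal sketch is shown below; the function and data-structure names are illustrative assumptions, not the dwig implementation.

import random

def random_gauge(h, J, rng=None):
    # Apply a random spin-reversal (gauge) transformation: with s_i in {-1, +1},
    # h'_i = s_i * h_i and J'_ij = s_i * s_j * J_ij, so E'(s * sigma) = E(sigma).
    rng = rng or random.Random()
    s = {i: rng.choice((-1, 1)) for i in h}
    h_g = {i: s[i] * h[i] for i in h}
    J_g = {(i, j): s[i] * s[j] * J[(i, j)] for (i, j) in J}
    return h_g, J_g, s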
Computation Environment
The CPU-based algorithms are run on HPE ProLiant XL170r servers with dual Intel 2.10GHz CPUs and 128GB memory. Gurobi 9.0 [35] was used for solving the Integer Programming (ILP/IQP) formulations. All of the algorithms were configured to only leverage one thread and the reported runtime reflects the wall clock time of each solver's core routine and does not include pre-processing or post-processing of the problem data.
The QA computation is conducted on a D-Wave 2000Q quantum annealer deployed at Los Alamos National Laboratory. This computer has a 16-by-16 Chimera cell topology with random omissions; in total, it has 2032 spins (i.e., \(\mathcal {N}\)) and 5924 couplers (i.e., \(\mathcal {E}\)). The hardware is configured to execute 10 to 10,000 annealing runs using a 5-microsecond annealing time per run and a random gauge transformation every 100 runs, to mitigate the various sources of bias in the problem encoding. The reported runtime of the QA hardware reflects the amount of on-chip time used; it does not include the overhead of communication or scheduling of the computation, which takes about one to two seconds. Given a sufficient engineering effort to reduce overheads, on-chip time would be the dominating runtime factor.
The biased ferromagnet
$$ \begin{array}{@{}rcl@{}} \boldsymbol{J}_{ij} &=& -1.00 ~\forall i,j \in {\mathcal{E}};\\ P(\boldsymbol{h}_{i} &=& 0.00) = 0.990 , P(\boldsymbol{h}_{i} = -1.00) = 0.010 ~\forall i \in {\mathcal{N}} \end{array} $$
(BFM)
Inspired by the Ferromagnet model, this study begins with the Biased FerroMagnet (BFM) model—a toy problem to build an intuition for a type of structure that QA can exploit. Notice that this model has no frustration and has a few linear terms that bias it to prefer σi = 1 as the global optimal solution. W.h.p. σi = 1 is the unique optimal solution and the assignment of σi = − 1 is a local minimum that is sub-optimal by \(0.02 \cdot |{\mathcal {N}}|\) in expectation and has a maximal Hamming distance of \(|{\mathcal {N}}|\). The local minimum is an attractive solution because it is nearly optimal; however, it is hard for a local search solver to escape from it due to its Hamming distance from the true global minimum. This instance presents two key algorithmic challenges: first, one must effectively detect the global structure (i.e., all the variables should take the same value); second, one must correctly discriminate between the two nearly optimal solutions that are very distant from one another.
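The stated expected sub-optimality gap can be checked directly: both uniform assignments see exactly the same ferromagnetic coupling energy, so the gap is determined by the linear terms alone,

$$ \mathbb{E}\big[E(\underline{-1}) - E(\underline{+1})\big] = -2~\mathbb{E}\Big[\underset{i \in {\mathcal{N}}}{\sum} \boldsymbol{h}_{i}\Big] = -2~|{\mathcal{N}}|~\big(0.010 \cdot (-1.00)\big) = 0.02 \cdot |{\mathcal{N}}|, $$

which matches the \(0.02 \cdot |{\mathcal {N}}|\) figure quoted above.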
Figure 2 presents the results of running all of the algorithms from Section 4 on the BFM model. The key observations are as follows:
Both the greedy (i.e., SCD) and relaxation-based solvers (i.e., IQP/ILP/MS) correctly identify this problem's structure and quickly converge on the globally optimal solution (Fig. 2, top-right).
Neighborhood-based local search methods (e.g., GD) tend to get stuck in the local minimum of this problem. Even advanced local search methods (e.g., HFS) may miss the global optimum in rare cases (Fig. 2, top).
The Hamming distance analysis indicates that QA has a high probability (i.e., greater than 0.9) of finding the exact global optimal solution (Fig. 2, bottom-right). This explains why just 20 runs are sufficient for QA to find the optimal solution w.h.p. (Fig. 2, top-right).
A key observation from this toy problem is that making a continuous relaxation of the problem (e.g., IQP/ILP/MS) can help algorithms detect global structure and avoid local minima that present challenges for neighborhood-based local search methods (e.g., GD/LNS). QA has comparable performance to these relaxation-based methods, both in terms of solution quality and runtime, and does appear to detect the global structure of the BFM problem class.
Performance profile (top) and Hamming Distance (bottom) analysis for the Biased Ferromagnet instance
However encouraging these results are, the BFM problem is a straw-man that is trivial for five of the seven solution methods considered here. The next experiment introduces frustration to the BFM problem to understand how that impacts problem difficulty for the solution methods considered.
The frustrated biased ferromagnet
$$ \begin{array}{@{}rcl@{}} \boldsymbol{J}_{ij} &=& -1.00 ~\forall i,j \in {\mathcal{E}}\\ P(\boldsymbol{h}_{i} &=& 0.00) = 0.970, P(\boldsymbol{h}_{i} = -1.00) = 0.020, P(\boldsymbol{h}_{i} = 1.00) = 0.010 ~\forall i \in {\mathcal{N}} \end{array} $$
(FBFM)
The next step considers a slightly more challenging problem called the Frustrated Biased Ferromagnet (FBFM), which is a specific case of the random field Ising model [21] and similar in spirit to the Clause Problems considered in [57]. The FBFM deviates from the BFM by introducing frustration among the linear terms of the problem. Notice that on average 2% of the decision variables locally prefer σi = 1 while 1% prefer σi = − 1. Throughout the optimization process these two competing preferences must be resolved, leading to frustration. W.h.p. this model has the same unique global optimal solution as the BFM, which occurs when σi = 1. The opposite assignment of σi = − 1 remains a local minimum that is sub-optimal by \(0.02 \cdot |{\mathcal {N}}|\) in expectation and has a maximal Hamming distance of \(|{\mathcal {N}}|\). By design, the energy difference of these two extreme assignments is consistent with BFM, to keep the two problem classes as similar as possible.
Figure 3 presents the same performance analysis for the FBFM model. The key observations are as follows:
When compared to BFM, FBFM presents an increased challenge for the simple greedy (i.e., SCD) and local search (i.e., GD/MS) algorithms.
Although the SCD algorithm is worse than HFS in terms of objective quality, it is comparable to or better than HFS in terms of Hamming distance (Fig. 3, bottom-left). This highlights how these two metrics capture different properties of the underlying algorithms.
The results of QA and the relaxation-based solvers (i.e., IQP/ILP), are nearly identical to the BFM case, suggesting that this type of frustration does not present a significant challenge for these solution approaches.
These results suggest that frustration in the linear terms alone (i.e., h) is not sufficient for building optimization tasks that are non-trivial for a wide variety of general purpose solution methods. In the next study, frustration in the quadratic terms (i.e., J) is incorporated to increase the difficulty for the relaxation-based solution methods.
Performance profile (top) and Hamming Distance (bottom) analysis for the Frustrated Biased Ferromagnet instance
The corrupted biased ferromagnet
$$ \begin{array}{@{}rcl@{}} P(\boldsymbol{J}_{ij} &=& -1.00) = 0.625, P(\boldsymbol{J}_{ij} = 0.20) = 0.375 ~\forall i,j \in {\mathcal{E}}\\ P(\boldsymbol{h}_{i} &=& 0.00) = 0.970, P(\boldsymbol{h}_{i} = -1.00) = 0.020 , P(\boldsymbol{h}_{i} = 1.00) = 0.010 ~\forall i \in {\mathcal{N}} \end{array} $$
(CBFM)
The inspiration for this instance is to leverage insights from the theory of spin glasses to build more computationally challenging problems. The core idea is to carefully corrupt the ferromagnetic problem structure with frustrating anti-ferromagnetic links that obfuscate the ferromagnetic properties without completely destroying them. A parameter sweep of different corruption values yields the Corrupted Biased FerroMagnet (CBFM) model, which retains the global structure that σi = 1 is a near globally optimal solution w.h.p., while obfuscating this property with misleading anti-ferromagnetic links and frustrated local fields.
Figure 4 presents a similar performance analysis for the CBFM model. The key observations are as follows:
In contrast to the BFM and FBFM cases, solvers that leverage continuous relaxations, such as IQP and ILP, do not immediately identify this problem's structure and can take between 50 and 700 seconds to find the globally optimal solution (Fig. 4, top-left).
The advanced local search method (i.e., HFS) consistently converges to a global optimum (Fig. 4, top-right), which does not always occur in the BFM and FBFM cases.
Although the MS algorithm is notably worse than GD in terms of objective quality, it is notably better in terms of Hamming distance. This further indicates how these two metrics capture different properties of the underlying algorithms (Fig. 4, bottom-left).
Although this instance presents more of a challenge for QA than BFM and FBFM, QA still finds the global minimum with high probability; 500–1000 runs are sufficient to find a near-optimal solution in all cases. This is 10 to 100 times faster than the next-best algorithm, HFS (Fig. 4, top-right).
The Hamming distance analysis suggests that the success of the QA approach stems from its significant probability (i.e., greater than 0.12) of returning a solution that has a Hamming distance of less than 1% from the global optimal solution (Fig. 4, bottom-right).
The overarching trend of this study is that QA is successful in detecting the global structure of the BFM, FBFM, and CBFM instances (i.e., low Hamming distance to optimal, w.h.p.). Furthermore, it can do so notably faster than all of the other algorithms considered here. This suggests that, in this class of problems, QA brings a unique value that is not captured by the other algorithms considered. Similar to how the relaxation methods succeed at the BFM and FBFM instances, we hypothesize that the success of QA on the CBFM instance is driven by the solution search occurring in a smooth high-dimensional continuous space as discussed in Section 3. In this instance class, QA may also benefit from so-called finite-range tunnelling effects, which allow QA to change the state of multiple variables simultaneously (i.e., global moves) [22, 27]. Regardless of the underlying cause, QA's performance on the CBFM instance is particularly notable and worthy of further investigation.
Performance profile (top) and Hamming Distance (bottom) analysis for the Corrupted Biased Ferromagnet instance
Bias structure variants
As part of the design process, uniform-field variants of the problems proposed herein were also considered. These variants featured weaker and more uniformly distributed bias terms. Specifically, the term P(hi = − 1.00) = 0.010 was replaced with P(hi = − 0.01) = 1.000. Upon continued analysis, it was observed that the stronger and less-uniform bias terms resulted in more challenging cases for all of the solution methods considered, and hence, were selected as the preferred design for the problems proposed by this work. In the interest of completeness, Appendix A provides a detailed analysis of the uniform-field variants of the BFM, FBFM, and CBFM instances to illustrate how this problem variant impacts the performance of the solution methods considered here.
A comparison to other instance classes
The CBFM problem was designed to have specific structural properties that are beneficial to the QA approach. It is important to note that not all instance classes have such an advantageous structure. This point is highlighted in Fig. 5, which compares three landmark problem classes from the QA benchmarking literature: Weak-Strong Cluster Networks (WSCN) [22], Frustrated Cluster Loops with Gadgets (FCLG) [4], and Random Couplers and Fields (RANF-1) [16, 20]. These results show that D-Wave's current 2000Q hardware platform can be outperformed by local and complete search methods on some classes of problems. However, it is valuable to observe that these previously proposed instance classes are either relatively easy for local search algorithms (i.e., WSCN and RANF) or relatively easy for complete search algorithms (i.e., WSCN and FCLG), neither of which is an ideal property for conducting benchmarking studies. To the best of our knowledge, the proposed CBFM problem is the first instance class that presents a notable computational challenge for both local search and complete search algorithms.
Performance profiles of other problem classes from the literature
Quantum annealing as a primal heuristic
QA's notable ability to find high-quality solutions to the CBFM problem suggests the development of hybrid algorithms, which leverage QA for finding upper bounds within a complete search method that can also provide global optimality proofs. A simple version of such an approach was developed in which 1000 runs of QA were used to warm-start the IQP solver with a high-quality initial solution. The results of this hybrid approach are presented in Fig. 6. The IQP solver clearly benefits from the warm-start on short time scales. However, it does not lead to a notable reduction in the time required to produce the optimality proof. This suggests that a state-of-the-art hybrid complete search solver needs to combine QA for finding upper bounds with more sophisticated lower-bounding techniques, such as those presented in [6, 44].
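A hedged sketch of the warm-start mechanism is given below using gurobipy's MIP-start attribute; the model and variable objects (e.g., built as in Section 4.1) and the spin-to-binary conversion are assumptions for illustration and do not reproduce the exact pipeline used to produce Fig. 6.

import gurobipy as gp
from gurobipy import GRB

def warm_start(model, x_vars, qa_spins):
    # model, x_vars: an IQP/ILP model over binary variables x_i;
    # qa_spins: {i: sigma_i in {-1, +1}} returned by the QA solver.
    for i, var in x_vars.items():
        # The spin-to-binary convention below (x = (1 - sigma) / 2) is an assumption;
        # it must match the transformation used to derive (7) from (2).
        var.Start = (1 - qa_spins[i]) / 2
    model.optimize()
    return model.ObjVal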
Performance profile of Warm-Starting IQP with QA solutions
This work explored how quantum annealing hardware might be able to support heuristic algorithms in finding high-quality solutions to challenging combinatorial optimization problems. A careful analysis of quantum annealing's performance on the Biased Ferromagnet, Frustrated Biased Ferromagnet, and Corrupted Biased Ferromagnet problems with more than 2,000 decision variables suggests that this approach is capable of quickly identifying the structure of the optimal solution to these problems, while a variety of local and complete search algorithms struggle to identify this structure. This result suggests that integrating quantum annealing into meta-heuristic algorithms could yield unique variable assignments and increase the discovery of high-quality solutions.
Although demonstration of a runtime advantage was not the focus of this work, the success of quantum annealing on the Corrupted Biased Ferromagnet problem compared to other solution methods is a promising outcome for QA and warrants further investigation. An in-depth theoretical study of the Corrupted Biased Ferromagnet case could provide deeper insights into the structural properties that quantum annealing is exploiting in this problem and would provide additional insights into the classes of problems that have the best chance to demonstrate an unquestionable computational advantage for quantum annealing hardware. It is important to highlight that while the research community is currently searching for an unquestionable computational advantage for quantum annealing hardware by any means necessary, significant additional research will be required to bridge the gap between contrived hardware-specific optimization tasks and practical optimization applications.
Availability of data and material
The data used to generate the figures in this work is not explicitly archived. It can be recreated using the software that is available as open-source.
Code availability
The core software tools that were used in this work are available as open-source under a BSD license. The dwig software is available at https://github.com/lanl-ansi/dwig and can be used to generate instances of the problem classes considered in this work. The optimization methods considered in this work are archived in the ising-solvers repository, which is available at https://github.com/lanl-ansi/ising-solvers. One of the solution methods considered in this work requires a commercial software licence.
Aaronson, S. (2017). Insert d-wave post here. Published online at http://www.scottaaronson.com/blog/?p=3192. Accessed 28 Apr 2017.
Adame, J.I., & McMahon, P.L. (2020). Inhomogeneous driving in quantum annealers can result in orders-of-magnitude improvements in performance. Quantum Science and Technology, 5(3), 035011. https://doi.org/10.1088/2058-9565/ab935a. https://iopscience.iop.org/article/10.1088/2058-9565/ab935a.
Albash, T., & Lidar, D.A. (2018). Adiabatic quantum computation. Reviews of Modern Physics, 90(1), 015,002.
Albash, T., & Lidar, D.A. (2018). Demonstration of a scaling advantage for a quantum annealer over simulated annealing. Physical Review X, 8(031), 016. https://doi.org/10.1103/PhysRevX.8.031016.
Arute, F., Arya, K., Babbush, R., Bacon, D., Bardin, J.C., Barends, R., Biswas, R., Boixo, S., Brandao, F.G.S.L., & et al. (2019). Quantum supremacy using a programmable superconducting processor. Nature, 574(7779), 505–510. https://doi.org/10.1038/s41586-019-1666-5.
Baccari, F., Gogolin, C., Wittek, P., & Acín, A. (2018). Verification of quantum optimizers. arXiv:1808.01275.
Barahona, F. (1982). On the computational complexity of ising spin glass models. Journal of Physics A: Mathematical and General, 15(10), 3241.
Bian, Z., Chudak, F., Israel, R., Lackey, B., Macready, W.G., & Roy, A. (2014). Discrete optimization using quantum annealing on sparse ising models. Frontiers in Physics, 2, 56. https://doi.org/10.3389/fphy.2014.00056.
Bian, Z., Chudak, F., Israel, R.B., Lackey, B., Macready, W.G., & Roy, A. (2016). Mapping constrained optimization problems to quantum annealing with application to fault diagnosis. Frontiers in ICT, 3, 14. https://doi.org/10.3389/fict.2016.00014.
Billionnet, A., & Elloumi, S. (2007). Using a mixed integer quadratic programming solver for the unconstrained quadratic 0-1 problem. Mathematical Programming, 109(1), 55–68. https://doi.org/10.1007/s10107-005-0637-9.
Boixo, S., Ronnow, T.F., Isakov, S.V., Wang, Z., Wecker, D., Lidar, D.A., Martinis, J.M., & Troyer, M. (2014). Evidence for quantum annealing with more than one hundred qubits. Nature Physics, 10(3), 218–224. https://doi.org/10.1038/nphys2900.
Boros, E., & Hammer, P.L. (2002). Pseudo-boolean optimization. Discrete Applied Mathematics, 123 (1), 155–225. https://doi.org/10.1016/S0166-218X(01)00341-9. http://www.sciencedirect.com/science/article/pii/S0166218X01003419.
Brush, S.G. (1967). History of the lenz-ising model. Reviews of Modern Physics, 39, 883–893. https://doi.org/10.1103/RevModPhys.39.883.
Chmielewski, M., Amini, J., Hudek, K., Kim, J., Mizrahi, J., Monroe, C., Wright, K., & Moehring, D. (2018). Cloud-based trapped-ion quantum computing. In APS Meeting abstracts.
Coffrin, C., Nagarajan, H., & Bent, R. (2016). Challenges and successes of solving binary quadratic programming benchmarks on the DW2x QPU. Tech. rep. Los Alamos National Laboratory (LANL).
Coffrin, C., Nagarajan, H., & Bent, R. (2019). Evaluating ising processing units with integer programming. In Rousseau, L.M., & Stergiou, K. (Eds.) Integration of constraint programming, artificial intelligence, and operations research (pp. 163–181). Cham: Springer International Publishing.
Coffrin, C., & Pang, Y. (2019). ising-solvers. https://github.com/lanl-ansi/ising-solvers.
Coles, P.J., Eidenbenz, S., Pakin, S., Adedoyin, A., Ambrosiano, J., Anisimov, P., Casper, W., Chennupati, G., Coffrin, C., Djidjev, H., & et al. (2018). Quantum algorithm implementations for beginners. arXiv:1804.03719.
Cugliandolo, L.F. (2018). Advanced statistical physics: Frustration. https://www.lpthe.jussieu.fr/leticia/TEACHING/master2018/frustration18.pdf.
Dash, S. (2013). A note on qubo instances defined on chimera graphs. arXiv:1306.1202.
d'Auriac, J.A., Preissmann, M., & Rammal, R. (1985). The random field ising model: algorithmic complexity and phase transition. Journal de Physique Lettres, 46(5), 173–180.
Denchev, V.S., Boixo, S., Isakov, S.V., Ding, N., Babbush, R., Smelyanskiy, V., Martinis, J., & Neven, H. (2016). What is the computational value of finite-range tunneling?. Physical Review X, 6, 031,015. https://doi.org/10.1103/PhysRevX.6.031015.
Dhar, D., Shukla, P., & Sethna, J.P. (1997). Zero-temperature hysteresis in the random-field ising model on a bethe lattice. Journal of Physics A: Mathematical and General, 30(15), 5259.
Ding, J., Sly, A., & Sun, N. (2015). Proof of the satisfiability conjecture for large k. In Proceedings of the forty-seventh annual ACM symposium on Theory of computing, pp. 59–68. ACM.
Eagle, N., Pentland, A.S., & Lazer, D. (2009). Inferring friendship network structure by using mobile phone data. Proceedings of the national academy of sciences, 106(36), 15,274–15,278.
Traversa, F.L., & Di Ventra, M. (2018). Memcomputing integer linear programming. arXiv:1808.09999.
Farhi, E., Goldstone, J., Gutmann, S., Lapan, J., Lundgren, A., & Preda, D. (2001). A quantum adiabatic evolution algorithm applied to random instances of an np-complete problem. Science, 292(5516), 472–475. https://doi.org/10.1126/science.1057726. http://science.sciencemag.org/content/292/5516/472.
Farhi, E., Goldstone, J., Gutmann, S., & Sipser, M. (2018). Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106.
Feynman, R.P. (1982). Simulating physics with computers. International Journal of Theoretical Physics, 21(6), 467–488.
Fossorier, M.P., Mihaljevic, M., & Imai, H. (1999). Reduced complexity iterative decoding of low-density parity check codes based on belief propagation. IEEE Transactions on communications, 47(5), 673–680.
Fujitsu. (2018). Digital annealer. Published online at http://www.fujitsu.com/global/digitalannealer/. Accessed 26 Feb 2019.
Gallavotti, G. (2013). Statistical mechanics: A short treatise. Berlin: Springer Science & Business Media.
Glauber, R.J. (1963). Time-dependent statistics of the ising model. Journal of mathematical physics, 4(2), 294–307.
Grover, L.K. (1996). A fast quantum mechanical algorithm for database search. In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, pp. 212–219. ACM.
Gurobi Optimization, Inc. (2014). Gurobi optimizer reference manual. Published online at http://www.gurobi.com.
Hamerly, R., Inagaki, T., McMahon, P.L., Venturelli, D., Marandi, A., Onodera, T., Ng, E., Langrock, C., Inaba, K., Honjo, T., & et al. (2019). Experimental investigation of performance differences between coherent ising machines and a quantum annealer. Science Advances, 5(5), eaau0823.
Hamze, F., & de Freitas, N. (2004). From fields to trees. In Proceedings of the 20th conference on uncertainty in artificial intelligence, UAI '04, pp. 243–250. AUAI Press, Arlington, Virginia, United States. http://dl.acm.org/citation.cfm?id=1036843.1036873.
Haribara, Y., Utsunomiya, S., & Yamamoto, Y. (2016). A coherent ising machine for MAX-CUT problems: performance evaluation against semidefinite programming and simulated annealing, pp. 251–262. Springer Japan, Tokyo. https://doi.org/10.1007/978-4-431-55756-2_12.
Hopfield, J.J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences, 79(8), 2554–2558.
Inagaki, T., Haribara, Y., Igarashi, K., Sonobe, T., Tamate, S., Honjo, T., Marandi, A., McMahon, P.L., Umeki, T., Enbutsu, K., Tadanaga, O., Takenouchi, H., Aihara, K., Kawarabayashi, K.I., Inoue, K., Utsunomiya, S., & Takesue, H. (2016). A coherent ising machine for 2000-node optimization problems. Science, 354(6312), 603–606. https://doi.org/10.1126/science.aah4243. http://science.sciencemag.org/content/354/6312/603.
International Business Machines Corporation. (2017). Ibm building first universal quantum computers for business and science. Published online at https://www-03.ibm.com/press/us/en/pressrelease/51740.wss. Accessed 28 Apr 2017.
Isakov, S., Zintchenko, I., Rønnow, T., & Troyer, M. (2015). Optimised simulated annealing for ising spin glasses. Computer Physics Communications, 192, 265–271. https://doi.org/10.1016/j.cpc.2015.02.015. http://www.sciencedirect.com/science/article/pii/S0010465515000727.
Johnson, M.W., Amin, M.H., Gildert, S., Lanting, T., Hamze, F., Dickson, N., Harris, R., Berkley, A.J., Johansson, J., Bunyk, P., & et al. (2011). Quantum annealing with manufactured spins. Nature, 473(7346), 194–198.
Jünger, M., Lobe, E., Mutzel, P., Reinelt, G., Rendl, F., Rinaldi, G., & Stollenwerk, T. (2019). Performance of a quantum annealer for ising ground state computations on chimera graphs. arXiv:1904.11965.
Kadowaki, T., & Nishimori, H. (1998). Quantum annealing in the transverse ising model. Physical Review E, 58, 5355–5363. https://doi.org/10.1103/PhysRevE.58.5355.
Kalinin, K.P., & Berloff, N.G. (2018). Global optimization of spin hamiltonians with gain-dissipative systems. Scientific Reports, 8(1), 1–9.
Kielpinski, D., Bose, R., Pelc, J., Vaerenbergh, T.V., Mendoza, G., Tezak, N., & Beausoleil, R.G. (2016). Information processing with large-scale optical integrated circuits. In 2016 IEEE International conference on rebooting computing (ICRC), pp. 1–4. https://doi.org/10.1109/ICRC.2016.7738704.
King, A.D., Lanting, T., & Harris, R. (2015). Performance of a quantum annealer on range-limited constraint satisfaction problems. arXiv:1502.02098.
King, J., Yarkoni, S., Raymond, J., Ozfidan, I., King, A.D., Nevisi, M.M., Hilton, J.P., & McGeoch, C.C. (2017). Quantum annealing amid local ruggedness and global frustration. arXiv:1701.04579.
Lanting, T., King, A.D., Evert, B., & Hoskinson, E. (2017). Experimental demonstration of perturbative anticrossing mitigation using nonuniform driver hamiltonians. Physical Review A, 96(042), 322. https://doi.org/10.1103/PhysRevA.96.042322.
Leleu, T., Yamamoto, Y., McMahon, P.L., & Aihara, K. (2019). Destabilization of local minima in analog spin systems by correction of amplitude heterogeneity. Physical Review Letters, 122(4), 040,607.
Lokhov, A.Y., Vuffray, M., Misra, S., & Chertkov, M. (2018). Optimal structure and parameter learning of ising models. Science Advances, 4(3), e1700,.
Lucas, A. (2014). Ising formulations of many np problems. Frontiers in Physics, 2, 5. https://doi.org/10.3389/fphy.2014.00005.
Mandrà, S., Zhu, Z., Wang, W., Perdomo-Ortiz, A., & Katzgraber, H.G. (2016). Strengths and weaknesses of weak-strong cluster problems: a detailed overview of state-of-the-art classical heuristics versus quantum approaches. Physical Review A, 94(022), 337. https://doi.org/10.1103/PhysRevA.94.022337.
Marbach, D., Costello, J.C., Küffner, R., Vega, N.M., Prill, R.J., Camacho, D.M., Allison, K.R., Aderhold, A., Bonneau, R., Chen, Y., & et al. (2012). Wisdom of crowds for robust gene network inference. Nature Methods, 9(8), 796.
Marshall, J., Venturelli, D., Hen, I., & Rieffel, E.G. (2019). Power of pausing: Advancing understanding of thermalization in experimental quantum annealers. Physical Review Applied, 11(044), 083. https://doi.org/10.1103/PhysRevApplied.11.044083.
McGeoch, C.C., King, J., Nevisi, M.M., Yarkoni, S., & Hilton, J. (2017). Optimization with clause problems. Published online at https://www.dwavesys.com/sites/default/files/14-1001A_tr_Optimization_with_Clause_Problems.pdf. Accessed 10 Feb 2020.
McGeoch, C.C., & Wang, C. (2013). Experimental evaluation of an adiabiatic quantum system for combinatorial optimization. In Proceedings of the ACM international conference on computing frontiers, CF '13, pp. 23:1–23:11. ACM, New York, NY, USA. https://doi.org/10.1145/2482767.2482797.
McMahon, P.L., Marandi, A., Haribara, Y., Hamerly, R., Langrock, C., Tamate, S., Inagaki, T., Takesue, H., Utsunomiya, S., Aihara, K., & et al. (2016). A fully-programmable 100-spin coherent ising machine with all-to-all connections. Science, p aah5178.
Mezard, M., Mezard, M., & Montanari, A. (2009). Information, physics, and computation. Oxford: Oxford University Press.
Mézard, M., & Virasoro, M.A. (1985). The microstructure of ultrametricity. Journal de Physique, 46(8), 1293–1307.
Mohseni, M., Read, P., Neven, H., Boixo, S., Denchev, V., Babbush, R., Fowler, A., Smelyanskiy, V., & Martinis, J. (2017). Commercialize quantum technologies in five years. Nature, 543, 171–174. http://www.nature.com/news/commercialize-quantum-technologies-in-five-years-1.21583.
Morcos, F., Pagnani, A., Lunt, B., Bertolino, A., Marks, D.S., Sander, C., Zecchina, R., Onuchic, J.N., Hwa, T., & Weigt, M. (2011). Direct-coupling analysis of residue coevolution captures native contacts across many protein families. Proceedings of the National Academy of Sciences, 108(49), E1293–E1301.
Panjwani, D.K., & Healey, G. (1995). Markov random field models for unsupervised segmentation of textured color images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(10), 939–954.
Parekh, O., Wendt, J., Shulenburger, L., Landahl, A., Moussa, J., & Aidun, J. (2015). Benchmarking adiabatic quantum optimization for complex network analysis. arXiv:1604.00319.
Puget, J.F. (2013). D-wave vs cplex comparison. part 2: Qubo. Published online. Accessed 28 Nov 2018.
Rieffel, E.G., Venturelli, D., O'Gorman, B., Do, M.B., Prystay, E.M., & Smelyanskiy, V.N. (2015). A case study in programming a quantum annealer for hard operational planning problems. Quantum Information Processing, 14(1), 1–36. https://doi.org/10.1007/s11128-014-0892-x.
Schneidman, E., Berry II, M.J., Segev, R., & Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087), 1007.
Selby, A. (2013). Qubo-chimera. https://github.com/alex1770/QUBO-chimera.
Selby, A. (2014). Efficient subgraph-based sampling of ising-type models with frustration. https://arxiv.org/abs/1409.3934.
Shor, P.W. (1994). Algorithms for quantum computation: Discrete logarithms and factoring. In Proceedings 35th annual symposium on foundations of computer science, pp. 124–134. Ieee.
Venturelli, D., Marchand, D.J.J., & Rojo, G. (2015). Quantum annealing implementation of job-shop scheduling. arXiv:1506.08479.
Vuffray, M. (2014). The cavity method in coding theory. Tech. rep. EPFL.
Vuffray, M., Misra, S., Lokhov, A., & Chertkov, M. (2016). Interaction screening: Efficient and sample-optimal learning of ising models. In Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., & Garnett, R. (Eds.) Advances in neural information processing systems 29. pp 2595–2603. Curran Associates, Inc.
Vuffray, M., Misra, S., & Lokhov, A.Y. (2019). Efficient learning of discrete graphical models. arXiv:1902.00600.
Yamaoka, M., Yoshimura, C., Hayashi, M., Okuyama, T., Aoki, H., & Mizuno, H. (2015). 24.3 20k-spin ising chip for combinational optimization problem with CMOS annealing. In 2015 IEEE International solid-state circuits conference - (ISSCC) digest of technical papers, pp. 1–3. https://doi.org/10.1109/ISSCC.2015.7063111.
Yoshimura, C., Yamaoka, M., Aoki, H., & Mizuno, H. (2013). Spatial computing architecture using randomness of memory cell stability under voltage control. In 2013 European conference on circuit theory and design (ECCTD), pp. 1–4. https://doi.org/10.1109/ECCTD.2013.6662276.
The research presented in this work was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project numbers 20180719ER and 20190195ER.
Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, IL, 61801, USA
Yuchen Pang
Advanced Network Science Initiative, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
Carleton Coffrin, Andrey Y. Lokhov & Marc Vuffray
Carleton Coffrin
Andrey Y. Lokhov
Marc Vuffray
Correspondence to Carleton Coffrin.
This article belongs to the Topical Collection: Special Issue on Constraint Programming, Artificial Intelligence, and Operations Research
Guest Editors: Emmanuel Hebrard and Nysret Musliu
Appendix A: Uniform fields
This appendix presents the results of the uniform-field variants of the BFM, FBFM, and CBFM instances and illustrates how uniform fields improve the performance of all solution methods considered. Specifically the uniform-field variants replace the bias term, P(hi = − 1.00) = 0.010, with the uniform variant P(hi = − 0.01) = 1.000. Throughout this study the field's probability distribution is modified such that there are no zero-value fields (i.e., P(hi = 0.00) = 0.000) and, for consistency with the BFM, FBFM, and CBFM cases presented in Section 5, the mean of the fields is selected to be -0.01 (i.e., μh = − 0.01) in all problems considered.
A1: The biased ferromagnet with uniform fields
$$ \begin{array}{@{}rcl@{}} \boldsymbol{J}_{ij} &=& -1.00 ~\forall i,j \in {\mathcal{E}};\\ \boldsymbol{h}_{i} &=& -0.01 ~\forall i \in {\mathcal{N}} \end{array} $$
(BFM-U)
The Biased Ferromagnet with Uniform Fields (BFM-U) is similar to the BFM case, but all of the linear terms are set identically to hi = − 0.01. All of the solution methods considered here perform well on this BFM-U case (see Fig. 7). However, the BFM-U case does appear to reduce both the optimality gap and Hamming distance metrics by a factor of two compared to the BFM case. This suggests that BFM-U is easier than BFM based on the metrics considered by this work.
Performance profile (top) and Hamming Distance (bottom) analysis for the Biased Ferromagnet with Uniform Fields instance
A.2: The frustrated biased ferromagnet with uniform fields
$$ \begin{array}{@{}rcl@{}} \boldsymbol{J}_{ij} &=& -1.00 ~\forall i,j \in {\mathcal{E}}\\ P(\boldsymbol{h}_{i} &=& -0.03) = 0.666, P(\boldsymbol{h}_{i} = 0.03) = 0.334 ~\forall i \in {\mathcal{N}} \end{array} $$
(FBFM-U)
The Frustrated Biased Ferromagnet with Uniform Fields (FBFM-U) is similar to the FBFM case, but two-thirds of the linear terms are set to hi = − 0.03 and one-third to hi = 0.03. Although the performance of most of the algorithms on FBFM-U is similar to FBFM (see Fig. 8), there are two notable deviations: the performance of the MS and SCD algorithms improves significantly in the FBFM-U case. This also suggests that FBFM-U is easier than FBFM based on the metrics considered by this work.
Performance profile (top) and Hamming Distance (bottom) analysis for the Frustrated Biased Ferromagnet with Uniform Fields instance
A.3: The corrupted biased ferromagnet with uniform fields
$$ \begin{array}{@{}rcl@{}} P(\boldsymbol{J}_{ij} &=& -1.00) = 0.625, P(\boldsymbol{J}_{ij} = 0.20) = 0.375 ~\forall i,j \in {\mathcal{E}}\\ P(\boldsymbol{h}_{i} &=& -0.03) = 0.666, P(\boldsymbol{h}_{i} = 0.03) = 0.334 ~\forall i \in {\mathcal{N}} \end{array} $$
(CBFM-U)
The Corrupted Biased Ferromagnet with Uniform Fields (CBFM-U) is similar to the CBFM case, but two-thirds of the linear terms are set to hi = − 0.03 and one-third to hi = 0.03. This case exhibits the most variation from the CBFM alternative (see Fig. 9). The key observations are as follows:
In CBFM-U, QA has a higher probability of finding a near-optimal solution (i.e., > 0.50) than in CBFM (i.e., < 0.20). However, it has a lower probability of finding the true optimal solution (Fig. 9, bottom-right). Due to this effect, QA finds a near-optimal solution to CBFM-U faster than to CBFM but never manages to converge to the optimal solution, as it does in CBFM.
The performance of the SCD algorithm improves significantly in the CBFM-U case. The SCD algorithm is among the best-performing methods for CBFM-U (< 0.5% optimality gap), while it has more than a 2% optimality gap in the CBFM case.
Overall, these results suggest that CBFM-U is easier than CBFM based on the metrics considered by this work. However, the subtle differences in the performance of QA between CBFM and CBFM-U suggest that varying the distribution of the linear terms in the CBFM family of problems could be a useful tool for developing a deeper understanding of how QA responds to different classes of optimization tasks.
Performance profile (top) and Hamming Distance (bottom) analysis for the Corrupted Biased Ferromagnet with Uniform Fields instance
Appendix B: Reference implementations
B.1: D-Wave instance generator (DWIG)
The problems considered in this work were generated with the open-source D-Wave Instance Generator tool, which is available at https://github.com/lanl-ansi/dwig. DWIG is a command line tool that uses D-Wave's hardware API to identify the topology of a specific D-Wave device and uses that graph for randomized problem generation. The following list provides the mapping of problems in this paper to the DWIG command line interface:
CBFM: dwig.py cbfm -rgt
CBFM-U: dwig.py cbfm -rgt -j1-val -1.00 -j1-pr 0.625 -j2-val 0.02 -j2-pr 0.375 -h1-val -0.03 -h1-pr 0.666 -h2-val 0.03 -h2-pr 0.334
FBFM: dwig.py cbfm -rgt -j1-val -1.00 -j1-pr 1.000 -j2-val 0.00 -j2-pr 0.000 -h1-val -1.00 -h1-pr 0.020 -h2-val 1.00 -h2-pr 0.010
FBFM-U: dwig.py cbfm -rgt -j1-val -1.00 -j1-pr 1.000 -j2-val 0.00 -j2-pr 0.000 -h1-val -0.03 -h1-pr 0.666 -h2-val 0.03 -h2-pr 0.334
BFM: dwig.py cbfm -rgt -j1-val -1.00 -j1-pr 1.000 -j2-val 0.00 -j2-pr 0.000 -h1-val -1.00 -h1-pr 0.010 -h2-val 0.00 -h2-pr 0.000
BFM-U: dwig.py cbfm -rgt -j1-val -1.00 -j1-pr 1.000 -j2-val 0.00 -j2-pr 0.000 -h1-val -0.01 -h1-pr 1.000 -h2-val 0.00 -h2-pr 0.000
B.2: Ising model optimization methods
The problems considered in this work were solved with the open-source Ising-Solvers scripts that are available at https://github.com/lanl-ansi/ising-solvers. These scripts include a combination of calls to executables, system libraries, and handmade heuristics. Each script conforms to a standard API for measuring runtime and reporting results. The following commands were used for each of the solution approaches presented in this work:
ILP (GRB): ilp_gurobi.py -ss -rtl <time_limit> -f <case file>
IQP (GRB): iqp_gurobi.py -ss -rtl <time_limit> -f <case file>
MCMC (GD): mcmc_gd.py -ss -rtl <time_limit> -f <case file>
MP (MS): mp_ms.py -ss -rtl <time_limit> -f <case file>
GRD (SCD): grd_scd.jl -s -t <time_limit> -f <case file>
LNS (HFS): lns_hfs.py -ss -rtl <time_limit> -f <case file>
QA (DW): qa_dwave.py -ss -nr <number of reads> -at 5 -srtr 100 -f <case file>
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Pang, Y., Coffrin, C., Lokhov, A.Y. et al. The potential of quantum annealing for rapid solution structure identification. Constraints (2020). https://doi.org/10.1007/s10601-020-09315-0
Discrete optimization
Ising model
Quadratic unconstrained binary optimization
Large neighborhood search
Belief propagation
Transcriptome analysis reveals global regulation in response to CO2 supplementation in oleaginous microalga Coccomyxa subellipsoidea C-169
Huifeng Peng1,
Dong Wei1,
Gu Chen1 &
Feng Chen1,2
Microalgae are emerging as suitable feedstock for renewable biofuel production and provide a promising way to alleviate the greenhouse gas CO2. Characterizing the metabolic pathways involved in the biosynthesis of energy-rich compounds and their global regulation upon elevated CO2 is necessary to explore the mechanism underlying rapid growth and lipid accumulation, so as to realize the full potential of these organisms as energy resources.
In the present study, 2 and 5 % CO2 increased the growth rate and lipid accumulation in the autotrophically cultured green alga Coccomyxa subellipsoidea C-169. Overall biomass productivity of 222 mg L−1 day−1 and fatty acid content of 48.5 % dry cell weight were attained in 2 % CO2, suggesting C-169 as a great candidate for lipid production via CO2 supplementation. Transcriptomic analysis of 2 % against 0.04 % CO2-cultured C-169 unveiled the global regulation of important metabolic processes. Other than enhancing gene expression in the Calvin cycle, C-169 upregulated the expression of phosphoenolpyruvate carboxylase, pyruvate carboxylase and carbamoyl-phosphate synthetase II to enhance the anaplerotic carbon assimilation reactions upon elevated CO2. Upregulation of ferredoxin and ferredoxin–NADP+ reductase implied that plentiful energy captured through photosynthesis was transferred through ferredoxin to sustain rapid growth and lipid accumulation. Genes involved in the glycolysis, TCA cycle and oxidative phosphorylation were predominantly upregulated, presumably to provide abundant intermediates and metabolic energy for anabolism. Coordinated upregulation of nitrogen acquisition and assimilation genes, together with activation of specific carbamoyl-phosphate synthetase and ornithine pathway genes, might help C-169 to maintain carbon/nitrogen balance upon elevated CO2. Significant downregulation of fatty acid degradation genes, as well as the upregulation of fatty acid synthesis genes at the later stage, might contribute to the tremendous lipid accumulation.
Global and collaborative regulation was employed by C-169 to assimilate more carbon and maintain carbon/nitrogen balance upon elevated CO2, which provide abundant carbon skeleton and affluent metabolic energy to sustain rapid growth and lipid accumulation. Data here for the first time bring significant insights into the regulatory profile of metabolism and acclimation to elevated CO2 in C-169, which provide important information for future metabolic engineering in the development of sustainable microalgae-based biofuels.
Given the fact that global demand for energy resources is continuously rising while traditional fuels (e.g. fossil fuels) are non-renewable and their combustion raises numerous environmental concerns, biofuels research worldwide has developed rapidly. Microalgae have emerged as an alternative feedstock for biofuels production with several advantages, such as high growth rate, high lipid yield and not competing with food crops or forestry for arable land and clean water [1, 2]. They comprise extremely diverse unicellular photosynthetic microorganisms that can fix CO2 and convert solar energy into chemical energy efficiently, though many issues and problems are yet to be solved for commercial feasibility [3]. On the other hand, worldwide concerns about the negative effects of climate change on humans and the environment have synergized the development of CO2 sequestration technologies, and culturing of microalgae for CO2 bio-fixation is one of the promising strategies [4]. The phenomenon that increased CO2 concentration can enhance the carbon fixation efficiency and growth rate of phytoplankton has been known for a long time [5, 6]. Additionally, studies have shown that elevated CO2 concentration increased lipid productivity as well as growth rate in various microalgae such as Nannochloropsis oculata, Phaeodactylum tricornutum and Chlorella vulgaris [7–9]. Thus, high levels of CO2 have been applied in more than 60 species of microalgae in attempts to enhance biomass production and lipid content, as well as alleviate greenhouse gas effects [8, 10–16]. However, most previous studies mainly focused on physiological properties of microalgae, such as growth rate, lipid content, and CO2 tolerance. As for mechanism exploration, a recent study on the diatom P. tricornutum measured the activities of seven key enzymes and their mRNA expression, and showed that the pentose phosphate pathway was upregulated to maintain the NADPH supply under high CO2 concentrations [8]. Another recent study indicated that cAMP signaling played an important role in coordinating gene expression in the diatom Thalassiosira pseudonana during acclimation to elevated CO2 [17]. To our knowledge, very few studies have reported a global analysis of the transcripts to reveal the mechanisms underlying rapid growth and lipid accumulation upon elevated CO2 in microalgae.
Coccomyxa subellipsoidea C-169, which will be referred to as C-169 hereafter, is an elongated non-motile unicellular green alga, approximately 3–9 μm in size. It belongs to Coccomyxa, Coccomyxaceae, Trebouxiophyceae, Chlorophyta. As the first sequenced eukaryotic microorganism from a polar environment, C-169 has a relatively fragile cell wall and contains more genes of enzymes involved in lipid biosynthesis and modification than any other sequenced chlorophyte [18]. Its great cold adaptation capacity, together with its characteristic genome, suggests it as an attractive and promising microalga for biofuel production. However, limited effort has been made to examine the feasibility of C-169 as a prospective strain for lipid production to date. Two recent investigations showed that nitrogen starvation increased lipid content in C-169, but the biomass productivity was low, which is common in nitrogen deprivation [19, 20]. Important questions, therefore, remain. Will C-169 have higher lipid productivity and biomass under high levels of CO2? Is there any pathway other than the Calvin cycle contributing to the assimilation of carbon upon elevated CO2? How can the carbon–nitrogen balance be maintained under high CO2 concentration? The answers to these questions are of vital importance to realize the potential of C-169 as an energy resource.
In the present study, we evaluated the growth rate and lipid content in C-169 that was subjected to three different concentrations of CO2 (0.04, 2 and 5 %). Then, transcriptomic analysis was performed to explore the mechanism of rapid growth and lipid accumulation under CO2 supplementation. Our research indicated that C-169 employed global regulation to assimilate carbon and balance carbon/nitrogen metabolism to sustain rapid growth and lipid accumulation. These results provide a sharp insight into the regulatory profile upon elevated CO2 and a rich source of genetic information for the development of C-169 as an oleaginous microalga.
Physiology and biochemical analysis under different CO2 concentrations
To investigate the effects of CO2 supplementation on the growth rate and lipid content of C-169, cells were incubated under 0.04, 2 and 5 % CO2. The cultivation was terminated on the 12th day when cells reached stationary phase. As shown in Fig. 1A, cell growth was markedly promoted by both 2 and 5 % CO2 supplementation, while 2 % CO2 was optimum for C-169 growth and presented a maximum cell growth rate of 0.56 day−1. The maximal biomass productivity for 2 % CO2 was 573 mg L−1 day−1, which was 627 and 88 % higher than that for 0.04 and 5 % CO2, respectively. The overall biomass productivity for 2 % CO2 was 222 mg L−1 day−1, which was 488 and 55 % higher than that for 0.04 and 5 % CO2, respectively. It is worth noting that 5 % CO2 incurred cellular aggregation to some degree (data not shown), which could partially explain the less vigorous cell growth in 5 % CO2 as compared to 2 % CO2. A remarkable increase in carbon fixation rate was observed in 2 % CO2 as compared to 0.04 % CO2, which was mainly attributed to the boosted biomass productivity rendered by 2 % CO2 (Table 1). The carbon fixation rate nearly doubled on the 4th day, and increased by six- to sevenfold on the 8th and 12th day.
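For reference, a specific growth rate such as the 0.56 day−1 quoted above is conventionally obtained from cell counts taken during the exponential phase; assuming the standard definition was used here,

$$ \mu = \frac{\ln N_{2} - \ln N_{1}}{t_{2} - t_{1}}, $$

where N1 and N2 are the cell densities at times t1 and t2.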
Physiological and biochemical characterization of C-169 under different CO2 concentrations. A Growth represented by cell numbers counted with a hemocytometer. B Neutral lipid and Chl a content represented by the fluorescence intensity of Nile Red-stained cells and the auto-fluorescence of Chl a throughout cultivation. All data are expressed as mean ± standard deviation (n = 3). C Microscopic images of 12-day cells captured by a confocal laser scanning microscope. a, b and c, respectively, show micrographs of algal cells grown with 0.04, 2 and 5 % CO2 under the bright-field channel; d, e and f show the corresponding cells stained with Nile Red and viewed under the fluorescence channel with blue-light excitation. D Fatty acid content under different CO2 concentrations on the 4th, 8th and 12th day
Table 1 Total carbon content and carbon fixation rate
Flow cytometry analysis was employed to track the content of chlorophyll and neutral lipid in C-169 throughout the cultivation (Fig. 1B). During the first 4 days, Chl a auto-fluorescence increased under all three conditions, but higher Chl a fluorescence was observed in 2 and 5 % CO2. At the later stage it remained relatively constant in 0.04 % CO2, while Chl a fluorescence decreased dramatically in 2 and 5 % CO2 after the 8th day. The decline in Chl fluorescence corresponded to the stages when growth became stationary in 2 and 5 % CO2. Reduction in photosynthesis as a result of long-term elevated CO2 has been reported in microalgae and higher plants, for example, Arabidopsis [21]. It might be due to light limitation and nutrient deficiency caused by rapid growth [15]. Lipid accumulation, reflected by the mean Nile Red (NR) fluorescence per cell, remained at a low level in 0.04 % CO2, whereas it was boosted dramatically in the cells cultured with 2 and 5 % CO2 after the 4th day and kept increasing steeply until the 12th day, the end of the observation period. The highest NR fluorescence was found in 2 % CO2. Additionally, fluorescence images of 12-day cells were taken with a confocal laser scanning microscope. Compared to cells cultured with 0.04 % CO2, CO2-supplemented cells were mainly occupied by lipid bodies (yellow) instead of chloroplasts (red) (Fig. 1C), which was consistent with the flow cytometry results (Fig. 1B). These results demonstrated that CO2 supplementation was an effective trigger for lipid accumulation in C-169.
Fatty acid methyl ester (FAME) content and profiles were further analyzed on the 4th, 8th, and 12th day (Fig. 1D; Table 2). Fatty acid (FA) content was comparable among the three conditions on the 4th day. It remained steadily low in 0.04 % CO2 on the 8th and 12th day, while it increased significantly in 2 and 5 % CO2 (Fig. 1D). The maximal FA content for 2 % CO2 reached 48.5 % of dry cell weight (DCW) on the 12th day, which was 411.4 and 15.4 % higher than that for 0.04 and 5 % CO2, respectively. FA profiles indicated that C16 and C18 were the main FA components in C-169, accounting for over 97 % of total FA (Table 2). The most remarkable change resulting from CO2 supplementation was observed in oleic acid (C18:1), whose percentage increased approximately sixfold compared with that of 0.04 % CO2 on the 12th day, while the percentages of C16:0, C18:2 and C18:3 decreased by nearly half. Such dramatic changes in FA profiles contributed to a lower degree of lipid unsaturation (DLU) with CO2 supplementation. The DLUs on the 12th day were 1.65, 1.20 and 1.19 ∇/mole for 0.04, 2 and 5 % CO2, respectively. The favored formation of C18:1 in C-169 was also observed when cells were subjected to nitrogen deprivation [19], suggesting a similarity between nitrogen deprivation and the later stage of CO2 supplementation.
Table 2 Fatty acid profiles under different CO2 concentrations on the 4th, 8th and 12th day (% total fatty acid)
To explore the regulatory mechanisms of boosted growth and enhanced lipid content in C-169 in response to CO2 supplementation at the transcriptomic level, cells from the 4th day were subjected to RNA extraction and digital gene expression (DGE) analysis. Three biological replicates from the 0.04 % CO2 group (termed AG) and the 2 % CO2 supplementation group (termed CG) were employed to guarantee statistically comparable and reliable DGE data. Raw data ranged from 5,750,178 to 9,999,804 reads per sample. After removing low-quality sequences and adaptor sequences, over 46 million clean reads were generated. Unambiguous reads that uniquely matched one gene of the reference genome with no more than one mismatch represented 9409 genes, approximately 96 % of the protein-coding genes (n = 9851) in C-169. These reads were counted and normalized to RPKM values. Saturation analysis confirmed that each sample had attained enough reads to approach saturation. The normalized read abundances from the AG and CG libraries are compared in 3D scatter plots (Fig. 2a, b). High correlations among the three biological replicates indicated a high degree of reproducibility, with average Pearson's correlation coefficients of r = 0.994 and r = 0.896 for the AG and CG replicates, respectively. A complete list of expression levels and fold changes for all genes is presented in Additional file 1.
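The replicate agreement quoted above can be reproduced with a simple calculation. The sketch below is illustrative only: the file name and column layout are hypothetical, and it simply averages the pairwise Pearson coefficients across the three RPKM columns of one group.

```python
# Minimal sketch: mean pairwise Pearson correlation among three replicate RPKM
# columns of one group (file name and column names are hypothetical).
from itertools import combinations

import pandas as pd

rpkm = pd.read_csv("ag_rpkm.csv", index_col=0)  # rows: genes; columns: AG_1..AG_3
pairs = list(combinations(rpkm.columns, 2))
r_values = [rpkm[a].corr(rpkm[b], method="pearson") for a, b in pairs]
print(f"mean pairwise Pearson r = {sum(r_values) / len(r_values):.3f}")
```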
Reproducibility and reliability of the transcriptomic data. a, b 3D scatter plots of normalized transcript read abundance. High correlations among the three biological replicates of AG (0.04 % CO2) (a) and CG (2 % CO2) (b) indicated a high degree of reproducibility of the transcriptomic data. c DGE data were validated by quantitative RT-PCR via Pearson's correlation coefficient
For brevity, unless otherwise stated, gene regulation hereafter refers to the transcript abundance fold change (FC) of CG relative to AG. To identify genes that were differentially regulated upon elevated CO2, mean RPKM values from the two groups were compared. Using the criteria of |log2 fold change| > 1 and FDR < 0.001, 1737 differentially expressed genes (DEGs) were identified, with 871 upregulated and 866 downregulated. To validate the DGE data, quantitative real-time PCR (qRT-PCR) analysis was performed on sixteen genes. Expression fold changes from qRT-PCR showed a high correlation with those from DGE (Fig. 2c), which further demonstrated the reliability of the transcriptomic data.
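For readers who wish to apply the same cut-offs to the expression table in Additional file 1, a minimal filtering sketch is given below; the input file name and column names are hypothetical, but the thresholds are those stated above (|log2 fold change| > 1, FDR < 0.001).

```python
# Minimal sketch of the DEG filter described above (column names hypothetical).
import numpy as np
import pandas as pd

expr = pd.read_csv("cg_vs_ag_rpkm.csv", index_col=0)  # columns: mean_AG, mean_CG, FDR
expr["log2FC"] = np.log2(expr["mean_CG"] / expr["mean_AG"])
degs = expr[(expr["log2FC"].abs() > 1) & (expr["FDR"] < 0.001)]
n_up = int((degs["log2FC"] > 1).sum())
n_down = int((degs["log2FC"] < -1).sum())
print(f"{len(degs)} DEGs: {n_up} upregulated, {n_down} downregulated")
```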
The transcripts detected in the DGE dataset were further classified based on Gene Ontology (GO) terms. In total, 3901 transcripts were assigned to 1110 GO term categories according to the Gene Ontology consortium [22] (Additional file 1). GO enrichment analysis of DEGs indicated that upregulated genes were enriched in ATPase, proton transporter, tricarboxylic acid (TCA) cycle and nitrogen compound metabolism terms, while the downregulated genes were significantly enriched in photosynthesis, including light harvesting, photosystem I and photosystem II (Fig. 3).
DEG-enriched GO terms. Regulatory profiles are presented as the percentages of upregulated (red), downregulated (blue) and non-DEG (gray) genes within each GO term category in which DEGs were significantly enriched (p < 0.01)
With the annotation information for C-169 transcripts available on the JGI genome portal (http://www.genome.jgi.doe.gov/pages/dynamicOrganismDownload.jsf?organism=PhytozomeV10#), an overview of metabolic pathway regulation in response to CO2 supplementation was generated via iPath2.0 [23] (Additional file 2: Figure S1). With the help of KEGG (http://www.kegg.jp), DEGs were assigned to 147 KEGG pathways and were significantly enriched in 28 KEGG pathways (P < 0.05), including carbon fixation, glycolysis/gluconeogenesis, the TCA cycle, the pentose phosphate pathway, nitrogen metabolism and oxidative phosphorylation (Additional file 1). Exploring the variation of these pathways revealed the remodeling of metabolism in C-169 upon high CO2 concentration.
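The exact enrichment statistic is not specified in this section; a common choice for this kind of GO/KEGG term enrichment is a hypergeometric test, sketched below with hypothetical counts for a single pathway against the 9409 detected genes and 1737 DEGs reported above.

```python
# Hypothetical hypergeometric enrichment test for one GO term or KEGG pathway;
# the specific enrichment method used in the study is not described here.
from scipy.stats import hypergeom

background = 9409    # detected genes
total_degs = 1737    # all DEGs
in_pathway = 120     # background genes annotated to the pathway (hypothetical)
deg_in_pathway = 45  # DEGs annotated to the pathway (hypothetical)

# P(X >= deg_in_pathway) when drawing total_degs genes from the background
p_value = hypergeom.sf(deg_in_pathway - 1, background, in_pathway, total_degs)
print(f"enrichment p-value = {p_value:.3g}")
```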
Enhanced carbon fixation in Calvin cycle
Upon elevated CO2, genes associated with photosynthetic CO2 fixation via the Calvin cycle were concertedly upregulated, some of them significantly (Fig. 4a). Notably, genes encoding phosphoglycerate kinase (PGK) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were significantly upregulated in CG cells. These two enzymes are crucial in the Calvin cycle; they catalyze, respectively, the phosphorylation and reduction of 3-carbon intermediates in the presence of ATP and NADPH to generate glyceraldehyde-3-phosphate. Interestingly, among the four genes coding fructose-bisphosphate aldolase (ALDO) homologs, only the gene for the chloroplast homolog (34109) was upregulated, enhancing the conversion of glyceraldehyde-3-phosphate into D-fructose-1,6-bisphosphate. Genes coding enzymes for regenerating ribulose-1,5-bisphosphate were also coordinately upregulated. No significant upregulation was found for transcripts of RuBisCO (ribulose-1,5-bisphosphate carboxylase/oxygenase), the critical enzyme catalyzing the initial step of CO2 fixation, which might be regulated post-transcriptionally [24]. Upregulation of Calvin cycle genes implied that more carbon was fixed through the Calvin cycle to sustain rapid cell growth under high CO2 concentration, consistent with the increased carbon fixation rate in CG (Table 1). Enhanced carbon fixation raises the demand for ATP and NADPH, which are normally generated from photophosphorylation, glycolysis, oxidative phosphorylation and the pentose phosphate pathway.
Changes in transcript abundance of genes involved in central metabolic pathways and bioprocesses in response to elevated CO2 in C-169. Significantly modulated pathways and bioprocesses are presented as a Calvin cycle and pentose phosphate pathway; b photosynthesis; c glycolysis, gluconeogenesis and TCA cycle; d oxidative phosphorylation; e nitrogen metabolism; f fatty acid degradation. Key enzymes are included in the map and presented with their names (red, upregulated; blue, downregulated; black, relatively unchanged), gene IDs, and fold changes as indicated by the color boxes
Enhanced chloroplast oxidative pentose phosphate pathway
To fix more CO2 through the Calvin cycle, C-169 chloroplasts required an increased supply of NADPH for the reductive reactions. Among other routes, the oxidative pentose phosphate (OPP) pathway provides NADPH for biosynthetic processes. The critical enzymes of the OPP pathway, glucose-6-phosphate dehydrogenase (G6PD) and 6-phosphogluconate dehydrogenase (6PGD), were upregulated more than fourfold (Fig. 4a). G6PD and 6PGD catalyze the irreversible reactions of the OPP pathway that generate NADPH and ribulose-5-phosphate [25, 26]. Interestingly, only the chloroplast G6PD (39093) was upregulated, while the cytoplasmic G6PD (66151) did not change significantly. It was therefore suggested that the OPP pathway in the chloroplast was notably enhanced to provide NADPH for carbon assimilation, with the co-product ribulose-5-phosphate channelled into the Calvin cycle.
Remodeling of photosynthesis
It is well established that photophosphorylation in photosynthesis provides the assimilatory power (ATP and NADPH) for the Calvin cycle. Since C-169 enhanced the Calvin cycle to fix more carbon under CO2 supplementation, it was assumed that photosynthesis would be enhanced simultaneously to provide metabolic energy. However, the transcriptomic data revealed that C-169 adopted a different strategy (Fig. 4b). The expression of nearly all the light-harvesting antenna proteins, Lhca in photosystem I and Lhcb in photosystem II, was dramatically reduced. Correspondingly, the other components of photosystem I (PS I) and photosystem II (PS II) were notably downregulated. Although some of these components are encoded by the chloroplast genome and could not be detected by our transcriptomic analysis, expression of more than 50 % of the nuclear-encoded PS II components and all of the nuclear-encoded PS I components was significantly suppressed. Such an extensive decline in PS I and PS II gene expression was consistent with the decreased chlorophyll fluorescence detected after the 8th day in 2 % CO2 (Fig. 1B). Together with the downregulation of other photosynthesis components, such as the electron transporter plastocyanin, this suggested that the photosynthetic apparatus might be notably attenuated at the later stage. However, the expression of one of the chloroplast ferredoxins (31164) increased more than 20-fold, and ferredoxin–NADP+ reductase (FNR, 54553) was also upregulated. In non-cyclic photophosphorylation, ferredoxin is the last electron carrier, accepting electrons from the light-excited photosystem and transferring them to FNR to generate NADPH. Besides serving as electron carriers in the photosynthetic electron transport chain, ferredoxins are also electron donors for various cellular proteins, such as glutamate synthase, nitrate reductase and sulfite reductase. The notable upregulation of ferredoxin implied active electron transport through redox-state transitions of ferredoxin, and suggested that upregulated ferredoxin acts as a critical node transferring electrons to various cellular proteins from the large amount of reductant generated by enhanced photosynthesis at the early stage, as indicated by the higher Chl fluorescence during the first 6 days (Fig. 1B).
Enhanced glycolysis and suppressed gluconeogenesis
The gene encoding starch phosphorylase (35194) was significantly activated, indicating upregulated starch degradation towards glucose. Coordinately, transcripts for critical enzymes in glycolysis and gluconeogenesis were found to be remarkably upregulated and downregulated, respectively. As the opposing catabolic and anabolic pathways of glucose, glycolysis and gluconeogenesis share most of their reversible enzymes but use different enzymes for the critical steps. Genes encoding the critical glycolytic enzymes phosphofructokinase (PFK) and pyruvate kinase (PK) were notably upregulated. These two enzymes catalyze the phosphorylation of fructose-6-phosphate to fructose-1,6-bisphosphate and the conversion of phosphoenolpyruvate (PEP) to pyruvate, respectively, which are the key regulatory steps of glycolysis. Correspondingly, genes encoding the enzymes unique to gluconeogenesis, PEP carboxykinase (PEPCK) and fructose-1,6-bisphosphatase (FBP), were significantly repressed. PEPCK catalyzes the conversion of oxaloacetate into PEP, and FBP is responsible for the conversion of fructose-1,6-bisphosphate into fructose-6-phosphate. Besides cytoplasmic gluconeogenesis, FBP is also involved in the chloroplast conversion of D-fructose-1,6-bisphosphate into fructose-6-phosphate, using the product generated through the Calvin cycle. It was interesting to find that the expression of the gene encoding chloroplast FBP (27479) was not significantly changed, unlike its downregulated cytoplasmic homologs involved in gluconeogenesis. Such variation indicated the precise regulation of gene expression in C-169 upon high CO2 concentration, and provided hints for its biotechnological modification.
Altogether, these gene regulation data implied that CO2 supplementation markedly enhanced glycolysis and suppressed gluconeogenesis to provide building blocks and energy for rapid growth and lipid accumulation. There seemed to be one exception: the gene for pyruvate carboxylase (26440), which catalyzes the first step of gluconeogenesis, was notably upregulated. However, 26440 was predicted by LocTree3 [27] to localize to the mitochondrion, and its reaction product, oxaloacetate, could actually be fed into the TCA cycle and consumed quickly by the accelerated TCA cycle.
Accelerated TCA cycle and enhanced oxidative phosphorylation
In accordance with enhanced glycolysis generating more pyruvate, more than 50 % of the genes encoding components of the pyruvate dehydrogenase complex were upregulated in CG (Fig. 4c). Through this multienzyme complex, pyruvate is converted into acetyl-CoA to enter the TCA cycle. Impressively, genes coding nearly all the enzymes of the TCA cycle were consistently upregulated, including citrate synthase, aconitase, isocitrate dehydrogenase, oxoglutarate dehydrogenase, succinyl-CoA synthetase and fumarase (Fig. 4c). Their concerted upregulation could accelerate the TCA cycle to generate more NADH and GTP/ATP. However, the last step of the TCA cycle, the conversion of malate to oxaloacetate, might be decelerated, since the gene coding mitochondrial malate dehydrogenase (54363), which catalyzes the reversible conversion between malate and oxaloacetate, was significantly suppressed. Such downregulation was actually coordinated with the upregulated anaplerosis of oxaloacetate by PEP carboxylase and pyruvate carboxylase. Genes for PEP carboxylase (15132, 52173) and pyruvate carboxylase (26440) were dramatically upregulated to catalyze the carboxylation of PEP and pyruvate, respectively. External CO2 could be incorporated through these anaplerotic reactions to generate oxaloacetate and replenish the TCA cycle. Downregulation of mitochondrial malate dehydrogenase (54363) might prevent the dissipation of oxaloacetate to malate through their reversible interconversion. The broadened entries into the TCA cycle suggested by the upregulation of genes responsible for generating oxaloacetate and acetyl-CoA, together with the upregulation of TCA cycle genes, implied that more metabolic energy and intermediates were generated through the accelerated TCA cycle to sustain robust cell growth and anabolism in C-169 under high CO2 concentration.
To further investigate whether the carboxylase activities were increased in CG cells as suggested by the transcriptomic analysis, the activities of PEP carboxylase (PEPCase) and pyruvate carboxylase on the 4th day were analyzed. As expected, CG cells exhibited higher PEPCase and pyruvate carboxylase activities, which were 78.6 and 46.2 % higher than those of AG cells, respectively. Thus, it was inferred that the upregulated PEPCase and pyruvate carboxylase intensified the incorporation of external CO2, which led to the higher carbon fixation rate in CG (Table 1).
Metabolic energy and electrons captured in the form of reduced coenzymes, NADH or FADH2, are passed through the electron transport chain to generate ATP in the oxidative phosphorylation apparatus within the inner mitochondrial membrane (Fig. 4d). In accordance with the accelerated TCA cycle, genes involved in electron transport and oxidative phosphorylation were remarkably upregulated, including several subunits of Complex I (NADH dehydrogenase), Complex III (cytochrome bc1 complex), Complex IV (cytochrome c oxidase) and ATP synthase. Through the upregulated NADH dehydrogenase and cytochrome bc1 complex, more electrons carried by NADH and FADH2 might be passed to the upregulated cytochrome c oxidase and finally reach O2, the terminal electron acceptor. Using the proton gradient across the inner mitochondrial membrane generated by electron transport, ATP is synthesized by ATP synthase. Strikingly, more than 50 % of the genes coding ATP synthase subunits were upregulated in CG (Fig. 4d). The metabolic energy from accelerated oxidation of nutrients and intermediates might finally be used to synthesize more ATP. Correspondingly, the ADP/ATP transporter, the integral membrane protein responsible for transporting ATP out of and ADP into mitochondria, was significantly upregulated. Therefore, it was inferred that the accelerated TCA cycle and enhanced oxidative phosphorylation ensured an adequate supply of ATP for sustaining rapid growth and biosynthetic processes under high CO2 concentration.
Upregulated nitrogen acquisition and assimilation
Since the carbon/nitrogen (C/N) balance is critical for cell growth, it was notable that genes involved in nitrogen acquisition and assimilation showed strong upregulation in CG (Fig. 4e). The genes coding the nitrate/nitrite transporter and several ammonium transporters were all dramatically upregulated, ranging from nearly 10- to more than 100-fold. Both nitrate reductase and nitrite reductase were notably upregulated to assimilate extracellular nitrogen into ammonium.
Among the three enzymatic reactions introducing ammonium into organic molecules, glutamate dehydrogenase (GLDH) and glutamine synthetase (GS) are responsible for most of the ammonium assimilated into carbon compounds in a nitrogen-rich environment, while the third, carbamoyl-phosphate synthetase (CPS), is important during nitrogen starvation [24, 28, 29]. In CG, the gene coding mitochondrial GLDH was downregulated, while the chloroplast GLDH was upregulated. Besides synthesizing glutamate, GLDH also acts in the catabolic direction to generate 2-oxoglutarate from glutamate. Given the highly suppressed gene expression of the photosynthetic apparatus and the decrease in chlorophyll fluorescence at the later stage in CG, upregulation of chloroplast GLDH might play a prominent role in the catabolism and cannibalization of photosynthetic proteins at the later stage. Three out of four glutamine synthetase (GS) homologs were significantly upregulated. GS catalyzes the ATP-dependent amidation of glutamate to form glutamine. As the most abundant amino acid in many organisms, glutamine is a major nitrogen donor in the biosynthesis of many organic N compounds such as purines, pyrimidines and other amino acids. Thus, the upregulated GS might supply substantial nitrogen for cellular anabolism. Besides GLDH, there was an alternative route to replenish the glutamate consumed by the upregulated GS reaction: the gene for glutamate synthase (also known as GOGAT, glutamate oxoglutarate aminotransferase) was notably upregulated. Upregulation of the GS/GOGAT pathway was also observed in the diatom P. tricornutum under nitrogen stress [28].
Instead of having carbamoyl-phosphate synthetase I (CPS I), which catalyzes the incorporation of ammonium with bicarbonate to generate carbamoyl-phosphate, C-169 contains CPS II, which synthesizes carbamoyl-phosphate with glutamine as the amido-N donor (Fig. 4e). Both the large and small subunits (25106, 54860) of CPS II were significantly upregulated. Together, upregulated GS/GOGAT and CPS II constituted an enhanced ammonium assimilation pathway to incorporate bicarbonate into carbamoyl-phosphate, the precursor for arginine and pyrimidine synthesis and an intermediate in the ornithine–urea cycle. Genes involved in the subsequent reactions of the ornithine pathway showed consistent upregulation to generate arginine and fumarate, including ornithine carbamoyltransferase (OTC), argininosuccinate synthase (ASS) and argininosuccinate lyase (ASL) (Fig. 4e). C-169 does not possess a complete urea cycle, since it lacks the gene for arginase, the last enzyme of the urea cycle, which breaks down arginine into urea and regenerates ornithine [29]. However, genes involved in the alternative route generating ornithine from glutamate showed strong upregulation. Thus, C-169 might employ GS/GOGAT and this specific ornithine pathway to incorporate ammonium and bicarbonate into arginine and to replenish the TCA cycle through fumarate upon elevated CO2. The ammonium and bicarbonate might come from the extracellular environment, given the high CO2 concentration and intensified nitrogen acquisition and assimilation; they might also be derived from the catabolism and cannibalization of pre-existing amino acids and proteins.
Suppressed triacylglycerol (TAG) hydrolysis and fatty acid (FA) degradation were found in CG cells (Fig. 4f). The expression of the TAG lipase (20497) that catalyzes the hydrolysis of TAG was significantly repressed. Genes involved in FA β-oxidation were generally downregulated, some of them significantly. Acyl-CoA synthetase (ACS) catalyzes the initial step of FA degradation through activation of FA with coenzyme A. Two out of five ACS homologs were significantly downregulated. Following FA activation, cyclic reactions lead to complete degradation of FA via the repeated cleavage of acetate units from the thiol end of the FA [30]. Genes involved in these repetitive reactions showed consistent downregulation in CG cells, including genes coding acyl-CoA oxidase (ACOX), enoyl-CoA hydratase/3-hydroxyacyl-CoA dehydrogenase (MFP-2) and acetyl-CoA acyltransferase (ACAT). Lipid hydrolysis and FA degradation yield a large amount of ATP through complete oxidation. Given the accelerated TCA cycle and enhanced oxidative phosphorylation in CG cells, it seemed that enough metabolic energy could be generated through these routes. Thus, lipid hydrolysis and FA degradation might be slowed down, which contributed to the increased lipid content observed at the later stage in CG cells.
Though the lipid content was only slightly higher in CG than in AG cells on the 4th day, it was subsequently enhanced over time in CG cells (Fig. 1D). Genes involved in lipid biosynthesis have usually been reported to be upregulated in lipid-producing microalgae [19]. On the 4th day, however, when the FA content difference between 0.04 and 2 % CO2 was relatively small, it was reasonable that the upregulation of genes involved in fatty acid biosynthesis and elongation was not as remarkable as the downregulation of genes involved in fatty acid degradation. Using the strict DEG criteria of |log2 fold change| > 1 and FDR < 0.001, only one gene involved in fatty acid biosynthesis and elongation, 48328 (fatty acid elongase 3-ketoacyl-CoA synthase 1), was significantly upregulated on the 4th day. However, a series of genes were found to be upregulated using the less strict criteria of |log2 fold change| > 1 and FDR < 0.05 (Additional file 1). They included genes encoding an acetyl-CoA carboxylase subunit (ACCase subunit, 65159), a fatty acid synthase (FAS, 49000) and several 3-ketoacyl-CoA synthases. ACCase catalyzes the first and committed step of FA biosynthesis, the generation of malonyl-CoA from acetyl-CoA [31]. Subsequently, malonyl-CoA is transferred to an acyl-carrier protein, followed by a series of repetitive reactions catalyzed by FAS. Based on the annotation of genomic data, C-169 is the only known Plantae member to have both the plant-type FAS (FAS II) and the animal-type FAS (FAS I) [18]. In CG cells, the gene coding one FAS I (49000) was upregulated, providing experimental evidence that the animal-type FAS is functional in C-169 and contributed to lipid accumulation upon elevated CO2. Five 3-ketoacyl-CoA synthase homologs, 48328, 12119, 12451, 18441 and 64433, were also upregulated to build long-chain fatty acids. Increased gene expression of ACCase, FAS and 3-ketoacyl-CoA synthases implied an upward trend in fatty acid biosynthesis, which should be investigated further at the later stage of elevated CO2.
Other DEG-enriched gene families
Last but not least, DEGs were found to be enriched in some gene families other than those mentioned above. Among the 100 most upregulated genes, 15 encode transporters for different nutrients (Additional file 1). These included, for example, urea transporters (53548, 30678), a sodium/sulfate symporter (54015), a nucleoside transporter (62444), an amino acid transporter (36205), a sodium/dicarboxylate symporter (17172), a sodium-dependent phosphate transporter (13678), as well as ammonium transporters (65570, 65572, 65518). Thus, on the 4th day, CG cells mobilized many transporters for various nutrients to sustain rapid growth.
Active intracellular transport was also indicated in CG cells. Eight out of twenty-one kinesin family members were significantly upregulated in CG cells (Additional file 1). Four of them (20645, 13773, 37341, 15452) were among the 100 most upregulated DEGs. Kinesins are motor proteins that walk towards the plus end of microtubules [32, 33]. They transport proteins and membrane components from the center of the cell towards the periphery. Such consistent and strong upregulation of kinesin family members indicated energetic metabolism inside CG cells to sustain rapid growth and lipid accumulation at the later stage.
It is noteworthy that about a dozen genes encoding subunits of vacuolar H+-ATPase (V-ATPase) were significantly upregulated, for example, 52641, 8240, 28885 and 32039 (Additional file 1). V-ATPases couple the energy of ATP hydrolysis to proton transport across plasma and intracellular membranes. In other characterized organisms, V-ATPases are found within the membranes of many organelles, such as endosomes, lysosomes and secretory vesicles, where they are involved in processes such as pH homeostasis and coupled transport [34]. Given that high CO2 supplementation acidifies the medium, because additional carbonic acid is generated as CO2 dissolves into the aqueous phase, it might reduce the intracellular pH of CG cells and impose acid stress on C-169. Upregulation of V-ATPase might be one of the strategies to acclimate to the acidification caused by high CO2.
Quantitative RT-PCR analysis at the later stage
To further reveal gene expression at the later stage, RNA was extracted from AG and CG cells harvested on the 4th, 8th and 12th day and analyzed by quantitative RT-PCR (Fig. 5). Four fatty acid synthesis genes, coding ACCase (65159), acyl-ACP thioesterase (4465), FAS I (49000) and 3-oxoacyl-ACP synthase II (54810), were all significantly upregulated on the 4th, 8th and 12th day, which might explain the increasing lipid accumulation in CG cells. Compared with 49000 and 54810, stronger upregulation was observed for 4465 and 65159; their transcripts increased by about 6- and 12-fold, respectively, on the 12th day upon elevated CO2, suggesting a substantial contribution to lipid accumulation at the later stage (Fig. 1D). It was also intriguing to find that the ferredoxin gene (31164) was activated over time and increased by 122-fold on the 12th day. Such dramatic upregulation was consistent with the increasing trend of lipid content, which requires a large amount of reducing equivalents, and provided hints for further biotechnological application. However, the detailed relationship between ferredoxin and lipid accumulation awaits further investigation.
Quantitative RT-PCR analyses at the later stage. The expression fold change (CG/AG) of five genes, ferredoxin (31164) and four fatty acid synthesis genes coding acyl-ACP thioesterase (4465), ACCase (65159), FAS I (49000) and 3-oxoacyl-ACP synthase II (54810), was evaluated on the 4th, 8th and 12th day by quantitative RT-PCR
Extensive experiments have been conducted to promote microalgae-derived lipid production via nutrient deficiency, especially nitrogen starvation [35–37]. The disadvantage, however, is that microalgal growth may be compromised to some degree under nutrient deficiency [19]. CO2 supplementation might overcome this disadvantage to some extent. In this study, the overall biomass productivity was 222 mg L−1 day−1 and the maximal fatty acid content was 48.5 % of dry cell weight in 2 % CO2. These values were higher than those in a recent report in which TAG in C-169 accumulated linearly to 12.8 % of dry weight after 10 days of nitrogen starvation [20], thus confirming the great potential of lipid production from C-169 via CO2 supplementation. The transcriptomic analysis on the 4th day between 2 and 0.04 % CO2 provides, for the first time, a comprehensive overview of the global regulation of important metabolic processes upon elevated CO2.
Photosynthetic carbon fixation is well known as the main carbon assimilation pathway and has been the focus of research on microalgae subjected to elevated CO2 [8]. However, our transcriptomic data indicated that, besides the enhanced Calvin cycle, C-169 mobilized several other carbon assimilation strategies to incorporate the abundant carbon, some of which were confirmed by enzymatic activity assays. Gene expression and enzyme activities of PEP carboxylase and pyruvate carboxylase were significantly upregulated to incorporate CO2 into oxaloacetate, an important metabolic intermediate that replenishes the TCA cycle directly. Carbon was also assimilated together with nitrogen into carbamoyl-phosphate by CPS II, whose transcripts increased by more than eightfold. Among the forty most upregulated genes, those encoding urea carboxylase (19857) and four clavaminate synthase-like proteins (52967, 33873, 33874, 54671) were conspicuous. Their expression rose 30- to more than 100-fold from very low basal levels, suggesting that they were specifically activated by elevated CO2. Clavaminate synthase catalyzes the reversible conversion between 2-oxoglutarate and succinate [38, 39]. Given the high CO2 concentration, it was more likely that the reaction proceeded in the direction of succinate carboxylation rather than 2-oxoglutarate oxidation; thus, it might provide another carbon assimilation route to reinforce the TCA cycle directly (Fig. 4c). Further experiments are needed to verify this speculation. Urea carboxylase forms carbon–nitrogen bonds between urea and bicarbonate to generate urea-1-carboxylate [40, 41]. Its upregulation implied active cannibalization of old proteins to synthesize new intermediates.
Interesting remodeling of photosynthesis was revealed in C-169 in response to elevated CO2. Though Chl fluorescence does not represent photosynthetic capacity directly, its fluctuation is positively correlated with photosystem activity. Chl fluorescence was enhanced by CO2 supplementation during the first 4 days, while it was dramatically reduced after the 8th day (Fig. 1B). This phenomenon is consistent with previous reports on plants that photosynthesis and growth rate were enhanced by short-term, but decreased by long-term, elevated CO2 [21, 42]. Downregulation of the photosystems at the later stage was implied by the notable decrease in most components of PS I, PS II and plastocyanin on the 4th day (Fig. 4b). Intriguingly, genes for the final part of photosynthetic electron transfer, ferredoxin and FNR, as well as the ADP/ATP transporter, were dramatically upregulated. It appears that much of the light energy captured by the enhanced photosystems during the early stage was converted and transported as reducing potential through ferredoxin. The quantitative RT-PCR analysis of 4th-, 8th- and 12th-day cells showed that ferredoxin 31164 was continuously upregulated, which might collaborate with the upregulated FNR (54553) and ferredoxin–nitrite reductase (29833) to sustain anabolism, especially lipid accumulation at the later stage. A previous investigation on diatoms pointed out that nitrogen deficiency led to repression of photosynthetic proteins including FNR [43]. Thus, the uncoordinated regulation of the photosystems, ferredoxin and FNR here might be a special mechanism to sustain rapid growth and lipid accumulation upon elevated CO2, distinct from the response to nitrogen deficiency. Besides the energy transferred by ferredoxin, plenty of the carbohydrates synthesized during the early stage of elevated CO2 might pass through the significantly enhanced glycolysis, accelerated TCA cycle and activated oxidative phosphorylation to generate large amounts of intermediates, ATP and NADH to maintain rapid growth and lipid accumulation. The active intracellular metabolism was also indicated by the dramatically upregulated kinesin family members, which are responsible for intracellular transport.
A nearly 50-fold increase in CO2 would obviously disrupt the C/N balance in C-169, and rapid growth resulted in greater consumption of other nutrients, especially nitrogen. Therefore, long-term elevated CO2 might mimic nitrogen depletion. Indeed, genes involved in nitrogen acquisition and assimilation were concertedly upregulated (Fig. 4e), similar to the metabolic remodeling in the diatom P. tricornutum under nitrogen stress [28]. However, C-169 does not have the complete urea cycle reported in diatoms [28, 29]. The data here revealed that it employed alternative pathways to supply ornithine and a different CPS, CPS II, to incorporate bicarbonate with glutamine to provide carbamoyl-phosphate for the ornithine pathway. The CPS–ornithine pathway, together with the GS/GOGAT cycle, might represent an important route for anaplerotic carbon fixation with nitrogenous compounds, which are essential for amino acid and pyrimidine metabolism, as well as for replenishing the TCA cycle.
The results reported here are important because they represent the first global transcriptomic analysis of the early stage of microalgal acclimation to elevated CO2 and propose potential targets for future metabolic engineering. Metabolic pathway engineering has been actively explored to enhance microalgae-based biofuel production, and it depends mainly on knowledge of algal lipid accumulation [44]. Here, the downregulation of lipid hydrolysis revealed by RNA sequencing and the upregulation of FA synthesis indicated by qRT-PCR at the later stage suggest that genes directly involved in lipid biosynthesis and catabolism could be targets of metabolic pathway engineering. Similar approaches have been applied to different microalgae [44–46]. However, metabolic pathway engineering calls for innovative and integrated strategies, and our results suggest new gene targets. CO2 supplementation resulted in overexpression and enhanced activities of PEP carboxylase and pyruvate carboxylase, which could capture more carbon and reinforce the TCA cycle. As a pivotal enzyme in central metabolism, PEP carboxylase has been reported to control carbon flux distribution and determine the ratio of major biomass constituents [47]. Thus, PEP carboxylase and pyruvate carboxylase might be important candidates for metabolic engineering efforts to promote biomass production and synthesize desired bio-products. More than a dozen V-ATPase subunits were markedly induced upon elevated CO2, which was postulated to be an adaptive mechanism to maintain intracellular pH homeostasis; V-ATPases might therefore be potential targets to increase CO2 tolerance in lipid-producing microalgae. Though the transcriptomic data here provide hints for metabolic engineering, transcriptomic changes do not necessarily lead to changes in protein abundance and enzyme activity, owing to post-transcriptional and post-translational regulation. Thus, these potential targets should be verified case by case in the future.
In the present study, 2 and 5 % CO2 supplementation increased the growth rate and lipid accumulation in autotrophically cultured C. subellipsoidea C-169. An overall biomass productivity of 222 mg L−1 day−1 and an FA content of 48.5 % of dry cell weight were obtained in 2 % CO2, suggesting C-169 as a great candidate for lipid production via CO2 supplementation. Transcriptomic comparison between 2 and 0.04 % CO2 unveiled the global regulation underlying rapid growth and lipid accumulation. C-169 enhanced gene expression in the Calvin cycle, and upregulated gene expression of PEP carboxylase, pyruvate carboxylase and CPS II to mobilize anaplerotic carbon assimilation pathways upon elevated CO2. Upregulation of ferredoxin and FNR implied that plentiful energy captured through photosynthesis might be converted and transferred through ferredoxin to sustain rapid growth and lipid accumulation. Upregulation of glycolysis, TCA cycle and oxidative phosphorylation gene expression implied that these pathways provide abundant intermediates and metabolic energy for anabolism. Coordinated upregulation of nitrogen acquisition and assimilation genes, together with activation of CPS II and ornithine pathway genes, might help C-169 maintain the C/N balance upon elevated CO2. Lipid accumulation was attributed to the significantly downregulated lipid degradation genes, as well as the upregulation of fatty acid synthesis genes at the later stage. These data, for the first time, bring significant insights into the regulatory profile of metabolism and acclimation to elevated CO2 in C-169, providing important information for future metabolic engineering to improve lipid production, and might eventually contribute to the development of sustainable microalgae-based biofuels.
Algal strain and culture conditions
Coccomyxa subellipsoidea C-169 was obtained from the Microbial Culture Collection of the National Institute for Environmental Studies in Japan, under strain number NIES 2166. C-169 was incubated in 250-mL Erlenmeyer flasks containing 100 mL Bold's Basal Medium (BBM) with continuous illumination provided by fluorescent light at ~60 μmol m−2 s−1, at 25 °C, on an orbital shaker (130 rpm). The pre-culture was carried out at the ambient level of CO2 (0.04 %, v/v) until it reached logarithmic phase (OD680 = 0.8). The pre-cultured cells were subsequently transferred to fresh media at an initial cell density of 2 × 10^6 cells mL−1 and incubated with 0.04, 2 and 5 % CO2 (v/v) for 12 days. The pH of the medium was monitored using a pH meter (Mettler Toledo, Switzerland) and is provided in Additional file 2: Figure S2. Cells were sampled at 2-day intervals, followed by rinsing and centrifugation. Cell growth was monitored by counting cells with a hemocytometer. Dry cell weight was determined by weighing the cell pellet after lyophilization in a freeze drier (Modul YOD-230, Thermo-Fisher, USA).
Analysis by flow cytometry and confocal laser scanning microscopy
Collected cells were resuspended at a density of ~5 × 10^6 cells mL−1. Neutral lipids were quantified by Nile Red staining [48]. Aliquots of Nile Red (Sigma-Aldrich) in dimethyl sulfoxide (DMSO) were added directly to the suspension, giving a final dye concentration of 2 μg mL−1 in 10 % DMSO (v/v). After incubation in the dark for 3 min and filtration through a 45-μm membrane filter, all samples were analyzed using a BD Accuri C6 flow cytometer (BD Biosciences) equipped with a 488-nm solid-state blue laser. The acquisition settings were 10^4 events at medium flow rate (35 μL min−1, 16-μm core size). All settings were optimized in preliminary experiments. Fluorescence of Nile Red-stained cells and chlorophyll auto-fluorescence were determined via FL2 (585/40 nm) and FL3 (670 nm LP), respectively.
Fluorescence images of stained cells were captured with a confocal laser scanning microscope (CLSM, TCS SP5; Leica Microsystems CMS GmbH, Germany) under an HCX PL APO CS 100×/1.4 oil-immersion objective, with the confocal pinhole set at 1 Airy unit and a 5× zoom factor for improved resolution, at 8-bit depth. Blue excitation light was used through a band-pass filter (460–490 nm) and emission was imaged through a long-pass filter (560–590 nm). Laser transmission and scan settings were kept constant in all scans.
Total carbon content, carbon fixation rate and fatty acid profiling
C-169 cells from 0.04 % CO2 and 2 % CO2 were collected on the 4th, 8th and 12th day and lyophilized into cell pellets. Total carbon content (CC, % dry cell weight) was analyzed with an element analyzer (EuroEA3000, EuroVector S.p.A., Italy). The CO2 fixation rate ($R_{\mathrm{CO_2}}$, g L−1 day−1) was determined as previously described [49] and calculated using the following equation: $R_{\mathrm{CO_2}} = \mathrm{CC} \times P \times (M_{\mathrm{CO_2}}/M_{\mathrm{C}})$, where P is the biomass productivity (g L−1 day−1), $M_{\mathrm{C}}$ is the molecular weight of carbon, and $M_{\mathrm{CO_2}}$ is the molecular weight of CO2.
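A worked example of this equation is given below; the carbon content value is illustrative rather than the measured one, while the productivity corresponds to the overall value reported for 2 % CO2 (222 mg L−1 day−1).

```python
# Worked example of the CO2 fixation rate equation: R_CO2 = CC * P * (M_CO2 / M_C).
M_C = 12.01    # g mol-1, molecular weight of carbon
M_CO2 = 44.01  # g mol-1, molecular weight of CO2

def co2_fixation_rate(carbon_content: float, productivity: float) -> float:
    """carbon_content: fraction of dry cell weight; productivity: g L-1 day-1."""
    return carbon_content * productivity * (M_CO2 / M_C)

# e.g. an assumed 50 % carbon content and the overall 2 % CO2 productivity
print(f"{co2_fixation_rate(0.50, 0.222):.3f} g CO2 L-1 day-1")
```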
Fatty acid profiling was performed on the lyophilized cell pellets (freeze drier: Modul YOD-230, Thermo-Fisher, USA) via gas chromatography–mass spectrometry (Agilent 6890 gas chromatograph coupled with an Agilent 5975 mass selective detector, Agilent Technologies, Santa Clara, CA, USA). Nonadecanoic acid (C19:0, Sigma-Aldrich, St. Louis, MO, USA) was added as an internal standard to quantify FA content. Fatty acid methyl esters (FAMEs) were prepared and analyzed according to a previously described protocol [50]. The degree of lipid unsaturation (DLU) was calculated as previously described [51]:
$$\mathrm{DLU}\,(\triangledown/\text{mole}) = \left[1.0 \times (\%\ \text{monoene}) + 2.0 \times (\%\ \text{diene}) + 3.0 \times (\%\ \text{triene})\right]/100.$$
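A worked example of the DLU formula follows; the monoene, diene and triene percentages are illustrative inputs (roughly in the range of the C18:1, C18:2 and C18:3 classes discussed above), not values taken from Table 2.

```python
# Worked example of the DLU formula above (double bonds per mole of fatty acid).
def dlu(pct_monoene: float, pct_diene: float, pct_triene: float) -> float:
    return (1.0 * pct_monoene + 2.0 * pct_diene + 3.0 * pct_triene) / 100

# illustrative percentages of total fatty acid
print(f"DLU = {dlu(pct_monoene=45.0, pct_diene=15.0, pct_triene=15.0):.2f}")
```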
RNA extraction, library construction and sequencing
Total RNA was extracted from AG and CG cells using TRIzol (Invitrogen, Carlsbad, CA, USA) and incubated with DNase I (Takara, Dalian, China) for 30 min at 37 °C. RNA quality and quantity were determined with a NanoDrop ND-1000 spectrophotometer (LabTech, Holliston, MA, USA) and by lab-on-chip analysis on a 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) (Additional file 2: Table S1). Approximately 10 μg of total RNA was subjected to poly(A) mRNA isolation with poly-T-attached magnetic beads (Thermo Fisher). Following purification, mRNA was fragmented into small pieces using divalent cations at elevated temperature. The randomly cleaved mRNA fragments were then used to construct a cDNA library in accordance with the protocol for the Illumina RNA ligation-based method (Illumina, San Diego, USA). In brief, the fragmented RNA was dephosphorylated at the 3′ end by phosphatase and phosphorylated at the 5′ end by polynucleotide kinase (PNK). RNA was purified with the RNeasy MinElute Kit (Qiagen) and ligated with a pre-adenylated 3′ adapter, which enables the subsequent ligation of the 5′ adapter. Based on the adapter sequences, reverse transcription followed by PCR was used to create the cDNA constructs. The average insert size for the paired-end libraries was 300 bp (±50 bp). Single-end sequencing was then performed on an Illumina HiSeq 2000.
The raw data containing adaptor sequences, reads with low-quality sequences and unknown nucleotides (N) were filtered to obtain clean reads of 36 nt in length. Statistical analysis of the data is provided in Additional file 2: Table S2. Clean reads were mapped to the transcript sequences of C-169 available on Phytozome V10 (http://www.genome.jgi.doe.gov/pages/dynamicOrganismDownload.jsf?organism=PhytozomeV10#) with Bowtie [52], allowing at most a 1-bp mismatch. To monitor mapping events on both strands, both the sense and the complementary antisense sequences were included in the data collection. The number of perfectly matching clean reads corresponding to each gene was calculated and normalized to Reads Per Kilobase of exon model per Million mapped reads (RPKM). Based on the expression levels, significant DEGs (differentially expressed genes) between CG and AG were identified with |log2 fold change| > 1 and FDR < 0.001 unless otherwise noted. Functional classification of DEGs was conducted according to gene ontology (GO) annotation, and pathway analysis was carried out according to KEGG. Heatmap clustering of the top 100 most significant DEGs was constructed and is provided in Additional file 2: Figure S3.
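For clarity, the RPKM normalization mentioned above amounts to the calculation sketched below; the read count, gene length and library size are hypothetical example numbers.

```python
# Minimal sketch of RPKM normalization (example numbers are hypothetical).
def rpkm(read_count: int, gene_length_bp: int, total_mapped_reads: int) -> float:
    """Reads Per Kilobase of exon model per Million mapped reads."""
    return read_count / (gene_length_bp / 1_000) / (total_mapped_reads / 1_000_000)

# e.g. 480 reads on a 1.6-kb transcript in a library of 7.7 million mapped reads
print(f"RPKM = {rpkm(480, 1600, 7_700_000):.2f}")
```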
Quantitative RT-PCR was performed on an ABI 7500 (Applied Biosystems, Foster City, CA, USA) using two-step kits (TaKaRa Biotech Co., Dalian, China). The gene coding ribosomal protein L5 (54775, RibL5) was used as an internal control according to the references [8, 53, 54] and the analysis of our transcriptomic data. For single-strand cDNA synthesis, the PrimeScript RT reagent Kit with gDNA Eraser (TaKaRa) was used to perform the reverse transcription reaction according to the user's manual; genomic DNA removal was performed to purify the RNA extracts. Quantitative RT-PCR was performed with the SYBR Premix Ex Taq II kit (TaKaRa), based on the cDNA templates and 16 pairs of specific primers (Additional file 2: Table S3). Sequences of the targeted genes were obtained from the KEGG database and primers were designed using Primer Premier 5.0 software. Primer binding to regions with secondary structure at the pairing sites (predicted by mfold: http://www.unafold.rna.albany.edu/?q=mfold) and non-specific priming (checked by Primer-BLAST: http://www.ncbi.nlm.nih.gov/tools/primer-blast/) were avoided. The amplification program was 95 °C for 30 s, then 40 cycles of 95 °C for 5 s and 60 °C for 34 s, followed by a dissociation stage as instructed by the user's manual. Samples were run in triplicate. The relative amount of gene transcripts was normalized to that of the reference gene RibL5 in each sample. The expression fold change (FC) was calculated as:
$$\mathrm{FC}_{\text{gene}\,x} = \frac{2^{(Ct_{\mathrm{AG}} - Ct_{\mathrm{CG}})\ \text{of gene}\,x}}{2^{(Ct_{\mathrm{AG}} - Ct_{\mathrm{CG}})\ \text{of}\ RibL5}}.$$
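The fold-change formula above is the familiar 2^-ΔΔCt form; a minimal sketch with hypothetical Ct values is shown below.

```python
# Minimal sketch of the fold-change formula above (Ct values are hypothetical).
def fold_change(ct_ag_gene: float, ct_cg_gene: float,
                ct_ag_ref: float, ct_cg_ref: float) -> float:
    """Expression fold change of CG relative to AG, normalized to RibL5."""
    return 2 ** (ct_ag_gene - ct_cg_gene) / 2 ** (ct_ag_ref - ct_cg_ref)

# e.g. the target drops by 3 Ct in CG while the reference is unchanged -> FC = 8
print(fold_change(ct_ag_gene=26.0, ct_cg_gene=23.0, ct_ag_ref=20.0, ct_cg_ref=20.0))
```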
Phosphoenolpyruvate carboxylase and pyruvate carboxylase activity assay
The enzyme activity assays were performed mainly as previously described, with some modifications [55]. To analyze the enzyme activities, cell extracts from the 4th day were prepared by washing the cell pellets with TE buffer (10 mM Tris–HCl, 1 mM EDTA, pH 8.0) and breaking the cells with 0.1-mm-diameter silica beads in a Mini-BeadBeater-1 (BioSpec). Cell debris was removed by centrifugation at 12,000 rpm for 10 min at 4 °C. The supernatant was further centrifuged at 12,000 rpm at 4 °C for 20 min, and the resulting supernatant was used for the enzyme activity assays. The phosphoenolpyruvate carboxylase (PEPCase) activity was determined by monitoring the decrease in NADH absorbance at 340 nm, using malate dehydrogenase as a coupling enzyme. The 1-mL reaction mixture for PEPCase analysis consisted of 50 mM HEPES (pH 7.3), 5 mM PEP, 10 mM MgCl2, 5 mM NaHCO3, 5 U of malate dehydrogenase, 0.2 mM NADH, and 25 μL of cell extract. One unit of relative PEPCase activity was defined as 1 μmol NADH oxidized min−1 at 30 °C. The pyruvate carboxylase activity assay was performed in the same way, except that 5 mM pyruvate was used as the substrate instead of PEP.
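The conversion from the measured decrease in A340 to enzyme units is not spelled out in the text; the sketch below is one conventional way to do it, assuming the standard NADH extinction coefficient (6.22 mM−1 cm−1) and a 1-cm light path, with a hypothetical absorbance slope.

```python
# Illustrative conversion of a measured A340 decrease into PEPCase activity units,
# assuming the standard NADH extinction coefficient and a 1-cm path length
# (this conversion step is an assumption, not described explicitly in the text).
EPSILON_NADH = 6.22  # mM-1 cm-1 at 340 nm
PATH_LENGTH_CM = 1.0

def nadh_oxidized_per_min(delta_a340_per_min: float, reaction_volume_ml: float = 1.0) -> float:
    """Micromoles of NADH oxidized per minute in the reaction mixture."""
    delta_conc_mm = delta_a340_per_min / (EPSILON_NADH * PATH_LENGTH_CM)  # mM min-1
    return delta_conc_mm * reaction_volume_ml  # mM * mL = umol

print(f"{nadh_oxidized_per_min(0.031):.4f} umol NADH min-1")  # hypothetical slope
```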
DGE: digital gene expression
TAG: triacylglycerol
CLSM: confocal laser scanning microscope
FAME: fatty acid methyl ester
DEGs: differentially expressed genes
AG: biological replicates from the 0.04 % CO2 group
CG: biological replicates from the 2 % CO2 group
RPKM: reads per kilobase exon model per million mapped reads
FC: fold change (CG to AG)
FDR: false discovery rate
GO: gene ontology
KEGG: Kyoto Encyclopedia of Genes and Genomes
G6PD: glucose-6-phosphate dehydrogenase
6PGD: 6-phosphogluconate dehydrogenase
PGK: phosphoglycerate kinase
GAPDH: glyceraldehyde 3-phosphate dehydrogenase
TPI: triosephosphate isomerase
ALDO: fructose-bisphosphate aldolase
FNR: ferredoxin–NADP+ reductase
SP: starch phosphorylase
PFK: 6-phosphofructokinase
GAPN: glyceraldehyde-3-phosphate dehydrogenase (NADP+)
PK: pyruvate kinase
FBP: fructose-1,6-bisphosphatase
PEPCK: phosphoenolpyruvate carboxykinase
CS: citrate synthase
IDH3: isocitrate dehydrogenase 3
OGDC: 2-oxoglutarate dehydrogenase complex
NR: nitrate reductase
NIR: nitrite reductase
GS: glutamine synthetase
GOGAT: glutamine 2-oxoglutarate aminotransferase
GLDH: glutamate dehydrogenase
CPS: carbamoyl-phosphate synthase
OTC: ornithine carbamoyltransferase
ASS: argininosuccinate synthase
ASL: argininosuccinate lyase
ACS: acyl-CoA synthetase
ACOX: acyl-CoA oxidase
MFP-2: enoyl-CoA hydratase/3-hydroxyacyl-CoA dehydrogenase
ACAT: acetyl-CoA acyltransferase
Chisti Y. Biodiesel from microalgae beats bioethanol. Trends Biotechnol. 2008;26(3):126–31.
Huang GH, Chen F, Wei D, Zhang XW, Chen G. Biodiesel production by microalgal biotechnology. Appl Energy. 2010;87(1):38–46.
Lam MK, Lee KT. Microalgae biofuels: a critical review of issues, problems and the way forward. Biotechnol Adv. 2012;30(3):673–90.
Lam MK, Lee KT, Mohamed AR. Current status and challenges on microalgae-based carbon capture. Int J Greenhouse Gas Control. 2012;10:456–69.
Hein M, Sand-Jensen K. CO2 increases oceanic primary production. Nature. 1997;388(6642):526–7.
Riebesell U, Wolfgladrow D, Smetacek V. Carbon dioxide limitation of marine phytoplankton growth rates. Nature. 1993;361:249–51.
Chiu SY, Kao CY, Tsai MT, Ong SC, Chen CH, Lin CS. Lipid accumulation and CO(2) utilization of Nannochloropsis oculata in response to CO(2) aeration. Bioresour Technol. 2009;100(2):833–8.
Wu S, Huang A, Zhang B, Huan L, Zhao P, Lin A, Wang G. Enzyme activity highlights the importance of the oxidative pentose phosphate pathway in lipid accumulation and growth of Phaeodactylum tricornutum under CO2 concentration. Biotechnol Biofuels. 2015;8:78.
Tsuzuki M, Ohnuma E, Sato N, Takaku T, Kawaguchi A. Effects of CO(2) concentration during growth on fatty acid composition in microalgae. Plant Physiol. 1990;93(3):851–6.
Yoo C, Jun SY, Lee JY, Ahn CY, Oh HM. Selection of microalgae for lipid production under high levels carbon dioxide. Bioresour Technol. 2010;101:S71–4.
Peng H, Wei D, Chen F, Chen G. Regulation of carbon metabolic fluxes in response to CO2 supplementation in phototrophic Chlorella vulgaris: a cytomic and biochemical study. J Appl Phycol. 2016;28:737–45.
Zeng X, Danquah MK, Chen XD, Lu Y. Microalgae bioengineering: from CO2 fixation to biofuel production. Renew Sustain Energy Rev. 2011;15(6):3252–60.
Bhola V, Swalaha F, Kumar RR, Singh M, Bux F. Overview of the potential of microalgae for CO2 sequestration. Int J Environ Sci Technol. 2014;11(7):2103–18.
Singh SP, Singh P. Effect of CO2 concentration on algal growth: a review. Renew Sustain Energy Rev. 2014;38:172–9.
Sydney EB, Sturm W, de Carvalho JC, Thomaz-Soccol V, Larroche C, Pandey A, Soccol CR. Potential carbon dioxide fixation by industrially important microalgae. Bioresour Technol. 2010;101(15):5892–6.
Yadav G, Karemore A, Dash SK, Sen R. Performance evaluation of a green process for microalgal CO2 sequestration in closed photobioreactor using flue gas generated in-situ. Bioresour Technol. 2015;191:399–406.
Hennon GMM, Ashworth J, Groussman RD, Berthiaume C, Morales RL, Baliga NS, Orellana MV, Armbrust EV. Diatom acclimation to elevated CO2 via cAMP signalling and coordinated gene expression. Nat Clim Change. 2015;5(8):761–5.
Blanc G, Agarkova I, Grimwood J, Kuo A, Brueggeman A, Dunigan DD, Gurnon J, Ladunga I, Lindquist E, Lucas S, et al. The genome of the polar eukaryotic microalga Coccomyxa subellipsoidea reveals traits of cold adaptation. Genome Biol. 2012;13(5):R39.
Msanne J, Xu D, Konda AR, Casas-Mollano JA, Awada T, Cahoon EB, Cerutti H. Metabolic and gene expression changes triggered by nitrogen deprivation in the photoautotrophically grown microalgae Chlamydomonas reinhardtii and Coccomyxa sp. C-169. Phytochemistry. 2012;75:50–9.
Allen JW, DiRusso CC, Black PN. Triacylglycerol synthesis during nitrogen stress involves the prokaryotic lipid synthesis pathway and acyl chain remodeling in the microalgae Coccomyxa subellipsoidea. Algal Research. 2015;10:110–20.
Cheng SH, Moore BD, Seemann JR. Effects of short- and long-term elevated CO2 on the expression of ribulose-1,5-bisphosphate carboxylase/oxygenase genes and carbohydrate accumulation in leaves of Arabidopsis thaliana (L) Heynh. Plant Physiol. 1998;116(2):715–23.
Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, Davis AP, Dolinski K, Dwight SS, Eppig JT, et al. Gene ontology: tool for the unification of biology. The Gene Ontology consortium. Nat Genet. 2000;25(1):25–9.
Yamada T, Letunic I, Okuda S, Kanehisa M, Bork P. iPath2.0: interactive pathway explorer. Nucleic Acids Res. 2011;39(Web Server issue):W412–5.
Garrett RH, Grisham CM. Biochemistry. 5th ed. Belmont: Wadsworth Publishing; 2012.
Kruger NJ, von Schaewen A. The oxidative pentose phosphate pathway: structure and organisation. Curr Opin Plant Biol. 2003;6(3):236–46.
Adams MJ, Ellis GH, Gover S, Naylor CE, Phillips C. Crystallographic study of coenzyme, coenzyme analogue and substrate binding in 6-phosphogluconate dehydrogenase: implications for NADP specificity and the enzyme mechanism. Structure. 1994;2(7):651–68.
Goldberg T, Hecht M, Hamp T, Karl T, Yachdav G, Ahmed N, Altermann U, Angerer P, Ansorge S, Balasz K, et al. LocTree3 prediction of localization. Nucleic Acids Res. 2014;42(W1):W350–5.
Levitan O, Dinamarca J, Zelzion E, Lun DS, Guerra LT, Kim MK, Kim J, Van Mooy BAS, Bhattacharya D, Falkowski PG. Remodeling of intermediate metabolism in the diatom Phaeodactylum tricornutum under nitrogen stress. Proc Natl Acad Sci USA. 2015;112(2):412–7.
Allen AE, Dupont CL, Obornik M, Horak A, Nunes-Nesi A, McCrow JP, Zheng H, Johnson DA, Hu H, Fernie AR, et al. Evolution and metabolic significance of the urea cycle in photosynthetic diatoms. Nature. 2011;473(7346):203–7.
Rylott EL, Eastmond PJ, Gilday AD, Slocombe SP, Larson TR, Baker A, Graham IA. The Arabidopsis thaliana multifunctional protein gene (MFP2) of peroxisomal beta-oxidation is essential for seedling establishment. Plant J. 2006;45(6):930–41.
Gago G, Diacovich L, Arabolaza A, Tsai SC, Gramajo H. Fatty acid biosynthesis in actinomycetes. FEMS Microbiol Rev. 2011;35(3):475–97.
Vale RD. The molecular motor toolbox for intracellular transport. Cell. 2003;112(4):467–80.
Hirokawa N, Noda Y, Tanaka Y, Niwa S. Kinesin superfamily motor proteins and intracellular transport. Nat Rev Mol Cell Biol. 2009;10(10):682–96.
Nelson N, Perzov N, Cohen A, Hagai K, Padler V, Nelson H. The cellular biology of proton-motive force generation by V-ATPases. J Exp Biol. 2000;203(Pt 1):89–95.
Breuer G, Lamers PP, Martens DE, Draaisma RB, Wijffels RH. The impact of nitrogen starvation on the dynamics of triacylglycerol accumulation in nine microalgae strains. Bioresour Technol. 2012;124:217–26.
Benvenuti G, Bosma R, Cuaresma M, Janssen M, Barbosa MJ, Wijffels RH. Selecting microalgae with high lipid productivity and photosynthetic activity under nitrogen starvation. J Appl Phycol. 2014;27(4):1425–31.
Hu Q, Sommerfeld M, Jarvis E, Ghirardi M, Posewitz M, Seibert M, Darzins A. Microalgal triacylglycerols as feedstocks for biofuel production: perspectives and advances. Plant J. 2008;54(4):621–39.
Salowe SP, Krol WJ, Iwatareuyl D, Townsend CA. Elucidation of the order of oxidations and identification of an intermediate in the multistep clavaminate synthase reaction. Biochemistry. 1991;30(8):2281–92.
Zhang ZH, Ren JS, Stammers DK, Baldwin JE, Harlos K, Schofield CJ. Structural origins of the selectivity of the trifunctional oxygenase clavaminic acid synthase. Nat Struct Biol. 2000;7(2):127–33.
Sumrada RA, Cooper TG. Urea carboxylase and allophanate hydrolase are components of a multifunctional protein in yeast. J Biol Chem. 1982;275(15):9119–27.
Kanamori T, Kanou N, Atomi H, Imanaka T. Enzymatic characterization of a prokaryotic urea carboxylase. J Bacteriol. 2004;186(9):2532–9.
Bloom AJ, Smart DR, Nguyen DT, Searles PS. Nitrogen assimilation and growth of wheat under elevated carbon dioxide. Proc Natl Acad Sci USA. 2002;99:1730–5.
Yang ZK, Niu YF, Ma YH, Xue J, Zhang MH, Yang WD, Liu JS, Lu SH, Guan YF, Li HY. Molecular and cellular mechanisms of neutral lipid accumulation in diatom following nitrogen deprivation. Biotechnol Biofuels. 2013;6:1.
Bhowmick GD, Koduru L, Sen R. Metabolic pathway engineering towards enhancing microalgal lipid biosynthesis for biofuel application—a review. Renew Sustain Energy Rev. 2015;50:1239–53.
Trentacoste EM, Shrestha RP, Smith SR, Gle C, Hartmann AC, Hildebrand M, Gerwick WH. Metabolic engineering of lipid catabolism increases microalgal lipid accumulation without compromising growth. Proc Natl Acad Sci USA. 2013;110(49):19748–53.
Yan J, Cheng R, Lin X, You S, Li K, Rong H, Ma Y. Overexpression of acetyl-CoA synthetase increased the biomass and fatty acid proportion in microalga Schizochytrium. Appl Microbiol Biotechnol. 2013;97(5):1933–9.
Yan F, Quan L, Wei Z, Cong W. Regulating phosphoenolpyruvate carboxylase activity by copper-induced expression method and exploring its role of carbon flux distribution in Synechocystis PCC 6803. J Appl Phycol. 2014;27(1):179–85.
Huang G-H, Chen G, Chen F. Rapid screening method for lipid production in alga based on Nile red fluorescence. Biomass Bioenergy. 2009;33(10):1386–92.
de Morais MG, Costa JA. Biofixation of carbon dioxide by Spirulina sp. and Scenedesmus obliquus cultivated in a three-stage serial tubular photobioreactor. J Biotechnol. 2007;129(3):439–45.
Lu N, Wei D, Jiang X-L, Chen F, Yang S-T. Fatty acids profiling and biomarker identification in snow alga Chlamydomonas nivalis by NaCl stress using GC/MS and multivariate statistical analysis. Anal Lett. 2012;45(10):1172–83.
Kates M, Baxter RM. Lipid composition of mesophilic and psychrophilic yeasts (Candida species) as influenced by environmental temperature. Can J Biochem Physiol. 1962;40:1213–27.
Langmead B, Trapnell C, Pop M, Salzberg SL. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009;10(3):R25.
Chari R, Lonergan KM, Pikor LA, Coe BP, Zhu CQ, Chan TH, MacAulay CE, Tsao M-S, Lam S, Ng RT, et al. A sequence-based approach to identify reference genes for gene expression analysis. BMC Med Genomics. 2010;3:32.
Kianianmomeni A, Hallmann A. Validation of reference genes for quantitative gene expression studies in Volvox carteri using real-time RT-PCR. Mol Biol Rep. 2013;40(12):6691–9.
Wang D, Li Q, Mao Y, Xing J, Su Z. High-level succinic acid production and yield by lactose-induced expression of phosphoenolpyruvate carboxylase in ptsG mutant Escherichia coli. Appl Microbiol Biotechnol. 2010;87(6):2025–35.
Authors' contributions
DW, FC, and HP designed the research. HP performed the research. DW and FC contributed reagents and analytic tools. HP and GC analyzed the data. GC, HP, and DW wrote the paper. All authors read and approved the manuscript.
Availability of supporting data
The raw DGE reads and differential gene expression data have been deposited in NCBI Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE76638) under GEO Series Accession Number GSE76638.
Funding
This work was supported by the National Natural Science Foundation of China (NSFC, Grant Nos. 31370383 and 31270085), the Major State Basic Research Development Program of China (973 Project, Grant No. 2011CB200904), and the National Hi-tech Research and Development Program (863 Project, Grant No. 2013AA065802). The funding bodies did not participate in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.
Author information
Huifeng Peng, Dong Wei, Gu Chen & Feng Chen: School of Food Science and Engineering, South China University of Technology, Guangzhou, 510640, People's Republic of China
Feng Chen: Institute for Food and Bioresource Engineering, College of Engineering, Peking University, Beijing, 100871, People's Republic of China
Correspondence to Dong Wei or Gu Chen.
Supplementary material: transcriptomic data and DEG-enriched pathways or gene families; additional tables and figures.
Peng, H., Wei, D., Chen, G. et al. Transcriptome analysis reveals global regulation in response to CO2 supplementation in oleaginous microalga Coccomyxa subellipsoidea C-169. Biotechnol Biofuels 9, 151 (2016). https://doi.org/10.1186/s13068-016-0571-5
Keywords: Coccomyxa subellipsoidea C-169; elevated CO2; lipid accumulation; transcriptomic analysis; phosphoenolpyruvate carboxylase; pyruvate carboxylase; carbamoyl-phosphate synthetase II; ferredoxin; vacuolar H+-ATPase; clavaminate synthase
Journal of Fluid Mechanics
Universal mechanism for air entrainment during liquid impact
Maurice H. W. Hendrix (a1) (a2), Wilco Bouwhuis (a1), Devaraj van der Meer (a1), Detlef Lohse (a1) and Jacco H. Snoeijer (a1) (a3)
25 February 2016, pp. 708-725
1 Physics of Fluids Group, Faculty of Science and Technology, Mesa+ Institute, and J. M. Burgers Center for Fluid Dynamics, University of Twente, 7500 AE Enschede, The Netherlands
2 Laboratory for Aero and Hydrodynamics, Delft University of Technology, Leeghwaterstraat 21, NL-2628 CA Delft, The Netherlands
3 Mesoscopic Transport Phenomena, Eindhoven University of Technology, Den Dolech 2, 5612 AZ Eindhoven, The Netherlands
DOI: https://doi.org/10.1017/jfm.2015.757
Published online by Cambridge University Press: 26 January 2016
When a millimetre-sized liquid drop approaches a deep liquid pool, both the interface of the drop and the pool deform before the drop touches the pool. The build-up of air pressure prior to coalescence is responsible for this deformation. Due to this deformation, air can be entrained at the bottom of the drop during the impact. We quantify the amount of entrained air numerically, using the boundary integral method for potential flow for the drop and the pool, coupled to viscous lubrication theory for the air film that has to be squeezed out during impact. We compare our results with various experimental data and find excellent agreement for the amount of air that is entrapped during impact onto a pool. Next, the impact of a rigid sphere onto a pool is numerically investigated and the air that is entrapped in this case also matches with available experimental data. In both cases of drop and sphere impact onto a pool the numerical air bubble volume $V_{b}$ is found to be in agreement with the theoretical scaling $V_{b}/V_{drop/sphere}\sim \mathit{St}^{-4/3}$ , where $\mathit{St}$ is the Stokes number. This is the same scaling as has been found for drop impact onto a solid surface in previous research. This implies a universal mechanism for air entrainment for these different impact scenarios, which has been suggested in recent experimental work, but is now further elucidated with numerical results.
© 2016 Cambridge University Press
JFM classification: Drops and Bubbles; Low-Reynolds-number flows: Lubrication theory; Interfacial Flows (free surface): Thin films
Randomized boosting with multivariable base-learners for high-dimensional variable selection and prediction
Christian Staerk (ORCID: orcid.org/0000-0003-0526-0189) & Andreas Mayr (ORCID: orcid.org/0000-0001-7106-9732)
Statistical boosting is a computational approach to select and estimate interpretable prediction models for high-dimensional biomedical data, leading to implicit regularization and variable selection when combined with early stopping. Traditionally, the set of base-learners is fixed for all iterations and consists of simple regression learners including only one predictor variable at a time. Furthermore, the number of iterations is typically tuned by optimizing the predictive performance, leading to models which often include unnecessarily large numbers of noise variables.
We propose three consecutive extensions of classical component-wise gradient boosting. In the first extension, called Subspace Boosting (SubBoost), base-learners can consist of several variables, allowing for multivariable updates in a single iteration. To compensate for the larger flexibility, the ultimate selection of base-learners is based on information criteria leading to an automatic stopping of the algorithm. As the second extension, Random Subspace Boosting (RSubBoost) additionally includes a random preselection of base-learners in each iteration, enabling the scalability to high-dimensional data. In a third extension, called Adaptive Subspace Boosting (AdaSubBoost), an adaptive random preselection of base-learners is considered, focusing on base-learners which have proven to be predictive in previous iterations. Simulation results show that the multivariable updates in the three subspace algorithms are particularly beneficial in cases of high correlations among signal covariates. In several biomedical applications the proposed algorithms tend to yield sparser models than classical statistical boosting, while showing a very competitive predictive performance also compared to penalized regression approaches like the (relaxed) lasso and the elastic net.
The proposed randomized boosting approaches with multivariable base-learners are promising extensions of statistical boosting, particularly suited for highly-correlated and sparse high-dimensional settings. The incorporated selection of base-learners via information criteria induces automatic stopping of the algorithms, promoting sparser and more interpretable prediction models.
The increasing availability of high-dimensional biomedical data with many possible predictor variables calls for appropriate statistical tools in order to deal with the challenging problem of selecting an interpretable model that includes only the relevant variables for modelling a particular outcome. At the same time, it is desirable that the prediction accuracy is not deteriorated by selecting an overly sparse model.
Various variable selection methods have been proposed in the context of high-dimensional regression (see Table 1). Regularization approaches minimize the empirical risk function while considering additional penalties on the "size" of the regression coefficients, including the lasso [1] and the relaxed lasso [2, 3] with an \(\ell _1\)-penalty as well as the elastic net [4] with a combined \(\ell _1\)- and \(\ell _2\)-penalty. These methods yield sparse point estimates through the imposed penalties, which enforce shrinkage of the regression coefficients towards zero; in particular, several coefficients are estimated to be exactly zero, corresponding to the exclusion of the respective variables from the model. A viable alternative to regularization methods is statistical boosting (see e.g. [5,6,7]). The general concept is best illustrated with the squared error loss, for which two important variants of statistical boosting, gradient boosting [8] and likelihood-based boosting [9], yield basically the same algorithm called \(L_2\)Boosting [10, 11]. In each iteration of \(L_2\)Boosting the currently estimated regression coefficient vector is updated by adding the one among several prespecified base-learners that leads to the best fit of the current residuals (i.e. of the negative gradient of the empirical risk function). The base-learners are typically defined by simple regression models, each including one of the covariates (known as component-wise boosting), and the starting point is chosen as the zero regression vector, so that early stopping of the boosting algorithm leads to implicit regularization and variable selection.
It has been shown that there is a close connection between the lasso and \(L_2\)Boosting [12,13,14] and that the performance of both methods is often very similar in practice [15]. However, an important difference is that the lasso enforces regularization explicitly via the definition of the \(\ell _1\)-penalized optimization problem, whereas the regularization in boosting is imposed rather indirectly via early stopping of the algorithm after a finite number of iterations. While the explicit form of regularization in methods like the lasso can facilitate the theoretical analysis of the resulting estimators (see e.g. [16]), the implicit algorithmic regularization of boosting offers considerable flexibility regarding the choice of the base-learners, enabling the application of boosting to a variety of different models, which can include non-linear covariate effects as in generalized additive models (GAMs) [9] or in generalized additive models for location, scale, and shape (GAMLSS) [17].
In practice, the choice of the penalty parameter in the lasso and the choice of the number of iterations in boosting are crucial, since they control the amount of imposed regularization and sparsity. The tuning of these parameters is typically guided by optimizing the predictive performance (e.g. via cross-validation), leading to final models which often include unnecessarily large numbers of noise variables with small effects. Stability selection [18,19,20,21] is a resampling technique that aims to reduce and control the number of selected false positives by applying a variable selection method on several subsamples of the observed data. However, the strict control of false positives by stability selection can induce a considerable reduction of selected variables which are truly relevant for modelling the response, leading to sparse models with poor predictive performance (cf. [22]).
By construction, boosting methods are "greedy", similar to forward stagewise algorithms: once a coefficient is updated at some point of the regularization path, the corresponding variable will be included in all more complex models along the path, although its contribution to the outcome may be small. Further, it has been shown that noise variables tend to be selected early on the lasso regularization path, even in favorable situations with low correlations between the covariates [23]. Thus, the regularization paths induced by classical boosting and the lasso are often too restrictive to simultaneously achieve a small false positive rate (sparsity) and a small false negative rate with good predictive performance.
Table 1 Selective summary of variable selection methods with types of regularizers, main regularization parameters and computational efficiency. Here we focus on the main regularization parameters of the different methods, but there are often several additional hyper-parameters
In this work we further exploit the algorithmic flexibility of boosting to address these issues. Here, the primary aim is not the application of boosting to more complex models; instead we reconsider the classical \(L_2\)Boosting algorithm in the context of high-dimensional linear regression and propose three consecutive extensions of the algorithm with regard to the choice of base-learners, aiming for more flexible regularization paths and sparser final estimators. Traditionally, the set of possible base-learners is fixed for all iterations of boosting and consists of simple regression models including only one covariate at a time. However, this choice is not imperative and may not be optimal: if for example two covariates are highly correlated, then it can be beneficial to update the corresponding regression coefficients jointly in one boosting iteration rather than separately in distinct iterations [32].
In our first extension, called Subspace Boosting (SubBoost), base-learners can consist of several variables so that multiple coefficients may be updated at a single iteration of the algorithm. In order to compensate for the larger flexibility in the choice of the base-learners and to avoid overfitting, in each iteration the final selection is based on likelihood-based \(\ell _0\)-type information criteria such as the extended Bayesian information criterion (EBIC) [33], leading to an automatic stopping of the algorithm without the need of additional tuning of the number of boosting iterations. For high-dimensional data with many possible covariates, the computation of the "best" base-learner in each iteration of SubBoost is too costly since base-learners can consist of multiple combinations of different variables. Thus, in a second step we extend the method to Random Subspace Boosting (RSubBoost), which incorporates a random preselection of base-learners in each iteration, enabling the computational scalability to high-dimensional settings. Similar randomization ideas have also been recently proposed in the context of component-wise gradient boosting, where significant computational gains with a promising predictive performance have been observed [34]. Finally, we propose a third extension, called Adaptive Subspace Boosting (AdaSubBoost), with an adaptive random preselection of base-learners in each iteration, where the adaptation is motivated by the recently proposed Adaptive Subspace (AdaSub) method [29, 35]. Here, the idea is to focus on those base-learners which—based on the information from the previous iterations—are more likely to be predictive for the response variable.
The performance of the proposed algorithms is investigated in a simulation study and through various biomedical data examples, and compared with classical \(L_2\)Boosting as well as with other approaches including twin boosting [31], stability selection [20], the (relaxed) lasso [1,2,3] and the elastic net [4].
Variable selection in statistical modelling
We consider a linear regression model
$$\begin{aligned} \mathbb {E}(Y_i \,|\, \varvec{X}) = \sum _{j=1}^p \beta _j X_{i,j} ,\quad i=1,\ldots , n, \end{aligned}$$
for a continuous response \(\varvec{Y}=(Y_1,\ldots ,Y_n)'\) and covariates \(X_1,\ldots ,X_p\), whose observed values are summarized in the design matrix \(\varvec{X} = (X_{i,j})\in \mathbb {R}^{n \times p}\). For ease of presentation we assume that the covariates and the response have been mean-centered, so that an intercept term can be omitted. Here, \(\varvec{\beta }=(\beta _1,\ldots ,\beta _p)'\in \mathbb {R}^p\) denotes the vector of regression coefficients, which one needs to estimate even when the sample size n is small in relation to the number of covariates p. In practice, one is interested in estimators \(\hat{\varvec{\beta }}\in \mathbb {R}^p\) which are sparse in the sense that only a relatively small number of components of \(\hat{\varvec{\beta }}\) are nonzero, i.e.
$$\begin{aligned} |\hat{S}| = |\{j\in \{1,\ldots ,p\}:~\hat{\beta }_j\ne 0\}| \ll p, \end{aligned}$$
enhancing the interpretability of the resulting model. At the same time, the sparse estimators should minimize the mean squared error of prediction
$$\begin{aligned} {\text {MSE}} = \frac{1}{n_{{\text {test}}}}\sum _{i=1}^{n_{{\text {test}}}}(\varvec{x}_{{\text {test}},i}'\hat{\varvec{\beta }} - y_{{\text {test}},i})^2 \,, \end{aligned}$$
where \((\varvec{x}_{{\text {test}},i}, y_{{\text {test}},i})\), for \(i=1,\ldots ,n_{{\text {test}}}\), denotes independent test data from the true data-generating distribution.
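To make the two target quantities concrete, the following minimal Python sketch computes the selected support \(\hat{S}\) and the test MSE (3) for a given coefficient estimate; the function names and the numpy-based implementation are illustrative choices and not part of the authors' R software.

```python
import numpy as np

def support(beta_hat, tol=1e-12):
    """Indices of the non-zero coefficients, i.e. the selected set S_hat."""
    return np.flatnonzero(np.abs(beta_hat) > tol)

def test_mse(beta_hat, X_test, y_test):
    """Mean squared error of prediction (3) on independent test data."""
    residuals = y_test - X_test @ beta_hat
    return float(np.mean(residuals ** 2))
```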
Table 1 provides a selective overview of different regularization and variable selection methods. In particular, information criteria reflect the inherent trade-off between sparsity and predictive performance. A general family of \(\ell _0\)-type selection criteria with penalty parameter \(\lambda >0\) is given by
$$\begin{aligned} \text {GIC}_{\lambda }((\varvec{X},\varvec{y}),S) = n \cdot \log \left( \frac{\Vert \varvec{y} - \varvec{X} \hat{\varvec{\beta }}_S \Vert ^2}{n}\right) + \lambda |S| \,, \end{aligned}$$
for a subset of variables \(S\subseteq \{1,\ldots ,p\}\) and observed data \((\varvec{X},\varvec{y})\), where \(\hat{\varvec{\beta }}_S\in \mathbb {R}^p\) denotes the least-squares estimator under the linear model (1) with active variables in S only, i.e.
$$\begin{aligned} \hat{\varvec{\beta }}_S = \mathop {\mathrm{arg min}}\limits _{\varvec{\beta }\in \mathbb {R}^p} \{\left\| \varvec{y} - \varvec{X} \varvec{\beta }\right\| :~ \beta _j = 0 \text { for } j \notin S \}. \end{aligned}$$
The choice of the penalty parameter \(\lambda =2\) in \(\text {GIC}_{\lambda }\) corresponds to the Akaike information criterion (AIC) [24], while the choice \(\lambda = \log (n) + 2\gamma \log (p)\) with constant \(\gamma \in [0,1]\) yields the extended Bayesian information criterion (\({\text {EBIC}}_\gamma\)) [33], with the original BIC [25] as special case for \(\lambda =\log (n)\). While minimizing the BIC provides model selection consistency under the classical asymptotic setting (p fixed, \(n\rightarrow \infty\)), minimization of the \({\text {EBIC}}_\gamma\) has been shown to yield model selection consistency under reasonable assumptions for high-dimensional settings (\(p,n\rightarrow \infty\)) [26, 33]. In general, the identification of the subset S which minimizes a particular \(\ell _0\)-type selection criterion is computationally hard, since the number of possible subsets \(S\subseteq \{1,\ldots ,p\}\) grows exponentially with the number of covariates p.
Thus, computationally more efficient regularization methods such as the lasso [1] have been developed which make use of the \(\ell _1\)-norm (\(\Vert \varvec{\beta }\Vert _1 = \sum _{j} |\beta _j|\)) as a convex relaxation to the "\(\ell _0\)-norm" (\(\Vert \hat{\varvec{\beta }}_S\Vert _0 = |S|\)) in (4). On the other hand, several heuristic optimization methods have been proposed to address the combinatorial problem of minimizing \(\text {GIC}_{\lambda }\), including different variants of classical stepwise selection [27, 36] as well as stochastic optimization methods such as "Shotgun Stochastic Search" [28] and Adaptive Subspace methods [29, 35].
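As a concrete illustration of the criterion (4), the following Python sketch evaluates \(\text {GIC}_{\lambda }\) for a given subset S by refitting the restricted least-squares estimator (5); the helper names are hypothetical and the sketch assumes mean-centered data without an intercept, as in model (1).

```python
import numpy as np

def gic(X, y, S, lam):
    """GIC_lambda of Eq. (4) for the submodel with active variables in S (a sketch)."""
    n = X.shape[0]
    if len(S) == 0:
        rss = float(np.sum(y ** 2))                  # empty model (data are mean-centered)
    else:
        beta_S, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
        rss = float(np.sum((y - X[:, S] @ beta_S) ** 2))
    return n * np.log(rss / n) + lam * len(S)

def ebic_penalty(n, p, gamma=1.0):
    """Penalty lambda = log(n) + 2*gamma*log(p) corresponding to the EBIC_gamma."""
    return np.log(n) + 2 * gamma * np.log(p)
```

Setting gamma = 0 in the second helper recovers the BIC penalty log(n), while lam = 2 corresponds to the AIC.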
Statistical boosting
Statistical boosting is an alternative variable selection approach which is similar to forward stagewise algorithms [5, 37]. In contrast to classical forward selection, boosting leads to a slower overfitting behavior and shrinkage of the estimated coefficients, similarly to regularization methods (such as the lasso).
The classical component-wise \(L_2\)Boosting algorithm (Algorithm 1) takes the design matrix \(\varvec{X}\in \mathbb {R}^{n\times p}\) and the observed continuous response vector \(\varvec{y}\in \mathbb {R}^{n}\) as input and, after \(m_{{\text {stop}}}\) iterations, yields the estimator \(\hat{\varvec{\beta }}^{[m_{{\text {stop}}}]}\in \mathbb {R}^p\) with selected variables in \(\hat{S} = \{j:\,\hat{\beta }_j^{[m_{{\text {stop}}}]}\ne 0 \}\subseteq \{1,\ldots ,p\}\) as output. Here, we introduce some additional notation, which will also be convenient in the context of the proposed extensions: in the following let \({{\mathcal {P}}}=\{1,\ldots ,p\}\) denote the index set of covariates \(X_1,\ldots ,X_p\). Furthermore, for a subset \(S\subseteq {\mathcal {P}}\), let \({\mathcal {P}}{\setminus } S = \{j\in {\mathcal {P}}: j \notin S\}\) denote the difference set and let \(\varvec{\beta }_{{\mathcal {P}}{\setminus } S}\in \mathbb {R}^{p-|S|}\) denote the vector \(\varvec{\beta }\in \mathbb {R}^p\) restricted to the components in \({\mathcal {P}}{\setminus } S\).
In the first step of \(L_2\)Boosting, the vector of regression coefficients is initialized as the zero vector, i.e. \(\hat{\varvec{\beta }}^{[0]}=\varvec{0}\), and the current vector of residuals is set to the observed response vector, i.e. \(\varvec{u}^{[0]}=\varvec{y}\). Then, in each iteration \(t=1,\ldots , m_{{\text {stop}}}\) of the algorithm, the "best component" \(A^{[t]}\) is selected among all linear component-wise base-learners (\(S\subseteq {\mathcal {P}}\) with \(|S|=1\)), which leads to the best fit of the current residuals \(\varvec{u}^{[t-1]}\). Subsequently, the estimated coefficient vector \(\hat{\varvec{\beta }}^{[t]} =\hat{\varvec{\beta }}^{[t-1]} + \tau \varvec{\beta }^{[t]}\) is adjusted in the direction \(\varvec{\beta }^{[t]}\) of the selected component by the multiplication with a small learning rate \(\tau\) (e.g. \(\tau =0.1\)) and the vector of residuals \(\varvec{u}^{[t]} = \varvec{y} - \varvec{X}\hat{\varvec{\beta }}^{[t]}\) is updated. Stopping the algorithm after \(m_{{\text {stop}}}\) iterations generally leads to variable selection, since only those variables \(X_j\) with \(j\in \hat{S} = \cup _{t=1}^{m_{{\text {stop}}}} A^{[t]}\) are included in the final model, which have been selected at least once as the best component.
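The selection and update steps of Algorithm 1 can be summarized in a few lines of code. The following Python sketch is a plain re-implementation of component-wise \(L_2\)Boosting for mean-centered data and is only meant to illustrate the algorithmic structure; it is not the mboost implementation used in the paper.

```python
import numpy as np

def l2boost(X, y, m_stop=500, tau=0.1):
    """Component-wise L2Boosting (Algorithm 1), illustrative sketch."""
    n, p = X.shape
    beta = np.zeros(p)
    u = y.copy()                                   # residuals u^[0] = y
    for _ in range(m_stop):
        best_j, best_rss, best_coef = 0, np.inf, 0.0
        for j in range(p):                         # selection step (a): best single-variable fit
            xj = X[:, j]
            coef = (xj @ u) / (xj @ xj)
            rss = np.sum((u - coef * xj) ** 2)
            if rss < best_rss:
                best_j, best_rss, best_coef = j, rss, coef
        beta[best_j] += tau * best_coef            # weak update with learning rate tau
        u = y - X @ beta                           # refresh residuals
    return beta
```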
The stopping iteration \(m_{{\text {stop}}}\) is a crucial tuning parameter of \(L_2\)Boosting, since it controls the induced shrinkage and sparsity. In practice, the choice of \(m_{{\text {stop}}}\) is typically guided by optimizing the predictive performance via cross-validation (CV) or bootstrapping techniques. However, in sparse high-dimensional settings, tuning regarding prediction accuracy often yields a final set \(\hat{S}\) of selected variables with many false positives (see results below). A simple approach to induce sparser models is the "earlier stopping" of the \(L_2\)Boosting algorithm, as implemented in the R-package xgboost [38]: the algorithm is stopped as soon as the CV-error does not improve for a particular number of succeeding iterations. This approach can also lead to a reduced computational time, as \(L_2\)Boosting does not have to be run for a prespecified maximum number of iterations; however, earlier stopping tends to come at the cost of an increase in false negatives and larger shrinkage of effect estimates.
Different extensions of \(L_2\)Boosting have been proposed to simultaneously reduce the number of selected noise variables and the induced shrinkage. Among them is twin boosting [31], which implements a two-stage approach: the first stage consists of a standard \(L_2\)Boosting model with tuning of the stopping iteration \(m_{1}\), yielding the estimated coefficient vector \(\hat{\varvec{\beta }}^{[m_{1}]}\). Then, in the second stage, an additional run of an adjusted \(L_2\)Boosting algorithm is conducted, where selection step (a) in Algorithm 1 is modified so that components \(j\in {\mathcal {P}}\) with large absolute coefficients \(|\hat{\beta }_j^{[m_{1}]}|\) from the first stage are updated more frequently in the second stage, reducing the imposed shrinkage for the corresponding variables [31]. After tuning of the stopping iteration \(m_{2}\) in the second stage, the final estimated coefficient vector \(\hat{\varvec{\beta }}^{[m_{2}]}\) with corresponding set of variables \(\hat{S}_{\text {twin}}=\{j\in {\mathcal {P}}:\hat{\beta }_j^{[m_{2}]}\ne 0\}\) is obtained, which is in general a subset of the variables selected by a single run of \(L_2\)Boosting.
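The second stage of twin boosting can be sketched by re-weighting the residual fit of each component with its squared first-stage coefficient, so that variables with large \(|\hat{\beta }_j^{[m_{1}]}|\) are preferred. The following Python code is only a rough illustration of this idea based on the verbal description above; the exact selection criterion of [31] may differ in detail.

```python
import numpy as np

def twin_boost_stage2(X, y, beta_stage1, m_stop=500, tau=0.1):
    """Illustrative second stage of a twin-boosting-type procedure (a sketch)."""
    n, p = X.shape
    beta = np.zeros(p)
    u = y.copy()
    weights = beta_stage1 ** 2                       # re-weighting by first-stage coefficients
    for _ in range(m_stop):
        gains = np.array([(X[:, j] @ u) ** 2 / (X[:, j] @ X[:, j]) for j in range(p)])
        scores = weights * gains                     # components with beta_stage1[j] = 0 are never chosen
        best_j = int(np.argmax(scores))
        if scores[best_j] <= 0:
            break
        best_coef = (X[:, best_j] @ u) / (X[:, best_j] @ X[:, best_j])
        beta[best_j] += tau * best_coef
        u = y - X @ beta
    return beta
```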
Stability selection is a general ensemble approach to control the number of false positive variables [18]. In the context of boosting [20, 21], stability selection applies a boosting algorithm on several subsamples of size \(\left\lfloor n/2\right\rfloor\) from the fully observed data of size n. Then, for each variable \(X_j\), its relative selection frequency \(f_j=\frac{1}{K}\sum _{k=1}^K \mathbbm {1}_{S^{[k]}}(j)\) is computed, where \(S^{[k]}\) denotes the variables selected by boosting for the kth subsample (\(k=1,\ldots ,K\)). Finally, for a threshold \(\pi _{\text {thr}}\in (0,1)\), the selected set of variables by stability selection is defined by \(\hat{S}_{\text {stab}} = \{j\in {\mathcal {P}}: f_j\ge \pi _{\text {thr}}\}\), where the threshold \(\pi _{\text {thr}}\) can be chosen in order to control the expected number of false positives (see [18, 19] for details). The idea behind stability selection is to consider only those variables to be "stable" which are selected frequently for different subsamples of the observed data, so that, for a sensible choice of the threshold \(\pi _{\text {thr}}\), the model \(\hat{S}_{\text {stab}}\) is typically much sparser than the model selected by a single run of boosting for the full dataset.
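Since stability selection only requires repeated application of an arbitrary selection method on subsamples, it can be sketched generically. In the following Python snippet, `selector` stands for any variable selection routine (for example one run of a boosting algorithm) returning a set of selected indices; the function and argument names are hypothetical, and in practice the threshold \(\pi _{\text {thr}}\) is chosen to control the expected number of false positives as in [18, 19].

```python
import numpy as np

def stability_selection(X, y, selector, K=100, pi_thr=0.7, seed=None):
    """Relative selection frequencies f_j over K subsamples of size floor(n/2) (a sketch)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(K):
        idx = rng.choice(n, size=n // 2, replace=False)       # random subsample
        selected = np.asarray(list(selector(X[idx], y[idx])), dtype=int)
        counts[selected] += 1
    freq = counts / K
    stable_set = np.flatnonzero(freq >= pi_thr)               # S_hat_stab
    return stable_set, freq
```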
Proposed extensions of boosting
We propose three consecutive extensions of \(L_2\)Boosting with the aim of generating more flexible regularization paths and encouraging sparser solutions. In contrast to twin boosting and stability selection which use multiple runs of the original or slightly adjusted \(L_2\)Boosting algorithm to yield sparser models, the novel extensions modify the boosting algorithm directly through the choice of base-learners.
Subspace Boosting (SubBoost)
We first introduce Subspace Boosting (SubBoost) as a natural extension of \(L_2\)Boosting (Algorithm 1): in addition to the standard component-wise base-learners, further base-learners can be selected which estimate the effects of multiple variables, so that coefficients can be updated jointly in a single iteration. However, to counterbalance the larger flexibility, the final selection of the components to be updated is based on an additional double-checking step via a likelihood-based variable selection procedure.
The details of SubBoost are given in Algorithm 2. There are two main differences to classical \(L_2\)Boosting (Algorithm 1) regarding the selection step (a). First, in step (a3) of SubBoost the "best" subset \(S^{[t]}\) of size \(|S^{[t]}|=s\) is computed which yields the best fit to the current residuals \(\varvec{u}^{[t-1]}\). Here, in contrast to component-wise \(L_2\)Boosting with \(s=1\), the number of components s to be updated can be larger than one. Second, in an additional double-checking step (a4) we consider a prespecified variable selection procedure \(\Phi :{\mathcal {D}} \times 2^{\mathcal {P}} \rightarrow 2^{\mathcal {P}}\), where \({\mathcal {D}}\) denotes the sample space and \(2^{\mathcal {P}}=\{S:S\subseteq {\mathcal {P}}\}\) the power set of \({\mathcal {P}}=\{1,\ldots ,p\}\). For a given subset S of variables and observed data \((\varvec{X},\varvec{y})\in {\mathcal {D}}\), the selection procedure \(\Phi\) yields the model \(\Phi ((\varvec{X},\varvec{y}),S)\subseteq S\) with variables selected within S. Here, we consider the minimization of likelihood-based \(\ell _0\)-type generalized information criteria (see Eq. (4)) such as the AIC, the BIC or the EBIC:
$$\begin{aligned} \Phi ((\varvec{X},\varvec{y}),S) = \mathop {\mathrm{arg min}}\limits _{A\subseteq S} \text {GIC}_{\lambda }((\varvec{X},\varvec{y}),A) \,. \end{aligned}$$
In step (a4) of SubBoost, \(\Phi\) is applied to the "best" set \(S^{[t]}\) of s variables from step (a3), yielding the final subset \(A^{[t]}= \Phi ((\varvec{X},\varvec{y}), S^{[t]}) \subseteq S^{[t]}\) of components to be updated in iteration t. Thus, while the maximum size of multivariable updates is given by \(|S^{[t]}|=s\), the realized updates \(A^{[t]}\) can be of smaller and varying sizes \(|A^{[t]}|\le s\) in different iterations t. Here, it is important to note that the variable selection procedure \(\Phi\) considers the observed data \((\varvec{X},\varvec{y})\) and not the current residuals \((\varvec{X},\varvec{u}^{[t-1]})\) as input data, so that the selection is based on the original likelihood. By this double-checking step it is ensured that variables which would never, for any subset of variables \(S\subseteq {\mathcal {P}}\), be selected by the base procedure \(\Phi\) on the originally observed data \((\varvec{X},\varvec{y})\), are also not selected in SubBoost even when they may provide the best fit to the current residuals in a particular iteration of the algorithm. Therefore, noise variables are less likely to be selected by SubBoost and the sparsity of the final model is encouraged.
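Because the double-checking step (a4) only searches within the small candidate set \(S^{[t]}\) of size at most s, the minimization in (6) can simply enumerate all \(2^{|S^{[t]}|}\) subsets. The following Python sketch does exactly that, reusing the gic() helper from the earlier sketch; the authors' implementation instead uses the leaps-and-bounds algorithm of the R package leaps for the subset searches, so this naive enumeration is purely illustrative.

```python
import numpy as np
from itertools import combinations

def phi_best_subset(X, y, S, lam):
    """Phi((X, y), S): subset A of S minimizing GIC_lambda, cf. Eq. (6) (a sketch)."""
    best_A, best_val = [], np.inf
    for k in range(len(S) + 1):                # k = 0 allows A = {}, enabling automatic stopping
        for A in combinations(S, k):
            val = gic(X, y, list(A), lam)      # gic() as defined in the earlier sketch
            if val < best_val:
                best_A, best_val = list(A), val
    return best_A
```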
The best model according to \(\Phi\) among all considered variables with indices in \({\mathcal {P}}=\{1,\ldots ,p\}\) is given by \(A^*=\Phi ((\varvec{X}, \varvec{y}),{\mathcal {P}})\). However, in practice there are often many models \(A^{[t]}\subseteq {\mathcal {P}}\) with \(A^{[t]}\ne A^*\) of reasonable size which provide a similar fit. Estimating the coefficient vector on the single best model \(A^*\) according to \(\Phi\) would generally not take into account the model uncertainty (see e.g. [39]). The SubBoost algorithm can be interpreted as a sequential ensemble method, since estimates from multiple "best" models \(A^{[t]}=\Phi ((\varvec{X}, \varvec{y}),S^{[t]})\) with \(S^{[t]}\subseteq {\mathcal {P}}\) are combined in an adaptive way, where \(A^{[t]}\) is the best model according to \(\Phi\) when only variables in \(S^{[t]}\) are considered. Note that the maximum size of updates \(s=|S^{[t]}|\) in SubBoost can be prespecified or, alternatively, be determined by the best model according to \(\Phi\), i.e. by computing \(S^{[0]}=A^*=\Phi ((\varvec{X}, \varvec{y}),{\mathcal {P}})\) and setting \(s=|S^{[0]}|\). The latter option constitutes an effective data-driven way to determine a suitable maximum update size s in case of no particular prior information.
A favorable consequence of double-checking with likelihood-based selection criteria is that it can lead to an automatic stopping of the SubBoost algorithm: if for some iteration t the selected subset \(A^{[t]}\) after step (a4) is the empty set, the algorithm is stopped since no components will be updated and the vector of residuals \(\varvec{u}^{[t]}=\varvec{u}^{[t-1]}\) will remain the same, leading to the same result also in the following iterations. Note that in data situations where most of the predictor variables are informative, the automatic stopping criterion may not be reached in the sense that \(A^{[t]}=\emptyset\) for some iteration t; instead the SubBoost algorithm may continue to update the effects of some signal variables with diminishing changes, indicating the convergence of the algorithm. However, this behavior is unlikely in situations with several noise variables, particularly in sparse settings. In all cases, the base variable selection procedure \(\Phi\) controls the sparsity of the final model and there is no need for additional tuning of the stopping iteration via resampling methods.
Random and Adaptive Subspace Boosting (RSubBoost and AdaSubBoost)
For high-dimensional data with a large number of variables p it can be prohibitive to compute in every iteration the s components yielding the best fit to the current residuals in step (a3) of the SubBoost algorithm, since there are \(\left( {\begin{array}{c}p\\ s\end{array}}\right)\) possible subsets of size s which have to be considered. Instead of searching through all possible base-learners of size s, it is natural to consider only a random selection of variables for a possible update in each iteration of the algorithm. Thus, we propose two extensions of SubBoost, called Random Subspace Boosting (RSubBoost) and Adaptive Subspace Boosting (AdaSubBoost), which are based on an (adaptive) random preselection of base-learners (see Algorithm 3 and Fig. 1).
More specifically, the additional steps (a1) and (a2) in Algorithm 3 concern the random preselection of base-learners: in step (a1), independent Bernoulli random variables \(b_j^{[t]}\sim \text {Bernoulli}(r_j^{[t-1]})\) with sampling probabilities \(r_j^{[t-1]}\) are generated for \(j\in {\mathcal {P}}{\setminus } S^{[t-1]}\). Then, in step (a2), the set of variables considered for a possible update in iteration t is defined by \(V^{[t]}=S^{[t-1]} \cup \{j\in {\mathcal {P}}{\setminus } S^{[t-1]}:\,b_j^{[t]}=1\}\), i.e. \(V^{[t]}\) includes all variables in \(S^{[t-1]}\) as well as a random set of additional variables (for which \(b_j^{[t]}=1\)). Here the idea is to reconsider the variables in \(S^{[t-1]}\) for a possible update in the next iteration t, since they did yield the best fit to the residuals in the previous iteration and are thus likely to be selected again in the next iteration based on the updated residuals. By this, the speed of convergence of the algorithm is increased and the sparsity of the final estimator is encouraged, as variables which have already been updated are more likely to be updated in the succeeding iterations as well. Steps (a3)-(c) in AdaSubBoost are basically the same as for the SubBoost algorithm, while in step (d) the sampling probabilities \(r_j^{[t]}\) are adapted based on the currently estimated "importance" of the individual variables \(X_j\). Here we employ a similar adaptation rule as in the Adaptive Subspace (AdaSub) method [29]: the sampling probability of variable \(X_j\) in iteration \(t+1\) is given by
$$\begin{aligned} r_j^{[t]}=\frac{q-s+K\sum _{i=1}^t \mathbbm {1}_{S^{[i]}}(j)}{p-s+K\sum _{i=1}^t \mathbbm {1}_{V^{[i]}}(j)} \, , \end{aligned}$$
where \(\mathbbm {1}_S\) denotes the indicator function for a set S. Thus, \(r_j^{[t]}\) can be viewed as a scaled fraction of the number of times variable \(X_j\) has been selected in the set \(S^{[i]}\) divided by the number of times variable \(X_j\) has been considered in the set of possible base-learners \(V^{[i]}\), \(i\le t\). Therefore, those variables \(X_j\), which already yielded a good fit in many previous iterations, are also reconsidered with a larger probability in the set of base-learners for the succeeding iterations of AdaSubBoost.
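The adaptation rule (7) only requires bookkeeping of how often each variable has been selected in \(S^{[i]}\) and how often it has been offered in \(V^{[i]}\). A minimal Python sketch, with illustrative argument names, is:

```python
import numpy as np

def sampling_probabilities(select_counts, consider_counts, p, q, s, K):
    """Adaptive sampling probabilities r_j^[t] of Eq. (7) (a sketch).

    select_counts[j]   = #{i <= t : j in S^[i]}
    consider_counts[j] = #{i <= t : j in V^[i]}
    K = 0 yields the constant probabilities (q - s) / (p - s) used by RSubBoost.
    """
    select_counts = np.asarray(select_counts, dtype=float)
    consider_counts = np.asarray(consider_counts, dtype=float)
    return (q - s + K * select_counts) / (p - s + K * consider_counts)
```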
Schematic flowchart of Adaptive Subspace Boosting (AdaSubBoost). For details see Algorithm 3
The Random Subspace Boosting (RSubBoost) algorithm can be regarded as a special case of AdaSubBoost by setting \(K=0\), resulting in constant sampling probabilities \(r_j^{[t]} = r_j^{[0]}=\frac{q-s}{p-s}\). Thus, in RSubBoost all variables \(X_j\) with \(j\notin S^{[t-1]}\) have the same probability \(P(j\in V^{[t]}) = \frac{q-s}{p-s}\) to be considered in the set of possible base-learners for selection in iteration t. In RSubBoost the expectation of the size of \(V^{[t]}\) is constant and given by
$$\begin{aligned} \mathbb {E}|V^{[t]}|=s + (p-s)\cdot \mathbb {E}\big [b_j^{[t]}\big ] = s + (p-s)\cdot \frac{q-s}{p-s} = q\,, \end{aligned}$$
implying that on average q variables are considered for an update in each iteration t. The tuning parameter \(q\in (s,p]\) controls the expected search size of the algorithm: if q is chosen to be small, then only few variables are considered for an update in each iteration; however, if \(q=p\) then all variables are always considered, so that the RSubBoost algorithm coincides with the non-randomized SubBoost algorithm. The choice of the expected search size q is mainly guided by computational considerations, i.e. q should be chosen small enough so that the search step (a3) can be carried out efficiently (e.g. \(q\le 25\)). On the other hand, q should be chosen larger than the maximum update size s, so that several "new" variables \(X_j\) (with \(j\notin S^{[t-1]}\)) are considered in the stochastic search. We recommend to use \(q=20\) and \(s\le 15\), providing computational efficiency and an effective stochastic search (see Additional file 1: Section 2.5.3 for results on the influence of q).
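For completeness, the random preselection steps (a1)-(a2) can be sketched as follows: variables in \(S^{[t-1]}\) are always kept, and every other variable enters \(V^{[t]}\) independently with its current sampling probability, so that with the constant probabilities \((q-s)/(p-s)\) of RSubBoost the expected size of \(V^{[t]}\) equals q as in (8). Again, the function below is an illustrative sketch rather than the implementation of the SubBoost package.

```python
import numpy as np

def draw_subspace(S_prev, r, seed=None):
    """Steps (a1)-(a2): build the random subspace V^[t] from S^[t-1] (a sketch)."""
    rng = np.random.default_rng(seed)
    p = len(r)
    V = set(S_prev)                                  # variables from S^[t-1] are always reconsidered
    for j in range(p):
        if j not in V and rng.random() < r[j]:       # Bernoulli(r_j) inclusion of further variables
            V.add(j)
    return sorted(V)
```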
The parameter \(K\ge 0\) controls the adaptation rate of AdaSubBoost. If K is chosen to be large (e.g. \(K=10,000\)), then the sampling probabilities are adapted quickly; on the other hand, for \(K=0\) the RSubBoost algorithm with constant sampling probabilities is retrieved. Regarding the stochastic search for "good" base-learners, K controls the trade-off between exploitation (corresponding to large K with a focus on base-learners which have already proven successful in previous iterations) and exploration (corresponding to \(K\approx 0\) without a strong focus on particular sets of base-learners). In practice, choosing \(K=\frac{p}{q}\) serves as a sensible default in AdaSubBoost (see Additional file 1: Section 2.5.2 for results on the influence of K). Note that, regardless of the choice of the sampling probabilities, in each iteration t of RSubBoost and AdaSubBoost all variables in \(S^{[t-1]}\) (which have provided the best fit to the residuals in the previous iteration) are reconsidered in the subspace \(V^{[t]}\) of base-learners. Thus, the adaptive choice of the sampling probabilities only affects the random search in the set of variables \({\mathcal {P}}{\setminus } S^{[t-1]}\) which are additionally considered in the next set of base-learners. In comparison to RSubBoost, the adaptive choice in AdaSubBoost can result in a higher predictive power, as more promising combinations of covariates are considered for potential joint updates. Furthermore, variables \(X_j\), which have already been selected, are generally more likely to be updated in the following iterations as well, which further encourages sparsity.
Due to the adaptive model building nature of boosting it is crucial that the first iteration of AdaSubBoost (and RSubBoost) starts with a reasonable set of candidate variables \(V^{[1]}\), since otherwise uninformative variables may be selected, which would not have been selected if other informative variables had already been considered in \(V^{[1]}\). Thus, a screening method such as component-wise \(L_2\)Boosting (Algorithm 1), forward regression [36] or sure independence screening based on marginal associations [40] should be applied to select an initial set \(S^{[0]}\) of \(|S^{[0]}|=s\) variables (in case the maximum update size s is prespecified). Alternatively and similarly as in the SubBoost algorithm, the maximum update size s can be selected in a data-driven way, by first screening a subset \(V^{[0]}\) of size \(|V^{[0]}|=s_{{\text {max}}}\) (e.g. \(s_{{\text {max}}}=15\)), computing the best model \(S^{[0]}=\Phi ( (\varvec{X},\varvec{y}), V^{[0]})\) according to \(\Phi\) restricted to variables in \(V^{[0]}\) and setting \(s=|S^{[0]}|\). Since \(S^{[0]}\subseteq V^{[1]}\) by the construction of the algorithm, all screened variables in \(S^{[0]}\) will be considered for an update in the first iteration of AdaSubBoost. If not indicated otherwise, in this work we will use forward regression in the initial screening step and apply the data-driven approach for selecting the maximum update size s (except for the two simulation examples in Figure 2 and Additional file 1: Fig. S1, where we prespecify \(s=2\) for illustration purposes). Note that AdaSubBoost and RSubBoost also provide automatic stopping similarly to SubBoost. However, the algorithms should not be stopped immediately when \(A^{[t]}=\emptyset\), since in the following iterations \(t'>t\) with different random sets \(V^{[t']}\) the selected sets \(S^{[t']}\) and \(A^{[t']}\) may change again. In practice, the algorithms may be stopped before the maximum number of iterations \(m_{{\text {max}}}\) is reached, if no variables are updated for a prespecified number of iterations \(N_{{\text {stop}}}\) (e.g. \(N_{{\text {stop}}}=\frac{p}{s}\)), i.e. the algorithms are stopped at iteration \(t\ge N_{{\text {stop}}}\) if \(A^{[t]}=A^{[t-1]}=\cdots =A^{[t-N_{{\text {stop}}}+1]}=\emptyset\).
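The initial screening can be carried out with any of the methods mentioned above; as one concrete possibility, a plain greedy forward regression in Python could look as follows (a sketch assuming mean-centered data, not the specific implementation of [36] used by the authors).

```python
import numpy as np

def forward_screen(X, y, s_max):
    """Greedy forward regression returning s_max screened variable indices (a sketch)."""
    n, p = X.shape
    selected = []
    for _ in range(min(s_max, p)):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]                    # candidate model: current set plus variable j
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = np.sum((y - X[:, cols] @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected
```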
Table 2 Comparison of classical component-wise \(L_2\)Boosting with the three proposed extensions: Subspace Boosting (SubBoost), Random Subspace Boosting (RSubBoost) and Adaptive Subspace Boosting (AdaSubBoost)
Table 2 provides a compact overview regarding the properties of component-wise \(L_2\)Boosting and the novel extensions SubBoost, RSubBoost and AdaSubBoost. In contrast to component-wise \(L_2\)Boosting, all three extensions allow multivariable updates of effects in a single iteration, as well as double-checking steps with a likelihood-based variable selection procedure \(\Phi\), providing automatic stopping of the algorithms and enhanced sparsity. The randomized preselection of base-learners in RSubBoost and AdaSubBoost leads to efficient algorithms even in high-dimensional settings with a large number of covariates p, with AdaSubBoost additionally providing an adaptive stochastic search in the space of base-learners based on the information from all previous iterations. An R package implementing the three proposed subspace boosting algorithms is available at GitHub (https://github.com/chstaerk/SubBoost).
The particular differences between classical component-wise \(L_2\)Boosting and the proposed randomized extensions RSubBoost and AdaSubBoost are first investigated based on an illustrative high-dimensional simulated data example. Then, a systematic simulation study is conducted in which the predictive performance and variable selection properties of the new algorithms are analyzed in comparison to competing boosting and regularization methods. Finally, the performance of the different methods is compared for various biomedical data applications.
Illustrative high-dimensional example
An illustrative high-dimensional dataset is simulated according to the linear regression model (1) with \(p=1000\) covariates, \(n=100\) samples, standard normally distributed errors and sparse coefficient vector \(\varvec{\beta }=(-2,-1,1,2,0,\ldots ,0)'\in \mathbb {R}^p\), i.e. only variables \(X_1,X_2,X_3\) and \(X_4\) are informative for the response Y. Furthermore, samples of continuous covariates are independently generated from a multivariate normal distribution with a Toeplitz correlation structure, i.e. \(\varvec{x}_i\sim \mathcal {N}_p(\varvec{0}, \varvec{\Sigma })\) for \(i=1,\ldots ,n\) with covariance matrix entries \(\Sigma _{j,k}=\rho ^{|j-k|}\). The correlation between adjacent covariates is set to \(\rho =0.8\), representing a challenging but realistic scenario.
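The simulation design of this illustrative example is easy to reproduce; the following Python sketch generates one such dataset (covariates and response would additionally be mean-centered before applying the algorithms). The function name and the use of numpy are illustrative; the original experiments were run in R.

```python
import numpy as np

def simulate_toeplitz_data(n=100, p=1000, rho=0.8, seed=None):
    """One dataset of the illustrative example: Toeplitz-correlated Gaussian covariates,
    beta = (-2, -1, 1, 2, 0, ..., 0) and standard normal errors (a sketch)."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = rho ** np.abs(np.subtract.outer(idx, idx))        # Sigma_jk = rho^|j-k|
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n, method="cholesky")
    beta = np.zeros(p)
    beta[:4] = [-2.0, -1.0, 1.0, 2.0]
    y = X @ beta + rng.standard_normal(n)
    return X, y, beta
```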
The performance of \(L_2\)Boosting is illustrated in Figure 2, where the coefficient paths along the number of iterations are shown for the high-dimensional data example (using the R-package mboost [7]). The "optimal" stopping iteration \(m_{\text {CV}}\) selected by 10-fold cross-validation (CV) implies that several components corresponding to noise variables are included in the \(L_2\)Boosting model after \(m_{\text {CV}}\) iterations. In particular, the CV-optimal stopping iteration results in an estimate \(\hat{\varvec{\beta }}^{[m_{\text {CV}}]}\) with \(|\{j\in {\mathcal {P}}:\,\hat{\beta }_j^{[m_{\text {CV}}]}\ne 0 \}| = 14\) non-zero components (selected variables), among which 12 are false positives (i.e. \(j\in \{5,\ldots ,p\}\)) while only two are true positives (i.e. \(j\in \{1,\ldots ,4\}\)). Thus, the CV-optimal \(L_2\)Boosting model yields an unnecessarily large number of selected variables and also misses the two correlated signal variables \(X_2\) and \(X_3\) with opposite effects on the response.
High-dimensional illustrative data example. Coefficient paths \(\beta _j^{[t]}\) for \(j\in {{\mathcal {P}}}\) along the number of iterations t of \(L_2\)Boosting, RSubBoost and AdaSubBoost. Horizontal black dotted lines indicate the component values of the true \(\varvec{\beta }\). For \(L_2\)Boosting, the vertical red line indicates the CV-optimal stopping iteration \(m_{{\text {CV}}}\), while for RSubBoost and AdaSubBoost the automatic stopping after the first \(N_{{{\text {stop}}}}=p/2=500\) succeeding iterations without any updates is indicated
To illustrate the performance of the subspace boosting algorithms, we apply RSubBoost and AdaSubBoost on the simulated dataset using the \({\text {EBIC}}_\gamma\) with \(\gamma =1\) in the selection procedure \(\Phi\), which is particularly suitable for high-dimensional data (cf. [33]). In contrast to component-wise \(L_2\)Boosting (which implicitly is restricted to \(s=1\)), the number of components to be updated in the subspace algorithms is set to \(s=2\). In all subspace algorithms we use the "leaps-and-bounds" algorithm implemented in the R-package leaps [41] for computing the best subsets in steps (a3) and (a4) of the algorithms. While in \(L_2\)Boosting the default learning rate \(\tau =0.1\) is used, in the subspace algorithms the learning rate is set to \(\tau =0.01\); note that, due to the stochastic nature of RSubBoost and AdaSubBoost considering only a random subspace of all base-learners in each iteration, it is generally recommended to choose a relatively small learning rate, so that the estimated effects of important covariates are more likely to be updated multiple times in combination with various other important covariates. The mean number of covariates in RSubBoost and AdaSubBoost considered for a possible update in each iteration is initialized as \(q=10\), while \(K=\frac{p}{q}\) is used as the adaptation parameter in AdaSubBoost. Since the application of SubBoost is computationally intractable for high-dimensional search spaces, we only compare the performance of its randomized extensions with classical \(L_2\)Boosting (see Additional file 1: Section 1 for an illustrative low-dimensional example including SubBoost).
Figure 2 illustrates that no false positives are included in the RSubBoost and AdaSubBoost models, as the double-checking with \({\text {EBIC}}_1\) prevents the selection of such variables in this case. In contrast to \(L_2\)Boosting, the signal variable \(X_2\) is selected by RSubBoost as it is jointly updated with the correlated variable \(X_4\) (having an opposite effect on the response); this illustrates the potential benefits of considering multivariable base-learners. Note that RSubBoost induces somewhat less shrinkage on the effect estimate for \(X_4\) in comparison to \(L_2\)Boosting. While RSubBoost does not select variable \(X_3\), the adaptive choice of the sampling probabilities in AdaSubBoost leads to the detection of the signal variable \(X_3\). In order to analyze this favorable behavior, it is instructive to investigate the realized joint updates \(A^{[t]}\) along the iterations of RSubBoost and AdaSubBoost: during the first iterations of both algorithms (using the same random seed), variables \(X_1\) and \(X_4\), having the largest effects on the response, are updated jointly (\(A^{[t]}=\{1,4\}\) for \(t=1,\ldots ,115\)). Subsequently, variables \(X_2\) and \(X_4\) are also updated together (\(A^{[t]}=\{2,4\}\) for \(t=116,\ldots ,166\)). The RSubBoost algorithm does not select any further variables and the stopping criterion is reached after 677 iterations. However, since variables \(X_1\) and \(X_2\) have already been updated several times, their sampling probabilities \(r_1^{[t]}\) and \(r_2^{[t]}\) have been increased in AdaSubBoost, so that they are more likely to be reconsidered in the following iterations. This adaptation finally enables AdaSubBoost to identify the beneficial joint updates of variables \(X_1\) and \(X_3\) (\(A^{[419]}=\{1,3\}\)) as well as of variables \(X_2\) and \(X_3\) (\(A^{[t]}=\{2,3\}\) for \(t=420,\ldots ,437\)). Subsequently, no further updates occur (\(A^{[t]}=\emptyset\) for \(t\ge 438\)), so that AdaSubBoost reaches the stopping criterion after 937 iterations. Thus, AdaSubBoost is the only algorithm which identifies the true underlying model \(S_{{\text {true}}}=\{1,2,3,4\}\) for this setting.
Prediction error for high-dimensional illustrative data example. Mean squared error (MSE) of prediction on training data and independent test set (of size 1000), along the number of iterations of \(L_2\)Boosting, RSubBoost and AdaSubBoost (cf. Fig. 2). The vertical lines indicate the stopping iterations of the algorithms
The favorable estimation and variable selection properties of RSubBoost and AdaSubBoost also imply an improvement in predictive performance (see Figure 3). In contrast to \(L_2\)Boosting, the MSE on the training data for the subspace algorithms does not decline towards zero as the number of iterations increases; instead, RSubBoost and AdaSubBoost induce an automatic stopping of learning. While classical \(L_2\)Boosting continues to improve the fit to the training data, leading to a worsening performance on test data, the new extensions do not suffer from overfitting. In this example, AdaSubBoost yields the smallest prediction error on test data, as it is the only method which exactly identifies the true model.
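The training/test MSE comparison in Figure 3 can be reproduced in spirit with a few lines of R (the test set below is freshly simulated; the predict() calls refer to the mboost fit sketched above):

X_test <- mvrnorm(1000, mu = rep(0, p), Sigma = Sigma)   # independent test set of size 1000
colnames(X_test) <- colnames(X)
y_test <- as.vector(X_test %*% beta + rnorm(1000))
mse <- function(obs, pred) mean((obs - pred)^2)
mse(y,      predict(mod[m_cv], newdata = X))             # training MSE at m_CV
mse(y_test, predict(mod[m_cv], newdata = X_test))        # test MSE at m_CV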
Simulation study
Low-dimensional setting
In this simulation study we first examine a low-dimensional setting with \(p=20\) candidate variables (cf. Additional file 1: Section 1 for an illustrative low-dimensional example). As in the illustrative high-dimensional example, we consider \(n=100\) samples, multivariate normally distributed covariates using a Toeplitz correlation structure with \(\rho =0.8\) and the true model \(S_{{\text {true}}}=\{1,2,3,4\}\); however, to examine a variety of settings, for each of 500 different simulated datasets (simulation replicates), the true coefficients \(\beta _j\) for \(j\in S_{{\text {true}}}\) are not the same but independently simulated from the uniform distribution \(U(-2,2)\). Since we are facing a low-dimensional setting, the standard BIC is used in the selection procedure \(\Phi\) for the subspace algorithms. Further parameters in the boosting algorithms are specified as before, except that we do not use a prespecified maximum update size (\(s=2\)); instead, for each dataset the employed model selection procedure based on the BIC yields the initial selected set \(S^{[0]}\) and automatically determines the maximum size \(s=|S^{[0]}|\le s_{{\text {max}}}=7\) of the following updates in the subspace boosting algorithms.
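A sketch of one low-dimensional replicate, including how a BIC-based best-subset initialization could yield \(S^{[0]}\) and hence the maximum update size s (again an illustrative reconstruction, not the authors' code):

p <- 20; n <- 100
Sigma <- toeplitz(0.8^(0:(p - 1)))
X <- mvrnorm(n, rep(0, p), Sigma); colnames(X) <- paste0("X", 1:p)
beta_true <- c(runif(4, min = -2, max = 2), rep(0, p - 4))  # coefficients redrawn for every replicate
y <- as.vector(X %*% beta_true + rnorm(n))
fit0 <- regsubsets(X, y, nvmax = 7, method = "exhaustive")  # s_max = 7
s  <- which.min(summary(fit0)$bic)        # size of the BIC-optimal baseline model
S0 <- which(summary(fit0)$which[s, -1])   # initially selected set S^[0]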
To put the results into perspective, we consider \(L_2\)Boosting [5], twin boosting [31], stability selection [20], the lasso [1], the elastic net [4] and the relaxed lasso [2, 3] as benchmark competitors (see Table 1). For \(L_2\)Boosting (Algorithm 1) we consider two implementations of the algorithm differing in the choice of the stopping iteration: in the first implementation based on the R-package mboost [7], the stopping iteration is chosen by minimizing the 10-fold CV-error within a prespecified maximum number of iterations (here \(m_{{\text {max}}}=1000\)); in the second implementation based on the R-package xgboost [38], the algorithm is stopped before \(m_{{\text {max}}}=1000\) iterations in case the 10-fold CV-error does not improve for a certain number of succeeding iterations (here earlier stopping after 10 iterations without improvements). In both implementations of \(L_2\)Boosting we set the learning rate to \(\tau =0.1\) and consider component-wise linear base-learners (corresponding to a coordinate descent algorithm, by using the options booster="gblinear", updater="coord_descent" and top_k=1 in xgboost [38]). The R-package bst [42] is used for twin boosting, where the optimal stopping iteration is determined via 10-fold CV, the learning rate is set to \(\tau =0.1\) and the option twintype=1 is specified (i.e. weights in the second round of boosting are based on the magnitude of estimated coefficients from the first round). The R-package stabs [43] is used for stability selection in combination with classical \(L_2\)Boosting, where \(q_{\text {stab}} = 10\) variables are selected for each subsample and the expected number of selected false positives (i.e. the per-family error rate) is bounded by \(\text {PFER}=2\). Classical least squares estimation is used for the final model from stability selection. For all boosting algorithms, the maximum number of iterations is \(m_{{\text {max}}}=1000\) in the low-dimensional setting, while RSubBoost and AdaSubBoost incorporate automated stopping after \(\frac{p}{2}=10\) succeeding iterations without any updates. The R-package glmnet [44] is used for the lasso and the relaxed lasso, while the additional R-package glmnetUtils [45] is used for tuning the additional parameter \(\alpha\) in the elastic net. Final lasso, relaxed lasso and elastic net estimates are based on minimizing the 10-fold CV-error. For comparability reasons, we use serial implementations of all algorithms, without potential parallelization of resampling methods (reported computation times are based on a 2.7GHz processor).
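For the XGBoost-based benchmark, a hedged sketch of the early-stopping setup is given below; the three option names are taken from the text, while the remaining arguments (e.g. feature_selector = "greedy", lambda = alpha = 0, the objective string) are assumptions needed for a plain component-wise linear least-squares fit and may differ from the authors' configuration:

library(xgboost)
dtrain <- xgb.DMatrix(data = X, label = y)
params <- list(booster = "gblinear", updater = "coord_descent",
               feature_selector = "greedy", top_k = 1,   # assumed: update one coordinate per step
               eta = 0.1, lambda = 0, alpha = 0,         # learning rate tau = 0.1, no extra penalties
               objective = "reg:squarederror")
cv_xgb <- xgb.cv(params = params, data = dtrain, nrounds = 1000, nfold = 10,
                 early_stopping_rounds = 10, verbose = 0)
cv_xgb$best_iteration                                    # stopping iteration (field name may vary by version)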
Results for low-dimensional simulation setting. Boxplots of false positives, false negatives, estimation error and prediction error on test set (of size 1000), for 500 simulation replicates with \(n=100\), \(p=20\), \(S_{{{\text {true}}}}=\{1,2,3,4\}\) and Toeplitz correlation with \(\rho =0.8\)
Figure 4 shows that the three subspace methods SubBoost, RSubBoost and AdaSubBoost systematically reduce the number of false positives in comparison to classical \(L_2\)Boosting, while the number of false negatives is unaffected (see Additional file 1: Section 2.2 for detailed numerical results). The beneficial variable selection properties lead to small reductions in mean squared errors (MSEs) for estimating the coefficient vectors \(\varvec{\beta }\in \mathbb {R}^p\) and in root mean squared errors (RMSEs) of prediction on independent test data. The three subspace boosting algorithms perform very similarly in this low-dimensional setting, with AdaSubBoost showing a slightly improved estimation and prediction performance. Earlier stopping of \(L_2\)Boosting via XGBoost leads to a reduction of false positives, but also to a worse predictive performance in this setting. The competing two-stage twin boosting algorithm also reduces the number of false positives in comparison to the single-stage \(L_2\)Boosting algorithm; however, the number of false negatives tends to be slightly larger compared to \(L_2\)Boosting and the subspace boosting algorithms. Stability selection yields very small numbers of false positives, while paying a price in terms of increased numbers of false negatives. Although the average estimation and prediction performance of the sparse models selected by twin boosting and stability selection does not seem to be strongly affected in this low-dimensional setting with only four informative variables, an increased variability over the different simulation replicates is apparent in comparison to the other boosting methods. The lasso and the elastic net perform similarly to \(L_2\)Boosting (cf. [15]), including larger numbers of noise variables compared to the subspace boosting algorithms. The relaxed lasso tends to yield smaller numbers of false positives than the lasso, but at the cost of increased numbers of false negatives.
Sparse high-dimensional settings
Next, we extend the high-dimensional illustrative example from above (see Figures 2 and 3): for 500 simulation replicates, we consider \(n=100\) samples, \(p=1000\) multivariate normally distributed covariates using a Toeplitz correlation structure with \(\rho =0.8\) and true coefficients \(\beta _j\sim U(-2,2)\) for \(j\in S_{{\text {true}}}\). Here, we examine two sparse high-dimensional settings which differ only in the true underlying models \(S_{{\text {true}}}\): in setting (a), the true model \(S_{{\text {true}}}=\{1,\ldots ,10\}\) is fixed, while in setting (b) the true model \(S_{{\text {true}}}\subset \{1,\ldots ,p\}\) is randomly chosen with \(|S_{{\text {true}}}|=10\) for each simulation replicate. While setting (a) in conjunction with the Toeplitz correlation structure implies that high correlations predominantly occur among signal variables (\(X_1,\ldots ,X_{10}\)), setting (b) induces high correlations mostly between signal and noise variables, as the 10 signal variables are randomly distributed among the \(p=1000\) covariates.
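The only difference between the two settings is how the support of \(\varvec{\beta }\) is generated; in R this amounts to something like the following illustrative sketch:

S_true_a <- 1:10                              # setting (a): fixed, adjacent signal variables
S_true_b <- sort(sample(seq_len(p), 10))      # setting (b): randomly scattered signal variables
beta <- numeric(p)
beta[S_true_b] <- runif(10, min = -2, max = 2)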
In the sparse high-dimensional settings, the \({\text {EBIC}}_1\) is considered in the model selection procedure \(\Phi\) and is also used for the initialization of the maximum update sizes \(s\le s_{{\text {max}}}=15\) in RSubBoost and AdaSubBoost (see Additional file 1: Figure S4 for additional information on the selected "baseline" models \(S^{[0]}\)), considering the expected search size \(q=20\) and the adaptation parameter \(K=\frac{p}{q}\). We refer to Additional file 1: Section 2.5 for sensitivity analyses regarding the choice of the selection procedure \(\Phi\) and further tuning parameters \(s_{{\text {max}}}\), q and K. The maximum number of iterations is set to \(m_{{\text {max}}}=5000\), while RSubBoost and AdaSubBoost are automatically stopped after \(\frac{p}{2}=500\) succeeding iterations without any updates. The remaining parameters for the algorithms are specified as in the low-dimensional setting, except for stability selection where \(q_{\text {stab}} = 15\) variables (instead of \(q_{\text {stab}} = 10\)) are selected for each subsample.
Results for sparse high-dimensional simulation setting (a). Boxplots of false positives, false negatives, estimation error and prediction error on independent test set (of size 1000), for 500 simulation replicates with \(n=100\), \(p=1000\), \(S_{{\text {true}}}=\{1,\ldots ,10\}\) and Toeplitz correlation with \(\rho =0.8\)
Figure 5 shows that RSubBoost and AdaSubBoost largely reduce the number of false positives in comparison to classical \(L_2\)Boosting in high-dimensional setting (a). Remarkably, at the same time, the subspace algorithms also tend to yield smaller numbers of false negatives. Figure 5 further indicates an excellent estimation and prediction performance of the subspace boosting algorithms, with slight advantages for AdaSubBoost. These results confirm the observations in the high-dimensional illustrative example discussed above (see Figures 2 and 3): the joint updates of effect estimates in the subspace algorithms are particularly beneficial in cases of high correlations among signal variables; furthermore, in such cases the adaptive selection of base-learners in AdaSubBoost can lead to a higher predictive power. Due to the earlier stopping, XGBoost yields fewer false positives and more shrinkage of effect estimates than classical \(L_2\)Boosting, resulting in slightly more favorable predictions but a worse estimation performance. Earlier stopping via XGBoost also leads to a considerable reduction of computation times in this sparse setting (see Additional file 1: Table S2 and Figure S3). For twin boosting and even more for stability selection, the reduction in the number of false positives leads to a loss of statistical power for detecting signal variables, so that no systematic improvements in predictive performance over classical \(L_2\)Boosting are observed. The lasso and the elastic net again perform similarly to \(L_2\)Boosting, yielding relatively large numbers of false positives. The relaxed lasso shows an improved variable selection and prediction performance compared to the classical lasso, but is outperformed by the subspace boosting algorithms in this sparse and highly-correlated setting.
Results for the additional sparse high-dimensional setting (b) with high correlations predominantly between signal and noise variables show that the subspace boosting algorithms again substantially reduce the number of false positives compared to \(L_2\)Boosting, while providing a competitive predictive performance; however, in contrast to setting (a) with high correlations among signal variables, this comes at the cost of an increase in false negatives. Detailed results for simulation setting (b) can be found in Additional file 1: Section 2.1, while details on computation times for the different simulation settings are provided in Additional file 1: Sections 2.2 and 2.3.
Non-sparse high-dimensional setting
Finally, we consider a non-sparse setting, where the true model \(S_{{\text {true}}}=\{1,\ldots ,100\}\) is fixed and consists of 100 signal variables (out of \(p=1000\) candidate variables), while the sample size is \(n=1000\). In the non-sparse setting we additionally consider the AIC as an alternative selection procedure \(\Phi\), inducing less sparsity than the \({\text {EBIC}}_1\). The maximum number of iterations is set to \(m_{{\text {max}}}=10,000\) in the different boosting algorithms, while we set \(q_{\text {stab}} = 150\) in stability selection. The remaining parameters for the algorithms and further simulation specifications are the same as in the sparse high-dimensional settings.
Results for the non-sparse high-dimensional simulation setting. Boxplots of false positives, false negatives, estimation error and prediction error on independent test set (of size 1000), for 500 simulation replicates with \(n=1000\), \(p=1000\), \(S_{{\text {true}}}=\{1,\ldots ,100\}\) with \(|S_{{\text {true}}}|=100\) and Toeplitz correlation with \(\rho =0.8\)
Figure 6 shows that AdaSubBoost in combination with the \({\text {EBIC}}_1\) yields very small numbers of false positives but large numbers of false negatives, leading to a poor predictive performance in this non-sparse setting. When the AIC is used instead of the \({\text {EBIC}}_1\) for the double-checking in AdaSubBoost, the number of false negatives is reduced, leading to a reasonable predictive performance; however, this comes at the cost of an increase in the number of false positives. Particularly in this non-sparse setting with many informative variables, the adaptive stochastic search in AdaSubBoost is beneficial compared to RSubBoost, yielding fewer false positives and improved predictions. \(L_2\)Boosting yields very large models with many false positives, but a competitive predictive performance. Earlier stopping via XGBoost results in a reduction of false positives, but also in larger numbers of false negatives and a worse prediction performance. In such non-sparse settings, the earlier stopping approach is also not beneficial in terms of computation times (see Additional file 1: Table S4 and Figure S3). Stability selection yields sparse models with almost no false positives but many false negatives, resulting in a low prediction accuracy. Twin boosting also selects small numbers of false positives, but shows a very good predictive performance in this non-sparse setting, even though several signal variables are not selected. The regularization methods lasso, elastic net and relaxed lasso show a similar variable selection performance with many false positives, while the relaxed lasso yields the best predictive performance in this situation, which is in line with a recent comparative simulation study by Hastie et al. [3]. In summary, this non-sparse setting further illustrates the inherent trade-off between variable selection and predictive performance.
Applications on biomedical data
In order to evaluate the performance of the proposed subspace boosting algorithms in non-artificial data situations, we examine two low-dimensional and two high-dimensional biomedical datasets, which are publicly available and have previously been investigated using different variable selection methods. In particular, as the first low-dimensional dataset, we consider bodyfat data [46], consisting of body fat measurements for \(n=71\) healthy females as the response variable of interest and \(p=9\) covariates including age and several anthropometric measurements. As the second low-dimensional example, we consider diabetes data [12], where the response is a quantitative measure of disease progression one year after baseline, with \(p=10\) baseline covariates measured for \(n=442\) diabetes patients. The bodyfat data has already been analyzed using component-wise \(L_2\)Boosting [5, 7], while the diabetes data has originally been examined using Least Angle Regression (LARS) with discussions also related to boosting and the lasso [12]. As the first high-dimensional dataset, we consider riboflavin data [47], where the response consists of \(n=71\) observations of log-transformed riboflavin production rates and the covariates are given by logarithmic gene expression levels for \(p=4088\) genes. As the second high-dimensional example, we consider polymerase chain reaction (PCR) data [48], where the response is given by a particular physiological phenotype for \(n=60\) mice and the full set of covariates comprises \(p=22{,}575\) gene expression levels. The riboflavin data has been previously analyzed using stability selection [49], while the PCR data has, among others, been investigated using a Bayesian split-and-merge approach [50] and the Adaptive Subspace (AdaSub) method [29]. Histograms of correlations between the covariates for the four datasets are shown in Figure 7.
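Three of the four datasets can be loaded directly from the R packages named in the data-availability statement below (sketch; the PCR data has to be downloaded separately):

data("bodyfat",    package = "TH.data")   # body fat data: n = 71, 9 covariates
data("diabetes",   package = "lars")      # diabetes data: n = 442, 10 baseline covariates
data("riboflavin", package = "hdi")       # riboflavin data: n = 71, p = 4088 gene expressions
# PCR data (p = 22,575): see the availability note at the end of the article.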
Correlation structure of biomedical datasets. Histograms of pairwise Pearson correlations between the covariates for the two low-dimensional (upper row) and the two high-dimensional datasets (lower row)
Here, we evaluate the different algorithms based on external leave-one-out cross-validation (LOOCV), i.e. for each \(i\in \{1,\ldots ,n\}\) we consider \(n-1\) samples as training data \(\{1,\ldots ,n\}{\setminus }\{i\}\) and the single sample \(\{i\}\) as test data. The variable selection algorithms are applied independently on each of the n training subsamples, yielding potentially different models with varying numbers of selected variables. The performance of the algorithms is assessed based on the number of selected variables and the absolute prediction errors on the independent test samples. For the low-dimensional datasets we consider the three subspace boosting algorithms SubBoost, RSubBoost and AdaSubBoost in combination with the classical BIC, while for the high-dimensional datasets we consider the two randomized algorithms RSubBoost and AdaSubBoost in combination with the \({\text {EBIC}}_1\). The maximum number of iterations in the subspace algorithms is set to \(m_{{\text {max}}}=1000\) for the two low-dimensional datasets, while we use \(m_{{\text {max}}}=10{,}000\) for the two high-dimensional datasets. Similarly to the simulation study, the parameters in the subspace boosting algorithms are set to \(q=\min \{20,p/2\}\) and \(K=\frac{p}{q}\) for all four datasets, while we specify \(s_{{\text {max}}}=4\) for the low-dimensional and \(s_{{\text {max}}}=15\) for the high-dimensional datasets. For the PCR data, instead of forward regression, we apply sure independence screening [40] as a computationally more efficient initial screening step in RSubBoost and AdaSubBoost, which is based on ranking the marginal correlations between the individual covariates and the response. For stability selection, the number of variables selected for the subsamples is set to \(q_{\text {stab}}=\min \{15,\lfloor p/2 \rfloor \}\), with \(\text {PFER}=2\) as the bound on the expected false positives. All remaining parameters of the competing algorithms are specified as in the simulation study.
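Schematically, the external LOOCV evaluation proceeds as in the following R sketch; fit_and_predict() is a hypothetical placeholder for any of the compared selection methods (it is not a function from the paper or from any of the cited packages):

loocv_abs_err <- sapply(seq_len(n), function(i) {
  pred_i <- fit_and_predict(X[-i, , drop = FALSE], y[-i],   # train on the n - 1 remaining samples
                            newx = X[i, , drop = FALSE])    # predict the held-out sample
  abs(y[i] - pred_i)                                        # absolute prediction error
})
mean(loocv_abs_err)                                         # summary over the n folds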
For all considered datasets, the computational costs for the proposed subspace algorithms are comparable to classical \(L_2\)Boosting using the R-package mboost [7] (mean computation times for AdaSubBoost between 1.5 s for bodyfat data and 190 s for PCR data; for \(L_2\)Boosting between 0.6 s and 114 s). The earlier stopping approach via XGBoost yields reduced computation times particularly in sparse high-dimensional settings (mean of 3 s for PCR data). On the other hand, twin boosting and stability selection tend to be more costly than RSubBoost and AdaSubBoost (means for twin boosting between 11 s and 405 s; for stability selection between 10 s and 312 s). Regularization methods including the (relaxed) lasso and the elastic net are very efficient using the R-package glmnet [44] (means for lasso between 0.1 s and 1.9 s; for elastic net between 0.7 s and 24 s). We refer to Additional file 1: Section 3 for detailed results on computation times.
Results for different biomedical applications. Boxplots of numbers of selected variables and absolute prediction errors on out-of-sample data using external leave-one-out cross-validation (LOOCV). Empirical means are depicted by black crosses
Figure 8 shows the results of the different algorithms for external LOOCV applied to the four biomedical datasets (see Additional file 1: Table S5 for detailed numerical results). For the low-dimensional bodyfat data, the three subspace boosting algorithms and classical \(L_2\)Boosting perform similarly, with the subspace algorithms yielding slightly sparser models (all with a median of six selected variables) in comparison to \(L_2\)Boosting (median of seven variables). SubBoost and RSubBoost perform almost identically for the bodyfat data, while AdaSubBoost tends to select slightly fewer variables with a competitive predictive performance. The earlier stopping approach via XGBoost, twin boosting and stability selection produce very sparse models in this application with median model sizes of two variables, but lead to a lower prediction accuracy, particularly for twin boosting. The regularization methods lasso, elastic net and relaxed lasso perform quite similarly for this dataset, with the elastic net yielding slightly larger models and the relaxed lasso slightly sparser models. For the low-dimensional diabetes data with a larger sample size (\(n=442\)), the results of SubBoost and RSubBoost are almost equivalent (both with a median of nine selected variables), while AdaSubBoost again yields slightly sparser models (median of eight variables). The predictive performance of the three subspace boosting algorithms is comparable to \(L_2\)Boosting and to the earlier stopping approach via XGBoost with median model sizes of eight variables. Twin boosting and stability selection reduce the number of selected variables but lead to lower prediction accuracy. It is notable that, in contrast to stability selection and the subspace algorithms, twin boosting yields a larger variability regarding the number of selected variables as well as the lowest prediction accuracy for the two low-dimensional datasets. For the diabetes data, the lasso and the elastic net again perform similarly to \(L_2\)Boosting. In this case, the relaxed lasso yields slightly sparser models than AdaSubBoost with a competitive predictive performance.
Regarding the two high-dimensional riboflavin and PCR datasets, Figure 8 shows that \(L_2\)Boosting results in relatively large models, with median model sizes of 39 variables for the riboflavin data and 44 variables for the PCR data. For the riboflavin data, RSubBoost yields quite similar model sizes to \(L_2\)Boosting (median 40 selected variables) with a comparable predictive performance, while AdaSubBoost results in considerably sparser models (median 23 variables). Earlier stopping via XGBoost yields sparser models (median five variables) with a poor predictive performance. Similarly, twin boosting and stability selection yield median model sizes of only four variables, but at the cost of a significant increase in prediction errors. On the other hand, the prediction performance of the relatively sparse AdaSubBoost models is only slightly worse in comparison to \(L_2\)Boosting. For the riboflavin data, the lasso again performs similarly to \(L_2\)Boosting, while the elastic net results in very unstable variable selection with large numbers of selected variables; the relaxed lasso tends to select more variables (median 31 variables) than AdaSubBoost without beneficial effects on the predictive performance. For the PCR data, \(L_2\)Boosting, XGBoost, twin boosting, the lasso, the elastic net and the relaxed lasso tend to yield larger models (median model sizes ranging from 8 variables for the relaxed lasso to 186 variables for the elastic net), resulting in a poor predictive performance due to overfitting for this high-dimensional dataset with \(p=22{,}575\) variables and only \(n=60\) samples. In contrast, RSubBoost and AdaSubBoost produce very sparse models for the PCR data with median model sizes of one variable, while stability selection almost exclusively yields the intercept model. For the PCR data, the subspace boosting algorithms show the best predictive performance.
We have proposed three consecutive extensions of classical statistical boosting [5]. Results from the simulation study and the biomedical applications indicate that the proposed subspace boosting algorithms tend to yield sparser models with a competitive predictive performance compared to classical component-wise \(L_2\)Boosting. Even though competing approaches like stability selection [20] and twin boosting [31] also produce sparser models, these methods often result in a loss of predictive power, as several signal variables may not be detected. In this context, one should note that the main target of stability selection is the control of the expected number of false positives, while the objective of the subspace boosting algorithms is good predictive performance with final models as sparse as possible. Our results further show that the new algorithms can yield a favorable predictive performance compared to regularization methods like the (relaxed) lasso in sparse high-dimensional situations (e.g. for sparse high-dimensional simulation settings (a) and (b) as well as for the PCR data), while the predictive performance may be affected in less sparse situations (e.g. for the non-sparse simulation setting).
The adaptive stochastic search in AdaSubBoost is particularly beneficial compared to RSubBoost in settings with high correlations among signal variables as well as non-sparse situations. Nevertheless, the performance of RSubBoost and AdaSubBoost is often similar, as the selection of the base-learners in RSubBoost is already "adaptive" in the sense that predictor variables which yielded the best fit to the residuals in a particular iteration are reconsidered in the set of base-learners for the subsequent iteration. While the adaptation scheme in AdaSubBoost (Algorithm 3) is inspired by the AdaSub method [29], there are important differences between these approaches regarding their main objectives. AdaSub aims to identify the single best model according to an \(\ell _0\)-type selection criterion (such as the EBIC) and thus primarily focuses on variable selection in sparse high-dimensional settings. On the other hand, AdaSubBoost aims at achieving a competitive predictive performance by using an adaptive ensemble of multiple models, yielding a particular form of model averaging based on \(\ell _0\)-type criteria. In particular, due to the adaptive model building concept of boosting, the AdaSubBoost algorithm can also be efficiently applied in high-dimensional settings without underlying sparsity (see non-sparse simulation setting), although in such situations the predictive ability of AdaSubBoost may be reduced in comparison to classical \(L_2\)Boosting.
Our results indicate that the multivariable updates in the subspace boosting algorithms are advantageous in situations with high correlations among predictor variables, which is also in line with previous studies [32]. Indeed, the new subspace boosting algorithms also have parallels to the block-wise boosting (BlockBoost) algorithm proposed by Tutz and Ulbricht (2009, [32]): in each iteration of BlockBoost, multivariable base-learners can be selected by first ordering the covariates according to their current marginal contributions and then conducting a forward search using an adjusted AIC with an additional correlation-based penalty. Although forward regression or sure independence screening can be used in the initialization step of the subspace boosting algorithms, in contrast to BlockBoost our extensions of \(L_2\)Boosting do not rely on greedy forward searches, but instead yield exact solutions to the problem of computing the best base-learner within the considered subspace in each iteration. Furthermore, while classical \(L_2\)Boosting, BlockBoost and SubBoost are deterministic algorithms, the randomized extensions RSubBoost and AdaSubBoost rely on stochastic searches in the space of possible base-learners, enabling the efficient application of the algorithms on very high-dimensional data.
Since RSubBoost and AdaSubBoost constitute stochastic algorithms, one may obtain slightly different results when they are run multiple times on the same dataset. Nevertheless, our results for external leave-one-out cross-validation on the four biomedical datasets show that numbers of selected variables remain relatively stable in comparison to \(L_2\)Boosting and twin boosting. Furthermore, in practice, using cross-validation for tuning the optimal stopping iteration in classical \(L_2\)Boosting and twin boosting as well as using subsampling for stability selection also lead to a certain stochasticity in the final models. An important benefit of the double-checking steps in the subspace algorithms is that it leads to automatic stopping, so that no additional tuning of the stopping iteration via resampling methods is needed. Instead, the choice of the selection criterion for the double-checking steps controls the sparsity of the final subspace boosting models. Here we have focused on the BIC for low-dimensional cases and the \({\text {EBIC}}_1\) for high-dimensional cases; however, other selection criteria such as the AIC can also be used in the proposed algorithmic framework as illustrated in the non-sparse simulation setting.
The proposed subspace boosting algorithms are also related to the probing approach for boosting [51]. In probing, the originally observed dataset is first augmented with randomly permuted copies of the covariates (so-called "shadow variables") and then boosting is automatically stopped as soon as the first "shadow variable" is selected. Thus, while classical statistical boosting is tuned to yield the best predictive performance, the tuning of the stopping iteration in probing and the subspace boosting algorithms takes the variable selection into account, without requiring multiple runs of the algorithms. The resulting savings in computational resources are somewhat counterbalanced by the wider augmented data in probing (with twice as many covariates) and by the additional computational time for the double-checking steps in the subspace boosting algorithms. While probing basically alters only the stopping scheme of boosting, important features of the subspace boosting algorithms include the multivariable updates, the randomized selection of base-learners as well as the double-checking steps via likelihood-based information criteria considering only the observed covariates.
Limitations of this work include that we have only considered \(L_2\)Boosting with linear base-learners. Further research is warranted on extending our subspace boosting algorithms towards generalized linear models (i.e. other loss functions than the \(L_2\)-loss) as well as non-linear effect estimates (i.e. other types of base-learners such as regression trees, as efficiently implemented in the R-package xgboost [38]). Furthermore, similarly to other data-driven variable selection approaches, the proposed algorithms are primarily designed for relatively sparse settings, where variable selection is beneficial. In case the underlying data generating process is not sparse, the randomized algorithms are still applicable but may result in a reduced predictive performance due to the tendency to favor sparse and interpretable models. While this work focused on high-dimensional settings (i.e. wide data with many variables p and small to moderate sample sizes n), future work should be targeted at the extension and practical application of the proposed boosting methods to large-scale data (i.e. big data with large p and large n), such as the development of polygenic risk scores based on millions of single nucleotide polymorphisms (SNPs) and hundred thousands of samples [52]. Another general limitation of the statistical boosting framework is that the computation of standard errors and confidence intervals for effect estimates is not straightforward. Future research may investigate the application of permutation tests [53] and other recent advances in post-selection inference [54] for the new extensions of \(L_2\)Boosting.
The three proposed subspace boosting algorithms with multivariable base-learners are promising extensions of statistical boosting, particularly suited for data situations with highly-correlated predictor variables. By using (adaptive) stochastic searches in the space of possible base-learners, the randomized versions can be efficiently applied on high-dimensional data. The incorporated double-checking via information criteria induces automatic stopping of the algorithms, promoting sparser and more interpretable prediction models. The proposed algorithms shift the focus from finding the "optimal" ensemble solution regarding prediction accuracy towards finding a competitive prediction model which is as sparse as possible.
All biomedical datasets are publicly available: the bodyfat data [46] can be loaded via the R-package TH.data, the diabetes data [12] via the R-package lars, the riboflavin data [47] via the R-package hdi and the PCR data [48] can be downloaded from JRSSB Datasets Vol. 77(5), Song and Liang (2015, [50]) at the website https://rss.onlinelibrary.wiley.com/hub/journal/14679868/series-b-datasets/pre_2016a. An R implementation of the proposed algorithms and source code for reproducing all results is available at GitHub (https://github.com/chstaerk/SubBoost).
Abbreviations
AdaSub: Adaptive subspace method
AdaSubBoost: Adaptive subspace boosting
AIC: Akaike information criterion
BIC: Bayesian information criterion
BlockBoost: Block-wise boosting
CV: Cross-validation
EBIC: Extended Bayesian information criterion
ENet: Elastic net
GAM: Generalized additive model
GAMLSS: Generalized additive models for location, scale, and shape
GIC: Generalized information criterion
LARS: Least angle regression
L2Boost: Component-wise \(L_2\)Boosting with mboost
LOOCV: Leave-one-out cross-validation
MSE: Mean squared error
PFER: Per-family error rate
ReLasso: Relaxed lasso
RMSE: Root mean squared error
RSubBoost: Random subspace boosting
SNP: Single nucleotide polymorphism
StabSel: Stability selection
SubBoost: Subspace boosting
TwinBoost: Twin boosting
XGBoost: Extreme gradient boosting (here with component-wise linear base-learners)
Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Ser B (Stat Methodol). 1996;58(1):267–88.
Meinshausen N. Relaxed lasso. Comput Stat Data Anal. 2007;52(1):374–93.
Hastie T, Tibshirani R, Tibshirani R. Best subset, forward stepwise or lasso? Analysis and recommendations based on extensive comparisons. Stat Sci. 2020;35(4):579–92.
Zou H, Hastie T. Regularization and variable selection via the elastic net. J R Stat Soc Ser B (Stat Methodol). 2005;67(2):301–20.
Bühlmann P, Hothorn T. Boosting algorithms: regularization, prediction and model fitting. Stat Sci. 2007;22(4):477–505.
Mayr A, Binder H, Gefeller O, Schmid M. The evolution of boosting algorithms. Methods Inf Med. 2014;53(06):419–27.
Hofner B, Mayr A, Robinzonov N, Schmid M. Model-based boosting in R: a hands-on tutorial using the R package mboost. Comp Stat. 2014;29(1–2):3–35.
Friedman JH. Greedy function approximation: a gradient boosting machine. Ann Stat. 2001;29(5):1189–232.
Tutz G, Binder H. Generalized additive modeling with implicit variable selection by likelihood-based boosting. Biometrics. 2006;62(4):961–71.
Bühlmann P, Yu B. Boosting with the L2 loss: regression and classification. J Am Stat Assoc. 2003;98(462):324–39.
Bühlmann P. Boosting for high-dimensional linear models. Ann Stat. 2006;34(2):559–83.
Efron B, Hastie T, Johnstone I, Tibshirani R. Least angle regression. Ann Stat. 2004;32(2):407–99.
Hastie T, Taylor J, Tibshirani R, Walther G. Forward stagewise regression and the monotone lasso. Electron J Stat. 2007;1:1–29.
Freund RM, Grigas P, Mazumder R. A new perspective on boosting in linear regression via subgradient optimization and relatives. Ann Stat. 2017;45(6):2328–64.
Hepp T, Schmid M, Gefeller O, Waldmann E, Mayr A. Approaches to regularized regression—a comparison between gradient boosting and the lasso. Methods Inf Med. 2016;55(05):422–30.
Wainwright MJ. High-dimensional statistics: a non-asymptotic viewpoint. Cambridge: Cambridge University Press; 2019.
Mayr A, Fenske N, Hofner B, Kneib T, Schmid M. Generalized additive models for location, scale and shape for high dimensional data—a flexible approach based on boosting. J R Stat Soc Ser C (Appl Stat). 2012;61(3):403–27.
Meinshausen N, Bühlmann P. Stability selection. J R Stat Soc Ser B (Stat Methodol). 2010;72(4):417–73.
Shah RD, Samworth RJ. Variable selection with error control: another look at stability selection. J R Stat Soc Ser B (Stat Methodol). 2013;75(1):55–80.
Hofner B, Boccuto L, Göker M. Controlling false discoveries in high-dimensional situations: boosting with stability selection. BMC Bioinform. 2015;16(1):144.
Mayr A, Hofner B, Schmid M. Boosting the discriminatory power of sparse survival models via optimization of the concordance index and stability selection. BMC Bioinform. 2016;17(1):1–12.
Hothorn T. Discussion: stability selection. J R Stat Soc Ser B (Stat Methodol). 2010;72:463–4.
Su W, Bogdan M, Candes E. False discoveries occur early on the lasso path. Ann Stat. 2017;45(5):2133–50.
Akaike H. A new look at the statistical model identification. IEEE Trans Autom Control. 1974;19(6):716–23.
Schwarz G. Estimating the dimension of a model. Ann Stat. 1978;6(2):461–4.
Luo S, Chen Z. Extended BIC for linear regression models with diverging number of relevant features and high or ultra-high feature spaces. J Stat Plan Inference. 2013;143(3):494–504.
Huo X, Ni X. When do stepwise algorithms meet subset selection criteria? Ann Stat. 2007;35(2):870–87.
Hans C, Dobra A, West M. Shotgun stochastic search for "large p" regression. J Am Stat Assoc. 2007;102(478):507–16.
Staerk C, Kateri M, Ntzoufras I. High-dimensional variable selection via low-dimensional adaptive learning. Electron J Stat. 2021;15(1):830–79.
Bertsimas D, King A, Mazumder R. Best subset selection via a modern optimization lens. Ann Stat. 2016;44(2):813–52.
Bühlmann P, Hothorn T. Twin boosting: improved feature selection and prediction. Stat Comput. 2010;20(2):119–38.
Tutz G, Ulbricht J. Penalized regression with correlation-based penalty. Stat Comput. 2009;19(3):239–53.
Chen J, Chen Z. Extended Bayesian information criteria for model selection with large model spaces. Biometrika. 2008;95(3):759–71.
Lu H, Mazumder R. Randomized gradient boosting machine. SIAM J Optim. 2020;30(4):2780–808.
Staerk C. Adaptive subspace methods for high-dimensional variable selection. Ph.D. thesis. RWTH Aachen University; 2018. https://doi.org/10.18154/RWTH-2018-226562.
Wang H. Forward regression for ultra-high dimensional variable screening. J Am Stat Assoc. 2009;104(488):1512–24.
Tibshirani RJ. A general framework for fast stagewise algorithms. J Mach Learn Res. 2015;16(1):2543–88.
Chen T, He T, Benesty M, Khotilovich V, Tang Y, Cho H, et al. XGBoost: extreme gradient boosting; 2021. R package version 1.4.1.1. https://CRAN.R-project.org/package=xgboost.
Clyde M, George EI. Model uncertainty. Stat Sci. 2004;19(1):81–94.
Fan J, Lv J. Sure independence screening for ultrahigh dimensional feature space. J R Stat Soc Ser B (Stat Methodol). 2008;70(5):849–911.
Lumley T, Miller A. leaps: regression Subset Selection; 2017. R package version 3.0. https://CRAN.R-project.org/package=leaps.
Wang Z. bst: gradient boosting; 2019. R package version 0.3-17. https://CRAN.R-project.org/package=bst.
Hofner B, Hothorn T. stabs: stability selection with error control; 2017. R package version 0.6-3. https://CRAN.R-project.org/package=stabs.
Friedman J, Hastie T, Tibshirani R. Regularization paths for generalized linear models via coordinate descent. J Stat Softw. 2010;33(1):1–22.
Ooi H. glmnetUtils: utilities for 'Glmnet'; 2021. R package version 1.1.8. https://CRAN.R-project.org/package=glmnetUtils.
Garcia AL, Wagner K, Hothorn T, Koebnick C, Zunft HJF, Trippo U. Improved prediction of body fat by measuring skinfold thickness, circumferences, and bone breadths. Obes Res. 2005;13(3):626–34.
Lee JM, Zhang S, Saha S, Santa Anna S, Jiang C, Perkins J. RNA expression analysis using an antisense Bacillus subtilis genome array. J Bacteriol. 2001;183(24):7371–80.
Lan H, Chen M, Flowers JB, Yandell BS, Stapleton DS, Mata CM, et al. Combined expression trait correlations and expression quantitative trait locus mapping. PLoS Genet. 2006;2(1):e6.
Bühlmann P, Kalisch M, Meier L. High-dimensional statistics with a view toward applications in biology. Annu Rev Stat Appl. 2014;1(1):255–78.
Song Q, Liang F. A split-and-merge Bayesian variable selection approach for ultrahigh dimensional regression. J R Stat Soc Ser B (Stat Methodol). 2015;77(5):947–72.
Thomas J, Hepp T, Mayr A, Bischl B. Probing for sparse and fast variable selection with model-based boosting. Comput Math Methods Med. 2017;2017:1421409.
Qian J, Tanigawa Y, Du W, Aguirre M, Chang C, Tibshirani R, et al. A fast and scalable framework for large-scale and ultrahigh-dimensional sparse regression with application to the UK Biobank. PLoS Genet. 2020;16(10):e1009141.
Mayr A, Schmid M, Pfahlberg A, Uter W, Gefeller O. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models. Stat Methods Med Res. 2017;26(3):1443–60.
Rügamer D, Greven S. Inference for L2-Boosting. Stat Comput. 2020;30(2):279–89.
We would like to thank the editor and the referees for their valuable comments and suggestions. Special thanks goes to Tobias Wistuba for technical support with illustrations.
Open Access funding enabled and organized by Projekt DEAL.
Department of Medical Biometry, Informatics and Epidemiology, University Hospital Bonn, Venusberg-Campus 1, 53127, Bonn, Germany
Christian Staerk & Andreas Mayr
CS conceived the research idea, implemented the algorithms and wrote the initial version of the manuscript. AM edited and revised the manuscript. CS and AM contributed to the development of the methodology, the analysis and the interpretation of the results. All authors read and approved the final manuscript.
Correspondence to Christian Staerk.
Additional file 1: The Supplement includes results for an illustrative low-dimensional data example as well as additional results for the simulation study and the biomedical data applications.
Staerk, C., Mayr, A. Randomized boosting with multivariable base-learners for high-dimensional variable selection and prediction. BMC Bioinformatics 22, 441 (2021). https://doi.org/10.1186/s12859-021-04340-z
Boosting
High-dimensional data
Information criteria
Sparsity
Variable selection
Machine Learning and Artificial Intelligence in Bioinformatics | CommonCrawl |
Why Does Induction Prove Multiplication is Commutative?
Andrew Boucher's General Arithmetic (GA2) is a weak sub-theory of second order Peano Axioms (PA2). GA has second order induction and a single successor axiom:
$$\forall x \forall y \forall z\bigr((Sx=y \land Sx=z)\to(y=z)\bigl)$$
Boucher proves multiplication is commutative in GA2. Why does induction prove multiplication is commutative? GA2 has many finite models. The rings $\mathbb Z/n\mathbb Z$ are models. If we remove induction from GA2, it is easy to see that GA-Ind is a sub-theory of Ring Theory (RT). RT has finite non-commutative models. Why aren't these finite non-commutative rings models of GA? Would a first order version of GA also prove multiplication is commutative?
I asked on stack exchange and got no answer. https://math.stackexchange.com/questions/287557
Edit: I am not looking for an inductive proof. This is a standard result and I am sure it can be done. I am more interested in something like abo's explanation. Can we prove induction fails in every non-commutative ring? Is it impossible to define a successor chain that visits every ring element using addition in a non-commutative ring?
ac.commutative-algebra peano-arithmetic induction
Russell Easterly
I don't understand the "why" of your question. Consider 2 x 2 matrices. What do you propose to define as the successor relationship (from which you then define addition and multiplication)? What is the successor of 0? I imagine a first-order version cannot prove multiplication is commutative. – abo

Question from a bystander. I don't understand your successor axiom. Is it not a consequence of the general axioms of equality? – Joël

If $S$ is meant to be a function symbol in first-order logic, then I agree with Joël that the displayed axiom follows from the axioms for equality. Following the link to Boucher's text, however, it seems that for GA he wants the axiom he calls PA3, which seems to assert that successor is functional (formalized as a binary relation giving the graph). – Joel David Hamkins

Regarding your edit, Russell, isn't it clear that every countable set (whether finite or infinite) admits a successor function for which induction holds? Just pick a $0$, and then define $S(x)$ so that the $n^{th}$ element of the ring is $S^n(0)$. This will satisfy induction for the same reason that $\mathbb{N}$ satisfies induction. But of course, this $S$ will not interact with the ring operations meaningfully, and in the non-commutative case it cannot agree so as to make multiplication agree with the usual recursion, since then the inductive argument would make that operation commutative. – Joel David Hamkins

Matrix addition and multiplication satisfy all of the axioms of Ring Theory (RT). Non-commutative rings are not models of RT+Ind where Ind is first order induction. Abo gives an example of a phi(x) we can prove using induction that is false in matrix arithmetic. – Russell Easterly
This is an answer to the edited questions Russell has added. Joel David Hamkins' reply in comments to the question is completely correct, but I'll take advantage of the greater space here.
Let (R,0,1,+,*) be a ring. Define
Sx = x + 1 and
B = {x | $\forall P(P0 \land \forall y\forall z(Py \land Sy,z \to Pz) \to Px)$}
i.e. B is the set of all x which are part of an S-chain beginning with 0.
Then S is functional and induction holds over B, i.e.
$\forall P(P0 \land \forall y\forall z(Py \land Sy,z \to Pz) \to \forall x (Bx \to Px))$.
One can define ++ and ** (both definitions being on B) from S using the normal recursive definitions, both of which can be proved commutative. By induction over B, one can show that + and ++ define the same function on B; also for * and **. Hence, if B = R, then * is commutative. So if R is a non-commutative ring, then B is properly contained in R.
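For concreteness, the "normal recursive definitions" referred to here are the usual primitive-recursion equations (written functionally for readability; in GA2 one would state them relationally, since successor may be partial):

$$x \mathbin{+\!+} 0 = x,\qquad x \mathbin{+\!+} Sy = S(x \mathbin{+\!+} y),$$
$$x \mathbin{*\!*} 0 = 0,\qquad x \mathbin{*\!*} Sy = (x \mathbin{*\!*} y) \mathbin{+\!+} x.$$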
"Can we prove induction fails in every non-commutative ring?" No. There are definitions of the successor function (see my other answer) so that induction will hold. OTOH, in a non-commutative ring R where the successor is defined as Sx = x + 1, induction will fail, because the set B (as defined above) cannot equal all of R. To see that induction fails, consider the predicate phi(n) to be Bn. Then clearly phi(0) and the inductive step holds, but obviously not phi(n) for all n in R.
"Is it impossible to define a successor chain that visits every ring element using addition in a non-commutative ring?" I'm not sure what you mean by this question. If successoring is defined by Sx = x + 1, then the successor chain isn't defined, it's implied (by the definition). You won't be able to prove that the successor chain is the entire ring, because again that would imply that multiplication is commutative, contrary to assumption.
$\forall x(Sx=x+S0)$ is a theorem of PA and weaker theories. I don't know if it is a theorem of GA2. I am interested in theories of arithmetic where induction fails. It looks like finite non-commutative rings are models of ring theory + Not(Ind) + $\forall x(Sx=x+1)$.

It's actually not quite a theorem of GA2 because you don't know that the successor of x exists. But it can be proven, if the successor of x exists, then it equals x + S0. Yes, finite non-commutative rings would be models of that theory.
I am not familiar with GA2, but this is how one can prove that multiplication is commutative in PA, and it seems not to use very much.
I assume that multiplication is defined by recursion, so that $x\cdot 0=0$ and $x\cdot(y+1)=x\cdot y+x$. Let me also assume that you already know that addition is associative and commutative.
We prove that multiplication is commutative by proving that every $x$ commutes with every $y$, by induction on $x$. It is not difficult to prove that $0\cdot y=0=y\cdot 0$, and so it is true for $x=0$. Now, suppose that $x$ commutes with all $y$, and consider $x+1$. This commutes with $0$, so assume it commutes with $y$, and observe that
$$\begin{aligned}(x+1)(y+1)&=(x+1)y+(x+1)\\ &=y(x+1)+x+1\\ &=yx+y+x+1\\ &=y+yx+x+1\\ &=y+xy+x+1\\ &=y+x(y+1)+1\\ &=y+(y+1)x+1\\ &=(y+1)x+y+1\\ &=(y+1)(x+1),\end{aligned}$$ as desired. At each step, we either use the definition of multiplication, the induction assumption on $x$ or the induction assumption on $y$, with $x+1$. Altogether, it can be unified as one induction on pairs $(x,y)$ under the lexical order.
Joel David Hamkins
Is there any way to prove this in PA without having proven associativity of addition? I tried doing this with just commutativity of addition and the axioms but unfortunately got stuck at the point in your proof where you invoke associativity of addition. I get that I can just prove this as a lemma but was just curious if it were possible without it. Thank you! – gowrath
This is not an answer to your question but, I hope, an answer to your confusion.
Consider 2 x 2 matrices whose elements are from the set {0,1}, i.e. matrices over the two-element field $\mathbb{Z}/2\mathbb{Z}$. Endowed with the usual (mod-2) matrix addition and multiplication, the set of such matrices forms a non-commutative ring.
Now there are a finite number of elements in this set, 16 in all, so one can define a successor function arbitrarily, by choosing a first element, then a next, and so on, and touching every element in the set. For instance, one can define:
$$\mathrm{S}\begin{bmatrix}0&0\\0&0\end{bmatrix}=\begin{bmatrix}0&1\\0&0\end{bmatrix}$$ $$\mathrm{S}\begin{bmatrix}0&1\\0&0\end{bmatrix}=\begin{bmatrix}0&0\\1&0\end{bmatrix}$$ $$\mathrm{S}\begin{bmatrix}0&0\\1&0\end{bmatrix}=\begin{bmatrix}0&0\\0&1\end{bmatrix}$$ $$\mathrm{S}\begin{bmatrix}0&0\\0&1\end{bmatrix}=\begin{bmatrix}1&0\\0&0\end{bmatrix}$$ $$...$$ $$\mathrm{S}\begin{bmatrix}1&1\\1&1\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$$
Now, if one uses this definition of succession, then the axioms of GA will hold, because (1) as we've said this successoring is a function; and (2) because every element has been included in the successoring chain, induction holds. However, the addition (call it ++) which is induced by this successoring is not normal matrix addition (call this +). For instance
$$\begin{bmatrix}0&1\\0&0\end{bmatrix}+\begin{bmatrix}0 &1\\0&0\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$$
$$\begin{bmatrix}0&1\\0&0\end{bmatrix}++\begin{bmatrix}0&1\\0&0\end{bmatrix}=\begin{bmatrix}0&0\\1&0\end{bmatrix}$$
Similarly the multiplication induced by this successoring is not normal matrix multiplication. You will find that the induced multiplication is in fact commutative, while (of course) matrix multiplication is not.
I think GA2 is strong enough to prove if x has a successor then $Sx = x+S0$. I know ring theory proves this. Once we call some element 0 and some element 1 we have no choice on how successor is defined. I know nothing about non-commutative rings, but I assume they satisfy GA2's very weak successor axiom using matrix addition.

OK, let the zero matrix be 0 and the identity matrix I (1's on the diagonal and 0's elsewhere) be the successor of 0. And define Sx to be x + I. Then S0 = I and SS0 = 0. So the successoring chain starting from 0 only includes two elements, {0,I}, and not the whole ring. You therefore cannot conclude that multiplication is commutative on the whole ring.

This still satisfies the successor axiom. An element doesn't even have to have a successor in GA2.

Yes it does satisfy the successor axiom. But the problem is that with this definition of successor all the GA2 axioms do not hold, because induction does not hold. Why doesn't it? Define e.g. the predicate phi to be (n = 0 v n = S0). Then phi(0), and if phi(n), then phi(Sn), by a very simple argument on cases. But it's not true that every element in the ring is 0 or S0. So induction doesn't hold. Because induction doesn't hold for 2x2 matrices over {0,1} with this definition of successor, you can't use the results about GA2 to infer that matrix multiplication is commutative.
Monday, May 21, 2012
How the (2,0) SCFT, little string theory, and others arise from string theory
We often say that the primary reason why string/M-theory is so essential for modern physics is that it is the only known – and most likely, the only mathematically possible – consistent theory of gravity. Everyone who believes that he or she can do state-of-the-art research of quantum gravity without string theory is an unhinged crank, a barbarian, and a conspiracy theorist of the same kind as those who believe that Elvis Presley lives on the Moon.
But another reason why string/M-theory is indispensable for the 21st century theoretical and particle physics is that many of the "ordinary", important, non-gravitational quantum field theories and some of their non-field-theoretical but still non-gravitational generalizations are tightly embedded as limits in string theory. In this way, a theory whose main strength is to provide us with robust quantum rules governing gravity is important for our knowledge of contexts that avoid gravity, too.
Because of the dense network of relationships within string theory that link ideas, concepts, and equations that used to be considered independent – and I mostly mean dualities but not only dualities – each of the "ordinary" non-gravitational theories may be analyzed from new perspectives. In particular, extreme limits of the old theories in which a quantity is sent to infinity (or zero) could have been very mysterious but many of the mysteries go away as string/M-theory allows us to use new descriptions.
Among the new insights that we're learning from the stringy network of ideas, rules, equations, and maps, we also encounter new quantum field theories – and some other non-gravitational generalizations of these theories which are not quantum field theories – i.e. theories that are not full-fledged string vacua and that we shouldn't have overlooked in the past but we have. What are they?
In March, I discussed the maximally supersymmetric gauge theory in four dimensions. It's arguably the most far-reaching or at least the most widely studied example of the point I made in the second paragraph.
The \(\NNN=4\) gauge theory in \(d=4\) is a gauge theory with 16 real supercharges. If you write it in terms of components, it's a gauge theory with a gauge group – it can be \(SU(N)\), \(O(N)\), \(USp(2k)\), \(E_6\), or any other compact Lie group – which is coupled to four Weyl neutrinos in the adjoint representation of the same group and six Hermitian scalars in the same representation. When the interactions are appropriately chosen, we discover that the theory has those 16 supersymmetries even at the interacting level.
Nima Arkani-Hamed would call this theory a harmonic oscillator of the 21st century. Andy Strominger reserves this term for black holes but it's true that these two theoretical constructs are perhaps even more important if they work as a team and they often do.
String theory tells us lots of things about the seemingly ordinary gauge theory which wasn't known to have any direct connection to strings. In fact, we have known for almost 15 years that this gauge theory is string theory. The \(SU(N)\) maximally supersymmetric gauge theory is totally equivalent to the superselection sector of type IIB string theory respecting the asymptotic conditions of \(AdS_5\times S^5\). This relationship is, of course, the most famous example of Juan Maldacena's AdS/CFT correspondence.
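To make the dictionary slightly more quantitative, let me quote the standard relations between the two sides (the precise numerical factors depend on conventions, so treat the coefficients below as the commonly quoted ones rather than as something derived in this text):

$$ g_{\rm YM}^2 = 4\pi g_s, \qquad \lambda \equiv g_{\rm YM}^2 N = \frac{R^4}{\alpha'^2}, $$

where \(R\) is the common radius of the \(AdS_5\) and \(S^5\) factors and \(\sqrt{\alpha'}\) is the string length. A weakly curved, easily tractable gravitational side corresponds to a large 't Hooft coupling \(\lambda\), which is why the duality is so useful exactly where the gauge-theoretical perturbative expansion breaks down.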
However, the remarkable relationship was found – and may be "almost proven" – by less shocking relationships between this gauge theory and string theory. In particular, the simplest representation of the gauge theory is the dynamics of D3-branes in type IIB string theory at very long distances. Some properties of the gauge theory may be deduced out of this realization immediately. In particular, the theory inherits the \(SL(2,\ZZ)\) S-duality group – which includes the \(g\to 1/g\) exchange of the weak coupling with the strong coupling – from the full type IIB string theory. In the type IIB string theory, the S-duality group may also be motivated by representing type IIB string theory as a 12-dimensional theory, F-theory, compactified on a two-torus. This toroidal proof of the S-duality group may also be realized by another embedding: the gauge theory may also be viewed as a long-distance limit of the \(d=6\) \((2,0)\) superconformal field theory compactified on a two-torus; the logic is the same.
You should appreciate that the S-duality is an extremely complicated relationship if you want to construct it or prove it by hand. In fact, it replaces point-like elementary oscillations that are weakly coupled with extended objects such as magnetic monopoles that are strongly coupled. They look like very different physical objects and the proof of the equivalence can't be made in perturbative expansion – because it is not a duality that holds order-by-order in this expansion – but it's still true. But of course, all tests you can fully calculate work: the gauge theory seems to possess the non-trivial S-duality group. In its stringy incarnation, the S-duality may be seen within a second.
Also, Maldacena's holographic duality boils down to the construction of the gauge theory involving D3-branes, too. The low-energy limit of the D3-branes' internal interactions has to be an interacting theory with 16 supercharges – because they aren't being broken by anything – and that has a field content that may be obtained from the counting of open string excitations attached to the D3-branes. You will find out that the theory has to be a gauge theory with the degrees of freedom I enumerated above; the supersymmetries and consistency dictate the interactions uniquely. In the long-distance limit, only the massless open strings i.e. gauge fields and their superpartners matter; closed strings (especially gravity) are decoupled because the energy density per Planck volume is very low in this limit. So we really do have a non-gravitational theory.
On the other hand, the D3-branes in string theory are real objects, lively animals that manifest themselves in many other physical ways. In particular, they have a gravitational field that extends to the transverse dimensions. Much like D0-branes would be particles that would behave as black holes, D3-branes are extended versions of the same objects, extended black holes. We call them black branes or black \(p\)-branes. They are black 3-branes, in this case. Just to be sure, in the previous paragraph, I stated that the gravitational force between the open string interactions may be neglected; but the gravitational field from their substrate – the static D3-branes in which the open strings live – still curves the 10-dimensional spacetime of type IIB string theory.
A funny thing is that if you adopt the full 10-dimensional perspective, the low-energy excitations have another interpretation: they are physical states that are located near the event horizon of the black branes. The relationship between the adjectives "low-energy" and "near-horizon" holds because near the horizon, it's where the excitations that look "very red" from the global viewpoint (of an observer at infinity) may be created in generic processes. That's because of the gravitational red shift, of course.
If you ask which degrees of freedom are kept if you simply consider all low-energy excitations of those 3-branes, you have two methods to answer: you either realize that the 3-branes may be described as D3-branes whose dynamics is governed by interactions of open strings and the low-energy limit of the open strings' interactions is nothing else than the gauge theory; or you may imagine that the D3-branes are actual solutions of a gravitational theory – an extension of general relativity – and low-energy states are the states of all objects that move near the event horizon.
Each of these operations is a valid method to isolate the low-energy states; so the two theories obtained by these methods must be exactly equivalent. That's an elegant proof of the AdS/CFT correspondence, a non-technical, non-constructive proof that avoids almost all mathematics (although one should still add some mathematics in order to show that it really deserves to be called the "proof"). The near-horizon geometry of the black 3-branes is nothing else than \(AdS_5\times S^5\) and gravitational – well, type IIB stringy – phenomena within this spacetime must therefore be exactly described by a four-dimensional gauge theory.
Of course, this successful union of string theory and gauge theory may be extended to other gauge groups, less supersymmetric gauge theories corresponding to less symmetric compactifications of the gravitational side, and even to other dimensions. Lots of objects on both sides of the equivalence may be given new interpretations using the other description, and so on. But the main goal of this text is to describe new field theories and new non-gravitational non-field theories that arise from similar constructions. The most supersymmetric example of the first category is the so-called \((2,0)\) superconformal field theory in 6 dimensions.
M5-branes and their dynamics
In the case of the D3-branes above, we considered objects in string theory in ten dimensions. In the usual weakly coupled approach, these theories are parameterized by the string coupling constant \(g_s\) which is the exponential of the (stringy) dilaton; greetings. The coupling constant is adjustable in the simplest vacua; all values are equally good but the choice isn't a parameter representing inequivalent possibilities. Instead, because the coupling is an exponential of the dilaton and the dilaton is a dynamical field, different values of the coupling constant correspond to different environments that may be achieved in a single theory.
In realistic compactifications, a potential for the dilaton is generated (much like the potential for all other moduli) and string theory picks a preferred value of the string coupling which is at least in principle but – to a large extent – also in practice calculable (much like the detailed shape of the extra dimensions etc.).
However, there exists a vacuum of string/M-theory that has no dilaton-like scalar field that would label inequivalent environments. Of course, it's the 11-dimensional M-theory. The field content of the eleven-dimensional supergravity only includes the graviton, a spin-3/2 gravitino, and a three-form gauge potential generalizing electromagnetism. No spin-0 scalar fields here.
That's kind of nice because the theories we may obtain from M-theory in similar ways as the theories obtained from type II or type I or heterotic string theory have an unusual property: they have no adjustable dimensionless coupling constants. This is something we're not used to from the quantum field theory courses taught at schools. In those courses, we first start with a free theory and interactions are added as a voluntary deformation. All these interactions may be chosen to be weak because the coupling constants are adjustable and the free, non-interacting limit is assumed to be OK.
However, for theories obtained from M-theory, we can't turn off the interactions at all! These theories inevitably force their degrees of freedom to interact with a particular vigor that cannot be reduced at all. Because the coupling constants may be measured as the strength of the "quantum processes" – how much the one-loop diagrams where virtual pairs exist for a while are important relatively to the tree-level "classical" processes – we may also say that the theories extracted from M-theory are intrinsically quantum and they have no classical limit.
Are there any?
You bet. As I mentioned in my discussion of 11D SUGRA, the theory has to contain a three-form potential \(C_3\). One may add terms in the Lagrangian where \(C_3\) is integrated over a 3-dimensional world volume in the spacetime. This term generalizes the \(\int \dd x^\mu A_\mu \) coupling of the electromagnetic fields with world lines of charged particles (in the limit in which they're treated as particles with clear world lines, not as fields). And indeed, M-theory does allow such terms; the 3-dimensional world volumes are those of M2-branes, or membranes, objects with 2 spatial and 1 temporal dimensions.
Also, the exterior derivative of the \(C_3\) potential is a four-form \(F_4\) field strength. By using the epsilon symbol in eleven dimensions, this may get mapped to a Hodge-dual seven-form \(F_7\) potential which is locally, in the vacuum, the exterior derivative of a six-form "dual potential" \(C_6\). So M-theory also admits couplings of this \(C_6\) and indeed, the 6-dimensional world volume we integrate over is the world volume of M5-branes, the electromagnetic dual partners of M2-branes.
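Schematically, and with the tensions and numerical coefficients suppressed because they depend on conventions, the couplings mentioned in the last two paragraphs look like

$$ q\int_{\rm worldline} A_1,\qquad T_{M2}\int_{W_3} C_3,\qquad T_{M5}\int_{W_6} C_6, $$

with \(F_4 = dC_3\) and \(F_7 = {*F_4}\), which may locally be written as \(dC_6\) (up to the Chern–Simons-like corrections that the phrase "locally, in the vacuum" above alludes to).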
Just like string theories contain fundamental strings, F1-branes, and lots of heavy D-branes of various dimensions, M-theory contains no strings or 1-branes but it has M2-branes and M5-branes which have different dimensions but are "comparably heavy" as far as their typical mass scale goes.
A nice thing is that just like you may study the long-distance dynamics of D3-branes which led to the very important maximally supersymmetric gauge theory, you may also study the long-distance limit of the dynamics inside M2-branes and M5-branes. Both of them give you some new interesting theories. The theories related to the M2-branes were the subject of the recent "membrane minirevolution"; this was my name for the intense research of some supersymmetric 3-dimensional gauge theories extending the Chern-Simons theory. Some new ways to see the hidden symmetries of these theories were found; the most obvious "clearly new" development of the minirevolution was the ABJM theories extending the long-distance limit of the membranes to more complicated compactifications. The membrane minirevolution has surprised many people who had thought that such M(ysterious) field theories would never be written in terms of ordinary Lagrangians. They could have been written. People could only discover these very interesting and special Lagrangians once they were forced by string/M-theory to look for them.
When you consider the low-energy limit of the M5-branes, you get a six-dimensional theory: 5 dimensions of space and 1 dimension of time. It is useful to mention how spinors work in 6 dimensions. In 4 dimensions, the minimal spinor is a Weyl spinor (or, equivalently – when it comes to the counting of fields – the Majorana spinor). But there's only one kind: if you include a left-handed Weyl spinor, the theory immediately possesses the Hermitian conjugate right-handed one, too. So you only need to know how many spinors your theory has. For example, the \(\NNN=4\) theory has supercharges that may be organized into 4 Weyl or Majorana spinors.
However, things are a bit different in \(d=6\). Because it is an even number, one still distinguishes left-handed and right-handed Weyl spinors. But in spacetime dimensions of the form \(4k+2\), the left-handed and right-handed spinors are actually not complex conjugates to each other. You may incorporate them independently of each other. The same comment holds for supersymmetries; if you want to accurately describe how the spinors of supersymmetry transform, you must specify how many left-handed and how many right-handed Weyl spinors there are in the list of supercharges.
In ten dimensions, we use the "shortened" terms type I, type IIA, type IIB for \((1,0)=(0,1)\) supersymmetric theories, \((1,1)\) supersymmetric theories, and \((2,0)=(0,2)\) supersymmetric theories, respectively. The permutation of the two labels is immaterial. The type I and type IIB theories are inevitably left-hand-asymmetric i.e. chiral; type IIA is left-right-symmetric i.e. non-chiral, as expected from the fact that it may be produced as a compactification of an 11-dimensional theory.
In six dimensions, there's a similar classification. The \((1,1)\) theories are non-chiral and typically include some gauge fields. On the other hand, the \((2,0)\) theories are chiral. The \((2,0)\) theory we find in the long-distance limit of the M5-branes is chiral not only when it comes to the fermions in the field content. Because the labels \((2,0)\) are "very asymmetric" between the first and second digit, the left-right asymmetry actually inevitably gets imprinted to the bosonic spectrum, too. If we're explicit, it's because the theory contains "self-dual field strength fields" i.e. 3-form(s) \(H_3\) generalizing \(F_2\) in Maxwell's theory that however obey \(*H_3=H_3\). Note that this is possible in 6 dimensions but not in 4 dimensions because \((*)^2=+1\) in 6 dimensions but \((*)^2=-1\) in 4 dimensions.
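The sign rule quoted in the last sentence is a one-line exercise: acting twice with the Hodge star on a \(p\)-form in \(d\) spacetime dimensions with Lorentzian signature gives

$$ *^2 = (-1)^{p(d-p)+1}, $$

so a 2-form field strength in \(d=4\) picks up \((-1)^{2\cdot 2+1}=-1\) while a 3-form in \(d=6\) picks up \((-1)^{3\cdot 3+1}=+1\); a real self-duality condition \(*H_3=H_3\) is therefore consistent only in the six-dimensional case.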
Because the \((2,0)\) theory must allow a generalization of the gauge field whose field strength is however constrained by the self-duality condition, it's hard to write an explicit Lagrangian definition of the theory, at least if we want it to be a manifestly Lorentz-symmetric one. It's a part of the unproven lore that this can't be done. However, you must be careful about such widely held beliefs. In particular, the membrane minirevolution has shown that various Lagrangians that would be thought of as impossible are actually totally possible and you never know whether someone will find a clever trick by which this explicit construction may be extended to 6 dimensions.
So the six-dimensional theory can't be constructed as a "quantization" of a classical theory. It's a point that I discussed in less specific contexts in several recent articles about the foundations of quantum mechanics. We see many independent reasons why it's natural that no such "master classical theory" may exist in this case. First, the quantum theory requires the coupling constant to be "one" in some normalization: it can't be adjusted to be close to zero so studying the theory as the deformation of a free theory would be similar to studying \(\pi\) using the \(\pi\to 0\) limit. Second, we have mentioned that the theory contains self-dual fields and it's hard to write a Lagrangian for a potential if you also want its field strength to be self-dual. Third, and it is related, you would have a problem to write renormalizable interactions in a theory in 6 or more dimensions, anyway. A \(\phi^3\) cubic coupling for a scalar would be the "maximum" that would still be renormalizable but it would create instabilities. By denying that there exists a way to represent the full quantum field theory as a quantization of a classical theory (with a polynomial Lagrangian), string/M-theory finds the loophole in all these arguments that a sloppy person could offer as an excuse that such a non-trivial 6-dimensional theory shouldn't exist.
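The third point is ordinary power counting. With \(\hbar=c=1\), a scalar field in \(d\) spacetime dimensions has mass dimension \((d-2)/2\), so the coupling constant of a \(\phi^n\) interaction has dimension

$$ [g_n] = d - n\,\frac{d-2}{2} = 6-2n \quad\text{for } d=6, $$

which vanishes for \(n=3\) (the classically marginal cubic coupling mentioned above) and is already \(-2\) for \(n=4\); all higher polynomial self-interactions of a scalar are power-counting nonrenormalizable in six dimensions.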
However, this theory still exists as an interacting, non-gravitational theory with all the things you expect from a local quantum field theory. One may define local fields \(\Phi_k(x^\mu)\) and these fields have various correlation functions and may be evolved according to some well-defined Heisenberg equations, and so on. It may be hard or impossible to use the perturbative (and other) techniques we know from the gauge theory but the resulting product – Green's functions etc. – is conceptually identical to the product in the gauge theory. You may be ignorant about methods how to compute these physical answers in the \((2,0)\) theory; but one may actually prove – using the consistency of string theory as a main tool or assumption – that these answers exist and have the same useful properties as similar answers in gauge theory. However, in gauge theory, we may calculate a whole 1-parameter or 2-parameter family of the "collection of Green's functions"; the families are parameterized by the coupling constant (and the axion). In the \((2,0)\) case, there are no such parameters. It's just an isolated theory – one isolated set of Green's functions encoding all the evolution and interactions – without continuously adjustable dimensionless parameters.
Much like the \(\NNN=4\) gauge theory is equivalent to type IIB string theory in \(AdS_5\times S^5\) which we could have derived as the near-horizon geometry of a stack of the D3-branes, the \((2,0)\) theory in six dimensions may be shown to be equivalent to M-theory on \(AdS_7\times S^4\), the near-horizon geometry of a stack of the M5-branes in M-theory. Just to be sure, there is a similar case involving a 3-dimensional Chern-Simons-like theory and M-theory on \(AdS_4\times S^7\) – note that the labels four and seven got exchanged – which is the near-horizon geometry of a stack of M2-branes in M-theory.
So while the perturbative, weakly coupled methods don't exist for this six-dimensional theory, the holographic AdS/CFT methods work as well as they do for the gauge theory. Also, this six-dimensional theory is as important for Matrix theory, a non-gravitational way to describe some simple enough compactifications of string/M-theory on flat backgrounds, as the gauge theory is. In particular, if you compactify the \((2,0)\) theory on a five-torus (times the real line for time), you get a matrix description for M-theory on a four-torus.
Perturbatively, the \(\NNN=4\) gauge theory with the \(SU(N)\) gauge group seems to have the number of degrees of freedom – independent elementary fields – that scales like \(N^2\). That's because the adjoint representation may be viewed as a square matrix, of course. There are actually different, independent methods to derive this power law, too, in particular a holographic one that is based on the entropy of a dual bulk black hole.
The holographic methods may also be used for the M2-based 3-dimensional theory and the M5-based 6-dimensional theory. They tell you that the number of degrees of freedom in these two theories should scale like \(N^{3/2}\) and \(N^3\) in \(d=3\) and \(d=6\), respectively. The first case, a fractional power, doesn't even produce an integer but it has still been motivated in various ways.
The 6-dimensional case is even more intriguing because the integral exponent does suggest that there could exist a "constructive explanation" – some formulation that uses fields with three "fundamental gauge indices", if you wish. Many authors have tried to shed light on this strange power law. A month ago, Sav Sethi and Travis Maxfield offered a brand new calculation of the "conformal anomaly" (what was interpreted as the number of degrees of freedom) which also produces the right \(N^3\) scaling.
There's still a significant activity addressing this 6-dimensional theory and its less supersymmetric cousins. A few days ago, Elvang, Freedman, Myers, and 3 more colleagues wrote an interesting paper about the a-theorem in six dimensions. You should realize that despite the absence of an old-fashioned, "textbook" Lagrangian classical-based construction of the theory, the amount of knowledge has been growing for more than 15 years. Let me pick my 1998 paper with Ori Ganor as some "relatively early" research of physical effects that occur in this theory.
So the \((2,0)\) theory is conformal and therefore scale-invariant (it is a "fixed point" of the renormalization group) which is why it may occur as the low-energy limit of other physical theories in 6 dimensions; I will mention one momentarily. It has a qualitatively well-understood holographic dual and it appears in a matrix description of M-theory on a four-torus. Some fields, especially the "supersymmetry preserving ones", may be isolated and some of their correlation functions may be calculated purely from SUSY, and so on. The theory has various topological solutions that may be interpreted by various "perspectives" to look at this theory that string/M-theory offers. This six-dimensional theory is also an "ancestor" of the maximally supersymmetric gauge theory; the \(\NNN=4\) gauge theory may be obtained from a compactification of the six-dimensional theory on a two-torus.
There are interesting modifications and projections of this theory, too. For example, there are \((1,0)\) theories in six dimensions which respect an \(E_8\) global symmetry. This global symmetry is inherited from the \(E_8\) gauge symmetry that lives on the domain walls (ends-of-the-world) in M-theory whenever the M5-branes are placed on such a boundary. I can't say everything that is interesting about this theory but be sure that there would be lots of other things just to enumerate – and lots of interesting details if I were to fully "teach you" about those things.
One of the broader points is that physics is making progress and finding "conceptually new ways" how to think about old theories, how to calculate their predictions, and how to relate previously unrelated physical mechanisms and insights. Quantum field theory is essential in all this research; however, we know that quantum field theory isn't just some mechanical exercise starting from a classical theory and adding interactions to a free limit by perturbative interactions. There are lots of nonperturbative processes and insights that may be obtained without explicit perturbative calculations, too.
Little string theory
I have mentioned that the \((2,0)\) superconformal field theory discussed above was a quantum field theory whose Green's functions are as real as those coming from a gauge theory; they satisfy the same consistency, unitarity, and locality conditions, too. But it's a "fixed point", a scale-invariant theory that may be identified as the "ultimate long-distance limit" of some other theories. Are there any other theories of this kind?
Yes, you bet. But the most interesting ones aren't gauge theories. They're "little string theories".
A little string theory is a type of theory in spacetime that is something in between a quantum field theory in the spacetime and the full gravitating string theory in the same spacetime. Little string theories are not local because we may say that their elementary degrees of freedom or elementary building blocks arise from strings much like in the full string theory; however, an appropriate limit is taken so that the gravitational force between the strings decouples.
This seemingly contradicts the lore that every theory constructed from interacting strings inevitably includes gravity; however, there's actually no contradiction because while the little string theories contain strings and they are interacting theories, they actually cannot be constructed out of these "elementary strings" by following the usual constructive methods of the full string theory.
Fine, so what is the little string theory? The simplest little string theories carry the same \((2,0)\) supersymmetry in \(d=6\) as the superconformal quantum field theory I was discussing at the beginning. In fact, the long-distance limit of these little string theories (they are parameterized by discrete labels such as the number of 5-branes) produce the superconformal field theory we have already discussed.
But these little string theories are not superconformal or scale-invariant. In fact, they are not local quantum field theories at all. In this sense, they are just a generalization of a quantum field theory in a similar sense as the full string theory is a generalization of a quantum field theory. How can we obtain them?
The most straightforward way to obtain the \((2,0)\) superconformal field theories above was a stack of M5-branes in M-theory. Are there some other objects in string theory that are not M5-branes but that look like M5-branes in the low-energy limit? The answer is Yes. M-theory may be obtained as the strong coupling limit of type IIA string theory. Type IIA string theory also contains 5-branes. But they are not D5-branes which may be found in type IIB string theory; type IIB D5-branes produce \((1,1)\) supersymmetric theories in six dimensions, not \((2,0)\): their world volume is exactly as left-right-symmetric as the type IIB spacetime fails to be. There are also NS5-branes in type IIB string theory which have the same SUSY as the D5-branes, because of S-duality that relates them.
Type IIA string theory only contains D-even-branes, not D5-branes, but it still allows NS5-branes, the electromagnetic duals of fundamental strings. And while type IIA is left-right-symmetric in the spacetime, its NS5-branes are left-right asymmetric; note that there is an anticorrelation between the chirality of the spacetime and the chirality of the NS5-brane world volume.
The dilaton of type IIA string theory has a value that depends on the distance from the NS5-branes; this contrasts with the behavior of D3-branes in type IIB string theory that preserve the constant dilaton (and string coupling) in the whole spacetime. This dependence of the dilaton – it goes to infinity near the NS5-branes' core – means that the ultimate low-energy limit of the dynamics of NS5-branes is the same one as it is for M5-branes in M-theory: the new 11th dimension really emerges if you're close enough to the NS5-branes.
On the other hand, one may define a different scaling limit of dynamics inside the type IIA NS5-branes in which the gravity in between the excitations of the NS5-branes is sent to zero; but which is not the ultimate long-distance, scale-invariant limit yet. Such a theory inherits a privileged length scale, the string scale, from the "parent" type IIA string theory. But it doesn't preserve the dilaton or the coupling constant because it's scaled to infinity.
The resulting theory of this limit, the little string theory, has no gravitational force but it has string-like excitations. It is not a local quantum field theory but its low energy limit is a quantum field theory. The theory – which has a "qualitatively higher level of conceptual complexity than the \((2,0)\) superconformal field theory" – also enters Matrix theory; its compactification on a five-torus is the matrix description of M-theory compactified on a five-torus. All the usual limits and dualities between the toroidally compactified string/M-theoretical backgrounds may be deduced from the matrix description, too: these dualities may be reduced to relationships between their non-gravitational matrix descriptions.
The little string theories have various other relationships to quantum field theories and vacua of the full string theory, too. Again, I can't say everything that is known about them and everything that makes them important.
Let me emphasize that none of these theories – neither the new superconformal field theories nor the little string theories – has any adjustable continuous dimensionless parameters. They still have discrete parameters – counting the number of 5-branes in the stack and/or whether or not these 5-branes were positioned at some end-of-the-world boundaries or other singular loci in the parent spacetime. But the absence of the continuously adjustable parameters allows us to say that all these quantum theories are "islands" of a sort.
They're obviously important islands. If you want to study consistent non-gravitational interacting theories in 6 dimensions, these islands may be as important as Hawaii or the Greenland or Polynesia or Africa – it's hard to quantify their importance accurately in this analogy. However, the importance is clearly "finite" and can't go to zero. Hawaii, the Greenland, Polynesia, or Africa inevitably enters many people's lives.
Finally, I want to end up with a more general comment. New exceptional theories that were previously overlooked but that obey all the "quality criteria" that were satisfied by the more well-known theories; and all the new perspectives and "pictures" that allow us to say something or calculate something about these as well as the more ordinary theories are important parts of the genuine progress in theoretical physics and everyone who actually likes theoretical physics must be thrilled by this kind of progress and by the new "concise ways" how some previously impenetrable technical insights may be explained or proved.
There exists a class of people with a very low intelligence, no creativity, no imagination, and no ability to see the "big picture" who are only capable of learning some very limited rules and who are devastated by every new powerful technique or technology that physics learns. These human feces often concentrate around Shmoit-and-Shmolin kind of aggressive sourball crackpot forums. I hope that all readers with IQ above 100 have managed to understand why the text above is enough as a proof of the simple assertion that all these Shmoits-and-Shmolins are just intellectually worthless dishonest scum.
EBITDAR: Meaning, Formula & Calculations, Example, Pros/Cons
Julia Kagan
Reviewed by Amilcar Chavarria
Earnings before interest, taxes, depreciation, amortization, and restructuring or rent costs (EBITDAR) is a non-GAAP tool used to measure a company's financial performance. Although EBITDAR does not appear on a company's income statement, it can be calculated using information from the income statement.
EBITDAR is a profitability measure like EBIT or EBITDA that adjusts net income to be internally analyzed by removing certain costs.
It's better for casinos, restaurants, and other companies that have non-recurring or highly variable rent or restructuring costs as these expenses are taken out of net income.
EBITDAR gives analysts a view of a company's core operational performance apart from expenses unrelated to operations, such as taxes, rent, restructuring costs, and non-cash expenses.
Using EBITDAR allows for easier comparison of one firm to another by minimizing unique variables that don't relate directly to operations.
EBITDAR may unjustly remove controllable costs which may not hold management accountable for some costs incurred.
Formula and Calculation of EBITDAR
EBITDAR can be calculated in several different ways. Because EBITDA is a heavily used financial calculation, the most common way is to add restructuring and/or rental costs to EBITDA:
EBITDAR = EBITDA + Restructuring/Rental Costs
where:
EBITDA = Earnings before interest, taxes, depreciation, and amortization
Different approaches to calculating EBITDAR may start with different earnings or income calculations. In general, the earnings portion refers to net income. This is the all-inclusive, non-restrictive earnings that a company has made in a given period that is not yet adjusted for any items below.
Interest expense is the cost incurred for securing a debt or line of credit with an outstanding balance. A company may choose to eliminate this cost because it may not be controllable by management. In addition, it may be strategically advantageous to have opted to finance something using low-cost debt instead of relying on internal capital or higher-cost methods such as issuing equity shares.
Tax expense is the cost imposed on a company for local, state, or federal taxes. Because a company often does not have a say in its tax assessment, it may be removed for internal analysis. However, companies also have the discretion of forming favorable legal structures to help minimize its tax assessment. Some may argue that if a company fails to strategically plan its future tax liability, it should be held accountable for the taxes assessed when analyzing financial results internally.
Depreciation is the allocated cost of a tangible asset over its useful life. Though a company may outright purchase an asset, it will likely not receive the benefit of the asset all at one time but instead over a period of time. Although there are different depreciation rates and methods, a company may have much control over how depreciation impacts their net income calculation. In addition, companies may not care to see such non-cash transactions when analyzing results.
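As a concrete illustration of the mechanics (a generic straight-line example, not something taken from the article), the annual charge is simply the depreciable cost spread evenly over the useful life:

```python
# Hypothetical straight-line depreciation example (not from the article).
def straight_line_depreciation(cost, salvage_value, useful_life_years):
    """Annual depreciation expense under the straight-line method."""
    return (cost - salvage_value) / useful_life_years

# A $60,000 machine with a $10,000 salvage value and a 5-year useful life:
print(straight_line_depreciation(60_000, 10_000, 5))  # 10000.0 per year
```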
In a very similar manner as depreciation, amortization is the spreading of costs over the useful life of an asset. However, amortization occurs for intangible assets such as trademarks, patents, and goodwill. The benefit of these items is received over time; however, the worth theoretically deteriorates over time and they become less valuable as they are used or competitors make them obsolete. Just like depreciation, amortization is a non-cash, uncontrollable expense that management may not care to analyze.
Restructuring or Rental Costs
The element that makes EBITDAR different from other calculations is the elimination of restructuring costs or rental costs. These costs may not yield financial results comparable with other companies or comparable for a single company across a period of time. For certain industries and sectors, it may be more favorable to remove these costs when analyzing financial results for reasons discussed below.
EBITDAR is an internal analysis tool only. Though it may be discussed within the notes to a company's financial statements, companies are not required to publicly disclose their EBITDAR calculations.
EBITDAR is a metric used primarily to analyze the financial health and performance of companies that have gone through restructuring within the past year. It is also useful for businesses such as restaurants or casinos that have unique rent costs. It exists alongside earnings before interest and tax (EBIT) and earnings before interest, tax, depreciation, and amortization (EBITDA).
Using EBITDAR in analysis helps to reduce variability from one company's expenses to the next, in order to focus only on costs that are related to operations. This is helpful when comparing peer companies within the same industry.
EBITDAR doesn't take rent or restructuring into account because this metric seeks to measure a company's core operational performance. For example, imagine an investor comparing two restaurants, one in New York City with expensive rent and the other in Omaha with significantly lower rent. To compare those two businesses effectively, the investor excludes their rent costs, as well as interest, tax, depreciation, and amortization.
Similarly, an investor may exclude restructuring costs when a company has gone through a restructuring and has incurred costs from the plan. These costs, which are included on the income statement, are usually seen as nonrecurring and are excluded from EBITDAR to give a better idea of the company's ongoing operations.
EBITDAR is most often calculated for internal purposes only, as it is not a required financial reporting metric for public companies. A firm might calculate it each quarter to isolate and review operational expenses without having to consider fluctuating costs such as restructuring, or rent costs that may differ within various subsidiaries of the company or among the firm's competitors.
EBITDAR Example
Imagine Company XYZ earns $1 million in a year in revenue and incurs $400,000 in total operating expenses. Included in the firm's $400,000 operating expenses is depreciation of $15,000, amortization of $10,000, and rent of $50,000. The company also incurred $20,000 of interest expenses and $10,000 of tax expenses for the period.
Company XYZ can begin by calculating its net income. This is the total amount of revenue less the total amount of expenses.
Net Income = $1,000,000 Revenue - $400,000 Operating Expenses - $20,000 Interest - $10,000 Taxes = $570,000
Company XYZ can then back into EBIT by adding back interest and taxes.
EBIT = $570,000 Net Income + $20,000 Interest + $10,000 Taxes = $600,000.
Company XYZ can further back out additional costs to arrive at EBITDA.
EBITDA = $600,000 EBIT + $15,000 Depreciation + $10,000 Amortization = $625,000
Last, Company XYZ can reincorporate rental costs to arrive at EBITDAR.
EBITDAR = $625,000 EBITDA + $50,000 Rental Expenses = $675,000
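The same walk-through can be written as a few lines of code. This is a sketch using the article's hypothetical Company XYZ figures; the margin printed at the end is an extra illustration, computed as EBITDAR divided by revenue.

```python
# Company XYZ figures from the example above (all amounts in dollars).
revenue = 1_000_000
operating_expenses = 400_000   # includes the depreciation, amortization, and rent below
depreciation = 15_000
amortization = 10_000
rent = 50_000
interest = 20_000
taxes = 10_000

net_income = revenue - operating_expenses - interest - taxes
ebit = net_income + interest + taxes
ebitda = ebit + depreciation + amortization
ebitdar = ebitda + rent

print(net_income)         # 570000
print(ebit)               # 600000
print(ebitda)             # 625000
print(ebitdar)            # 675000
print(ebitdar / revenue)  # 0.675, i.e. a 67.5% EBITDAR margin for this example
```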
EBITDAR can be calculated many different ways. For example, if you know EBITDA, you can simply add restructuring or rent costs. As another example, if you know EBIT, just add back depreciation, amortization, and restructuring/rent costs. The ultimate calculation across all different methods should be the same.
Advantages and Limitations of EBITDAR
Advantages of EBITDAR
EBITDAR is more useful than other financial calculations in several different situations:
EBITDAR removes one-time restructuring costs. As these expenses are often non-recurring, it is less useful to analyze earnings after these one-time costs.
EBITDAR makes certain companies more comparable. By removing rental costs, it becomes more reasonable to compare the operations of different companies without discrepancies arising based on whether the company owns its assets or not.
EBITDAR adjusts for geographical regions with higher costs. Some locations may have higher rent costs based on the nature of that area.
EBITDAR communicates a more controllable earnings calculation. Management can more strategically approach earnings calculations when less controllable elements have been removed.
Limitations of EBITDAR
However, there are several cases where EBITDAR is not as advantageous to use:
EBITDAR manipulates what may be a recurring reorganizational process. Larger companies may restructure their entity very frequently. As this may be an inherent cost of the company, some may argue it is unfair to eliminate this naturally-occurring cost.
EBITDAR may eliminate controllable costs. An organization must still be held responsible for inefficiency if it continually must undergo restructuring. Because EBITDAR "hides" the restructuring cost, management may not take full ownership of this semi-controllable aspect of operations when only looking at this calculation.
EBITDAR does not reflect potentially higher selling prices. The argument is to eliminate rent costs as some areas incur higher expenses; however, these areas may also be subject to geographical pricing and more likely to charge higher rates for their products and incur greater income (which is not adjusted for).
EBITDAR attempts to align reporting to cash activity but may be misleading. A company must still incur cash outlays for interest, taxes, restructurings, and rental costs. By removing these amounts, a company may be misled regarding how much cash it actually goes through in a period.
Pros
Strives to exclude non-recurring or one-time expenses
Disregards different capital structures and attempts to compare companies based on their operations only
Adjusts for how different regions may have different costs
Aims to include only the major expenses that management has the ability to control
Cons
Removes restructuring costs which may be recurring and part of the normal course of operations for a large company
May remove controllable costs that management should be held accountable for
Does not reduce income for higher cost areas although expenses are adjusted for
May mislead management regarding cash flow needs
EBITDAR vs. Other Financial Calculations
EBITDAR vs. EBITDA
The difference between EBITDA and EBITDAR is that the latter excludes restructuring or rent costs. However, both metrics are utilized to compare the financial performance of two companies without considering their taxes or non-cash expenses such as depreciation and amortization.
A company may choose EBITDAR over EBITDA if it has undergone a recent reorganization that will make it more difficult to analyze year-over-year results. In the year of the reorganization, expenses will likely be higher due to conversions, training, and temporary inefficiencies.
A majority of companies are able to stick with EBITDA because (1) they have not recently undergone a reorganization, (2) they wish to still include the cost of that reorganization as part of their earnings analysis since it may have been controllable, and (3) it is a much more widely accepted earnings calculation.
EBITDAR vs. EBIT
The difference between EBITDAR and EBIT is more substantial. EBIT adjusts earnings for interest and taxes, but it still includes the costs allocated to a good over its useful life. EBIT also includes restructuring and rental costs.
The argument for EBIT is that the cost of depreciable assets is still a controllable cost. Although management may not have full discretion on how long an asset is depreciated for or what its depreciable base is, the company still decided to incur the cost of acquiring the asset to use as part of operations. For this reason, depreciation is included in EBIT.
The same concept applies to intangible assets that must be amortized. A company can argue it receives a financial benefit (i.e. greater brand awareness, better product recognition) from goodwill; therefore, because it is recognizing the financial benefit, it must also consider the financial cost (amortization).
Potentially the largest difference between EBITDAR and EBIT relates to cash flow. EBITDAR removes many more non-cash expenses and one-time expenses; therefore, EBITDAR may be a more accurate reflection of what a company will need in terms of cash on a recurring basis. On the other hand, EBIT is usually a greater reflection of what a company's accounting profit will be.
EBITDAR vs. Net Income
The greatest difference lies between EBITDAR and net income. Net income is the ultimate bottom line. It includes all company-wide expenses whether they require cash outlay or not. Net income does not distinguish between different types of costs; all expenses are included.
Net income is heavily dictated by accounting rules and non-cash transactions. Though the financial industry heavily relies on analyzing and comparing net income across companies, there are simply too many variables impacting this single calculation to make it truly useful for analysis. This idea is what led to the calculations above; instead of relying on a single, broad number, analysts could choose the aspects of a company to look into by forming different metrics such as EBITDAR.
How Do You Calculate EBITDAR?
EBITDAR is calculated by subtracting interest, taxes, depreciation, amortization, and restructuring/rent costs from earnings. Because EBIT and EBITDA are commonly used measurements as well, a company can calculate EBITDAR by manipulating either of those two measurements. For example, a company can simply subtract depreciation, amortization, and restructuring/rent costs from EBIT.
What Is a Good EBITDAR Margin?
It is not uncommon to see an EBITDA margin exceed 20%. The general rule of thumb is that a strong EBITDA margin is at least 10%; because EBITDAR may not be substantially different from EBITDA for many companies, a good EBITDAR margin will be at least in the double digits.
What Companies Use EBITDAR?
Instead of using EBITDA, EBITDAR is used by companies that recently underwent restructuring. The goal of EBITDAR is to eliminate these one-time restructuring costs to allow management an easier opportunity to analyze financial performance. EBITDAR is also used by casinos, restaurants, or other businesses that typically pay rent. Companies that want to strictly look at financial performance relating to more controllable aspects of operations may choose to internally eliminate rent for better analysis.
What Is the Difference Between EBIT, EBITDA, and EBITDAR?
EBIT, EBITDA, and EBITDAR are all calculations that adjust a company's earnings to eliminate less controllable aspects of the company's operations. The difference between the three is the amount of items that are taken out of earnings for analysis purposes. Calculations with longer acronyms will have more items adjusted out of earnings.
EBITDAR is a variation of the very commonly used EBIT or EBITDA calculations. In addition to adjusting income for interest, taxes, depreciation, and amortization, EBITDAR removes (1) restructuring costs and (2) rent payments. This calculation is used by companies that want a better sense of financial performance who recently underwent a one-time restructuring or do not own a majority of their assets.
Resultant of two forces acting in the same line
I'm quoting the definition of Resultant of two forces acting in the same line from the book "A FIRST COURSE IN PHYSICS" one of whose authors is Robert Andrews Millikan:
The resultant of two forces is defined as that single force which will produce the same effect upon a body as is produced by the joint action of the two forces.
I'm really confused as to whether the resultant of two forces, say $A$ and $B$, is the force which is produced as a result of the two forces just mentioned, or whether it is a completely separate force which is not caused by $A$ and $B$ but whose effect is the same as that of the force produced by $A$ and $B$. Even though the force caused by $A$ and $B$, let's call it $C$, is equal in magnitude and direction to the resultant $R$ of $A$ and $B$, $C$ is caused by $A$ and $B$, whereas $R$ has no primary causes as $C$ has. This is what I conclude from this definition; however, I'm not sure yet.
newtonian-mechanics forces vectors
Samama Fahim
Mathematically, given two forces $\mathbf F_A$ and $\mathbf F_B$, the resultant is simply their vector sum; \begin{align} \mathbf F_A + \mathbf F_B. \end{align} This is consistent with the definition in your quote because it is a physical fact that if these two forces both act on an object of mass $m$, then the acceleration of the object will satisfy \begin{align} \mathbf a = \frac{\mathbf F_A + \mathbf F_B}{m}. \end{align} But here's the interesting thing. Let's say, for example, that the two forces $\mathbf F_A$ and $\mathbf F_B$ are produced by two people pushing on a rigid box. If, instead, a third person were to push on the box with a force $\mathbf F_C$ that is equal to their resultant \begin{align} \mathbf F_C = \mathbf F_A + \mathbf F_B \end{align} then the resulting motion of the box would be exactly the same. In other words, the motion of the box is insensitive to precisely what makes up the total force on it, all that matters is what the total force vector is.
In summary, the resultant force can be viewed as a unique mathematical object, namely the vector sum of the total forces, but physically, in terms of the motion of the object, the different ways of achieving that resultant are all equivalent. However, when we talk physically about the resultant force on an object, we are typically talking about the effect of all of the forces that are actually acting on it, not some other force that would be equivalent.
joshphysics
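To make the point above concrete, here is a small numerical sketch (an editorial illustration, not part of either answer; the numbers are made up): two forces act on the same body, and the motion only depends on their vector sum.

```python
import numpy as np

# Two forces acting on the same body of mass m (hypothetical values).
F_A = np.array([3.0, 0.0, 0.0])   # newtons
F_B = np.array([1.0, 4.0, 0.0])   # newtons
m = 2.0                           # kilograms

resultant = F_A + F_B             # the single equivalent force
a = resultant / m                 # Newton's second law with the total force

print(resultant)  # [4. 4. 0.]
print(a)          # [2. 2. 0.] -- same motion as if one person pushed with the resultant
```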
The word resultant implies that this $R$ is determined by some prior force(s) (or primary causes as you put it) such as $A$ and $B$. I think your question is likely coming from the use of the word resultant, which implies the existence of primary causes on top of the fact that it already implies a mathematical equivalence.
As you know, when you actually apply the mathematics for example on vectors, it makes no difference to the solution which inputs you use if all your inputs are equivalent.
E.g. $F=(+4-6)\hat x $ is completely equivalent to $F=-2 \hat x$
i.e., any problem requiring the use of the $F$ in the x axis would be completely and just as accurately solved using either--the solution doesn't care. The equivalence implication of the word resultant is as axiomatic as saying 4-6=-2. The reason why the word resultant is used instead of equivalent is to show that the simple equivalent (such as $R$) has primary causes (such as $A$ and $B$), thereby making it valid to use this simplification in the first place ($R=C$ because $A+B=C$). Ultimately, using resultant values makes no difference to the solution; it just makes the problem look a lot easier, but the step remains to show the calculations of how you arrived at your simplified/resultant/equivalent input value.
gregsan
Disadvantages and Advantages of Energy Harvesting
August 19, 2019 by Francesco Orfei
Energy harvesting is a way to obtain electrical energy from energy already available in the environment. Where is energy harvesting appropriate in designs?
One of the main issues in designing modern devices is the pervasive requirement for extremely low power, especially for wireless sensor network applications.
When dealing with such power requirements, there are at least two main factors:
The amount of time a system must remain ON with respect to a hypothetical period of work (the duty cycle)
The number of components that compose a system
Additionally, an engineer must consider factors such as budget. For example, non-rechargeable batteries discharge and must be properly disposed of at the end of their life, which represents a cost. Meanwhile, rechargeable batteries or capacitors are a valid alternative because they can be recharged.
In this article, we'll talk about the broad strokes of energy harvesting, a concept wherein a system can "harvest" energy from its environment.
What Is Energy Harvesting?
Energy harvesting is a way to obtain electrical energy from energy already available in the environment. This concept represents a valid solution for providing energy to electronic systems and requires an energy converter (energy harvester) to function.
Perhaps the most famous example of energy harvesting is the use of light, one of the most widespread sources of energy, for which a photovoltaic cell is the corresponding energy harvester.
Among the other sources, kinetic energy harvesting is another important technology, for which a vibration energy harvester is the corresponding transducer.
It is important to note that only a portion of the available energy can be converted into electrical energy because of dissipation during the conversion, in which some heat is produced.
Energy Harvesting vs. Energy Stealing
Energy harvesting means recovering energy that is already spread in the environment; this is totally different from the concept of subtracting energy from, say, the motion of a vehicle.
If we place energy harvesters under the asphalt of a road to harvest electricity from passing cars, we define this concept as "stealing" rather than "harvesting". This is because we are subtracting energy from the motion of the vehicle. In this way, the vehicle will consume more fuel because of the energy "stolen" by the harvester. We can think about this as the vehicle making a very slight climb.
This can be explained with the first law of thermodynamics, i.e., energy cannot be created or destroyed, but it can be transferred from one location to another and converted to and from other forms of energy.
There are four renewable sources of energy: thermal, solar, electromagnetic and kinetic. Vibration energy harvesting converts kinetic energy into electric energy. Kinetic energy is not the most abundant source in nature, but it can be a valid alternative to solar. At night or inside a tunnel no light is available, but a running machine vibrates, and these vibrations can be converted into electricity.
Using Energy Harvesting in an Electronic System
Dealing with energy harvesters is never easy. They inexorably impact the cost and the performance of electronic systems.
When an energy harvester is the only source of energy in a system, it generally means that a very efficient energy management system is required. Compared to the cost of a battery, an energy harvester is more expensive by orders of magnitude.
If we consider a coin cell battery-powered sensor, the cost of a typical battery, such as a CR2032, is around one dollar and it supplies 3 V. In order to replace this battery with an energy harvester like a solar cell, we have to take into account that the flux of energy is not constant and that energy storage is required. This storage can be a supercapacitor or a rechargeable battery but, in both cases, we need a charger and a voltage regulator.
This is the reason why an energy-harvester-powered system costs more and is more complex. But on the other hand, we get a theoretically unlimited operating life.
Advantages of Energy Harvesting
In order to understand why energy harvesting is important, imagine a very big bridge where many sensors are placed for structural monitoring. They should be energetically autonomous, small, light and capable of wireless communication.
These requirements are very common today because of the hassle associated with wired power and connectivity for a sensor. Of course, no one wants to change batteries either, because maintenance is a cost.
Or imagine being in a very large and wild area where no power lines are available. Or imagine having to insert a sensor inside a structure (e.g., a column made of concrete or under the asphalt) so that you cannot extract it to change the battery.
The only economical way to power an electronic system for a long time in these situations is to use an energy harvester.
Disadvantages of Energy Harvesting
There are also some disadvantages to energy harvesting.
For example, the cost of an energy harvester can be high when compared with the overall cost of a wireless sensor.
Another con is that it is not always easy to have a small converter. If we think about the size of a coin cell battery, today it's not easy to build an energy harvester with the same footprint that can provide a useful amount of energy. For the sake of comparison, a typical deep sleep current of a wireless sensor can be around one microamp. A vibration energy harvester the size of a AA battery can provide tens or hundreds of microamps at the most with accelerations of around 1 g. (These values vary a lot depending on the technology of the harvester, on the materials used, on the frequency distribution of the vibrations and on their peak-to-average ratio.)
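As a rough, illustrative power-budget sketch (all numbers below are assumptions chosen only to show the arithmetic, not measurements), one can compare the average current drawn by a duty-cycled sensor node with what a small harvester might supply:

# Rough power-budget sketch for a duty-cycled wireless sensor node.
# All values are illustrative assumptions.
sleep_current_a  = 1e-6     # ~1 uA deep-sleep current, as cited above
active_current_a = 10e-3    # 10 mA while measuring/transmitting (assumed)
active_time_s    = 0.05     # 50 ms awake per cycle (assumed)
period_s         = 60.0     # one measurement per minute (assumed)

duty_cycle = active_time_s / period_s
avg_current_a = active_current_a * duty_cycle + sleep_current_a * (1 - duty_cycle)

harvester_current_a = 50e-6  # tens of microamps from an AA-sized vibration harvester (assumed)

print(f"average load current : {avg_current_a * 1e6:.1f} uA")
print(f"harvester output     : {harvester_current_a * 1e6:.1f} uA")
print("harvester sufficient?", harvester_current_a >= avg_current_a)

If the harvester's average output exceeds the average load current (with some margin for storage and conversion losses), the node can in principle run indefinitely; otherwise the duty cycle has to be relaxed.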
Moreover, generally energy conversion efficiency increases with the size of the generator. This is due to several factors, one of which is related to the fact that energy harvesters often produce an AC current which must be rectified. If we use diodes to rectify the current, we have to deal with the threshold voltages of the junctions; these represent an energy loss. The bigger the input voltage to the rectifier, the higher is the conversion efficiency.
Generally, we can say that efficiency can be evaluated with the following formula:
$$\text{Efficiency} = \frac{\text{Output Energy}}{\text{Input Energy}} \leq 1$$
Energy Sources in the Environment
When we need energy for our system, we have to choose among several sources and we need to take into account several other parameters such as the cost, the availability of components, the impact on the environment, the energy density, the transportability, the possibility of energy storage, and the safety situation.
Generally, as a starting point, it is easier to divide sources of energy into two categories: renewable and nonrenewable energies.
Renewable vs. Nonrenewable Energy
Renewable energy sources can be easily defined as those that are naturally replenished regularly or over a relatively short time scale: biomass, hydropower, geothermal, wind, solar, etc. Other energy sources are nonrenewable: petroleum, natural gas, coal, uranium, etc.
Energy harvesting converts wasted energy from all available energy sources (renewable or nonrenewable) into electricity.
In every energy transformation, there is a certain amount of wasted energy because the efficiency of each energy converter is lower than 1. Everybody knows that a solar panel exposed to the sun heats up while producing electricity. This heat comes from the light itself, and it represents the wasted energy (together with the light reflected from the surface of the panel).
By coupling a thermoelectric generator to the solar panel, a portion of this heat can be converted into electricity, though only a small one. This is mainly because it is not easy to establish large temperature differences from one side of the thermoelectric generator to the other.
Energy Density for Different Energy Harvesting Technologies
The following table, from a Texas Instruments whitepaper, summarizes the density of energy for different sources and technologies.
TI states that "The most promising micro-harvesting technologies extract energy from vibration, temperature differentials, and light. A fourth possibility—scavenging energy from RF emissions—is interesting, but the energy availability is at least an order of magnitude less than that of the first three."
Table 1. Energy harvesting estimates, from Texas Instruments
Energy source | Harvested power
Vibration and motion | 4 μW/cm2 to 100 μW/cm2
Temperature difference | 25 μW/cm2 to 10 mW/cm2
RF | 0.001 μW/cm2 to 0.1 μW/cm2
These values should help you understand that the best energy harvester for your application depends on the application itself.
For example, for an application in a wild remote area, the most readily available energy source may be the sun, so a solar panel may represent the ideal solution for the majority of the situations. On the other hand, in a mine, there is almost no light and the temperature is almost the same between the rocks and the air, so it is impossible to use solar and thermal energy harvesting. But what about vibrations? If your purpose is to monitor mine carts, the vibrations of the carts moving on their rails could be converted into electricity to power their sensors.
Of course, it is completely possible to use more than one energy harvester at a time.
In my next article, I will go into more depth on the subject of vibration energy harvesting.
Share your questions and ideas in the comments below.
Analysis of a chemostat model for bacteria and virulent bacteriophage
DCDS-B Home
Optimal control of treatments in a two-strain tuberculosis model
November 2002, 2(4): 483-494. doi: 10.3934/dcdsb.2002.2.483
Stability of stationary solutions of the forced Navier-Stokes equations on the two-torus
Chuong V. Tran 1, Theodore G. Shepherd 1, and Han-Ru Cho 1
Department of Physics, University of Toronto, 60 St. George Street, Toronto, ON, Canada M5S 1A7, Canada
Received January 2002 Revised June 2002 Published August 2002
We study the linear and nonlinear stability of stationary solutions of the forced two-dimensional Navier-Stokes equations on the domain $[0,2\pi]\times[0,2\pi/\alpha]$, where $\alpha\in(0,1]$, with doubly periodic boundary conditions. For the linear problem we employ the classical energy--enstrophy argument to derive some fundamental properties of unstable eigenmodes. From this it is shown that forces of pure $x_2$-modes having wavelengths greater than $2\pi$ do not give rise to linear instability of the corresponding primary stationary solutions. For the nonlinear problem, we prove the equivalence of nonlinear stability with respect to the energy and enstrophy norms. This equivalence is then applied to derive optimal conditions for nonlinear stability, including both the high- and low-Reynolds-number limits.
Keywords: Two-dimensional Navier–Stokes equations, linear stability, asymptotic (global) stability.
Mathematics Subject Classification: 34D, 35Q30, 7.
Citation: Chuong V. Tran, Theodore G. Shepherd, Han-Ru Cho. Stability of stationary solutions of the forced Navier-Stokes equations on the two-torus. Discrete & Continuous Dynamical Systems - B, 2002, 2 (4) : 483-494. doi: 10.3934/dcdsb.2002.2.483
Live above- and belowground biomass of a Mozambican evergreen forest: a comparison of estimates based on regression equations and biomass expansion factors
Tarquinio Mateus Magalhães
Forest Ecosystems (2015) 2:28
© Magalhães. 2015
Biomass regression equations are claimed to yield more accurate biomass estimates than biomass expansion factors (BEFs). Yet, national and regional biomass estimates are generally calculated based on BEFs, especially when using national forest inventory data. Comparisons of regression equation based and BEF-based biomass estimates are scarce. Thus, this study was intended to compare these two commonly used methods for estimating tree and forest biomass with regard to errors and biases.
The data were collected in 2012 and 2014. In 2012, a two-phase sampling design was used to fit tree component biomass regression models and determine tree BEFs. In 2014, additional trees were felled outside sampling plots to estimate the biases associated with regression equation based and BEF-based biomass estimates; those estimates were then compared in terms of the following sources of error: plot selection and variability, biomass model, model parameter estimates, and residual variability around model prediction.
The regression equation based below-, aboveground and whole tree biomass stocks were, approximately, 7.7, 8.5 and 8.3 % larger than the BEF-based ones. For the whole tree biomass stock, the percentage of the total error attributed to the first phase (random plot selection and variability) was 90 and 88 % for regression- and BEF-based estimates, respectively, with the remainder attributed to the biomass models (regression and BEF models, respectively). The percent biases of regression equation based and BEF-based biomass estimates for the whole tree biomass stock were −2.7 and 5.4 %, respectively. The errors due to model parameter estimates, those due to residual variability around model prediction, and the percentage of the total error attributed to the biomass model were larger for BEF models (than for regression models), except for the stem and stem wood components.
The regression equation based biomass stocks were found to be slightly larger, associated with relatively smaller errors and less biased than the BEF-based ones. For stem and stem wood, the percentages of their total errors (as total variance) attributed to the BEF model were considerably smaller than those attributed to the biomass regression equations.
Androstachys johnsonii Prain
Mecrusse
Root growth
Biomass additivity
Double sampling
Forest biomass inventory
Carbon allocation
Carbon dioxide sequestration and storage associated with forest ecosystems is an important mechanism for regulating anthropogenic emissions of this gas and contributes to the mitigation of global warming (Husch et al. 2003). The estimation of carbon stock in forest ecosystems must include measurements in the following carbon pools (Brown 1999; Brown 2002; IPCC 2006; Pearson et al. 2007): live aboveground biomass (AGB) (trees and non-tree vegetation), belowground biomass (BGB), dead organic matter (dead wood and litter biomasses), and soil organic matter.
Biomass can be measured or estimated by in situ sampling or remote sensing (Lu 2006; Ravindranath 2008; GTOS 2009; Vashum and Jayakumar 2012). The in situ sampling, in turn, is divided into destructive direct biomass measurement and non-destructive biomass estimation (GTOS 2009; Vashum and Jayakumar 2012).
Non-destructive biomass estimation does not require harvesting trees; it uses biomass equations to estimate biomass at the tree level and sampling weights to estimate biomass at the forest level (Pearson et al. 2007; GTOS 2009; Soares and Tomé 2012). When biomass equations are fitted using least squares they are called biomass regression equations. Biomass regression equations are developed as linear or non-linear functions of one or more tree-level dimensions. On the other hand, when they are fitted in such a way that they specify tree component biomass as directly proportional to stem volume, the ratios of proportionality are then called component biomass expansion factors (BEFs). However, biomass equations (either regressions or BEFs) are developed from destructively sampled trees (Carvalho and Parresol 2003; Carvalho 2003; Dutca et al. 2010; Marková and Pokorný 2011; Sanquetta et al. 2011; Mate et al. 2014; Magalhães and Seifert 2015 a, b, c).
Biomass regression equations yield the most accurate estimates (IPCC 2003; Jalkanen et al. 2005; Zianis et al. 2005; António et al. 2007; Soares and Tomé 2012) as long as they are derived from a large enough number of trees (Husch et al. 2003; GTOS 2009). Nonetheless, national and regional biomass estimates are generally calculated based on BEFs (Magalhães and Seifert 2015c), especially when using national forest inventory data (Schroeder et al. 1997; Tobin and Nieuwenhuis 2007).
Jalkanen et al. (2005) compared regression equations based and BEF-based biomass estimates for pine-, spruce- and birch-dominated forests and mixed forests and concluded that BEF-based biomass estimates were lower and associated with larger error than regression equations based biomass estimates. However, no similar studies have been conducted for tropical natural forests.
The objective of this particular study was to compare regression equations based and BEF-based above- and belowground biomass estimates for an evergreen forest in Mozambique with regard to the following sources of errors: (1) random plot selection and variability, (2) biomass model, (3) model parameter estimates, and (4) residual variability around model prediction. Therefore, the precision and bias associated with those estimates were critically analysed. This study is a follow-up of the study by Magalhães and Seifert (2015b). However, unlike the study by those authors, which considered only five tree components, the current study is extended to 11 components (taproot, lateral roots, root system, stem wood, stem bark, stem, branches, foliage, crown, shoot system, and whole tree), and to bias analyses not considered by Magalhães and Seifert (2015b, c) for either method of estimating biomass.
The study was conducted in Mozambique, in an evergreen forest type named Mecrusse. Mecrusse is a forest type where the main species, often the only one, in the upper canopy is Androstachys johnsonii Prain (Mantilla and Timane 2005). A. johnsonii is an evergreen tree species (Molotja et al. 2011), the sole member of the genus Androstachys in the Euphorbiaceae family. Mecrusse woodlands are mainly found in the southernmost part of Mozambique, in Inhambane and Gaza provinces, and in Massangena, Chicualacuala, Mabalane, Chigubo, Guijá, Mabote, Funhalouro, Panda, Mandlakaze, and Chibuto districts. The easternmost Mecrusse forest patches, located in Mabote, Funhalouro, Panda, Mandlakaze, and Chibuto districts, were defined as the study area and encompassed 4,502,828 ha (Dinageca 1997), of which 226,013 ha (5 %) were Mecrusse woodlands. Maps showing the area of natural occurrence of mecrusse in Inhambane and Gaza provinces and the study area, along with a detailed description of the species and the forest type, can be found in Magalhães and Seifert (2015c) and Magalhães (2015).
The data were collected in 2012 and 2014. In 2012, a two-phase sampling design was used to determine tree component biomass. In the first phase, diameter at breast height (DBH) and total tree height of 3574 trees were measured in 23 randomly located circular plots (20-m radius). Only trees with DBH ≥5 cm were considered. In the second phase, 93 A. johnsonii trees (DBH range: 5–32 cm; height range: 5.69–16 m) were randomly selected from those analysed during the first phase for destructive measurement of tree component biomass along with the variables from the first phase. Maps showing the distribution of the 23 random plots in the study area and in the different site classes are shown by Magalhães and Seifert (2015c) and Magalhães (2015).
In 2014, an additional 37 trees (DBH range: 5.5–32 cm; height range: 7.3–15.74 m) were felled outside sampling plots, 21 inside and 16 outside the study area. The 93 trees collected in 2012 were used to fit tree component biomass regression models and determine tree component BEFs, and those collected in 2014 (37 trees) were used to estimate the biases associated with regression equation based and BEF-based tree component biomass estimates.
The felled trees (both from 2012 and 2014) were divided into the following components: (1) taproot + stump; (2) lateral roots; (3) root system (1 + 2); (4) stem wood; (5) stem bark; (6) stem (4 + 5); (7) branches; (8) foliage; (9) crown (7 + 8); (10) shoot system (6 + 9); and (11) whole tree (3 + 10). Tree components were sampled and the dry weights estimated as described by Magalhães and Seifert (2015, a, b, c, d, e) and Magalhães (2015).
Tree component biomass
The distinction between biomass regression equations (or simply regression equations) and biomass expansion factors (BEFs) may be confusing as BEF is a biomass equation (equation that yields biomass estimates), it is a regression through the origin of biomass on stem volume where, therefore, the BEF value is the slope. For clarity, in this study, biomass regression equations refer to the biomass equations where the regression coefficients are obtained using least squares (Montgomery and Peck 1982) such that the sum of squares of the difference between the observed and expected value is minimum (Jayaraman 2000), unlike BEF which is not obtained using least squares.
Biomass estimation typically requires estimation of tree components and total tree biomass (Seifert and Seifert 2014). To ensure the additivity of minor component biomass estimates into major components and whole tree biomass estimates, minor component, major component and whole tree biomass models were fitted using the same regressors (Parresol 1999; Goicoa et al. 2011). For this, first the best tree component and whole tree biomass regression equations were selected by running various possible linear regressions on combinations of the independent variables (DBH, tree height) and evaluating them using the following goodness of fit statistics: coefficient of determination (R2), standard deviation of residuals (Sy.x), mean residual (MR), and graphical analysis of residuals. The mean residual and the standard deviation of residuals were expressed as relative values, hereafter referred to as percent mean residual (MR (%)) and coefficient of variation of residuals (CVr (%)), respectively, which are more revealing. The computation and interpretation of these fit statistics were previously described by Mayer (1941), Gadow & Hui (1999), Ruiz-Peinado et al. (2011), and Goicoa et al. (2011).
Among the different model forms tested (Y = b0 + b1D2, Y = b0 + b1D2 + b2H and Y = b0 + b1D2H, where b0 and b1 are regression coefficients, D is the DBH and H is the tree height), the model form Y = b0 + b1D2H was the best for 8 tree components and for the whole tree biomass, and the second best for the remaining tree components, as judged by the goodness of fit statistics described above. Therefore, to allow all tree components and whole tree biomass models to have the same regressors, and thus achieve additivity, this model form was generalized for all tree components and whole tree biomass models.
Linear weighted least squares were used to address heteroscedasticity. The weight functions were obtained by iteratively finding the optimal weight that homogenised the residuals and improved other fit statistics. Among the tested weight functions (1/D, 1/D2, 1/DH, 1/D2H), the best weight function was found to be 1/D2H for all tree components and whole tree biomass models. Although the selected weight function may not have been the best one among all possible weights, it was the best approximation found.
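For readers who want to reproduce this kind of fit, the following minimal Python sketch shows a weighted least squares fit of Y = b0 + b1·D²H with weight 1/(D²H); the tree data and the resulting coefficients are purely hypothetical and only illustrate the procedure:

import numpy as np

# Hypothetical destructive-sample data: DBH (cm), total height (m), component biomass (kg).
D = np.array([ 6.0, 12.0, 18.0, 24.0, 30.0])
H = np.array([ 6.5,  9.0, 11.5, 13.0, 15.0])
Y = np.array([ 9.0, 55.0, 170.0, 340.0, 610.0])

x = D**2 * H          # combined variable D^2 * H
w = 1.0 / x           # weight 1/(D^2 H) used to homogenise the residuals

# Weighted least squares for Y = b0 + b1 * D^2 H: solve (X'WX) b = X'WY
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
b0, b1 = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)
print(f"b0 = {b0:.4f}, b1 = {b1:.6f}")

# Residual-based fit statistic (CVr as residual standard error relative to the mean).
residuals = Y - (b0 + b1 * x)
cvr = 100 * np.sqrt(np.sum(residuals**2) / (len(Y) - 2)) / Y.mean()
print(f"CVr = {cvr:.1f} %")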
Linear models were preferred over nonlinear models because the procedure of enforcing additivity by using the same regressors is only applicable for linear models (Parresol 1999; Goicoa et al. 2011) and because the procedure of combining the error of the first and second sampling phases in double sampling (Cunia 1986a) is limited to biomass regressions estimated by linear weighted least squares (Cunia 1986a).
The regression equation based and the BEF-based biomass of the c component of the k th tree in the h th plot (Ŷ hk ) is determined by Eq. (1) and Eq. (2), respectively:
$$ {\widehat{Y}}_{hk}={b}_0+{b}_1{D}_{hk}^2{H}_{hk} $$
$$ {\widehat{Y}}_{hk}= BE{F}_c\times {v}_{hk}= BE{F}_c\times \frac{\pi }{4}\times {D}_{hk}^2\times {H}_{hk}\times ff $$
where \( v_{hk} \), \( D_{hk} \) and \( H_{hk} \) represent the stem volume, DBH and tree height of the k-th tree in the h-th plot, and \( ff \) and \( BEF_c \) represent the average Hohenadl form factor (0.4460) and the tree component BEFs of A. johnsonii estimated by Magalhães and Seifert (2015c).
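As a minimal numerical sketch of Eqs. (1, 2): only the form factor ff = 0.4460 below is taken from the text; the regression coefficients and the component BEF are hypothetical placeholders, so the printed values are purely illustrative.

import math

D  = 20.0        # DBH of a hypothetical tree, in cm
H  = 12.0        # total height, in m
ff = 0.4460      # average Hohenadl form factor reported in the text

# Eq. (1): regression-based component biomass (kg), hypothetical coefficients
b0, b1 = 0.5, 0.03
y_reg = b0 + b1 * D**2 * H

# Eq. (2): BEF-based component biomass; stem volume (m^3) needs D in m
bef = 0.73                                   # Mg m^-3, hypothetical component BEF
v   = math.pi / 4 * (D / 100.0)**2 * H * ff  # stem volume (m^3)
y_bef = bef * v * 1000.0                     # Mg -> kg

print(f"regression-based biomass: {y_reg:.1f} kg")
print(f"BEF-based biomass:        {y_bef:.1f} kg")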
Computing BEF-based biomass is similar to computing the biomass with a regression equation of tree component biomass on stem volume passing through the origin, where, therefore, b0 = 0 and b1 = BEFc. In fact, in ratio estimators, the ratio R (the BEF value, in this case) is the regression slope when the regression line passes through the origin (Johnson 2000). Given that fact, Eqs. (1, 2) can be presented as one, in matrix form as follows:
$$ {\widehat{Y}}_{hk}=b{X}_{hk} $$
where \( b=\left[\begin{array}{cc} {b}_0 & {b}_1 \end{array}\right] \) and \( {X}_{hk}={\left[\begin{array}{cc} 1 & {D}_{hk}^2{H}_{hk} \end{array}\right]}^T \) if \( b_0 \ne 0 \); and \( b=\left[\begin{array}{cc} 0 & {b}_1 \end{array}\right]= BE{F}_c \) and \( {X}_{hk}={\left[\begin{array}{cc} 0 & \frac{\pi }{4}{D}_{hk}^2{H}_{hk} ff \end{array}\right]}^T=\frac{\pi }{4}{D}_{hk}^2{H}_{hk} ff \) if \( b_0 = 0 \). T denotes matrix transpose.
The biomass of plot h (Ŷ h ) is estimated by summing the individual biomass (Ŷ hk ) values of the n h trees in plot h. Dividing Ŷ h by plot size a gives biomass Ŷ on an area basis:
$$ \widehat{Y}=\frac{{\widehat{Y}}_h}{a}=\frac{b{\displaystyle \sum_{k=1}^{nh}{X}_{hk}}}{a} $$
where k = 1, 2, …, n h , and h = 1, 2, …, n p , n p = number of plots in the sample, and n h = number of trees in the h th plot.
Denoting \( {S}_h=\frac{{\displaystyle \sum_{k=1}^{nh}{X}_{hk}}}{a} \), Eq. (4) can be rewritten as:
$$ \widehat{Y}=b{S}_h $$
where \( {S}_h={\left[\begin{array}{cc} {S}_{h0} & {S}_{h1} \end{array}\right]}^T \), with \( {S}_{h0}=\frac{n_h}{a} \) and \( {S}_{h1}=\frac{{\displaystyle \sum_{k=1}^{n_h}{D}_{hk}^2{H}_{hk}}}{a} \) if \( b_0 \ne 0 \); and \( S_{h0} = 0 \) and \( {S}_{h1}=\frac{{\displaystyle \sum_{k=1}^{n_h}\frac{\pi }{4}{D}_{hk}^2{H}_{hk} ff}}{a} \) if \( b_0 = 0 \).
The biomass stock Ȳ (average biomass per hectare) is estimated by summing the biomass Ŷ of each plot (area basis) and dividing it by the number of plots n p :
$$ \overline{Y}=\frac{b{S}_h}{n_p} $$
Now, denoting \( Z=\frac{S_h}{n_p} \), Eq. (6) can be rewritten as follows:
$$ \overline{Y}=bZ $$
where \( Z={\left[\begin{array}{cc} {Z}_0 & {Z}_1 \end{array}\right]}^T \) if \( b_0 \ne 0 \); and \( Z={\left[\begin{array}{cc} 0 & {Z}_1 \end{array}\right]}^T={Z}_1 \) if \( b_0 = 0 \).
Recall that b is the row vector of the estimates from the second sampling phase (regression coefficients or BEF values), and Z is the column vector of the estimates from the first phase.
Eqs. (2, 3, 4, 5, 6, 7) were applied to estimate biomass stock of each tree component and whole tree.
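A compact sketch of the first-phase bookkeeping in Eqs. (4–7), for the regression case (b0 ≠ 0); the plot data and coefficients are hypothetical toy values, and the 20-m plot radius follows the inventory described above:

import math

# Hypothetical first-phase data: each plot is a list of (DBH cm, height m) tuples.
plots = [
    [(8.0, 7.0), (15.0, 10.0), (22.0, 12.5)],
    [(10.0, 8.0), (18.0, 11.0)],
    [(12.0, 9.0), (25.0, 13.0), (30.0, 14.5), (9.0, 7.5)],
]
a = math.pi * 20.0**2 / 10000.0          # plot area in ha (20-m radius)

# S_h = [n_h / a, sum(D^2 H) / a] for each plot (Eq. 5, regression case)
S = [(len(p) / a, sum(d * d * h for d, h in p) / a) for p in plots]

# Z is the across-plot mean of S_h; the biomass stock is then b * Z (Eqs. 6-7)
n_p = len(plots)
Z = [sum(s[i] for s in S) / n_p for i in range(2)]

b = (0.5, 0.03)                          # hypothetical regression coefficients (kg)
Ybar_kg_per_ha = b[0] * Z[0] + b[1] * Z[1]
print(f"biomass stock: {Ybar_kg_per_ha / 1000.0:.2f} Mg ha^-1")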
Biomass stock [Eq. (7)] is estimated by combining the estimates of the first and second phases (Z and b, respectively). Two main sources of error must be accounted for in this calculation, that resulting from plot-level variability (first sampling phase) and that from biomass equation: either regression or BEF equation (second phase).
Cunia (1965, 1986a, 1986b, 1990) demonstrated that the total variance of Ȳ (mean biomass per hectare) can be estimated by Eq. (8):
$$ VA{R}_t=VA{R}_1+VA{R}_2=b\times {S}_{ZZ}\times {b}^T+Z\times {S}_{bb}\times {Z}^T $$
where VAR 1 and VAR 2 are variance components from the first and second sampling phases, respectively; S zz represents the variance–covariance matrix of vector Z T ; and S bb represents the variance–covariance matrix of vector b. For this specific case, S bb and S zz are given in Eqs. (9, 10):
$$ {S}_{bb}=\left[\begin{array}{cc} {S}_{b_0{b}_0} & {S}_{b_0{b}_1} \\ {S}_{b_0{b}_1} & {S}_{b_1{b}_1} \end{array}\right] $$
$$ {S}_{zz}=\left[\begin{array}{cc} {S}_{z_0{z}_0} & {S}_{z_0{z}_1} \\ {S}_{z_0{z}_1} & {S}_{z_1{z}_1} \end{array}\right] $$
where \( {S}_{b_i{b}_j} \) is the covariance of \( b_i \) and \( b_j \), \( {S}_{b_i{b}_i} \) is the variance of \( b_i \), \( {S}_{z_i{z}_j}=\frac{{\displaystyle \sum_{h=1}^{n_p}\left({S}_{hi}-{\overline{S}}_i\right)\left({S}_{hj}-{\overline{S}}_j\right)}}{\left({n}_p-1\right){n}_p} \) is the covariance of \( Z_i \) and \( Z_j \), and \( {S}_{z_i{z}_i} \) is the variance of \( Z_i \).
Note that if b 0 = 0 (and then b 1 = BEFc), \( {S}_{b_i{b}_j}=0 \) and \( {S}_{z_i{z}_j}=0 \), therefore, \( {S}_{bb}={S}_{b_1{b}_1} \) and \( {S}_{zz}={S}_{z_1{z}_1} \). Consequentely, \( VA{R}_t= BE{F}_c\times {S}_{Z_1{Z}_1}\times BE{F}_c+{Z}_1\times {S}_{b_1{b}_1}\times {Z}_1 \) which is equal to:
$$ VA{R}_t= BE{F}_c^2\times {S}_{Z_1{Z}_1}+{Z}_1^2\times {S}_{b_1{b}_1} $$
The square roots of Eqs. (8, 11) are the total standard errors (SE) of Ȳ, the square roots of the first components of Eqs. (8, 11) are the SEs of the first phase, and the square roots of the second components of the same equations are the SEs of the second phase of the relevant methods of estimating biomass stock.
In this study, the error of Ȳ of the first and second sampling phases, and of both phases combined is expressed as the percent SE of the relevant phase or both phases combined, obtained by dividing the relevant SE by Ȳ and multiplying by 100. However, in some cases, the error is expressed as the variance of Ȳ, especially where the proportional influence of a particular source of error needs to be known, because, unlike the SEs, the variances of the first and second phases are additive (sum to total variance) (Cunia 1990).
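To illustrate how Eq. (8) combines the two phases, the short sketch below uses hypothetical values of b, Z and their covariance matrices (none of these numbers come from the paper); it only demonstrates the matrix algebra and how the variance splits between phases:

import numpy as np

# Hypothetical second-phase estimates (regression coefficients) and their covariance S_bb.
b   = np.array([0.5, 0.03])
Sbb = np.array([[0.02,  -1.0e-5],
                [-1.0e-5, 1.0e-7]])

# Hypothetical first-phase estimates Z and their covariance S_zz (plot-to-plot variability).
Z   = np.array([150.0, 60000.0])
Szz = np.array([[400.0, 1.5e4],
                [1.5e4, 9.0e6]])

var1 = b @ Szz @ b        # first-phase component: plot selection and variability
var2 = Z @ Sbb @ Z        # second-phase component: biomass model
var_total = var1 + var2   # Eq. (8)

Ybar = b @ Z
print(f"share of variance from phase 1: {100 * var1 / var_total:.0f} %")
print(f"total SE: {100 * np.sqrt(var_total) / Ybar:.1f} % of the mean biomass")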
As said previously, the error of the first sampling phase results from random plot selection and variability, and that of the second phase results from the biomass model (either regression or BEF model). McRoberts and Westfall (2015), Henry et al. (2015), Temesgen et al. (2015), and Picard et al. (2014) distinguish four sources of errors (surrogates of uncertainty) in model prediction: (1) model misspecification (also known as the statistical model error, i.e., the error due to model selection (Cunia 1986a)), (2) uncertainty in the values of the independent variables, (3) uncertainty in the model parameter estimates, and (4) residual variability around model prediction.
The first source of error in model prediction arises from the fact that changing the model will generally change the estimates. Here, this error is expected to be negligible as, in general, the predictors explained a large portion of the variation in biomass and because the models were associated with a small error (CVr) (Table 1). In fact, according to Cunia (1986a) and McRoberts and Westfall (2015), when the statistical model used fits the sample data reasonably well, the statistical model error is generally small and can be ignored. The second source of error is quantified by Magalhães and Seifert (2015b). The third source of error is expressed by the parameter variance-covariance matrix, S bb . In this study, this source of error is expressed by the standard errors of the regression parameters or of the BEF values, as they are the square roots of the respective variances obtained from the variance-covariance matrix, S bb . The fourth source (residual variability around model prediction) is here expressed as the coefficient of variation of residuals (CVr); as it measures the dispersion between the observed and the estimated values of the model, it indicates the error that the model is subject to when it is used for predicting the dependent variable.
Table 1. Regression coefficients (±SE), BEF values (±SE) and the fit statistics for each tree component and for total biomass

Tree component | b0 (±SE) | b1 (±SE) or BEF (Mg m−3) (±SE) | R2 (%) | Sy.x (Kg) | CVr (%) | MR (%)
Taproot + stump | 1.3122 (±36.69 %)b | 0.0045 (±4.44 %)c | | | |
Lateral roots | −1.0600 (±43.02 %)a | | | | |
Root system (1 + 2) | 0.2522 (±251.15 %)ns | 0.0097 (±2.06 %)c | | | | −0.0709
Stem wood | 0.6616 (±173.08 %)ns | | | | |
Stem bark | | | | | |
Stem (4 + 5) | −0.4569 (±332.74 %)ns | 0.7602 (±15.85 %)c | | | |
Crown (7 + 8) | | | | | |
Shoot system (6 + 9) | | | | | |
Whole tree (3 + 10) | | | | | |
BEF model | | | | | |

SE standard error (%); "c" = significant at α = 0.001; "b" = significant at α = 0.01; "a" = significant at α = 0.05; ns = not statistically significant at α = 0.05; the major components and their values are indicated in bold font
Therefore, the methods of estimating biomass under study (regression and BEF models) were compared with regard to the following sources of errors: (1) random plot selection and variability, (2) biomass model, (3) model parameter estimates, and (4) residual variability around model prediction. The first constitutes the error of the first sampling phase and the second constitutes the error of the second phase which incorporates the third and fourth source of errors.
The percent biases resulting from regression equation based and from BEF-based estimates were determined by Eq. (12) using an independent sample of 37 trees (trees not included in fitting the models):
$$ Bias\left(\%\right)=\frac{{\displaystyle \sum P{B}_k-{\displaystyle \sum O{B}_k}}}{{\displaystyle \sum P{B}_k}}\times 100 $$
where \( PB_k \) and \( OB_k \) represent, respectively, the predicted and observed biomass of the c component of the k-th tree.
As described above, the regression-based biomass is estimated by the model form \( Y = b_0 + b_1 D^2H \) [kg] and the BEF-based one is estimated by \( Y=BEF\times {v}_{hk}=BEF\times \frac{\pi }{4}\times {D}^2H\times ff \) [Mg], which is equal to \( Y=BEF\times \frac{\pi }{4}\times {D}^2H\times ff\times 1000 \) [kg]; since \( v_{hk} \) and H are expressed in m3 and m, respectively, D must be converted to m, which makes the BEF-based biomass (in kg) equal to \( Y=BEF\times \frac{\pi }{40000}\times {D}^2H\times ff\times 1000=BEF\times \frac{\pi }{40}\times {D}^2H\times ff \) if D is expressed in cm.
From Table 1 it can be seen that 8 out of the 11 regression equations have intercepts that are not statistically significant at α = 0.05; therefore, the regression equation can be generalized as \( Y = b_1 D^2H \) [kg] and the BEF model as \( Y={\tilde{b}}_1{D}^2H \) [kg], where \( {\tilde{b}}_1=\frac{BEF\times \pi \times ff}{40} \). Thus, to estimate the percentage difference between regression-based and BEF-based biomasses at a given D2H, b1 and \( {\tilde{b}}_1 \) were contrasted; i.e., the percentage magnitude of \( {\tilde{b}}_1 \) in relation to b1 was taken as an indication of how the different models (regression and BEF models) estimate biomass from a given D2H. Additionally, the average b1 and \( {\tilde{b}}_1 \) for all components at a given D2H were compared using Student's t-test.
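The conversion of a BEF into the equivalent slope \( {\tilde{b}}_1 \), and the percent bias of Eq. (12), can be sketched as follows; ff and the stem BEF are the values reported in the text, while the observed and predicted biomasses of the validation trees are hypothetical:

import math

ff  = 0.4460          # Hohenadl form factor from the text
bef = 0.7334          # stem BEF (Mg m^-3) reported in the text

# Equivalent slope of the BEF model written as Y = b1_tilde * D^2 * H (kg; D in cm, H in m)
b1_tilde = bef * math.pi * ff / 40.0
print(f"b1_tilde = {b1_tilde:.5f}")

# Percent bias (Eq. 12) over an independent validation sample (hypothetical values).
observed  = [120.0, 210.0, 340.0, 95.0]
predicted = [125.0, 200.0, 355.0, 100.0]
bias_pct = 100.0 * (sum(predicted) - sum(observed)) / sum(predicted)
print(f"bias = {bias_pct:.2f} %")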
Furthermore, the estimation errors (defined as the percentage difference between predicted and observed biomass values) of the individual trees from 2014 for each method of estimating biomass were plotted against those trees' D2H to evaluate the under- or overestimation associated with each method. Further, the average errors at a given D2H per tree (for each method) were compared using Student's t-test. All the statistical analyses were performed at α = 0.05.
For all tree components and the whole tree, except foliage, the variation of biomass explained by the predictor variable(s) ranged from 82.14 to 97.75 % for regression models and from 74.54 to 98.85 % for BEF models (Table 1). In general, the variation of biomass explained by the predictor variable(s) was larger in regression models than in BEF ones, except for stem and stem wood (Table 1). Less than half of the variation of foliage biomass was explained by the predictor variable(s). All tree components presented non-significant MRs. The plots of the residuals presented no particular trend (refer to Magalhães and Seifert (2015a, b)); the cluster of points was contained in a horizontal band, with the residuals evenly distributed under and over the axis of abscissas, meaning that there were no model defects.
The errors due to model parameter estimates (SE) and those due to residual variability around model prediction (CVr) are larger for BEF models, except for stem and stem wood components.
The regression equation based biomass stock estimates were relatively larger than the BEF-based ones, except for foliage (Table 2). For example, the regression equation based BGB, AGB and whole tree biomass stocks were 7.7, 8.5 and 8.3 % larger than the BEF-based ones. However, the proportion of the whole tree biomass allocated to each tree component is similar in either method; for instance, BGB, stem, and crown biomass accounted for 20, 56 and 24 %, respectively, of whole tree biomass for both methods. The property of additivity is achieved in both methods, for the whole tree biomass and for all major tree components. This is so because for each particular method (regression or BEF), all tree component models used the same predictors (DBH and H for regression models and stem volume for BEF models).
Table 2. Regression equation based and BEF-based tree component biomass

Tree component | Regression equation based biomass (Mg ha−1) | BEF-based biomass (Mg ha−1)

The major components and their values are indicated in bold font
Overall, the percent SEs of the first sampling phase (error resulting from plot selection and variability) of the BEF-based biomass estimates were slightly, and sometimes negligibly, larger than those obtained using regression equations (Table 3), except for 2 tree components (lateral roots and branches) where the percent SEs were relatively smaller. In the second sampling phase considerable differences in percent SEs were found; BEF-based estimates exhibited smaller percent SEs in 6 tree components and larger ones in the remaining five. The total percent SEs (both phases combined) were also negligibly different between the two methods of estimating biomass stocks, except for foliage, where a substantial difference was observed. Although the average tree component biomasses obtained by either method were slightly different (Table 2), they fell within the 95 % confidence interval of either method (Table 3).
Table 3. Absolute standard errors (Mg ha−1), percent standard errors, and 95 % confidence limits of the estimates of tree component biomass stocks for each sampling phase using regression equations and BEFs

Tree component | SE1 (%) | SE2 (%) | SEt (%) | 95 % CI (Mg ha−1) | 95 % CI (%)
Measures of precision for regression equations based biomass
Measures of precision for BEF-based biomass

Subscripts 1 and 2 indicate the first and second sampling phases, respectively; subscript t indicates the total standard error (SE) for a given component; the major components and their values are indicated in bold font
The percent SE of the first phase is a result of plot selection and variability, and that of the second phase is a result of biomass models (either regression or BEF models). From Table 4, it is noted that for both methods, the percentage of the total error (as total variance) attributed to first phase (plot selection) is larger than that attributed to second phase (biomass models), except for the foliage, branches and crown. The percentage of the total error (as total variance) attributed to BEF models is larger than that attributed to regression models in all tree components, except for stem wood, stem bark and stem (stem bark + stem wood). The percentage of the total error (as total variance) attributed to BEF model for stem wood and stem is more than twice as small as that attributed to regression model.
Table 4. Percentage of total error (as variance) attributed to each sampling phase

Tree component | Regression equation based biomass: % of variance attributed to the first phase (plot selection and variability) | % attributed to the second phase (regression model) | BEF-based biomass: % of variance attributed to the first phase (plot selection and variability) | % attributed to the second phase (BEF model)
The BEF-based biomass estimates were found to be more biased than the regression-based ones in 6 out of 11 tree components (Table 5). Overall, regression equation based biomasses tended to be larger than the observed biomasses, and the BEF-based ones tended to be smaller than the observed ones. As expected, the percent biases for the stem wood and stem BEF-based biomasses are considerably smaller than those of the regression based ones. Recall that the BEF models for stem wood and stem were found to be associated with larger R2, a smaller percentage of total error (as variance) attributed to the biomass model, smaller errors due to model parameter estimates and smaller errors due to residual variability around model prediction than the regression models.
Table 5. Comparison of bias between regression equation based and BEF-based biomass estimates

Tree component | Bias (%)a | Bias (%)b

Superscripts a and b indicate biases related to regression equation based and BEF-based biomass estimates, respectively; the major components and their values are indicated in bold font
It was found that at a given D2H, the regression-based biomass estimates tended to be considerably larger than the BEF-based ones (Table 6), supporting the finding from Table 2. However, it is worth mentioning that the percentage differences between the regression-based and BEF-based biomass estimates at a given D2H for taproot + stump, lateral roots, and foliage are overestimated, as for those components the intercepts are statistically significant and thus should not be removed from the model. For example, the regression-based biomass estimate at a given D2H for the taproot + stump was expected to be larger than the BEF-based one, and therefore in accordance with Table 2 (yielding a negative difference); however, the exclusion of the intercept caused the BEF-based biomass estimate at a given D2H to be larger, causing a positive difference. Accordingly, the real differences between the regression-based and the BEF-based biomass estimates at a given D2H for lateral roots and foliage are smaller than those presented in Table 6. Using Student's t-test, the average biomass estimates by each method at a given D2H were found to be statistically different (p-value = 0.01).
Table 6. Comparison between regression-based and BEF-based biomass at a given D2H

Tree component | b1 | \( {\tilde{b}}_1 \) | Difference (%)
The estimation errors per tree, plotted against the respective D2H values (Fig. 1) for the whole tree, show that the positive and negative errors of the regression model cancel each other, tending to average zero; in fact, Student's t-test showed that the average percent error (1.34 %) is not statistically different from zero (p-value = 0.51). On the other hand, the plot of the errors shows that the BEF model underestimates the biomass, a finding confirmed by Student's t-test (average error = −8.60, p-value = 0.0007).
Fig. 1 Comparison of the estimation errors of the regression model and the BEF model for the whole tree biomass
This study compares two commonly used methods of estimating tree and forest biomass: regression equations and biomass expansion factors. This is a unique study for many reasons: (1) the precision and bias associated with each method of estimating biomass are critically compared; the errors associated with biomass estimates are rarely evaluated carefully (Chave et al. 2004); (2) the comparison involved 11 tree components, including BGB, which is rarely studied (GTOS 2009); (3) in turn, BGB was divided into 2 root components: taproot and lateral roots.
Many biomass studies include only AGB, not broken down into further components (e.g. Overman et al. 1994; Grundy 1995; Eshete and Ståhl 1998; Pilli et al. 2006; Salis et al. 2006; Návar-Cháidez 2010; Suganuma et al. 2012; Sitoe et al. 2014; Mason et al. 2014), ignoring the fact that different tree components have distinct uses and decomposition rates, which affect the storage time of carbon and nutrients differently (Magalhães and Seifert 2015a). Aware of that, here the AGB is divided into 6 tree components (foliage, branches, crown, stem wood, stem bark, and stem).
Few studies have considered BGB (e.g. Kuyah et al. 2012; Mugasha et al. 2013; Green et al. 2007; Ryan et al. 2010; Ruiz-Peinado et al. 2011; Paul et al. 2014); in most of those studies the root system was not fully excavated (Green et al. 2007; Ryan et al. 2010; Ruiz-Peinado et al. 2011; Kuyah et al. 2012; and Paul et al. 2014): the excavation was done to a certain predefined depth, the fine roots were not considered, or some form of sampling procedure was used (Kuyah et al. 2012; Mugasha et al. 2013). These procedures of estimating BGB lead to underestimation or to less accurate estimates (Mokany et al. 2006; Mugasha et al. 2013). Furthermore, studies that have broken down BGB into further root components are limited.
The only studies available that compare regression equations based and BEF-based biomass estimates are those by Jalkanen et al. (2005) and Petersson et al. (2012), which, however, did not consider BGB. The finding that the whole tree BEF-based biomass estimate was 8.3 % lower, with a slightly larger percent error, than that based on the regression equation is in line with the finding by Jalkanen et al. (2005), who found that the BEF-based AGB estimate was 6.7 % lower.
It was verified here that the percentage of the total error of biomass (as total variance) attributed to the BEF model for stem wood and stem is more than twice as small as that attributed to the regression model; and that the BEF models for those tree components (stem wood and stem) were associated with larger R2, smaller biases, smaller errors due to model parameter estimates and smaller errors due to residual variability around model prediction than the regression models. Therefore, although it has been maintained that biomass regression equations yield more accurate estimates than BEFs (IPCC 2003; Jalkanen et al. 2005; Zianis et al. 2005; António et al. 2007; Soares and Tomé 2012), this might not be true where the stem and stem wood components are concerned. This is so because the stem BEF value is computed by dividing the stem biomass by the stem volume, which makes the stem BEF value similar to the stem wood density (specific gravity) and thus more realistic (than models using only DBH and tree height) when using it to convert stem volume to stem biomass, as biomass is a function of wood density (Ketterings et al. 2001). The same holds for stem wood biomass, since the difference between stem wood and stem biomass is negligible.
On the contrary, using stem volume to obtain any other tree component biomass through a BEF value is not realistic, since the density varies from component to component, leading to less accurate and less precise estimates. This is aggravated for the non-woody components, where the density value may differ greatly from the stem density value. In fact, it has been noted here that the BEF-based foliage biomass is associated with the largest percent error (11.55 %), and that 84 % of that error is attributed to the BEF model (Table 4), besides being associated with the largest error due to model parameter estimates and due to residual variability around model prediction (within and between methods).
In this study, the average stem density value of A. johnsonii trees was 754.42 Kg m−3 and the average stem BEF was 0.7334 Mg m−3 (733.40 Kg m−3). The small difference between these estimates might be due to the fact that the stem density was computed using saturated volume and the stem BEF value was computed using green volume. The stem density obtained here is in line with that by Bunster (2006) (754 Kg m−3) for the same tree species.
The errors of regression-based biomass estimates are the same as those obtained by Magalhães and Seifert (2015b) for the relevant tree components. However, the errors of the BEF-based estimates were slightly different from those obtained by Magalhães and Seifert (2015c); these differences might be attributed to the different approaches used to compute the errors.
The regression-based biomass estimates could have been more precise if non-linear regression models had been used instead of linear ones, as biomass is better described by non-linear functions (Bolte et al. 2004; Ter-Mikaelian and Korzukhin 1997; Schroeder et al. 1997; de Jong and Klinkhmer 2005; and Salis et al. 2006). However, the approach of combining the errors from the first and second phases developed by Cunia (1986a) is limited to linear regression models; using non-linear regression, the expression of the error (as variance) may be so complex that it may become extremely cumbersome to apply (Cunia 1986a). Nevertheless, the linear models used here performed satisfactorily; relatively lower performance was obtained for the foliage biomass model (R2 = 49.41 %; CVr = 66.21 %; MR = 1.55 %). Foliage biomass models have usually shown relatively poor performance (Brandeis et al. 2006; Mate et al. 2014).
A combined-variable model (Y = b0 + b1 × D2H) was used here to estimate tree component biomass. Silshi (2014) has noted that where compound derivatives of DBH and H are included there is no unique way to partition the variance in the response. However, the Monte Carlo error propagation approach can be applied to estimate the percent contribution of the measurement error of each variable (DBH and H) to the error of the biomass estimate, as performed by Magalhães and Seifert (2015b) and Chave et al. (2004), or using a Bayesian approach as done by Molto et al. (2012).
It has been maintained here that the error due to model misspecification was ignored because it is expected to be negligible, as overall the models fitted the sample data reasonably well. However, the foliage biomass models might be associated with a large model misspecification error as their predictors explained less than half of the variation in biomass, especially the foliage BEF model.
The current biomass estimates disregarded smaller and younger trees (DBH <5 cm), which may have led to underestimation, as those trees may have a significant contribution to the forest biomass stock and are reported to be very important in the United Nations Framework Convention on Climate Change (UNFCCC) reporting process (Black et al. 2004). For example, Vicent et al. (2015) found that small trees (DBH <10 cm) accounted for 7.2 % of aboveground live biomass, which is a considerable share. Lugo and Brown (1992) and Chave et al. (2003) maintained that small tree biomass (DBH <10 cm) is equivalent to 5 % of large tree biomass. Nevertheless, in this study, the share of small tree biomass in aboveground live biomass or relative to large tree biomass is expected to be much smaller than that reported by Lugo and Brown (1992), Chave et al. (2003) and Vicent et al. (2015), as the definition of small trees (DBH <5 cm) considered here includes only part of the trees considered small by those authors.
The regression equation based BGB and AGB stocks were, approximately, 33.6 ± 3.3 Mg ha−1 and 134.5 ± 12.9 Mg ha−1, respectively. The BEF-based BGB and AGB were, approximately, 30.1 ± 3.2 Mg ha−1and 123.1 ± 12.0 Mg ha−1, respectively.
Overall, the regression equation based biomass stocks were found to be slightly larger, associated with relatively smaller errors and less biased than the BEF-based ones. However, because the stem and stem wood BEFs are equivalent to the stem and stem wood densities (specific gravities), and the equivalent biomasses are therefore computed directly by multiplying stem volume by stem or stem wood density, the percentages of their total errors (as total variance) attributed to the BEF model were considerably smaller than those attributed to the biomass regression equations, as the regression equations were based only on DBH and stem height and ignored the stem density.
Abbreviations
AGB: Aboveground biomass
BGB: Belowground biomass
DBH: Diameter at breast height
BEF: Biomass expansion factor
MR: Mean residual
CVr: Coefficient of variation of residuals
R2: Coefficient of determination
SE: Standard error
This study was funded by the Swedish International Development Cooperation Agency (SIDA). Thanks are extended to Professor Thomas Seifert for his contribution in data collection methodology and to Professor Almeida Sitoe for his advices during the preparation of the field work. I would also like to thank Professor Agnelo Fernandes and Madeirarte Lda for financial and logistical support.
The author declares that he has no competing interests.
Departamento de Engenharia Florestal, Universidade Eduardo Mondlane, Campus Universitário, Edifício no.1, 257, Maputo, Mozambique
Antonio N, Tome M, Tome J, Soares P, Fontes L (2007) Effect of tree, stand and site variables on the allometry of Eucalyptus globulus tree biomass. Can J For Res 37:895–906
Black K, Tobin B, Siaz G, Byrne KA, Osborne B (2004) Allometric regressions for an improved estimate of biomass expansion factors for Ireland based on a Sitka spruce chronosequence. Irish Forestry 61(1):50–65
Bolte A, Rahmann T, Kuhr M, Pogoda P, Murach D, Gadow K (2004) Relationships between tree dimension and coarse root biomass in mixed stands of European beech (Fagus sylvatica L.) and Norway spruce (Picea abies [L.] Karst.). Plant Soil 264:1–11
Brandeis T, Matthew D, Royer L, Parresol B (2006) Allometric equations for predicting Puerto Rican dry forest biomass and volume. In: Proceedings of the Eighth Annual Forest Inventory and Analysis Symposium, pp 197–202
Brown S (1999) Guidelines for inventorying and monitoring carbon offsets in forest-based projects. Winrock International Institute for Agricultural Development, Arlington
Brown S (2002) Measuring, monitoring, and verification of carbon benefits for forest-based projects. Phil Trans R Soc Lond A 360:1669–1683
Bunster J (2006) Commercial timbers of Mozambique, Technological catalogue. Traforest Lda, Maputo, p 62
Carvalho JP (2003) Uso da propriedade da aditividade de componentes de biomassa individual de Quercus pyrenaica Willd. com recurso a um sistema de equações não linear. Silva Lusitana 11:141–152
Carvalho JP, Parresol BR (2003) Additivity of tree biomass components for Pyrenean oak (Quercus pyrenaica Willd.). For Ecol Manag 179:269–276
Chave J, Condit R, Lao S, Caspersen JP, Foster RB, Hubbell SP (2003) Spatial and temporal variation of biomass in a tropical forest: results from a large census plot in Panama. J Ecol 91:240–252
Chave J, Condit R, Aguilar S, Hernandez A, Lao S, Perez R (2004) Error propagation and scaling for tropical forest biomass estimates. Phil Trans R Soc Lond B 309:409–420
Cunia T (1965) Some theory on the reliability of volume estimates in a forest inventory sample. For Sci 11:115–128
Cunia T (1986a) Error of forest inventory estimates: its main components. In: Wharton EH, Cunia T (eds) Estimating tree biomass regressions and their error. NE-GTR-117. USDA, Forest Service, Northeastern Forest Experimental Station, Broomall, PA, pp 1–13
Cunia T (1986b) On the error of forest inventory estimates: double sampling with regression. In: Wharton EH, Cunia T (eds) Estimating tree biomass regressions and their error. NE-GTR-117. USDA, Forest Service, Northeastern Forest Experimental Station, Broomall, PA, pp 79–87
Cunia T (1990) Forest inventory: on the structure of error of estimates. In: LaBau VJ, Cunia T (eds) State-of-the-art methodology of forest inventory: a symposium proceedings. Gen. Tech. Rep. PNW-GTR-263. USDA, Forest Service, Pacific Northwest Research Station, Portland, pp 169–176
de Jong TJ, Klinkhamer PGI (2005) Evolutionary ecology of plant reproductive strategies. Cambridge University Press, New York, p 328
Dinageca (1997) Mapa digital de Uso e cobertura de terra. CENACARTA, Maputo
Dutca I, Abrudan IV, Stancioiu PT, Blujdea V (2010) Biomass conversion and expansion factors for young Norway spruce (Picea abies (L.) Karst.) trees planted on non-forest lands in Eastern Carpathians. Not Bot Hort Agrobot Cluj 38(3):286–292
Eshete G, Ståhl G (1998) Functions for multi-phase assessment of biomass in acacia woodlands of the Rift Valley of Ethiopia. For Ecol Manag 105:79–90
Goicoa T, Militino AF, Ugarte MD (2011) Modelling aboveground tree biomass while achieving the additivity property. Environ Ecol Stat 18:367–384
Green C, Tobin B, O'Shea M, Farrel EP, Byrne KA (2007) Above- and belowground biomass measurements in an unthinned stand of Sitka spruce (Picea sitchensis (Bong) Carr.). Eur J Forest Res 126:179–188
Grundy IM (1995) Wood biomass estimation in dry miombo woodland in Zimbabwe. For Ecol Manag 72:109–117
GTOS (2009) Assessment of the status of the development of the standards for the terrestrial essential climate variables. NRL, FAO, Rome, p 18
Henry M, Jara MC, Réjou-Méchain M et al (2015) Recommendations for the use of tree models to estimate national forest biomass and assess their uncertainty. Ann For Sci 72:769–777
Husch B, Beers TW, Kershaw JA Jr (2003) Forest mensuration, 4th edn. Wiley, Hoboken, p 443
IPCC (2003) Intergovernmental Panel on Climate Change. Good Practice Guidance for Land Use, Land-Use Change and Forestry. http://www.ipcc.ch
IPCC (2006) Intergovernmental Panel on Climate Change. Guidelines for National Greenhouse Gas Inventories. http://www.ipcc.ch
Jalkanen A, Mäkipää R, Stahl G, Lehtonen A, Petersson H (2005) Estimation of the biomass stock of trees in Sweden: comparison of biomass equations and age-dependent biomass expansion factors. Ann For Sci 62:845–851
Jayaraman K (2000) A statistical manual for forestry research. FORSPA, FAO, Bangkok, p 240
Johnson EW (2000) Forest sampling desk reference. CRC Press LLC, Florida, p 985
Ketterings QM, Coe R, van Noordwijk M, Ambagau Y, Palm CA (2001) Reducing uncertainty in the use of allometric biomass equations for predicting above-ground tree biomass in mixed secondary forest. For Ecol Manag 146:199–209
Kuyah S, Dietz J, Muthuri C, Jamnadass R, Mwangi P, Coe R, Neufeldt H (2012) Allometric equations for estimating biomass in agricultural landscapes: II. Belowground biomass. Agriculture, Ecosystems and Environment 158:225–234
Gadow Kv, Hui GY (1999) Modelling forest development. Kluwer Academic Publishers, Dordrecht, p 213
Lu D (2006) The potential and challenge of remote sensing-based biomass estimation. Int J Remote Sens 27:1297–1328
Lugo AE, Brown S (1992) Tropical forests as sinks of atmospheric carbon. For Ecol Manag 54:239–255
Magalhães TM (2015) Allometric equation for estimating belowground biomass of Androstachys johnsonii Prain. Carbon Balance and Management 10:16
Magalhães TM, Seifert T (2015a) Biomass modelling of Androstachys johnsonii Prain – a comparison of three methods to enforce additivity. International Journal of Forestry Research 2015:1–17
Magalhães TM, Seifert T (2015b) Estimation of tree biomass, carbon stocks, and error propagation in mecrusse woodlands. Open Journal of Forestry 5:471–488
Magalhães TM, Seifert T (2015c) Tree component biomass expansion factors and root-to-shoot ratio of Lebombo ironwood: measurement uncertainty. Carbon Balance and Management 10:9
Magalhães TM, Seifert T (2015d) Below- and aboveground architecture of Androstachys johnsonii Prain: topological analysis of the root and shoot systems. Plant Soil 394:257–269. doi:10.1007/s11104-015-2527-0
Magalhães TM, Seifert T (2015e) Estimates of tree biomass, and its uncertainties through mean-of-ratios, ratio-of-means, and regression estimators in double sampling: a comparative study of mecrusse woodlands. American Journal of Agriculture and Forestry 3(5):161–170
Mantilla J, Timane R (2005) Orientação para maneio de mecrusse. SymfoDesign Lda, Maputo, DNFFB, p 27
Marková I, Pokorný R (2011) Allometric relationships for the estimation of dry mass of aboveground organs in young highland Norway spruce stand. Acta Univ Agric Silvic Mendel Brun 59(6):217–224
Mason NWH, Beets PN, Payton I, Burrows L, Holdaway RJ, Carswell FE (2014) Individual-based allometric equations accurately measure carbon storage and sequestration in shrublands. Forests 5:309–324
Mate R, Johansson T, Sitoe A (2014) Biomass equations for tropical forest tree species in Mozambique. Forests 5:535–556
McRoberts RE, Westfall JA (2015) Propagating uncertainty through individual tree volume model predictions to large-area volume estimates. Annals of Forest Science. doi:10.1007/s13595-015-0473-x
Meyer HA (1941) A correction for a systematic error occurring in the application of the logarithmic volume equation. Forestry School Research, Pennsylvania
Mokany K, Raison RJ, Prokushkin AS (2006) Critical analysis of root:shoot ratios in terrestrial biomes. Global Change Biol 12:84–96
Molotja GM, Ligavha-Mbelengwa MH, Bhat RB (2011) Antifungal activity of root, bark, leaf and soil extracts of Androstachys johnsonii Prain. Afr J Biotechnol 10(30):5725–5727
Molto Q, Rossi V, Blanc L (2012) Error propagation in biomass estimation in tropical forests. Methods Ecol Evol 4:175–183
Montgomery DC, Peck EA (1982) Introduction to linear regression analysis. John Wiley & Sons, New York, p 504
Mugasha WA, Eid T, Bollandsås OM, Malimbwi RE, Chamshama SAO, Zahabu E, Katani JZ (2013) Allometric models for prediction of above- and belowground biomass of trees in the miombo woodlands of Tanzania. For Ecol Manag 310:87–101
Návar-Cháidez JJ (2010) Biomass allometry for tree species of Northwestern Mexico. Tropical and Subtropical Agroecosystems 12:507–519
Overman JPM, White HJL, Saldarriaca JG (1994) Evaluation of regression models for above-ground biomass determination in Amazon rainforest. J Trop Ecol 10:207–218
Parresol BR (1999) Assessing tree and stand biomass: a review with examples and critical comparisons. For Sci 45:573–593
Paul KI, Roxburgh SH, England JR, Brooksbank K, Larmour JS, Ritson P, Wildy D, Sudmeyer R, Raison RJ, Hobbs T, Murphy S, Sochacki S, McArthur G, Barton G, Jonson J, Theiveyanathan S, Carter J (2014) Root biomass of carbon plantings in agricultural landscapes of southern Australia: development and testing of allometrics. For Ecol Manag 318:216–227
Pearson TRH, Brown SL, Birdsey RA (2007) Measurement guidelines for the sequestration of forest carbon. United States Department of Agriculture, Forest Service, General Technical Report NRS-18
Petersson H, Holma S, Ståhl G, Algera D, Fridman J, Lehtonen A, Lundström A, Mäkipää R (2012) Individual tree biomass equations or biomass expansion factors for assessment of carbon stock changes in living biomass – a comparative study. For Ecol Manag 270(15):78–84
Picard N, Bosela FB, Rossi V (2014) Reducing the error in biomass estimates strongly depends on model selection. Annals of Forest Science. doi:10.1007/s13595-014-0434-9
Pilli R, Anfodillo T, Carrer M (2006) Towards a functional and simplified allometry for estimating forest biomass. For Ecol Manag 237:583–593
Ravindranath NH, Ostwald M (2008) Methods for estimating above-ground biomass. In: Ravindranath NH, Ostwald M (eds) Carbon inventory methods: handbook for greenhouse gas inventory, carbon mitigation and roundwood production projects. Springer Science + Business Media B.V., Dordrecht, pp 113–114
Ruiz-Peinado R, del Rio M, Montero G (2011) New models for estimating the carbon sink of Spanish softwood species. Forest Systems 20(1):176–188
Ryan CM, Williams M, Grace J (2010) Above- and belowground carbon stocks in a Miombo woodland landscape in Mozambique. Biotropica 11(11):1–10
Salis SM, Assis MA, Mattos PP, Pião ACS (2006) Estimating the aboveground biomass and wood volume of savanna woodlands in Brazil's Pantanal wetlands based on allometric correlations. For Ecol Manag 228:61–68
Sanquetta CR, Corte APD, Silva F (2011) Biomass expansion factors and root-to-shoot ratio for Pinus in Brazil. Carbon Bal Manage 6:1–8
Schroeder P, Brown S, Mo J, Birdsey R, Cieszewski C (1997) Biomass estimation for temperate broadleaf forest of the United States using inventory data. For Sci 43:424–434
Seifert T, Seifert S (2014) Modelling and simulation of tree biomass. In: Seifert T (ed) Bioenergy from wood: sustainable production in the tropics. Managing Forest Ecosystems, vol 26. Springer, pp 42–65
Silshi GW (2014) A critical review of forest biomass estimation models, common mistakes and corrective measures. For Ecol Manag 329:237–254
Sitoe AA, Mondlate LJC, Guedes BS (2014) Biomass and carbon stocks of Sofala Bay mangrove forests. Forests 5:1967–1981
Soares P, Tome M (2012) Biomass expansion factors for Eucalyptus globulus stands in Portugal. Forest Systems 21(1):141–152
Sugunuma HS, Kawada K, Smaout A, Suzuki K, Isoda H, Kojima T, Abe Y (2012) Allometric equations and biomass amount of representative Tunisian arid land shrubs for estimating baseline. Journal of Arid Land Studies 22(1):219–222
Tamesgen H, Affleck D, Poudel K, Gray A, Sessions J (2015) A review of the challenges and opportunities in estimating above ground forest biomass using tree-level models. Scandinavian Journal of Forest Research. doi:10.1080/02827581.2015.1012114
Ter-Mikaelian MT, Korzukhin MD (1997) Biomass equations for sixty-five North American tree species. For Ecol Manag 97:1–27
Tobin B, Nieuwenhuis M (2007) Biomass expansion factors for Sitka spruce (Picea sitchensis (Bong.) Carr.) in Ireland. Eur J Forest Res 126:189–196
Vashum TK, Jayakumar S (2012) Methods to estimate aboveground biomass and carbon stock in natural forests – a review. J Ecosyst Ecogr 2(4):2–7
Vicent JB, Henning B, Saulei S, Sosanika G, Weiblen GD (2015) Forest carbon in lowland Papua New Guinea: local variation and the importance of small trees. Austral Ecol 40:151–159
Zianis D, Muukkonen P, Mäkipää R, Mencuccini M (2005) Biomass and stem volume equations for tree species in Europe. Silva Fennica Monographs 4
Resources tagged with Factors and multiples similar to One or Both:
Inclusion Exclusion
How many integers between 1 and 1200 are NOT multiples of any of the numbers 2, 3 or 5?
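A quick numerical check of this kind of count can be done with the inclusion–exclusion principle; the sketch below compares a direct count with the formula (it is a verification aid, not the intended pencil-and-paper argument).

```python
N = 1200
divisors = (2, 3, 5)

# Direct count of integers in 1..N that are not multiples of 2, 3 or 5
direct = sum(1 for n in range(1, N + 1) if all(n % d for d in divisors))

# Inclusion-exclusion: N - |2| - |3| - |5| + |6| + |10| + |15| - |30|
by_formula = (N - N // 2 - N // 3 - N // 5
              + N // 6 + N // 10 + N // 15
              - N // 30)

print(direct, by_formula)  # both print 320
```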
I'm thinking of a number. My number is both a multiple of 5 and a multiple of 6. What could my number be?
Thirty Six Exactly
The number 12 = 2^2 × 3 has 6 factors. What is the smallest natural number with exactly 36 factors?
Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit?
Ben's Game
Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters.
Cuboids
Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all?
Factoring Factorials
Find the highest power of 11 that will divide into 1000! exactly.
What a Joke
Each letter represents a different positive digit AHHAAH / JOKE = HA What are the values of each of the letters?
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
Can You Find a Perfect Number?
Can you find any perfect numbers? Read this article to find out more...
Eminit
The number 8888...88M9999...99 is divisible by 7 and it starts with the digit 8 repeated 50 times and ends with the digit 9 repeated 50 times. What is the value of the digit M?
Different by One
Make a line of green and a line of yellow rods so that the lines differ in length by one (a white rod)
Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
American Billions
Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3...
I added together the first 'n' positive integers and found that my answer was a 3 digit number in which all the digits were the same...
Three Times Seven
A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why?
Big Powers
Three people chose this as a favourite problem. It is the sort of problem that needs thinking time - but once the connection is made it gives access to many similar ideas.
A First Product Sudoku
Given the products of adjacent cells, can you complete this Sudoku?
Factor Lines
Arrange the four number cards on the grid, according to the rules, to make a diagonal, vertical or horizontal line.
The items in the shopping basket add and multiply to give the same amount. What could their prices be?
Product Sudoku
The clues for this Sudoku are the product of the numbers in adjacent squares.
Data Chunks
Data is sent in chunks of two different sizes - a yellow chunk has 5 characters and a blue chunk has 9 characters. A data slot of size 31 cannot be exactly filled with a combination of yellow and. . . .
AB Search
The five digit number A679B, in base ten, is divisible by 72. What are the values of A and B?
Sieve of Eratosthenes
Follow this recipe for sieving numbers and see what interesting patterns emerge.
Satisfying Statements
Can you find any two-digit numbers that satisfy all of these statements?
Gabriel's Problem
Gabriel multiplied together some numbers and then erased them. Can you figure out where each number was?
Factors and Multiples Game for Two
Factors and Multiples game for an adult and child. How can you make sure you win this game?
Got it for Two
Got It game for an adult and child. How can you play so that you know you will always win?
Robotic Rotations
How did the the rotation robot make these patterns?
What Numbers Can We Make Now?
Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
Substitution Transposed
Substitution and Transposition all in one! How fiendish can these codes get?
Transposition Cipher
Can you work out what size grid you need to read our secret message?
Factor Track
Factor track is not a race but a game of skill. The idea is to go round the track in as few moves as possible, keeping to the rules.
Star Product Sudoku
The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid.
Missing Multipliers
What is the smallest number of answers you need to reveal in order to work out the missing headers?
What Numbers Can We Make?
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
Charlie's Delightful Machine
Here is a machine with four coloured lights. Can you develop a strategy to work out the rules controlling each light?
Counting Cogs
Which pairs of cogs let the coloured tooth touch every tooth on the other cog? Which pairs do not let this happen? Why?
Mathematical Swimmer
Twice a week I go swimming and swim the same number of lengths of the pool each time. As I swim, I count the lengths I've done so far, and make it into a fraction of the whole number of lengths I. . . .
Shifting Times Tables
Can you find a way to identify times tables after they have been shifted up or down?
The Remainders Game
Play this game and see if you can figure out the computer's chosen number.
LCM Sudoku II
You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku.
Times Right
Using the digits 1, 2, 3, 4, 5, 6, 7 and 8, two two-digit numbers are multiplied to give a four-digit number, so that the expression is correct. How many different solutions can you find?
How many zeros are there at the end of the number which is the product of the first hundred positive integers?
N000ughty Thoughts
How many noughts are at the end of these giant numbers?
Helen's Conjecture
Helen made the conjecture that "every multiple of six has more factors than the two numbers either side of it". Is this conjecture true?
Special Sums and Products
Find some examples of pairs of numbers such that their sum is a factor of their product. eg. 4 + 12 = 16 and 4 × 12 = 48 and 16 is a factor of 48.
Gaxinta
A number N is divisible by 10, 90, 98 and 882 but it is NOT divisible by 50 or 270 or 686 or 1764. It is also known that N is a factor of 9261000. What is N?
Ewa's Eggs
I put eggs into a basket in groups of 7 and noticed that I could easily have divided them into piles of 2, 3, 4, 5 or 6 and always have one left over. How many eggs were in the basket?
Digat
What is the value of the digit A in the sum below: [3(230 + A)]^2 = 49280A
Opportunistic half-duplex/full-duplex relaying mode selection criterion in cognitive relay networks
Zisheng Cheng1,
Hongbin Chen ORCID: orcid.org/0000-0003-4008-37041 &
Feng Zhao1
EURASIP Journal on Wireless Communications and Networking volume 2018, Article number: 47 (2018) Cite this article
In this paper, adaptive transmission in a cognitive relay network where a secondary transmitter acts as cooperative relay for a primary transmitter while in return gets the opportunity to send its own data is considered. An opportunistic half-duplex (HD)/full-duplex (FD) relaying mode selection criterion which can utilize the advantages of both HD and FD is proposed. The key idea is that the cooperative relay switches between the HD mode and the FD mode according to the residual self-interference power. When the residual self-interference power is lower than a preset threshold, the FD mode is selected to get a high throughput; otherwise, the HD mode is selected to avoid the effect of self-interference. The target is to maximize the throughput of secondary system under the interference constraint of primary system and transmission power constraints. As it is difficult to solve this optimization problem directly, an alternate optimization method is used to solve it, which optimizes amplification gains in HD and FD modes in turn until convergence. Simulation results show that the proposed opportunistic mode selection criterion can select an appropriate relaying mode to achieve a higher throughput than either the FD mode or the HD mode under different residual self-interference power regimes.
With the rapid popularization of smart terminals and multimedia services, data traffic in wireless communication networks is growing exponentially, which demands a huge amount of spectrum resources. However, the remaining available spectrum is scarce while efforts are made to explore new frequency bands and to improve spectrum utilization efficiency. Cognitive relay network was considered to have a great potential to mitigate the spectrum scarcity problem and has become a hot topic in the field of wireless communications in recent years. For example, authors in [1] proposed an energy-efficient relay selection and power allocation scheme in cooperative cognitive radio networks. Based on the analysis of outage probabilities of primary users and secondary users in cognitive relay networks, novel cooperative relay selection schemes were proposed in [2, 3]. In [4], authors took the secondary user as potential cooperator for the primary user, in which the secondary transmitter acts as cooperative relay for the primary transmitter to enhance their outage performances.
Beyond research on the performance of cognitive relay networks, the transmission mode of cooperative relays has also attracted considerable research effort. Depending on whether transmission and reception can be done simultaneously in the same frequency band, relay transmission modes are generally categorized into half duplex (HD) and full duplex (FD) [5]. Because of the low complexity of relay design in the HD mode, a large number of works have addressed it; for example, the outage probability, resource allocation for multi-carrier non-orthogonal multiple access systems, and the channel capacity were studied in [6,7,8], respectively. In [9], a new transmission protocol for the HD multi-hop relaying system was proposed, which selected optimal states of nodes and corresponding optimal transmission rates such that the achievable average rate from the source to the destination was maximized. Though the HD mode is very popular, it requires two orthogonal phases for receiving and transmitting signals, which causes a waste of spectrum resources.
Recently, with the continuous improvement of antenna technology and signal processing capability, self-interference in the FD mode can be well eliminated [10,11,12]. In particular, the basic cause of self-interference cancelation bottlenecks in the FD mode has been studied in [13], which indicated that self-interference can be further suppressed. Accordingly, the FD mode has received a lot of research interest from both industry and academia [14]. For example, multi-objective optimization for power-efficient and secure FD communication systems was studied in [15], while joint user selection and power allocation in FD multicell networks was investigated in [16]. Considering different system models with FD relays, authors in [17,18,19] discussed power allocation and energy efficiency. In addition, a new device-to-device (D2D) communication scheme was proposed which allowed D2D links to underlay the cellular downlink by assigning D2D transmitters as FD relays to assist the cellular downlink transmission [20]. Besides, the energy efficiency and outage performance were also deeply studied in FD D2D communications [21, 22]. Although the FD mode has been widely studied, its performance degrades greatly as the self-interference power increases, and may even become worse than that of the HD mode [23].
To avoid disadvantages of HD and FD modes while utilizing advantages of them, some earlier works have investigated the combination of HD and FD. For example, a novel scheme consisting of opportunistic mode selection between HD and FD was proposed in [23], which was compared with either the HD mode or the FD mode. The results showed that the instantaneous and average spectral efficiency were improved a lot. Authors of [24] proposed an optimal transmission scheduling scheme for a hybrid HD/FD relaying system, which could achieve a higher spectral efficiency than a single duplex relaying system. A hybrid duplex scheme was proposed in a random-access wireless network and heterogeneous wireless networks [25, 26], which could get a high throughput. In [27,28,29], a hybrid HD/FD relaying mode was proposed for wireless ad hoc networks, heterogeneous networks, and cognitive relay networks, which aimed to maximize the security performance or the sum rate. In addition, a joint relay mode selection and power allocation model was proposed in [30], with the aim to maximize the sum rate of a multi-carrier relay network with hybrid relay modes on a per sub-carrier basis. Moreover, a novel parallel hybrid radio frequency/free-space optical relaying system with both the non-buffer-aided and the buffer-aided schemes has been studied in [31], and optimal relay selection policies were used to maximize the end-to-end throughput. Authors in [32] adopted an adaptive antenna which can automatically select receiving or transmitting signals to maximize the end-to-end signal-to-interference-plus-noise ratio in the FD relaying system. In a relay-aided cellular network, an opportunistic HD/FD relaying mode selection scheme based on the received signal-to-interference-plus-noise ratio was introduced, which can achieve a high energy efficiency [33].
Most existing works focused on the performance of the primary and secondary systems in cognitive relay networks or on opportunistic relaying mode selection in a cooperative relay network, but rarely considered opportunistic relaying mode selection in cognitive relay networks. [29] studied opportunistic relaying mode selection in an underlay spectrum sharing system, but focused on the outage performance. In this work, an opportunistic HD/FD relaying mode selection criterion in a cognitive relay network is proposed, where a secondary transmitter acts as a cooperative relay to assist the transmission of the primary system, in order to obtain a high throughput of the secondary system under the interference constraint of the primary system and transmission power constraints. This opportunistic relaying mode selection criterion not only exploits the throughput advantage of FD but also weakens the negative effect of self-interference. Provided that the primary system communicates reliably, the secondary system can achieve the maximum throughput. Though both this work and earlier works [23, 29, 33] studied opportunistic HD/FD relaying mode selection through mode switching, they are quite different in several aspects. Firstly, in this work we consider a cognitive relay network, whereas [23, 29, 33] considered a three-node relay network, another kind of cognitive relay network, and a relay-aided cellular network, respectively. Secondly, in this work the switching between HD and FD is based on the residual self-interference power, whereas in [23, 29] and [33] the switching was based on the maximum channel capacity and the signal-to-interference-plus-noise ratio, respectively. Thirdly, in this work the goal is to maximize the throughput of the secondary system, whereas in [23, 29, 33] the goals were to maximize the spectral efficiency, the outage probability, and the energy efficiency, respectively.
The remainder of this paper is organized as follows. Section 2 describes the cognitive relay network, the HD transmission mode, the FD transmission mode, and the opportunistic HD/FD relaying mode selection criterion. Then, the secondary throughput derivation and the problem of maximizing the throughput of secondary system is formulated and solved in Section 3. Section 4 presents simulation results of secondary throughput. Finally, concluding remarks are made in Section 5.
The cognitive relay network under consideration is shown in Fig. 1. The primary system consists of a primary transmitter (PT) and a primary receiver (PR), while the secondary system consists of a secondary transmitter (ST) and a secondary receiver (SR). It should be noted that the primary system (PT, PR) has absolute control over the use of its licensed frequency band, while the secondary system (ST, SR) is willing to offer assistance to the primary system and in return gets the opportunity to use the licensed frequency band. To be more specific, ST acts as a cooperative relay to assist the transmission in the primary system, in order to ensure the communication quality-of-service of the primary system. In return, ST is allowed to transmit its own signal in the licensed frequency band. This system model may be applied in the scenario where there are no dedicated relay nodes but a secondary system shares the frequency band of a primary system through cooperative relaying [4]. All of the nodes are half-duplex except for ST, which has full-duplex capability. ST is equipped with a transmitting antenna and a receiving antenna, while the other nodes are equipped with a single antenna. ST can opportunistically switch between the HD mode and the FD mode according to a certain criterion. A search of the literature found few earlier works addressing the integration of HD and FD modes in a single terminal; for example, HD/FD mode switching boundaries were discussed in [23] but the way of implementation was not mentioned. A possible strategy to integrate HD and FD modes in ST is to dynamically control its receiving and transmitting antennas to achieve HD/FD mode switching with a software-defined radio module. The whole transmission process consists of multiple transmission slots. In the initialization of each transmission slot, the default FD mode is tested and the residual self-interference power is measured, similar to pilot training in channel estimation. The principle of HD and FD mode switching is elaborated as follows. In the first transmission slot, if the residual self-interference power is below a given threshold, ST retains the FD mode and both of its antennas are activated. This slot is not divided into two equal sub-slots; the transmitting antenna of ST transmits the signal received by the receiving antenna of ST immediately. In the second transmission slot, if the residual self-interference power is above the given threshold, ST switches to the HD mode and one of its antennas is deactivated. This slot is divided into two equal sub-slots: the activated antenna of ST receives a signal in the first sub-slot and transmits the received signal in the second sub-slot. HD and FD mode switching occurs in successive transmission slots following the above principle. Note that the focus of this work is on the signal transmission process, while the operations of antennas and signal processing are beyond its scope.
A cognitive relay network with HD/FD mode switching
The simple amplify-and-forward relaying protocol is adopted. In the HD mode, a transmission slot is divided into two equal sub-slots. In the first sub-slot, PT transmits a signal to PR, while ST and SR also receive the signal. In the second sub-slot, ST combines the received signal with its own signal and then amplifies the composite signal and forwards the amplified signal to PR and SR. PR and SR recover their desired signals from their received signals, respectively. In the FD mode, each transmission slot is not divided into two equal sub-slots. ST receives the signal from PT and immediately amplifies and forwards the composite signal to PR and SR.
All the channels are assumed to experience Rayleigh flat fading; the channel state information is perfectly known through channel estimation and remains constant within a transmission slot. Channel coefficients of links PT → PR, PT → ST, PT → SR, ST → PR, and ST → SR are denoted by h1, h2, h3, h4, and h5, respectively. Moreover, it is assumed that \( {h}_i\sim CN\left(0,{d}_i^{-v}\right)\ \left(i=1,2,3,4,5\right) \), which means that h i is a circularly symmetric complex Gaussian random variable with variance \( {d}_i^{-v} \). Here, d i represents the normalized distance between two nodes and v represents the path loss exponent. That is to say, d1, d2, d3, d4, and d5 denote the normalized distances between PT and PR, PT and ST, PT and SR, ST and PR, and ST and SR, respectively. This distance normalization is done with respect to the distance between PT and PR, i.e., d1 = 1. It is worth noting that the self-interference can be mitigated and its residual part can also be considered as following a circularly symmetric complex Gaussian distribution (see [17, 18, 21, 23] and references therein). So, the self-interference channel coefficient from the transmitting antenna to the receiving antenna of ST can be modeled as \( {h}_q\sim CN\left(0,{d}_q^{-v1}\right) \), where d q is the distance between the transmitting and receiving antennas of ST and v1 is the path loss exponent of the self-interference channel.
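As a small illustration of these channel statistics, the sketch below draws circularly symmetric complex Gaussian coefficients with variance d^(-v) and checks the second moment empirically; the particular distances and exponents are placeholders consistent with the normalization d1 = 1, not prescribed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def cn_coeff(d, v, size):
    """Draw h ~ CN(0, d**(-v)): zero-mean complex Gaussian with variance d**(-v)."""
    sigma2 = d ** (-v)
    return np.sqrt(sigma2 / 2.0) * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

v, v1 = 4, 0.5                      # placeholder path loss exponents
h2 = cn_coeff(0.5, v, 100_000)      # e.g. d2 = 0.5 (normalized so that d1 = 1)
hq = cn_coeff(0.1, v1, 100_000)     # self-interference channel with d_q = 0.1

print("E|h2|^2 =", np.mean(np.abs(h2) ** 2), "(expected", 0.5 ** (-v), ")")
print("E|hq|^2 =", np.mean(np.abs(hq) ** 2), "(expected", 0.1 ** (-v1), ")")
```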
HD transmission mode
When the half-duplex (HD) transmission mode is used, self-interference does not exist, but the spectrum utilization efficiency is only half of that of the FD transmission mode. Only one antenna at ST is activated to receive and transmit signals in the two sub-slots. In the first sub-slot, PT transmits the primary signal x P (m) (with zero mean and normalized variance 1) with a transmission power P P . PR, ST, and SR receive the signal. Received signals of PR, ST, and SR are denoted by y11, y21, y31, respectively, which are written as
$$ {y}_{a1}(m)=\sqrt{P_p}{h}_a{x}_p(m)+{n}_{a1}(m), $$
where a = 1, 2, and 3. Here, h a is a channel coefficient, and na1~N(0, σ2) is an additive white Gaussian noise (AWGN) with zero mean and variance σ2. The signal-to-noise ratio (SNR) at ST can be calculated as
$$ {SNR}_{1, ST}^{HD}=\frac{P_P{\left|{h}_2\right|}^2}{\sigma^2}. $$
In the second sub-slot, ST combines the signal \( {x}_{c1}(m)=\sqrt{P_P}{h}_2{x}_p(m)+{n}_{21}(m) \) received in the first sub-slot with its own signal x s (m) and generates the amplified composite signal x c (m). In order to simplify the signal transmission process, weighting of signals is not considered and these signals are normalized to the same order of magnitudes. The amplified signal is then forwarded to PR and SR. The amplified composite signal can be represented by
$$ {x}_c(m)=\sqrt{\beta}\left({x}_{c1}(m)+{x}_s(m)\right), $$
where β is the amplification gain of ST in the HD mode, x s (m) is the secondary signal with zero mean and normalized variance 1. Received signals at PR and SR are denoted by \( {\tilde{y}}_1(m) \) and \( {\tilde{y}}_3(m) \), respectively, which can be expressed as
$$ {\tilde{y}}_1(m)={h}_4{x}_c(m)+{v}_1(m), $$
$$ {\tilde{y}}_3(m)={h}_5{x}_c(m)+{v}_3(m). $$
Here, v1 and v3 are AWGN with zero mean and variance σ2. PR decodes the primary signal x p (m) from signals received in these two sub-slots, and the interference power from the secondary system to the primary system can be expressed as
$$ {P}_I=\beta {\left|{h}_4\right|}^2. $$
SR decodes the secondary signal x s (m) from the signal received in the second sub-slot, and the throughput can be expressed as
$$ {R}_{half}=\frac{1}{2}W{\log}_2\left(1+\frac{\beta {\left|{h}_5\right|}^2}{\beta {\left|{h}_5\right|}^2\left({P}_P{\left|{h}_2\right|}^2+{\sigma}^2\right)+{\sigma}^2}\right). $$
Here, W is the bandwidth and the value \( \frac{1}{2} \) is due to the fact that the transmission slot is divided into two equal sub-slots. Note that the amplify-and-forward relaying protocol is adopted and SR does not attempt to decode the primary signal, thus it is treated as an interference in (7). In addition, the goal of this work is to maximize the throughput of secondary system while satisfying the interference constraint of primary system, so maximizing the throughput of primary system is not concerned, but the maximal-ratio combining can be applied to improve the diversity gain of primary system [4].
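For later comparison of the two modes, a direct transcription of (7) is sketched below; the channel values and amplification gain in the usage line are placeholders, not values from this work.

```python
import numpy as np

def r_half(beta, h2, h5, P_P, sigma2, W=1.0):
    """Secondary throughput in the HD mode, a direct evaluation of Eq. (7)."""
    g2, g5 = abs(h2) ** 2, abs(h5) ** 2
    sinr = beta * g5 / (beta * g5 * (P_P * g2 + sigma2) + sigma2)
    return 0.5 * W * np.log2(1.0 + sinr)

# Placeholder usage
print(r_half(beta=0.05, h2=0.8 + 0.2j, h5=0.6 - 0.1j, P_P=0.1, sigma2=1.0))
```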
FD transmission mode
When the full-duplex (FD) transmission mode is used, a transmission slot is not divided into two equal sub-slots; it allows simultaneous transmission and reception in the same frequency band. However, the communication quality-of-service is degraded by the severe self-interference from the transmitting antenna to the receiving antenna of ST. In the FD transmission mode, ST amplifies and forwards the composite signal received at the previous time instant m − 1. Specifically, the signal received by ST at the time instant m can be written as \( {\tilde{x}}_{c1}(m)=\sqrt{P_P}{h}_2{x}_p(m)+{n}_{21}(m)+{v}_r(m) \), where \( {v}_r(m)=\sqrt{P_q}{h}_qx(m) \) is the residual self-interference due to the FD operation, x(m) = x lin (m) + x imp (m) is the transmitted SI containing the known linear part x lin (m) and transmitter impairments x imp (m), and P q is the residual self-interference power after interference cancelation. The analysis of these two parts has been done in [13]; however, in order to simplify the following analysis and focus on the residual self-interference power, x imp (m) is not considered here.
Next, ST will forward the signal \( {\tilde{x}}_{c1} \) received at the previous time instant along with its own signal x s . The signal forwarded by ST at the time instant m can be represented by
$$ {\tilde{x}}_c(m)=\sqrt{\beta^{\hbox{'}}}\left({\tilde{x}}_{c1}\left(m-1\right)+{x}_s(m)\right), $$
where β′ is the amplification gain of ST in the FD transmission mode. In order to analyze the effect of residual SI on the system performance, it is assumed that the residual SI signal v r is zero mean, additive, and white Gaussian, which is denoted as v r ~CN(0, V) [20, 21]. Received signals at PR and SR from ST are denoted by \( {\tilde{z}}_1(m) \) and \( {\tilde{z}}_3(m) \), respectively, which can be expressed as
$$ {\tilde{z}}_1(m)={h}_4{\tilde{x}}_c(m)+{v}_1(m), $$
$$ {\tilde{z}}_3(m)={h}_5{\tilde{x}}_c(m)+{v}_3(m). $$
Note that due to the FD operation, PR and SR also receive signals from PT at the time instant m. The received signals at PR and SR are \( {\tilde{z}}_1(m)+\sqrt{P_p}{h}_1{x}_p(m) \) and \( {\tilde{z}}_3(m)+\sqrt{P_p}{h}_3{x}_p(m) \), respectively. PR decodes the primary signal x p (m) from the signal received at the time instant m, and the interference power from the secondary system to the primary system can be expressed as
$$ {\tilde{P}}_I={\beta}^{\hbox{'}}{\left|{h}_4\right|}^2\left(1+{P}_q{\left|{h}_q\right|}^2\right). $$
SR decodes the secondary signal x s (m) from the signal received at the time instant m, and the throughput can be expressed as
$$ {R}_{full}=W{\log}_2\left(1+\frac{\beta^{\prime }{\left|{h}_5\right|}^2\left(1+{P}_q{\left|{h}_q\right|}^2\right)}{\beta^{\prime }{\left|{h}_5\right|}^2\left({P}_P{\left|{h}_2\right|}^2+{\sigma}^2\right)\left(1+{P}_q{\left|{h}_q\right|}^2\right)+{P}_P{\left|{h}_3\right|}^2+{\sigma}^2}\right). $$
Note that in [23, 29], transmission powers in the FD mode were halved in order to make a fair comparison between the FD mode and the HD mode. But in this work, the target is to maximize the throughput of secondary system when FD and HD modes are opportunistically selected. Therefore, the transmission power in the FD mode is not necessarily halved.
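Analogously, (12) can be transcribed term by term (β′ denoted beta_p below); the helper is only an evaluation aid under placeholder inputs, mirroring the expression as written.

```python
import numpy as np

def r_full(beta_p, h2, h3, h5, hq, P_P, P_q, sigma2, W=1.0):
    """Secondary throughput in the FD mode, a direct evaluation of Eq. (12)."""
    g2, g3, g5, gq = (abs(x) ** 2 for x in (h2, h3, h5, hq))
    num = beta_p * g5 * (1.0 + P_q * gq)
    den = beta_p * g5 * (P_P * g2 + sigma2) * (1.0 + P_q * gq) + P_P * g3 + sigma2
    return W * np.log2(1.0 + num / den)

# Placeholder usage
print(r_full(beta_p=0.05, h2=0.8, h3=0.4, h5=0.6, hq=0.3,
             P_P=0.1, P_q=1e-6, sigma2=1.0))
```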
Opportunistic HD/FD relaying mode selection
It is difficult to find a general way for HD/FD relaying mode selection. Therefore, an opportunistic HD/FD relaying mode selection criterion is designed as follows: ST selects the FD mode or the HD mode based on the residual self-interference power. When the residual self-interference power at ST in the FD transmission mode is below a given threshold Γ, the FD mode is preferred in order to get a high throughput. However, when the residual self-interference power is above a given threshold Γ, the FD mode suffers a great performance loss and the HD mode becomes a better option. In practice, the residual self-interference power may be measured at ST by using the method in [34] or other methods.
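In code, the criterion reduces to a per-slot comparison of the residual self-interference power with the threshold Γ. The minimal sketch below mocks the measurement with the model later derived as (16) (in practice P q would be measured, e.g., by the method in [34]); the threshold, gain, and channel values are placeholders.

```python
def residual_si_power(beta_p, h2, hq, P_P, sigma2, omega):
    """Residual self-interference power of ST in the FD mode, cf. Eq. (16)."""
    g2, gq = abs(h2) ** 2, abs(hq) ** 2
    return beta_p * (P_P * g2 + sigma2 + 1.0) / (omega - beta_p * gq)

def select_mode(P_q, gamma):
    """Opportunistic criterion: FD when the residual SI power does not exceed the threshold."""
    return "FD" if P_q <= gamma else "HD"

# Placeholder slot: omega = 10**6 corresponds to 60 dB of SI attenuation
P_q = residual_si_power(beta_p=0.02, h2=0.8, hq=0.3, P_P=0.1, sigma2=1.0, omega=1e6)
print(select_mode(P_q, gamma=1e-6))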
Secondary throughput in the opportunistic mode selection
In this section, firstly, the throughput of secondary system in the opportunistic mode selection is derived. Then, the throughput maximization of secondary system under the interference constraint of primary system and transmission power constraints is formulated as a constrained optimization problem. From the description of opportunistic mode selection, the secondary throughput can be expressed as
$$ R=P\left({P}_q>\Gamma \right){R}_{half}+P\left({P}_q\le \Gamma \right){R}_{full}. $$
Here, P(P q > Γ) denotes the probability that P q is greater than Γ, and P(P q ≤ Γ) is defined analogously. From (13), R is calculated in the statistical average sense, in order to reflect the average throughput that can be obtained through mode switching. It is worth noting that, since the channel state information is assumed to be constant within a transmission slot and perfectly obtained by channel estimation, R represents the throughput of the secondary system when ST selects a relaying mode according to the residual self-interference power in a transmission slot. From the perspective of multiple transmission slots, due to the variation of channel state information over slots, R represents the throughput of the secondary system when ST selects a relaying mode according to the residual self-interference power in every slot.
Secondary throughput derivation
There are many practical models of the self-interference, which greatly affects the throughput of FD transmission. As self-interference cancelation itself is not the focus of this work, the early and commonly used model in [35] is adopted. Based on experimental results in [35], the variance of the residual SI is modeled as \( V=\frac{1}{\omega }{P_S}^{\lambda } \); that is to say, the residual self-interference power can be expressed as
$$ {P}_q=\frac{1}{\omega }{P_S}^{\lambda }, $$
where 1/ω and λ(0 ≤ λ ≤ 1) are constants, and 1/ω indicates the attenuation of residual self-interference power which reflects the effectiveness of selected interference cancelation technique. This model generally includes two cases: the optimistic case in which the self-interference variance is simply a constant and is not a function of the transmission power (λ = 0) [14, 36], and the other case in which the variance increases linearly with the transmission power (λ = 1) [32, 37]. Without loss of generality, the second case is considered in which P q = P S /ω, and 1/ω plays an important role in the FD transmission mode which characterizes the quality of self-interference cancelation. Here, P S is the transmission power of ST in the FD mode, which can be expressed as
$$ {P}_S={\beta}^{\hbox{'}}\left({P}_p{\left|{h}_2\right|}^2+{P}_q{\left|{h}_q\right|}^2+{\sigma}^2+1\right). $$
Substituting (15) into (14) and noting that P S itself depends on P q through the residual self-interference term, solving the resulting equation for P q gives
$$ {P}_q=\frac{\beta^{\prime}\left({P}_P{\left|{h}_2\right|}^2+{\sigma}^2+1\right)}{\omega -{\beta}^{\prime }{\left|{h}_q\right|}^2}. $$
In order to facilitate the calculation, denote r i = |h i |2 (i = 1, 2, 3, 4, 5, q). r i follows an exponential distribution with parameter \( {\lambda}_i={d}_i^v\left(i=1,2,3,4,5\right) \) and \( {\lambda}_q={d}_q^{v1} \). The probability P(P q > Γ) can be further calculated as \( P\left\{\frac{\beta^{\prime}\left({P}_P{r}_2+{\sigma}^2+1\right)}{\omega -{\beta}^{\prime }{r}_q}>\Gamma \right\} \). This probability is difficult to obtain directly, so the fraction is decomposed into two parts whose properties are examined separately. It is assumed that there is a probability P1 which can be described as f1(t) = P1(β′P P r2 + (β′σ2 + β′) < t). Here, apart from r2, the other parameters can be regarded as constants. So, we can write it in the form \( {P}_1\left({r}_2<\frac{t-\left({\beta}^{\prime }{\sigma}^2+{\beta}^{\prime}\right)}{\beta^{\prime }{P}_P}\right)\left(t\ge \left({\beta}^{\prime }{\sigma}^2+{\beta}^{\prime}\right)\right) \). We also know that r2 ~ exp(λ2), so the cumulative distribution function of r2 is \( F\left({r}_2\right)=1-\exp \left(-{\lambda}_2\frac{t-\left({\beta}^{\prime }{\sigma}^2+{\beta}^{\prime}\right)}{\beta^{\prime }{P}_P}\right) \). Next, taking the first-order derivative of F(r2) with respect to t, the probability density function associated with P1 is obtained and can be expressed as
$$ {f}_y={F}_t^{\hbox{'}}\left({r}_2\right)=A\exp \left(- By\right)\left(y\ge \left({\beta}^{\hbox{'}}{\sigma}^2+{\beta}^{\hbox{'}}\right)\right). $$
Here, A is \( {e}^{\frac{\lambda_2\left({\beta}^{\prime }{\sigma}^2+{\beta}^{\prime}\right)}{\beta^{\prime }{P}_P}}\frac{\lambda_2}{\beta^{\prime }{P}_P} \) and B is \( \frac{\lambda_2}{\beta^{\prime }{P}_P} \). The same as before, we can get the probability density function of f2(t) = P2(ω − β′r q < t), which is expressed as
$$ {f}_x={F}_t^{\hbox{'}}\left({r}_q\right)=C\exp \left(- Dx\right)\left(x<\omega \right). $$
Here, C is \( {e}^{\frac{-{\lambda}_q\omega }{\beta^{\prime }}}\frac{\lambda_q}{\beta^{\prime }} \) and D is \( \frac{-{\lambda}_q}{\beta^{\prime }} \).
From the previous derivation, we can get the probability P(P q ≤ Γ), which can be expressed as
$$ P\left({P}_q\le \Gamma \right)=P\left\{\frac{\beta^{\prime}\left({P}_P{r}_2+{\sigma}^2+1\right)}{\omega -{\beta}^{\prime }{r}_q}\le \Gamma \right\}=P\left\{\frac{Y}{X}\le Z\right\}, $$
where Y is β′(P P r2 + σ2 + 1), X is ω − β′r q , and Z denotes Γ. From the knowledge of probability and statistics, the probability \( P\left\{\frac{Y}{X}\le Z\right\} \) can be derived as
$$ {\displaystyle \begin{array}{l}P\left\{\frac{Y}{X}\le Z\right\}={P}_{Y/X}(Z)=\underset{s}{\iint }f\left(x,y\right) dxdy={\iint}_{Y/X\le Z,y\ge \left({\beta}^{\hbox{'}}{\sigma}^2+{\beta}^{\hbox{'}}\right),x<\omega }f\left(x,y\right) dxdy\\ {}=\underset{-\infty }{\overset{0}{\int }} dx\underset{\beta \hbox{'}\left({\sigma}^2+1\right)}{\overset{\infty }{\int }} AC\exp \left(- By- D x\right) dy+\underset{\frac{\beta^{\hbox{'}}\left({\sigma}^2+1\right)}{Z}}{\overset{\omega }{\int }} dx\underset{\beta \hbox{'}\left({\sigma}^2+1\right)}{\overset{Zx}{\int }} AC\exp \left(- By- D x\right) dy\\ {}=\frac{AC}{B D}\exp \left(-{B\beta}^{\hbox{'}}\left({\sigma}^2+1\right)\right)\left[\exp \left(-\frac{{D\beta}^{\hbox{'}}\left({\sigma}^2+1\right)}{Z}\right)-\exp \left(- D\omega \right)-1\right]+\frac{AC}{B}\frac{1}{B Z+D}\\ {}\left[\exp \left(-\left( BZ+D\right)\omega \right)-\exp \left(-\left(B+\frac{D}{Z}\right){\beta}^{\hbox{'}}\left({\sigma}^2+1\right)\right)\right]\end{array}} $$
Next, values of A, B, C, D, and Z are substituted into (20) and the expression of the probability P(P q ≤ Γ) is obtained as
$$ P\left({P}_q\le \Gamma \right)=\kern0.5em \frac{\lambda_2\Gamma}{\lambda_q{P}_P-{\lambda}_2\Gamma}\exp \left(\frac{\lambda_q{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_q\Gamma \omega }{\beta^{\prime}\Gamma}\right)+\frac{\lambda_q{P}_P}{\lambda_2\Gamma -{\lambda}_q{P}_P}\exp \left(\frac{\lambda_2{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_2\Gamma \omega }{\beta^{\prime }{P}_P}\right)+\exp \left(\frac{-{\lambda}_q\omega }{\beta^{\prime }}\right)+1. $$
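Since the closed form in (21) is somewhat involved, it can be useful to cross-check it empirically. The sketch below estimates P(P q ≤ Γ) by sampling the exponential variables r2 and rq and applying (16) directly; all parameter values are placeholders, and counting a non-positive denominator in (16) as exceeding the threshold is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_pq_below(gamma, beta_p, P_P, sigma2, omega, lam2, lamq, n=200_000):
    """Monte Carlo estimate of P(P_q <= Gamma), with P_q given by Eq. (16)."""
    r2 = rng.exponential(1.0 / lam2, n)   # r2 ~ Exp(lam2), mean 1/lam2
    rq = rng.exponential(1.0 / lamq, n)   # rq ~ Exp(lamq)
    denom = omega - beta_p * rq
    pq = beta_p * (P_P * r2 + sigma2 + 1.0) / denom
    below = (denom > 0) & (pq <= gamma)   # non-positive denominator -> treated as above threshold
    return below.mean()

# Placeholder parameters: lam2 = d2**v, lamq = d_q**v1
print(prob_pq_below(gamma=1e-6, beta_p=0.02, P_P=0.1, sigma2=1.0,
                    omega=1e6, lam2=0.5 ** 4, lamq=0.1 ** 0.5))
```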
When (7), (12), and (21) are substituted into (13), we can obtain the expression of the secondary throughput in the opportunistic mode selection, which can be expressed as
$$ {\displaystyle \begin{array}{l}R=P\left({P}_q>\Gamma \right){R}_{half}+P\left({P}_q\le \Gamma \right){R}_{full}\\ {}\kern0.75em =\left\{\left(\frac{\lambda_2\Gamma}{\lambda_q{P}_P-{\lambda}_2\Gamma}\exp \left(\frac{\lambda_q{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_q\Gamma \omega }{\beta^{\prime}\Gamma}\right)-\frac{\lambda_q{P}_P}{\lambda_2\Gamma -{\lambda}_q{P}_P}\exp \left(\frac{\lambda_2{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_2\Gamma \omega }{\beta^{\prime }{P}_P}\right)-\exp \left(\frac{-{\lambda}_q\omega }{\beta^{\prime }}\right)\right)\ast \frac{1}{2}W{\log}_2\left(1+\frac{\beta {\left|{h}_5\right|}^2}{\beta {\left|{h}_5\right|}^2\left({P}_P{\left|{h}_2\right|}^2+{\sigma}^2\right)+{\sigma}^2}\right)\right\}\\ {}\kern0.75em +\left\{\left[\frac{\lambda_2\Gamma}{\lambda_q{P}_P-{\lambda}_2\Gamma}\exp \left(\frac{\lambda_q{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_q\Gamma \omega }{\beta^{\prime}\Gamma}\right)+\frac{\lambda_q{P}_P}{\lambda_2\Gamma -{\lambda}_q{P}_P}\exp \left(\frac{\lambda_2{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_2\Gamma \omega }{\beta^{\prime }{P}_P}\right)+\exp \left(\frac{-{\lambda}_q\omega }{\beta^{\prime }}\right)+1\right]\ast W{\log}_2\left(1+\frac{\beta^{\prime }{\left|{h}_5\right|}^2\left(1+{P}_q{\left|{h}_q\right|}^2\right)}{\beta^{\prime }{\left|{h}_5\right|}^2\left({P}_P{\left|{h}_2\right|}^2+{\sigma}^2\right)\left(1+{P}_q{\left|{h}_q\right|}^2\right)+{P}_P{\left|{h}_3\right|}^2+{\sigma}^2}\right)\right\}.\end{array}} $$
Secondary throughput maximization
In the opportunistic mode selection, the objective is to seek optimal amplification gains β and β′ in order to maximize R while keeping P I and \( {\tilde{P}}_I \) below a threshold and the transmission power of ST does not exceed its limit. The secondary throughput maximization problem can be formulated as
$$ {\displaystyle \begin{array}{l}\underset{\beta, {\beta}^{\hbox{'}}}{\max}\kern1em R\\ {}\mathrm{s}.\mathrm{t}.\kern0.5em 0<{P}_I\le \Lambda, \\ {}\kern1.5em 0<{\tilde{P}}_I\le \Lambda, \\ {}\kern1.5em 0<\beta \le {\beta}_{\mathrm{max}},\\ {}\kern1.5em 0<{\beta}^{\hbox{'}}\le {\beta}_{\mathrm{max}}^{\hbox{'}},\end{array}} $$
where Λ is the interference threshold at PR, βmax and \( {\beta}_{\mathrm{max}}^{\prime } \) are the maximum allowed amplification gains of ST in the HD mode and the FD mode, respectively, which are \( {\beta}_{\mathrm{max}}={P}_{s,\max }/\left({P}_P{r}_2+{\sigma}^2+1\right) \) and \( {\beta}_{\mathrm{max}}^{\prime }={P}_{s,\max }/\left({P}_P{r}_2+{P}_q{r}_q+{\sigma}^2+1\right) \), and Ps, max is the maximum allowed transmission power of ST. Note that the interference endured by the primary system is limited by the constraints in (23), so the secondary system will not cause harmful interference to the primary system.
Unfortunately, the joint optimization over β and β′ is very hard due to the fact that R is not concave in β and β′ jointly. To overcome this difficulty, we can first optimize over one variable, and let the other variable be fixed. That is, we can optimize over β for a fixed β′, and optimize over β′ for a fixed β, separately. Then, we can consider the joint optimization by utilizing separate optimization results. A one-dimensional search (ODS) method and an alternate optimization (AOP) method are proposed to find the solution to the optimization problem in (23). In the following, details of the AOP method and the ODS method are given.
Optimization over β for a fixed β′: Given β′, the optimization over β can be formulated as
$$ \underset{\beta }{\max }\ R\kern0.5em \mathrm{s}.\mathrm{t}.\kern0.5em 0<{P}_I\le \Lambda, 0<\beta \le {\beta}_{\mathrm{max}}. $$
From the domain of the function P I and the inequality constraint in (24), we can obtain the feasible region of β as \( \beta \in \left(0,{\widehat{\beta}}_{\mathrm{max}}\right] \), where \( {\widehat{\beta}}_{\mathrm{max}}=\min \left\{\frac{\Lambda}{r_4},{\beta}_{\mathrm{max}}\right\} \).
Theorem 1: R is strictly quasi-concave in β for β ∈ [0, +∞).
Proof: See Appendix 1.
From Theorem 1, there are only three cases for the curve R versus β for \( \beta \in \left(0,{\widehat{\beta}}_{\mathrm{max}}\right] \).
Case 1: R strictly increases with β for \( \left(0,{\widehat{\beta}}_{\mathrm{max}}\right] \) if \( {\left. dR/ d\beta\ \right|}_{\beta ={\widehat{\beta}}_{\mathrm{max}}}\ge 0 \), where dR/dβ is given by (26) in Appendix 1. The solution to the optimization problem (24) is achieved at \( \overset{\smile }{\beta }={\widehat{\beta}}_{\mathrm{max}} \).
Case 2: R strictly decreases with β for \( \left(0,{\widehat{\beta}}_{\mathrm{max}}\right] \) if \( {\left. dR/ d\beta\ \right|}_{\beta =\left(0,{\widehat{\beta}}_{\mathrm{max}}\right]}\le 0 \). The optimal solution is achieved at \( \overset{\smile }{\beta}\approx 0 \).
Case 3: R first strictly increases and then strictly decreases with β for \( \left(0,{\widehat{\beta}}_{\mathrm{max}}\right] \) ifdR/dβ |β = 0 > 0 and \( {\left. dR/ d\beta\ \right|}_{\beta ={\widehat{\beta}}_{\mathrm{max}}}<0 \). The optimal solution is achieved at \( \overset{\smile }{\beta }={\beta}^{\ast } \), where β∗ is the point at which R reaches its maximum when \( \beta \in \left(0,{\widehat{\beta}}_{\mathrm{max}}\right] \) for a fixed β′ and is obtained by solving the equation dR/dβ = 0.
Optimization over β′for a fixed β: Given β, the optimization over β′ can be formulated as
$$ \underset{\beta^{\hbox{'}}}{\max }\ R\kern0.5em \mathrm{s}.\mathrm{t}.\kern0.5em 0<{\tilde{P}}_I\le \Lambda, 0<{\beta}^{\hbox{'}}\le {\beta}_{\mathrm{max}}^{\hbox{'}}, $$
From the domain of the function \( {\tilde{P}}_I \) and the inequality constraint in (25), we can obtain the feasible region of β′ as \( {\beta}^{\prime}\in \left(0,{{\widehat{\beta}}^{\prime}}_{\mathrm{max}}\right] \), where \( {{\widehat{\beta}}^{\prime}}_{\mathrm{max}}=\left(-b+\sqrt{b^2-4 ac}\right)/2a \), a = P q r q r2r4 + r q r4,b = ωr4 + Λr q , and c = − ωΛ.
Theorem 2: R is strictly quasi-concave in β′ for β′ ∈ [0, +∞).
Similar to the previous analysis, from Theorem 2, there are only three cases for the curve R versus β′ in \( \left({{\widehat{\beta}}^{\prime}}_{\mathrm{min}},{\widehat{\beta}}_{\mathrm{max}}^{\prime}\right] \).
Case 1: R strictly increases with β′ for \( \left({{\widehat{\beta}}^{\prime}}_{\mathrm{min}},{\widehat{\beta}}_{\mathrm{max}}^{\prime}\right] \) if \( {\left. dR/d{\beta}^{\prime }\ \right|}_{\beta^{\prime }={\widehat{\beta}}_{\mathrm{max}}^{\prime }}\ge 0 \), where dR/dβ′ is given by (29) in Appendix 2. The solution to the optimization problem (25) is achieved at \( {\overset{\smile }{\beta}}^{\prime }={\widehat{\beta}}_{\mathrm{max}}^{\prime } \).
Case 2: R strictly decreases with β′ for \( \left({{\widehat{\beta}}^{\prime}}_{\mathrm{min}},{\widehat{\beta}}_{\mathrm{max}}^{\prime}\right] \) if \( {\left. dR/d{\beta}^{\prime }\ \right|}_{\beta^{\prime }={{\widehat{\beta}}^{\prime}}_{\mathrm{min}}}\le 0 \). The optimal solution is achieved at \( {\overset{\smile }{\beta}}^{\prime }={{\widehat{\beta}}^{\prime}}_{\mathrm{min}} \).
Case 3: R first strictly increases and then strictly decreases with β′ for \( \left({{\widehat{\beta}}^{\prime}}_{\mathrm{min}},{\widehat{\beta}}_{\mathrm{max}}^{\prime}\right] \) if \( {\left. dR/d{\beta}^{\prime }\ \right|}_{\beta^{\prime }={{\widehat{\beta}}^{\prime}}_{\mathrm{min}}}>0 \) and \( {\left. dR/d{\beta}^{\prime }\ \right|}_{\beta^{\prime }={\widehat{\beta}}_{\mathrm{max}}^{\prime }}<0 \). The optimal solution is achieved at \( {\overset{\smile }{\beta}}^{\prime }={\beta^{\prime}}^{\ast } \), where β′∗ is the point at which R reaches its maximum when \( {\beta}^{\prime}\in \left({{\widehat{\beta}}^{\prime}}_{\mathrm{min}},{\widehat{\beta}}_{\mathrm{max}}^{\prime}\right] \) for a fixed β and is obtained by solving the equation dR/dβ′ = 0.
ODS: First, one enumerates values of β over the feasible region and, for each value, obtains the corresponding optimal solution of the subproblem. Then, by comparing all of these candidate solutions, one obtains the optimal solution to the problem in (23).
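A bare-bones sketch of this enumerate-and-compare structure is given below; the rate function and the inner β′ solver are placeholders (the actual R and the feasible regions are those defined above), so the sketch only illustrates the flow of the ODS method.

import numpy as np

def rate(beta, beta_prime):
    # placeholder stand-in for the secondary throughput R(beta, beta')
    return np.log2(1 + beta * beta_prime / (beta + beta_prime + 1))

def best_beta_prime(beta, beta_prime_grid):
    # placeholder inner solver: brute force over a beta' grid
    return beta_prime_grid[int(np.argmax([rate(beta, bp) for bp in beta_prime_grid]))]

beta_grid = np.linspace(1e-3, 10.0, 200)
beta_prime_grid = np.linspace(1e-3, 10.0, 200)
candidates = [(b, best_beta_prime(b, beta_prime_grid)) for b in beta_grid]
beta_opt, beta_prime_opt = max(candidates, key=lambda pair: rate(*pair))
print(beta_opt, beta_prime_opt, rate(beta_opt, beta_prime_opt))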
AOP: The optimization problems (24) and (25) are solved alternately, with the output of one used as the input of the other, until convergence. The specific procedure of the AOP algorithm is listed in Table 1.
Table 1 Iterative optimization algorithm AOP
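As a generic illustration of the alternating procedure, the following Python skeleton alternates the two subproblems until the throughput stops improving; the rate function is the same kind of placeholder as above, and each subproblem is solved with a bounded scalar optimizer in line with Cases 1–3. This is only a sketch, not the paper's exact algorithm.

import numpy as np
from scipy.optimize import minimize_scalar

def rate(beta, beta_prime):
    # placeholder stand-in for R(beta, beta')
    return np.log2(1 + beta * beta_prime / (beta + beta_prime + 1))

def argmax_1d(fun, upper):
    # maximize a one-dimensional function on (0, upper]
    return minimize_scalar(lambda x: -fun(x), bounds=(1e-9, upper), method="bounded").x

def aop(beta=1.0, beta_prime=1.0, upper=10.0, tol=1e-6, max_iter=100):
    previous = rate(beta, beta_prime)
    for _ in range(max_iter):
        beta = argmax_1d(lambda b: rate(b, beta_prime), upper)    # optimize beta with beta' fixed
        beta_prime = argmax_1d(lambda bp: rate(beta, bp), upper)  # optimize beta' with beta fixed
        current = rate(beta, beta_prime)
        if abs(current - previous) < tol:
            break
        previous = current
    return beta, beta_prime, current

print(aop())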
MATLAB is used to simulate the secondary throughput under various system parameters and to verify the effectiveness of the proposed opportunistic mode selection criterion. As no other HD/FD relaying mode selection criterion is found in the literature, the proposed opportunistic HD/FD relaying mode selection criterion is compared with the pure HD and FD modes to show its merit. The bandwidth W is normalized to 1, i.e., W = 1 Hz. The topology of the cognitive relay network is constructed as follows: the nodes are placed in a 2D plane with PT, PR, and SR located at points (0, 0), (0, 1), and (1, 0), respectively, and ST moves along the positive X axis between PT and SR, so that PT, ST, and SR are collinear. Other simulation parameters are set as \( P_{s,\max} \) = 0.1W, \( \sigma^2 \) = 1, ω = 60dB, Γ = \( 10^{-6} \), Λ = 0.1W, v = 4, v1 = 0.5, d1 = d3 = 1, \( {d}_4=\sqrt{1+{d_2}^2} \), d5 = 1 − d2, \( d_q \) = 0.1, and \( \varepsilon_0 \) = \( 10^{-3} \).
Fig. 2 shows the probability of the opportunistic mode selecting HD versus the power of PT, \( P_P \). In this simulation, the distance between PT and ST is set as d2 = 0.5. From (16), the residual self-interference power increases with \( P_P \). When the residual self-interference power is low, the performance of the FD mode is better than that of the HD mode, so the probability of the opportunistic mode selecting HD is low. But as \( P_P \) increases, the performance of the FD mode becomes worse and worse, so the probability of the opportunistic mode selecting HD becomes high.
Fig. 2 Probability of the opportunistic mode selecting HD versus the power of PT, \( P_P \)
Figure 3 shows the effect of the power of PT, \( P_P \), on the throughput of the secondary system when d2 = 0.5. When the power of PT is low, it follows from (16) that the residual self-interference power at ST is also low. In this case, the throughput in the FD mode is nearly twice that of the HD mode. Moreover, the opportunistic mode selection essentially always selects the FD mode and exploits its throughput advantage. But as \( P_P \) increases, the residual self-interference power becomes higher and higher and seriously affects the performance of FD. At about \( P_P \) = 0.23W, the throughput of the HD mode becomes higher than that of the FD mode. At this point, the opportunistic mode selection criterion selects the HD mode, which achieves a higher throughput than FD. Because \( P_q \) is obtained by statistical averaging via (16) in the simulation, there is a small throughput gap between the opportunistic mode selection and either the FD or HD mode. Nevertheless, Fig. 3 clearly shows the advantage of the opportunistic mode selection in terms of selecting either the FD mode or the HD mode under different residual self-interference power regimes.
Fig. 3 Secondary throughput R versus the power of PT, \( P_P \)
Note that in [23, 29] the HD/FD mode switching was executed based on maximizing the channel capacity; thus, the opportunistic mode selection outperforms both the HD mode and the FD mode. In this work, by contrast, the secondary throughput in the opportunistic mode selection is composed of the throughputs of the HD and FD modes weighted by statistical averaging; thus, the secondary throughput in the opportunistic mode selection equals that of either the FD mode or the HD mode, depending on the residual self-interference power.
To gain more insight, the throughput of the primary system in the opportunistic mode selection is also simulated. Figure 4 shows the effect of the power of PT, \( P_P \), on the throughput of the primary system, where the two received signals at PR are combined with maximal-ratio combining. In this simulation, all parameter settings are the same as those in Fig. 3. It is clear that, as \( P_P \) increases, the throughput of the primary system in all three modes increases. In other words, the interference constraint imposed by the secondary system on the primary system is well satisfied. It is worth noting that, since there is a direct link in the primary system, the throughput of the primary system increases with \( P_P \) even in the FD mode where self-interference exists. However, as \( P_P \) increases, self-interference in the relay link slows the growth of the primary system's throughput. From the line trends in Fig. 3, it can be conjectured that although the opportunistic mode selection is adopted at ST, the throughput of the primary system in the opportunistic mode selection can also be equal to that in either the FD mode or the HD mode under different residual self-interference power regimes.
Fig. 4 Primary throughput versus the power of PT, \( P_P \)
Figure 5 shows the optimal throughput of the secondary system, R∗, versus the distance d2. From this figure, we can see that the optimal throughput of the secondary system first increases and then decreases with d2. When d2 ≈ 0.5, which means ST is near the midpoint between PT and SR, the optimal secondary throughput R∗ reaches its maximum. Moreover, the optimization results of the ODS and AOP methods are nearly the same, which indicates that the AOP method attains a near-optimal solution.
Fig. 5 Optimal secondary throughput R* versus the distance d2
Figure 6 illustrates the convergence of the AOP algorithm over the iterations when d2 = 0.5. It can be seen from Fig. 6 that the AOP algorithm converges after only five iterations, which indicates that its computational complexity is lower than that of the ODS method.
Fig. 6 Optimal secondary throughput R* versus the iteration number
To further examine the consistency of the opportunistic mode selection and the AOP algorithm, the impact of the self-interference threshold Γ on the probability of the opportunistic mode selecting HD and on the secondary throughput is also simulated.
The probability of the opportunistic mode selecting HD versus the self-interference threshold Γ is shown in Fig. 7. In this simulation, the transmission power of PT is set as \( P_P \) = 0.23W while the other parameter settings remain unchanged. In this case, the residual self-interference power is a fixed value. From (13) and (16), the probability of the opportunistic mode selecting HD decreases as Γ increases, as shown in Fig. 7. This also confirms that the derivation in (16) is valid.
Fig. 7 Probability of the opportunistic mode selecting HD versus the self-interference threshold Γ
Figures 8 and 9 show the impact of the self-interference threshold Γ on the throughput of the secondary system under the settings \( P_P \) = 0.15W and \( P_P \) = 0.35W, respectively. Combining Figs. 8 and 9 with (13), we see that as the self-interference threshold Γ changes, the secondary throughput in the opportunistic mode selection also changes. However, it still approaches that of either the FD mode or the HD mode, which further validates the opportunistic mode selection criterion. Moreover, combining Figs. 7, 8, and 9, when Γ increases, the probability of the opportunistic mode selecting FD increases. However, the performance of FD is not superior to that of HD in all cases, as shown in Fig. 9. Therefore, choosing a proper self-interference threshold to switch between the HD and FD modes can effectively enhance the throughput of the secondary system. However, it is difficult to obtain the optimal self-interference threshold by solving the optimization problem in (23), as Γ is involved in exponential functions in (22).
Fig. 8 Secondary throughput R versus the self-interference threshold Γ when \( P_P \) = 0.15W
In this paper, opportunistic HD/FD relaying mode selection in a cognitive relay network has been studied, which takes the residual self-interference power at the cooperative relay (secondary transmitter) as the switching criterion. The problem of maximizing the throughput of the secondary system under the interference constraint of the primary system and transmission power constraints was formulated. Moreover, an alternate optimization method was introduced to solve this optimization problem, and the optimum amplification gains in the HD mode and the FD mode were obtained. Numerical results illustrated that the proposed opportunistic mode selection criterion selects either the HD mode or the FD mode depending on the residual self-interference power, which flexibly utilizes the respective advantages of HD and FD and achieves a throughput no lower than that of either the FD mode or the HD mode across different residual self-interference power regimes. This result can help the design of adaptive transmission protocols in cognitive relay networks.
J Chen, L Lv, Y Liu, Y Kuo, C Ren, Energy efficient relay selection and power allocation for cooperative cognitive radio networks. IET Commun. 9(13), 1661–1668 (2015)
Z Yang, Z Ding, P Fan, GK Karagiannidis, Outage performance of cognitive relay networks with wireless information and power transfer. IEEE Trans. Veh. Technol. 65(5), 3828–3833 (2016)
J Si, Z Li, X Chen, B Hao, Z Liu, On the performance of cognitive relay networks under primary user's outage constraint. IEEE Commun. Lett. 15(4), 422–424 (2011)
Y Han, A Pandharipande, SH Ting, Cooperative decode-and-forward relaying for secondary spectrum access. IEEE Trans. Wirel. Commun. 8(10), 4945–4950 (2009)
Z Zhang, K Long, AV Vasilakos, L Hanzo, Full-duplex wireless communications: challenges, solutions, and future research directions. Proc. IEEE 104(7), 1369–1409 (2016)
KT Hemachandra, NC Beaulieu, Outage analysis of opportunistic scheduling in dual-hop multiuser relay networks in the presence of interference. IEEE Trans. Commun. 61(5), 1786–1796 (2013)
Y. Sun, D. W. K. Ng, and R. Schober, Resource allocation for MC-NOMA systems with cognitive relaying, in Proc. IEEE Global Communications Conference (GLOBECOM), Singapore, 2017.
R Kazemi, M Boloursaz, SM Etemadi, F Behnia, Capacity bounds and detection schemes for data over voice. IEEE Trans. Veh. Technol. 65(11), 8964–8977 (2016)
V Jamali, N Zlatanov, H Shoukry, R Schober, Achievable rate of the half-duplex multi-hop buffer-aided relay channel with block fading. IEEE Trans. Wirel. Commun. 14(11), 6240–6256 (2015)
E Ahmed, AM Eltawil, All-digital self-interference cancellation technique for full-duplex systems. IEEE Trans. Wirel. Commun. 14(7), 3519–3532 (2015)
Z Zhang, X Chai, K Long, AV Vasilakos, L Hanzo, Full duplex techniques for 5G networks: self-interference cancellation, protocol design, and relay selection. IEEE Commun. Mag. 53(5), 128–137 (2015)
A Masmoudi, T Le-Ngoc, Channel estimation and self-interference cancelation in full-duplex communication systems. IEEE Trans. Veh. Technol. 66(1), 321–334 (2017)
A. Masmoudi and T. Le-Ngoc, Self-interference cancellation limits in full-duplex communication systems, in Proc. IEEE Global Communications Conference (GLOBECOM), Washington, pp. 1–6, Dec. 2016.
D. Bharadia, E. Mcmilin, and S. Katti, "Full duplex radios," ACM SIGCOMM Computer Communication Review, vol. 43, no. 4, pp. 375–386, 2013.
Y Sun, DWK Ng, J Zhu, R Schober, Multi-objective optimization for robust power efficient and secure full-duplex wireless communication systems. IEEE Trans. Wirel. Commun. 15(8), 5511–5526 (2016)
S Goyal, P Liu, SS Panwar, User selection and power allocation in full-duplex multicell networks. IEEE Trans. Veh. Technol. 66(3), 2408–2422 (2017)
Y Sun, DWK Ng, Z Ding, R Schober, Optimal joint power and subcarrier allocation for full-duplex multicarrier non-orthogonal multiple access systems. IEEE Trans. Commun. 65(3), 1077–1091 (Mar. 2017)
Y Su, L Jiang, C He, Joint relay selection and power allocation for full-duplex DF co-operative networks with outdated CSI. IEEE Commun. Lett. 20(3), 510–513 (2016)
D Nguyen, LN Tran, P Pirinen, M Latva-aho, Precoding for full duplex multiuser MIMO systems: spectral and energy efficiency maximization. IEEE Trans. Signal Process. 61(16), 4038–4050 (2013)
G Zhang, K Yang, P Liu, J Wei, Power allocation for full-duplex relaying-based D2D communication underlaying cellular networks. IEEE Trans. Veh. Technol. 64(10), 4911–4916 (2015)
Y Chang, H Chen, F Zhao, Energy efficiency maximization of full-duplex and half-duplex D2D communications underlaying cellular networks. Mobile Information Systems, Oct. 2016
S Dang, G Chen, JP Coon, Outage performance analysis of full-duplex relay-assisted device-to-device systems in uplink cellular networks. IEEE Trans. Veh. Technol. 66(5), 4506–4510 (2017)
T Riihonen, S Werner, R Wichman, Hybrid full-duplex/half-duplex relaying with transmit power adaptation. IEEE Trans. Wirel. Commun. 10(9), 3074–3085 (2011)
K Yamamoto, K Haneda, H Murata, S Yoshida, Optimal transmission scheduling for a hybrid of full- and half-duplex relaying. IEEE Commun. Lett. 15(3), 305–307 (2011)
V Aggarwal, NK Shankaranarayanan, Performance of a random-access wireless network with a mix of full- and half-duplex stations. IEEE Commun. Lett. 17(11), 2200–2203 (2013)
J Lee, TQS Quek, Hybrid full-/half-duplex system analysis in heterogeneous wireless networks. IEEE Trans. Wirel. Commun. 14(5), 2883–2895 (2015)
TX Zheng, HM Wang, J Yuan, Z Han, MH Lee, Physical layer security in wireless ad hoc networks under a hybrid full-/half-duplex receiver deployment strategy. IEEE Trans. Wirel. Commun. 16(6), 3827–3839 (2017)
W Tang, S Feng, Y Liu, Y Ding, Hybrid duplex switching in heterogeneous networks. IEEE Trans. Wirel. Commun. 15(11), 7419–7431 (2016)
EE Benítez Olivo, DP Moya Osorio, H Alves, JCSS Filho, M Latva-aho, An adaptive transmission scheme for cognitive decode-and-forward relaying networks: half duplex, full duplex, or no cooperation. IEEE Trans. Wirel. Commun. 15(8), 5586–5602 (2016)
Y Li, T Wang, Z Zhao, M Peng, W Wang, Relay mode selection and power allocation for hybrid one-way/two-way half-duplex/full-duplex relaying. IEEE Commun. Lett. 19(7), 1217–1220 (2015)
M Najafi, V Jamali, R Schober, Optimal relay selection for the parallel hybrid RF/FSO relay channel: non-buffer-aided and buffer-aided designs. IEEE Trans. Commun. 65(7), 2794–2810 (2017)
K Yang, H Cui, L Song, Y Li, Efficient full-duplex relaying with joint antenna-relay selection and self-interference suppression. IEEE Trans. Wirel. Commun. 14(7), 3991–4005 (2015)
H Chen, F Zhao, A hybrid half-duplex/full-duplex transmission scheme in relay-aided cellular networks. EURASIP Journal on Wireless Communications and Networking, Jan. 2017
Y He, X Yin, H Chen, Spatiotemporal characterization of self-interference channels for 60-GHz full-duplex communication. IEEE Antennas and Wireless Propagation Letters 16, 2220–2223 (2017)
M Duarte, C Dick, A Sabharwal, Experiment-driven characterization of full-duplex wireless systems. IEEE Trans. Wirel. Commun. 11(12), 4296–4307 (2012)
G Miao, N Himayat, GY Li, S Talwar, Distributed interference-aware energy-efficient power optimization. IEEE Trans. Wirel. Commun. 10(4), 1323–1333 (2011)
AC Cirik, Y Rong, Y Hua, Achievable rates of full-duplex MIMO radios in fast fading channels with imperfect channel estimation. IEEE Trans. Signal Process. 62(15), 3874–3886 (2014)
This research was supported by the National Natural Science Foundation of China (61671165, 61471135); the Guangxi Natural Science Foundation (2015GXNSFBB139007, 2016GXNSFGA380009), the Fund of Key Laboratory of Cognitive Radio and Information Processing (Guilin University of Electronic Technology), Ministry of Education, China and the Guangxi Key Laboratory of Wireless Wideband Communication and Signal Processing (CRKL160105, CRKL170101); and the Innovation Project of GUET Graduate Education (2016YJCX91, 2017YJCX27).
Key Laboratory of Cognitive Radio and Information Processing, Guilin University of Electronic Technology, Guilin, 541004, China
Zisheng Cheng, Hongbin Chen & Feng Zhao
ZC was responsible for the mathematical derivation, numerical simulation, and paper writing. HC was responsible for the problem formulation, result discussion, and paper revision. FZ was responsible for the problem discussion, model validation, and result check. All authors read and approved the final manuscript.
Correspondence to Hongbin Chen.
Zisheng Cheng received the B.Eng. degree in communication engineering from Wuhan Luojia University, China, in June 2015 and is working towards the M.E. degree in communication and information systems from Guilin University of Electronic Technology. His research focuses on adaptive transmission in cognitive relay networks.
Hongbin Chen received the B.Eng. degree in electronic and information engineering from Nanjing University of Posts and Telecommunications, Nanjing, China, in 2004 and the Ph.D. degree in circuits and systems from South China University of Technology, Guangzhou, China, in 2009. From October 2006 to May 2008, he was a Research Assistant in the Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hong Kong. From March to April 2014, he was a Research Associate with the same department. From May 2015 to May 2016, he was a Visiting Scholar in the Department of Electrical and Computer Engineering, National University of Singapore, Singapore. He is currently a Professor in the School of Information and Communication, Guilin University of Electronic Technology, Guilin, China. His research interests include energy-efficient wireless communications.
Feng Zhao received the Ph.D. degree in communication and information systems from Shandong University, China, in 2007. Now, he is a Professor in the School of Information and Communication, Guilin University of Electronic Technology, China. His research interests include wireless communications, signal processing, and information security.
Proof of Theorem 1
From (24), the first-order derivative of R with respect to β is represented by
$$ \frac{\mathrm{d}R\left(\beta, {\beta}^{\prime}\right)}{\mathrm{d}\beta }=\frac{\frac{1}{2}{T}_1}{\ln 2\times \left(1+\frac{\beta {r}_5}{\beta {r}_5{G}_3+{\sigma}^2}\right)}\times \frac{\left({r}_5{\sigma}^2\right)}{{\left(\beta {r}_5{G}_3+{\sigma}^2\right)}^2}>0, $$
where the intermediate variable is \( {G}_3={P}_P{r}_2+{\sigma}^2 \), and
$$ {T}_1=\left(\frac{\lambda_2\Gamma}{\lambda_q{P}_P-{\lambda}_2\Gamma}\exp \left(\frac{\lambda_q{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_q\Gamma \omega }{\beta^{\prime}\Gamma}\right)-\frac{\lambda_q{P}_P}{\lambda_2\Gamma -{\lambda}_q{P}_P}\exp \left(\frac{\lambda_2{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_2\Gamma \omega }{\beta^{\prime }{P}_P}\right)-\exp \left(\frac{-{\lambda}_q\omega }{\beta^{\prime }}\right)\right). $$
The second-order derivative of R with respect to β is represented by
$$ \frac{{\mathrm{d}}^2R\left(\beta, {\beta}^{\prime}\right)}{\mathrm{d}{\beta}^2}=-\frac{\frac{T_1{r}_5{\sigma}^2}{2\ln 2}\times \left[\left(2{r_5}^2{G_3}^2+2{r_5}^2{G}_3\right)\beta +{r}_5{\sigma}^2+2{r}_5{G}_3{\sigma}^2\right]}{{\left({\left(\beta {r}_5{G}_3+{\sigma}^2\right)}^2+\beta {r}_5\left(\beta {r}_5{G}_3+{\sigma}^2\right)\right)}^2}<0. $$
From (26), R is a monotonically increasing function of β within the feasible region of β. Further, from (28), R is strictly quasi-concave in β for β ∈ [0, +∞). Hence, the proof of Theorem 1 is complete.
From (25), the first-order derivative of R with respect to β′ is represented by
$$ \frac{\mathrm{d}R\left(\beta, {\beta}^{\prime}\right)}{\mathrm{d}{\beta}^{\prime }}={f_R}^{\prime}\left({\beta}^{\prime}\right)={C}_1{f}^{\prime}\left({\beta}^{\prime}\right)+\left[-{f}^{\prime}\left({\beta}^{\prime}\right)\times {\log}_2\left(1+\mathrm{SINR}\right)+\left(1-f\left({\beta}^{\prime}\right)\right)\times \frac{{\beta^{\prime}}^2{C}_2+{\beta}^{\prime }{C}_3+{C}_4}{\ln 2\times \left(1+\mathrm{SINR}\right)}\right], $$
$$ {f}^{\prime}\left({\beta}^{\prime}\right)=\frac{\omega }{{\beta^{\prime}}^2}\left[{\lambda}_q{G}_1\exp \left(\frac{\lambda_q{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_q\Gamma \omega }{\beta^{\prime}\Gamma}\right)-\frac{\lambda_2\Gamma}{P_P}{G}_2\exp \left(\frac{\lambda_2{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_2\Gamma \omega }{\beta^{\prime }{P}_P}\right)-{\lambda}_q\exp \left(\frac{-{\lambda}_q\omega }{\beta^{\prime }}\right)\right], $$
$$ f\left({\beta}^{\prime}\right)={G}_1\exp \left(\frac{\lambda_q{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_q\Gamma \omega }{\beta^{\prime}\Gamma}\right)-{G}_2\exp \left(\frac{\lambda_2{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_2\Gamma \omega }{\beta^{\prime }{P}_P}\right)-\exp \left(\frac{-{\lambda}_q\omega }{\beta^{\prime }}\right), $$
$$ {\displaystyle \begin{array}{l}{C}_1={R}_{half},\\ {}{C}_2=\left(2{r_5}^2{r}_q{G_3}^2\omega +{r}_5{r}_q{\omega \sigma}^2-{r}_5{r_q}^2{G}_3{\omega}^2-{r_5}^2{G}_3{\omega}^2\right),\\ {}{C}_3=\left(2{r}_5{r}_q{G}_3{\omega \sigma}^2+{r_5}^2{G}_3{\omega}^2-{r}_5{r}_q{\omega \sigma}^2-2{r_5}^2{r}_q{G_3}^2\omega \right),\kern0.5em {C}_4={r}_5{\omega}^2{\sigma}^2,\\ {}\mathrm{SINR}=\left({t}_1{\beta^{\prime}}^2+{t}_2{\beta}^{\prime}\right)/\left({t}_3{\beta^{\prime}}^2+{t}_4{\beta}^{\prime }+{C}_5\right),{t}_1={r}_5{r}_q{G}_3,\kern0.5em {t}_2={r}_5\omega, \kern0.5em {t}_3={r}_5{r}_q{G_3}^2,\\ {}{t}_4=\left({r}_q{r}_3{\sigma}^2+{r}_5{G}_3\omega -{r}_3{G}_3-{\sigma}^2{r}_q\right),\kern0.5em {C}_5=\omega \left({r}_3{G}_3+{\sigma}^2-{r}_3{\sigma}^2\right).\end{array}} $$
The second-order derivative of R with respect to β′ is represented by
$$ \frac{{\mathrm{d}}^2R\left(\beta, {\beta}^{\prime}\right)}{\mathrm{d}{\left({\beta}^{\prime}\right)}^2}=-\frac{f_1\left({\beta}^{\prime}\right)}{\ln 2\times \left(1+\mathrm{SINR}\right)}-{f}_2\left({\beta}^{\prime}\right)\times {f}^{{\prime\prime}}\left({\beta}^{\prime}\right)<0, $$
$$ {\displaystyle \begin{array}{l}\ {f}_1\left({\beta}^{\prime}\right)=\left[\left(2{C}_2{\beta}^{\prime }+{C}_3\right)+{f}^{{\prime\prime}}\left({\beta}^{\prime}\right)\times \left({\beta^{\prime}}^2{C}_2+{\beta}^{\prime }{C}_3+{C}_4\right)+\left({f}^{\prime}\left({\beta}^{\prime}\right)\times \mathrm{SIN}{\mathrm{R}}_{\beta^{\prime}}^{\prime}\right)\right]\times \left(1+\mathrm{SINR}\right)+\left({\beta^{\prime}}^2{C}_2+{\beta}^{\prime }{C}_3+{C}_4\right)\mathrm{SIN}{\mathrm{R}}_{\beta^{\prime}}^{\prime }>0,\\ {}\ {f}_2\left({\beta}^{\prime}\right)=\left({\log}_2\left(1+\mathrm{SINR}\right)-{C}_1\right)>0,\kern1em \mathrm{SIN}{\mathrm{R}}_{\beta^{\prime}}^{\prime }=\frac{\left({t}_1{t}_4-{t}_2{t}_3\right){\beta^{\prime}}^2+2{t}_1{C}_5{\beta}^{\prime }+{t}_2{C}_5}{{\left({t}_3{\beta^{\prime}}^2+{t}_4{\beta}^{\prime }+{C}_5\right)}^2}>0,\\ {}{f}^{{\prime\prime}}\left({\beta}^{\prime}\right)=\frac{\omega }{{\beta^{\prime}}^4}\times \left[{\lambda}_q{G}_1\left({\lambda}_q{G}_1\omega -2{\beta}^{\prime}\right)\exp \left(\frac{\lambda_q{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_q\Gamma \omega }{\beta^{\prime}\Gamma}\right)-\frac{\lambda_2\Gamma {G}_2}{P_P}\left(\frac{\lambda_2\Gamma \omega -{P}_P}{P_P}\right)\exp \left(\frac{\lambda_2{\beta}^{\prime}\left({\sigma}^2+1\right)-{\lambda}_2\Gamma \omega }{\beta^{\prime }{P}_P}\right)-{\lambda}_q\left(\omega -1\right)\exp \left(\frac{-{\lambda}_q\omega }{\beta^{\prime }}\right)\right]>0.\end{array}} $$
From the above derivation, it is easy to see that \( \underset{\beta^{\prime}\to 0}{\lim }{f_R}^{\prime}\left({\beta}^{\prime}\right)=\frac{C_4}{\ln 2}>0 \) and \( \underset{\beta^{\prime}\to +\infty }{\lim }{f_R}^{\prime}\left({\beta}^{\prime}\right)<0 \). Thus, we have \( {f_R}^{\prime}\left(+\infty\right) < {f_R}^{\prime}\left({\beta}^{\prime}\right) < {f_R}^{\prime}(0) \), ∀β′ ∈ [0, +∞). There exists a single value of β′, denoted β′∗, such that \( {f_R}^{\prime}\left({\beta^{\prime}}^{\ast}\right)=0 \). It follows that when β′ < β′∗, dR(β, β′)/dβ′ > 0, and when β′ > β′∗, dR(β, β′)/dβ′ < 0. That is, R(β, β′) first increases and then decreases as β′ increases. Thus, R(β, β′) is strictly quasi-concave in β′ for β′ ∈ [0, +∞). Hence, the proof of Theorem 2 is complete.
Cheng, Z., Chen, H. & Zhao, F. Opportunistic half-duplex/full-duplex relaying mode selection criterion in cognitive relay networks. J Wireless Com Network 2018, 47 (2018). https://doi.org/10.1186/s13638-018-1051-3
Cognitive relay network
Self-interference
Power control
Would a fast inter-stellar spaceship benefit from an aerodynamic shape?
Some (generous) assumptions:
We have a spaceship that can reach a reasonable fraction of light speed.
The ship is able to withstand the high energies of matter impacting at that speed.
Given the amount of matter in inter-stellar space, at high speed, would it encounter enough of it and frequently enough that an aerodynamic shape would significantly reduce its drag (and thus save fuel)?
space drag
For the sorts of vehicles we're used to, like cars and aeroplanes, there are two contributions to drag. There's the drag caused by turbulence, and the drag caused by the effort of pushing the air out of the way. The streamlining in cars and aeroplanes is designed to reduce the drag due to turbulence. The effort of pushing the air out of the way is basically down to the cross sectional area of whatever is pushing its way through the air.
Turbulence requires energy transfer between gas molecules, so you can't get turbulence on length scales shorter than the mean free path of the gas molecules. The Wikipedia article on mean free paths helpfully lists values of the mean free path for the sort of gas densities you get in space. The gas density is very variable, ranging from $10^6$ molecules per cm$^3$ in nebulae to (much) less than one molecule per cm$^3$ in intergalactic space, but if we take the value of $10^4$ in the table on Wikipedia the mean free path is 100,000km. So unless your spaceship is very big indeed we can ignore drag due to turbulence.
A sidenote: turbulence is extremely important in nebulae, and a quick glance at any of the Hubble pictures of nebulae shows turbulent motion. However the length scale of the turbulence is of the order of light years, so it's nothing for a spaceship to worry about.
So your spaceship designer doesn't have to worry about the sort of streamlining used in aeroplanes, but what about the drag due to hitting gas molecules? Let's start with a non-relativistic calculation, say at 0.5c, and use the density of $10^4$ I mentioned above, and let's suppose that the gas is atomic hydrogen. If the mass per cubic metre is $\rho$ and you're travelling at a speed $v$ m/sec then the mass you hit per second is:
$$ m = \rho v $$
Suppose when you hit the gas molecules you accelerate them to match your speed, then the rate of change of momentum is this mass times your speed, $v$, and the rate of change of momentum is just the force so:
$$ F = \rho v^2 $$
A density of $10^4$ atoms/cm$^3$ is $10^{10}$ per m$^3$, which is about $1.7 \times 10^{-17}$ kg/m$^3$, and 0.5c is $1.5 \times 10^8$ m/sec, so $F$ is about 0.4 N per square metre.
So unless your spaceship is very big the drag from hitting atoms is insignificant as well, so not only do you not worry about streamlining, you don't have to worry about the cross section either. However so far I've only talked about non-relativistic speeds, and at relativistic speeds you get two effects:
the gas density goes up due to Lorentz contraction
the relativistic mass of the hydrogen atoms goes up so it gets increasingly harder to accelerate them to match your speed
These two effects add a factor of $\gamma^2$ to the equation for the force:
$$ F = \rho v^2 \gamma^2 $$
so if you take v = 0.999c then you get $F$ of roughly 750 N/m$^2$, which is starting to become significant for a large ship. And since $\gamma$ increases without limit as you approach the speed of light, eventually the drag will be enough to stop you accelerating any more.
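As a quick sanity check on these figures, here is a short Python calculation using the hydrogen-atom mass and the density assumed above (the constants are rounded, so the outputs are approximate).

import math

m_H = 1.67e-27            # mass of a hydrogen atom, kg
n = 1e4 * 1e6             # 10^4 atoms per cm^3 expressed per m^3
rho = n * m_H             # ~1.7e-17 kg/m^3
c = 3.0e8                 # speed of light, m/s

v = 0.5 * c
print(rho * v**2)         # ~0.4 N per square metre (non-relativistic estimate)

v = 0.999 * c
gamma = 1.0 / math.sqrt(1.0 - 0.999**2)
print(rho * v**2 * gamma**2)   # ~750 N per square metre (relativistic estimate)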
Incidentally, if you have a friendly university library to hand have a look at Powell, C. (1975) Heating and Drag at Relativistic Speeds. J. British Interplanetary Soc., 28, 546-552. Annoyingly, I have Googled in vain for an online copy.
John Rennie
$\begingroup$ so extremely fast spaceships should be needle-shaped to minimize cross section? $\endgroup$ – endolith Jul 17 '13 at 14:33
$\begingroup$ Unless you're travelling very, very close to the speed of light the shape of the spaceship makes little difference. $\endgroup$ – John Rennie Jul 17 '13 at 14:40
$\begingroup$ Very interesting answer -- you mention the gas density increasing due to Lorentz contraction. Is it possible that the density increase (and hence the mean free path decrease) makes the turbulent length scales small enough to matter? Or does the Knudsen number stay constant regardless of velocity? $\endgroup$ – tpg2114♦ Dec 31 '13 at 1:29
$\begingroup$ Sorry, but based on my understanding you have a number of things wrong. Firstly, the breakdown into "drag caused by the effort of pushing air out of the way" and "drag caused by turbulence" is wrong. Yes, we have form drag, but the other component of drag (skin friction) would happen regardless of whether we have turbulence (and in fact would be much worse in many cases if you didn't have turbulence). Secondly, aerodynamic design addresses all kinds of drag, and in fact form drag is one of the primary issues it seeks to address, so saying "streamlining is to address turbulence" is wrong. $\endgroup$ – Asad Saeeduddin Dec 21 '15 at 21:33
$\begingroup$ @AsadSaeeduddin: well yes, but when answering questions we have to judge at what level to pitch the answer, and in any case turbulence isn't relevant here because the mean free path is too long. A detailed discussion of turbulent drag would be a diversion. Please feel free to post your own answer if you feel mine doesn't adequately address the question. $\endgroup$ – John Rennie Dec 22 '15 at 6:11
At that speed an aerodynamic shape would be unimportant, since almost all particles would penetrate the hull. What matters more is the total cross-sectional area perpendicular to the velocity (how many particles per second collide with the ship). So I assume that the ship should have a torpedo-like shape to reduce the total cross-sectional area.
More probably, some kind of electromagnetic shielding would be used (to protect the crew), but in that case the aerodynamics of the ship is also unimportant.
Don't forget that, according to Special Relativity, the dimension along the direction of motion is contracted at such speeds, so an aerodynamic shape becomes less effective (and harder to achieve).
Marek R
Yes, this shape would be good, but not for aerodynamic reasons. As the others have commented, there's not much matter in your path. However, the matter that is in your way really hits your hull hard.
At that point, look at tank designs. Since WW2, tanks have had sloped armor: armor which sits at an angle to incoming shells. This increases the effective armor thickness by a factor of 1/cos(θ); for example, a plate whose normal is tilted 60° away from the shell's path presents twice its actual thickness. An aerodynamic shape for a spaceship achieves a similar effect.
So, essentially your second assumption is dependent on this shape.
MSalters
No, in fact, you can see why even at more mundane speeds.
The drag that you're typically familiar with on cars and planes is happening at speeds well below the speed of sound. Remember what "speed of sound" actually is, it's the speed at which one air molecule can bump into the next to pass a wave through the air.
Why is that important? Because when you're going slowly, that means that when you, a plane say, impact the air, that molecule bumps into another one in front of it, and so on... causing the entire air mass in the area to sort of "get out of the way". But it can only do that so much, so if you try to push a pie plate through the air face-on, the air can't move sideways fast enough and just piles up, and you get lots of drag. But turn it sideways and now most of it can get out of the way in time, and now you have a Frisbee.
If you look at this at a macroscopic level, the result is a series of "streamlines" that the air follows under a given set of conditions. If you design your aircraft to follow those lines, you minimize the number of parts of the aircraft that impact with the air, and thereby reduce drag. So, for instance, you might find that narrowing the fuselage just so causes the air to separate ever so slightly behind the door (for instance) and thus have less drag on the rear fuselage. Until we had fast computers this was as much an art as science, but now we solve everything by simulating the crap out of it.
Ok, now what's that got to do with anything? Well when you start getting up near the speed of sound, all of this goes out of the window.
At supersonic speeds, the air molecules literally cannot get out of the way before you hit them. So every single molecule in front of you hits you. The key to lowering drag at supersonic speeds is to simply reduce your cross section as much as possible, which is why things like the F-104 and Concorde look like darts. Streamlining works very differently now, and it's perhaps not even accurate to call it streamlining.
There is an effect even at supersonic speeds that comes into play, and that's shock waves. These do indeed travel faster than the speed of sound, but they don't really "move" the air as much as just shake it. At a simple level, the shock waves have lower speed behind them, so the trick is to put something out in front of the aircraft and try to keep as much of the rest behind it. That's why supersonic aircraft have sharp noses, they're trying to generate a shock wave they can "hide behind".
Ok, now what does any of this have to do with your question? Well, you're talking about a vehicle moving at speed way beyond any sort of inter-molecular interactions. So the basic idea of streamlining is just not going to work, and the idea of using shock waves won't either because there's just not enough particles to create one.
So then the answer pops out - the key is to reduce cross-section, and that's basically it.
Maury Markowitz
7 Approximate Methods for Solving One-Particle Schrödinger Equations
7.1 Expansion in a Basis
7.1.1 Solving the Secular Equation
7.1.2 Example for the Particle-in-a-Box
7.1.3 Particle-in-a-Box with Jacobi polynomials
7.1.3.1 🤔 Thought-Provoking Question: Why does adding odd-order polynomials to the basis set not increase the accuracy for the ground state wavefunction?
7.1.3.2 🤔 Thought-Provoking Question: Why does one get exactly the same results for the Jacobi polynomials and the simpler $(1-x)(1+x)x^k$ polynomials?
7.2 Perturbation Theory
7.2.1 The Perturbed Hamiltonian
7.2.2 Hellmann-Feynman Theorem
7.2.2.1 Derivation of the Hellmann-Feynman Theorem by Differentiation Under the Integral Sign
7.2.2.2 Derivation of the Hellmann-Feynman Theorem from First-Order Perturbation Theory
7.2.3 Perturbed Wavefunctions
7.2.4 The Law of Diminishing Returns and Accelerating Losses
7.2.5 Example: Particle in a Box with a Sloped Bottom
7.2.5.1 The Hamiltonian for an Applied Uniform Electric Field
7.2.5.2 Perturbation Theory for the Particle-in-a-Box in a Uniform Electric Field
7.2.5.3 Variational Approach to the Particle-in-a-Box in a Uniform Electric Field
7.2.5.4 Basis-Set Expansion for the Particle-in-a-Box in a Uniform Electric Field
7.2.5.5 Demonstration
7.3 🪞 Self-Reflection
7.4 🤔 Thought-Provoking Questions
7.5 🔁 Recapitulation
7.6 🔮 Next Up...
7.7 📚 References
Approximate Methods for Solving One-Particle Schrödinger Equations
Up to this point, we've focused on systems for which we can solve the Schrödinger equation. Unfortunately, there are very few such systems, and their relevance for real chemical systems is very limited. This motivates approximate methods for solving the Schrödinger equation. One must be careful, however: if one makes poor assumptions, the results of approximate methods can be very poor. Conversely, with appropriate insight, approximation techniques can be extremely useful.
Expansion in a Basis
We have seen the eigenvectors of a Hermitian operator are a complete basis, and can be chosen to be orthonormal. We have also seen how a wavefunction can be expanded in a basis, $$ \Psi(x) = \sum_{k=0}^{\infty} c_k \phi_k(x) $$ Note that there is no requirement that the basis set, $\{\phi_k(x) \}$ be eigenvectors of a Hermitian operator: all that matters is that the basis set is complete. For real problems, of course, one can choose only a finite number of basis functions, $$ \Psi(x) \approx \sum_{k=0}^{N_{\text{basis}}} c_k \phi_k(x) $$ but as the number of basis functions, $N_{\text{basis}}$, increases, results should become increasingly accurate.
Substituting this expression for the wavefunction into the time-independent Schrödinger equation, $$ \hat{H} \Psi(x) = \hat{H} \sum_{k=0}^{\infty} c_k \phi_k(x) = E \sum_{k=0}^{\infty} c_k \phi_k(x) $$ Multiplying on the left by $\left(\phi_j(x) \right)^*$ and integrating over all space,
$$ \sum_{k=0}^{\infty} \left[\int \left(\phi_j(x) \right)^* \hat{H} \phi_k(x) dx \right] c_k = E \sum_{k=0}^{\infty}\left[ \int \left(\phi_j(x) \right)^* \phi_k(x) dx\right] c_k $$ At this stage we usually define the Hamiltonian matrix, $\mathbf{H}$, as the matrix with elements $$ h_{jk} = \int \left(\phi_j(x) \right)^* \hat{H} \phi_k(x) dx $$ and the overlap matrix, $\mathbf{S}$ as the matrix with elements $$ s_{jk} = \int \left(\phi_j(x) \right)^* \phi_k(x) dx $$ If the basis is orthonormal, then the overlap matrix is equal to the identity matrix, $\mathbf{S} = \mathbf{I}$ and its elements are therefore given by the Kronecker delta, $s_{jk} = \delta_{jk}$.
The Schrödinger equation therefore can be written as a generalized matrix eigenvalue problem: $$ \mathbf{Hc}=E\mathbf{Sc} $$ or, in element-wise notation, as: $$ \sum_{k=0}^{\infty} h_{jk} c_k = E \sum_{k=0}^{\infty} s_{jk} c_k $$ In the special case where the basis functions are orthonormal, $\mathbf{S} = \mathbf{I}$ and this is an ordinary matrix eigenvalue problem, $$ \mathbf{Hc}=E\mathbf{c} $$ or, in element-wise notation, as: $$ \sum_{k=0}^{\infty} h_{jk} c_k = E c_j $$
Solving the Secular Equation
In the context of quantum chemistry, the generalized eigenvalue problem $$ \mathbf{Hc}=E\mathbf{Sc} $$ is called the secular equation. To solve the secular equation:
Choose a basis, $\{|\phi_k\rangle \}$ and a basis-set size, $N_{\text{basis}}$
Evaluate the matrix elements of the Hamiltonian and the overlap matrix \begin{align} h_{jk} &= \int \left(\phi_j(x) \right)^* \hat{H} \phi_k(x) dx \qquad \qquad 0 \le j,k \le N_{\text{basis}} \\ s_{jk} &= \int \left(\phi_j(x) \right)^* \phi_k(x) dx \end{align}
Solve the generalized eigenvalue problem $$ \sum_{k=0}^{\infty} h_{jk} c_k = E \sum_{k=0}^{\infty} s_{jk} c_k $$
Because of the variational principle, the lowest eigenvalue will always be greater than or equal to the true ground-state energy.
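As a minimal illustration of step 3 (independent of any particular basis set), the snippet below solves a generalized eigenvalue problem with scipy for a hand-made 2×2 Hamiltonian and overlap matrix; the numbers are arbitrary and only serve to show the call.

import numpy as np
from scipy.linalg import eigh

h = np.array([[1.0, 0.2],
              [0.2, 2.0]])        # a made-up Hermitian "Hamiltonian" matrix
s = np.array([[1.0, 0.1],
              [0.1, 1.0]])        # a made-up symmetric, positive-definite overlap matrix

energies, coefficients = eigh(h, s)   # solves Hc = ESc
print(energies)                       # the lowest value upper-bounds the true ground-state energy
print(coefficients[:, 0])             # expansion coefficients of the approximate ground state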
Example for the Particle-in-a-Box
As an example, consider an electron confined to a box with length 2 Bohr, stretching from $x=-1$ to $x=1$. We know that the exact energy of this system is $$E=\tfrac{(\pi n)^2}{8}$$ The exact wavefunctions are easily seen to be $$\psi_n(x) = \begin{cases} \cos\left(\tfrac{n \pi x}{2}\right) & n=1,3,5,\ldots \\ \sin\left(\tfrac{n \pi x}{2}\right) & n=2,4,6,\ldots \end{cases} $$
However, for pedagogical purposes, suppose we did not know these answers. We know that the wavefunction will be zero at $x= \pm1$, so we might hypothesize a basis like: $$ \phi_k(x) = (x-1)(x+1)x^k = x^{k+2} - x^{k} \qquad \qquad k=0,1,2,\ldots $$ The overlap matrix elements are
\begin{align} s_{jk} &= \int_{-1}^{1} \left(\phi_j(x) \right)^* \phi_k(x) dx \\ &= \int_{-1}^{1} \left(x^{j+2}-x^{j}\right) \left(x^{k+2} - x^{k}\right) dx \\ &= \int_{-1}^{1} \left(x^{j+k+4}+x^{j+k} - 2 x^{j+k+2}\right) dx \\ &= \left[\frac{x^{k+j+5}}{k+j+5} + \frac{x^{k+j+1}}{k+j+1} - 2\frac{x^{k+j+3}}{k+j+3} \right]_{-1}^{+1} \end{align}
This integral is zero when $k+j$ is odd. Specifically, $$ s_{jk} = \begin{cases} 0 & j+k \text{ is odd}\\ 2\left(\frac{1}{k+j+5} - \frac{2}{k+j+3} + \frac{1}{k+j+1} \right) & j+k \text{ is even} \end{cases} $$ and the Hamiltonian matrix elements are
\begin{align} h_{jk} &= \int_{-1}^{1} \left(\phi_j(x) \right)^* \hat{H} \phi_k(x) dx \\ &= \int_{-1}^{1} \left(x^{j+2}-x^{j}\right) \left(-\tfrac{1}{2}\tfrac{d^2}{dx^2}\right) \left(x^{k+2} - x^{k}\right) dx \\ &= -\tfrac{1}{2}\int_{-1}^{1} \left(x^{j+2}-x^{j}\right) \left((k+2)(k+1)x^{k} - (k)(k-1)x^{k-2}\right) dx \\ &= -\tfrac{1}{2}\int_{-1}^{1} \left((k+2)(k+1)x^{k+j+2} + (k)(k-1)x^{k+j-2} -\left[(k+2)(k+1) + k(k-1) \right]x^{k+j} \right) dx \\ &= -\tfrac{1}{2}\left[\left(\frac{(k+2)(k+1)}{k+j+3}x^{k+j+3} + \frac{(k)(k-1)}{k+j-1}x^{k+j-1} - \frac{(k+2)(k+1) + k(k-1)}{k+j+1}x^{k+j+1} \right) \right]_{-1}^{+1} \end{align}
This integral is also zero when $k+j$ is odd. Specifically, $$ h_{jk} = \begin{cases} 0 & j+k \text{ is odd}\\ -\left(\frac{(k+2)(k+1)}{k+j+3} - \frac{(k+2)(k+1) + k(k-1)}{k+j+1} + \frac{(k)(k-1)}{k+j-1} \right) & j+k \text{ is even} \end{cases} $$
import numpy as np
from scipy.linalg import eigh
def compute_energy_ground_state(n_basis):
"""Compute ground state energy by solving the Secular equations."""
# assign S & H to zero matrices
s = np.zeros((n_basis, n_basis))
h = np.zeros((n_basis, n_basis))
# loop over upper-triangular elements & compute S & H elements
for j in range(0, n_basis):
for k in range(j, n_basis):
if (j + k) % 2 == 0:
s[j, k] = s[k, j] = 2 * (1 / (k + j + 5) - 2 / (k + j + 3) + 1 / (k + j + 1))
h[j, k] = h[k, j] = -1 * (((k + 2) * (k + 1)) / (k + j + 3) - ((k + 2) * (k + 1) + k * (k - 1)) / (k + j + 1) + (k**2 - k) / (k + j - 1))
# solve Hc = ESc to get eigenvalues E
e_vals = eigh(h, s, eigvals_only=True)
return e_vals[0]
# plot basis set convergence of Secular equations
# -----------------------------------------------
import matplotlib.pyplot as plt

# evaluate energy for a range of basis functions
n_values = np.arange(2, 11, 1)
e_values = np.array([compute_energy_ground_state(n) for n in n_values])
expected_energy = (1 * np.pi)**2 / 8.

plt.rcParams['figure.figsize'] = [15, 8]
fig, axes = plt.subplots(1, 2)
fig.suptitle("Basis Set Convergence of Secular Equations", fontsize=24, fontweight='bold')

for index, axis in enumerate(axes.ravel()):
    if index == 0:
        # plot approximate & exact energy
        axis.plot(n_values, e_values, marker='o', linestyle='--', label='Approximate')
        axis.plot(n_values, np.repeat(expected_energy, len(n_values)), marker='', linestyle='-', label='Exact')
        axis.set_ylabel("Ground-State Energy [a.u.]", fontsize=12, fontweight='bold')
        axis.legend(frameon=False, fontsize=14)
    else:
        # plot log of approximate energy error (skip the last two values because they are zero)
        axis.plot(n_values[:-2], np.log10(e_values[:-2] - expected_energy), marker='o', linestyle='--')
        axis.set_ylabel("Log10 (Ground-State Energy Error [a.u.])", fontsize=12, fontweight='bold')
    # set axes labels
    axis.set_xlabel("Number of Basis Functions", fontsize=12, fontweight='bold')
Particle-in-a-Box with Jacobi polynomials
Similar results can be obtained with different basis functions. It is often convenient to use an orthonormal basis, where $s_{jk} = \delta_{jk}$. For the particle-in-a-box with $-1 \le x \le 1$, one such set of basis functions can be constructed from the (normalized) Jacobi polynomials, $$ \phi_j(x) = N_j(1-x)(1+x)P_j^{(2,2)}(x) $$ where $N_j$ is the normalization constant $$ N_j = \sqrt{\frac{(2j+5)(j+4)(j+3)}{32(j+2)(j+1)}} $$ To evaluate the Hamiltonian it is useful to know that: \begin{align} \frac{d^2\phi_j(x)}{dx^2} &= N_j \left(-2 P_j^{(2,2)}(x) - 4x \frac{d P_j^{(2,2)}(x)}{dx} + (1-x)(1+x)\frac{d^2 P_j^{(2,2)}(x)}{dx^2} \right) \\ &= N_j \left(-2 P_j^{(2,2)}(x) - 4x \frac{j+5}{2} P_{j-1}^{(3,3)}(x) + (1-x^2)\frac{(j+5)(j+6)}{4}P_{j-2}^{(4,4)}(x) \right) \end{align} The Hamiltonian matrix elements could be evaluated analytically, but the expression is pretty complicated. It's easier to merely evaluate them numerically as: $$ h_{jk} = -\frac{1}{2}N_j N_k \int_{-1}^1 (1-x)(1+x) P_k^{(2, 2)}(x) \left(-2 P_j^{(2,2)}(x) - 4x \frac{j+5}{2} P_{j-1}^{(3,3)}(x) + (1-x^2)\frac{(j+5)(j+6)}{4}P_{j-2}^{(4,4)}(x) \right) dx $$
from scipy.special import eval_jacobi
from scipy.integrate import quad

def compute_energy_ground_state(n_basis):
    """Compute ground state energy for a particle-in-a-Box with Jacobi basis."""

    def normalization(i):
        return np.sqrt((2 * i + 5) * (i + 4) * (i + 3) / (32 * (i + 2) * (i + 1)))

    def phi_squared(x, j):
        return (normalization(j) * (1 - x) * (1 + x) * eval_jacobi(j, 2, 2, x))**2

    def integrand(x, j, k):
        term = -2 * eval_jacobi(j, 2, 2, x)
        if j - 1 >= 0:
            term -= 2 * x * (j + 5) * eval_jacobi(j - 1, 3, 3, x)
        if j - 2 >= 0:
            term += 0.25 * (1 - x**2) * (j + 5) * (j + 6) * eval_jacobi(j - 2, 4, 4, x)
        return (1 - x) * (1 + x) * eval_jacobi(k, 2, 2, x) * term

    # assign H to a zero matrix
    h = np.zeros((n_basis, n_basis))
    # compute H elements
    for j in range(n_basis):
        for k in range(n_basis):
            integral = quad(integrand, -1.0, 1.0, args=(j, k))[0]
            h[j, k] = -0.5 * normalization(j) * normalization(k) * integral
    # solve Hc = Ec to get eigenvalues E
    e_vals = eigh(h, None, eigvals_only=True)
    return e_vals[0]
# plot basis set convergence of particle-in-a-Box with Jacobi basis
# -----------------------------------------------------------------
# evaluate energies for a range of basis functions, then plot them as in the previous cell
e_values = np.array([compute_energy_ground_state(n) for n in n_values])
fig, axes = plt.subplots(1, 2)
fig.suptitle("Basis Set Convergence of Particle-in-a-Box with Jacobi Basis", fontsize=24, fontweight='bold')
🤔 Thought-Provoking Question: Why does adding odd-order polynomials to the basis set not increase the accuracy for the ground state wavefunction?
Hint: The ground state wavefunction is an even function. A function is said to be even if it is symmetric about the origin, $f(x) = f(-x)$. A function is said to be odd if it is antisymmetric around the origin, $f(x) = - f(-x)$. Even-degree polynomials (e.g., $1, x^2, x^4, \ldots$) are even functions; odd-degree polynomials (e.g.; $x, x^3, x^5, \ldots$) are odd functions. $\cos(ax)$ is an even function and $\sin(ax)$ is an odd function. $\cosh(ax)$ is an even function and $\sinh(ax)$ is an odd function. In addition,
A linear combination of odd functions is also odd.
A linear combination of even functions is also even.
The product of two odd functions is even.
The product of two even functions is even.
The product of an odd and an even function is odd.
The integral of an odd function from $-a$ to $a$ is always zero.
The integral of an even function from $-a$ to $a$ is always twice the value of its integral from $0$ to $a$; it is also twice its integral from $-a$ to $0$.
The first derivative of an even function is odd.
The first derivative of an odd function is even.
The k-th derivative of an even function is odd if k is odd, and even if k is even.
The k-th derivative of an odd function is even if k is odd, and odd if k is even.
These properties of odd and even functions are often very useful. In particular, the first and second properties indicate that if you know that the exact wavefunction you are looking for is odd (or even), it will be a linear combination of basis functions that are odd (or even). E.g., odd basis functions are useless for approximating even eigenfunctions.
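As a small numerical check of this hint, the snippet below uses the simple basis functions from earlier, $\phi_k(x) = (1-x)(1+x)x^k$, together with the exact even ground state $\cos(\pi x/2)$: the overlap of the odd $k=1$ basis function with the ground state vanishes, while the even $k=0$ basis function has a nonzero overlap.

import numpy as np
from scipy.integrate import quad

ground_state = lambda x: np.cos(np.pi * x / 2)      # exact ground state: an even function
phi_odd = lambda x: (1 - x) * (1 + x) * x           # k = 1 basis function: odd
phi_even = lambda x: (1 - x) * (1 + x)              # k = 0 basis function: even

print(quad(lambda x: phi_odd(x) * ground_state(x), -1, 1)[0])   # ~0 (odd integrand over [-1, 1])
print(quad(lambda x: phi_even(x) * ground_state(x), -1, 1)[0])  # nonzero (even integrand)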
🤔 Thought-Provoking Question: Why does one get exactly the same results for the Jacobi polynomials and the simpler $(1-x)(1+x)x^k$ polynomials?
Hint: Can you rewrite one set of polynomials as a linear combination of the others?
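One way to see this numerically: each Jacobi-based basis function $(1-x)(1+x)P_j^{(2,2)}(x)$ is a polynomial of degree $j+2$ that vanishes at $x=\pm 1$, so it can be written exactly as a linear combination of the simpler functions $(1-x)(1+x)x^k$ with $k \le j$ (and vice versa); the two basis sets span the same space, so the secular equations give the same eigenvalues. The least-squares fit below recovers such a combination to machine precision for $j=3$; the choice of $j$ and the sampling grid are arbitrary.

import numpy as np
from scipy.special import eval_jacobi

j = 3
x = np.linspace(-1, 1, 200)
target = (1 - x) * (1 + x) * eval_jacobi(j, 2, 2, x)

# columns are the simple basis functions (1 - x)(1 + x) x^k for k = 0, ..., j
A = np.column_stack([(1 - x) * (1 + x) * x**k for k in range(j + 1)])
coefficients, *_ = np.linalg.lstsq(A, target, rcond=None)
print(coefficients)                                   # expansion coefficients
print(np.max(np.abs(A @ coefficients - target)))      # ~0: the two bases span the same space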
Perturbation Theory
It is not uncommon that a Hamiltonian for which the Schrödinger equation is difficult to solve is "close" to another Hamiltonian that is easier to solve. In such cases, one can attempt to solve the easier problem, then perturb the system towards the actual, more difficult to solve, system of interest. The idea of leveraging easy problems to solve difficult problems is the essence of perturbation theory.
The Perturbed Hamiltonian
Suppose that for some Hamiltonian, $\hat{H}$, we know the eigenfunctions and eigenvalues, $$ \hat{H} |\psi_k \rangle = E_k |\psi_k \rangle $$ However, we are not interested in this Hamiltonian, but a different Hamiltonian, $\tilde{H}$, which we can write as: $$ \tilde{H} = \hat{H} + \hat{V} $$ where obviously $$ \hat{V} = \tilde{H} - \hat{H} $$
Let us now define a family of perturbed Hamiltonians, $$ \hat{H}(\lambda) = \hat{H} + \lambda \hat{V} $$ where obviously: $$ \hat{H}(\lambda) = \begin{cases} \hat{H} & \lambda = 0\\ \tilde{H} & \lambda = 1 \end{cases} $$ Writing the Schrödinger equation for $\hat{H}_\lambda$, we have: $$ \hat{H}(\lambda) |\psi_k(\lambda) \rangle = E_k(\lambda) |\psi_k(\lambda) \rangle $$ This equation holds true for all values of $\lambda$. Since we know the answer for $\lambda = 0$, and we assume that the perturbed system described by $\tilde{H}$ is close enough to $\hat{H}$ for the solution at $\lambda =0$ to be useful, we will write the expand the energy and wavefunction as Taylor-MacLaurin series
\begin{align} E_k(\lambda) &= E_k(\lambda=0) + \lambda \left[\frac{dE_k}{d \lambda} \right]_{\lambda=0} + \frac{\lambda^2}{2!} \left[\frac{d^2E_k}{d \lambda^2} \right]_{\lambda=0} + \frac{\lambda^3}{3!} \left[\frac{d^3E_k}{d \lambda^3} \right]_{\lambda=0} + \cdots \\ |\psi_k(\lambda) \rangle &= |\psi_k(\lambda=0) \rangle + \lambda \left[\frac{d|\psi_k \rangle}{d \lambda} \right]_{\lambda=0} + \frac{\lambda^2}{2!} \left[\frac{d^2|\psi_k \rangle}{d \lambda^2} \right]_{\lambda=0} + \frac{\lambda^3}{3!} \left[\frac{d^3|\psi_k \rangle}{d \lambda^3} \right]_{\lambda=0} + \cdots \end{align}
When we write this, we are implicitly assuming that the derivatives all exist, which is not true if the zeroth-order state is degenerate (unless the perturbation does not break the degeneracy).
If we insert these expressions into the Schrödinger equation for $\hat{H}(\lambda)$, we obtain a polynomial of the form: $$ 0=p(\lambda)= a_0 + a_1 \lambda + a_2 \lambda^2 + a_3 \lambda^3 + \cdots $$ This equation can only be satisfied for all $\lambda$ if all its terms are zero, so $$ 0 = a_0 = a_1 = a_2 = \cdots $$ The key equations that need to be solved are listed below. First there is the zeroth-order equation, which is automatically satisfied: $$ 0 = a_0 = \left( \hat{H}(0) - E_k(0) \right) | \psi_k(0) \rangle $$ The first-order equation is: $$ 0 = a_1 = \left( \hat{H}(0) - E_k(0) \right) \left[\frac{d|\psi_k \rangle}{d \lambda} \right]_{\lambda=0} +\left(\hat{V} - \left[\frac{dE_k}{d \lambda} \right]_{\lambda=0}\right)|\psi_k(\lambda=0) \rangle $$ The second-order equation is: $$ 0 = a_2 = \tfrac{1}{2} \left( \hat{H}(0) - E_k(0) \right) \left[\frac{d^2|\psi_k \rangle}{d \lambda^2} \right]_{\lambda=0} +\left(\hat{V} - \left[\frac{dE_k}{d \lambda} \right]_{\lambda=0}\right)\left[\frac{d|\psi_k \rangle}{d \lambda} \right]_{\lambda=0} -\tfrac{1}{2} \left[\frac{d^2E_k}{d \lambda^2} \right]_{\lambda=0} |\psi_k(\lambda=0) \rangle $$ Higher-order equations are increasingly complicated, but still tractable in some cases. One usually applies perturbation theory only when the perturbation is relatively small, which usually suffices to ensure that the Taylor series expansion converges rapidly and higher-order terms are relatively insignificant.
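Before deriving explicit formulas for these derivatives, it is reassuring to check numerically that $E_k(\lambda)$ really is a smooth function of $\lambda$ that a few Taylor terms capture well. The sketch below uses a toy $3\times 3$ matrix model $\hat{H}(\lambda) = \hat{H}(0) + \lambda \hat{V}$ with arbitrary symmetric matrices (not a physical Hamiltonian) and estimates the $\lambda$-derivatives by finite differences.

import numpy as np
from scipy.linalg import eigh

H0 = np.diag([1.0, 2.0, 4.0])                 # toy "unperturbed" Hamiltonian
V = np.array([[0.0, 0.3, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])               # toy symmetric perturbation

def E0(lam):
    # ground-state eigenvalue of H(lambda) = H0 + lambda * V
    return eigh(H0 + lam * V, eigvals_only=True)[0]

h = 1e-3
dE = (E0(h) - E0(-h)) / (2 * h)               # finite-difference dE/dlambda at lambda = 0
d2E = (E0(h) - 2 * E0(0.0) + E0(-h)) / h**2   # finite-difference d2E/dlambda2 at lambda = 0

lam = 0.3
print(E0(lam))                                   # exact perturbed ground-state eigenvalue
print(E0(0.0) + lam * dE + 0.5 * lam**2 * d2E)   # second-order Taylor estimate: very close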
Hellmann-Feynman Theorem
The Hellmann-Feynman theorem has been discovered many times, most impressively by Richard Feynman, who included it in his undergraduate senior thesis. In simple terms:
Hellmann-Feynman Theorem: Suppose that the Hamiltonian, $\hat{H}(\lambda)$ depends on a parameter. Then the first-order change in the energy with respect to the parameter is given by the equation, $$ \left[\frac{dE}{d\lambda}\right]_{\lambda = \lambda_0} = \int \left( \psi(\lambda_0;x)\right)^* \left[\frac{d\hat{H}}{d \lambda} \right]_{\lambda = \lambda_0}\psi(\lambda_0;x) \; dx $$
Derivation of the Hellmann-Feynman Theorem by Differentiation Under the Integral Sign
The usual way to derive the Hellmann-Feynman theorem uses the technique of differentiation under the integral sign. Therefore, $$ \frac{dE}{d\lambda} = \frac{d}{d\lambda}\int \left( \psi(\lambda;x)\right)^* \hat{H}\psi(\lambda;x) \; dx = \int \frac{d\left( \psi(\lambda;x)\right)^* \hat{H}\psi(\lambda;x) }{d\lambda}\; dx $$ While such an operation is not always mathematically permissible, it is usually permissible, as should be clear from the definition of the derivative as a limit of a difference, $$ \left[\frac{dE}{d\lambda}\right]_{\lambda = \lambda_0} = \lim_{h\rightarrow0} \frac{E(\lambda_0 + h) - E(\lambda_0)}{h} $$ and the fact that the integral of a sum is the sum of the integrals. Using the product rule for derivatives, one obtains:
\begin{align} \frac{dE}{d\lambda} &= \int \frac{d\left[\left( \psi(\lambda;x)\right)^* \hat{H}\psi(\lambda;x)\right] }{d\lambda}\; dx \\ &=\int \frac{d\left(\psi(\lambda;x)\right)^*}{d\lambda} \hat{H} \psi(\lambda;x) + \left( \psi(\lambda;x)\right)^* \frac{d\hat{H}}{d \lambda}\psi(\lambda;x) + \left( \psi(\lambda;x)\right)^* \hat{H} \frac{d\psi(\lambda;x)}{d\lambda} \; dx \\ &=\int \frac{d\left(\psi(\lambda;x)\right)^*}{d\lambda} E(\lambda) \psi(\lambda;x) + \left( \psi(\lambda;x)\right)^* \frac{d\hat{H}}{d \lambda}\psi(\lambda;x) + \left( \psi(\lambda;x)\right)^* E(\lambda) \frac{d\psi(\lambda;x)}{d\lambda} \; dx \\ &=E(\lambda) \int \left(\frac{d\left(\psi(\lambda;x)\right)^*}{d\lambda} \psi(\lambda;x) + \left( \psi(\lambda;x)\right)^* \frac{d\psi(\lambda;x)}{d\lambda}\right) \; dx +\int \left( \psi(\lambda;x)\right)^* \frac{d\hat{H}}{d \lambda}\psi(\lambda;x) \; dx \\ &=\int \left( \psi(\lambda;x)\right)^* \frac{d\hat{H}}{d \lambda}\psi(\lambda;x) \; dx \end{align}
In the third-from-last line we used the eigenvalue relation and the Hermitian property of the Hamiltonian; in the last step we have used the fact that the wavefunctions are normalized and the fact that the derivative of a constant is zero to infer that the terms involving the wavefunction derivatives vanish. Specifically, we used:
\begin{align} \int \left(\left[\frac{d\left( \psi(\lambda;x)\right)^*}{d \lambda}\right]_{\lambda = \lambda_0} \psi(\lambda_0;x) + \left( \psi(\lambda_0;x)\right)^* \left[\frac{d \psi(\lambda;x)}{d \lambda}\right]_{\lambda = \lambda_0}\right) \; dx &= \left[\frac{d}{d \lambda} \int \left( \psi(\lambda;x)\right)^* \psi(\lambda;x) \; dx \right]_{\lambda = \lambda_0}\\ &= \frac{d\,(1)}{d\lambda} \\ &= 0 \end{align}
Derivation of the Hellmann-Feynman Theorem from First-Order Perturbation Theory¶
Starting with the equation from first-order perturbation theory, $$ 0 = a_1 = \left( \hat{H}(0) - E_k(0) \right) \left[\frac{d|\psi_k \rangle}{d \lambda} \right]_{\lambda=0} +\left(\hat{V} - \left[\frac{dE_k}{d \lambda} \right]_{\lambda=0}\right)|\psi_k(0) \rangle $$ multiply on the left-hand-side by $\langle \psi_k(0) |$. (I.e., multiply by $\psi_k(0;x)^*$ and integrate.) Then: $$ 0 = \langle \psi_k(0) |\left( \hat{H}(0) - E_k(0) \right) \left[\frac{d|\psi_k \rangle}{d \lambda} \right]_{\lambda=0} +\langle \psi_k(0) |\left(\hat{V} - \left[\frac{dE_k}{d \lambda} \right]_{\lambda=0}\right)|\psi_k(0) \rangle $$ Because the Hamiltonian is Hermitian, the first term is zero. The second term can be rearranged to give the Hellmann-Feynman theorem, $$ \left[\frac{dE_k}{d \lambda} \right]_{\lambda=0} \langle \psi_k(0) |\psi_k(0) \rangle = \langle \psi_k(0) |\hat{V}|\psi_k(0) \rangle = \langle \psi_k(0) |\left[\frac{d\hat{H}}{d\lambda} \right]_{\lambda=0}|\psi_k(0) \rangle $$
Perturbed Wavefunctions¶
To determine the change in the wavefunction, $$\psi_k'(\lambda) = \frac{d |\psi_k\rangle}{d\lambda}$$ it is helpful to adopt the convention of intermediate normalization, whereby $$ \langle \psi_k(0) | \psi_k(\lambda) \rangle = 1 $$ for all $\lambda$. Inserting the series expansion for $|\psi(\lambda) \rangle$ one finds that \begin{align} 1 &= \langle \psi_k(0) | \psi_k(0) \rangle + \lambda \langle \psi_k(0) | \psi_k'(0) \rangle + \tfrac{\lambda^2}{2!} \langle \psi_k(0) | \psi_k''(0) \rangle + \cdots \\ 1 &= 1 + \lambda \langle \psi_k(0) | \psi_k'(0) \rangle + \tfrac{\lambda^2}{2!} \langle \psi_k(0) | \psi_k''(0) \rangle + \cdots \end{align} where in the second line we have used the normalization of the zeroth-order wavefunction, $\langle \psi_k(0) | \psi_k(0) \rangle = 1$. Since this equation holds for all $\lambda$, it must be that $$ 0=\langle \psi_k(0) | \psi_k'(0) \rangle\\ 0=\langle \psi_k(0) | \psi_k''(0) \rangle\\ \vdots $$ Because the eigenfunctions of $\hat{H}(0)$ are a complete basis, we can expand $ | \psi_k'(0) \rangle$ as: $$ | \psi_k'(0) \rangle = \sum_{j=0}^{\infty} c_j | \psi_j(0) \rangle $$ but because $\langle \psi_k(0) | \psi_k'(0) \rangle=0$, it must be that $c_k = 0$. So: $$ | \psi_k'(0) \rangle = \sum_{j=0\\ j \ne k}^{\infty} c_j | \psi_j(0) \rangle $$ We insert this expansion into the expression from first-order perturbation theory: $$ 0 = \left( \hat{H}(0) - E_k(0) \right) \sum_{j=0\\ j \ne k}^{\infty} c_j | \psi_j(0) \rangle +\left(\hat{V} - \left[\frac{dE_k}{d \lambda} \right]_{\lambda=0}\right)|\psi_k(0) \rangle $$ and multiply on the left by $\langle \psi_l(0) |$, with $l \ne k$. \begin{align} 0 &= \langle \psi_l(0) |\left( \hat{H}(0) - E_k(0) \right) \sum_{j=0\\ j \ne k}^{\infty} c_j | \psi_j(0) \rangle +\langle \psi_l(0) |\left(\hat{V} - \left[\frac{dE_k}{d \lambda} \right]_{\lambda=0}\right)|\psi_k(0) \rangle \\ &= \sum_{j=0\\ j \ne k}^{\infty} c_j\langle \psi_l(0) |\left( E_l(0) - E_k(0) \right) | \psi_j(0) \rangle +\langle \psi_l(0) |\hat{V} |\psi_k(0) \rangle - \left[\frac{dE_k}{d \lambda} \right]_{\lambda=0}\langle \psi_l(0) |\psi_k(0) \rangle \\ &= \sum_{j=0\\ j \ne k}^{\infty} c_j \left( E_l(0) - E_k(0) \right) \delta_{lj} +\langle \psi_l(0) |\hat{V} |\psi_k(0) \rangle \\ &=c_l \left( E_l(0) - E_k(0) \right)+\langle \psi_l(0) |\hat{V} |\psi_k(0) \rangle \end{align}
Assuming that the k-th state is nondegenerate (so that we can safely divide by $ E_l(0) - E_k(0)$), $$ c_l = \frac{\langle \psi_l(0) |\hat{V} |\psi_k(0) \rangle }{E_k(0) - E_l(0)} $$ and so: $$ | \psi_k'(0) \rangle = \sum_{j=0\\ j \ne k}^{\infty} \frac{\langle \psi_j(0) |\hat{V} |\psi_k(0) \rangle }{E_k(0) - E_j(0)} | \psi_j(0) \rangle $$
Higher-order terms can be determined in a similar way, but we will only deduce the expression for the second-order energy change. Using the second-order terms from the perturbation expansion, $$ 0 = a_2 = \tfrac{1}{2} \left( \hat{H}(0) - E_k(0) \right) |\psi_k''(0) \rangle +\left(\hat{V} - E_k'(0)\right)|\psi_k'(0) \rangle-\tfrac{1}{2} E_k''(0) |\psi_k(0) \rangle $$ Projecting this expression against $\langle \psi_k(0) |$, one has: \begin{align} 0 &= \tfrac{1}{2} \langle \psi_k(0) |\left( \hat{H}(0) - E_k(0) \right) |\psi_k''(0) \rangle +\langle \psi_k(0) |\left(\hat{V} - E_k'(0)\right)|\psi_k'(0) \rangle-\tfrac{1}{2} \langle \psi_k(0) |E_k''(0) |\psi_k(0) \rangle \\ &= \tfrac{1}{2} \langle \psi_k(0) |\left(E_k(0) - E_k(0) \right) |\psi_k''(0) \rangle +\langle \psi_k(0) |\hat{V} |\psi_k'(0) \rangle -E_k'(0)\langle \psi_k(0) |\psi_k'(0) \rangle -\tfrac{1}{2} E_k''(0) \\ &= \langle \psi_k(0) |\hat{V} |\psi_k'(0) \rangle -\tfrac{1}{2} E_k''(0) \end{align} To obtain the last line we used the intermediate normalization of the perturbed wavefunction, $\langle \psi_k(0) | \psi_k'(0) \rangle = 0$. Rewriting the expression for the second-order change in the energy, and then inserting the expression for the first-order wavefunction, gives \begin{align} E_k''(0) &= 2\langle \psi_k(0) |\hat{V} |\psi_k'(0) \rangle \\ &= 2\langle \psi_k(0) |\hat{V} \sum_{j=0\\ j \ne k}^{\infty} \frac{\langle \psi_j(0) |\hat{V} |\psi_k(0) \rangle }{E_k(0) - E_j(0)} | \psi_j(0) \rangle \\ &= 2 \sum_{j=0\\j \ne k}^{\infty}\frac{\langle \psi_j(0) |\hat{V} |\psi_k(0) \rangle \langle \psi_k(0) |\hat{V} | \psi_j(0) \rangle}{E_k(0) - E_j(0)} \\ &= 2 \sum_{j=0\\j \ne k}^{\infty}\frac{ \left|\langle \psi_j(0) |\hat{V} |\psi_k(0) \rangle \right|^2}{E_k(0) - E_j(0)} \end{align} Notice that for the ground state ($k=0$), where $E_0 - E_{j>0} < 0$, the second-order energy change is never positive, $ E_0''(0) \le 0$.
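The sum-over-states expression for $E_k''(0)$ can be checked numerically on a small finite-matrix model (again an illustrative sketch with arbitrary random symmetric $\hat{H}_0$ and $\hat{V}$), by comparing it with a finite-difference second derivative of the exact eigenvalue:
import numpy as np
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
H0, V = (A + A.T) / 2, (B + B.T) / 2
k = 0                                    # ground state
evals, evecs = np.linalg.eigh(H0)        # unperturbed spectrum and eigenvectors
Vmat = evecs.T @ V @ evecs               # V expressed in the unperturbed eigenbasis
d2E_sos = 2 * sum(Vmat[j, k]**2 / (evals[k] - evals[j])
                  for j in range(len(evals)) if j != k)
E = lambda lam: np.linalg.eigvalsh(H0 + lam * V)[k]
h = 1e-3
d2E_fd = (E(h) - 2 * E(0.0) + E(-h)) / h**2   # finite-difference second derivative at lambda = 0
print(d2E_sos, d2E_fd)                   # the two estimates agree to several digits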
The Law of Diminishing Returns and Accelerating Losses¶
Suppose one is given a Hamiltonian that is parameterized in the general form used in perturbation theory, $$ \hat{H}(\lambda) = \hat{H}(0) + \lambda \hat{V} $$ According to the Hellmann-Feynman theorem, we have: $$ \frac{dE_0}{d\lambda} = E_0'(\lambda) = \langle \psi(\lambda) | \hat{V} |\psi(\lambda) \rangle $$ Consider two distinct values for the perturbation parameter, $\lambda_1 < \lambda_2$. According to the variational principle, if one evaluates the expectation value of $\hat{H}(\lambda_1)$ with $\psi(\lambda_2)$ one will obtain an energy above the true ground-state energy. I.e., $$ E_0(\lambda_1) = \langle \psi(\lambda_1) | \hat{H}(\lambda_1) |\psi(\lambda_1) \rangle < \langle \psi(\lambda_2) | \hat{H}(\lambda_1) |\psi(\lambda_2) \rangle $$ Or, more explicitly, $$ \langle \psi(\lambda_1) | \hat{H}(0) +\lambda_1\hat{V} |\psi(\lambda_1) \rangle < \langle \psi(\lambda_2) | \hat{H}(0) +\lambda_1\hat{V} |\psi(\lambda_2) \rangle $$ Similarly, the energy expectation value of $\hat{H}(\lambda_2)$ evaluated with $\psi(\lambda_1)$ is above the true ground-state energy, so $$ \langle \psi(\lambda_2) | \hat{H}(0) +\lambda_2\hat{V} |\psi(\lambda_2) \rangle < \langle \psi(\lambda_1) | \hat{H}(0) +\lambda_2\hat{V} |\psi(\lambda_1) \rangle $$ Adding these two inequalities and cancelling the $\langle \psi(\lambda_1) | \hat{H}(0) |\psi(\lambda_1) \rangle$ and $\langle \psi(\lambda_2) | \hat{H}(0) |\psi(\lambda_2) \rangle$ terms that appear on both sides of the combined inequality, one finds that: $$ \left(\lambda_2 - \lambda_1 \right) \left(\langle \psi(\lambda_2) | \hat{V} |\psi(\lambda_2) \rangle - \langle \psi(\lambda_1) | \hat{V} |\psi(\lambda_1) \rangle \right) < 0 $$ or, using the Hellmann-Feynman theorem (in reverse), $$ \left(\lambda_2 - \lambda_1 \right) \left( E_0'(\lambda_2) - E_0'(\lambda_1)\right) < 0 $$
Recall that $\lambda_2 > \lambda_1$. Thus $E_0'(\lambda_2) < E_0'(\lambda_1)$. If the system is losing energy at $\lambda_1$ (i.e., $E_0'(\lambda_1) < 0$), then at $\lambda_2$ the system is losing energy even faster ($E_0'(\lambda_2)$ is more negative than $E_0'(\lambda_1)$). This is the law of accelerating losses. If the system is gaining energy at $\lambda_1$ (i.e., $E_0'(\lambda_1) > 0$), then at $\lambda_2$ the system is gaining energy more slowly (or even losing energy), since $E_0'(\lambda_2)$ is smaller than $E_0'(\lambda_1)$. This is the law of diminishing returns.
If the energy is a twice-differentiable function of $\lambda$, then one can infer that the second derivative of the energy is never positive, $$ \lim_{\lambda_2 \rightarrow \lambda_1} \frac{E_0'(\lambda_2) - E_0'(\lambda_1)}{\lambda_2 - \lambda_1} = \left[\frac{d^2E_0}{d\lambda^2}\right]_{\lambda = \lambda_1}= E_0''(\lambda_1) \le 0 $$
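This concavity is easy to see numerically. The following sketch (again using an arbitrary random symmetric model Hamiltonian, purely for illustration) evaluates a finite-difference $E_0''(\lambda)$ over a range of $\lambda$ values; every value comes out non-positive.
import numpy as np
rng = np.random.default_rng(1)
A, B = rng.standard_normal((6, 6)), rng.standard_normal((6, 6))
H0, V = (A + A.T) / 2, (B + B.T) / 2
E0 = lambda lam: np.linalg.eigvalsh(H0 + lam * V)[0]   # exact ground-state energy
h = 1e-3
for lam in np.linspace(-2.0, 2.0, 9):
    d2 = (E0(lam + h) - 2 * E0(lam) + E0(lam - h)) / h**2
    print(f"lambda = {lam:+.2f}   E0''(lambda) = {d2:+.5f}")   # all non-positive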
Example: Particle in a Box with a Sloped Bottom¶
The Hamiltonian for an Applied Uniform Electric Field¶
When a system cannot be solved exactly, one can solve it approximately using
perturbation theory.
variational methods using either an explicit wavefunction form or basis-set expansion.
To exemplify these approaches, we will use the particle-in-a-box with a sloped bottom. This is obtained when an external electric field is applied to a charged particle in the box. The force on the charged particle due to the field is $$ \text{force} = \text{charge} \cdot \text{electric field} $$ so for an electron in a box on which an electric field of magnitude $F$ is applied in the $+x$ direction, the force is $$ \text{force} = -e F $$ where $e$ is the magnitude of the charge on the electron. The force is the negative gradient of the potential, $$ \text{force} = - \nabla (\text{potential}) $$ Assuming that the potential is zero at the origin for convenience, $V(0) = 0$, the potential is thus: $$ V(x) = eFx $$
The particle in a box with an applied field has the Hamiltonian $$ \hat{H} = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) + eFx $$ or, in atomic units, $$ \hat{H} = -\tfrac{1}{2} \tfrac{d^2}{dx^2} + V(x) + Fx $$ For simplicity, we assume the case where the box has length 2 and is centered at the origin, $$ V(x) = \begin{cases} \infty & x \le -1 \\ 0 & -1 < x < 1 \\ \infty & 1 \le x \end{cases} $$
For small electric fields, we can envision solving this system by perturbation theory. We also expect that variational approaches can work well. We'll explore how these strategies can work. It turns out, however, that this system can be solved exactly, though the treatment is far beyond the scope of this course. A few reference values are useful: for a field strength of $F=\tfrac{1}{16}$ the ground-state energy is 1.23356 a.u. and for a field strength of $F=\tfrac{25}{8}$ the ground-state energy is 0.9063 a.u.; these can be compared to the unperturbed result of 1.23370 a.u. Some approximate formulas for the higher eigenvalues are available:
\begin{align} E\left(F=\tfrac{1}{16};n\right) &= \frac{10.3685}{8} \left( 0.048 + 5.758 \cdot 10^{-5} n + 0.952 n^2 + 3.054 \cdot 10^{-7} n^3\right) - \frac{1}{16} \\ E\left(F=\tfrac{25}{8};n\right) &= \frac{32.2505}{8} \left( 0.688 + 0.045 n + 0.300 n^2 + 2.365 \cdot 10^{-4} n^3\right) - \frac{25}{8} \end{align}
Note: to obtain these numbers from the reference data for the exact solution mentioned above, you need to keep in mind that the reference data assumes the mass is $1/2$ instead of $1$, and that the reference data is for a box from $0 \le x \le 1$ instead of $-1 \le x \le 1$. This requires dividing the reference field by 16, shifting the energies by the field, and dividing the energy by 8 (because both the length of the box and the mass of the particle have doubled). In the end, $F = \tfrac{1}{16} F_{\text{ref}}$ and $E = \tfrac{1}{8}E_{\text{ref}}-\tfrac{1}{16}F_{\text{ref}}= \tfrac{1}{8}E_{\text{ref}}-F$.
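As a quick check of this conversion (the reference field strengths $F_{\text{ref}} = 1$ and $F_{\text{ref}} = 50$ below are inferred from $F = F_{\text{ref}}/16$; the reference energies are the values quoted above):
for F_ref, E_ref in [(1.0, 10.3685), (50.0, 32.2505)]:
    F = F_ref / 16
    E = E_ref / 8 - F
    print(f"F = {F:.4f} a.u.   ground-state energy = {E:.5f} a.u.")
# -> F = 0.0625 gives E = 1.23356 a.u. and F = 3.1250 gives E = 0.90631 a.u., as quoted above.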
Perturbation Theory for the Particle-in-a-Box in a Uniform Electric Field¶
The First-Order Energy Correction is Always Zero¶
The corrections due to the perturbation are all zero to first order. To see this, consider that, from the Hellmann-Feynman theorem,
\begin{align} \left[\frac{dE_n}{dF}\right]_{F=0} &= \int_{-1}^{1} \psi_n(x) \left[ \frac{d \hat{H}}{dF} \right]_{F=0} \psi_n(x) dx \\ &= \int_{-1}^{1} x|\psi_n(x)|^2 dx \\ &= \int_{-1}^{1} \text{(even function)} \text{(odd function) } dx \\ &= \int_{-1}^{1}\text{(odd function) } dx \\ &= 0 \end{align}
This reflects the fact that the unperturbed particle-in-a-box has no permanent dipole moment.
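A direct numerical quadrature confirms this symmetry argument for the lowest few box eigenfunctions (a small illustrative check; recall that the odd-$n$ states are cosines and the even-$n$ states are sines on $-1 \le x \le 1$):
import numpy as np
from scipy.integrate import quad
def psi(n, x):
    """Normalized particle-in-a-box eigenfunction on [-1, 1]."""
    return np.cos(n * np.pi * x / 2) if n % 2 == 1 else np.sin(n * np.pi * x / 2)
for n in (1, 2, 3, 4):
    val, _ = quad(lambda x: x * psi(n, x)**2, -1, 1)
    print(n, val)    # every integral is zero to quadrature accuracy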
The First-Order Correction to the Wavefunction¶
To determine the first-order correction to the wavefunction, one needs to evaluate integrals that look like: $$ V_{mn} = \int_{-1}^{1} \psi_m(x) \, x \, \psi_n(x) dx $$ From the properties of odd and even functions, and the fact that $\psi_n(x)$ is odd if $n$ is even, and vice versa, it's clear that $V_{mn} = 0$ unless $m+n$ is odd. (That is, either $m$ or $n$, but not both, must be odd.) The integrals we need to evaluate all have the form $$ V_{mn} = \int_{-1}^{1} x \sin \left(\frac{m \pi x}{2} \right)\cos \left(\frac{n \pi x}{2} \right) dx $$ where $m$ is even and $n$ is odd. Using the trigonometric identity $$ \sin(ax) \cos(bx) = \tfrac{1}{2} \sin((a+b)x) + \tfrac{1}{2} \sin((a-b)x) $$ we can deduce that the integral is $$ V_{mn} = \left[2\frac{\sin\left( \frac{(m-n)\pi x}{2} \right)}{(m-n)^2 \pi^2} + 2\frac{\sin\left( \frac{(m+n)\pi x}{2} \right)}{(m+n)^2 \pi^2} -\frac{x\cos\left( \frac{(m-n)\pi x}{2} \right)}{(m-n) \pi} -\frac{x\cos\left( \frac{(m+n)\pi x}{2} \right)}{(m+n) \pi} \right]_{x=-1}^{x=1} $$ As mentioned before, this integral is zero unless $m+n$ is odd; in that case the cosine terms vanish at the endpoints. Using the fact that, for odd $p$, $\sin \tfrac{p \pi}{2} = (-1)^{(p-1)/2}$, we have $$ V_{mn} = \begin{cases} 0 & m+n \text{ is even} \\ \dfrac{4}{\pi^2} \left( \dfrac{(-1)^{(m-n-1)/2} }{(m-n)^2} + \dfrac{(-1)^{(m+n-1)/2}}{(m+n)^2} \right) & m+n \text{ is odd} \end{cases} $$ The first-order correction to the $n$-th wavefunction is then: $$ \left[\frac{d|\psi_n\rangle}{dF} \right]_{F=0} = | \psi_n'(0) \rangle = \sum_{m=1\\ m \ne n}^{\infty} \frac{V_{mn}}{E_n(0) - E_m(0)} | \psi_m(0) \rangle $$
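The closed-form expression can be cross-checked against direct numerical integration (a small verification sketch; here $m$ labels the sine and $n$ the cosine, as in the derivation above):
import numpy as np
from scipy.integrate import quad
def V_closed(m, n):
    """Closed-form V_mn for m even (sine) and n odd (cosine)."""
    return 4 / np.pi**2 * ((-1)**((m - n - 1) // 2) / (m - n)**2
                           + (-1)**((m + n - 1) // 2) / (m + n)**2)
def V_quad(m, n):
    """Direct numerical integration of the same matrix element."""
    f = lambda x: x * np.sin(m * np.pi * x / 2) * np.cos(n * np.pi * x / 2)
    return quad(f, -1, 1)[0]
for m, n in [(2, 1), (2, 3), (4, 1), (4, 3)]:
    print(m, n, V_closed(m, n), V_quad(m, n))   # the two columns agree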
The Second-Order Correction to the Energy¶
The second-order correction to the energy is $$ E_n''(0) = 2 \sum_{m=1\\m \ne n}^{\infty}\frac{ V_{mn}^2}{E_n(0) - E_m(0)} $$ This infinite sum is not trivial to evaluate, but we can investigate the first non-vanishing term for the ground state. (This is the so-called Unsold approximation.) Thus: $$ E_0''(0) = 2 \frac{V_{21}^2}{E_1(0) - E_2(0)} = 2 \frac{\left(\tfrac{4}{\pi^2}(1-\tfrac{1}{9})\right)^2}{\tfrac{\pi^2}{8} - \tfrac{4\pi^2}{8}} = -\frac{16384 }{243 \pi^6} = -0.0701 $$ Using this, we can estimate the ground-state energy for different field strengths as $$ E(F) \approx E(0) - \frac{1}{2!}\frac{16384 }{243 \pi^6} F^2 $$ For the field strengths for which we have exact results readily available, this gives $$ E(\tfrac{1}{16}) \approx 1.23356 \text{ a.u.} \\ E(\tfrac{25}{8}) \approx 0.8913 \text{ a.u.} \\ $$ These results are impressively accurate, especially considering all the effects we have neglected.
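The arithmetic behind these numbers is short enough to reproduce directly (a check of the quoted values, nothing more):
import numpy as np
V21 = 4 / np.pi**2 * (1 - 1/9)                     # leading coupling matrix element
d2E = 2 * V21**2 / (np.pi**2 / 8 - 4 * np.pi**2 / 8)
print(d2E)                                         # ~ -0.0701, i.e. -16384/(243 pi^6)
for F in (1/16, 25/8):
    print(F, np.pi**2 / 8 + 0.5 * d2E * F**2)      # ~1.23356 a.u. and ~0.8913 a.u.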
Variational Approach to the Particle-in-a-Box in a Uniform Electric Field¶
When the field is applied, it becomes more favorable for the electron to drift to the $x<0$ side of the box. To accommodate this, we can propose a wavefunction ansatz for the ground state, $$ \psi_c(x) = (1 - cx)\cos\left(\frac{\pi x}{2} \right) $$ Clearly $c = 0$ in the absence of a field, but $c > 0$ is to be expected when the field is applied. We can determine the optimal value of $c$ using the variational principle. First we need to determine the energy as a function of $c$: $$ E(c) = \frac{\langle \psi_c | \hat{H} | \psi_c \rangle}{\langle \psi_c | \psi_c \rangle} $$ The denominator of this expression is easily evaluated $$ \langle \psi_c | \psi_c \rangle = 1 + \gamma c^2 $$ where we have defined the constant: $$ \gamma = \int_{-1}^1 x^2 \cos^2\left(\frac{\pi x}{2}\right) dx = \tfrac{1}{3} - \tfrac{2}{\pi^2} $$ The numerator is
$$ \langle \psi_c | \hat{H} | \psi_c \rangle = \frac{\pi^2}{8} + c^2 \frac{\gamma \pi^2}{8} - 2cF\gamma + \tfrac{1}{2}c^2 $$
where we have used the integral:
$$ \int_{-1}^1 x \cos\left(\frac{\pi x}{2}\right) \sin\left(\frac{\pi x}{2} \right) dx = \frac{1}{\pi} $$
Setting $\frac{dE(c)}{dc} = 0$ gives a polynomial equation for the optimal $c$ that can be solved analytically, but it is more convenient to carry out the minimization numerically.
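A minimal numerical minimization might look as follows (an illustrative sketch that anticipates the energy_variational function used in the demonstration below; the field strength $F=\tfrac{1}{16}$ is just an example):
import numpy as np
from scipy.optimize import minimize_scalar
F = 1 / 16
gamma = 1 / 3 - 2 / np.pi**2
def E(c):
    # Rayleigh quotient <psi_c|H|psi_c> / <psi_c|psi_c> from the expressions above
    return (np.pi**2 / 8 * (1 + gamma * c**2) + c**2 / 2 - 2 * c * gamma * F) / (1 + gamma * c**2)
res = minimize_scalar(E, bracket=(0, 1))
print(res.x, res.fun)    # optimal c and the variational estimate of the ground-state energy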
Basis-Set Expansion for the Particle-in-a-Box in a Uniform Electric Field¶
As a final approach to this problem, we can expand the wavefunction in a basis set. The eigenfunctions of the unperturbed particle-in-a-box are a sensible choice here, though we could use polynomials (as we did earlier in this worksheet) without issue if one wished to do so. The eigenfunctions of the unperturbed problem are orthonormal, so the overlap matrix is the identity matrix $$ s_{mn} = \delta_{mn} $$ and the Hamiltonian matrix elements are
\begin{align} h_{mn} &= \int_{-1}^{1} \psi_m(x) \, \hat{H} \, \psi_n(x) dx \\ &= \int_{-1}^{1} \psi_m(x)\left[-\frac{1}{2}\frac{d^2}{dx^2} + Fx \right] \psi_n(x) dx \\ &= \frac{\pi^2 n^2}{8} \delta_{mn} + F V_{mn} \end{align} where $\psi_n(x)$ denotes the unperturbed eigenfunction ($\cos\tfrac{n\pi x}{2}$ for odd $n$ and $\sin\tfrac{n\pi x}{2}$ for even $n$).
Using the results we have already determined for the matrix elements, then,
$$ h_{mn} = \begin{cases} 0 & m\ne n \text{ and }m+n \text{ is even}\\ \dfrac{\pi^2n^2}{8} & m = n \\ \dfrac{4F}{\pi^2} \left( \dfrac{(-1)^{(m-n-1)/2} }{(m-n)^2} + \dfrac{(-1)^{(m+n-1)/2}}{(m+n)^2} \right) & m+n \text{ is odd} \end{cases} $$
Demonstration¶
In the following code block, we'll demonstrate how the energy converges as we increase the number of terms in our calculation. For the excited states, it seems the reference data is likely erroneous.
import numpy as np
from scipy.optimize import minimize_scalar
def compute_V(n_basis):
    """Compute the matrix <k|x|l> for an electron in a box from -1 to 1 in a unit external field, in a.u."""
    # initialize V to a zero matrix
    V = np.zeros((n_basis, n_basis))
    # Because Python is zero-indexed, our V matrix will be shifted by 1. I'll
    # make this explicit by making the counters km1 and lm1 (k minus 1 and l minus 1)
    for km1 in range(n_basis):
        for lm1 in range(n_basis):
            if (km1 + lm1) % 2 == 1:
                # The matrix element is zero unless k + l = (km1 + 1) + (lm1 + 1) is odd, which
                # means that (km1 + lm1) mod 2 = 1. The closed-form expression was derived with
                # the first index labelling a sine (even quantum number) and the second a cosine
                # (odd quantum number); when the roles are reversed, the first term changes sign.
                # The factor (-1)**lm1 achieves this switching.
                V[km1, lm1] = 4. / np.pi**2 * ((-1)**((km1 - lm1 - 1) // 2) / (km1 - lm1)**2 * (-1)**lm1
                                               + (-1)**((km1 + lm1 + 1) // 2) / (km1 + lm1 + 2)**2)
    return V
def energy_pt2(k, F, n_basis):
    """kth excited state energy in a.u. for an electron in a box of length 2 in the field F estimated with 2nd-order PT.
    k : scalar, int
        k = 0 is the ground state and k = 1 is the first excited state.
    F : scalar
        the external field strength
    n_basis : scalar, int
        the number of terms to include in the sum over states in the second order pert. th. correction.
    energy_pt2 : scalar
        The estimated energy of the kth-excited state of the particle in a box of length 2 in field F.
    """
    # It makes no sense for n_basis to be less than k.
    assert k < n_basis, "The excitation level of interest should be smaller than n_basis"
    # Zeroth-order energies of the particle-in-a-box states, in a.u.
    energy = (np.pi**2 / 8.) * np.array([(j + 1)**2 for j in range(n_basis)])
    V = compute_V(n_basis)
    # Sum-over-states expression for the second derivative of the energy with respect to F.
    der2 = 0
    for j in range(n_basis):
        if j != k:
            der2 += 2 * V[j, k]**2 / (energy[k] - energy[j])
    return energy[k] + der2 * F**2 / 2
def energy_variational(F):
    """Ground-state energy in a.u. for an electron-in-a-box of length 2 in the field F estimated with the var. principle.
    The variational wavefunction ansatz is psi(x) = (1 - cx)cos(pi*x/2) where c is a variational parameter.
    """
    gamma = 1 / 3 - 2 / np.pi**2
    def func(c):
        return (np.pi**2 / 8 * (1 + gamma * c**2) + c**2 / 2 - 2 * c * gamma * F) / (1 + gamma * c**2)
    res = minimize_scalar(func, bracket=(0, 1))
    return res.fun
def energy_basis(F, n_basis):
    """Eigenenergies in a.u. of an electron in a box of length 2 in the field F estimated by basis-set expansion.
    n_basis basis functions from the F=0 case are used.
    energy_basis : array_like
        list of n_basis eigenenergies
    """
    # diagonal (field-free) energies and the field-coupling matrix elements
    energy = (np.pi**2 / 8.) * np.array([(j + 1)**2 for j in range(n_basis)])
    V = compute_V(n_basis)
    # assign Hamiltonian to the potential matrix, times the field strength:
    h = F * V
    np.fill_diagonal(h, energy)
    # diagonalize the (symmetric) Hamiltonian matrix
    e_vals = np.linalg.eigvalsh(h)
    return e_vals
print("Energy of these models vs. reference values:")
print("Energy of the unperturbed ground state (field = 0):", np.pi**2 / 8.)
print("Field value: ", 1./16)
print("Exact Energy of the ground state:", 10.3685/8 - 1./16)
print("Energy of the ground state estimated with 2nd-order perturbation theory:", energy_pt2(0,1./16,50))
print("Energy of the ground state estimated with the variational principle:", energy_variational(1./16))
print("Energy of the ground state estimated with basis set expansion:", energy_basis(1./16,50)[0])
print("Field value: ", 25./4)
print("Exact Energy of the ground state:", 32.2505/8 - 25./8)
print("Energy of the ground state estimated with 2nd-order perturbation theory:", energy_pt2(0,25./8,50))
print("Energy of the ground state estimated with the variational principle:", energy_variational(25./8))
print("Energy of the ground state estimated with basis set expansion:", energy_basis(25./8,50)[0])
print("Energy of the unperturbed first excited state (field = 0):", np.pi**2 * 2**2 / 8.)
print("Exact Energy of the first excited state:", 39.9787/8 - 1./16)
print("Energy of the first excited state estimated with 2nd-order perturbation theory:", energy_pt2(1,1./16,50))
print("Energy of the first excited state estimated with basis set expansion:", energy_basis(1./16,50)[1])
print("Exact Energy of the first excited state:", 65.177/8 - 25./8)
print("Energy of the first excited state estimated with 2nd-order perturbation theory:", energy_pt2(1,25./4,50))
print("Energy of the first excited state estimated with basis set expansion:", energy_basis(25./4,50)[1])
print("Energy of the unperturbed second excited state (field = 0):", np.pi**2 * 3**2 / 8.)
print("Exact Energy of the second excited state:", 89.3266/8 - 1./16)
print("Energy of the second excited state estimated with 2nd-order perturbation theory:", energy_pt2(2,1./16,50))
print("Exact Energy of the second excited state:", 114.309/8 - 25./8)
print("Energy of the second excited state estimated with 2nd-order perturbation theory:", energy_pt2(2,25./4,50))
print("Energy of the second excited state estimated with basis set expansion:", energy_basis(25./4,50)[2])
Energy of these models vs. reference values:
Energy of the unperturbed ground state (field = 0): 1.2337005501361697
Field value: 0.0625
Exact Energy of the ground state: 1.2335625
Energy of the ground state estimated with 2nd-order perturbation theory: 1.2335633924534153
Energy of the ground state estimated with the variational principle: 1.2335671162852282
Energy of the ground state estimated with basis set expansion: 1.2335633952295426
Field value: 6.25
Exact Energy of the ground state: 0.9063125000000003
Energy of the unperturbed first excited state (field = 0): 4.934802200544679
Exact Energy of the first excited state: 4.9348375
Energy of the first excited state estimated with 2nd-order perturbation theory: 4.93484310142371
Energy of the first excited state estimated with basis set expansion: 4.934843098735417
Exact Energy of the first excited state: 5.022125000000001
Energy of the first excited state estimated with 2nd-order perturbation theory: 5.343810990856903
Energy of the unperturbed second excited state (field = 0): 11.103304951225528
Exact Energy of the second excited state: 11.103325
Energy of the second excited state estimated with 2nd-order perturbation theory: 11.103329317896025
Energy of the second excited state estimated with basis set expansion: 11.103329317809841
Exact Energy of the second excited state: 11.163625
Energy of the second excited state estimated with 2nd-order perturbation theory: 11.34697165619853
Energy of the second excited state estimated with basis set expansion: 11.336913933805935
import matplotlib.pyplot as plt
# user-specified parameters
F = 25.0 / 8
nbasis = 20
# plot basis set convergence of energy estimates at a given field
# ---------------------------------------------------------------
# evaluate energy for a range of basis functions at a given field
# (the range of basis-set sizes below is an assumed choice; the original value was not shown)
n_values = np.arange(2, 51, 2)
e_pt2_basis = np.array([energy_pt2(0, F, n) for n in n_values])
e_var_basis = np.repeat(energy_variational(F), len(n_values))
e_exp_basis = np.array([energy_basis(F, n)[0] for n in n_values])
# evaluate energy for a range of fields at a given basis
f_values = np.arange(0.0, 100., 5.)
e_pt2_field = np.array([energy_pt2(0, f, nbasis) for f in f_values])
e_var_field = np.array([energy_variational(f) for f in f_values])
e_exp_field = np.array([energy_basis(f, nbasis)[0] for f in f_values])
fig, (ax_basis, ax_field) = plt.subplots(1, 2, figsize=(16, 6))
# fig.suptitle("Basis Set Convergence of Particle-in-a-Box with Jacobi Basis", fontsize=24, fontweight='bold')
# plot approximate energy at a fixed field
ax_basis.plot(n_values, e_pt2_basis, marker='o', linestyle='--', label='PT2')
ax_basis.plot(n_values, e_var_basis, marker='', linestyle='-', label='Variational')
ax_basis.plot(n_values, e_exp_basis, marker='x', linestyle='-', label='Basis Expansion')
ax_basis.set_xlabel("Number of Basis Functions", fontsize=12, fontweight='bold')
ax_basis.set_ylabel("Ground-State Energy [a.u.]", fontsize=12, fontweight='bold')
ax_basis.set_title(f"Field Strength = {F}", fontsize=24, fontweight='bold')
ax_basis.legend()
# plot approximate energy at a fixed basis
ax_field.plot(f_values, e_pt2_field, marker='o', linestyle='--', label='PT2')
ax_field.plot(f_values, e_var_field, marker='', linestyle='-', label='Variational')
ax_field.plot(f_values, e_exp_field, marker='x', linestyle='-', label='Basis Expansion')
ax_field.set_xlabel("Field Strength [a.u.]", fontsize=12, fontweight='bold')
ax_field.set_ylabel("Ground-State Energy [a.u.]", fontsize=12, fontweight='bold')
ax_field.set_title(f"Number of Basis = {nbasis}", fontsize=24, fontweight='bold')
ax_field.legend()
# plot basis set convergence of 1st excited state energy at a given field
fig2, axis = plt.subplots()
e_pt2_basis_1 = np.array([energy_pt2(1, F, n) for n in n_values])
e_exp_basis_1 = np.array([energy_basis(F, n)[1] for n in n_values])
axis.plot(n_values, e_pt2_basis_1, marker='o', linestyle='--', label='PT2')
axis.plot(n_values, e_exp_basis_1, marker='x', linestyle='-', label='Basis Expansion')
axis.set_xlabel("Number of Basis Functions", fontsize=12, fontweight='bold')
axis.set_ylabel("1st Excited-State Energy [a.u.]", fontsize=12, fontweight='bold')
axis.set_title(f"Field Strength = {F}", fontsize=24, fontweight='bold')
axis.legend()
plt.show()
When is a basis set appropriate? When is perturbation theory more appropriate?
Consider the hydrogen molecule ion, $\text{H}_2^+$. Is it more sensible to use the secular equation (basis-set-expansion) or perturbation theory? What if the bond length is very small? What if the bond length is very large?
Show that if you minimize the energy as a function of the basis-set coefficients using the variational principle, then you obtain the secular equation.
If a uniform external electric field of magnitude $F$ in the $\hat{\mathbf{u}} = [u_x,u_y,u_z]^T$ direction is applied to a particle with charge $q$, the potential $V(x,y,z) = -qF(u_x x + u_y y + u_z z)$ is added to the Hamiltonian. (This follows from the fact that the force applied to the particle is proportional to the electric field, $\text{force} = q \vec{E} = q F \hat{\mathbf{u}}$, and that the force is $\text{force} = - \nabla V(x,y,z)$.) If the field is weak, then perturbation theory can be used, and the energy can be written as a Taylor series. The coefficients of the Taylor series give the dipole moment ($\mu$), dipole polarizability ($\alpha$), first dipole hyperpolarizability ($\beta$), and second dipole hyperpolarizability ($\gamma$) in the $\hat{\mathbf{u}}$ direction.
The dipole moment, $\mu$, of any spherical system is zero. Explain why.
The polarizability, $\alpha$, of any system is always positive. Explain why.
\begin{align} E_k(F) &= E_k(0) + F \left[\frac{dE_k}{d F} \right]_{F=0} + \frac{F^2}{2!} \left[\frac{d^2E_k}{d F^2} \right]_{F=0} + \frac{F^3}{3!} \left[\frac{d^3E_k}{d F^3} \right]_{F=0} + \frac{F^4}{4!} \left[\frac{d^4E_k}{d F^4} \right]_{F=0} + \cdots \\ &= E_k(0) - F \mu_{F=0} - \frac{F^2}{2!} \alpha_{F=0} - \frac{F^3}{3!} \beta_{F=0} - \frac{F^4}{4!} \gamma_{F=0} + \cdots \\ \end{align}
The Hellmann-Feynman theorem indicates that given the ground-state wavefunction for a molecule, the force on the nuclei can be obtained. Explain how.
What does it mean that perturbation theory is inaccurate when the perturbation is large?
Can you explain why the energy goes down when the electron-in-a-box is placed in an external field?
For a sufficiently highly excited state, the effect of an external electric field is negligible. Why is this true intuitively? Can you show it graphically? Can you explain it mathematically?
What is the secular equation?
What is the Hellmann-Feynman theorem?
How is the Hellmann-Feynman theorem related to perturbation theory?
What is perturbation theory? What is the expression for the first-order perturbed wavefunction?
Multielectron systems
Approximate methods for multielectron systems.
Randy's book
D. A. McQuarrie, Quantum Chemistry (University Science Books, Mill Valley, California, 1983)
Perturbation theory
Variational method
Different low-cost materials to prevent the alteration induced by formic acid on unstable glasses
Rodrigo Arévalo, Jadra Mosa, Mario Aparicio & Teresa Palomar
Heritage Science volume 9, Article number: 142 (2021)
The most frequent cause of glass degradation is environmental moisture, which is adsorbed on the glass surface forming a hydration layer that induces the rupture of the glass network. This pathology is accelerated by the accumulation of volatile organic compounds (VOCs), such as formic acid. Although there is extensive knowledge about their impact, concentrations inside display cases are difficult to reduce efficiently. This study presents the assessment of different materials for reducing the concentration of formic acid in order to mitigate the degradation produced in unstable glasses. With this objective, copper threads, steel wool, silica gel, and activated carbon were chosen as low-cost materials with good adsorption of, or reactivity toward, the VOCs, and were exposed in desiccators to an environment of 100% RH and 10 ppm of formic acid. Given that silica gel obtained the best results, its optimization as a sorbent material was evaluated by maintaining, regenerating, or renewing it when exposed next to the same glass. The tests carried out showed that the hygroscopic capacity of the glasses exposed with silica gel decreased and, therefore, less degradation was observed on their surfaces. In addition, regenerating or renewing the silica gel weekly improved the results.
Works of art exhibited in museums are affected by environmental factors such as light, temperature, humidity, and pollutants. Pollution in museums comes from two sources. On the one hand, the air from outside, which usually brings dust and inorganic particles [1]. On the other hand, the internal sources, which correspond to the materials used in the showcases, such as wood, sealants, textiles, and varnishes, that emit organic compounds. These compounds are the main source of organic acid contamination, the most common being acetic and formic acids [1,2,3]. For this reason, there is currently interest in monitoring and preventing pollution in showcases and museum galleries, and in analyzing the effects of pollutants on cultural objects.
The concentration of organic acids in museum galleries is usually low and presents little risk of damaging objects. Room ventilation volumes and rates are usually sufficient to dilute the acid concentration to low levels [4]. However, the situation is very different within the display cases used to protect objects. The volume/emissive-surface ratio and the ventilation rate in display cabinets are much lower than in the rooms. This tightness favors the accumulation of the volatile organic compounds (VOCs) released, which leads to higher concentrations of these species inside the cabinets than in the rooms [4,5,6]. Although the amounts of VOCs released by these materials are low, closed spaces can reach relatively high concentrations [2]. Additionally, the ventilation of rooms maintains a low concentration of VOCs in comparison with sealed showcases. Some of the most common VOCs are acetic and formic acids. Natural reference values for formic acid in urban areas are between 0.1 and 32 µg/m3; inside showcases, however, concentrations are usually between 10 and 38 µg/m3, with 38–230 µg/m3 considered high and 290–860 µg/m3 extremely high [7]. Similarly, normal values of acetic acid in urban areas are between 0.3 and 40 µg/m3, while inside showcases they are between 100 and 697 µg/m3, with 498–1195 µg/m3 considered high and 1494–2490 µg/m3 extremely high [7]. Concentrations of hundreds of µg/m3 of formic and acetic acid are frequent, and in extraordinary situations concentrations of > 1000 µg/m3 are reached inside the display cases [2, 4].
There are multiple factors that can act as sources of indoor pollution, such as the ventilation, the heating and air conditioning systems, the exchange of indoor/outdoor air, the impact of visitors and, above all, the pollutants emitted by the materials used in the lining or construction of the showcases [4, 6, 8]. The movement of air masses favors the decrease of the VOC concentration inside the museum. Nevertheless, it must be considered that heritage objects themselves can also be an internal source of VOCs [9, 10].
Several studies identify wooden furniture or wood derivatives as the main source of acetic and formic acids [1, 5, 8]. Wood components degrade under the influence of light, humidity, oxygen or high temperatures, creating volatile compounds such as aldehydes (formaldehyde) and organic acids (acetic acid and formic acid) [1]. However, these high emissions can also be detected in modern and new enclosures, which are mainly constructed from low-emission materials such as metal and glass [5]. This is due to the fact that acetic and formic acids can also be emitted by lacquers, varnishes, paints, adhesives, sealants (polyurethane foams), and derivatives of the degradation of cellulose acetate [1, 8]. Likewise, these VOCs can be adsorbed by other materials in the cases and be released slowly over time. In this way, they continue to act on the exhibited objects even after the source of contamination has been eliminated [1]. On the other hand, historical furniture can be an integral part of a museum, being also a source of VOCs [5].
There are many pollutants that can affect museum objects, but there are also many methods and techniques to control the presence and concentration of harmful particles and gases [1]. Some display cabinets may be equipped with technical devices for active air circulation or for purging with inert gas. However, due to financial reasons, most enclosures are passive systems, as the costs of current solutions can be a problem [5]. These facts confirm that, although there is extensive knowledge about the impact of pollutants on the cultural property, concentrations within display cases have been difficult to reduce efficiently so far. For this reason, despite the importance of a correct choice of construction materials, it is necessary to treat the interior environment with sorbent materials capable of reducing the concentrations of the pollutants emitted and, in this way, avoiding the degradation of the exhibited objects.
Currently, a wide range of different adsorbents is commercially available, ranging from activated carbons and zeolites to molecular sieves [2, 11, 12]. The use of these sorbent materials can represent a cost-effective approach to minimize the damage caused by air pollutants in museum display cabinets. To identify VOCs and determine their concentrations, different devices based on active or passive sampling can be used. Active sampling uses the absorption of contaminated air in a tube, with the help of a pump. The gases accumulate in a sorbent material to later desorb and analyze them. In passive mode, a layer of sorbent is simply placed near the materials to be protected. This diffusive sampling works by allowing gas or vapor molecules to diffuse through a defined volume of still air until they reach a sorbent bed [13]. Examples of adsorbents are silica gel or activated carbon. Another way to remove contaminants from an environment is using sacrificial materials. Gases react with them forming solid stable compounds on the material surface. The periodical substitution of these materials induces the decrease of the contaminant concentration. In this group appears metals or salts such as carbonates.
As previously mentioned, different cultural materials can be affected by VOCs, including glasses. Their degradation is mainly determined by their chemical composition and by the environment to which they are exposed [14, 15]. Damage can appear immediately after the glass is produced, due to its exposure to adverse conditions, or as a consequence of the passage of the centuries. Among the degradation pathologies, it is worth highlighting crizzling, which causes a network of fissures on the surface, resulting in a loss of transparency.
The most frequent cause of glass degradation in the atmospheric environment is environmental humidity, which is adsorbed on the surface forming a thin film of water [6, 14, 16]. Water induces the ion exchange of alkaline cations (Reaction 1), which are easy to extract due to their strong lattice distortion. This exchange results in their migration from the internal network to the surface, generating an increasingly basic solution due to the accumulation of OH− groups [3, 14, 17, 18].
$$\equiv {\text{Si}} - {\text{OK }} + {\text{ H}}_{{2}} {\text{O }} \to \, \equiv {\text{Si}} - {\text{OH }} + {\text{ KOH}}{.}$$
(Reaction 1)
The degradation rate of glass objects generated by environmental humidity increases when they are also exposed to environments contaminated with organic acids [15]. Wooden objects used for storage in display cabinets are considered the main source of VOCs in museums, emitting aldehydes (formaldehyde) and organic acids (acetic acid and formic acid) [1]. The production of formic acid is linked to the division of pyruvic acid during the metabolic processes of wood [2, 3]. The corrosion process begins with the dissolution of gas-phase formic acid into the aqueous surface film. This is followed by the dissociation of formic acid, which increases the concentration of protons, thus accelerating the ion exchange process and, consequently, the leaching of metal ions from the glass (Reaction 2) [15, 19]. Therefore, the alteration of glass exposed to a humid and acidic atmosphere is faster than in an uncontaminated humid atmosphere [3, 4, 14, 15, 18]. The Cannizzaro reaction can also form formate species on the glass surface as a result of the reaction of formaldehyde in the alkaline surface films on glass [20, 21].
$$\equiv {\text{Si}} - {\text{OK }} + {\text{ HCOOH }} \to \equiv {\text{Si}} - {\text{OH }} + {\text{ HCOOK}}{.}$$
Previous studies of air pollutants in museums showed a clear positive correlation between temperature and relative humidity and the presence of acetic and formic acid [10, 17, 18]. At higher temperatures, the emission rate of VOCs increases [9]. Regarding relative humidity, the emission of acetic acid vapors increased by a factor of 2 to 3 as the humidity of the environment increased from 54 to 100% [3]. Therefore, controlling the factors that increase the emission of acids allows to reduce the damage they generate [4].
In the presence of acidic organic contaminants, anhydrous sodium formate is the predominant precipitate found on the glass surface. Likewise, it must be considered that the alkaline hydroxide film generated on the surface can be carbonated by the action of carbon dioxide in the atmosphere. In the same way, calcium hydroxide is also generated, which also evolves towards its carbonate form [16]. In addition, other air pollutants such as SO2 and NOx, which would give rise to sulphates and nitrates, must be considered. The deposits formed accumulate, giving rise to opaque crusts [17, 18, 22].
The main objective of this work is to evaluate how the properties of sorbent materials allow the concentration of formic acid to be reduced and mitigate the degradation produced in heritage objects, specifically in unstable glasses. Formic acid was chosen because previous studies have detected high contents of formic acid in museographic collections and point to formic acid as a relevant alteration agent for glasses [1,2,3, 6]. Additionally, previous experiments have shown that formic acid increases the degradation rate of unstable glasses more than acetic acid or formaldehyde [23]. To control the acid environment, low-cost materials susceptible of reacting with or adsorbing formic acid were selected. For that purpose, passive catchment tests and the subsequent characterization of the materials used will be carried out. Then, the performance of the material with the best capture capacity will be evaluated by studying the degradation produced in a soda glass. With this study, we aim to initiate further research to finally obtain feasible solutions with real application in museums.
This work is divided into two studies. The first one is focused on the selection of the most adequate low-cost material to remove formic acid. The second study was the assessment of the protective capacity of the sorbent with an unstable soda glass.
Selection of sorbents
Four low-cost materials were selected to evaluate the reduction of the concentration of atmospheric formic acid. The materials tested were activated carbon, silica gel, copper, and steel. Two of them (carbon and silica) are general sorbents; however, metals are materials that react specifically with the formic acid.
Activated carbon is the most common sorbent due to its availability, low cost, and good efficiency. Previous studies consider it the most effective material to reduce air pollution generated outdoors and organic acids in showcases [4, 10, 12, 13]. Grosjean et al. studied how in all the tests in a passive way, 20 g of activated carbon removed pollutants from the air by passive diffusion at a rate that exceeded by a factor of 10 the loss of pollutants on the walls of the showcase [24]. On the other hand, Schieweck observed that both under active and passive conditions, pure and impregnated activated carbons showed good adsorption efficiency for formaldehyde, formic acid, and acetic acid [5].
Activated carbon corresponds to porous carbon in which microporosity has been enhanced through activation, usually through steam oxidation [13]. The carbon is injected into a stream of hot air to create activated carbon [5]. The stream of hot air creates a large number of small pores that increase the surface area [12]. It is characterized by having a fine porous structure, a high specific surface area between 300 and 2000 m2/g and a density between 200 and 600 kg/m3 [4, 5]. In addition, its adsorption capacity varies depending on the diameter of its pores, distinguishing between micropores (< 1 nm), mesopores (1–25 nm) and macropores (> 25 nm) [4]. This material is capable of reducing polluting gases by physical adsorption on the inner surface. This mechanism is generally based on relatively weak intermolecular forces, such as van der Waals interactions. Organic compounds with a molecular weight greater than 45 g/mol are considered good adsorbates on activated carbon [5]. The water capacity of the coals in this region is close to 30% of their dry mass. This water can displace adsorbed organic molecules [13].
Nutshell carbons are particularly useful for this purpose, as they have a very suitable original porosity for activation [13]. However, a drawback of using plant materials is the existence of residual content of inorganic ash [13]. Activated carbon is available in powder, granule, foam, or fabric form. From a practical point of view, activated carbon fabric is better to install than foam, which is better to install than granulate [4]. Carbon foams are easy to cut to size; however, they tend to lose some of the carbon, leaving numerous black spots [4].
Silica gel is used primarily for two functions, to maintain a stable relative humidity or as a desiccant to dehumidify the air [12]. Since this material absorbs moisture from indoor air, the impact on relative humidity should be considered when using it as passive adsorbent for organic acids. Previous studies consider that silica gel does not eliminate some air pollutants generated outside but that it can be a good option to control humidity while it is used in combination with another specific sorbent [5, 12]. Another possibility to consider is that since silica gel strongly adsorbs water vapor, previously adsorbed organic molecules can be displaced [13].
Metal objects are also susceptible to organic acid attack just like glass. Corrosion caused by the vapors of these acids on different metals and alloys has been studied for years [25,26,27,28]. Taking this into account, metals could be used as sorbents when acting as sacrificial material. Lead is the most cited metal in terms of degradation; however, it is not chosen due to its toxicity since the sorbent materials studied are intended to have real applicability in museums. Bronze, iron, and copper are other materials that can be used as possible sorbents, due to the degradation they present when exposed to organic acids [3].
There are numerous studies on the degradation of copper due to its exposure to organic acids, since they are able to cause metal corrosion even at very low concentrations of acid vapor [28, 29]. López-Delgado et al. studied the corrosion it presents when exposed to both formic and acetic acid and a relative humidity of 100% [29]. Tetreault et al. also studied the formation of copper corrosion products in the presence of formic acid, obtaining that at levels above 4 ppm copper increases in weight at both 54% and 75% RH [28]. An increase in the concentration of formic acid results in a strong effect on copper corrosion. At 8 ppm of formic acid, the copper samples are covered by a thin layer of an opaque green to gray matte film [28]. At a concentration of 10 ppm, the components of the patina are mainly cuprite (Cu2O), hydroxide of copper (Cu(OH)2), and copper formate (Cu(HCOO)2), although the latter appears as a small signal [25, 28]. Above 14 ppm, copper samples show whitish surface colors. The corrosion compounds found at 14 and 140 ppm were identified respectively as copper formate and copper formate dihydrate [28]. It is important to note that the higher the concentration of formic acid to which it is exposed, the more corrosion appears at lower relative humidity.
The other metal option that arises to study it as a sorbent material is steel wool [26, 27]. This material is composed of iron, which can react with organic acids to which it is exposed in the environment. Steel wool is made up of fibers that are often used for cleaning and polishing metal or wood surfaces. Steel wool is easily solubilized when reacting with dilute acid, forming ferrous ions when reacting with protons (Reaction 3). Furthermore, ferrous ions can be oxidized to ferric ions by dissolved oxygen (Reaction 4) [27]. Similarly, adsorption of formate ions on the iron surface may occur [26].
$${\text{Fe}}^{0} \left( {\text{s}} \right) \, + {\text{ 2 H}}^{ + } \left( {{\text{aq}}} \right) \, \to {\text{ Fe}}^{{{2} + }} \left( {{\text{aq}}} \right) \, + {\text{ H}}_{{2}} \left( {\text{g}} \right).$$
$${\text{2 Fe}}^{{{2} + }} \left( {{\text{aq}}} \right) \, + \, \tfrac{1}{2}{\text{ O}}_{{2}} \left( {{\text{dissolved}}} \right) \, + {\text{ 2 H}}^{ + } \left( {{\text{aq}}} \right) \, \to {\text{ 2 Fe}}^{{{3} + }} \left( {{\text{aq}}} \right) \, + {\text{ H}}_{{2}} {\text{O }}\left( {\text{g}} \right).$$
Preparation of sorbent materials
The copper was 2491X tempered wire from the RS brand, used as an electrical material, with a cross-sectional area of 0.75 mm2 and a filament of 0.2 mm. The 24 copper wires were separated to increase the specific surface area. The wires weighed 4.63 g. The steel wool used, from the Dexter company, corresponds to category 00, which has a very fine fiber size (8.89–12.7 µm) within the market range. To prepare it, the fibers were separated, as in the case of copper, and 4.64 g were weighed. Activated carbon, from the Scharlau brand (CAS: 7440-44-0), was used in powder form, with a weighed mass of 4.66 g and "very pure" quality. Finally, the silica gel is from the Labkem brand (Ref: SGE0-002-1K0) and is granulated with a diameter of 2–5 mm. To prepare it, it was kept in an oven at 70 °C for three days, until the moment of introduction into the desiccator, obtaining a weight of 4.61 g. During the test, the Petri dish with renewed or regenerated silica gel was maintained at 70 °C for 90 min in an oven before its substitution in the desiccator.
Glass preparation
The composition of the soda glass prepared is based on real nineteenth century soda glasses [14] but simplified to be more susceptible to the environment. The minor components were not considered to avoid interferences, and their percentage was added to the alkaline content. The fusion was made at 1600 °C for two hours in an alumina crucible in a Termolab electric furnace, followed by an annealing at 500 °C in a Carbolite furnace.
The glass obtained was later adapted for the accelerated aging tests. Slices and small glass blocks were cut and dried in an oven at 70 °C for 30 min to remove the water used in the process to lubricate the cutting disc. Subsequently, the slices were polished with the Buehler brand MetaServ 3000 polisher. Polishing was carried out progressively with P320, P600 and P1200 sandpaper according to FEPA standards, which correspond to a grain size of 46.2, 25.8 and 15.3 μm respectively, using ethanol or ethanol-based lubricant to lubricate the sandpaper. Finally, it was finished with a diamond paste polishing with 3 µm and 1 µm particles. On the other hand, the small glass blocks were ground and sieved following the standard UNE 400322:1999 [30]. For this, two mortars and sieves with different sizes were used, thus obtaining fractions between 0–300 μm, 300–500 μm and greater than 500 μm, which were stored in Eppendorf tubes.
Accelerated aging tests and experimental set-up
The humid and acidic environment was recreated in different desiccators. The prepared environments had 100% RH and 10 ppm of formic acid. These values were chosen in order to increase the hygroscopicity of glass and, therefore, the alteration rate [15].
To generate the acidic atmospheres, 0.549 mL of formic acid was added to 600 mL of distilled water to obtain 10 ppm, following the description of Bastidas et al. [31] The different sorbents and materials were exposed for a period of 21 days, during which the sorbents, sacrificial materials, or glasses reacted.
For the first test, the materials were deposited in Petri dishes inside the desiccators (Additional file 1: Fig. S1). In turn, the Petri dishes were supported on lower base supports to allow the formic acid and water vapors to rise more easily from the bottom of the desiccator.
For the second test, ~ 5.00 g of silica gel, a slice of the prepared glass embedded in resin, a porcelain weight boat with approximately 0.50 g of the glass fraction smaller than 300 µm and a thermohygrometer were introduced into each desiccator (Additional file 1: Fig. S1).
Characterization of the environment
The environments were monitored using thermohygrometers, and the adsorption capacity of the materials by ion exchange chromatography (IC).
The thermohygrometers, model BL-1D from the Rotronic company, were kept in the desiccators during the entire period of preparation and exposure to the environment. The thermohygrometers, with an accuracy of ± 3.0% HR and ± 0.3 °C, were programmed to record data every five minutes, obtaining the corresponding values for relative humidity and temperature.
The adsorption capacity for formic acid of the materials was evaluated by an indirect method by ion exchange chromatography. Samples of the solution from each desiccator (< 2 mL) were taken on days 0, 3, 7, 10, 14, and 21. All chromatographic analyses were performed at room temperature using the Metrohm Advanced Compac ion chromatographic instrument (861 IC) with conductivity detector (IC-819), liquid Handling Pump Unit (IC Pump 833), sample degasser (IC-837) and an 800 Dosino Dosing Device. Data acquisition, calibration curve construction and peak integration were carried out with a Metrohm 761 data acquisition system interconnected to a computer running MagIC Net 3.3 software. The identification and quantification of formate were carried out in a Metrosep Organic Acids column (250 × 4 mm, Ø 5 µm). The mobile phase was 0.5 mM sulfuric acid and 15% acetone, with a flow rate of 0.5 mL/min. The injection volume was 20 µL and the analysis time 30 min. The standard used was Supelco's Formate Standard for IC. The calibration curve was done between 15 and 500 µl with a correlation of 0.99.
The calculation of the decrease in formic acid was carried out using the Eq. (1).
$$\Delta Concentration\left(\%\right)=\frac{\left[Formate\,Day\,X\right]-[Formate\,Day\,0]}{\left[Formate\,Day\,0\right]}\times 100.$$
Sorbents and glass samples were characterized by X-ray fluorescence (XRF), surface area analysis, gravimetry, Fourier Transformed Infrared Spectroscopy in Attenuated Total Reflectance mode (FTIR-ATR), Scanning Electron Microscopy with X-ray Energy Dispersive Spectroscopy (SEM–EDS), µ-Raman spectroscopy, and Optical Microscopy.
The exact composition of the glass was analyzed with a PANalytical MagicX (PW-2424) wavelength dispersed X-ray spectrometer equipped with a rhodium tube (SUPER SHARP) of 2.4 kW. The results were treated with the quantitative silicate analysis curve after analyzing the samples as pearl (fusion of 0.3000 g sample and 5.5 g of Li2B4O7). The specific surface area was evaluated by the Monosorb Surface Area Analyzer MS-13 equipment from the Quantachrome company. The samples are degassed for 2 h in a 70:30 He:N2 gas stream at 150 °C. The measurement takes place by nitrogen adsorption at 77 K by the one-point Brunauer–Emmett–Teller (BET) method. The BET equation considers the van der Waals forces to be solely responsible for the adsorption process. From this method, the area of a solid can be determined by knowing the amount of adsorbed gas that is necessary for a monolayer to form and the area that an adsorbed molecule occupies. The specific surface area of each material is evaluated three times, obtaining the average result.
The hygroscopic capacity of each glass was determined by gravimetry in 0.5000 g of the glass fractions with Ø < 300 μm using three porcelain weight boats for each glass. Before starting the test, the empty weight boats were stored for three days in their corresponding desiccator to ensure that they are hydrated. Every week, the weight boats were weighed. Likewise, a slice of each glass was inserted into the desiccators leaving the polished face exposed. After the period of exposure to the different environments (21 days), all the weight boats and slices were stored in a desiccator with silica gel (Ø = 2–5 mm). The percentage of the increase in weight of the glass due to hydration with respect to the initial value was calculated using Eq. (2).
$$\Delta weight (\%)= \frac{Weighing\,boat\,day\,X-Weighing\,boat\,day\,0}{Weighing\,glass\,day\,0}\times 100.$$
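For illustration only, with hypothetical masses (not measured values): if a weighing boat containing 0.5000 g of glass weighed 25.3500 g on day 0 and 25.3650 g on day X, the hygroscopic weight gain would be $$\Delta weight \left(\%\right)=\frac{25.3650-25.3500}{0.5000}\times 100=3.0\%,$$ i.e., the increase in the boat weight is always referred to the initial mass of glass powder, not to the total mass of the boat.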
In order to observe the changes on silica gel and glasses, Fourier transform infrared spectroscopy (FTIR-ATR) was performed. In each spectrum, eight scans were made with a sweep from 4000 to 400 cm−1. The equipment used was the PerkinElmer brand Spectrum 100 FT-IR Spectrometer, together with a PIKE Technologies brand GladiATR Attenuated Total Reflectance accessory. This technique allows characterization through reflection or absorption spectra in the infrared range of the electromagnetic spectrum. From the frequencies of the functional groups, the compounds that compose it can be identified. In addition, it is necessary to consider the advantage that it can be used for both inorganic and organic substances in solid, liquid or gaseous state.
The surface of copper, steel, and silica gel samples was observed and analyzed by scanning electron microscopy with X-ray energy dispersive spectroscopy (SEM–EDS), using the Hitachi S-4700 equipment. This technique allows morphological and microanalytical characterization through electronic images and energy dispersion X-ray microanalysis. The interaction of the electron beam on the sample generates X-rays that reach the detector and allow obtaining microanalytical information, qualitative in this case. Prior to measurement, the samples were made conductive by sputtering with carbon. Likewise, the alteration products formed on copper and silica gel were characterized by Raman spectroscopy before and after the test. Raman spectra were recorded using a confocal Raman microscope integrated with atomic force microscopy (AFM) on an ALPHA 300AR microscope from WITec. This equipment allows combining the potential of an AFM microscope, being able to obtain images of up to 3 nm of lateral resolution, with the structural and compositional characterization of the materials at a submicron scale of confocal Raman spectroscopy. The microscope is equipped with a Nd:YAG laser. The spectra obtained were analyzed with the WITec Control Plus software. The acquisition time was 3.6 s for one single spectrum and the Raman image consists of 3000 spectra with a laser excitation of 532 nm and the incident laser power of 1 mW, using a tested area of 10 µm × 10 µm. The colors in the spectra correspond to different areas in the Raman image using a filter for 172–260 cm−1.
The surface of the glass samples was observed by optical microscopy with a Zeta Systems optical profilometer.
Selection of the best sorbent for formic acid
The first study was focused on the evaluation of the materials and their characterization to identify the best low-cost formic-acid sink.
Specific surface area
The specific surface area results showed low values for the metals (Table 1). Of all the materials, copper presented the smallest value, with a specific surface area of 0.1 m2/g, followed by steel with 0.5 m2/g. On the other hand, silica gel gave a significantly higher value of 63.1 m2/g. Finally, activated carbon presented the highest result, as expected, with a specific surface area of 788.8 m2/g.
Table 1 Specific surface area results
The best materials for the adsorption of formic acid and water from the environment would therefore be silica gel and activated carbon. However, this does not mean that copper and steel were useless; other factors must be taken into consideration. Although they have a minimal specific surface area, these materials acted as sacrificial materials, degrading before the exposed historical glass.
The environments were monitored with the thermohygrometers in order to determine how stable they are and how they evolve after the desiccator is opened (Fig. 1a–d). The evolution of the humidity in each desiccator should parallel the behaviour of the concentration of gaseous formic acid.
Evolution of the environmental conditions inside the desiccators as a function of time for (a) steel, (b) copper, (c) silica gel and (d) activated carbon. The asterisks on the abscissa axes mark the moments when the desiccator was opened
Figure 1a shows that, in the case of steel, the relative humidity remained stable at 99.9% throughout the test, except on the second day. The temperature undergoes oscillations associated with the changes between day and night, in addition to those associated with the weather on each day. The temperature traces coincide in all the figures, as expected. In the case of copper (Fig. 1b), the relative humidity does not remain stable; however, the oscillations do not exceed 2%, so the variation is not significant. The minima in relative humidity are mostly associated with temperature maxima. This is because, for a fixed water-vapor content, relative humidity decreases as the temperature rises, since the saturation pressure of water vapor increases with temperature.
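The inverse relation between temperature and relative humidity at a fixed water-vapor content can be sketched numerically with a Magnus-type approximation for the saturation vapor pressure; the coefficients below are one common parameterization, chosen here only for illustration, and are not taken from the paper.

```python
import math

# Magnus-type approximation for the saturation vapor pressure of water (hPa), T in degC.
# The coefficients 6.112 / 17.62 / 243.12 are one common (assumed) parameterization.
def e_sat(t_c):
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

# If the actual vapor pressure stays fixed, RH drops as the temperature rises.
e_actual = 0.999 * e_sat(20.0)              # air at 99.9% RH and 20 degC
for t in (20.0, 22.0, 24.0):
    rh = 100.0 * e_actual / e_sat(t)
    print(f"{t:.0f} degC -> RH = {rh:.1f} %")
```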
Figure 1c, d, corresponding to silica gel and activated carbon, show relative stability during the test. The lowest values were detected in the first days of the test because the materials were sensitive not only to the formic acid but also to the environmental humidity. From that moment on, the oscillations for activated carbon do not exceed 5%, and those for silica gel do not even exceed 2.5%.
None of the materials was found to reduce the relative humidity in a sustained manner; nevertheless, steel, copper, silica gel and activated carbon all keep the fluctuations in relative humidity within acceptable ranges.
Materials' alteration
Scanning electron microscopy made it possible to observe the changes that occurred on the surfaces of the copper wires, the steel wool and the silica gel. As a result of degradation, the copper wires became covered with degradation products (Fig. 2a, b). The Raman analyses of the degraded copper show peaks at 151, 221 and 649 cm−1 (Fig. 2c) that correspond to the active Raman bands of copper oxide, together with low-intensity bands of copper hydroxides that overlap with some of the formate-ion bands [32,33,34]. These results agree with the bibliography, according to which, at a concentration of 10 ppm of formic acid, the components of the patina are mainly cuprite (Cu2O), copper hydroxide (Cu(OH)2) and copper formate (Cu(HCOO)2), although the latter appears only as a small signal [25, 28].
SEM images of the copper wire (a) without degradation and (b) degraded, and (c) Raman analysis of the deposits
Regarding the silica gel, the surface of the spheres is homogeneous except for some craters and cracks in which small deposits were observed (Additional file 1: Fig. S2). The silica gel was analyzed before and after the test by FTIR-ATR and Raman spectroscopy. Figure 3a shows the FTIR-ATR spectra obtained. The degraded silica gel clearly presents the bands associated with water at 1640 and 3300 cm−1, corresponding to the H–O–H scissor bending and the O–H stretching, respectively (Additional file 1: Table S1). Likewise, a signal associated with formate ions stands out at 1739 cm−1, due to the asymmetric stretching of COO−. Furthermore, the signal at 3300 cm−1 associated with the O–H stretching may also be due to the formation of Si–OH groups. Similar results were observed by Raman spectroscopy on crushed spheres (Fig. 3b). The unaltered silica gel showed fluorescence, but in the degraded one new Raman bands assigned to silica and to hydroxyls appeared [35,36,37].
a FTIR-ATR spectra and b Raman spectra of undegraded and degraded silica gel. The FTIR-ATR peak assignments are summarized in Additional file 1: Table S1
Finally, there were no significant changes in steel and activated carbon before and after exposure to the acidic and humid environment. No formation of deposits or changes of color were detected on their surface (Additional file 1: Fig. S3).
Removal efficiency
The capacity of a material to remove environmental formic acid is directly related to the concentration of formate ions remaining in the solution. The acid in the vapor phase tends to dissolve in the adsorbed water or to react with the metals, so the equilibrium with the solution placed in the desiccator is re-established. Thus, the more acid is removed from the vapor phase, the lower the formate concentration in the solution.
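Eq. (1) is given in the experimental section of the paper (not reproduced here); assuming it has the usual form of a relative decrease with respect to the initial formate concentration, it could be evaluated as in the sketch below, with invented concentrations.

```python
# Relative decrease in formate concentration in the desiccator solution, assuming
# Eq. (1) is the usual (C_0 - C_t) / C_0 * 100; concentrations are invented (e.g. mg/L).
def formate_decrease_pct(c_initial, c_day_t):
    return (c_initial - c_day_t) / c_initial * 100.0

print(formate_decrease_pct(250.0, 180.0), "%")  # 28.0 %
```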
Figure 4, which represents the decrease in the formate concentration in the solutions for each material, shows that the best material is silica gel, which gives rise to the greatest decrease in the concentration of formate ions. It is even worth noting that it outperforms activated carbon, which is an outstanding sorbent material. Until the 14th day of the test, the steel shows a behavior on a par with copper; however, by the end of the test it shows a greater decrease in the formate concentration, departing from the trend it had followed with respect to copper. The latter material could already have reached saturation, while the steel had not. It is expected that the behavior of each material would be maintained beyond the 21 days of the test until saturation.
Decrease in formate concentration (Eq. 1) as a function of time and material
Optimization of silica gel and impact on glass
According to the results of Sect. 3.1, silica gel is the most suitable material for the adsorption of formic acid. It was, therefore, the material selected for the following tests to evaluate the procedure to reduce the acidic environment and its impact on the glass surface.
Three different procedures for the silica gel were assessed: maintaining the same silica during the test, renewing the silica each week, or regenerating the same silica each week. One environment was monitored without silica as reference.
The impact of the removal of formic acid in the glass was assessed by the exposure of glass slices and glass powder. Table 2 gathers the chemical composition of the soda glass.
Table 2 Chemical analysis of the prepared glass analyzed by XRF
Figure 5a–d correspond to the monitoring of the environmental conditions recorded by the thermohygrometers in the desiccators. They showed differences based on the different use of the silica.
Evolution of the environmental conditions inside the desiccator as a function of time for (a) the maintained silica, (b) renewed, (c) regenerated and (d) reference. The asterisks on the abscissa axes mark the moments when the desiccator was opened
Figure 5a shows that, when the same silica was maintained throughout the test, the relative humidity was stable at 99.9% during the whole process. On the other hand, the temperature undergoes fluctuations due to the changes between day and night. Figure 5b, c, corresponding to the renewed and regenerated silica gel, show very similar values. The relative humidity remained without significant variations, staying close to 99.9% with oscillations that did not exceed 5%. It is only worth highlighting days 7 and 14, when the decrease is greater than the maximum 10% allowed by preventive conservation standards in museums [38]; however, these drops coincide with the moments when the desiccators were opened and the dry silica was placed inside. Finally, Fig. 5d corresponds to the desiccator used as a reference, in which silica gel was not used. Its relative humidity also remained within the allowed values, with oscillations not exceeding 5% and with the relative-humidity minima mostly coinciding with the temperature maxima.
All the desiccators maintained stable relative humidity values, achieving the environmental conditions set for the study.
Gravimetry
The evolution in weight according to Eq. (2), as a function of the use of the silica gel, is shown in Fig. 6.
Increase in glass weight according to the use of silica gel (Eq. 2)
The boat with glass powder exposed to the humid and acidic environment without silica gel experienced the greatest increase in weight compared with the other environments. The absence of a sorbent material favors a greater hydration of the glass fraction. Regarding the weight increases of the glasses that had silica gel in their desiccators, when the same sorbent material was maintained during the 21 days, a greater increase is observed than with the other two protocols. In this case, the silica gel becomes saturated and its sorbent capacity decreases, which leads to a greater hydration of the glass. On the other hand, a lower weight increase is observed if the silica gel is regenerated or renewed: these weekly regeneration or renewal processes prevent the sorbent material from becoming saturated. The lowest increase was observed in the test in which the silica gel was renewed.
Figure 7 represents the evolution of the concentration of formate ions in the original solution as a function of the silica-gel procedure used. The greater the decrease of the formate ions in the desiccator solution, the greater the adsorption of these ions by the materials (unstable glass and/or silica gel).
Decrease in formate ions concentration (Eq. 1) according to the use of silica gel
The greatest decrease in formate ions corresponds to the reference desiccator. The acidic atmosphere increased the hygroscopic capacity of the glass, as observed in Fig. 6, with the formate ions becoming concentrated on the glass surface.
On the other hand, when the desiccators contain silica gel, the sorbent adsorbs the formate ions and, therefore, the greater the decrease of formate ions in the desiccator solution, the greater the protection of the glass can be. In Fig. 7, it is observed that the greatest decrease in formate ions is obtained when the silica gel is regenerated every week, followed by the maintained silica gel and, finally, the renewed silica gel. This does not match what is observed in Fig. 6; however, there is a competition between the sorbent and the glass surface, especially when the sorbent is saturated.
Infrared spectroscopy (FTIR-ATR)
The analysis by infrared spectroscopy (FTIR-ATR) makes it possible to follow the evolution of the glass surface in humid and acidic environments. The degradation of the glass slices is mainly related to hydration, which depends on the test duration and on the way in which the silica gel is used (Fig. 8). Table 3 shows the assignment of the bands that appear in the different spectra.
Normalized FTIR-ATR spectra of glasses on days (a) 0, (b) 7, (c) 14 and (d) 21 of the test. The peak assignments are summarized in Table 3
Table 3 Assignment of the glass FTIR-ATR bands identified in Fig. 8
Figure 8a–d show the evolution of the spectra of the surfaces of the glass slices as a function of the mode of use of the silica gel, for each day of measurement. Initially, all the surfaces show the same spectrum (Fig. 8a). These signals correspond to the structure of the glass, with the band associated with the symmetric stretching of Si–O–Si at 750 cm−1 and the stretching of the silanol groups at 890 cm−1. Figure 8b–d show the appearance of new signals and a change in their intensities. The bands associated with the symmetric stretching of Si–O–Si (750 cm−1) decrease due to the hydrolytic attack on the glass network, while those corresponding to the stretching of the SiOH groups (900 cm−1) increase due to the formation of silanol groups. In addition, new bands associated with water appear at 1630, 2160 and 3300 cm−1, corresponding to the H–O–H scissor bending, the H–O–H bend + libration and the O–H stretching, respectively, due to hygroscopic attack. Likewise, owing to the exposure to the acidic environment, new bands associated with formate groups appear, due to the dissociation of formic acid in solution and its interaction with the metal ions of the glass. These bands correspond to the bending of COO− (750 cm−1), the symmetric stretching of COO− with Na (1350 cm−1), the rocking of COO− (1080 and 1380 cm−1), the asymmetric stretching of COO− with Na (1580 cm−1), the CH bending vibration (2730 cm−1) and the CH stretching (2820 cm−1). None of the spectra show the bands corresponding to the C=O stretching of HCOOH, either as a monomer (1731–1743 cm−1) or as a dimer (1685–1712 cm−1) [19]. This is because the formic acid dissolved in the adsorbed water has fully interacted with the metal ions on the surface of the glass.
The appearance of these bands associated with water and formic acid and the decrease of those corresponding to the initial structure of the glass is related to the advance in the hydration of the glass surface due to the atmosphere with 100% relative humidity and 10 ppm of formic acid.
The intensities of the bands in infrared spectra depend strongly on the specific area of the surface being analyzed. To assess the real effect of the degradation on the glass slices, the infrared spectroscopy analysis was carried out again two weeks after the degradation test ended, once the samples were dry (Fig. 9), so that the humidity of the glass surface would not interfere. Figure 9 shows that, for the glass exposed in the reference desiccator (without sorbent material), the bands associated with water at 1630, 2160 and 3300 cm−1 are more intense than in the other glasses. Likewise, this glass has the most intense bands at 1380, 2730 and 2820 cm−1, corresponding to the formate ions. This glass, together with the slice exposed to the maintained silica gel, were the glasses in which the bands related to glass degradation were most intense, as well as showing a slight increase in the Si–O–Si stretching band (750 cm−1) related to the formation of a silica gel layer on the surface [18, 22]. The higher the intensity of these water and formate-ion bands, the more advanced the degradation [18, 22]. Finally, the glasses exposed with the regenerated and the renewed silica gel follow, in descending order of band intensity. These results coincide with those obtained in the gravimetry test.
Normalized FTIR-ATR spectra of the dry surface of the glass at the beginning and after the exposures. The peak assignments are summarized in Table 3
Optical microscopy
The surfaces of the glass slices were observed by optical microscopy at the end of the alteration test. Figure 10a–d correspond to the glasses exposed with the maintained, renewed and regenerated silica gel and in the absence of sorbent material, respectively. A clear difference is observed between the surfaces of the glasses that had sorbent material in their desiccator (Fig. 10a–c) and the glass that did not have silica gel (Fig. 10d). Likewise, the alteration is less pronounced when the silica gel was renewed (Fig. 10b) or regenerated (Fig. 10c) than when the same silica was maintained during the 21 days of the test (Fig. 10a).
Glass surface with silica gel (a) maintained, (b) renewed, (c) regenerated and (d) without it
A set of four low-cost materials (copper wires, steel wool, activated carbon and silica gel) has been evaluated as formic acid sinks. The highest specific surface area corresponds to activated carbon, followed by silica gel. The low values of the steel wool and copper wires do not mean that they cannot act as sorbent materials, since they were used as sacrificial materials.
The FTIR-ATR analyses showed the bands of water and formate ions in the silica gel after its exposure, showing that the material acted as a sorbent for both. SEM–EDS and µ-Raman spectroscopy showed that the large deposits on the degraded copper wires were associated with Cu2O, Cu(HCOO)2 and, probably, Cu(OH)2, and revealed hydroxyl bands in the degraded silica gel.
Regarding the capacity to remove atmospheric formic acid, silica gel was the material that produced the greatest decrease, followed by activated carbon. Furthermore, the relative humidity inside the silica gel desiccator was kept close to 100% without significant variations. Therefore, it was concluded that silica gel was the best material for the adsorption of formic acid under the conditions evaluated.
The second part of the study analyzed the degradation produced in a soda glass and evaluated its applicability in museum showcases. For that, four different procedures were evaluated: without the sorbent material and maintaining the silica gel, renewing and regenerating it during the test.
In the desiccator without silica gel, the glass showed the highest increase in weight, in contrast to those with sorbent material. The order from highest to lowest increase in weight was the maintained, regenerated and renewed silica. The ion chromatography analysis showed that the greatest decrease in formate ions in solution occurred in the desiccator without silica gel; this is explained by the higher hygroscopic capacity of the glass, previously studied by gravimetry. Comparing the three desiccators with sorbent material with each other, the silica regenerated each week achieved the best result. The characterization of the surface of the glass slices by FTIR-ATR clearly showed bands assigned to water and formate ions in all environments and throughout the exposure period. The comparison of the glass surfaces after the test showed that in the glass without sorbent material the signals of water and formate ions were more intense than in the other glasses, as a result of its alteration. The alteration of the glasses exposed with silica gel depended on the method used, with the renewed silica being the best option. These results also agree with the observations made by optical microscopy.
In summary, the presence of formic acid sinks always favors the removal of the acidic environment, improving the conservation of cultural objects. Under the conditions tested, silica gel turned out to be the best option: it was the most efficient material at reducing the concentration of formic acid and mitigating the degradation produced in heritage objects exposed to a humid and acidic environment. One limitation of this procedure is that the material has to be regularly renewed to avoid its saturation. In addition, the evaluation of the material through the degradation produced in a soda glass showed that its use is optimized by regenerating or renewing it weekly. These results should be complemented with further studies with other relevant pollutants and at lower humidity, e.g., to check whether silica gel also performs better than activated carbon at 55% RH.
All data generated or analyzed during this study are included in this published article. Raw data (including spectra) are available upon request from the authors.
Budu A-M, Sandu I. Monitoring of pollutants in museum environment. Present Environ Sustain Develop. 2015;9:173–80.
Cruz AJ, Pires J, Carvalho AP, Brotas de Carvalho M. Comparison of adsorbent materials for acetic acid removal in showcases. J Cult Herit. 2008;9:244–52.
Gibson LT, Watt CM. Acetic and formic acids emitted from wood samples and their effect on selected materials in museum environments. Corros Sci. 2010;52:172–8.
Grøntoft T, Lankester P, Thickett D. Reduction of acidic pollutant gases inside showcases by the use of activated carbon adsorbers. e-Preserv Sci. 2015;12:28–37.
Schieweck A. Adsorbent media for the sustainable removal of organic air pollutants from museum display cases. Herit Sci. 2020;8:1–18.
Palomar T, García-Patrón N, Pastor P. Spanish Royal glasses with crizzling in historical buildings. The importance of environmental monitoring for their conservation. Build Environ. 2021;202:108054.
Grzywacz CM. Monitoring for gaseous pollutants in museum environments. Getty Conservation Institute; 2006.
Martellini T, Berlangieri C, Dei L, Carretti E, Santini S, Barone A, et al. Indoor levels of volatile organic compounds at Florentine museum environments in Italy. Indoor Air. 2020;30:900–13.
Smedemark SH, Ryhl-Svendsen M, Schieweck A. Quantification of formic acid and acetic acid emissions from heritage collections under indoor room conditions. Part I: laboratory and field measurements. Herit Sci. 2020;8:1–8.
Grøntoft T. Performance evaluation for museum enclosures. Measurement, modelling and mitigation of pollutant impact on objects in museum enclosures. Preserv Sci. 2012;9:36–46.
Parmar SS, Grosjean D. Sorbent removal of air pollutants from museum display cases. Environ Int. 1991;17:39–50.
Smedemark SH, Ryhl-Svendsen M, Toftum J. Removal of organic acids from indoor air in museum storage rooms by active and passive sorption techniques. Stud Conserv. 2020;65:251–61.
Harper M. Sorbent trapping of volatile organic compounds from air. J Chromatogr A. 2000;885:129–51.
Rodrigues A, Fearn S, Palomar T, Vilarigues M. Early stages of surface alteration of soda-rich-silicate glasses in the museum environment. Corros Sci. 2018;143:362–75.
Arévalo R, Mosa J, Aparicio M, Palomar T. The stability of the Ravenscroft's glass. Influence of the composition and the environment. J Non-Cryst Solids. 2021;565:120854.
Fernández JM. El vidrio. 3rd edn. Madrid: Consejo Superior de Investigaciones Científicas; 2003
Palomar T. Chemical composition and alteration processes of glasses from the Cathedral of León (Spain). Bol Soc Esp Ceram Vidrio. 2018;57:101–11.
Palomar T, Chabas A, Bastidas D, Fuente D, Verney-Carron A. Effect of marine aerosols on the alteration of silicate glasses. J Non-Cryst Solids. 2017;471:328–37.
Lee DH, Condrate RA. FTIR spectral characterization of thin film coatings of oleic acid on glasses: I. Coatings on glasses from ethyl alcohol. J Mater Sci. 1999;34:139–46.
Eggert G, Fischer A. The formation of formates: a review of metal formates on heritage objects. Herit Sci. 2021;9:1–13.
Thickett D, Ling D. Investigation of weeping glass deterioration under controlled relative humidity conditions. Stud Conserv. 2021;1–7.
Palomar T, de la Fuente D, Morcillo M, de Buergo MA, Vilarigues M. Early stages of glass alteration in the coastal atmosphere. Build Environ. 2019;147:305–13.
Cid R. Influencia de los COVs en la degradación del vidrio histórico. MSC dissertation. Universidad Autónoma de Madrid; 2021.
Grosjean D, Parmar SS. Removal of air pollutant mixtures from museum display cases. Stud Conserv. 1991;36:129–41.
Cano E, Torres CL, Bastidas JM. An XPS study of copper corrosion originated by formic acid vapour at 40% and 80% relative humidity. Mater Corros. 2001;52:667–76.
Quraishi MA, Ansari FA. Corrosion inhibition by fatty acid triazoles for mild steel in formic acid. J Appl Electrochem. 2003;33:233–8.
Özer A, Altundoǧan HS, Erdem M, Tümen F. Study on the Cr(VI) removal from aqueous solutions by steel wool. Environ Pollut. 1997;97:107–12.
Tétreault J, Cano E, van Bommel M, Scott D, Dennis M, Barthés-Labrousse MG, et al. Corrosion of copper and lead by formaldehyde, formic and acetic acid vapours. Stud Conserv. 2003;48:237–50.
López-Delgado A, Cano E, Bastidas JM, López FA. A laboratory study of the effect of acetic acid vapor on atmospheric copper corrosion. J Electrochem Soc. 1998;145:4140–7.
AENOR. UNE 400322:1999. Vidrio. Resistencia hidrolítica del vidrio en grano a 98 °C. Método de ensayo y clasificación. Madrid; 1999.
Bastidas JM, López-Delgado A, Cano E, Polo JL, López FA. Copper corrosion mechanism in the presence of formic acid vapor for short exposure times. J Electrochem Soc. 2000;147:999.
Sułowska J, Wacławska I, Szumera M. Effect of copper addition on glass transition of silicate-phosphate glasses. J Therm Anal Calorim. 2012;109:705–10.
Kadikova I, Morozova E, Yuryeva TV, Pankin D, et al. Investigation of 19th century glass beads degraded areas by Raman spectroscopy and luminescence spectroscopy. International symposium on fundamentals of laser assisted micro- and nanotechnologies (FLAMN-19). Saint-Petersburg (Russia); 2019.
Colomban P, Schreiber HD. Raman signature modification induced by copper nanoparticles in silicate glass. J Raman Spectrosc. 2005;36:884–90.
Vivar Mora L, Naik S, Paul S, Dawson R, Neville A, Barker R. Influence of silica nanoparticles on corrosion resistance of sol–gel based coatings on mild steel. Surf Coat Technol. 2017;324:368–75.
Ruggiero L, Sodo A, Cestelli-Guidi M, Romani M, Sarra A, Postorino P, et al. Raman and ATR FT-IR investigations of innovative silica nanocontainers loaded with a biocide for stone conservation treatments. Microchem J. 2020;155:104766.
Colomban P, Etcheverry MP, Asquier M, Bounichou M, Tournié A. Raman identification of ancient stained glasses and their degree of deterioration. J Raman Spectrosc. 2006;37:614–26.
Dirección General de Bellas Artes y Bienes Culturales. Ministerio de Cultura. Normas de conservación preventiva para la implantación de sistemas de control de condiciones ambientales en museos, bibliotecas, archivos, monumentos y edificios históricos. Sección de Conservación Preventiva, Área de Laboratorio 2009.
Ito K, Bernstein HJ. The vibrational spectra of the formate, acetate and oxalate ions. Can J Chem. 1956;34:170–8.
Vasconcelos DCL, Carvalho JAN, Mantel M, Vasconcelos WL. Corrosion resistance of stainless steel coated with sol–gel silica. J Non-Cryst Solids. 2000;273:135–9.
Joni IM, Nulhakim L, Vanitha M, Panatarani C. Characteristics of crystalline silica (SiO2) particles prepared by simple solution method using sodium silicate (Na2SiO3) precursor. J Phys Conf Ser. 2018:12006.
Amuthambigai C, Mahadevan CK, Sahaya SX. Growth, optical, thermal, mechanical and electrical properties of anhydrous sodium formate single crystals. Curr Appl Phys. 2016;16:1030–9.
Verma PK, Kundu A, Puretz MS, Dhoonmoon C, Chegwidden OS, Londergan CH, et al. The bend+libration combination band is an intrinsic, collective, and strongly solute-dependent reporter on the hydrogen bonding network of liquid water. J Phys Chem B. 2018;122:2587–99.
Mojet BL, Ebbesen SD, Lefferts L. Light at the interface: the potential of attenuated total reflection infrared spectroscopy for understanding heterogeneous catalysis in water. Chem Soc Rev. 2010;39:4643–55.
Maas JPM. Infrared absorption spectrum of potassium formate, dependent on the sampling technique. Spectrochim Acta A. 1978;34:179–80.
Shokri B, Abbasi-Firouzjah M, Hosseini SI. FTIR analysis of silicon dioxide thin film deposited by Metal organic-based PECVD. In: Proceeding of 19th International Symposium on Plasma Chemistry Society. 2009.
Senturk U, Lee DH, Condrate RA, Varner JR. ATR-FTIR spectral investigation of SO2-treated soda-lime-silicate float glass. Mater Res Soc Symp P. 1996;407:337–42.
Socrates G. Infrared and Raman characteristic group frequencies: tables and charts. 3rd ed. West Sussex: Wiley; 2004.
Simonsen M, Sønderby C, Li Z, Søgaard E. XPS and FT-IR investigation of silicate polymers. J Mater Sci. 2009;44:2079–88.
The authors acknowledge A. Moure and A. Tamayo (ICV-CSIC) for providing some of the materials, R. Navidad (ICV-CSIC) for her help during the FTIR analyses, CSS research group (ICV-CSIC) to use the profilometer, and to the Analysis Service Unit facilities of ICTAN-CSIC for the ion chromatographic analyses and Estela de Vega for her assistance.
This work has been funded by Fundación General CSIC (ComFuturo Programme) and the Spanish Ministry of Science, Innovation and Universities (Project RTI2018-095373-J-I00). The authors wish to acknowledge support of the publication fee by the CSIC Open Access Publication Support Initiative through its Unit of Information Resources for Research (URICI) and the professional support of the Interdisciplinary Thematic Platform from CSIC Open Heritage: Research and Society (PTI-PAIS).
Dpto. Ciencias. Univ. Autónoma de Madrid. Campus de Cantoblanco. Avda. Tomás y Valiente, 7, 28049, Madrid, Spain
Rodrigo Arévalo
Instituto de Cerámica y Vidrio (ICV-CSIC), c/ Kelsen 5, Campus de Cantoblanco, 28049, Madrid, Spain
Jadra Mosa, Mario Aparicio & Teresa Palomar
Jadra Mosa
Mario Aparicio
Teresa Palomar
TP designed the study; RA carried out the tests and analyses; MA analyzed the samples by SEM; JM analyzed the samples by µ-Raman spectroscopy; RA and TP prepared the original draft. All authors read and approved the final manuscript.
Correspondence to Teresa Palomar.
Additional file 1
: Fig. S1. Distribution inside each desiccator during the (a) first test: selection of the best formic acid sink, (b) second test: optimization of the silica gel. Fig. S2. SEM image of (a) the silica gel surface and (b) the deposits formed. Fig. S3. SEM image of the steel (a) before the test and (b) after the test. Table S1. Assignment of FTIR-ATR bands for silica gel.
Arévalo, R., Mosa, J., Aparicio, M. et al. Different low-cost materials to prevent the alteration induced by formic acid on unstable glasses. Herit Sci 9, 142 (2021). https://doi.org/10.1186/s40494-021-00617-x
Waterloo, December 8 - 11, 2017
Org: Kevin Hare, Wentang Kuo and Yu-Ru Liu (University of Waterloo)
JULIA BRANDES, University of Waterloo
Vinogradov systems missing the linear slice [PDF]
The resolution of Vinogradov's Mean Value Conjecture by Wooley and Bourgain, Demeter and Guth (see previous talk) has transformed our understanding of systems of diagonal equations. We are now able to obtain sharp mean value estimates for systems of Vinogradov type, consisting of one equation of degree $j$ for all $1 \le j \le k$, in which it is possible to take advantage of certain symmetries of the system. In this talk we will explore systems of Vinogradov type where the linear equation has been removed. It turns out that by elementary means it is possible to establish diagonal behavior in a larger range than what follows from Efficient Congruencing and $\ell^2$-decoupling methods. This is joint work with Trevor Wooley.
KARL DILCHER, Dalhousie University
Derivatives and special values of higher-order Tornheim zeta functions [PDF]
We study analytic properties of the Tornheim zeta function $T(r,s,t)$, and in particular the case $\omega_3(s):=T(s,s,s)$. While the values at positive integers have long been known, we evaluate $\omega_3(0)$ and show that $\omega_3(m)=0$ for all negative integers $m$. As our main result, we find the derivative of this function at $s=0$, which turns out to be surprisingly simple. I will also show that all these results have analogues for Tornheim zeta functions of arbitrary orders. These results were first conjectured by J. Borwein and D. Bailey using high-precision calculations based on an identity due to R. Crandall that involves a free parameter and provides an analytic continuation. This identity was also the main tool in the eventual proofs of our results. (Joint work with Hayley Tomkins).
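For readers outside the area: the Tornheim (double) zeta function referred to here is usually defined, for parameters with sufficiently large real parts, by the double series
$$T(r,s,t)=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{1}{m^{r}\,n^{s}\,(m+n)^{t}},\qquad \omega_3(s)=T(s,s,s),$$
so the results above concern its analytic continuation and the diagonal function $\omega_3$. (This definition is supplied here for convenience and is not part of the submitted abstract.)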
DANIEL FIORILLI, University of Ottawa
Low-lying zeros of quadratic Dirichlet $L$-functions: the transition [PDF]
I will discuss recent joint work with James Parks and Anders Södergren. Looking at the one-level density of low-lying zeros of quadratic Dirichlet $L$-functions, Katz and Sarnak predicted a sharp transition in the main terms when the support of the Fourier transform of the implied test functions reaches the point $1$. By estimating this quantity up to a power-saving error term, we show that such a transition is also present in lower-order terms. In particular this answers a question of Rudnick coming from the function field analogue. We also show that this transition is also present in the Ratios Conjecture's prediction.
TRISTAN FREIBERG, University of Waterloo
Variance for primes in arithmetic progression: sparse, sparser,... [PDF]
Using Goldston's and Vaughan's approach to the Montgomery-Hooley asymptotic formula, Brüdern and Wooley extended this asymptotic to the case where the moduli run over polynomial values. As suggested by Brüdern and Wooley, the method of proof allows one to look at even sparser variance: we consider moduli of the form $\big[\exp\big((\log k)^{\gamma}\big)\big]$, where $1 < \gamma < 3/2$. This is joint work with Roger Baker.
JOHN FRIEDLANDER, University of Toronto
Twin primes via exceptional characters [PDF]
We sketch our arguments (joint with H. Iwaniec) which lead from the assumption of the existence of exceptional Dirichlet characters to the asymptotic formula in related ranges for the distribution of twin primes, providing our own version of a well-known theorem of R. Heath-Brown.
ALIA HAMIEH, University of Northern British Columbia
Special Values of L-functions of Hilbert Modular Forms [PDF]
This talk is based on joint work with Naomi Tanabe. We discuss the family of central values of the Rankin-Selberg convolutions of two adelic Hilbert modular forms both of which have varying weight parameter k. We prove that a large number of these values (though still of zero density) are non-vanishing. We also present an asymptotic formula for a (mollified) first moment of these values.
MATILDE LALIN, Université de Montréal
Remarks on the Mahler measure for arbitrary tori [PDF]
The Mahler measure of a Laurent polynomial $P$ is defined as the integral of $\log|P|$ over the unit torus with respect to the Haar measure. For multivariate polynomials, it often yields special values of $L$-functions. We consider a variation of the Mahler measure where the defining integral is performed over a more general torus. We focus our investigation on two particular polynomials related to a certain elliptic curve $E$, and we establish new formulas for this variation of the Mahler measure in terms of $L'(E,0)$. This is joint work with my summer student T. Mittal.
YOUNESS LAMZOURI, York University
Prime number races with many contestants [PDF]
We investigate the logarithmic densities in prime number races with $r$ competitors modulo q, when $r, q\to \infty$, assuming the standard conjectures GRH and LI. Among our results, we uncover an interesting transition in the asymptotic behavior of these densities when $r=(\log q)^{1+o(1)}$. First, in a joint work with A. Harper, we prove that these densities are all asymptotic to $1/r!$ when $r\leq (\log q)^{1-\epsilon}$, thus showing that all biases dissolve in this range. On the other hand, in a recent joint work with K. Ford and A. Harper, we show that when $r/\log q\to \infty$, there exist r-way prime number races where the densities are much smaller than $1/r!$, and others where the densities are much larger than $1/r!$, answering a question of A. Feuerverger and G. Martin. The proofs use various probabilistic tools, including a version of Stein's method of exchangeable pairs, and a quantitative multidimensional Gaussian approximation theorem, obtained through Lindeberg's method.
ALLYSA LUMLEY, York University
Distribution of Values of $L$-functions associated to Hyperelliptic Curves over Function Fields [PDF]
In 1992, Hoffstein and Rosen proved a function field analogue to Gauss' conjecture regarding the class number, $h_D$, of a discriminant $D$ by averaging over all polynomials with a fixed degree. In this case $h_D=|\text{Pic}(\mathcal{O}_D)|$, where $\text{Pic}(\mathcal{O}_D)$ is the Picard group of $\mathcal{O}_D$. Andrade later considered the average value of $h_D$, where $D$ is monic, squarefree and its degree varies. He achieved these results by calculating the first moment of $L(1,\chi_D)$ in combination with Artin's formula relating $L(1,\chi_D)$ and $h_D$. For this talk we discuss the complex moments of $L(1,\chi_D)$. We show that these moments are very nearly equal to those of a random probabilistic model. We also describe the distribution of values for both $L(1,\chi_D)$ and $h_D$.
GREG MARTIN, University of British Columbia
The distribution of the number of subgroups of the multiplicative group [PDF]
Let $I(n)$ denote the number of isomorphism classes of subgroups of $(\mathbb{Z}/n\mathbb{Z})^\times$, and let $G(n)$ denote the number of subgroups of $(\mathbb{Z}/n\mathbb{Z})^\times$ counted as sets (not up to isomorphism). We prove that both $\log G(n)$ and $\log I(n)$ satisfy Erdös–Kac laws, in that suitable normalizations of them are normally distributed in the limit. Of note is that $\log G(n)$ is not an additive function but is closely related to the sum of squares of additive functions. We also establish the orders of magnitude of the maximal orders of $\log G(n)$ and $\log I(n)$.
RAM MURTY, Queen's University
Hilbert's tenth problem over number fields [PDF]
Hilbert's tenth problem for rings of integers of number fields remains open in general, although a conditional negative solution was obtained by Mazur and Rubin assuming some unproved conjectures about the Shafarevich-Tate groups of elliptic curves. In this talk, we highlight how the non-vanishing of certain L-functions is related to this problem. In particular, we show that Hilbert's tenth problem for rings of integers of number fields is unsolvable assuming the automorphy of L-functions attached to elliptic curves and the rank part of the Birch and Swinnerton-Dyer conjecture. This is joint work with Hector Pasten.
JONATHAN SORENSON, Butler University
Open problems related to finding strong pseudoprimes [PDF]
In joint work with Jonathan Webster, we presented an algorithm that, given $x,m>0$, finds all integers $\le x$ that are strong pseudoprimes to the first $m$ prime bases. Under the assumption of some conjectures, and assuming $m\rightarrow\infty$ with $x$, this algorithm takes at most $x^{2/3+\epsilon}$ time, for $\epsilon>0$. (doi.org/10.1090/mcom/3134)
After a quick overview of how the algorithm works, in this talk we will discuss several conjectures/open problems in analytic number theory that arise in the running time analysis of this algorithm.
AKSHAA VATWANI, University of Waterloo
Zeros of partial sums of $L$-functions [PDF]
We discuss some results regarding the distribution of zeros of partial sums of a certain class of $L$-functions. These involve obtaining Hal\'asz-type mean value estimates for a suitable class of multiplicative functions. This is joint work with Arindam Roy.
TREVOR WOOLEY, University of Bristol
Nested efficient congruencing and relatives of Vinogradov's mean value theorem [PDF]
The main conjecture in Vinogradov's mean value theorem states that, for each $\epsilon>0$, one has $$\int_{[0,1)^k}\Biggl| \sum_{1\le x\le X}e(\alpha_1x+\ldots +\alpha_kx^k)\Biggr|^{2s}\,{\rm d}{\underline \alpha}\ll X^{s+\epsilon}+X^{2s-k(k+1)/2}.$$ This is now a theorem of Bourgain, Demeter and Guth (in 2016, via $\ell^2$-decoupling) and the speaker (in 2014 for $k=3$, and in 2017 in general, via (nested) efficient congruencing). We report on some generalisations of this conclusion, some of which go beyond the orbit of decoupling and efficient congruencing.
Is there a Geometric Interpretation of Spinors?
Usually in Physics we define a spinor to be an element of the $\left(\frac{1}{2},0\right)$ representation space of the Lorentz group. Essentially this boils down to the 'n-tuple of numbers that transforms like a spinor' definition that physicists tend to use for vectors, covectors, and tensors.
However, vectors, covectors, and tensors also have geometric definitions that are much nicer than, and also equivalent to, the 'n-tuple of numbers' definition. For example, a vector can be thought of as an equivalence class of curves tangent at a point, or the directional derivative at a point. A covector can be thought of as a differential 1-form or as an equivalence class of functions with equal gradient at a point. Tensors are then tensor products of these spaces.
I was wondering if there is a similar definition of spinors based in differential geometry rather than just the representation theory of the Lorentz group. If so, are these specific to certain manifolds (complex, Lorentzian, etc), or are they general to all manifolds?
differential-geometry spin-geometry
gautampk
spin is related to rotation, if a thing has spin 'n' this means that this object is invariant under rotations $2\pi/n$ – Jose Garcia May 7 '17 at 10:59
@JoseGarcia Yes I know what spin is. I was asking if there was a definition of a spinor (an element of a complex vector space) that was not based in group theory. – gautampk May 7 '17 at 11:06
The spin groups (double covers of orientation-preserving isometry groups) are constructed in Clifford algebras, and the recursive isomorphisms for Clifford algebras are used to classify them as matrix algebras (over R, C, H) which give spin representations. I have heard some sources say these matrix representations are "non-canonical" or unnatural, in the sense of depending on coordinates and arbitrary choices, which suggests cause for pessimism in the search for direct geometric meaning. But I've never proved this statement (after satisfactorily formalizing the claim categorically first). – arctic tern May 8 '17 at 22:01
Not an answer but perhaps fun/interesting: cs.umb.edu/~eb/spinorSpanner.pdf – Ethan Bolker May 10 '17 at 23:58
Not directly what you want but I like the orientation explanation of spinors. In Misner, Thorne and Wheeler they draw a small box whose corners are connected by string to a larger box. They illustrate how rotation tangles the string but all is recorded by the spinor. – Drone Scientist May 17 '17 at 0:25
I understand that your background is in physics, but (since I have neither time nor desire to write here a crash course in the relevant mathematics), I write my answer pretending that you are a reasonably advanced graduate student in math.
Let me first try to interpret your question: You start with the standard representation of the group $G=SO(3,1)_0$ (the connected component of identity in $SO(3,1)$, equivalently, the subgroup of $SO(3,1)$ preserving the future light cone) on the Lorentzian 4-space $R^{3,1}$, the 4-dimensional real vector space equipped with a nondegenerate symmetric bilinear form $\langle, \rangle$ of signature $(3,1)$, with the associated quadratic form $$ x^2 +y^2 + z^2 -w^2. $$ The 2-fold nontrivial cover of the Lie group $G$ (the spinor group of $G$) is isomorphic to $SL(2, {\mathbb C})$ which has the natural (spinor) representation on ${\mathbb C}^2$. You would like to have an interpretation of vectors in ${\mathbb C}^2$ in terms of geometric objects associated with $R^{3,1}$.
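(A quick numerical aside, not part of the original answer: the standard way to see the 2-fold cover $SL(2,\mathbb C)\to SO(3,1)_0$ is to identify $(x,y,z,w)\in\mathbb R^{3,1}$ with the Hermitian matrix $X=\begin{pmatrix} w+z & x-iy\\ x+iy & w-z\end{pmatrix}$, so that $\det X = w^2-x^2-y^2-z^2$, and to let $A\in SL(2,\mathbb C)$ act by $X\mapsto AXA^\dagger$. The short NumPy check below only verifies numerically that this action preserves the Lorentzian form; the random-matrix construction is an arbitrary choice made for the illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

def to_hermitian(x, y, z, w):
    # (x, y, z, w) in R^{3,1}  ->  Hermitian 2x2 matrix with det = w^2 - x^2 - y^2 - z^2
    return np.array([[w + z, x - 1j * y],
                     [x + 1j * y, w - z]])

def random_sl2c():
    # random complex 2x2 matrix rescaled to determinant 1
    a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return a / np.sqrt(np.linalg.det(a))

v = rng.normal(size=4)          # a random vector (x, y, z, w)
X = to_hermitian(*v)
A = random_sl2c()
X2 = A @ X @ A.conj().T         # the SL(2, C) action on Hermitian matrices

# det is the Minkowski quadratic form; it is preserved up to rounding error
print(np.linalg.det(X).real, np.linalg.det(X2).real)
```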
a. First, I will do this in the case of the group $SO(2,1)_0$ since it is easier and illuminating. The natural representation of this group is on the Lorentzian 3-space $R^{2,1}$; the spinor group of $SO(2,1)_0$ is $SL(2, {\mathbb R})$ and its natural (spinor) representation is on ${\mathbb R}^2$. In order to interpret the corresponding spinors (vectors in ${\mathbb R}^2$), let me first note obstacles:
The dimension mismatch: 2 versus 3.
The action of $SL(2, {\mathbb R})$ on nonzero spinors is transitive (with stabilizers conjugate to the 1-parameter subgroup of strictly upper triangular matrices: Such matrices are called "unipotent"), while the action of $SO(2,1)_0$ on nonzero vectors in $R^{2,1}$ is non-transitive: There are three types of orbits, consisting of time-like, null and space-like vectors. Null-vectors will be most important for us, they are defined by the condition $\langle v, v \rangle=0$.
Any "natural" class of objects which one can define in $R^{2,1}$ will be acted upon by the group $SO(2,1)_0$ while we are interested in spinors, on which $SO(2,1)_0$ does not act.
While the space $C$ of future-like null-vectors is 2-dimensional (I regard the zero vector as a future null-vector for convenience), it does not have a natural structure of a vector space. Nevertheless, it does have a distinguished point (the origin) and a family of distinguished lines which are intersections of $C$ with affine 2-planes parallel to null-lines. The problem with these "lines" in $C$ is that some of them are only half-lines (the ones which pass through the origin); the rest are (connected) hyperbolas, so at least topologically they do look like lines.
Note, however, that:
The null-cone is 2-dimensional, homeomorphic to the 2-plane minus the origin. (Explicitly, you can define this homeomorphism by projecting $C$ to the coordinate plane in $R^{2,1}$ via the map $(x,y,z)\mapsto (x,y)$, where the quadratic form of the Lorentzian inner product is given by $x^2 + y^2 - z^2$.) This takes care of the dimension discrepancy.
The action of $SO(2,1)_0$ on the future null-cone $C$ minus the origin is transitive. The stabilizer of each nonzero null-vector is again 1-parameter (unipotent) subgroup of $SO(2,1)_0$. Under the covering map $SL(2, {\mathbb R})\to SO(2,1)_0$ the unipotent subgroups of the former map isomorphically to the unipotent subgroups of the latter. This is what we will exploit.
Now, I will let $S$ be the 2-fold cover of the punctured plane $N= C-\{{\mathbf 0}\}$. Informally, you can think of the elements of $S$ as null-vectors $v\in N$ each equipped with a "spin", a $\pm$ sign, which switches to the opposite sign as we rotate $v$ 360 degrees around the z-axis. In terms of polar coordinates we can think of the elements of $S$ as $$ (r, \theta), r\ne 0, 0\le \theta< 4\pi. $$ The reduction modulo $2\pi$ sends these to points in the punctured plane whose polar coordinates are $$ (r, \theta), 0\le \theta< 2\pi. $$ This passage to the 2-fold cover is the "unnatural step" which allows one to get spinors.
Now, the action of $SO(2,1)_0$ lifts to an action on $S$ but not of $SO(2,1)_0$ itself: The lift is the action of the spinor group $SL(2, {\mathbb R})$. One can verify (for instance, by observing that the action of $SL(2, {\mathbb R})$ on $S$ is transitive with point-stabilizers which are 1-parameter unipotent subgroups as required) that there is a diffeomorphism $S\to {\mathbb R}^2 -\{{\mathbf 0}\}$ which conjugates the action of $SL(2, {\mathbb R})$ on $S$ to the standard linear action of $SL(2, {\mathbb R})$ on ${\mathbb R}^2 -\{{\mathbf 0}\}$. Under this diffeomorphism the "lines" which I mentioned above map to affine lines in $SL(2, {\mathbb R})$; each half-line lifts to the union of two half-lines in $S$ which map to a line (minus the origin) in ${\mathbb R}^2 -\{{\mathbf 0}\}$.
This gives you a reasonably geometric "Lorentzian" interpretation of nonzero spinors in ${\mathbb R}^2$ as elements of the surface $S$: These are nonzero future null-vectors $v\in N$ "equipped with a $\pm$ sign" to indicate which sheet of the 2-fold cover they lift to. The latter description is unsatisfactory as a mathematical description but should be OK as far as your intuition goes. The rigorous definition is in terms of covering spaces as I noted above. In order to get the zero spinor as well, one can simply say that we are using a 2-fold "branched cover" of $C$, which is ramified over the origin.
b. Now, to the Lorentzian 4-space $R^{3,1}$. The difficulties are somewhat similar. Again, note nontransitivity of the action of $G$ on the set of nonzero vectors in $R^{3,1}$. Inspired by (a), one can try to use the future null-cone $C\subset {\mathbb R}^{3,1}$. However, this results in the dimensional mismatch (the cone $C$ is 3-dimensional while the spinor space is real 4-dimensional). Also, while $G$ does act transitively on $C$ (minus the origin), the stabilizers are a bit larger than the ones in ${\mathbb C}^2$: The stabilizers of nonzero vectors in ${\mathbb C}^2$ are complex 1-parameter unipotent (real 2-dimensional), conjugate to the group of strictly upper triangular complex 2-by-2 matrices $$ \left[\begin{array}{cc} 1&*\\ 0&1\end{array}\right] $$
while the stabilizers of nonzero null-vectors are 3-dimensional (in addition to 2-dimensional unipotent subgroups of $G$ which do lift to unipotent subgroups of $SL(2, {\mathbb C})$ we also have 1-parameter elliptic subgroups, isomorphic to $S^1$, which fix the null-vectors). Another problem is that $N= C- \{{\mathbf 0}\}$ is simply-connected, so taking its covering spaces would not be useful.
Nevertheless, what we can do is to take a future nonzero null-vector $v\in C$ and equip it with a half-plane $P$ which is tangent to the cone $C$ along the line spanned by $v$. Now, the stabilizer of each "flag" $(v,P)$ in $G$ is real 2-dimensional (a unipotent subgroup which lifts to a complex 1-dimensional unipotent subgroup of $SL(2, {\mathbb C})$ as required). I let $F$ denote the space of such "flags" $(v,P)$. It is not hard to check that this space is connected with the fundamental group isomorphic to ${\mathbb Z}_2$, which means that $F$ does have a connected 2-fold cover $S\to F$. One can also describe $F$ as the total space of the tangent bundle of the 2-sphere with the image of the zero section deleted. Now, we can play the same game as in (a): The elements of $S$ can be thought of as flags $(v,P)$ equipped with a "spin", a $\pm$ sign which changes after we "spin" the half-plane $P$ around $v$ 360 degrees. One then verifies that the action of $G=SO(3,1)_0$ lifts to $S$ to an action of the spinor group $SL(2, {\mathbb C})$ on ${\mathbb C}^2$ minus the origin (again, by comparing the structure of point-stabilizers).
This is again a reasonably geometric description of (nonzero) spinors as elements of the 4-dimensional manifold $S$. The drawback of this description is that we do not see directly a complex structure on $S$ and the fact that the spinor group acts holomorphically; the linear structure is also very nontransparent. If this does not bother you, stop reading here; if it does bother you, proceed to the item (c).
c. I will now give a description of spinors which is derived from the Lorentzian geometry of ${\mathbb R}^{3,1}$, where the complex linear structure is transparent, but mathematics required to understand it gets harder.
Let's go back to the 3-dimensional null-cone $C\subset {\mathbb R}^{3,1}$. The space $\Sigma$ of future null-rays in $C$ is naturally diffeomorphic to the 2-sphere $S^2$; under this identification of $\Sigma$ with $S^2$, the action of $G$ becomes the conformal action of $PSL(2, {\mathbb C})$ on the Riemann sphere. The conformal structure on $\Sigma$ can be described as follows. For each nonzero null-vector $v\in N$, the restriction of the Lorentzian inner product to $T_vC$ (the tangent space of the cone $C$ at $v$, which is 3-dimensional) is degenerate positive semidefinite: The vector $v$ pairs to zero with each vector $w\in T_vC$. However, the projection $N\to \Sigma$ (sending the positive ray through $v\in C$ to a single point) divides out the line ${\mathbb R}v$ and, hence, $\langle, \rangle$ projects to a positive-definite inner product on the tangent plane to $\Sigma$ at the equivalence class $[v]$ of $v$. The action of $G$ on $\Sigma$ preserves the conformal class of the resulting Riemannian metric on $\Sigma$; the orientation is also preserved, hence, the action is conformal. (One can also describe the almost complex structure on $T\Sigma$ more directly but I will skip this.) The tangent bundle $T\Sigma$ is a complex one-dimensional vector bundle on $\Sigma$; algebraic geometers would call it the anticanonical bundle. It has degree $+2$, hence, there exists a "half-anticanonical line bundle" $L$ on $\Sigma$ so that the tensor square of $L$ is isomorphic to $T\Sigma$. The line bundle $L$ has degree $+1$; algebraic geometers call it the "hyperplane bundle" of ${\mathbb C}P^1\cong S^2$.
Remark. The total space $E$ of $L$ can be described as the 2-fold branched cover over $T\Sigma$ which is ramified over the image of the zero section of $T\Sigma$. Now, you may start to see a connection to Part (b). Fiberwise, this branched cover is nothing but the 2-fold branched cover over the complex plane ramified at the origin. Now, you see a connection to Part (a).
The group $PSL(2, {\mathbb C})$ acts on $T\Sigma$ via (holomorphic) bundle automorphisms and this action lifts to an action of $SL(2, {\mathbb C})$ on $L$. It is easy to check that the space of holomorphic sections $\Gamma(L)$ of $L$ is complex-2-dimensional. The action of $SL(2, {\mathbb C})$ on the vector space $\Gamma(L)$ is manifestly complex linear, nontrivial. Hence, we obtain the spinor representation of $SL(2, {\mathbb C})$ on $\Gamma(L)\cong {\mathbb C}^2$.
A complex-analyst would describe sections of $L$ as "holomorphic half-vector fields" (or degree $-1/2$ holomorphic differentials) $$ \omega=f(z)dz^{-1/2}, $$ where $f(z)$ is a holomorphic function. The strange degree $-1/2$ refers to the transformation law for such differentials: If $z=g(w)$ is a conformal mapping then $g_*\omega$ is given by $$ f(w) (g')^{-1/2} dw^{-1/2}. $$ If $w=\frac{az+b}{cz+d}$ then $$ f(w) (g')^{-1/2} dw^{-1/2}= f(w) (\frac{1}{(cw+d)^2})^{-1/2} dw^{-1/2} = f(w) (cw+d) dw^{-1/2}. $$ Note that this expression is meaningless unless we specify the 2-by-2 matrix $$ \left[\begin{array}{cc} a&b\\ c&d\end{array} \right]\in SL(2, {\mathbb C}). $$ This is your spinor representation.
These holomorphic differentials of order $-1/2$ are (like it or not) your spinors. The linear structure is very transparent: In order to add these fellows, you just add the functional parts:
$$ f_1(z)dz^{-1/2} + f_2(z)dz^{-1/2}= (f_1(z)+ f_2(z))dz^{-1/2}. $$ Linearity of the action of the spinor group is also clear (this action is just a change of variables in the differentials). The fact that the space of spinors is complex 2-dimensional might not be immediate but becomes clear once you think a bit about it. (The space is isomorphic to the space of holomorphic functions on the complex plane which have at worst simple pole at infinity, i.e. have the form $\alpha z + \beta$, $\alpha, \beta\in {\mathbb C}$.)
I am not sure of analytical importance of "holomorphic half-vector fields", but holomorphic half-order differentials $$ \omega=f(z)dz^{1/2}, $$ do appear naturally in complex analysis when considering 2nd order linear holomorphic ODEs, that's how I first learned about them; see for instance (a bit dated but very clearly written):
N. S. Hawley and M. Schiffer, Half-order differentials on Riemann surfaces, Acta Math. Volume 115 (1966), 199--236.
Edit. See also Chapter 1 (The geometry of world-vectors and spin-vectors) in
Roger Penrose, Wolfgang Rindler, Spinors and space-time, Volume I, 1984.
Moishe Kohan
Spinors can be represented in geometric algebra as even multivectors, that is, even-graded elements of a geometric algebra.
A geometric algebra is like an exterior algebra (indeed, in a geometric algebra you can and usually do define an exterior product as well) but with a different (still associative) product between its elements: the geometric product. On vectors specifically, the geometric product is defined so that $v^2=\langle v,v\rangle$, where $\langle\cdot ,\cdot\rangle$ is the scalar product between vectors. So a geometric algebra is a Clifford algebra from another perspective.
Even-graded elements of a geometric algebra are those that are formed from products of an even number of vectors and the sums of such products. These even multivectors can act on vectors by two-sided multiplication to rotate and scale the vector (in the same way that quaternions are used to rotate vectors):
$$v'=\overline\psi v \psi$$
($\overline\psi$ is the reverse of the multivector $\psi$: just reversing the order of all factors in the geometric products that make up $\psi$.)
This is why spinors are sometimes called the "square roots" of vectors: because by two-sided multiplication, a spinor can transform a given reference vector into another vector.
So a spinor can essentially be thought of as a transformation that rotates and scales a vector. (In spacetime contexts this is a Lorentz rotation, so a spatial rotation plus a boost.) This is the most intuitive way of thinking about spinors that I have been able to find.
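For concreteness, here is a minimal Python sketch of the two-sided multiplication $v'=\overline\psi v \psi$, using quaternions as a stand-in for the even subalgebra of the geometric algebra of 3D space (they are isomorphic); the particular rotor and vector are just an illustrative choice, and the code is hand-rolled rather than taken from any geometric-algebra library.

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions written as (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def reverse(q):
    # Reversion of an even multivector: the bivector part changes sign
    # (for quaternions this is just conjugation).
    w, x, y, z = q
    return (w, -x, -y, -z)

# An even multivector (rotor) encoding a rotation by `angle` in the xy-plane.
angle = math.pi / 2
psi = (math.cos(angle / 2), 0.0, 0.0, math.sin(angle / 2))

v = (0.0, 1.0, 0.0, 0.0)                    # the vector (1, 0, 0) as a pure quaternion
v_rot = qmul(qmul(reverse(psi), v), psi)    # v' = reverse(psi) * v * psi
print(v_rot)                                # ~(0, 0, -1, 0): (1,0,0) -> (0,-1,0) in this convention
```

With the opposite ordering, $\psi v \overline\psi$, the same rotor rotates in the opposite sense; which ordering one uses is purely a convention.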
In differential geometry, this means that spinors are even multivectors in the geometric algebra formed from the tangent space of the manifold, and the bilinear covariants of the spinors are the results of the spinors acting by two-sided multiplication on vectors of a reference orthonormal frame on the manifold.
A little more info can be found here and here.
Matt Dickau
$\begingroup$ I am not sure this answers the question which was to define spinors on $R^{3,1}$ in terms of differential-geometric objects on the space-time. Clifford algebra of course can be used to define spinors (this is how it is usually done), but their geometric meaning becomes unclear with this formalism. $\endgroup$ – Moishe Kohan Jan 17 '18 at 20:46
$\begingroup$ See the last couple of paragraphs of my questions. I gave a geometric interpretation (spinor represents a rotation or lorentz transformation of vectors) and how it relates to differential geometry (defined from geometric algebra of the tangent space of the manifold). $\endgroup$ – Matt Dickau Jan 18 '18 at 20:59
| CommonCrawl |
The Mathematical Ninja and The Slinky Coincidence
"No, no, wait!" said the student. "Look!"
"8.000 000 072 9," said the Mathematical Ninja. "Isn't that $\frac{987,654,321}{123,456,789}$? What do you think this is, some sort of a game?"
"It has all the hallmarks of…"
"I'll hallmark you in a minute!" said the Mathematical Ninja.
Seconds later, the student's arms were above his head and a set of skittles had appeared from nowhere.
"Er, sensei? Ten out of ten for the Bond-esque bon mot, and everything, but I think you might lose some marks for it being a complete non-sequitur as far as the cartoon punishment goes. How about you focus on explaining the number trick?"
The Mathematical Ninja raised his eyebrows. "Sh," he said. "Playing." He swung the student at the skittles. "Leave your legs down! For heaven's sake."
"I can see it's a bit short of a billion divided by a bit short of 125 million," said the student, "so somewhere about 8 looks about right. But why so close?"
"It's to do with the binomial expansion," said the Mathematical Ninja, swinging again. "Feet still!"
"But those skittles are heavy and I've got dancing later. What about the binomial expansion?"
"Well, $(1-x)^{-2} = 1 + x + 2x^2 + 3x^2 + …$. With $x = 0.1$, that gives $\frac{100}{81} = 1.\dot{1}234567\dot{9}$."
"I can see that," said the student, stretching to knock down a skittle on his own terms.
"Meanwhile, $\frac{100}{9} = 11.\dot{1}$."
"As any fule no."
"And $\frac{100}{9} - \frac{100}{81} = \frac{800}{81} = 9.\dot{8}7654320\dot{9}$."
"I'd need to write that down, sensei, but I'll take your word for it."
"The upshot is that $\frac{9.87654320}{1.23456790}$ is eight, and the number you gave, $\frac{987654321}{123456789}$ is a tiny fraction larger on top and a tiny fraction smaller on the bottom."
"So it's not just a coincidence, it's based on something solid?"
"STRIKE!" screamed the Mathematical Ninja. "There are no coincidences."
* Edited 2016-04-04 to correct a typo. Thanks, @christianp!
| CommonCrawl |
The energy spectrum of cosmic rays beyond the turn-down around \(10^{17}\) eV as measured with the surface detector of the Pierre Auger Observatory
P. Abreu et al. (Pierre Auger Collaboration)
The European Physical Journal C, volume 81, Article number: 966 (2021)
We present a measurement of the cosmic-ray spectrum above 100 PeV using the part of the surface detector of the Pierre Auger Observatory that has a spacing of 750 m. An inflection of the spectrum is observed, confirming the presence of the so-called second-knee feature. The spectrum is then combined with that of the 1500 m array to produce a single measurement of the flux, linking this spectral feature with the three additional breaks at the highest energies. The combined spectrum, with an energy scale set calorimetrically via fluorescence telescopes and using a single detector type, results in the most statistically and systematically precise measurement of spectral breaks yet obtained. These measurements are critical for furthering our understanding of the highest energy cosmic rays.
The steepening of the energy spectrum of cosmic rays (CRs) at around \(10^{15.5}\) eV, first reported in [1], is referred to as the "knee" feature. A widespread view for the origin of this bending is that it corresponds to the energy beyond which the efficiency of the accelerators of the bulk of Galactic CRs is steadily exhausted. The contribution of light elements to the all-particle spectrum, largely dominant at GeV energies, remains important up to the knee energy, after which the heavier elements gradually take over up to a few \(10^{17}\) eV [2,3,4,5,6]. This fits with the long-standing model that the outer shock boundaries of expanding supernova remnants are the Galactic CR accelerators; see e.g. [7] for a review. Hydrogen is indeed the most abundant element in the interstellar medium that the shock waves sweep up, and particles are accelerated by diffusing in the moving magnetic heterogeneities in shocks according to their rigidity. That the CR composition gets heavier for two decades in energy above the knee energy could thus reflect that heavier elements, although sub-dominant below the knee, are accelerated to higher energies, until the iron component falls off steeply at a point of turn-down around \({\simeq }\,10^{16.9}\) eV. Such a bending has been observed in several experiments at a similar energy, referred to as the "second knee" or "iron knee" [8,9,10,11]. The recent observations of gamma rays of a few \(10^{14}~\)eV from decaying neutral pions, both from a direction coincident with a giant molecular cloud [12] and from the Galactic plane [13], provide evidence for CRs indeed accelerated to energies of several \(10^{15}~\)eV, and above, in the Galaxy. A dozen sources emitting gamma rays up to \(10^{15}~\)eV have even been reported [14], and the production could be of hadronic origin in at least one of them [15]. However, the nature of the sources and the mechanisms by which they accelerate CRs remain, in general, unsettled. In particular, that particles can be effectively accelerated to the rigidity of the second knee in supernova remnants is still under debate; see e.g. [16].
Above \(10^{17}\) eV, the spectrum steepens in the interval leading up to the "ankle" energy, \({\sim }5{\times }10^{18}\) eV, at which point it hardens once again. The inflection in this energy range is not as sharp as suggested by the energy limits reached in the Galactic sources to accelerate iron nuclei beyond the iron-knee energy [17]. Questions arise, then, on how to make up the all-particle spectrum up to the ankle energy. The hardening around \(10^{17.3}\) eV in the light-particle spectrum reported in [18] is suggestive of a steadily increasing extragalactic contribution to the all-particle spectrum. It has even been argued that an additional component is necessary to account for the extended gradual fall-off of the spectrum and for the mass composition in the iron-knee-to-ankle region, be it of Galactic [17] or extragalactic origin [19].
While the concept that the Galactic-to-extragalactic transition occurs somewhere between \(10^{17}\) eV and a few \(10^{18}\) eV is widely accepted, a full understanding of how it occurs is still lacking. The approximately power-law shape of the spectrum in this energy range may mask a complex superposition of different components and phenomena, the disentanglement of which rests on the measurements of the all-particle energy spectrum, and of the abundances of the different elements as a function of energy, both of which are challenging from an experimental point of view. On the one hand, the energy range of interest is accessible only through indirect measurements of CRs via the extensive air showers that they produce in the atmosphere. Therefore, the determination of the properties of the CRs, especially their mass and energy, is prone to systematic effects. On the other hand, different experiments, different instruments and different techniques of analysis are used to cover this energy range, so that a unique view of the CRs is only possible by combining measurements, the matching of which inevitably implies additional systematic effects.
The aim of this paper is to present a measurement of the CR spectrum from \(10^{17}\) eV up to the highest observed energies, based on the data collected with the surface-detector array of the Pierre Auger Observatory. The Observatory is located in the Mendoza Province of Argentina at an altitude of 1400 m above sea level at a latitude of \(35.2^\circ \) S, so that the mean atmospheric overburden is 875 g/cm\(^2\). Extensive air showers induced by CR-interactions in the atmosphere are observed via a hybrid detection using a fluorescence detector (FD) and a surface detector (SD).
The layout of the SD and FD of the Pierre Auger Observatory is shown above. The respective fields of view of the five FD sites are shown in blue and orange. The 1600 SD locations which make up the SD-1500 are shown in black while the stations which belong only to the SD-750 and the border of this sub-array are highlighted in cyan
The FD consists of five telescopes at four sites which look out over the surface array, see Fig. 1. Four of the telescopes (shown in blue) cover an elevation range from \(0^\circ \) to \(30^\circ \) while the fifth, the High Elevation Auger Telescopes (HEAT), covers an elevation range from \(30^\circ \) to \(58^\circ \) (shown in red). Each telescope is used to collect the light emitted from air molecules excited by charged particles. After first selecting the UV band with appropriate filters (310–390 nm), the light is reflected off a spherical mirror onto a camera of 22\(\times \)20 hexagonal, 45.6 mm, photo-multiplier tubes (PMTs). In this way, the longitudinal development of the particle cascades can be studied and the energy contained within the electromagnetic sub-showers can be measured in a calorimetric way. Thus the FD can be used to set an energy scale for the Observatory that is calorimetric and so is independent of simulations of shower development.
The SD, the data of which are the focus of this paper, consists of two nested hexagonal arrays of water Cherenkov detectors (WCDs). The layout, shown in Fig. 1, includes the SD-1500, with detectors spread apart by 1500 m and totaling approximately 3000 km\(^2\) of effective area. The detectors of the SD-750 are instead spread out by 750 m, yielding an effective area of 24 km\(^2\). SD-750 and SD-1500 include identical WCDs, cylindrical tanks of pure water with a 10 m\(^2\) base and a height of 1.2 m. Three 9" PMTs are mounted to the top of each tank and view the water volume. When relativistic secondaries enter the water, Cherenkov radiation is emitted, reflected via a Tyvek lining into the PMTs, and digitized using 40 MHz 10-bit Flash Analog to Digital Converters (FADCs). Each WCD along with its digitizing electronics, communication hardware, GPS, etc., is referred to as a station.
Using data collected over 15 years with the SD-1500, we recently reported the measurement of the CR energy spectrum in the range covering the region of the ankle up to the highest energies [20, 21]. In this paper we extend these measurements down to \(10^{17}\) eV using data from the SD-750: not only is the detection technique consistent but the same methods are used to treat the data and build the spectrum. The paper is organized as follows: we first explain how, with the SD-750 array, the surface array is sensitive to primaries down to \(10^{17}\) eV in Sect. 2; in Sect. 3, we describe how we reconstruct the showers and determine their energy; we illustrate in Sect. 4 the approach used to derive the energy spectrum from SD-750; finally, after combining the spectra measured by SD-750 and SD-1500, we present the spectrum measured using the Auger Observatory from \(10^{17}\) eV upwards in Sect. 5 and discuss it in the context of other measurements in Sect. 6.
Identification of showers with the SD-750: from the trigger to the data set
The implementation of an additional set of station-level trigger algorithms in mid-2013 is particularly relevant for the operation of the SD-750. Their inclusion in this work extends the energy range over which the SD-750 triggers with \(>98\%\) probability from \(10^{17.2}\) eV down to \(10^{17}\) eV.
To identify showers, a hierarchical set of triggers is used which range in scope from the individual station-level up to the selection of events and the rejection of random coincidences. The trigger chain, extensively described in [22], has been used since the start of the data taking of the SD-1500, and was successively adopted for the SD-750. In short, station-level triggers are first formed at each WCD. They are then combined with those from other detectors and examined for spatial and temporal correlations, leading to an array trigger, which initiates data acquisition. After that, a similar hierarchical selection of physics events out of the combinatorial background is ultimately made.
We describe in this section the design of the triggers (Sect. 2.1). We then illustrate their effect on the data, at the level of the amplitude of detected signals (Sect. 2.2) and on the timing of detected signals in connection with the event selection (Sect. 2.3). Finally we describe the energy at which acceptance is 100% (Sect. 2.4). A more detailed description of the trigger algorithms can be found in Appendix A.
The electromagnetic triggers
Using the station-level triggers, the digitized waveforms are constantly monitored in each detector for patterns consistent with what would be expected as a result of air-shower secondary particles (primarily electrons and photons of 10 MeV on average, and GeV muons) entering the water volume. The typical morphologies include large signals, not necessarily spread in time, such as those close to the shower core, or sequences of small signals spread in time, such as those near the core in low-energy showers, or far from the core in high-energy ones. Atmospheric muons, hitting the WCDs at a rate of 3 kHz, are the primary background. The output from the PMTs has only a small dependence on the muon energy. The electromagnetic and hadronic background, while also present, yields a total signal that is usually less than that of a muon. Consequently, the atmospheric muons are the primary impediment to developing a station-level trigger for small signal sizes without contaminating the sampling of an air shower with spurious muons.
Originally, two triggers were implemented into the station firmware, called threshold (TH), more adept to detect muons, and time-over-threshold (ToT), more suited to identify the electromagnetic component. Both of these have settings which require the signal to be higher in amplitude or longer than what is observed for a muon traveling vertically through the water volume. As such, they have the inherent limitation of being insensitive to signals which are smaller than (or equal to) that of a single muon, thus prohibiting the measurement of pure electromagnetic signals, which are generally smaller.
To bolster the sensitivity of the array to such small signals, two additional triggers were designed. The first, time-over-threshold-deconvolved (ToTd), first removes the typical exponential decay created by Cherenkov light inside the water volume, after which the ToT algorithm is applied. The second, multiplicity-of-positive-steps (MoPS), is designed to select small, non-smooth signals, a result of many electromagnetic particles entering the water over a longer period of time than a typical muon pulse. This is done by counting the number of instances in the waveform where consecutive bins are increasing in amplitude. Both of the trigger algorithms are described in detail in Appendix A.
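To illustrate the idea behind a MoPS-like criterion (counting rises between consecutive FADC bins inside a time window), a simplified Python sketch is given below. The thresholds, window length, and example traces are illustrative placeholders only, not the actual trigger settings of the Observatory, which are specified in Appendix A.

```python
# Simplified MoPS-like counter: count "positive steps" (consecutive FADC bins that
# increase by more than a small threshold) inside a sliding window. All numerical
# values below are placeholders for illustration only.

def count_positive_steps(trace, min_step=2, max_step=30):
    """Count rises between consecutive bins whose size lies in [min_step, max_step) ADC counts."""
    return sum(1 for a, b in zip(trace, trace[1:]) if min_step <= b - a < max_step)

def mops_like_trigger(trace, window=120, min_steps=5):
    """Fire if any window of the trace contains at least `min_steps` positive steps."""
    for start in range(0, max(1, len(trace) - window)):
        if count_positive_steps(trace[start:start + window]) >= min_steps:
            return True
    return False

# Example: a slowly rising, jagged trace (electromagnetic-like) vs. a single muon-like spike.
jagged = [0, 1, 3, 4, 7, 8, 11, 12, 15, 14, 17, 18, 21, 20, 23]
spike = [0, 0, 120, 80, 50, 30, 18, 10, 6, 3, 1, 0, 0, 0, 0]
print(mops_like_trigger(jagged), mops_like_trigger(spike))  # True False
```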
The implementation of the ToTd and MoPS (the rate of which is around 0.3 Hz, compared to 0.6 Hz of ToT and 20 Hz of TH) did not require any modification in the logic of the array trigger, which calls for a coincidence of three or more SD stations that pass any combination of the triggers described above with compact spacing, spatially and temporally [22]. We note that in spite of the low rate of the ToTd and MoPS relative to TH and ToT, the array rate more than doubled after their implementation. This, as will be shown in the following, is due to the extension of measurements to the more abundant, smaller signals.
Effect of ToTd and MoPS on signals amplitudes
The ToTd and MoPS triggers extend the range over which signals can be observed at individual stations into the region which is dominated by the background muons that are created in relatively low energy air showers. By remaining insensitive to muon-like signals, these two triggers increase the sensitivity of the SD to the low-energy parts of the showers that have previously been below the trigger threshold.
The effects of the additional triggers can be seen in the distribution of the observed signal sizes. An example of such a distribution, based on one month of air-shower data, is shown in Fig. 2.
Distribution of the signal sizes at individual stations which pass the TH and ToT triggers (solid black) and signals which pass only the ToTd and/or MoPS triggers (dashed red)
The signal sizes are shown in the calibration unit of one vertical equivalent muon (VEM), the total deposited charge of a muon traversing vertically through the water volume [22]. For the stations passing only the ToT and TH triggers (shown in solid black), the distribution of deposited signals is the convolution of three effects: the uniformity of the array, the decreasing density of particles as a function of perpendicular distance to the shower axis (henceforth referred to as the axial distance), and the shape of the CR spectrum, which together result in the negative slope above \({\simeq }\,7\) VEM. Furthermore, there is a decreasing efficiency of the ToT and TH at small signal sizes. The range of additional signals that are now detectable via the ToTd and MoPS triggers is shown in dashed red. As expected, the ToTd and MoPS triggers increase the probability for the SD to detect small-amplitude signals, namely between 0.3 and 5 VEM. That the high-signal tail of this distribution ends near 10 VEM is consistent with a previous study [24] that estimated that the ToT+TH triggers were fully efficient above this value.
The increase in station multiplicity when including the ToTd and MoPS triggers versus the original multiplicity with only ToT and TH. The black circles show the median increase in that multiplicity bin
The additional sensitivity to small air-shower signals also increases the multiplicity of triggered stations per event. This increase is characterized in Fig. 3, which shows the number of additional triggered stations per event as a function of the number of stations that pass the TH and ToT triggers, after removing spuriously triggered stations. The median increase of multiplicity in each horizontal bin is shown by the black circles and indicates a typical increase of one station per event.
Effects of ToTd and MoPS on signal timing
The increased responsiveness of the ToTd and MoPS algorithms to smaller signals, specifically those due to the electromagnetic component, also has an effect on the observed timing of the signals. In general, the electromagnetic signals are expected to be delayed with respect to the earliest, muon-rich part of the shower, the delay increasing with axial distance. Further, in large events, stations that pass these triggers tend to be on the edge of the showers, where the front is thicker, thus increasing the variance of the arrival times. Such effects can be seen through the distribution of the start times for stations that pass the ToTd and MoPS triggers.
Distributions of start times with respect to a plane front for stations that pass the ToT and TH algorithms, in blue and in green, respectively. The signals due to ToTd and MoPS are shown in red. Positive residuals correspond to a delay with respect to the plane wave expectation
The residuals of the pulse start times with respect to a plane front fit of the three stations with the largest signals in the event are shown in Fig. 4 for different trigger types. The entries shown in blue correspond to stations that pass the ToT algorithm, the ones in green to stations that pass the TH trigger (but not the ToT trigger), and those in red to stations that pass the ToTd and/or MoPS triggers only. For each of the trigger types, there is a clear peak near zero, which reflects the approximately planar shower front close to the core. Stations that pass the TH condition, but not the ToT one, tend to capture isolated muons, including background muons arriving randomly in time. This explains the vertical offset, flat and constant, in the green curve. In turn, the lack of such a baseline shift in the blue and red distributions gives evidence that the ToT, ToTd, and MoPS algorithms reject background muons effectively. This is particularly successful for the ToTd and MoPS, which accept very small signals, of approximately 1 VEM in size. One can see that these distributions have different shapes and that, in particular, the start-time distributions of signals that pass the ToTd and MoPS triggers have much longer tails than those of the ToT trigger, including a second population beginning around 1.5 \(\upmu \)s, possibly due to heavily delayed electromagnetic particles.
The extended time portion of showers accessed by the ToTd and MoPS triggers has implications for the procedure used to select physical events from the triggered ones [22]. In this process, non-accidental events, as well as non-accidental stations, are disentangled on the basis of their timing. First, we identify the combination of three stations that form a triangle with at least two legs of 750 m and that has the largest summed signal among all such possible configurations. These stations make up the event seed and the arrival times of the signals are fit to a plane front. Additional stations are then kept if their temporal residual, \(\Delta t\), is within a fixed window, \(t_\text {low}< \Delta t < t_\text {high}\). Motivated by the differing time distributions, updated \(t_\text {low}\) and \(t_\text {high}\) values were calculated based on which trigger algorithm was satisfied. Using the distributions of timing residuals, shown in Fig. 4, the baseline was first subtracted. Then the limits of the window, \(t_\text {low}\) and \(t_\text {high}\), were chosen such that the middle 99% of the distribution was kept. The trigger-wise limits are summarized in Table 1.
Table 1 Temporal window limits \(t_\text {low}\) and \(t_\text {high}\) used to remove stations from an event, for each station-level trigger algorithm
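As an illustration of this selection step, a minimal Python sketch is given below; the window values are placeholders chosen only for the example, since the actual limits are those listed in Table 1.

```python
# Keep a candidate station only if its plane-front time residual lies inside the
# window associated with the trigger it satisfied. The (t_low, t_high) values below
# are hypothetical placeholders, not the fitted limits of Table 1.

WINDOWS_NS = {
    "TH":   (-1000.0, 1000.0),
    "ToT":  (-800.0, 1500.0),
    "ToTd": (-800.0, 2500.0),
    "MoPS": (-800.0, 2500.0),
}

def keep_station(trigger_type, time_residual_ns):
    t_low, t_high = WINDOWS_NS[trigger_type]
    return t_low < time_residual_ns < t_high

stations = [("ToT", 120.0), ("MoPS", 1900.0), ("TH", -1400.0)]
selected = [s for s in stations if keep_station(*s)]
print(selected)   # [('ToT', 120.0), ('MoPS', 1900.0)]
```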
Effect of the ToTd and MoPS on the energy above which acceptance is fully-efficient
Most relevant to the measurement of the spectrum is the determination of the energy threshold above which the SD-750 becomes fully efficient. To derive this, events observed by the FD were used to characterize this quantity as a function of energy and zenith angle. The FD reconstruction requires only a single station be triggered to yield a robust determination of the shower trajectory. Using the FD events with energies above \(10^{16.8}\) eV, the lateral trigger probability (LTP), the chance that a shower will produce a given SD trigger as a function of axial radius, was calculated for all trigger types. The LTP was then parameterized as a function of the observed air-shower zenith angle and energy. It is important to note that because the LTP is derived using observed air showers as a function of energy, this calculation reflects the efficiency as a function of energy based on the true underlying mass distribution of primary particles. Further details of this method can be found in [25].
The SD-750 trigger efficiency was then determined via a study in which isotropic arrival directions and random core positions were simulated for fixed energies between \(10^{16.5}\) and \(10^{18}\) eV. Each station on the array was randomly triggered using the probability given by the LTP. The set of stations that triggered was then checked against the compactness criteria of the array-level triggers, as described in [22]. The resulting detection probability for showers with zenith angles \(<40^\circ \) is shown as a solid blue line in Fig. 5 as a function of energy. The detection efficiency becomes almost unity (\(>98\%\)) at around \(10^{17}\) eV. For comparison, we show in the same figure, in dashed red, the detection efficiency curve for the original set of station triggers, TH and ToT, for which full efficiency is attained at a higher energy, i.e., around \(10^{17.2}\) eV.
The detection efficiency of the SD-750 for air showers with \(\theta <40^\circ \) is shown for the original (dashed red) and expanded (solid blue) station-level trigger sets with bands indicating the systematic uncertainties. The trigger efficiency was determined using data above \(10^{16.8}\) eV and is extrapolated below this energy (shown in gray)
A description for the detection efficiency, \(\epsilon (E)\), below \(10^{17}\) eV, will be important for unfolding the detector effects close to the threshold energy (see Sect. 4). This quantity was fit using the results of the LTP simulations with \(\theta < 40^\circ \) and is well-parameterized by
$$ \epsilon (E) = \frac{1}{2}\left[ 1 + {\text {erf}}\left( \frac{\lg (E / \text {eV}) - \mu }{\sigma } \right) \right] , $$
where \({\text {erf}}(x)\) is the error function, \(\mu = 16.4 \pm 0.1\) and \(\sigma = 0.261 \pm 0.007\).
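For illustration, the parameterization can be evaluated directly, as in the following minimal Python sketch, which uses the best-fit values quoted above.

```python
# Detection-efficiency parameterization with the quoted best-fit values.
import math

MU, SIGMA = 16.4, 0.261

def sd750_efficiency(energy_eV):
    lgE = math.log10(energy_eV)
    return 0.5 * (1.0 + math.erf((lgE - MU) / SIGMA))

for lgE in (16.6, 16.8, 17.0):
    print(lgE, round(sd750_efficiency(10**lgE), 3))
```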
For events used in this analysis, there is an additional requirement regarding the containment of the core within the array: only events in which the detector with the highest signal is surrounded by a hexagon of six stations that are fully operational are used. This criterion not only ensures adequate sampling of the shower but also allows the aperture of the SD-750 to be evaluated in a purely geometrical manner [22]. With these requirements, the SD-750 data set used below consists of about 560,000 events with \(\theta < 40^\circ \) and \(E>10^{17}\) eV recorded between 1 January 2014 and 31 August 2018. The minimum energy cut is motivated by the lowest energy at which we can cross-calibrate the energy scale of the SD with that of the FD with adequate statistics (see Sect. 3.3). The corresponding exposure, \({\mathcal {E}}\), after removal of time periods when the array was unstable (\({<}2\)% of the total), is \({\mathcal {E}}=(105\pm 4)\) km\(^2\) sr yr.
Energy measurements with the SD-750
In this section, the method for the estimation of the air-shower energy is detailed together with the resulting energy resolution of the SD-750 array. The measurement of the actual shower size is first described in Sect. 3.1 after which the corrections for attenuation effects are presented in Sect. 3.2. The energy calibration of the shower size after correction for attenuation is presented in Sect. 3.3. The energy resolution function is finally derived in Sect. 3.4.
Estimation of the shower size
The general strategy for the reconstruction of air showers using the SD-750 array is similar to that used for the SD-1500 array which is detailed extensively in [26]. In this process, the arrival direction is obtained using the start times of signals, assuming either a plane or a curved shower front, as the degrees of freedom allow. The lateral distribution of the signal is then fitted to an empirically-chosen function to infer the size of the air shower, which is used as a surrogate for the primary energy. The reconstruction algorithm thus produces an estimate of the arrival direction and the size of the air shower via a log-likelihood minimization.
The lateral fall-off of the signal, S(r), with increasing distance, r, to the shower axis in the shower plane is modeled with a lateral distribution function (LDF). The stochastic variations in the location and character of the leading interaction in the atmosphere result in shower-to-shower fluctuations of the longitudinal development that propagate onto fluctuations of the lateral profile, sampled at a fixed depth. Showers induced by identical primaries at the same energy and at the same incoming angle can thus be sampled at the ground level at a different stage of development. The LDF is consequently a quantity that varies on an event-by-event basis. However, the limited degrees of freedom, as well as the sparse sampling of the air-shower particles reaching the ground, prevent the reconstruction of all the parameters of the LDF for individual events. Instead, an average LDF, \(\langle S(r)\rangle \), is used in the reconstruction to infer the expected signal, \(S(r_\text {opt})\), that would be detected by a station located at a reference distance from the shower axis, \(r_\text {opt}\) [27, 28]. This reference distance is chosen so as to minimize the fluctuations of the shower size, down to \(\simeq \, 7\%\) in our case. The observed distribution of signals is then adjusted to \(\langle S(r)\rangle \) by scaling the normalization, \(S(r_\text {opt})\), in the fitting procedure.
The reference distance, or optimal distance, \(r_\text {opt}\), has been determined on an event-by-event basis by fitting the measured signals to different hypotheses for the fall-off of the LDF with distance to the core as in [28]. Via a fit of many power-law-like functions, the dispersion of signal expectations has been observed to be minimal at \(r_\text {opt}\simeq \, 450\) m, which is primarily constrained by the geometry of the array. The expected signal at 450 m from the core, S(450), has thus been chosen to define the shower-size estimate.
The functional shape chosen for the average LDF is a parabola in a log-log representation of \(\langle S(r)\rangle \) as a function of the distance to the shower core,
$$\begin{aligned} \ln \langle S(r) \rangle = \ln S(450)+\beta \,\rho + \gamma \,\rho ^2, \end{aligned}$$
where \(\rho =\ln (r/(450\,\text {m}))\), and \(\beta \) and \(\gamma \) are two structure parameters. The overall steepness of the fall-off of the signal from the core is governed by \(\beta \), while the concave deviation from a power-law function is given by \(\gamma \). The values of \(\beta \) and \(\gamma \) have been obtained in a data-driven manner, by using a set of air-shower events with more than three stations, none of which have a saturated signal. The zenith angle and the shower size are used to trace the age dependence of the structure parameters based on the following parameterization in terms of the reduced variables \(t=\sec \theta - 1.27\) and \(u=\ln S(450) - 5\):
$$ \beta = (\beta _0 + \beta _1 t + \beta _2 t^2)(1 + \beta _3 u), $$
$$ \gamma = \gamma _0 + \gamma _1 u. $$
For any specific set of values \({\mathbf {p}}=\{\beta _i, \gamma _i\}\), the reconstruction is then applied to calculate the following \(\chi ^2\)-like quantity, globally to all events:
$$\begin{aligned} Q^2({\mathbf {p}})=\frac{1}{N_\text {tot}}\sum _{k=1}^{N_\text {events}}\sum _{j=1}^{N_k}\frac{(S_{k,j}-\langle S(r_j,{\mathbf {p}})\rangle )^2}{\sigma _{k,j}^2}. \end{aligned}$$
The sum over \(N_k\) stations is restricted to those with observed signals larger than 5 VEM to minimize the impact of upward fluctuations of the station signals far from the core and hence to avoid biases from trigger effects, and to stations more than 150 m away from the core. The uncertainty \(\sigma _{k,j}\) is proportional to \(\sqrt{S_{k,j}}\) [26]. \(N_\text {tot}\) is the total number of stations in all such events. The best-fit {\(\beta _i\), \(\gamma _i\)} values are collected in Table 2.
Table 2 Best-fit {\(\beta _i\), \(\gamma _i\)} values defining the structure parameters of the LDF
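The following minimal Python sketch illustrates how the average LDF of Eq. (2) is evaluated for a given shower, with the structure parameters built from the parameterizations of Eqs. (3) and (4). The numerical coefficients are placeholders for illustration; the fitted values are those of Table 2.

```python
# Evaluate <S(r)> = exp(ln S(450) + beta*rho + gamma*rho^2) with rho = ln(r / 450 m).
# BETA and GAMMA below are hypothetical placeholders, not the Table 2 values.
import math

BETA = (-2.5, -1.0, 0.5, 0.05)    # hypothetical beta_0 .. beta_3
GAMMA = (-0.4, 0.1)               # hypothetical gamma_0, gamma_1

def average_ldf(r_m, s450_vem, zenith_deg):
    t = 1.0 / math.cos(math.radians(zenith_deg)) - 1.27   # sec(theta) - 1.27
    u = math.log(s450_vem) - 5.0                          # ln S(450) - 5
    beta = (BETA[0] + BETA[1] * t + BETA[2] * t**2) * (1.0 + BETA[3] * u)
    gamma = GAMMA[0] + GAMMA[1] * u
    rho = math.log(r_m / 450.0)
    return math.exp(math.log(s450_vem) + beta * rho + gamma * rho**2)

# Expected signal at a few axial distances for a shower with S(450) = 20 VEM at 25 deg.
for r in (300, 450, 750):
    print(r, round(average_ldf(r, 20.0, 25.0), 2))   # equals 20 VEM at r = 450 m by construction
```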
Correction of attenuation effects
There are two significant observational effects that impact the precision of the estimation of the shower size. Both of these effects are primarily a result of the variable slant depth that a shower must traverse before being detected with the SD. Since the mean atmospheric overburden is 875 g/cm\(^2\) at the location of the Observatory, nearly all observed showers in the energy range considered in this analysis have already reached their maximum size and have started to attenuate [29]. Thus, an increase in the slant depth of a shower results in a more attenuated cascade at the ground, directly impacting the observed shower size.
The first observational effect is related to the changing weather at the Observatory. Fluctuations in the air pressure equate to changes in the local overburden and thus showers observed during periods of relatively high pressure result in an underestimated shower size. Similarly, the variations in the air density directly change the Molière radius which directly affects the spread of the shower particles. The increased lateral spread of the secondaries, or equivalently, the decrease in the density of particles on the ground, also leads to a systematically underestimated shower size. Both the air-density and pressure have typical daily and yearly cycles that imprint similar cycles upon the estimation of the shower size.
The relationship between these two atmospheric parameters and the estimated shower sizes has been studied using events detected with the SD [30]. From this relationship, a model was constructed to scale the observed value of S(450) to what would have been measured had the shower been instead observed at a time with the daily and yearly average atmosphere. When applying this correction to individual air showers, the measurements from the weather stations located at the FD sites are used. The values of S(450) are scaled up or down according to these measurements, resulting in a shift of at most a few percent. The shower size is eventually the proxy of the air-shower energy, which is calibrated with events detected with the FD (see Sect. 3.3). Since the FD operates only at night when, in particular, the air density is relatively low, the scaling of S(450) to a daily and yearly average atmosphere corrects for a \({\simeq }\,0.5\%\) shift in the assigned energies.
The second observational effect is geometric, wherein showers arriving at larger zenith angles have to go through more atmosphere before reaching the SD. To correct for this effect, the Constant Intensity Cut (CIC) method [31] is used. The CIC method relies on the assumption that cosmic rays arrive isotropically, which is consistent with observations in the energy range considered [32]. The intensity is thus expected to be independent of arrival direction after correcting for the attenuation. Deviations from a constant behavior can thus be interpreted as being due to attenuation alone. Based on this property, the CIC method allows us to determine the attenuation curve as function of the zenith angle and therefore to infer a zenith-independent shower-size estimator.
We empirically chose a functional form which describes the relative amount of attenuation of the air shower,
$$\begin{aligned} f_\text {CIC}(\theta ) = 1 + a x + bx^2. \end{aligned}$$
The scaling of this function is normalized to the attenuation of a shower arriving at \(35^\circ \) by choosing \(x = \sin ^2 35^\circ - \sin ^2 \theta \). For a given air shower, the observed shower size can be scaled using Eq. (6) to get the equivalent signal of a shower arriving with the reference zenith angle, \(S_{35}\), via the relationship \(S(450) = S_{35}\,f_\text {CIC}(\theta )\).
Isotropy implies that \({\mathrm {d}N/\mathrm {d}\sin ^2\theta }\) is constant. Thus, the shape of \(f_\text {CIC}(\theta )\) is determined by finding the parameters a and b for which the cumulative distribution of events with \(S(450) > S_\text {cut}\, f_\text {CIC}(\theta )\) is linear in \(\sin ^2 \theta \), using an Anderson-Darling test [33]. The parameter \(S_\text {cut}\) defines the size of a shower with \(\theta = 35^\circ \) at which the CIC tuning is performed, the choice of which is described below.
Since the attenuation that a shower undergoes before being detected is related to the depth of shower maximum and the particle content, the shape of \(f_\text {CIC}(\theta )\) is dependent on both the energy and the average mass of the primary particles at that energy. Further, this implies that a single choice of \(S_\text {cut}\) could introduce a mass and/or energy bias. Thus, Eq. (6) was extended to allow the polynomial coefficients, \(k \in \{a,\,b\}\), to be functions of S(450) via \(k( S(450)) = k_0 + k_1 y + k_2 y^2\) where \(y = \lg (S(450) / \text {VEM})\). The function \(f_\text {CIC}(\theta , S(450))\) was tuned using an unbinned likelihood.
The fit was performed so as to guarantee equal intensity of the integral spectra using eight threshold values of \(S_\text {cut}\) between 10 and 70 VEM, evenly spaced in log-scale. These values were chosen to avoid triggering biases on the low end and the dwindling statistics on the high end. The best fit parameters are given in Table 3. The resulting 2D distribution of the number of events, in equal bins of \(\sin ^2\theta \) and \(\lg S_{35}\), is shown in Fig. 6, bottom panel. It is apparent that the number of events above any \(\sin ^2{\theta }\) value is equalized for any constant line for \(\lg S_{35}\gtrsim 0.7\). The magnitude of the CIC correction is \((-27\pm 4)\)% for vertical showers (depending on S(450)) and \(+15\)% for a zenith angle of \(40^\circ \).
Top: histogram of reconstructed shower sizes and zenith angles. The solid black line represents the shape of \(f_\text {CIC}\) at 10 VEM. Bottom: same distribution but as a function of corrected shower size, \(S_{35}\), and zenith angle. The dashed black line indicates the mapping of the solid black line in the top figure after inverting the effects of the CIC correction
Table 3 The energy dependence of the CIC parameters (Eq. (6)) is given below
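A minimal Python sketch of the correction is given below; the coefficients are placeholders standing in for the fitted values of Table 3.

```python
# Convert an observed S(450) into the zenith-independent size S_35 via
# S(450) = S_35 * f_CIC(theta, S(450)). A_COEFF and B_COEFF are hypothetical
# placeholders, not the Table 3 values.
import math

A_COEFF = (0.9, 0.1, -0.05)   # hypothetical a_0, a_1, a_2
B_COEFF = (-1.5, 0.2, 0.0)    # hypothetical b_0, b_1, b_2

def f_cic(zenith_deg, s450_vem):
    x = math.sin(math.radians(35.0))**2 - math.sin(math.radians(zenith_deg))**2
    y = math.log10(s450_vem)
    a = A_COEFF[0] + A_COEFF[1] * y + A_COEFF[2] * y**2
    b = B_COEFF[0] + B_COEFF[1] * y + B_COEFF[2] * y**2
    return 1.0 + a * x + b * x**2

def s35(zenith_deg, s450_vem):
    return s450_vem / f_cic(zenith_deg, s450_vem)

print(round(s35(35.0, 20.0), 2))   # 20.0 by construction: f_CIC(35 deg) = 1
print(round(s35(10.0, 20.0), 2))   # corrected towards the 35-degree reference (placeholder coefficients)
```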
Energy calibration of the shower size
Correlation between the SD shower-size estimator, \(S_{35}\), and the reconstructed FD energy, \(E_\text {FD}\), for the selected hybrid events
The conversion of the shower size, corrected for attenuation, into energy is based on a special set of showers, called golden hybrid events, which can be reconstructed independently by the FD and by the SD. The FD allows for a calorimetric estimate of the primary energy except for the contribution carried away by particles that reach the ground. The amount of this so-called invisible energy, \({\simeq }\,20\%\) at \(10^{17}\) eV and \({\simeq }\,15\%\) at \(10^{18}\) eV, has been evaluated using simulations [34] tuned to measurements at \(10^{18.3}\) eV so as to correct for the discrepancy in the muon content of simulated and observed showers [35]. The empirical relationship between the FD energy measurements, \(E_\text {FD}\), and the corrected SD shower size, \(S_{35}\), allows for the propagation of the FD energy scale to the SD events.
FD events were selected based on quality and fiducial criteria aimed at guaranteeing a precise estimation of \(E_\text {FD}\) as well as at minimizing any acceptance biases towards light or heavy mass primaries introduced by the field of view of the FD telescopes. The cuts used for the energy calibration are similar to those described in [29, 36]. They include the selection of data when the detectors are properly operational and the atmospheric properties, such as cloud coverage and the vertical aerosol depth, are suitable for a good determination of the air-shower profile. A further quality selection includes requirements on the uncertainties of the energy assignment (less than 12%) and of the reconstruction of the depth at the maximum of the air-shower development (less than 40 g cm\(^{-2}\)). A possible bias due to a selection dependency on the primary mass is avoided by using an energy-dependent fiducial volume determined from data as in [29].
Restricting the data set to events with \(E_\text {FD} \ge 10^{17}\) eV (to ensure that the SD is operating in the regime of full efficiency), there are 1980 golden-hybrid events available to establish the relationship between \(S_{35}\) and \(E_\text {FD}\). Forty-five events in the energy range between \(10^{16.5}\) eV and \(10^{17}\) eV are included in the likelihood as described in [37]. As \(S_{35}\) depends on the mass composition of the primary particles, the relation between \(S_{35}\) and \(E_\text {FD}\), shown in Fig. 7, inherently accounts for the trend of the composition change with energy, as the underlying mass distribution is directly sampled by the FD. Measurements of \(\langle X_\text {max}\rangle \) suggest that this composition trend follows a logarithmic evolution up to an energy of \(10^{18.3}\) eV, beyond which the number of events available for this analysis is too small to affect the results in any way [36]. We therefore choose a power-law type relationship,
$$\begin{aligned} E_{\mathrm{SD}}=A S_{35}^B, \end{aligned}$$
which is expected from Monte-Carlo simulations in the case of a single logarithmic dependence of \(X_\text {max}\) with energy. The energy of an event with \(S_{35} = 1\) VEM arriving at the reference angle, A, and the logarithmic slope, B, are fitted to the data by means of a maximum likelihood method which models the distribution of golden-hybrid events in the plane of energies and shower sizes. The use of these events allows us to infer A and B while accounting for the clustering of events in the range \(10^{17.4}\) to \(10^{17.7}\) eV observed in Fig. 7 due to the fall-off of the energy spectrum combined with the restrictive golden-hybrid acceptance for low-energy, dim showers. A comprehensive derivation of the likelihood function can be found in [37].
The probability density function entering the likelihood procedure, detailed in [37], is built by folding the cosmic-ray intensity, as observed through the effective aperture of the FD, with the resolution functions of the FD and of the SD. Note that to avoid the need to model accurately the cosmic-ray intensity observed through the effective aperture of the telescopes (and thus to reduce reliance on mass assumptions), the observed distribution of events passing the cuts described above is used. The FD energy resolution, \(\sigma _\text {FD}(E)/E_\text {FD}\), is typically between 6% and 8% [38]. It results from the statistical uncertainty arising from the fit to the longitudinal profile, the uncertainties in the detector response, the uncertainties in the models of the state of the atmosphere, and the uncertainties in the expected fluctuations from the invisible energy. The SD shower-size resolution, \(\sigma _\text {SD}(S_{35})/S_{35}\), is, on the other hand, comprised of two terms, the detector sampling fluctuations, \(\sigma _\text {det}(S_{35})\), and the shower-to-shower fluctuations, \(\sigma _\text {sh}(S_{35})\). The former is obtained from the sum of the squares of the uncertainties from the reconstructed shower size and zenith angle, and from the attenuation-correction terms that make up the \(S_{35}\) assignment. The latter stem from the stochastic nature of both the depth of first interaction of the primary and the subsequent development of the particle cascade. This contribution thus depends on the CR mass composition and on the hadronic interactions in air showers. For this reason, the derivation of A and B follows a two-step procedure. A first iteration of the fit is carried out by using an educated guess for \(\sigma _\text {sh}(S_{35})\), as expected from Monte-Carlo simulations for a mass-composition scenario compatible with data [29]. The total resolution \(\sigma _\text {SD}(S_{35})/S_{35}\) is then extracted from data as explained next in Sect. 3.4 and used in a second iteration.
Table 4 The systematic uncertainties on the FD energy scale are given below. Lines with multiple entries represent the values at the low and high end of the considered energy range (\(\simeq \) 10\(^{17}\) and \(\simeq \) 10\(^{19}\) eV, respectively)
The resulting relationship is shown as the red line in Fig. 7, with best-fit parameters \(A=(13.2\pm 0.3)\) PeV and \(B=1.002\pm 0.006\). The goodness of the fit is supported by \(\chi ^2/\text {NDOF} = 2120/1978\) (\(p = 0.013\)). We use these values of A and B to calibrate the shower sizes in terms of energies by defining the SD estimator of energies, \(E_\text {SD}\), according to Eq. (7). The SD energy scale is set by the calibration procedure and thus it inherits the uncertainties of the calibration parameters A and B and the FD energy-scale uncertainties, listed in Table 4. The systematic uncertainty of the energy scale, after addition in quadrature, is about 14% and is almost energy independent. The energy independence is a consequence of the 10% uncertainty of the FD calibration, which is the dominant contribution.
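For illustration, the calibration of Eq. (7) with these best-fit values can be applied as in the following minimal Python sketch.

```python
# Convert S_35 (in VEM) into an SD energy using Eq. (7) with the quoted
# best-fit values A = 13.2 PeV and B = 1.002.
A_PEV, B_EXP = 13.2, 1.002

def sd_energy_eV(s35_vem):
    return A_PEV * 1e15 * s35_vem**B_EXP

for s in (1.0, 10.0, 100.0):
    print(s, f"{sd_energy_eV(s):.3e}")   # ~1.3e16, ~1.3e17, ~1.3e18 eV
```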
Resolution function of the SD-750 array
The SD resolution as a function of energy is needed in several steps of the analysis. In the regime of full efficiency, it can be considered as a Gaussian function centered on the true energy, the width of which reflects the statistical uncertainty associated with the detection and reconstruction processes on one hand, and the stochastic development of the particle cascade on the other hand. The combination of the two can be estimated for the golden hybrid events, thus allowing us to account for the contribution of the shower-to-shower fluctuations in a data-driven way.
Each event observed by the SD and FD results in two independent measurements of the air-shower energy, \(E_\text {SD}\) and \(E_\text {FD}\), respectively. Unlike for the SD, the FD directly provides a view of the shower development so a total energy resolution, \(\sigma _\text {FD}(E)\), can be estimated for each of the golden hybrid events. Using the known \(\sigma _\text {FD}(E)\), the resolution of SD can be determined by studying the distribution of the ratio of the two energy measurements.
An example of the ratio of the energy assignments for the SD and FD is shown with black crosses for the energy bin indicated in the plot. The best fit ratio distribution for this bin is shown by the black line
For two independent, Gaussian-distributed random variables, X and Y, their ratio, \(z=X/Y\), produces a ratio distribution that depends on the means (\(\mu _X\), \(\mu _Y\)) and standard deviations (\(\sigma _X\), \(\sigma _Y\)) of the two variables, \({\text {PDF}}(z; \mu _X, \mu _Y, \sigma _X, \sigma _Y)\). Likewise, the ratio of the two energy measurements, \(z = E_\text {SD} / E_\text {FD}\), follows such a distribution to first order. Because the FD sets the energy scale of the Observatory, there is inherently no bias in the energy measurements with respect to its own scale and thus, on average, \(\mu _\text {FD}(E)=1\). Using the golden hybrid data set, the ratio distribution was fit in an unbinned likelihood analysis, \({\text {PDF}}(z; \mu _\text {SD}(E), 1, \sigma _\text {SD}(E), \sigma _\text {FD}(E))\).
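The following toy Monte-Carlo sketch (in Python) illustrates the principle behind the ratio method; it is not the unbinned likelihood fit used in the analysis, and all numbers are illustrative placeholders.

```python
# Toy illustration: for independent Gaussian smearings, the spread of E_SD/E_FD is,
# to first order, the quadratic sum of the two relative resolutions, so knowing
# sigma_FD allows sigma_SD to be extracted from the width of the ratio distribution.
import numpy as np

rng = np.random.default_rng(1)
n, sigma_sd_true, sigma_fd = 100_000, 0.12, 0.07

e_true = 10 ** rng.uniform(17.0, 17.5, n)            # arbitrary toy energies
e_sd = e_true * rng.normal(1.0, sigma_sd_true, n)    # SD smearing (unknown in practice)
e_fd = e_true * rng.normal(1.0, sigma_fd, n)         # FD smearing (known)

ratio_spread = np.std(e_sd / e_fd)
sigma_sd_est = np.sqrt(max(ratio_spread**2 - sigma_fd**2, 0.0))
print(round(ratio_spread, 3), round(sigma_sd_est, 3))   # ~0.14 and ~0.12
```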
The total SD energy resolution, as calculated using the golden hybrid events (red circles) is shown in bins with equal statistics. The parameterization of the resolution is shown by the solid blue line and the corresponding 68% confidence interval in dashed lines. The energy resolution, calculated using mass-weighted MC air showers (gray squares), is shown as a verification of the method
An example of the measured energy-ratio distributions is shown in Fig. 8 with the fitted curve overlaid on the data points. Carrying out the fit in different energy bins, the SD resolution, shown by the red points in Fig. 9, is represented by,
$$\begin{aligned} \frac{\sigma _\text {SD}(E)}{E} = (0.06 \pm 0.02) + (0.05 \pm 0.01) \sqrt{\frac{1\,\text {EeV}}{E}}. \end{aligned}$$
The corresponding curve is overlaid in blue, bracketed by the 68% confidence region.
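The parameterization of Eq. (8) can be evaluated directly, as in the following minimal Python sketch.

```python
# SD-750 relative resolution, sigma_SD(E)/E = 0.06 + 0.05 * sqrt(1 EeV / E).
import math

def sd_relative_resolution(energy_eV):
    return 0.06 + 0.05 * math.sqrt(1e18 / energy_eV)

for lgE in (17.0, 17.5, 18.0):
    print(lgE, round(sd_relative_resolution(10**lgE), 3))
# ~0.218 at 10^17 eV, ~0.149 at 10^17.5 eV, ~0.110 at 10^18 eV
```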
To measure the spectrum above the \(10^{17}\) eV threshold, the knowledge of the resolution function, which induces bin-to-bin migration of events, and of the detection efficiency are also required for energies below this threshold. As a verification, particularly in the energy region where Eq. (8) is extrapolated, a Monte-Carlo analysis was performed. A set of 325,000 CORSIKA [39] air showers were used, consisting of proton, helium, oxygen, and iron primaries with energies above \(10^{16}\) eV. EPOS-LHC [40] was used as the hadronic interaction model. The air showers were run through the full SD simulation and reconstruction algorithms. The events were weighted based on the primary mass according to the Global Spline Fit (GSF) model [41] to account for the changing mass-evolution near the second knee and ankle. The reconstructed values of S(450) were corrected by applying the energy-dependent CIC method to obtain values for \(S_{35}\) and these values were then calibrated against the Monte-Carlo energies. During the calibration, a further weighting was performed based on the energy distribution of golden hybrid events to account for the hybrid detection efficiency. Following the calibration procedure, each MC event was assigned an energy in the FD energy scale (i.e. \(E_\text {MC} \rightarrow S_{35} \rightarrow E_\text {FD}\)).
The SD energy resolution was calculated using the mass-weighted simulations and is shown in gray squares in Fig. 9. Indeed, the simulated and measured SD resolutions show a similar trend and agree to within the uncertainties, supporting the golden hybrid method.
In the energy region at-and-below \(10^{17}\) eV, systematic effects also enter into play on the energy estimate. An energy-dependent offset, a bias, is thus expected in the resolution function for several reasons:
The application of the trigger below threshold, combined with the finite energy resolution, causes an overestimate of the shower size, on average, which is then propagated to the energy assignment.
The linear relationship assumed in Eq. (7) cannot account for a possible sudden change in the evolution of the mass-composition with energy. Such a change would require a broken power law for the energy calibration relationship.
In the energy range where the SD is not fully efficient, the SD efficiency is larger for light primary nuclei, thus preventing a fair sampling of \(S_{35}\) values over the underlying mass distribution.
Because an insufficient number of FD events pass the fiducial cuts below \(10^{17}\) eV, the bias was characterized using the same air-shower simulations as were used for the resolution cross-check. The remaining relative energy bias is shown in Fig. 10.
Fig. 10 The bias of the energy assignment for the SD-750 was studied using Monte Carlo simulations, weighted according to the GSF model [41]. The ratio of the assigned and expected values as a function of energy is shown (red circles) along with the parameterization (blue line) given in Eq. (9)
The ratio between the reconstructed and expected values is shown as the red points as a function of \(E_\text {FD}\). A larger bias of \(\simeq \) 20% is seen at low energies, where upward fluctuations are necessarily selected by the triggering conditions. In the range considered for the energy spectrum, \(E > 10^{17}\) eV, the bias is 3% or less. To complete the description of the SD resolution function, the relative bias was fit to an empirical function,
$$\begin{aligned} b_\text {SD}(E)= b_0 (\lg \tfrac{E}{\mathrm {eV}} - b_1)\exp \left( -b_2(\lg \tfrac{E}{\mathrm {eV}} - b_3)^2\right) + b_4. \nonumber \\ \end{aligned}$$
The corresponding best fit parameters (blue line in Fig. 10) are given in Table 5.
Table 5 Best-fit parameters for the relative energy bias of the SD-750, \( b_\text {SD}(E)\), given in Eq. (9)
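The empirical bias of Eq. (9) can be fit with a standard nonlinear least-squares routine, as in the sketch below; the simulated bias points and starting values are placeholders, not the contents of Fig. 10 or Table 5.

```python
# Sketch: least-squares fit of the relative energy bias b_SD(E) of Eq. (9).
# The (lg E, bias) points below are toys standing in for the simulations of Fig. 10.
import numpy as np
from scipy.optimize import curve_fit

def b_sd(lg_e, b0, b1, b2, b3, b4):
    """Empirical bias function of Eq. (9), as a function of lg(E/eV)."""
    return b0 * (lg_e - b1) * np.exp(-b2 * (lg_e - b3)**2) + b4

rng = np.random.default_rng(0)
lg_e = np.linspace(16.6, 18.2, 17)
toy_true = [1.0, 16.5, 8.0, 16.6, 0.01]          # placeholder parameters, not Table 5
bias = b_sd(lg_e, *toy_true) + rng.normal(0.0, 0.005, lg_e.size)

popt, pcov = curve_fit(b_sd, lg_e, bias, p0=[0.5, 16.4, 5.0, 16.7, 0.0], maxfev=20000)
print(dict(zip(["b0", "b1", "b2", "b3", "b4"], np.round(popt, 3))))
```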
Measurement of the energy spectrum
To build the energy spectrum from the reconstructed energy distribution, we need to correct the raw spectrum, obtained as \(J^\text {raw}_i=N_i/({\mathcal {E}}\Delta E_i)\), for the bin-to-bin migrations of events due to the finite accuracy with which the energies are assigned. The energy bins are chosen to be regularly sized in decimal logarithm, \(\Delta \lg E_i=0.1\), commensurate with the energy resolution. The level of migration is driven by the resolution function, the detection efficiency in the energy range just below the threshold energy, and the steepness of the spectrum. To correct for these effects, we use the bin-by-bin correction approach presented in [21]. It consists of folding the detector effects into a proposed spectrum function, \(J(E,{\mathbf {k}})\), with free parameters, \({\mathbf {k}}\), such that the result describes the set of observed numbers of events, \(N_i\). The set of expectations, \(\nu _i\), is obtained as \(\nu _i({\mathbf {k}})=\sum _j R_{ij}\mu _j({\mathbf {k}})\), where the \(R_{ij}\) coefficients (reported in a matrix format in the Supplementary material) describe the bin-to-bin migrations, and where \(\mu _j\) are the expectations in the case of an ideal detector, obtained by integrating the proposed spectrum between \(E_j\) and \(E_j+\Delta E_j\) and scaling by \({\mathcal {E}}\). The optimal set of free parameters, \(\hat{{\mathbf {k}}}\), is inferred by minimizing a log-likelihood function built from the Poisson probabilities to observe \(N_i\) events when \(\nu _i(\hat{{\mathbf {k}}})\) are expected.
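A minimal sketch of this forward-folding fit is given below: a trial spectrum is folded through a migration matrix, the folded expectations are compared to the observed counts with a Poisson likelihood, and the free parameters are fit. The migration matrix, exposure, and the single power-law trial function are placeholders; the published analysis uses the matrix reported in the Supplementary material and the proposed function of Eq. (11).

```python
# Sketch of the forward-folding fit: nu_i = sum_j R_ij mu_j(k), Poisson likelihood.
# The migration matrix R, the exposure and the trial spectrum are simplified placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

lg_edges = np.linspace(17.0, 19.5, 26)           # bin edges in lg(E/eV)
E_lo, E_hi = 10**lg_edges[:-1], 10**lg_edges[1:]
exposure = 105.0                                 # km^2 sr yr (placeholder)
n_bins = E_lo.size

# Placeholder migration matrix: mostly diagonal, small spill to neighbouring bins.
R = 0.8 * np.eye(n_bins) + 0.1 * np.eye(n_bins, k=1) + 0.1 * np.eye(n_bins, k=-1)

def mu(k):
    """Ideal-detector expectations: bin integrals of J(E) = 10^lgJ0 * (E/1e17 eV)^-gamma."""
    lg_J0, gamma = k
    J0 = 10**lg_J0                               # km^-2 yr^-1 sr^-1 eV^-1 at 1e17 eV
    integral = J0 * 1e17 / (1.0 - gamma) * ((E_hi / 1e17)**(1 - gamma)
                                            - (E_lo / 1e17)**(1 - gamma))
    return exposure * integral

rng = np.random.default_rng(0)
N_obs = rng.poisson(R @ mu([-14.3, 2.7]))        # toy observed counts

def neg_log_like(k):
    nu = R @ mu(k)
    return -np.sum(N_obs * np.log(nu) - nu - gammaln(N_obs + 1))

fit = minimize(neg_log_like, x0=[-14.0, 3.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 5000})
c = mu(fit.x) / (R @ mu(fit.x))                  # bin-by-bin correction factors c_i
print(f"gamma = {fit.x[1]:.2f}, corrections: {c.min():.2f} to {c.max():.2f}")
```

The ratios \(\mu _i/\nu _i\) evaluated at the best fit are the bin-by-bin correction factors applied to the raw spectrum below.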
Fig. 11 Residuals of the SD-750 raw spectrum with respect to the power-law function \(J^\text {ref}(E)\). Data points from the SD-1500 spectrum measurement are superimposed
To choose the proposed function, we plot in Fig. 11 the residuals (red dots) of the SD-750 raw spectrum with respect to a reference function, \(J^\text {ref}(E)\), that fits the SD-1500 spectrum below the ankle energy, down to the SD-1500 threshold energy of \(10^{18.4}\) eV. A re-binning was applied at and above \(10^{19}\) eV to avoid overly large statistical fluctuations.
The reference function in this energy range, as reported in [21], is
$$\begin{aligned} J^\text {ref}(E)=J_0^\text {ref}\left( \frac{E}{10^{18.5}\,\text {eV}}\right) ^{-\gamma _1^\text {ref}}, \end{aligned}$$
with \(J_0^\text {ref}=1.315{\times }10^{-18}\) km\(^{-2}\) yr\(^{-1}\) sr\(^{-1}\) eV\(^{-1}\) and \(\gamma _1^\text {ref}=3.29\). The residuals of the SD-1500 unfolded spectrum with respect to \(J^\text {ref}(E)\) are also shown as open squares in Fig. 11. The sharp transition at \({\simeq }\,10^{18.7}\) eV to a different power law corresponds to the spectral feature known as the ankle. Such a transition is also observed, with much lower sensitivity, using data from the SD-750 array. Below \({\simeq }\,10^{18.7}\) eV and down to \({\simeq }\,10^{17.4}\) eV, one can see a shift of the raw SD-750 spectrum compared to \(J^\text {ref}(E)\). This is expected primarily from a combination of the resolution effects to be unfolded and a possible mismatch, within the energy-dependent budget of uncorrelated uncertainties, between the SD-1500 and SD-750 \(E_\text {SD}\) energy scales. Below \({\simeq }\,10^{17.4}\) eV, a slight roll-off begins. Overall, these residuals suggest a power-law function to describe the data leading up to the ankle energy, where the spectrum hardens, with a gradually changing spectral index over the lowest energies studied. Consequently, the proposed function is chosen as three power laws with transitions occurring over adjustable energy ranges,
$$\begin{aligned} J(E,{\mathbf {k}}) {=} J_0 \left( \frac{E}{10^{17}\,\text {eV}}\right) ^{-\gamma _0} \prod _{i=0}^1\left[ 1{+}\left( \frac{E}{E_{ij}}\right) ^{\frac{1}{\omega _{ij}}}\right] ^{(\gamma _i{-}\gamma _j)\omega _{ij}}, \nonumber \\ \end{aligned}$$
with \(j=i+1\). The normalization factor \(J_0\), the three spectral indices \(\gamma _i\), and the transition parameter \(\omega _{01}\) constitute the free parameters in \({\mathbf {k}}\). The transition parameter \(\omega _{12}\), constrained with much more sensitivity using data from the SD-1500, is fixed at \(\omega _{12}=0.05\) [21].
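For reference, the smoothly broken power law of Eq. (11) can be coded compactly as below; the parameter values are placeholders in the spirit of, but not equal to, the best-fit values of Table 6.

```python
# Sketch: the proposed spectrum of Eq. (11), a power law with two smooth breaks.
import numpy as np

def proposed_spectrum(E, J0, gammas, E_breaks, omegas):
    """J(E) = J0 (E/1e17 eV)^-gamma0 * prod_i [1 + (E/E_ij)^(1/w_ij)]^((gamma_i - gamma_j) w_ij)."""
    J = J0 * (E / 1e17)**(-gammas[0])
    for i, (Eb, w) in enumerate(zip(E_breaks, omegas)):
        J *= (1.0 + (E / Eb)**(1.0 / w))**((gammas[i] - gammas[i + 1]) * w)
    return J

# Placeholder parameters (not the published values of Table 6).
E = np.logspace(17.0, 19.5, 100)
J = proposed_spectrum(E, J0=5e-15, gammas=[1.5, 3.3, 2.5],
                      E_breaks=[1.2e17, 4.0e18], omegas=[0.5, 0.05])
```

In the actual fit, this function takes the place of the simple power law used in the folding sketch above.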
Fig. 12 Unfolded energy spectrum derived using data from the SD-750 array
Table 6 Best-fit values of the spectral parameters (Eq. (11)). The parameter \(\omega _{12}\) is fixed to the value constrained in [21]. Note that the parameters \(\gamma _0\) and \(E_{01}\) correspond to features below the measured energy region and are treated only as aspects of the unfolding fixed to their best-fit values to infer the uncertainties of the measured spectral parameters
Combining all the ingredients at our disposal, we obtain the final estimate of the spectrum, \(J_i\), unfolded for the effects of the response of the detector and shown in Fig. 12. It is obtained as
$$\begin{aligned} J_i=\frac{\mu _i}{\nu _i}J^\text {raw}_i = c_i\,J^\text {raw}_i, \end{aligned}$$
where the \(\mu _i\) and \(\nu _i\) coefficients are estimated using the best-fit parameters \(\hat{{\mathbf {k}}}\). Their ratios define the bin-by-bin corrections used to produce the unfolded spectrum. The correction applied extends from 0.84 at \(10^{17}\) eV to 0.99 around the ankle (see Appendix B). The best-fit spectral parameters are reported in Table 6, while the statistical correlations between the parameters are detailed in Appendix B (Table 9). The goodness-of-fit of the forward-folding procedure is attested by the deviance of 15.9, which, if considered to follow the C statistic [42], can be compared (Footnote 4) to the expectation of \(16.2\pm 5.6\), yielding a p-value of 0.50.
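The deviance quoted here is the Poisson deviance (the C statistic of [42]); a minimal way of computing it from observed counts and folded expectations is sketched below, with placeholder numbers.

```python
# Sketch: Poisson deviance (C statistic) between observed counts and folded expectations.
import numpy as np

def poisson_deviance(N, nu):
    """C = 2 * sum( nu - N + N ln(N/nu) ), with N = 0 terms reducing to 2 nu."""
    N, nu = np.asarray(N, float), np.asarray(nu, float)
    ratio = np.where(N > 0, N / nu, 1.0)
    return 2.0 * np.sum(N * np.log(ratio) - (N - nu))

# Placeholder counts and expectations (not the measured values).
N_obs  = np.array([120, 95, 80, 61, 50, 33, 25, 14, 9, 4])
nu_fit = np.array([118.0, 97.0, 78.5, 63.0, 48.0, 34.5, 24.0, 15.0, 8.5, 4.5])
print(f"deviance = {poisson_deviance(N_obs, nu_fit):.1f}")
```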
Fig. 13 Unfolded energy spectrum of the SD-750, scaled by \(E^{2.6}\)
The fitting function is shown in Fig. 13, superimposed on the spectrum scaled by \(E^{2.6}\), allowing one to better appreciate its characteristics, from the turn-over at around \(10^{17}\) eV up to a few \(10^{19}\) eV, thus including the ankle. The turn-over is observed with a very large exposure, unprecedented at such energies. However, as indicated by the magnitude of the transition parameter, \(\omega _{01}\simeq 0.49\), the change of the spectral index occurs over an extended \(\Delta \lg E\simeq 0.5\) energy range, so that the spectral index \(\gamma _0\) cannot be observed but only indirectly inferred. Also, the value of the energy break, \(E_{01}\simeq 1.24{\times }10^{17}\) eV, turns out to be close to the threshold energy. These two facts thus imply that, while a spectral break is found beyond any doubt, it cannot wholly be characterised, as only the higher-energy portion is actually observed. Consequently, the fit values describing \(E_{01}\) and \(\gamma _0\) are not to be considered as true measurements but as necessary parameters in the fit function, the statistical resolutions of which are on the order of 35%. Once we infer their best-fit values, we use these values as "external parameters" to estimate the uncertainties of the other spectral parameters. This procedure gives rise to an increase of the systematic uncertainties, but is necessary as \(E_{01}\) and \(\gamma _0\) are not directly observed. Beyond the smooth turn-over around \(E_{01}\), the intensity can be described by a power-law shape as \(J(E)\propto E^{-\gamma _1}\), up to \(E_{12} = \left( 3.9\pm 0.8\right) {\times }10^{18}\) eV, the ankle energy, the value of which is within 1.4\(\sigma \) of that found with the much larger exposure of the SD-1500 measurement of the spectrum, namely \((5.0\pm 0.1){\times }10^{18}\) eV. Also, the value of \(\gamma _1 = 3.34\pm 0.02\) is within 1.8\(\sigma \) of that obtained with the SD-1500 between \(10^{18.4}\) and \(10^{18.7}\) eV (\(3.29 \pm 0.02\)).
The characteristics of the measured spectrum can also be studied by looking at the evolution of the spectral index as a function of energy, \(\gamma (E)\). Rather than relying on the empirically chosen unfolding function, this slope parameter can be fit directly to the unfolded values of \(J(E)\). Power-law fits were performed within a sliding window of width \(\Delta \lg E = 0.3\). The resulting spectral indices are shown in Fig. 14.
Fig. 14 Evolution of the spectral index with energy. The measured spectral points were fit to power laws within a sliding window of \(\Delta \lg E = 0.3\). The values of \(\gamma _1\) and \(\gamma _2\) are represented by the dashed and dash-dotted lines, for reference
The values of the spectral-index fits present a consistent picture of the evolution. Beginning at the lowest energies shown, \(\gamma (E)\) first increases quite rapidly and then asymptotically approaches a value of 3.3 leading up to the ankle. Unsurprisingly, this is the value found for \(\gamma _1\) in the unfolding of both the SD-750 and SD-1500 spectra [21].
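A sliding-window estimate of \(\gamma (E)\) can be obtained by fitting a straight line to \(\lg J\) versus \(\lg E\) within each window, as sketched below; the spectrum points are a toy smoothly broken power law, not the measured values.

```python
# Sketch: local spectral index gamma(E) from power-law fits in sliding lg(E) windows.
import numpy as np

def sliding_gamma(lg_E, J, width=0.3):
    """Fit lg J = a - gamma * lg E within windows of the given lg(E) width."""
    centers, gammas = [], []
    for c in lg_E:
        sel = np.abs(lg_E - c) <= width / 2.0
        if sel.sum() >= 3:                          # require a few points per window
            slope, _ = np.polyfit(lg_E[sel], np.log10(J[sel]), 1)
            centers.append(c)
            gammas.append(-slope)
    return np.array(centers), np.array(gammas)

# Placeholder spectrum points: a toy smoothly broken power law, not the data.
lg_E = np.linspace(17.05, 18.95, 20)
E = 10**lg_E
J = 5e-15 * (E / 1e17)**-1.5 * (1.0 + (E / 1.2e17)**2)**-0.9
centers, gammas = sliding_gamma(lg_E, J)
print(np.round(gammas, 2))                          # rises towards ~3.3 with energy
```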
The systematic uncertainties that affect the measurement of the spectrum are dominated by the overall uncertainty of the energy scale, detailed in [43], which is itself dominated by the absolute calibration of the fluorescence telescopes (10%). The total uncertainty in the energy scale is \(\sigma _E / E = 14\)%. When propagated to the flux, this uncertainty is amplified by the steepness of the spectrum, roughly as \(\sigma _{J}/J = (\gamma _1 - 1)\sigma _E / E\), resulting in a total flux uncertainty of \(\sigma _{J}/J \simeq 35\)%. However, for a more exact calculation of the uncertainty, the energies of the individual events were shifted by \(\pm 14\)% and the unfolding procedure was repeated. The result is shown as dashed red lines in Fig. 15.
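The quoted amplification of the energy-scale uncertainty by the spectral slope can be checked with a few lines, shifting toy event energies by +14% and comparing the binned flux before and after; the event sample and binning below are assumptions, not the data.

```python
# Sketch: propagate a +14% energy-scale shift into the binned flux.
import numpy as np

rng = np.random.default_rng(2)
gamma = 3.34
u = rng.random(500_000)
E = 10**16.8 * (1.0 - u)**(-1.0 / (gamma - 1.0))    # toy energies, pdf ~ E^-gamma

edges = 10**np.linspace(17.0, 18.5, 16)             # fixed energy bins

def flux(energies):
    counts, _ = np.histogram(energies, bins=edges)
    return counts / np.diff(edges)                   # exposure factor omitted

ratio = flux(1.14 * E) / flux(E)
print(f"mean flux shift for +14% in energy: {np.mean(ratio) - 1.0:+.0%}")
# Roughly (gamma_1 - 1) * 14% ~ 33%, consistent with the ~35% quoted in the text.
```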
Fig. 15 Systematic uncertainties in the flux measurement as a function of energy. The main contributions are shown separately
Beyond that of the energy scale, the additional uncertainties are subdominant but are important to understand, as they are energy dependent and some are uncorrelated with other flux measurements made at the Observatory. Such knowledge is particularly important for the combination of the two SD spectra presented later in Sect. 5. The most relevant of these energy-dependent uncertainties is associated with the forward-folding procedure itself. The uncertainties in the resolution function and in the detection efficiency each contribute a component to the overall unfolding uncertainty. The forward-folding process was hence repeated by shifting, within the statistical uncertainties, the parameterizations of the energy resolution (Eq. (8)) and of the efficiency, and by bracketing the bias with pure proton and pure iron primaries below full efficiency. The impact of the resolution uncertainties on the unfolding procedure is the largest, in particular at the highest energies. On the other hand, the energy bias and reduced efficiency below \(10^{17}\) eV only impact the first few bins. These various components are summed in quadrature and are shown by the dotted blue line in Fig. 15. These influences are seen to impact the spectrum by \({<}4\%\).
The last significant uncertainty in the flux is related to the calculation of the geometric exposure of the array. This quantity has been studied previously and amounts to 4% for the SD-750, which directly translates into a 4% energy-independent shift in the flux [24].
The resulting systematic uncertainties of the spectral parameters are given in Table 6. For completeness, beyond the summary information provided by the spectrum parameterization, the correlation matrix of the energy spectrum is given in the Supplementary material. It is obtained by repeating the analysis on a large number of data sets, sampling randomly the systematic uncertainties listed above.
The combined SD-750 and SD-1500 energy spectrum
The spectrum obtained in Sect. 4 extends down to \(10^{17}\) eV and at the high-energy end overlaps with the one recently reported in [21] using the SD-1500 array. The two spectra are superimposed in Fig. 16. Beyond the overall consistency observed between the two measurements, a combination of them is desirable to gather the information in a single energy spectrum above \(10^{17}\) eV obtained with data from both the SD-750 and the SD-1500 of the Pierre Auger Observatory. We present below such a combination considering adjustable re-scaling factors in exposures, \(\delta {\mathcal {E}}\), and \(E_\text {SD}\) energy scales, \(\delta E_\text {SD}\), within uncorrelated uncertainties.
Fig. 16 Superimposed SD spectra to be combined, scaled by \(E^{2.6}\): the SD-750 (red circles) and the SD-1500 (black squares)
Fig. 17 SD energy spectrum after combining the individual measurements by the SD-750 and the SD-1500, scaled by \(E^{2.6}\). The fit using the proposed function (Eq. (13)) is overlaid in red along with the one-sigma error band in gray
The combination is carried out using the same bin-by-bin correction approach as in Sect. 4. The joint likelihood function, \({\mathcal {L}}({\mathbf {s}},\delta {\mathcal {E}},\delta E_\text {SD})\), is built from the product of the individual Poissonian likelihoods pertaining to the two SD measurements, \({\mathcal {L}}_{750}\) and \({\mathcal {L}}_{1500}\). These two individual likelihoods share the same proposed function,
$$\begin{aligned} J(E,{\mathbf {s}}) = J_0 \left( \frac{E}{E_0}\right) ^{-\gamma _0} \frac{\prod _{i=0}^3\left[ 1+\left( \frac{E}{E_{ij}}\right) ^{\frac{1}{\omega _{ij}}}\right] ^{(\gamma _i-\gamma _j)\omega _{ij}}}{\prod _{i=0}^3\left[ 1+\left( \frac{E_0}{E_{ij}}\right) ^{\frac{1}{\omega _{ij}}}\right] ^{(\gamma _i-\gamma _j)\omega _{ij}}},\nonumber \\ \end{aligned}$$
with \(j=i+1\) and \(E_0 = 10^{18.5}\) eV. As in [21], the transition parameters \(\omega _{12}\), \(\omega _{23}\) and \(\omega _{34}\) are fixed to 0.05. In this way, the same parameters \({\mathbf {s}}\) are used during the minimisation process to calculate the set of expectations \(\nu _i({\mathbf {s}},\delta {\mathcal {E}},\delta E_\text {SD})\) of the two arrays. For each array, a change of the associated exposure \({\mathcal {E}}\rightarrow {\mathcal {E}}+\delta {\mathcal {E}}\) impacts the \(\nu _i\) coefficients accordingly, while a change in energy scale \(E_\text {SD}\rightarrow E_\text {SD}+\delta E_\text {SD}\) also impacts the observed number of events in each bin. Additional likelihood factors, \({\mathcal {L}}_{\delta {\mathcal {E}}}\) and \({\mathcal {L}}_{\delta E_\text {SD}}\), are thus required to control the changes of the exposure and of the energy scale within their uncorrelated uncertainties. The likelihood factors described below account for \(\delta {\mathcal {E}}\) and \(\delta E_\text {SD}\) changes associated with the SD-750 only. We have checked that allowing additional free parameters, such as the \(\delta {\mathcal {E}}\) corresponding to the SD-1500, does not improve the deviance of the best fit by more than one unit, and thus their introduction is not supported by the data.
Both likelihood factors are described by Gaussian distributions with a spread given by the uncertainty pertaining to the exposure and to the energy-scale. The joint likelihood function reads then as
$$\begin{aligned} {\mathcal {L}}({\mathbf {s}},\delta {\mathcal {E}},\delta E_\text {SD})={\mathcal {L}}_{750}\times {\mathcal {L}}_{1500}\times {\mathcal {L}}_{\delta {\mathcal {E}}}\times {\mathcal {L}}_{\delta E_\text {SD}}. \end{aligned}$$
The allowed change of exposure, \(\delta {\mathcal {E}}\), is guided by the systematic uncertainties in the SD-750 exposure, \(\sigma _{\mathcal {E}}/{\mathcal {E}}=4\%\). Hence, the constraining term for any change in the SD-750 exposure reads, dropping constant terms, as
$$\begin{aligned} -2\ln {\mathcal {L}}_{\delta {\mathcal {E}}}(\delta {\mathcal {E}}) = \left( \frac{\delta {\mathcal {E}}}{\sigma _{\mathcal {E}}}\right) ^2. \end{aligned}$$
Likewise, uncertainties in A and B, \(\delta A\) and \(\delta B\), translate into uncertainties in the SD-750 energy scale. Statistical contributions stem from the energy calibration of \(S_{35}\) and are inherently uncorrelated with those of the SD-1500. Other uncorrelated contributions from the systematic uncertainties of the FD energy scales propagated to the SD-1500 and SD-750 could also enter into play. The magnitude of such systematics, \(\sigma _{\mathrm {syst}}\), is difficult to quantify. By testing several values for \(\sigma _{\mathrm {syst}}\), we have checked, however, that such contributions have a negligible impact on the combined spectrum. Hence, the constraining term for any change in energy scale can be considered to stem from statistical uncertainties only and reads as
$$\begin{aligned} -2\ln {\mathcal {L}}_{E_{\mathrm {SD}}}(\delta A,\delta B) = [\sigma ^{-1}]_{AA}(\delta A)^2+[\sigma ^{-1}]_{BB}(\delta B)^2 + 2[\sigma ^{-1}]_{AB}(\delta A)(\delta B), \end{aligned}$$
where the notation \([\sigma ]_{ij}\) stands for the coefficients of the variance-covariance matrix of the A and B best-fit estimates and \([\sigma ^{-1}]\) is the inverse of this matrix.
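Structurally, the combined fit minimizes the sum of the two Poisson terms plus the two Gaussian penalties above. The sketch below shows that structure only: the folded expectations are reduced to a crude stand-in, and the counts, exposure uncertainty, and (A, B) covariance are placeholders rather than the values used in the analysis.

```python
# Sketch: structure of the joint fit, L_750 x L_1500 x L_dExposure x L_dEnergyScale.
# Counts, the exposure uncertainty and the (A, B) covariance are toy placeholders,
# and the effect of the nuisance parameters on the expectations is a crude stand-in.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
N_750, N_1500 = rng.poisson(200.0, 10), rng.poisson(500.0, 10)

sigma_exposure = 0.04                                # relative SD-750 exposure uncertainty
cov_AB = np.array([[0.02**2, 0.5e-4],
                   [0.5e-4, 0.01**2]])               # toy covariance of (A, B)
cov_AB_inv = np.linalg.inv(cov_AB)

def expectations(norm, d_exp, dA, dB, base):
    # Crude stand-in: exposure and energy-scale shifts rescale the folded expectations.
    return norm * base * (1.0 + d_exp) * (1.0 + 2.3 * (dA + dB))

def neg2_log_like(params):
    norm, d_exp, dA, dB = params
    val = 0.0
    for N, base, nuis in ((N_750, 200.0, (d_exp, dA, dB)),
                          (N_1500, 500.0, (0.0, 0.0, 0.0))):
        nu = expectations(norm, *nuis, base)
        val += 2.0 * np.sum(nu - N * np.log(nu))     # Poisson parts, up to constants
    val += (d_exp / sigma_exposure)**2               # exposure constraint
    delta = np.array([dA, dB])
    val += delta @ cov_AB_inv @ delta                # energy-scale constraint
    return val

fit = minimize(neg2_log_like, x0=[1.0, 0.0, 0.0, 0.0], method="Nelder-Mead")
print(np.round(fit.x, 3))
```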
Table 7 Best-fit values of the combined spectral parameters (Eq. (13)). The parameters \(\omega _{12}\), \(\omega _{23}\) and \(\omega _{34}\) are fixed to the values constrained in [21]. Note that the parameters \(\gamma _0\) and \(E_{01}\) correspond to features below the measured energy region and should be treated only as aspects of the combination
The outcome of the forward-folding fit is the set of parameters \({\mathbf {s}}\), \(\delta {\mathcal {E}}\), \(\delta A\) and \(\delta B\) that allow us to calculate the expectation values \(\mu _i\) and \(\nu _i\), and thus the correction factors \(c_i\), for both arrays separately. The resulting combined spectrum, obtained as
$$\begin{aligned} J^\text {comb}_i=\frac{c_{i,750}\,N_{i,750}+c_{i,1500}\,N_{i,1500}}{{\mathcal {E}}_i^\text {eff}\,\Delta E_i}, \end{aligned}$$
is shown in Fig. 17. Here, the observed number of events \(N_{i,750}\) in each bin is calculated at the re-scaled energies, while the effective exposure, \({\mathcal {E}}_i^\text {eff}\), is the shifted exposure of the SD-750 in the energy range where \(N_{i,1500}=0\), that of the SD-1500 in the energy range where \(N_{i,750}=0\), and the sum \({\mathcal {E}}_{750}+\delta {\mathcal {E}}+{\mathcal {E}}_{1500}\) in the overlapping energy range. The set of spectral parameters is collected in Table 7, while the corresponding correlation matrix is reported in Appendix B (Table 11) for \(\delta {\mathcal {E}}\), \(\delta A\) and \(\delta B\) fixed to their best-fit values. The change in exposure is \(\delta {\mathcal {E}}/{\mathcal {E}}=+1.4\%\), while that in the energy scale follows from \(\delta A/A=-2.5\%\) and \(\delta B/B=+0.8\%\). The goodness-of-fit is evidenced by a deviance of 37.2 for an expected value of \(32\pm 8\). We also note that the parameters describing the spectral shape are in agreement with those of the two individual spectra from the SD arrays.
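The final combination step, i.e. the bin-by-bin weighting of the two corrected counts by the effective exposure, can be sketched as below; all arrays are placeholders for the quantities defined in the text.

```python
# Sketch: combined spectrum, J_i = (c_750 N_750 + c_1500 N_1500) / (E_eff * dE).
# Counts, correction factors and exposures below are placeholders.
import numpy as np

lg_edges = np.linspace(17.0, 20.1, 32)
dE = np.diff(10**lg_edges)                            # bin widths [eV]
n  = dE.size

N_750  = np.where(np.arange(n) < 18, 100, 0)          # toy: SD-750 populates low energies
N_1500 = np.where(np.arange(n) > 12, 400, 0)          # toy: SD-1500 populates high energies
c_750, c_1500 = np.full(n, 0.95), np.full(n, 0.98)    # toy bin-by-bin corrections

exp_750  = 105.0 * 1.014                              # shifted SD-750 exposure [km^2 sr yr]
exp_1500 = 6.0e4                                      # toy SD-1500 exposure

# Effective exposure: SD-750 alone, SD-1500 alone, or their sum where both contribute.
E_eff = np.where(N_1500 == 0, exp_750,
                 np.where(N_750 == 0, exp_1500, exp_750 + exp_1500))

J_comb = (c_750 * N_750 + c_1500 * N_1500) / (E_eff * dE)
```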
The impact of the systematic uncertainties, dominated by those in the energy scale, on the spectral parameters are reported in Table 7. For completeness, beyond the summary information provided by the spectrum parameterization, the correlation matrix of the energy spectrum itself is also given in the Supplementary material.
Fig. 18 SD-750 spectrum (solid red circles) near the second knee along with the measurements from Akeno [44], GAMMA [45], IceTop [9], KASCADE-Grande [46], TALE [10], Tien Shan [47], Tibet-III [48], Tunka-133 [11], and Yakutsk [49]. The experiments that set their energy scale using calorimetric observations are indicated by solid colored markers, while those with an energy scale based entirely on simulations are shown by gray markers
We have presented here a measurement of the CR spectrum in the energy range between the second knee and the ankle, which is covered with high statistics by the SD-750, including 560,000 events with zenith angles up to \(40^\circ \) and energies above \(10^{17}\) eV. The measurement is based on a total exposure of 105 km\(^2\) sr yr and on an energy scale set by calorimetric observations from the FD telescopes. We note a significant change in the spectral index, occurring over a width that is much broader than that of the ankle feature.
Such a change has been observed by a number of other experiments, and via various detection methods. Most notably, the nature of this feature was linked to a softening of the heavy-mass primaries beginning at \(10^{16.9}\) eV by the KASCADE-Grande experiment, leading to the moniker iron knee [8]. Additional analyses by the Tunka-133 [50] and IceCube [9] collaborations have given further evidence that high-mass particles are dominant near \(10^{17}\) eV and thus that it is their decline that largely defines the shape of the all-particle spectrum. The hypothesis is also supported by a preliminary study of the distributions of the depths of the shower maximum, \(X_\text {max}\), measured at the Auger Observatory [36, 51]. These have been parametrized according to the hadronic models EPOS-LHC [40], QGSJetII-04 [52] and Sibyll2.3 [53]. From these parametrizations, the evolution over energy of the fractions of different mass groups, from protons to Fe-nuclei, has been derived. From all three models, a fall-off of the Fe component above \(10^{17}\) eV is inferred. The consistency of all these observations strongly supports a scenario of Galactic CRs characterised by a rigidity-dependent maximum acceleration energy for particles with charge Z, namely \(E_\text {max}(Z)\simeq ZE_\text {max}^\text {proton}\), to explain the knee structures.
The measurements of the all-particle flux from various experiments [9,10,11, 44,45,46,47,48,49] in the energy region surrounding the second knee are shown in Fig. 18. Experiments that set their energy scale using calorimetric measurements are plotted using colored markers (Auger SD-750, TA TALE, Tunka-133, Yakutsk), while the measurements shown with gray markers rely on MC-based energy assignments. The spread between the various experiments is statistically significant. However, all these measurements are consistent with the SD-750 spectrum within the 14% energy-scale systematic uncertainty. Understanding the nature of the offsets in the energy scales is beyond the scope of this paper. However, we note that the TALE spectrum agrees rather well with the SD-750 spectrum when offset by 5 to 6% in energy. The agreement is notable given that, at-and-above the ankle, an energy-scale offset of around 11% is required to bring the spectral measurements of the SD-1500 of the Auger Observatory and the SD of the Telescope Array into agreement [54].
Additionally, we have presented a robust method to combine energy spectra. Using the result from the SD-750 and a previously reported measurement using the SD-1500, a unified SD spectrum was calculated by combining the respective observed fluxes, energy resolutions, and exposures. The result has partial coverage of the second knee and full coverage of the ankle, an additional inflection at \({\simeq }\,1.4{\times }10^{19}\) eV, and the suppression. This procedure is applied to spectra inferred from a single detector type (i.e. water-Cherenkov detectors), but can be used for the combination of any spectral measurements for which the uncorrelated uncertainties can be estimated.
The impressive regularity of the all-particle spectrum observed in the energy region between the second knee and the ankle can hide an underlying intertwining of different astrophysical phenomena, which might be exposed by looking at the spectrum of different primary elements. In the future, further measurements will allow separation of the intensities due to the different components. On the one hand, \(X_\text {max}\) values will be determined down to \(10^{17}\) eV using the three HEAT telescopes. On the other hand, the determination of the muon component of EAS above \(10^{17}\) eV will be possible using the new array of underground muon detectors [35], co-located with the SD-750. This will help us study whether the origin of the second knee stems from, for instance, the steep fall-off of an iron component, as expected for Galactic CRs characterized by a rigidity-dependent maximum acceleration energy for particles with charge Z, namely \(E_\text {max}(Z)\simeq ZE_\text {max}^\text {proton}\). In addition, we will be able to extend the measurement of the energy spectrum below \(10^{17}\,\)eV with a denser array of 433 m-spaced detectors and with the analysis of the Cherenkov light in FD events [55]. The extension will allow us to lower the threshold and to explore the second-knee region in more detail.
This manuscript has data included as electronic supplementary material. The online version of this article contains supplementary material, which is available to authorized users.
The response of an individual WCD to secondary particles has been studied using unbiased FADC waveforms and dedicated studies of signals from muons [23].
The energy-cut corresponding to the full-efficiency threshold increases with zenith angle, due to the increasing attenuation of the electromagnetic component with slant depth. The zenith angle \(40^\circ \) was chosen as a balance to have good statistical precision and a low energy threshold.
This is primarily due to the instabilities in the wireless communications systems as well as periods where large fractions of the array were not functioning.
Note that the p-value for a proposed function which does not include a transition from \(\gamma _0\) to \(\gamma _1\) can be rejected with more than \(20\sigma \) confidence.
For example, four bins with \(S_i \le S_{i+1} \le S_{i+2} \le S_{i+3}\) are considered one positive step, not three positive steps.
G.V. Kulikov, G.B. Khristiansen, On the size spectrum of extensive air showers. J. Exp. Theor. Phys. 35, 635 (1958)
HEGRA Collaboration, Energy spectrum and chemical composition of cosmic rays between \(0.3\) PeV and \(10\) PeV determined from the Cherenkov light and charged particle distributions in air showers. Astron. Astrophys. 359, 682 (2000). arXiv:astro-ph/9908202
J.W. Fowler, L.F. Fortson, C.C.H. Jui, D.B. Kieda, R.A. Ong, C.L. Pryke et al., A measurement of the cosmic ray spectrum and composition at the knee. Astropart. Phys. 15, 49 (2001). https://doi.org/10.1016/S0927-6505(00)00139-0. arXiv:astro-ph/0003190
EAS-TOP Collaboration, The cosmic ray primary composition in the "knee" region through the EAS electromagnetic and muon measurements at EAS-TOP. Astropart. Phys. 21, 583 (2004). https://doi.org/10.1016/j.astropartphys.2004.04.005
MACRO, EAS-TOP Collaboration, The primary cosmic ray composition between \(10^{15}\) and \(10^{16}\) eV from extensive air showers electromagnetic and TeV muon data. Astropart. Phys. 20, 641 (2004). https://doi.org/10.1016/j.astropartphys.2003.10.004. arXiv:astro-ph/0305325
A.P. Garyaka, R.M. Martirosov, S.V. Ter-Antonyan, N. Nikolskaya, Y.A. Gallant, L. Jones et al., Rigidity-dependent cosmic ray energy spectra in the knee region obtained with the GAMMA experiment. Astropart. Phys. 28, 169 (2007). https://doi.org/10.1016/j.astropartphys.2007.04.004. arXiv:0704.3200
P. Blasi, The origin of galactic cosmic rays. Astron. Astrophys. Rev. 21, 70 (2013). https://doi.org/10.1007/s00159-013-0070-7. arXiv:1311.7346
KASCADE-Grande Collaboration, The spectrum of high-energy cosmic rays measured with KASCADE-Grande. Astropart. Phys. 36, 183 (2012). https://doi.org/10.1016/j.astropartphys.2012.05.023. arXiv:1206.3834
IceCube Collaboration, Cosmic ray spectrum and composition from PeV to EeV using 3 years of data from IceTop and IceCube. Phys. Rev. D 100, 082002 (2019). https://doi.org/10.1103/PhysRevD.100.082002. arXiv:1906.04317
Telescope Array Collaboration, The cosmic-ray energy spectrum between \(2\) PeV and \(2\) EeV observed with the TALE detector in monocular mode. Astrophys. J. 865, 74 (2018). https://doi.org/10.3847/1538-4357/aada05. arXiv:1803.01288
N.M. Budnev et al., The primary cosmic-ray energy spectrum measured with the Tunka-133 array. Astropart. Phys. 117, 102406 (2020). https://doi.org/10.1016/j.astropartphys.2019.102406
A. Albert et al., Evidence of \(200\) TeV photons from HAWC. Astrophys. J. Lett. 907, L30 (2021). https://doi.org/10.3847/2041-8213/abd77b. arXiv:2012.15275
Tibet ASgamma Collaboration, First detection of sub-PeV diffuse gamma rays from the galactic disk: evidence for ubiquitous galactic cosmic rays beyond PeV energies. Phys. Rev. Lett. 126, 141101 (2021). https://doi.org/10.1103/PhysRevLett.126.141101. arXiv:2104.05181
LHAASO Collaboration, Ultrahigh-energy photons up to 1.4 petaelectronvolts from 12 gamma-ray galactic sources. Nature 594, 33–36 (2021). https://doi.org/10.1038/s41586-021-03498-z
LHAASO Collaboration, Discovery of the Ultra-high energy gamma-ray source LHAASO J2108+5157. arXiv:2106.09865
P. Cristofari, P. Blasi, E. Amato, The low rate of Galactic pevatrons. Astropart. Phys. 123, 102492 (2020). https://doi.org/10.1016/j.astropartphys.2020.102492. arXiv:2007.04294
A.M. Hillas, Can diffusive shock acceleration in supernova remnants account for high-energy galactic cosmic rays? J. Phys. G 31, R95 (2005). https://doi.org/10.1088/0954-3899/31/5/R02
KASCADE-Grande Collaboration, KASCADE-Grande measurements of energy spectra for elemental groups of cosmic rays. Astropart. Phys. 47, 54 (2013). https://doi.org/10.1016/j.astropartphys.2013.06.004. arXiv:1306.6283
R. Aloisio, V. Berezinsky, P. Blasi, Ultra high energy cosmic rays: implications of Auger data for source spectra and chemical composition. JCAP 10, 020 (2014). https://doi.org/10.1088/1475-7516/2014/10/020. arXiv:1312.7459
Pierre Auger Collaboration, Features of the energy spectrum of cosmic rays above \(2.5{\times } 10^{18}\) eV using the Pierre Auger Observatory. Phys. Rev. Lett. 125, 121106 (2020). https://doi.org/10.1103/PhysRevLett.125.121106. arXiv:2008.06488
Pierre Auger Collaboration, Measurement of the cosmic-ray energy spectrum above \(2.5{\times } 10^{18}\) eV using the Pierre Auger Observatory. Phys. Rev. D 102, 062005 (2020). https://doi.org/10.1103/PhysRevD.102.062005. arXiv:2008.06486
Pierre Auger Collaboration, Trigger and Aperture of the Surface Detector Array of the Pierre Auger Observatory. Nucl. Instrum. Meth. A 613, 29 (2010). https://doi.org/10.1016/j.nima.2009.11.018. arXiv:1111.6764
Pierre Auger Collaboration, Calibration of the surface array of the Pierre Auger Observatory. Nucl. Instrum. Meth. A 568, 839 (2006). https://doi.org/10.1016/j.nima.2006.07.066. arXiv:2102.01656
Pierre Auger Collaboration, The Pierre Auger Cosmic Ray Observatory. Nucl. Instrum. Meth. A 798, 172 (2015). https://doi.org/10.1016/j.nima.2015.06.058. arXiv:1502.01323
Pierre Auger Collaboration, The lateral trigger probability function for the ultra-high energy cosmic ray showers detected by the Pierre Auger Observatory. Astropart. Phys. 35, 266 (2011). https://doi.org/10.1016/j.astropartphys.2011.08.001. arXiv:1111.6645
Pierre Auger Collaboration, Reconstruction of events recorded with the surface detector of the pierre auger observatory. JINST 15, P10021 (2020). https://doi.org/10.1088/1748-0221/15/10/P10021. arXiv:2007.09035
A.M. Hillas, Derivation of the EAS spectrum. Acta Phys. Acad. Sci. Hung. 29, 355 (1970)
D.W. Newton, J. Knapp, A.A. Watson, The optimum distance at which to determine the size of a giant air shower. Astropart. Phys. 26, 414 (2007). https://doi.org/10.1016/j.astropartphys.2006.08.003. arXiv:astro-ph/0608118
J. Bellido (Pierre Auger Collaboration), Depth of maximum of air-shower profiles at the Pierre Auger Observatory: measurements above \(10^{17.2}\) eV and composition implications. PoS ICRC2017, 506 (2017). https://doi.org/10.22323/1.301.0506
Pierre Auger Collaboration, Impact of atmospheric effects on the energy reconstruction of air showers observed by the surface detectors of the Pierre Auger Observatory. JINST 12, P02006 (2017). https://doi.org/10.1088/1748-0221/12/02/p02006. arXiv:1702.02835
J. Hersil, I. Escobar, D. Scott, G. Clark, S. Olbert, Observations of extensive air showers near the maximum of their longitudinal development. Phys. Rev. Lett. 6, 22 (1961). https://doi.org/10.1103/PhysRevLett.6.22
Pierre Auger Collaboration, Cosmic-ray anisotropies in right ascension measured by the Pierre Auger Observatory. Astrophys. J. 891, 142 (2020). https://doi.org/10.3847/1538-4357/ab7236. arXiv:2002.06172
T.W. Anderson, D.A. Darling, A test of goodness of fit. J. Am. Stat. Assoc. 49, 765 (1954)
Pierre Auger Collaboration, Data-driven estimation of the invisible energy of cosmic ray showers with the Pierre Auger Observatory. Phys. Rev. D 100, 082003 (2019). https://doi.org/10.1103/PhysRevD.100.082003. arXiv:1901.08040
Pierre Auger Collaboration, Direct measurement of the muonic content of extensive air showers between \(2\times 10^{17}\) and \(2\times 10^{18}\) eV at the Pierre Auger Observatory. Eur. Phys. J. C 80, 751 (2020). https://doi.org/10.1140/epjc/s10052-020-8055-y
Pierre Auger Collaboration, Depth of maximum of air-shower profiles at the Pierre Auger Observatory. II. Composition implications. Phys. Rev. D 90, 122006 (2014). https://doi.org/10.1103/PhysRevD.90.122006. arXiv:1409.5083
H.P. Dembinski, B. Kégl, I.C. Mariş, M. Roth, D. Veberič, A likelihood method to cross-calibrate air-shower detectors. Astropart. Phys. 73, 44 (2016). https://doi.org/10.1016/j.astropartphys.2015.08.001. arXiv:1503.09027
Pierre Auger Collaboration, The energy scale of the Pierre Auger Observatory. PoS ICRC2019, 231 (2020). https://doi.org/10.22323/1.358.0231
D. Heck et al., CORSIKA: a Monte Carlo code to simulate extensive air showers. Report fzka 6019 (1998)
T. Pierog, I. Karpenko, J.M. Katzy, E. Yatsenko, K. Werner, EPOS LHC: test of collective hadronization with data measured at the CERN Large Hadron Collider. Phys. Rev. C 92, 034906 (2015). https://doi.org/10.1103/PhysRevC.92.034906. arXiv:1306.0121
H.P. Dembinski, R. Engel, A. Fedynitch, T. Gaisser, F. Riehn, T. Stanev, Data-driven model of the cosmic-ray flux and mass composition from \(10\) GeV to \(10^{11}\) GeV. PoS ICRC2017, 533 (2017). https://doi.org/10.22323/1.301.0533
M. Bonamente, Distribution of the C statistic with applications to the sample mean of Poisson data. J. Appl. Stat. 47, 2044 (2020). https://doi.org/10.1080/02664763.2019.1704703. arXiv:1912.05444
B. Dawson (Pierre Auger Collaboration), The Energy Scale of the Pierre Auger Observatory. PoS ICRC2019, 231 (2019). https://doi.org/10.22323/1.358.0231
M. Nagano, M. Teshima, Y. Matsubara, H. Dai, T. Hara, N. Hayashida et al., Energy spectrum of primary cosmic rays above \(10^{17}\) eV determined from the extensive air shower experiment at Akeno. J. Phys. G 18, 423 (1992). https://doi.org/10.1088/0954-3899/18/2/022
S. Ter-Antonyan, Sharp knee phenomenon of primary cosmic ray energy spectrum. Phys. Rev. D 89, 123003 (2014). https://doi.org/10.1103/PhysRevD.89.123003. arXiv:1405.5472
KASCADE-Grande Collaboration, KASCADE-Grande energy spectrum of cosmic rays interpreted with post-LHC hadronic interaction models. PoS ICRC2015, 359 (2016). https://doi.org/10.22323/1.236.0359
E.N. Gudkova, N.M. Nesterova, Results of the further analysis of data from the Tien Shan array in the energy spectrum of primary cosmic rays in the energy range of \(2 \times 10^{13}\)–\(3 \times 10^{17}\) eV. Phys. Atomic Nuclei 83, 629 (2020). arXiv:2010.04236
TIBET III Collaboration, The All-particle spectrum of primary cosmic rays in the wide energy range from \(10^{14}\) eV to \(10^{17}\) eV observed with the Tibet-III air-shower array. Astrophys. J. 678, 1165 (2008). https://doi.org/10.1086/529514. arXiv:0801.1803
S.P. Knurenko, Z.E. Petrov, R. Sidorov, I.Y. Sleptsov, S.K. Starostin, G.G. Struchkov, Cosmic ray spectrum in the energy range \(10^{15}\)–\(10^{18}\) eV and the second knee according to the small Cherenkov setup at the Yakutsk EAS array, Proc. of 33rd ICRC (2013). arXiv:1310.1978
O.A. Gress, T.I. Gress, E.E. Korosteleva, L.A. Kuzmichev, B.K. Lubsandorzhiev, L.V. Pan'kov et al., The study of primary cosmic rays energy spectrum and mass composition in the energy range \(0.5\)–\(50\) PeV with TUNKA Eas Cherenkov array. Nucl. Phys. B-Proc. Suppl. 75, 299 (1999)
Pierre Auger Collaboration, Depth of maximum of air-shower profiles at the Pierre Auger Observatory: measurements above \(10^{17.2}\) eV and composition implications. PoS ICRC2017, 506 (2018). https://doi.org/10.22323/1.301.0506
S. Ostapchenko, QGSJET-II: physics, recent improvements, and results for air showers, in EPJ Web of Conferences, vol. 52, p. 02001 (EDP Sciences, 2013)
F. Riehn, H.P. Dembinski, R. Engel, A. Fedynitch, T. Gaisser, T. Stanev, The hadronic interaction model Sibyll 2.3c and Feynman scaling. PoS ICRC2017, 301 (2017). https://doi.org/10.22323/1.301.0301. arXiv:1709.07227
O. Deligny, (Pierre Auger and Telescope Array Collaborations), The energy spectrum of ultra-high energy cosmic rays measured at the Pierre Auger Observatory and at the Telescope Array. PoS ICRC2019, 234 (2019). https://doi.org/10.22323/1.358.0234
V. Novotny (Pierre Auger Collaboration), Measurement of the spectrum of cosmic rays above \(10^{16.5}\) eV with Cherenkov-dominated events at the Pierre Auger Observatory. PoS ICRC2019, 374 (2019). https://doi.org/10.22323/1.358.0374
The successful installation, commissioning, and operation of the Pierre Auger Observatory would not have been possible without the strong commitment and effort from the technical and administrative staff in Malargüe. We are very grateful to the following agencies and organizations for financial support: Argentina – Comisión Nacional de Energía Atómica; Agencia Nacional de Promoción Científica y Tecnológica (ANPCyT); Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET); Gobierno de la Provincia de Mendoza; Municipalidad de Malargüe; NDM Holdings and Valle Las Leñas; in gratitude for their continuing cooperation over land access; Australia – the Australian Research Council; Belgium – Fonds de la Recherche Scientifique (FNRS); Research Foundation Flanders (FWO); Brazil – Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Financiadora de Estudos e Projetos (FINEP); Fundação de Amparo à Pesquisa do Estado de Rio de Janeiro (FAPERJ); São Paulo Research Foundation (FAPESP) Grants No. 2019/10151-2, No. 2010/07359-6 and No. 1999/05404-3; Ministério da Ciência, Tecnologia, Inovações e Comunicações (MCTIC); Czech Republic – Grant No. MSMT CR LTT18004, LM2015038, LM2018102, CZ.02.1.01/0.0/0.0/16_013/0001402, CZ.02.1.01/0.0/0.0/18_046/0016010 and CZ.02.1.01/0.0/0.0/17_049/0008422; France – Centre de Calcul IN2P3/CNRS; Centre National de la Recherche Scientifique (CNRS); Conseil Régional Ile-de-France; Département Physique Nucléaire et Corpusculaire (PNC-IN2P3/CNRS); Département Sciences de l'Univers (SDU-INSU/CNRS); Institut Lagrange de Paris (ILP) Grant No. LABEX ANR-10-LABX-63 within the Investissements d'Avenir Programme Grant No. ANR-11-IDEX-0004-02; Germany – Bundesministerium für Bildung und Forschung (BMBF); Deutsche Forschungsgemeinschaft (DFG); Finanzministerium Baden-Württemberg; Helmholtz Alliance for Astroparticle Physics (HAP); Helmholtz-Gemeinschaft Deutscher Forschungszentren (HGF); Ministerium für Innovation, Wissenschaft und Forschung des Landes Nordrhein-Westfalen; Ministerium für Wissenschaft, Forschung und Kunst des Landes Baden-Württemberg; Italy – Istituto Nazionale di Fisica Nucleare (INFN); Istituto Nazionale di Astrofisica (INAF); Ministero dell'Istruzione, dell'Universitá e della Ricerca (MIUR); CETEMPS Center of Excellence; Ministero degli Affari Esteri (MAE); México – Consejo Nacional de Ciencia y Tecnología (CONACYT) No. 167733; Universidad Nacional Autónoma de México (UNAM); PAPIIT DGAPA-UNAM; The Netherlands – Ministry of Education, Culture and Science; Netherlands Organisation for Scientific Research (NWO); Dutch national e-infrastructure with the support of SURF Cooperative; Poland -Ministry of Science and Higher Education, grant No. DIR/WK/2018/11; National Science Centre, Grants No. 2013/08/M/ST9/00322, No. 2016/23/B/ST9/01635 and No. 
HARMONIA 5–2013/10/M/ST9/00062, UMO-2016/22/M/ST9/00198; Portugal – Portuguese national funds and FEDER funds within Programa Operacional Factores de Competitividade through Fundação para a Ciência e a Tecnologia (COMPETE); Romania – Romanian Ministry of Education and Research, the Program Nucleu within MCI (PN19150201/16N/2019 and PN19060102) and project PN-III-P1-1.2-PCCDI-2017-0839/19PCCDI/2018 within PNCDI III; Slovenia – Slovenian Research Agency, grants P1-0031, P1-0385, I0-0033, N1-0111; Spain – Ministerio de Economía, Industria y Competitividad (FPA2017-85114-P and PID2019-104676GB-C32), Xunta de Galicia (ED431C 2017/07), Junta de Andalucía (SOMM17/6104/UGR, P18-FR-4314) Feder Funds, RENATA Red Nacional Temática de Astropartículas (FPA2015-68783-REDT) and María de Maeztu Unit of Excellence (MDM-2016-0692); USA – Department of Energy, Contracts No. DE-AC02-07CH11359, No. DE-FR02-04ER41300, No. DE-FG02-99ER41107 and No. DE-SC0011689; National Science Foundation, Grant No. 0450696; The Grainger Foundation; Marie Curie-IRSES/EPLANET; European Particle Physics Latin American Network; University of Delaware Research Foundation (UDRF) – 2019; and UNESCO.
Centro Atómico Bariloche and Instituto Balseiro (CNEA-UNCuyo-CONICET), San Carlos de Bariloche, Argentina
I. Allekotte, X. Bertou, G. Golup, M. Gómez Berisso, J. M. González, I. Goos, D. Harari, S. Mollerach & E. Roulet
Centro de Investigaciones en Láseres y Aplicaciones, CITEDEF and CONICET, Villa Martelli, Argentina
J. Pallotta
Departamento de Física and Departamento de Ciencias de la Atmósfera y los Océanos, FCEyN, Universidad de Buenos Aires and CONICET, Buenos Aires, Argentina
S. Dasso
IFLP, Universidad Nacional de La Plata and CONICET, La Plata, Argentina
M. T. Dova, P. Hansen, A. G. Mariazzi, S. J. Sciutto, M. Tueros, I. D. Vergara Quispe & H. Wahlberg
Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA), Buenos Aires, Argentina
S. Dasso & A. C. Rovero
Instituto de Física de Rosario (IFIR)-CONICET/U.N.R. and Facultad de Ciencias Bioquímicas y Farmacéuticas U.N.R., Rosario, Argentina
V. Binet, M. M. Freire & M. I. Micheletti
Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET, UNSAM), Universidad Tecnológica Nacional-Facultad Regional Mendoza (CONICET/CNEA), Mendoza, Argentina
A. C. Cobos Cerutti & B. García
Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET, UNSAM), Buenos Aires, Argentina
A. Almela, B. Andrada, H. Asorey, K. Bismark, A. M. Botti, P. G. Brichetto Orchera, M. Büsken, J. de Jesús, A. Etchegoyen, J. M. Figueira, A. Fuster, F. Gesualdi, F. Gollan, S. Hahn, M. R. Hampel, V. V. Kizakke Covilakam, S. Martinelli, D. Melo, A. L. Müller, E. E. Pereira Martins, C. Pérez Bertolli, M. Perlin, M. Platino, D. Ravignani, M. Reininghaus, M. J. Roncoroni, F. Sánchez, C. Sarmiento-Cano, M. Schimassek, F. Schlüter, M. Scornavacche, G. Silli, M. Stadelmaier, A. Streich, A. D. Supanitsky & B. Wundheiler
Observatorio Pierre Auger, Malargüe, Argentina
M. Cerda, F. Gobbi, J. Kleinfeller, R. Squartini & A. Travaini
Observatorio Pierre Auger and Comisión Nacional de Energía Atómica, Malargüe, Argentina
G. Avila, F. Contreras, M. del Río, P. F. Gómez Vitale, J. P. Gongora, J. Rodriguez Rojo & R. Sato
Universidad Tecnológica Nacional-Facultad Regional Buenos Aires, Buenos Aires, Argentina
A. Almela, A. Etchegoyen & A. Fuster
University of Adelaide, Adelaide, SA, Australia
J. M. Albury, J. A. Bellido, R. W. Clay, B. R. Dawson, J. A. Day, T. D. Grubb, V. M. Harvey, G. C. Hill, B. C. Manning, S. J. Saffi & T. Sudholz
Université Libre de Bruxelles (ULB), Brussels, Belgium
N. González, I. C. Mariş, D. Mockler, M. Suárez-Durán & O. Zapparrata
Vrije Universiteit Brussels, Brussels, Belgium
S. Buitink, T. Huege, K. Mulrey & O. Scholten
Centro Brasileiro de Pesquisas Fisicas, Rio de Janeiro, RJ, Brazil
R. C. Shellard
Centro Federal de Educação Tecnológica Celso Suckow da Fonseca, Nova Friburgo, Brazil
B. L. Lago
Instituto Federal de Educação, Ciência e Tecnologia do Rio de Janeiro (IFRJ), Rio de Janeiro, Brazil
J. de Oliveira
Escola de Engenharia de Lorena, Universidade de São Paulo, Lorena, SP, Brazil
F. Catalani & C. J. Todero Peixoto
Instituto de Física de São Carlos, Universidade de São Paulo, São Carlos, SP, Brazil
V. de Souza, R. G. Lang & H. Martinez
Instituto de Física, Universidade de São Paulo, São Paulo, SP, Brazil
L. Bonneau Arbeletche, J. Perez Armand, W. Rodrigues de Carvalho & E. M. Santos
Universidade Estadual de Campinas, IFGW, Campinas, SP, Brazil
J. A. Chinellato, D. de Oliveira Franco, C. Dobrigkeit, A. C. Fauth, A. Machado Payeras & M. A. Muller
Universidade Estadual de Feira de Santana, Feira de Santana, Brazil
G. P. Guedes
Universidade Federal do ABC, Santo André, SP, Brazil
M. A. Leigui de Oliveira
Universidade Federal do Paraná, Setor Palotina, Palotina, Brazil
R. C. dos Anjos
Instituto de Física, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ, Brazil
C. Bonifazi, J. R. T. de Mello Neto & C. Watanabe
Observatório do Valongo, Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ, Brazil
J. R. T. de Mello Neto & C. Ventura
Universidade Federal Fluminense, EEIMVR, Volta Redonda, RJ, Brazil
D. Correia dos Santos, R. M. de Almeida & D. dos Santos
Universidad de Medellín, Medellín, Colombia
A. Tapia
Universidad Industrial de Santander, Bucaramanga, Colombia
L. A. Núñez, J. Peña-Rodriguez, J. D. Sanabria Gomez & A. Vásquez-Ramírez
Institute of Particle and Nuclear Physics, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
D. Nosek & V. Novotny
Institute of Physics of the Czech Academy of Sciences, Prague, Czech Republic
A. Bakalova, J. Blazek, M. Boháčová, J. Chudoba, J. Ebr, P. Hamal, P. Janecek, J. Jurysek, D. Mandat, M. Palatka, M. Pech, M. Prouza, J. Ridky, E. Santos, P. Schovánek, P. Tobiska, P. Travnicek, J. Vicha & A. Yushkov
Palacky University, RCPTM, Olomouc, Czech Republic
L. Chytka, P. Horvath, M. Hrabovský, S. Michal, L. Nožka, L. Vaclavek & M. Vacula
CNRS/IN2P3, IJCLab, Université Paris-Saclay, Orsay, France
O. Deligny, P. L. Ghia, I. Lhenry-Yvon, S. Marafico & P. Savina
Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE), Sorbonne Université, Université de Paris, CNRS-IN2P3, Paris, France
P. Billoir & A. Letessier-Selvon
Univ. Grenoble Alpes, CNRS, Grenoble Institute of Engineering Univ. Grenoble Alpes, LPSC-IN2P3, 38000, Grenoble, France
C. Berat, C. Bleve, F. Montanet, J. Souchard, P. Stassi & Z. Torrès
Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France
J. Biteau & T. Suomijärvi
Department of Physics, Bergische Universität Wuppertal, Wuppertal, Germany
K. H. Becker, I. Caracas, M. Gottowik, A. Kääpä, K. H. Kampert, E. Mayotte, W. M. Namasaka, A. Nasr-Esfahani, P. Papenbreer, J. Pawlowsky, S. Querchfeld, J. Rautenberg, M. Schimp, S. Schröder, S. Sehgal & D. Wittkowski
Institute for Experimental Particle Physics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
K. Bismark, M. Büsken, R. Engel, M. Köpke, Q. Luce, D. Mockler, E. E. Pereira Martins, M. Schimassek, D. Schmidt, A. Schulz & A. Streich
Institut für Prozessdatenverarbeitung und Elektronik, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
F. Feldbusch, H. Gemmeke, M. Kleifges, N. Kunka, A. Menshikov & M. Weber
Institute for Astroparticle Physics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
K. Daumiller, J. de Jesús, R. Engel, T. Fitoussi, F. Gesualdi, I. Goos, S. Hahn, A. Haungs, D. Heck, T. Huege, N. Karastathis, B. Keilhauer, V. V. Kizakke Covilakam, H. O. Klages, V. Lenok, S. Martinelli, H. J. Mathes, C. Pérez Bertolli, M. Perlin, T. Pierog, M. Reininghaus, M. Roth, C. M. Schäfer, H. Schieler, F. Schlüter, F. G. Schröder, M. Scornavacche, G. Silli, M. Stadelmaier, O. Tkachenko, R. Ulrich, M. Unger, D. Veberič & A. Weindl
III. Physikalisches Institut A, RWTH Aachen University, Aachen, Germany
P. R. Araújo Ferreira, T. Bister, T. Bretz, F. L. Briechle, M. Erdmann, A. L. Garcia Vegas, J. Glombitza, T. Hebbeker, J. Kemp, N. Langner, J. Schulte & M. Wirtz
II. Institut für Theoretische Physik, Universität Hamburg, Hamburg, Germany
J. Manshanden & G. Sigl
Department Physik-Experimentelle Teilchenphysik, Universität Siegen, Siegen, Germany
P. Buchholz, M. Niechciol, M. Risse & P. Ruehl
Gran Sasso Science Institute, L'Aquila, Italy
F. Barbato, A. Condorelli, I. De Mitri & S. Petrera
INFN Laboratori Nazionali del Gran Sasso, Assergi (L'Aquila), Italy
F. Barbato, D. Boncioli, A. Condorelli, I. De Mitri, M. Mastrodicasa, S. Petrera, V. Rizi, F. Salamida & C. Trimarelli
INFN, Sezione di Catania, Catania, Italy
M. Buscemi, R. Caruso, A. Insolia, D. Lo Presti, G. Marsella, V. Pirronello & A. Segreto
INFN, Sezione di Lecce, Lecce, Italy
G. Cataldi, M. R. Coluccia, F. Convenga, F. de Palma, E. De Vito, I. Epicoco, G. Mancarella, D. Martello, A. Nucita, L. Perrone, P. Savina & V. Scherini
INFN, Sezione di Milano, Milan, Italy
L. Caccianiga, G. Consolati, C. Galelli, M. Giammarchi & L. Miramonti
INFN, Sezione di Napoli, Naples, Italy
C. Aramo, R. Colalillo, F. Guarino & L. Valore
INFN, Sezione di Roma "Tor Vergata", Rome, Italy
G. Matthiae, G. Salina & V. Verzi
INFN, Sezione di Torino, Turin, Italy
M. Aglietta, G. A. Anastasi, M. E. Bertaina, A. Castellina, A. Di Matteo, F. Fenu, A. Gorgi, E. Guido, C. Morello, R. Mussa & C. Taricco
Istituto di Astrofisica Spaziale e Fisica Cosmica di Palermo (INAF), Palermo, Italy
A. Segreto
Osservatorio Astrofisico di Torino (INAF), Turin, Italy
M. Aglietta, A. Castellina, F. Fenu, A. Gorgi & C. Morello
Dipartimento di Scienze e Tecnologie Aerospaziali, Politecnico di Milano, Milan, Italy
G. Consolati
Dipartimento di Matematica e Fisica "E. De Giorgi", Università del Salento, Lecce, Italy
F. Convenga, F. de Palma, E. De Vito, I. Epicoco, G. Mancarella, D. Martello, A. Nucita, L. Perrone & P. Savina
Dipartimento di Scienze Fisiche e Chimiche, Università dell'Aquila, L'Aquila, Italy
D. Boncioli, M. Mastrodicasa, V. Rizi, F. Salamida & C. Trimarelli
Dipartimento di Fisica e Astronomia, Università di Catania, Catania, Italy
R. Caruso, A. Insolia, D. Lo Presti & V. Pirronello
Dipartimento di Fisica, Università di Milano, Milan, Italy
L. Caccianiga, C. Galelli & L. Miramonti
Dipartimento di Fisica "Ettore Pancini", Università di Napoli "Federico II", Naples, Italy
R. Colalillo, F. Guarino & L. Valore
Dipartimento di Fisica e Chimica "E. Segrè", Università di Palermo, Palermo, Italy
G. Marsella
Dipartimento di Fisica, Università di Roma "Tor Vergata", Rome, Italy
G. Matthiae
Dipartimento di Fisica, Università Torino, Turin, Italy
G. A. Anastasi, M. E. Bertaina, E. Guido & C. Taricco
Benemérita Universidad Autónoma de Puebla, Puebla, Mexico
R. López, O. Martínez Bravo, A. Parra, H. Salazar & E. Varela
Unidad Profesional Interdisciplinaria en Ingeniería y Tecnologías Avanzadas del Instituto Politécnico Nacional (UPIITA-IPN), Mexico, D.F., Mexico
R. Pelayo
Universidad Autónoma de Chiapas, Tuxtla Gutiérrez, Chiapas, Mexico
K. S. Caballero-Mora
Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Michoacán, México
J. C. Arteaga Velázquez
Universidad Nacional Autónoma de México, Mexico, D.F., Mexico
J. C. D'Olivo, G. Medina-Tanco, L. Nellen & J. F. Valdés Galicia
Facultad de Ciencias Naturales y Formales, Universidad Nacional de San Agustin de Arequipa, Arequipa, Peru
J. A. Bellido
Institute of Nuclear Physics PAN, Krakow, Poland
N. Borodai, D. Góra, J. Pȩkala, J. Stasielak & H. Wilczyński
Faculty of High-Energy Astrophysics, University of Łódź, Łódź, Poland
Z. Szadkowski
Laboratório de Instrumentação e Física Experimental de Partículas – LIP and Instituto Superior Técnico-IST, Universidade de Lisboa-UL, Lisbon, Portugal
P. Abreu, S. Andringa, P. Assis, R. J. Barreira Luz, L. Cazon, R. Conceição, L. M. Domingues Mendes, L. Lopes, M. Pimenta, F. Riehn, R. Sarmento & B. Tomé
"Horia Hulubei" National Institute for Physics and Nuclear Engineering, Bucharest-Magurele, Romania
A. Balaceanu, A. Gherghel-Lascu, M. Niculescu-Oglinzanu, A. Saftoiu, O. Sima & D. Stanca
Institute of Space Science, Bucharest-Magurele, Romania
P. G. Isar
University Politehnica of Bucharest, Bucharest, Romania
A. M. Badescu
Center for Astrophysics and Cosmology (CAC), University of Nova Gorica, Nova Gorica, Slovenia
A. Filipčič, J. P. Lundquist, S. Stanič, S. Vorobiov, D. Zavrtanik, M. Zavrtanik & L. Zehrer
Experimental Particle Physics Department, J. Stefan Institute, Ljubljana, Slovenia
A. Filipčič, D. Zavrtanik & M. Zavrtanik
Universidad de Granada and C.A.F.P.E., Granada, Spain
A. Bueno & J. M. Carceller
Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de Santiago de Compostela, Santiago de Compostela, Spain
J. Alvarez-Muñiz, G. Parente, F. Pedreira & E. Zas
IMAPP, Radboud University Nijmegen, Nijmegen, The Netherlands
R. Alves Batista, F. Canfora, S. J. de Jong, G. De Mauro, H. Falcke, T. Fodran, C. Galea, U. Giaccari, J. R. Hörandel, A. Khakurdikar, B. Pont, M. Pothast & C. Timmermans
Nationaal Instituut voor Kernfysica en Hoge Energie Fysica (NIKHEF), Science Park, Amsterdam, The Netherlands
F. Canfora, S. J. de Jong, G. De Mauro, H. Falcke, J. R. Hörandel, M. Pothast & C. Timmermans
Stichting Astronomisch Onderzoek in Nederland (ASTRON), Dwingeloo, The Netherlands
H. Falcke
Faculty of Science, Universiteit van Amsterdam, Amsterdam, The Netherlands
J. Vink
Kapteyn Astronomical Institute, University of Groningen, Groningen, The Netherlands
O. Scholten
Case Western Reserve University, Cleveland, OH, USA
C. E. Covault
Colorado School of Mines, Golden, CO, USA
J. A. Johnsen, K.-D. Merenda, F. Sarazin & L. Wiencke
Department of Physics and Astronomy, Lehman College, City University of New York, Bronx, NY, USA
L. Anchordoqui & J. F. Soriano
Louisiana State University, Baton Rouge, LA, USA
J. Matthews
Michigan Technological University, Houghton, MI, USA
B. Fick, D. Nitz & A. Puyleart
New York University, New York, NY, USA
G. Farrar & M. Muzio
Pennsylvania State University, University Park, PA, USA
M. Mostafá & P. Sommers
University of Chicago, Enrico Fermi Institute, Chicago, IL, USA
J. Farmer, T. Fujii, P. Privitera & R. Šmída
Department of Physics and Astronomy, Bartol Research Institute, University of Delaware, Newark, DE, USA
A. Coleman & F. G. Schröder
Department of Physics and WIPAC, University of Wisconsin-Madison, Madison, WI, USA
L. Lu
Fermi National Accelerator Laboratory, Fermilab, Batavia, IL, USA
C. O. Escobar, N. Fazzini, C. Hojvat, P. Mantsch & P. O. Mazur
Max-Planck-Institut für Radioastronomie, Bonn, Germany
P. L. Biermann
School of Physics and Astronomy, University of Leeds, Leeds, UK
A. A. Watson
Colorado State University, Fort Collins, CO, USA
J. Brack
Hakubi Center for Advanced Research and Graduate School of Science, Kyoto University, Kyoto, Japan
T. Fujii
University of Bucharest, Physics Department, Bucharest, Romania
O. Sima
P. Abreu
M. Aglietta
J. M. Albury
I. Allekotte
A. Almela
J. Alvarez-Muñiz
R. Alves Batista
G. A. Anastasi
L. Anchordoqui
B. Andrada
S. Andringa
C. Aramo
P. R. Araújo Ferreira
H. Asorey
P. Assis
G. Avila
A. Bakalova
A. Balaceanu
F. Barbato
R. J. Barreira Luz
K. H. Becker
C. Berat
M. E. Bertaina
X. Bertou
P. Billoir
V. Binet
K. Bismark
T. Bister
J. Biteau
J. Blazek
C. Bleve
M. Boháčová
D. Boncioli
C. Bonifazi
L. Bonneau Arbeletche
N. Borodai
A. M. Botti
T. Bretz
P. G. Brichetto Orchera
F. L. Briechle
P. Buchholz
A. Bueno
S. Buitink
M. Buscemi
M. Büsken
L. Caccianiga
F. Canfora
I. Caracas
J. M. Carceller
R. Caruso
A. Castellina
F. Catalani
G. Cataldi
L. Cazon
M. Cerda
J. A. Chinellato
J. Chudoba
L. Chytka
R. W. Clay
A. C. Cobos Cerutti
R. Colalillo
A. Coleman
M. R. Coluccia
R. Conceição
A. Condorelli
F. Contreras
F. Convenga
D. Correia dos Santos
K. Daumiller
B. R. Dawson
J. A. Day
R. M. de Almeida
J. de Jesús
S. J. de Jong
G. De Mauro
J. R. T. de Mello Neto
I. De Mitri
D. de Oliveira Franco
F. de Palma
V. de Souza
E. De Vito
M. del Río
O. Deligny
A. Di Matteo
C. Dobrigkeit
J. C. D'Olivo
L. M. Domingues Mendes
D. dos Santos
M. T. Dova
J. Ebr
R. Engel
I. Epicoco
M. Erdmann
C. O. Escobar
A. Etchegoyen
J. Farmer
G. Farrar
A. C. Fauth
N. Fazzini
F. Feldbusch
F. Fenu
B. Fick
J. M. Figueira
A. Filipčič
T. Fitoussi
T. Fodran
M. M. Freire
A. Fuster
C. Galea
C. Galelli
B. García
A. L. Garcia Vegas
H. Gemmeke
F. Gesualdi
A. Gherghel-Lascu
P. L. Ghia
U. Giaccari
M. Giammarchi
J. Glombitza
F. Gobbi
F. Gollan
G. Golup
M. Gómez Berisso
P. F. Gómez Vitale
J. P. Gongora
J. M. González
N. González
I. Goos
D. Góra
A. Gorgi
M. Gottowik
T. D. Grubb
F. Guarino
E. Guido
S. Hahn
P. Hamal
M. R. Hampel
P. Hansen
D. Harari
V. M. Harvey
A. Haungs
T. Hebbeker
D. Heck
G. C. Hill
C. Hojvat
J. R. Hörandel
P. Horvath
M. Hrabovský
T. Huege
A. Insolia
P. Janecek
J. A. Johnsen
J. Jurysek
A. Kääpä
K. H. Kampert
N. Karastathis
B. Keilhauer
J. Kemp
A. Khakurdikar
V. V. Kizakke Covilakam
H. O. Klages
M. Kleifges
J. Kleinfeller
M. Köpke
N. Kunka
R. G. Lang
N. Langner
V. Lenok
A. Letessier-Selvon
I. Lhenry-Yvon
D. Lo Presti
L. Lopes
R. López
Q. Luce
J. P. Lundquist
A. Machado Payeras
G. Mancarella
D. Mandat
B. C. Manning
J. Manshanden
P. Mantsch
S. Marafico
A. G. Mariazzi
I. C. Mariş
D. Martello
S. Martinelli
H. Martinez
O. Martínez Bravo
M. Mastrodicasa
H. J. Mathes
E. Mayotte
P. O. Mazur
G. Medina-Tanco
D. Melo
A. Menshikov
K.-D. Merenda
S. Michal
M. I. Micheletti
L. Miramonti
D. Mockler
S. Mollerach
F. Montanet
C. Morello
M. Mostafá
A. L. Müller
M. A. Muller
K. Mulrey
R. Mussa
M. Muzio
W. M. Namasaka
Combinatorics - past, present and future
Oxford Mathematician Katherine Staden provides a fascinating snapshot of the field of combinatorics, and in particular extremal combinatorics, and the progress that she and her collaborators are making in answering one of its central questions posed by Paul Erdős over sixty years ago.
"Combinatorics is the study of combinatorial structures such as graphs (also called networks), set systems and permutations. A graph is an encoding of relations between objects, so many practical problems can be represented in graph theoretic terms; graphs and their mathematical properties have therefore been very useful in the sciences, linguistics and sociology. But mathematicians are generally concerned with theoretical questions about graphs, which are fascinating objects for their own sake. One of the attractions of combinatorics is the fact that many of its central problems have simple and elegant formulations, requiring only a few basic definitions to be understood. In contrast, the solutions to these problems can require deep insight and the development of novel tools.
A graph $G$ is a collection $V$ of vertices together with a collection $E$ of edges. An edge consists of two vertices. We can represent $G$ graphically by drawing the vertices as points in the plane and drawing a (straight) line between vertices $x$ and $y$ if $x,y$ is an edge.
Extremal graph theory concerns itself with how big or small a graph can be, given that it satisfies certain restrictions. Perhaps the first theorem in this area is due to W. Mantel from 1907, concerning triangles in graphs. A triangle is what you expect it to be: three vertices $x,y,z$ such that every pair $x,y$ and $y,z$ and $z,x$ is an edge. Consider a graph which has some number $n$ of vertices, and these are split into two sets $A$ and $B$ of size $\lfloor n/2\rfloor$, $\lceil n/2\rceil$ respectively. Now add every edge with one vertex in $A$ and one vertex in $B$. This graph, which we call $T_2(n)$, has $|A||B|=\lfloor n^2/4\rfloor$ edges. Also, it does not contain any triangles, because at least two of its vertices would have to both lie in $A$ or in $B$, and there is no edge between such pairs. Mantel proved that if any graph other than $T_2(n)$ has $n$ vertices and at least $\lfloor n^2/4\rfloor$ edges, it must contain a triangle. In other words, $T_2(n)$ is the unique `largest' triangle-free graph on $n$ vertices.
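The construction above is small enough to check by brute force. Below is an illustrative Python sketch (not part of the original article) that builds $T_2(n)$, confirms its edge count equals $\lfloor n^2/4\rfloor$, and verifies that it contains no triangle.

```python
from itertools import combinations

def mantel_graph(n):
    """Build T_2(n): split n vertices into two near-equal parts A and B
    and join every vertex of A to every vertex of B (complete bipartite)."""
    A = set(range(n // 2))
    B = set(range(n // 2, n))
    edges = {frozenset((a, b)) for a in A for b in B}
    return list(range(n)), edges

def count_triangles(vertices, edges):
    """Count vertex triples in which all three pairs are edges."""
    return sum(
        1
        for x, y, z in combinations(vertices, 3)
        if {frozenset((x, y)), frozenset((y, z)), frozenset((x, z))} <= edges
    )

for n in (5, 8, 11):
    V, E = mantel_graph(n)
    assert len(E) == (n * n) // 4          # |A||B| = floor(n^2/4)
    assert count_triangles(V, E) == 0      # T_2(n) is triangle-free
    print(n, len(E), "edges, no triangles")
```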
Following generalisations by P. Turán and H. Rademacher in the 1940s, Hungarian mathematician Paul Erdős thought about quantitatively extending Mantel's theorem in the 1950s. He asked the following: among all graphs with $n$ vertices and some number $e$ of edges, which one has the fewest triangles? Call this minimum number of triangles $t(n,e)$. (One can also think about graphs with the most triangles, but this turns out to be less interesting).
Astoundingly, this seemingly simple question has yet to be fully resolved, 60 years later. Still, in every intervening decade, progress has been made, by Erdős, Goodman, Moon-Moser, Nordhaus-Stewart, Bollobás, Lovász-Simonovits, Fisher and others. Finally, in 2008, Russian mathematician A. Razborov managed to solve the problem asymptotically (meaning to find an approximation $g(e/\binom{n}{2})$ to $t(n,e)$ which is arbitrarily accurate as $n$ gets larger). Razborov showed that, for large $n$, $g(e/\binom{n}{2})$ has a scalloped shape: it is concave between the special points $\frac{1}{2}\binom{n}{2}, \frac{2}{3}\binom{n}{2}, \frac{3}{4}\binom{n}{2}, \ldots$. His solution required him to develop the new method of flag algebras, part of the emerging area of graph limits, which has led to the solution of many longstanding problems in extremal combinatorics.
The remaining piece of the puzzle was to obtain an exact (rather than asymptotic) solution. In recent work with Hong Liu and Oleg Pikhurko at the University of Warwick, I addressed a conjecture of Lovász and Simonovits, the solution of which would answer Erdős's question in a strong form. The conjecture put forward a certain family of $n$-vertex, $e$-edge graphs which are extremal, in the sense that they should each contain the fewest triangles. So in general there is more than one such graph, one aspect which makes the problem hard. Building on ideas of Razborov and Pikhurko-Razborov, we were able to solve the conjecture whenever $e/\binom{n}{2}$ is bounded away from $1$; in other words, as long as $e$ is not too close to its maximum possible value $\binom{n}{2}$.
Our proof spans almost 100 pages and (in contrast to Razborov's analytic proof) is combinatorial in nature, involving a type of stability argument. It would be extremely interesting to close the gap left by our work and thus fully answer Erdős's question."
Revolving captions:
The graph $T_2(n)$, which is the unique largest triangle-free graph on $n$ vertices.
The minimum number of triangles $t(n,e)$ in an $n$-vertex $e$-edge graph plotted against $e/\binom{n}{2}$. This was proved in the pioneering work of A. Razborov.
Making new graphs from old: an illustration of a step in the proof of the exact result by Liu-Pikhurko-Staden.
Your ratios don't prove what you think they prove
Watching people discuss police bias statistics, I despair. Some claim simple calculations prove police bias, some claim the opposite. Who is right?
No one. Frankly, nobody has any clue what they are talking about. It's not that the statistics are wrong exactly. They just don't prove what they're being used to prove. In this post, I want to explain why, and give you the tools to dissect these kinds of claims.
I've made every effort to avoid politics, due to my naive dream where well-meaning people can agree on facts even if they don't agree on policy.
The obvious place to start is to look at the number of people killed by police. This is easy to find.
                                       Black   White   Hispanic
# in US (million)                       41.3   185.5       57.1
# killed by police per year              219     440        169
# killed by police per million people    5.3     2.3        2.9
Does this prove the police are racist? Before you answer, consider a different division of the population.
                                         Men    Women
# in US (million)                      151.9    156.9
# killed by police per year              944       46
# killed by police per million people    6.2     0.29
And here's a third one.
                                       <18 y/o   18–29   30–44     45+
# in US (million)                         72.9    53.6    63.2   137.3
# killed by police per year                 19     283     273     263
# killed by police per million people     0.26     5.2     4.3     1.9
The first table above is often presented as an obvious "smoking gun" that proves police racism with no further discussion needed. But if that were true, then the second would be a smoking gun for police sexism and the third for police ageism. So let's keep discussing.
Of course, the second and third tables have obvious explanations: Men are different from women. The young are different from the old. Because of this, they interact with the police in different ways. Very true! But the following is also true:
                                    Black            White             Hispanic
average height (men)                175.5cm (5'9")   177.4cm (5'10")   169.5cm (5'7")
life expectancy                     74.9 yrs         78.5 yrs          81.8 yrs
mean annual income                  $41.5k           $65.9k            $51.4k
median age                          33 yrs           43 yrs            28 yrs
go to church regularly              65%              53%               45%
children in single-parent homes     65%              24%               41%
identify as LGBT                    4.6%             3.6%              5.4%
live in a large urban area          82%              61%               82%
poverty                             21%              8.1%              17%
men obese                           41%              44%               45%
women obese                         56%              39%               43%
completed high school               87%              93%               66%
completed bachelor's                22%              36%               15%
heavy drinkers                      4.5%             7.1%              5.1%
Maybe it's uncomfortable, but it's a fact: In the US today, there are few traits where there aren't major statistical differences between races. (Of course this doesn't mean these differences are caused by race! This is a good example of why correlation does not imply causation.)
Suppose police were required to wear augmented reality goggles. On those goggles, real-time image processing changes faces so that race is invisible. Would doing this cause police statistics to equalize with respect to race?
No. Even if race is literally invisible, young urban alcoholics will have different experiences with police than old teetotalers on farms. The fraction of these kinds of people varies between races. Thus, racial averages will still look different because of things that are associated with race but aren't race as such.
So despite the thousands of claims to the contrary, just looking at killings as a function of population size doesn't prove bias. Nor does it prove a lack of bias. It really doesn't prove anything.
Why do police kill more men than women? We can't rule out police bias. But surely it's relevant that men and women behave differently? So, it might seem like we should normalize not by population size, but by behavior.
One popular suggestion is to consider the number of arrests:
                                                        Black   White   Hispanic
# arrests for violent crimes per year (thousands)         146     230         83
# killed by police per thousand violent crime arrests     1.4     1.9        1.9
Some claim this proves the police aren't biased, or even that there is bias in favor of blacks. But that's nearly circular logic: If police are biased, that would manifest in arrests as much as killings. So what we are really calculating above is
\[\frac{\text{"Normal" killings + killings due to bias}}{\text{"Normal" arrests + arrests due to bias}}.\]
The ratio doesn't tell you much about how large the bias terms are. So, unfortunately this also doesn't prove anything.
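To make that concrete, here's a toy calculation (invented numbers, not the data above): apply the same multiplicative bias to both killings and arrests, and the killed-per-arrest ratio comes out identical whether the bias is zero or huge.

```python
# Toy illustration with made-up numbers: the killed-per-arrest ratio
# cannot distinguish "no bias" from "bias that inflates killings and
# arrests by the same factor".

def killed_per_thousand_arrests(base_killings, base_arrests, bias_factor):
    """Apply the same multiplicative bias to killings and arrests."""
    killings = base_killings * bias_factor
    arrests = base_arrests * bias_factor
    return 1000 * killings / arrests

no_bias = killed_per_thousand_arrests(200, 150_000, bias_factor=1.0)
heavy_bias = killed_per_thousand_arrests(200, 150_000, bias_factor=1.5)

print(no_bias, heavy_bias)  # both 1.33... -- identical ratios, very different bias
```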
Incidentally: There are some popular but different numbers out there for this same ratio. These have tens of thousands of re-tweets with no one questioning the math. But I've checked the source data carefully, and I'm pretty sure my numbers are right. (They reach the same basic conclusion anyway.)
The police have discretion when deciding to make an arrest. But a dead body either exists or doesn't. So why not normalize by the number of murders committed?
This turns out to be basically impossible:
Something like 40% of murders go unsolved, so the race of the murderer is unknown.
The only real source of murder statistics is the FBI. They treat hispanic/non-hispanic ethnicity as independent of race. Why not just ignore hispanics then? Well, you can't. Hispanics are still counted as white or black in an unknown way. It's impossible to compare to police shooting statistics where hispanic is an alternative race.
In around 31% of cases, the FBI has no information about race, and in 40% of cases, no information about ethnicity.
I've seen tons of articles use this version of the FBI's murder data that simply drops all the cases where data are unknown. None of these articles even acknowledge the issue of missing data or different treatment of hispanics.
Instead, let's look at murder victims. This is counterintuitive, but it's relatively rare for murders to cross racial boundaries (<20%). So this is a non-terrible proxy for the number of murders committed. Data from the CDC separates out black, white, and hispanics in a similar way as police shooting statistics.
                                       Black   White   Hispanic
# murder victims per year              9,908   5,747      3,186
# killed by police per murder victim   0.022   0.076      0.053
So what does this prove? Again, not much. The simple fact is that most police killings are not in the context of a murder or a murder investigation. Though there are exceptions, the precise context of police killings hasn't had enough study, and definitely not enough to get reliable statistics.
Ratios are hopeless
Really, though, it's not an issue of lacking data. Philosophically, consider any possible ratio like
\[\frac{\text{# of people of a race killed by police}}{\text{# of times act } X \text{ committed by a member of a race}}.\]
For what act \(X\) does this really measure police bias? I think it's pretty clear that no such act exists, even if we could measure it. Races vary along too many dimensions. There are too many scenarios for police use of force. Bias interacts with the world in too many ways. You just can't learn anything meaningful with these sort of simplistic high-level statistics.
This doesn't mean we need to give up. It just means you need to get closer and try harder. In the next part of this series I'll look at some valiant attempts to do that. They will disappoint us too, but for different reasons.
Data Used:
Police shootings (average 2017-2019)
Number of people of each race / sex
Number of people by age
Data by race: Life expectancy / Income / Height / Church / Single-parent homes / Identifying LGBT / Median age / School / Drinking / Poverty / Urbanity / Obesity
Arrests for violent crime
Murder victims (p. 43)
This post is part of a series on bias in policing with several posts still to come.
Part 1: Your ratios don't prove what you think they prove (This post)
Part 2: The veil of darkness
Part 3: Policy proposals and what we don't know about them
Part 4: Why fairness is basically unobservable
An improved method in fabrication of smart dual-responsive nanogels for controlled release of doxorubicin and curcumin in HT-29 colon cancer cells
Fatemeh Abedi1,2,
Soodabeh Davaran2,3,
Malak Hekmati1,
Abolfazl Akbarzadeh4,5,
Behzad Baradaran6 &
Sevil Vaghefi Moghaddam2
Combination therapy, which has been proposed as a strategy for cancer treatment, can achieve a synergistic effect and reduce the dosage of the applied drugs. Owing to their unique properties, such as high absorbed water content, biocompatibility, and flexibility, targeting nanogels have been considered a suitable platform. Herein, a non-toxic pH/thermo-responsive hydrogel, P(NIPAAm-co-DMAEMA), was synthesized by free-radical polymerization, characterized, and extended through a simple process to prepare smart responsive nanogels; these nanogels were then used for the efficient, controlled, and simultaneous delivery of the anti-cancer drug doxorubicin (DOX) and the chemosensitizer curcumin (CUR) as a promising strategy for cancer treatment. The size of the prepared nanogels was about 70 nm, which is relatively optimal for exploiting the enhanced permeability and retention (EPR) effect. The DOX and CUR co-loaded nanocarriers were prepared with high encapsulation efficiency (EE), and the controlled drug release behavior of the nanocarriers was also investigated. In vitro cytotoxicity assays showed an enhanced ability of the DOX- and CUR-loaded nanoformulation to induce cell apoptosis in HT-29 colon cancer cells, representing greater antitumor efficacy than the single-drug formulations or the free drugs. Overall, according to the data, the simultaneous delivery of the two drugs through the fabricated nanogels could synergistically potentiate antitumor effects against colon cancer (CC).
Cancer, a general term for a class of widespread diseases characterized by uncontrolled cell multiplication that aggressively metastasizes to other parts of the body, is a prominent cause of death worldwide [1]. Although extensive research has been devoted to stopping cancer over the last decades, there have been relatively few achievements in the field of cancer therapy. Despite some advances in cancer treatment, colon cancer (CC) remains the third most commonly diagnosed cancer worldwide [2]. Among the current cancer treatment methods, which include surgical intervention, chemotherapy, radiotherapy, and combinations of these, conventional chemotherapy is the most established. The mechanism by which chemotherapeutic agents induce apoptosis in rapidly growing cancer cells is usually based on interference with DNA synthesis and mitosis [4]. However, the nonselective action of chemotherapeutic agents toward cancerous and normal healthy tissues causes undesirable side effects that decrease the survival rate of patients. Moreover, due to the poor bioavailability of these agents, high doses are required, which leads to enhanced toxicity to normal cells and multiple drug resistance (MDR). Therefore, the use of single-drug therapy is limited by unacceptable toxicity at high doses and the development of drug resistance [3]. Multi-drug therapy, referring to the co-administration of two or more drugs with different mechanisms of action to the tumor site, could be an efficient strategy to overcome the shortfalls of single-drug therapy [4]. In a multi-drug system, appropriate drug combinations promote a synergistic anti-cancer response through different signaling pathways, enhance therapeutic efficacy, and prevent drug resistance [5]. Despite these positive effects, multi-drug therapy has not affected cancer treatment as desired, owing to low bioavailability and the lack of a targeting strategy, which decrease therapeutic efficacy and increase systemic toxicity. The emergence of nanotechnology, which can deliver anti-cancer agents to the site of action with improved efficacy and minimal toxicity to healthy tissues, has led to the development of nanosystems [1]. A variety of systems has been investigated for the delivery of chemotherapeutic agents, including hydrogels [7], microspheres and nanospheres [8], micelles [9, 10], and liposomes. Through features such as selective delivery of drugs to the tumor environment via the EPR effect, active cellular uptake, extended blood circulation time, and sustained drug release, nanoscale drug carriers can improve treatment efficacy and address the challenges associated with conventional chemotherapeutic agents [6, 7]. Resembling the soft tissue microenvironment of the human body, hydrogels are three-dimensional polymeric structures capable of holding a large fraction of water [8, 9]. They can be designed in the form of continuous macroscopic networks, named macrohydrogels, or discrete particles, named microgels (if their dimensions are above 1 µm) or nanogels (if their dimensions are in the submicrometer range) [10].
Recent studies have demonstrated that nanoscale hydrogels (nanogels) can be an ideal system for the delivery of various chemotherapeutic agents as a result of their unique properties such as excellent biocompatibility, high dispersibility in the aqueous medium, and well-designed structures [11, 12]. Also, the higher swelling capacity of the nanogels in a water medium enhanced their drug loading capacity in comparison with other nanocarriers such as polymeric micelles and liposomes. having great loading space, they enable to encapsulate not only small drug molecules but also huge biomacromolecules such as proteins, DNA, and polypeptides. The higher loading capacity of the nanogels can be ascribed to the self-assembly through the hydrophobic and electrostatic interactions, which is important for keeping the bioactivity of drug molecules and biomacromolecules [13, 14]. In contrast to the rigid nanoparticles, nanogels with a flexible and soft structure are capable of penetrating through the tumor vasculature system, while keeping the bioactivity of the protected therapeutic agents [15]. Furthermore, their flexible properties reduce the probability of their entrapment by macrophages and prolonging their circulating lifetime [16]. More importantly, compared to the other conventional carriers like liposomes and micelles, which are less stable than nanogels, it was proven that nanogels have higher cell uptake efficacy than the other nanocarriers, leading to improvements in the in vivo bioavailability and safety of the chemotherapeutic agents [17, 18]. Among the NGs, biodegradable ones have promising applications in intelligent delivery systems due to their degradability in the cellular microenvironment and adjustable physical properties. The resultant biodegraded materials have reduced in vivo toxicity compare to the nondegradable ones. Also, biodegradable NGs can be functionalized with stimuli-sensitive groups, which enable them to identify desired cells/tissue in vivo and undergo the cleavage of a certain bond triggered by a spatial stimulus, releasing therapeutic agents in a temporally specific manner to represent optimal therapeutic efficacy. Considering above, the stimuli-responsive delivery systems have attracted much attention since they can release their payload in a controllable way if they are triggered by the external stimuli (magnetic field, light, radiofrequency, …) as well as the internal stimuli (pH, temperature, redox, …) [19]. PNIPAAm is the most recognized thermosensitive polymer displaying phase separation at a lower critical solution temperature (LCST) of ∼ 32 °C in aqueous solution [20, 21]. It precipitates as the temperature is raised above its LCST at 32–33 °C, while it is highly water-soluble at low temperatures [20,21,22,23]. The narrow LCST of PNIPAAm prevents it from the potential biomedical application since it is lower than human body temperature. To adjust the LCST of PNIPAAm around the body temperature, it can polymerize with different co-monomers. The controlled release of drugs is another main issue in the stimuli-responsive delivery system that should be solved to acquire good bioavailability and therapeutic outcomes. Diffusion, degradation, and swelling are three important mechanisms of drug release. 
Concentration gradient and hydrolysis of protecting polymer, favor drug release from the carrier in diffusion and degradation mechanism, while drug diffusion as a result of polymer porosity increasing in release fluid, affect the controlled release of drug by the swelling mechanism. To achieve efficient drug release, it's not desirable to use the single responsive polymer due to the complex microenvironment of tumor tissue. In this regard, preparing dual sensitive polymers, capable of drug release in response to external/internal stimuli is favorable. Among them, temperature/pH-responsive hydrogels play an important role in developing intelligent polymeric nanostructures with controlled drug release [22, 24]. One of the biocompatible co-monomers that can be polymerized with PNIPAAm by free-radical polymerization, is N, N′-dimethylamino ethyl methacrylate (DMAEMA), a water-soluble cationic monomer containing pendant tertiary amine groups. Polymerization with DMAEMA import some additional properties to the hydrogels/nanogels, such as induction of drug release triggered by the acidic microenvironment of solid cancer [25, 26]. To explore the potential biomedical applications of nanogels, Duan et al. developed a thermosensitive triple-monomer constructed nanogels P(NIPAAm-DMAEMA-AA) (PNDA) and studied the cytotoxicity of DOX-loaded PNDA nanogels in A549 cells. In almost all the studies involving the combination of poly (NIPAAm) monomer with a co-monomer, N,N-methylenebisacrylamide (MBA) was commonly applied as the crosslinking agent [27]. In a study conducted by Musia et al. the influence of crosslinkers EGDMA, DEGDMA, and TEGDMA on PNIPAAm microsphere's thermosensitivity and morphology were studied. The results indicated that by increasing the crosslinker's chain length the polymeric network was loosened due to the increase in the distance between the polymer chains, which boost the swelling capacity of the polymer and increase free volume accessible to the drugs. Safajou et al. also investigate the effect of crosslinker content on the polymerization kinetics of TEGDMA crosslinked poly(methyl methacrylate) (PMMA) hollow particles by studying the pressure and temperature profiles during the reaction. They found that, the use of higher TEGDMA concentration leads to the higher polymerization rate, a decrease in the gel time, and a higher pressure at the gel point [28]. DOX is an anthracycline antibiotic that has been widely used in clinical cancer therapy [29]. While the efficacy of the DOX is only achieved at very high doses since most of the DOX eliminated from circulation due to its short half-life, the stated antitumor agent binds to DNA and activates biochemical events, causing cell apoptosis. [30]. Dose-dependent cardiac toxicity is a major adverse side effect having limited its clinical applications [31]. On the other hand, CUR is a polyphenolic bioactive compound that can be considered as a safe anticancer agent [32]. It has also many biological activities including antioxidant, anti-viral [33, 34], anti-inflammatory, and antimicrobial [35]. It can overcome multi-drug resistance by downregulation of p-glycoprotein [36], while it suffers from limitations such as low water solubility, fast metabolism, instability, and poor bioavailability [37]. The nanocarriers-based delivery systems can be a good strategy in cancer treatment in order to tackle the problems associated with the DOX and the polyphenol CUR in combination therapy. 
Utilizing the DOX/CUR nanoformulations in cancer therapy can develop sustain drug release, increase the bioavailability of drugs, and reduce the required drug doses. In the previous study of our team, we designed a cellulose-based pH-sensitive nanocarrier and used it for co-delivery of model anti-cancer drug methotrexate (MTX) and CUR to the MCF-7 and MDA-MB-231 breast cancer cell lines. The cytotoxicity studies revealed that CUR as an adjuvant drug could synergize the therapeutic efficacy of the MTX and reduce the required doses of MTX which is a promising result to avoid cytotoxicity of the normal healthy cells [38]. By considering the advantages of multi-drug therapy using CUR as an adjuvant drug, in this work the pH/thermosensitive biocompatible hydrogel, poly (NIPAAm-co-DMAEMA), was prepared and converted to the smart nanogels for the co-delivery of DOX and CUR drugs. The drug-release behavior, intending to improve treatment efficiency was also studied. The fabricated hydrogels and nanogels were characterized in terms of the physicochemical properties, and the anti-tumor efficacy of the dual drug-loaded nanogels using HT-29 colon cancer cells. Furthermore, the apoptotic response and cell growth inhibition treated by the different drug formulations were studied through the cell cycle analysis and DAPI staining.
N-Isopropylacrylamide (NIPAAm), tetraethylene glycol dimethacrylate (TEGDMA), potassium persulfate (PPS), and polyvinyl alcohol (PVA, MW = 89,000) were purchased from Sigma-Aldrich. The N,N-dimethylaminoethyl methacrylate (DMAEMA) monomer was purchased from Merck (Darmstadt, Germany). Curcumin (Merck, Germany) and doxorubicin hydrochloride (Sigma, USA) were used without further purification. Methanol (HPLC grade, Fisher Scientific, UK), dimethyl sulfoxide (DMSO), and dichloromethane (DCM) were obtained from Merck. Phosphate-buffered saline (PBS) and MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) were purchased from Sigma-Aldrich. Fetal bovine serum (FBS) and trypsin–EDTA were purchased from Gibco (Life Technologies, Carlsbad, USA). HT-29 colon carcinoma cells were acquired from the cell bank of the Pasteur Institute of Iran.
Synthesis and characterization of P(NIPAAm-co-DMAEMA)
The pH- and thermosensitive P(NIPAAm-co-DMAEMA) was synthesized by free-radical polymerization [20, 22]. In brief, 3800 mg NIPAAm and 533 mg DMAEMA, along with TEGDMA (2% w/w) as a crosslinker, were dissolved in 660 µl deionized water. Before the polymerization reaction, the flask containing the reagents was purged with nitrogen to completely remove any residual oxygen. After all the reagents were dissolved, the mixture was stirred thoroughly at 70 °C for 30 min in the presence of PPS (10% w/w) to initiate the polymerization. The reaction solution was continuously stirred for 12 h under a nitrogen atmosphere to generate P(NIPAAm-co-DMAEMA). The obtained hydrogel was purified for 72 h by dialysis against distilled water using a dialysis membrane with an MWCO of 12,000. The external aqueous solution was replaced with fresh distilled water twice a day. Finally, the purified hydrogel was frozen and lyophilized to obtain the dried product; the yield of the polymer was 81%.
Preparation of DOX/CUR-Hydrogels/Nanogels (DOX/CUR-HGs/NGs)
In this step, DOX/CUR-hydrogels and DOX/CUR-nanogels were prepared using two different methods of loading DOX and CUR into P(NIPAAm-co-DMAEMA). In both methods, a DOX/CUR feeding ratio of 1:1 was used. These methods include:
Preparation of DOX/CUR-HGs
In this method, the fabrication of DOX/CUR-HGs was conducted according to the previously reported method with a little modification [38, 39]. Briefly, 2.5 ml DOX-HCl (2 mg/ml) was added to the 5 ml solution consisting 100 mg ultrasonically well-dispersed hydrogel in the distilled water and continued to stir for 24 h at room temperature in the dark. To remove physically adsorbed DOX from the surface of the hydrogels, the DOX-loaded hydrogels (DOX-HGs) were centrifuged (9000 rpm, 15 min) and washed by the distilled water [39]. The supernatant was collected and placed in the dark to measure unloaded DOX by using the calibration curve of the drug being placed in supporting information (Additional file 1: Figure S1). Owing to the poor water-solubility of the CUR, its dissolution required to be performed under the sink condition. To improve the water-solubility of the CUR, the surfactant tween 80 and the solvent Methanol (MeOH) were added to the dissolution medium (PBS) with the optimum ratio 1: 17: 83, respectively. For CUR loading, the DOX-HGs were added to 5 ml of 2 mg/ml solution of CUR in the mixture of PBS/MeOH/Tween 80. The mixture was stirred for 24 h under the dark conditions at room temperature to encapsulate the CUR within the DOX-HGs. The hydrogels were collected by centrifugation at 13,000 rpm for 10 min. To remove the physically adsorbed CUR from the surface of nanocomposite polymer, the prepared DOX/CUR-HGs were washed by the distilled water. It should be considered that the supernatant was stored in the dark to evaluate the loading content (LC) of the CUR. Finally, the obtained DOX/CUR-HGs were lyophilized and stored at 4 °C for later use [38]. The single drug-loaded hydrogel was also prepared with the same feeding ratio of DOX and CUR to compare the encapsulation efficacy (EE) of drugs in different formulations.
Preparation of DOX/CUR-NGs
The second approach involved the fabrication of DOX- and CUR-loaded nanogels (DOX/CUR-NGs) via a modified water-in-oil-in-water (W/O/W) emulsion technique. Firstly, 1 ml of DOX solution (2 mg/ml) was added to the oil phase consisting of 5 mg CUR and 50 mg nanogel in 4 ml DCM/DMSO at a ratio of 1:1, followed by homogenizing at 7000 rpm for 3 min to form the W1/O emulsion. Secondly, the obtained W1/O emulsion was added to 50 ml of 0.5% polyvinyl alcohol (PVA) aqueous solution, and the mixture was homogenized again at 15,000 rpm for 10 min to generate the W1/O/W2 emulsion. Finally, the double emulsion was stirred at room temperature for 5 h to evaporate the organic phase (Heidolph Instruments, Hei-VAP Series, Schwabach, Germany). The dual drug-loaded nanogels were collected by centrifugation at 13,000 rpm for 20 min and lyophilized for later use. The supernatant was stored to measure the concentration of the encapsulated drugs using the calibration curves of the drugs provided in the supporting information (Additional file 1: Figure S1).
Encapsulation efficiency (EE) and Loading content (LC)
In the first step, the standard calibration curves of both CUR and DOX were plotted (Additional file 1: Figure S1). As mentioned before, the supernatants were collected to estimate the amount of unloaded drug in the hydrogels and nanogels (nanocarriers) using these calibration curves. The concentration of the unloaded drugs was obtained by inserting their absorbance values, determined by UV–Vis spectroscopy, into the calibration curves. Subtracting the unloaded drug mass from the total fed drug mass gave the loaded drug mass. The encapsulation efficiency (EE%) and loading content (LC%) were calculated by the following equations:
$$\text{EE}\;(\%) = \frac{\text{Amount of loaded drug}}{\text{Total drug}} \times 100$$
$$\text{Amount of loaded drug} = \text{Total drug} - \text{Unloaded drug}$$
$$\text{LC}\;(\%) = \frac{\text{Mass of the loaded drug in the nanocarrier}}{\text{Nanocarrier mass}} \times 100$$
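As an illustration of how these two equations are applied, the following short Python sketch (with hypothetical numbers, not data from this study) computes EE% and LC% from a supernatant measurement.

```python
def encapsulation_metrics(total_drug_mg, unloaded_drug_mg, carrier_mg):
    """Encapsulation efficiency (EE%) and loading content (LC%) from a
    supernatant measurement, following the equations above."""
    loaded = total_drug_mg - unloaded_drug_mg          # loaded drug = total - unloaded
    ee = 100 * loaded / total_drug_mg                  # EE% relative to the fed drug
    lc = 100 * loaded / carrier_mg                     # LC% relative to carrier mass
    return ee, lc

# Hypothetical example: 5 mg of drug fed, 0.9 mg found in the supernatant,
# 50 mg of nanogel used as the carrier.
ee, lc = encapsulation_metrics(total_drug_mg=5.0, unloaded_drug_mg=0.9, carrier_mg=50.0)
print(f"EE = {ee:.1f} %, LC = {lc:.1f} %")   # EE = 82.0 %, LC = 8.2 %
```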
In vitro release study of drugs
In vitro release studies of the drugs from the nanocarriers were carried out using the sample-and-separate (SS) method [40]. Drug release from the nanocarriers was evaluated under sink conditions (83% PBS, 1% Tween 80, and 16% methanol) at two pH values (7.4 and 5.8) and two temperatures (37 °C and 40 °C). In this procedure, 5 mg of nanocarriers was dispersed in 2 ml of release medium and placed in an incubator shaker providing continuous rotation. At fixed time intervals, 1 ml of the release solution was withdrawn from the release medium and centrifuged at 12,000 rpm for 5 min. An equivalent volume of fresh buffer solution was added to the medium to maintain the sink condition during the experiment. The amount of drug released from the nanocarriers was detected by UV–Vis spectrophotometry at the maximum wavelength (λmax) of each drug. The drug concentration in each sample was determined in triplicate. The percentage of drug released from the nanocarriers was calculated by the following equation:
$$M_i = \frac{C_i V_t + \sum C_{i-1} V_i}{t} \times 100$$
where Mi is the cumulative release percentage, Ci is the drug concentration in the release medium at sampling point i, Vt is the total volume of the release medium, Vi is the sample volume, and t is the total drug concentration (µg/ml).
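The summation term in the equation corrects for the drug removed with each earlier sample. The following illustrative Python sketch (hypothetical concentrations, not data from this study) shows how the cumulative release percentage is accumulated point by point in the sample-and-separate procedure.

```python
def cumulative_release(concentrations_ug_ml, v_total_ml, v_sample_ml, total_drug_ug):
    """Cumulative release (%) at each sampling point, correcting for the
    drug withdrawn with every earlier sample (sample-and-separate method)."""
    released = []
    withdrawn = 0.0                       # drug already removed in previous samples (µg)
    for c in concentrations_ug_ml:
        in_medium = c * v_total_ml        # drug currently in the release medium (µg)
        released.append(100 * (in_medium + withdrawn) / total_drug_ug)
        withdrawn += c * v_sample_ml      # this sample takes more drug out of the medium
    return released

# Hypothetical data: 2 ml medium, 1 ml samples, 500 µg drug loaded.
print(cumulative_release([30, 55, 80, 95], v_total_ml=2, v_sample_ml=1, total_drug_ug=500))
# -> [12.0, 28.0, 49.0, 71.0]
```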
Cell culture and evaluation of cytotoxicity
The human colorectal adenocarcinoma cell line (HT-29) was obtained from the National Cell Bank of Iran and cultured in RPMI 1640 medium perfected by antibiotics and FBS in the 25 cm2 culture flask. Cells were incubated for 24 h at 37 °C in damped air containing 5% CO2. When the cells population attained 70% confluency, Trypsin–EDTA was added to the flask and placed for 5 min in the incubator to detached cells. For neutralizing the trypsin, 2 ml FBS was utilized. The harvested cells were centrifuged at 3000 rpm for 8 min. Finally, the cells with fresh culture medium were seeded in 96-well microplates with a cell density of 15 × 103 cells per well and incubated for 48 h at 37 °C with 5% CO2. To evaluate the cytotoxicity of nanocarriers and the antitumor activity of DOX and CUR, the MTT metabolic activity assay at HT-29 cells were used. After two days of incubation, the cells were treated with different concentrations of drug formulations in sterile conditions. For this purpose, the different concentrations of free CUR and CUR-loaded hydrogels (CUR-HGs) (0.01, 0.1, 5, 15, 20, 40 µg/ml), free DOX and DOX-HGs ( 0.1, 5, 15, 20, 40, 60 µg/ml), DOX/CUR-HGs (1, 10, 50, 20, 30, 100 µg/ml) and DOX/CUR-NGs (0.75, 7.5, 15, 22.5, 37.5, 75 µg/ml) were added to the fresh cell culture medium in a 96-well plate and incubated for two days at 37 °C and 5% CO2. The cells were treated with different concentrations of the blank nanocarriers to evaluate the biocompatibility of the nanocarriers,. The untreated cells in the medium were also used as a control with 100% viability. In continue, the culture medium of the incubated plates was replaced by 150 µl fresh PBS followed by 50 µl MTT solution (2 mg/ml) and incubated for 4 h. After that, the culture medium was discarded, 150 µl DMSO was administered into the wells, and placed for 20 min in the incubator. Finally, the absorbance of the individual wells was recorded by using an assay reader (ELISA Reader, Tecan's Sunrise) at a wavelength of 570 nm. The percentage of cell viability was calculated as follows:
$$\text{Cell viability}\;(\%) = \frac{\text{OD of the treated cells}}{\text{OD of control}} \times 100$$
The half-maximal inhibitory concentration (IC50), i.e., the drug concentration that inhibits 50% of cell growth, was calculated using GraphPad Prism 8 (GraphPad Software, Inc., La Jolla, CA). The combination index (CI) values were calculated according to the Chou–Talalay equation [41]:
$$\text{CI}_x = \frac{D_1}{(\text{IC}_x)_1} + \frac{D_2}{(\text{IC}_x)_2}$$
where (ICx)1 and (ICx)2 are the ICx values of the DOX-nanocarriers and CUR-nanocarriers, respectively, and D1 and D2 are the concentrations of DOX and CUR in the dual drug-nanocarriers at the ICx value.
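As a worked illustration of the Chou–Talalay index (hypothetical values, not the measured IC50 data of this study), CI < 1 indicates synergism, CI = 1 an additive effect, and CI > 1 antagonism:

```python
def combination_index(d1, d2, icx1, icx2):
    """Chou-Talalay combination index at effect level x:
    CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism."""
    return d1 / icx1 + d2 / icx2

# Hypothetical example: in the co-loaded nanogel, 4 µg/ml DOX plus 6 µg/ml CUR
# reach the same effect that 10 µg/ml DOX or 20 µg/ml CUR reach alone.
ci = combination_index(d1=4, d2=6, icx1=10, icx2=20)
print(f"CI = {ci:.2f}")   # 0.70 -> synergistic in this made-up example
```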
DAPI staining
To assess nuclear condensation in HT-29 cells treated with the DOX and CUR formulations, DAPI (4′,6-diamidino-2-phenylindole) staining was applied as follows: the cells were seeded onto sterile 96-well microplates at a density of 15 × 10³ cells per well and incubated for 24 h. Following the incubation, the culture medium was replaced with fresh medium containing the free drugs, the blank nanocarriers, or the drug-loaded nanocarriers at concentrations around the IC50, and the cells were incubated again for 48 h. Afterward, the cells were washed with PBS three times, and 1 ml of freshly prepared paraformaldehyde (4% v/v) was used to fix the cells. After incubation for 60 min, the cells were permeabilized by adding 60 µl of 0.1% (v/v) Triton X-100 and incubated for 10 min. The nuclei of the cells were then stained with 1 µg/mL DAPI solution for 10 min. Finally, DNA fragmentation and condensation in apoptotic cells were assessed under a fluorescence microscope (Cytation 5, BioTek, USA) at 400× magnification with excitation at 405 nm for DAPI [26, 42]. The images were processed using ImageJ software [43].
Cell cycle analysis
To assess the efficacy of different drug formulations on the cell cycle progression of HT-29 cells, flow cytometric analysis was performed. The cells were seeded in a 6-well plate at a density of 2 × 105 cells/well and incubated at 37 °C for 24 h. Then, they treated with free drugs and single/dual drug-nanocarriers at doses around their IC50 and incubated for 48 h. After incubation, the cells were trypsinized and centrifuged at 3000 rpm for 10 min. The harvested cells were washed by PBS, fixed with ethanol 75% and stored at − 20 °C. Afterward, the cells were collected by centrifugation and washed twice by PBS. Around 50 µl RNase A (10 µg/ml) was added to resuspended cells in 500 µl PBS and incubated for 30 min. Finally, the cells were collected again by centrifugation, resuspended in a solution composed of PBS, DAPI, and Triton X-100 with the ration 1000:1:1, respectively, and kept in dark for 10 min. The samples were then analyzed in terms of cell distribution in different cell cycle phases using flow cytometer MACSQ Analyzer 10 (Miltenyi Biotec, San Diego, CA) and Flow Jo V10 software. The lowest available flowrate setting was used for analysis. The data was collected using a 408 nm (violet) laser and available detector for this laser including V1 channel with 450/50 nm filter. The results were also demonstrated in the form of a histogram to determine the apoptotic phase and measure the proportion of cells in G0/G1, S, G2/M.
Characterizations of P(NIPAAm-co-DMAEMA)
FT-IR spectroscopy
The chemical structure and functional groups of P(NIPAAm-co-DMAEMA) were characterized by using Fourier transform infrared (FT-IR) spectra (Tensor 270, Bruker, German). The samples were prepared in the form of KBr pellet, a method in which the samples were mixed with the dry potassium bromide (KBr) powders and compressed into the disk form. The spectra of samples were displayed in the wavenumber range of about 400 to 4000 cm−1 at room temperature.
1H NMR spectroscopy
Proton nuclear magnetic resonance (1H NMR) was recorded on a Bruker AVANCE III 400 MHz (Bruker Daltonics Leipzig, Germany) spectrometer using d-dimethyl sulfoxide (DMSO-d6), as the solvent, and tetramethylsilane (TMS), as an internal standard (δ = 0.00). Chemical shifts (δ) were given in part per million (ppm).
TGA analysis
To study the thermal stability of P(NIPAAm-co-DMAEMA), thermogravimetric analysis (TGA) was conducted using the instrument Mettler Toledo TGA/SDTA 851e under N2 atmosphere from 25 to 600 °C at a heating rate of 10 °C min−1. The initial degradation temperature (Ti) and residual mass percent were defined from the TG curve, while maximum thermal degradation temperature (Tmax) was also collected from the DTG peaks maxima.
Field emission scanning electronic microscopy (FESEM)
The morphological properties of the synthesized nanocarriers and drug-loaded nanocarriers were assessed by field emission scanning electron microscopy. For the fabricated nanogels, one drop of the nanogel dispersion was placed on aluminum foil and left to dry. The powder samples of the hydrogels and nanogels were sputter-coated with gold and examined with FESEM instruments (MIRA3 FEG-SEM, Tescan and Hitachi S4160).
Measuring the swelling behavior of P(NIPAAm-co-DMAEMA)
The classical gravimetric method was used to study the dynamic swelling behavior and to measure the swelling ratio of the hydrogels. To reach the equilibrium state, the prepared hydrogel was immersed in distilled water at different temperatures (25, 37, and 40 °C) and two pH values (7.4, 5.8) for 24 and 48 h. The swollen weight of each sample was obtained after removing the excess surface water with filter paper and weighing the sample. The ratio of the solvent weight to the polymer weight in the swollen polymer is known as the equilibrium weight swelling ratio (ESR), which is calculated according to the following equation, taking the average of three measurements for each sample [44, 45].
$$ESR = \frac{W_t - W_d}{W_d}$$
where Wt represents the swollen weight of the sample after the predetermined times and Wd is the dry weight of the sample before swelling.
Dynamic light scattering (DLS) technique
The hydrodynamic diameter (d.nm) and zeta-potential of the hydrogel were obtained at two pH values (7.4 and 5.8) using DLS (Zetasizer Nano ZS90; Malvern Instruments, UK). The hydrogels (100 µg/mL) were dispersed in distilled water and PBS by sonication in an ice bath for 10 min.
Determination of lower critical solution temperature (LCST)
100 mg of P(NIPAAm-co-DMAEMA) was immersed in 5 ml of distilled water and allowed to swell, and the sample was then heated from 25 up to 50 °C. The resulting changes were followed by recording the UV transmittance of the sample as a function of the increasing temperature.
Statistical analyses were conducted using GraphPad Prism version 8 (GraphPad Software, Inc., La Jolla, CA). All tests were performed in triplicate and the results are represented as mean ± standard deviation (SD) for n = 3. Data were analyzed by one-way ANOVA. The level of significance was expressed by the p-value: *p < 0.05 is considered significant, while **p < 0.01, ***p < 0.001, and ****p < 0.0001 are considered highly significant.
Fourier transforms infrared (FTIR) spectroscopy
The co-presence of TEGDMA (crosslinker (, NIPAAm, and DMAEMA within the poly (NIPAAm-co-DMAEMA) polymer network could be characterized by using the FTIR spectrum (Fig. 1b). A signal at 1169 cm−1 was attributed to the stretching vibration of the C-O moiety of the DMAEMA copolymer [46]. Additionally, the strong peaks around 2926 and 1386 cm−1 are related to the aliphatic C-H stretching and bending mode, respectively. The broad absorption band at 1733 cm−1 can be attributed to the stretching vibration of esteric carbonyl (C = O) groups. Two additional peaks around 1649 cm−1, 1549 cm−1 were corresponding to the stretching vibration of C = O groups in amide functional groups and N–H bending vibration of amide groups in NIPAAm, respectively. The broad peak around 3451 cm−1 referred to the N–H stretching vibration of NIPAAm amide groups [47, 48].
Fabrication pathway and functional groups characterization of nanocarriers. a synthetic steps of P(NIPAAm-co-DMAEMA) through free-radical polymerization followed by modified emulsification method. b FT-IR spectrum of P(NIPAAm-co-DMAEMA). c 1H NMR spectra of P(PNIPAAm-co-DMAEMA) in d6-DMSO using a Bruker AVANCE III 400 MHz NMR spectrometer at 298 K. Polymerization conditions were 3.8 g PNIPAAm, 0.335 g DMAEMA and TEGDMA (2% w/w) as a crosslinker at 70 °C in H2O for 12 h. The solvent peak was at 2.5 ppm and the water peak was at 3.35 ppm. They are represented with the asterisk symbol (*)
The chemical structure of the P(NIPAAm-co-DMAEMA) was analyzed by 1H NMR using d6-DMSO as the solvent. The characteristic signals of PNIPAAm moiety were observed at 1.04 ppm (6H, (CH3)2CH), 1.46 ppm (2H, CH2–CH), 1.81 ppm (1H, CH–C = O), 3.84 ppm (1H, N–CH–(CH3)2), and 7.22 ppm (1H, NH-C = O), respectively. Similar analyses were reported by some related works [49, 50]. The chemical shifts related to the DMAEMA segment appeared at 0.89 ppm (3H, C-CH3), 2.08 ppm (2H, CH2-C(CH3)), 2.18 ppm (6H, CH2-N(CH3)2), 3.98 ppm (2H, CH2-O), respectively. The signal of the methylene group connected to the heteroatom N was masked by the solvent (DMSO) signal. The results were in accordance with the previously reported analysis of DMAEMA [51, 52].
Temperature and pH dependence of the equilibrium swelling ratio
To investigate the effect of pH and temperature on the equilibrium of swelling ratio, a certain amount of P(NIPAAm-co-DMAEMA) hydrogel was immersed in distilled water, buffer solutions with two pH values (5.8 and 7.4), as well as different temperatures 25, 37, and 40 °C, respectively. To make PNIPAAm pH-responsive, a weak acid/base can be polymerized with it. Here, the utilized pH-responsive monomer was N, N-dimethyl-aminoethyl methacrylate (DMAEMA) with the pKa around 7.5. Upon the copolymerization of NIPAAm with DMAEMA, the polymeric network became pH-sensitive because of the protonation of the tertiary amine groups of DMAEMA at pH < pKa causing the gel swell as a result of the electrostatic repulsion and an increase in the osmotic pressure. At pH > pKa the polymer network returns to its initial state [53, 54]. It is supposed that the swelling ratio of the P(NIPAAm-co-DMAEMA) is determined by some major factors like hydrophilic/hydrophobic balance in the polymer network, electrostatic repulsion, and ionic strength. According to the Table 1, when the temperature and pH increase, the swelling ratio of the hydrogel decreases dramatically. The polymer is sensitive to the ionic strength of the environment at the low pH where the tertiary amine groups of the DMAEMA are protonated; therefore, in the distilled water with lower ionic strength, there is the highest swelling ratio [55, 56]. In a low pH solution, the NIPAAm moiety of the polymer backbone exhibited slight dehydration of the isopropyl groups leading to the disappearance of some hydrogen bonds between N–H and C = O groups and changing the chains to the extended form. The reduction in the number of hydrogen bonds, accompanied with the electrostatic repulsion of protonated amine groups of DMAEMA, caused to the swelling of hydrogel followed by increasing the possibility of the fluid exchange with the environment. The pH-dependent release of the encapsulated drugs can corollate to the higher swelling rate of the hydrogels in the acidic medium, which may accelerate endosome disruption and enhanced the cytosolic level of drugs. On account of the increase in swelling rate, known as the proton sponge effect, the acidic facilitate drug releases [57]. When the temperature increased above the LCST of the polymer, the pendant NIPAAm chains changed into the global form and representd the hydrophobic behavior. The electrostatic repulsion became the major force, and the swelling of the hydrogel increased more than the temperatures below LCST. The DLS also confirmed the above-mentioned explanation via the hydrodynamic diameter determination of the hydrogels due to the pH and temperature changes, which will further explain in "Morphological characterization" section. In contrast, at physiological pH, the pH-responsive moiety is mostly in the initial state and the electrostatic repulsion between the ammonium groups disappeared. As a result, increasing the temperature above LCST of the fabricated hydrogel, led to the shrinkage of the polymer and decrease the swelling ratio [22].
Table 1 Swelling behavior of synthesized P(NIPAAm-co-DMAEMA) hydrogel in different conditions
Thermogravimetric (TGA) analysis
The thermal stability and degradation behavior of the P(NIPAAm-co-DMAEMA) were investigated by TGA and DTG at 10 °C min−1 under the N2 atmosphere. The results of the TGA curve represent the amount of weight loss by increasing temperature, while the first derivative of the curve (DTG) revealed the corresponding rate of weight loss. The peak of this curve (DTGmax) represents the degradation temperature of the polymer and can be used to compare the thermal stability of the materials. The TGA and DTG curves of the sample showed 16.5% weight loss at temperatures lower than 100 °C which was attributed to water evaporation [58]. As shown in Fig. 2, the degradation process had two maximum degradation rates around 316.17 and 402.1 °C. The lower degradation temperature referred to the thermal decomposition and dissociation of organic functional groups and the carboxyl abstraction process [58, 59], while the main degradation temperature corresponded to the decomposition temperature of P(NIPAAm-co-DMAEMA) hydrogel. Some characteristic temperatures on TGA and DTG curves were presented in Table 2. As can be seen in Fig. 2, the main degradation process occurred in the range of 280–420 °C, corresponding with about 82% weight loss and represent high thermal stability of the nanocomposite in the hyperthermia process. It was evident from the TGA curve that, total weight loss of P(NIPAAm-co-DMAEMA) is about 98%, which can be attributed to the removal of organic functional groups like the hydroxyl group and decomposition of the crosslinked conformation [20].
TGA and DTG thermograms displaying thermal degradation behaviors of a P(NIPAAm-co-DMAEMA)
Table 2 Thermal parameters derived from TGA and DTG data of P(NIPAAm-co-DMAEMA)
Morphological characterization
To study the morphology, size, and structure of the P(NIPAAm-co-DMAEMA), FESEM was performed. The FESEM micrograph of a blank hydrogel, DOX/CUR-HGs, and DOX/CUR-NGs are presented in Fig. 3. The rigid boundaries topology and a slightly larger size in the blank hydrogel compare to the DOX/CUR-nanocarriers are shown in Fig. 3a. The results of DOX/CUR-HGs and DOX/CUR-NGs morphology assessing revealed the uniformity in the size and shape with round topology (Fig. 3b, c). In nanogels, after encapsulation of DOX and CUR by emulsion process, the size of the particles decreased, and dispersion of the particles was improved (Fig. 3c). The emulsification process is created a stable system due to the favorable contact between oil and water phases using a suitable surfactant. The function of the surfactant is to decrease the interfacial tension between water and oils, preventing the coalescence of water droplets, which finally leads to reduce the droplet size of emulsion [68, 69]. As a result, the corresponding diameter distributions of the nanogel decreased significantly compared with hydrogel. Specifically, the average diameter of hydrogels were 604.32 ± 154.34 nm (Fig. 3d), while the average diameter of the nanogels were 113.31 ± 42.43 nm (Fig. 3e).
Morphological characterization of the nanocarriers. FESEM images representing the structure of (a) blank hydrogel, Scale bars represent 10 µm and 1 µm. b DOX/CUR-HGs (c) DOX/CUR-NGs. Scale bars represent 1 µm and 200 nm. d The corresponding diameter distributions of the hydrogels, e The corresponding diameter distributions of nanogels
Evaluation of size and zeta potential by (DLS) technique
The DLS technique was applied to determine the particle size distribution and zeta potential for hydrogel at two pH values (7.4, 5.8) and temperatures (37 °C, 40 °C). The results are shown in Table 3. To evaluate the particle size, the blank and dual drug-loaded hydrogels was dispersed using a probe sonicator (300 w, 20 s). The formation of surface hydration layers and pseudo-clusters caused the sizes obtained by DLS in order to be slightly larger than the particle size measured by FESEM [55 Figs. 4a, b demonstrate the particle size distribution for blank hydrogel and DOX/CUR-HGs in distilled water and room temperature around 994.6 and 689.9 nm, respectively. The mean particle size distribution of the DOX/CUR-HGs is lower than the blank hydrogel probably due to the decrease in the amount of electrostatic repulsion between polymer chains. Since, in physiological pH values (drug loading conditions), electrostatic interaction occurs between functional groups in nanocarriers and drugs, which cause the copolymer chain to shrink [42]. To prove the pH-sensitivity of P(NIPAAm-co-DMAEMA) zeta potential analysis was conducted at pH 7.4 and 5.8 in 37 °C. The results are shown in Table 3. As can be seen, the amount of zeta potential and particle size for the hydrogel at pH 5.8, was 2.53 mV and 618.6 nm, respectively, while at pH 7.4 was -3.45 and 394.5. (Fig. 4c, c'). This evidence may be explained by the protonation of tertiary amine groups on the surface of PDMAEMA at lower pH values and generation of intense electrostatic repulsion, which leads to an increase in the size of particles. Whereas, with increasing pH to 7.4, the zeta potential value for the hydrogel decreased to – 3.45 mV which leads to a reduction in particle size (Fig. 4d, d') [47]. Due to the presence of DMAEMA, the copolymer becomes more hydrophilic and forms more hydrogen bonds between the polymer chains and water molecules in the physiological pH (7.4), which causes a compact hydrogel network (Fig. 4e, f) [47]. It can also be noted that the size distribution was raised at 40 °C and acidic pH to 619.3 nm, while at the same temperature and pH 7.4, it reduced to 236.9 nm. As appraised from the evidence, the results of DLS are complementary to the swelling section.
Table 3 Physicochemical properties of synthesized hydrogels in different conditions
Hydrodynamic size distribution and zeta potential of the synthesized P(NIPAAm-co-DMAEMA) hydrogels under different conditions. a Size distribution of the hydrogel in distilled water at room temperature. b Size distribution of the dual drug-loaded hydrogels in distilled water at room temperature. c, c' Size distribution and zeta potential of the hydrogel at pH 5.8, 37 °C; d, d' at pH 7.4, 37 °C. e Size distribution of the hydrogel at pH 7.4, 40 °C; f at pH 5.8, 40 °C
Investigation of the LCST of the nanocomposite by UV–Vis spectroscopy
PNIPAAm is introduced as a thermo-responsive moiety in the polymer backbone, which creates opportunities for biomedical applications [60]. A thermosensitive polymer shows pronounced hydration-dehydration changes in aqueous solution near the LCST, where it simultaneously undergoes a volume phase transition and volume collapse [61]. The LCST of the PNIPAAm hydrogel can be modulated by feeding the polymeric network with the DMAEMA monomer. The resulting P(NIPAAm-co-DMAEMA) is more hydrophilic because of the increased number of hydrogen bonds with water molecules, which demand more energy to destabilize the prepared hydrogel and therefore shift the LCST to a higher temperature [62]. The LCST of the sample can be determined by monitoring the reduction of the UV–Vis transmittance upon heating the sample up to 40 °C, which is a rather sudden phenomenon. As depicted in Fig. 5a, b, at 25 °C the sample is completely transparent, with a high transmittance percentage; at 40 °C, however, it becomes turbid and the transmittance percentage decreases. The transmittance percentages were recorded as a function of temperature. According to previous reports [63,64,65], the LCST of PNIPAAm lies in the range of 32–37 °C. As can be seen in Fig. 5c, the LCST of P(NIPAAm-co-DMAEMA) was obtained in the range of 39–40 °C. The DLS studies further confirm the thermo-sensitivity of the hydrogel. Thus, at 25 °C the hydrogel particle size increases to 994.6 nm because the PNIPAAm branches unfold and change into random coils as a result of hydrogen bonding with water molecules. When the temperature is raised above the LCST (40 °C), the particle size decreases to 689.9 nm because the weakening of the intermolecular hydrogen bonds with water releases water and strengthens the intramolecular hydrogen bonds. As a result, the hydrogel network precipitates as a solid gel out of solution [66].
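As a minimal numerical illustration of this transmittance-based reading of the LCST, the Python sketch below interpolates the temperature at which the transmittance drops to half of its initial value; the temperature–transmittance pairs and the 50 % criterion are illustrative assumptions rather than the measured data of this study.

```python
import numpy as np

# Hypothetical transmittance readings (%) of the copolymer solution recorded
# while heating; temperatures and values are illustrative only.
temperature_C = np.array([25.0, 30.0, 34.0, 37.0, 38.0, 39.0, 40.0, 41.0, 42.0])
transmittance = np.array([98.0, 97.0, 95.0, 90.0, 80.0, 55.0, 25.0, 12.0, 8.0])

# One common convention (assumed here) reads the LCST as the temperature at
# which transmittance falls to 50 % of its initial value.
threshold = 0.5 * transmittance[0]

# Find the two readings that bracket the threshold and interpolate linearly.
last_above = np.where(transmittance >= threshold)[0][-1]
t_hi, t_lo = transmittance[last_above], transmittance[last_above + 1]
T_hi, T_lo = temperature_C[last_above], temperature_C[last_above + 1]
lcst = T_hi + (t_hi - threshold) / (t_hi - t_lo) * (T_lo - T_hi)
print(f"Estimated LCST = {lcst:.1f} °C")
```

With these placeholder readings the interpolation lands near 39 °C, in line with the 39–40 °C range read from Fig. 5c.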
A visual illustration of P(NIPAAm-co-DMAEMA) aqueous solution and LCST determination using the UV–Vis spectrum. a, b P(NIPAAm-co-DMAEMA) aqueous solution images below and above LCST, respectively. c LCST determination by UV–Vis spectrum
Assessing the encapsulation efficiency of DOX and CUR
In this study, both a hydrophilic and a hydrophobic drug, DOX and CUR respectively, were incorporated into P(NIPAAm-co-DMAEMA) by two methods. Drug loading in the prepared hydrogel occurs through physical entrapment, which can be related to electrostatic and hydrophobic interactions between the polymer chains and the drug molecules [31]. In the first approach, the hydrogel is allowed to swell in the drug solution. The swelling property allows the carrier to absorb a large amount of solution. Finally, the dual drug-loaded hydrogels were obtained after freeze-drying. In the second method, illustrated in Fig. 6, the hydrophilic drug (DOX) is dissolved in an aqueous phase, termed the internal phase, and emulsified into an oily phase that contains the polymer and the hydrophobic drug (CUR). The obtained emulsion is then emulsified again into an aqueous solution of PVA, known as the external phase [67, 68]. Owing to the osmotic gradient, a thermodynamically driven diffusive exchange of water and oil between the internal and external phases occurs through the surfactants at the water–oil interface, which can lead to the formation of a simple emulsion or even the disappearance of the multiple globules [69]. It also causes swelling or shrinkage of the inner droplets, followed by rupture of the oily layer [70, 71]. This effectively neutralizes the diffusive driving force for hydrophilic drugs to escape from the nanoparticle and offers the possibility of additional loading via surface adsorption or diffusion of both hydrophilic and hydrophobic drugs into the nanoparticle [68, 70]. Multiple emulsions are widely used as templates to prepare nanometric carriers with encapsulated anti-cancer drugs. Herein, we modified the traditional double-emulsion, solvent-evaporation method to encapsulate DOX and CUR in nanocarriers using multiple external water phases (Fig. 6). After entrapment of the drugs, the loading content and release behavior were investigated using a UV–Vis spectrophotometer. Standard calibration curves of both drugs, at 480 nm for DOX and 420 nm for CUR, at the two pH values (7.4, 5.8) are provided in the supporting information, and linear fits of the standard curves for both DOX and CUR were used to quantify drug loading. As shown in Table 4, the amounts of DOX and CUR in the single/dual drug-loaded hydrogels are comparable with those in the DOX/CUR-NGs. The encapsulation efficiency (EE%) and loading content (LC%) of all drug formulations are presented in Table 4.
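A minimal sketch of how EE% and LC% can be obtained from such a linear calibration curve is given below; the indirect supernatant method, the EE%/LC% definitions, and all numerical values are assumptions for illustration, and the authors' exact procedure is the one described in the supporting information.

```python
import numpy as np

# Hypothetical calibration data: absorbance at 480 nm of DOX standards (µg/ml).
std_conc = np.array([2.5, 5.0, 10.0, 20.0, 40.0])      # µg/ml
std_abs  = np.array([0.06, 0.12, 0.24, 0.49, 0.97])

slope, intercept = np.polyfit(std_conc, std_abs, 1)     # linear fit A = m*c + b

def conc_from_abs(a):
    return (a - intercept) / slope                      # µg/ml

# Indirect quantification (an assumed procedure): unencapsulated drug left in
# the supernatant after loading is measured; the remainder counts as loaded.
drug_added_ug   = 1000.0      # total DOX offered to the carrier
supernatant_ml  = 10.0
abs_supernatant = 0.35
free_ug   = conc_from_abs(abs_supernatant) * supernatant_ml
loaded_ug = drug_added_ug - free_ug
carrier_mass_ug = 10000.0     # dry mass of the (nano)gel

EE = 100.0 * loaded_ug / drug_added_ug     # encapsulation efficiency, %
LC = 100.0 * loaded_ug / carrier_mass_ug   # loading content, %
print(f"EE = {EE:.1f} %, LC = {LC:.1f} %")
```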
Schematic illustration of nanoparticle fabrication method. The double emulsion, solvent evaporation method was used to manufacture nanogels with encapsulated DOX and CUR drugs
Table 4 Encapsulation efficiency and loading content of the drug formulations
In Vitro release study of drugs from the carriers
Drug release from DOX/CUR-HGs
To investigate the dual pH/thermo-responsive property of the nanocarriers, release studies of the drugs from single/dual drug-loaded hydrogels (Fig. 7a–d) and DOX/CUR-NGs (Fig. 7e, f) were conducted at pH values of 5.8 and 7.4 and at 37 °C and 40 °C. The amount of DOX released from the single drug-loaded hydrogels was quite low: around 42% of the DOX was released at pH 5.8 and 40 °C over the 20-day study, which was nevertheless significantly higher than at pH 7.4 and 37 °C. In contrast, the release of DOX from the dual drug formulation reached 98% at pH 5.8 and 40 °C, while on decreasing the temperature to 37 °C at the same pH the released fraction decreased to 70–80% after 7 days. In general, it can be concluded that the release of DOX from the co-delivery system was higher than from the single drug delivery system. As shown in Fig. 7c, the release of CUR from CUR-HGs at pH 7.4 and 37 °C was slow and reached only 28%; in contrast, its release was fast at 40 °C under acidic conditions (around 49% after 7 days). Figure 7d shows the rapid release of CUR from DOX/CUR-HGs at 40 °C and lysosomal pH (5.8), reaching 80% after 48 h, whereas at physiological pH (7.4) and 37 °C only 60% of the drug was released from the nanocarrier. In this study, we aimed to make the drug formulations pH/thermo-sensitive in order to reduce adverse side effects on normal cells and thereby increase the toxic effect against malignant cells [72]. Acidic pH and elevated temperature caused a higher release rate of the drugs, as observed in the release behavior of DOX and CUR. The pH-responsive property of the nanocarrier depends on the degree of ionization of the drug-polymer complex under different pH conditions. At acidic pH, the carboxylate and amine groups of the prepared hydrogels are protonated; the zeta potential study further confirmed the positive charge of the hydrogels at acidic pH. Protonation of the DOX amine groups and the hydrogel carboxylate groups eliminates the hydrogen bond between them and accelerates DOX release under acidic conditions. In addition, protonation of the CUR enolate groups (pKa1 7.4) under acidic conditions promotes the release of CUR from the nanocarriers [73]. The release of the drugs at 40 °C is attributed to the aggregation of the PNIPAAm branches as a result of enhanced intramolecular hydrogen bonds, which loosens the intermolecular hydrogen bonds with the drugs [21].
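For completeness, the sketch below shows one common way to turn sampled UV–Vis concentrations into cumulative release percentages while correcting for the drug withdrawn (and medium replaced) at each sampling point; the correction rule, volumes, and concentrations are assumptions, not the protocol used in this study.

```python
import numpy as np

def cumulative_release(conc_samples_ug_ml, V_total_ml, V_sample_ml, dose_ug):
    """Cumulative % released, correcting for drug removed at each sampling
    and replaced with fresh medium (a common convention, assumed here)."""
    released = []
    removed = 0.0
    for c in conc_samples_ug_ml:
        in_medium = c * V_total_ml          # drug currently in the release medium
        cumulative = in_medium + removed    # plus what was withdrawn earlier
        released.append(100.0 * cumulative / dose_ug)
        removed += c * V_sample_ml          # drug taken out with this sample
    return np.array(released)

# Illustrative concentrations measured by UV-Vis at successive time points
conc = [5.0, 12.0, 20.0, 26.0, 30.0]        # µg/ml
print(cumulative_release(conc, V_total_ml=50.0, V_sample_ml=2.0, dose_ug=2000.0))
```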
Cumulative in vitro release profiles of the loaded drugs under various conditions at two pH values (7.4 and 5.8) and two temperatures (37 °C and 40 °C). a The release profile of DOX from DOX-HGs, b the release profile of DOX from DOX/CUR-HGs, c the release profile of CUR from CUR-HGs, d the release profile of CUR from DOX/CUR-HGs, e, f the release profiles of DOX and CUR from DOX/CUR-NGs
Drug release from DOX/CUR-NGs
The release rate of the drugs from DOX/CUR-NGs is faster than from DOX/CUR-HGs, so the release profiles of the nanogels (Fig. 7e, f) were examined over 48 h. As depicted in Fig. 7, the releases of DOX and CUR from the dual drug-loaded nanogels were more efficient than from the hydrogel nanocarriers: at pH 5.8 and 40 °C, the cumulative release of both drugs from DOX/CUR-NGs reached 99%, after 48 h for DOX and 24 h for CUR. However, at the same temperature and physiological pH, the cumulative release percentages of DOX and CUR were 76% and 60%, respectively. The faster drug release at pH 5.8 than at pH 7.4 can be explained by the proton sponge effect of the DMAEMA content of the polymer, which is explained in detail in the "1H NMR spectroscopy" section. As reported previously, the size of particles carrying bioactive molecules such as anticancer drugs significantly influences their biopharmaceutical properties. The release profile is one of these properties: a size distribution in the nanometer range can enhance the release kinetics through the increase in surface area. Accordingly, it took a maximum of 48 h for DOX/CUR-NGs to release both DOX and CUR, while the release from DOX/CUR-HGs occurred in a sustained manner over 168 h, reaching 90% of the payload, which further confirms the effect of the particle size distribution on the release profiles.
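The surface-area argument can be made concrete with the mean diameters measured by FESEM: treating the particles as spheres, the specific surface area scales inversely with diameter, as the short calculation below shows (a geometric approximation only).

```python
# Specific surface area of a sphere scales as 6/d, so the mean diameters
# measured by FESEM (Fig. 3d, e) imply a much larger surface-to-volume
# ratio for the nanogels, consistent with their faster release kinetics.
d_hydrogel_nm = 604.32
d_nanogel_nm  = 113.31
ratio = d_hydrogel_nm / d_nanogel_nm     # = (6/d_NG) / (6/d_HG)
print(f"Nanogels expose ~{ratio:.1f}x more surface area per unit volume")
```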
In vitro cytotoxicity assay
To verify the biocompatibility and non-toxicity of the blank nanocarrier and the anti-tumor efficacy of DOX and CUR in different formulations, the MTT assay was conducted for 48 h in HT-29 colon cancer cells. As shown in Fig. 8a, the dose-dependent cytotoxicity of cells treated with the unloaded hydrogel was evaluated in the concentration range of 5 to 500 µg/ml. The maximum viability was observed at 100 µg/ml, while increasing the concentration up to 500 µg/ml slightly decreased the viability, suggesting the biocompatibility of the fabricated hydrogel and its potential application as a drug delivery system [74]. As can be seen in Fig. 8c, the single drug-loaded nanocarriers have a stronger cytotoxic effect than the free drugs, which suggests that the nanocarriers contribute to decreasing the viability of the cancer cells, along with their ability to increase drug internalization through the endocytic process. As depicted in Fig. 8b, the IC50 values of the dual drug-loaded nanocarriers were much lower than those of the free drugs and single drug-loaded nanocarriers, representing the enhanced cytotoxic effect of CUR and DOX in combination with each other. The IC50 of the DOX-nanocarriers in HT-29 cells was 22.03, while when the cells were treated with CUR along with DOX it decreased to 7.179 and 2.346 for DOX/CUR-HGs and DOX/CUR-NGs, respectively. These results indicate that CUR could synergize the therapeutic efficacy of the anti-cancer drug DOX via induction of apoptotic cell death. To confirm the synergistic effect of the dual drug-loaded hydrogel, the half-maximal inhibitory concentrations (IC50) of DOX and CUR in the hydrogel, the combination index (CI) of DOX/CUR-HGs with a 1:1 mass ratio of DOX:CUR, and their cytotoxicities were compared. The combination index (CI) is a critical indicator for assessing the interactions among multiple drugs: values < 1, = 1, and > 1 suggest synergistic, additive, and antagonistic effects, respectively. A CI value of 0.5 was calculated for DOX and CUR in HT-29 cells, indicating a synergistic effect of the drugs. The cell viability for all drug formulations at different drug concentrations showed a clearly dose-dependent pattern after 48 h of treatment. CUR was applied as an active agent along with the chemotherapeutic drug DOX to produce a synergistic effect against HT-29 cancer cells. The improved cytotoxicity may be caused by the simultaneous release of DOX and CUR from the nanocarriers after internalization into the cancer cells and by enhanced accumulation within the tumor site [75]. According to the in vitro cytotoxicity results, the therapeutic efficacy of DOX and CUR in the nanogel formulation is more synergistically enhanced, i.e., it shows higher cytotoxicity compared with the hydrogel formulation [8, 44].
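The combination index used above follows the Chou–Talalay formulation cited in the methods literature (Chou 2006); the sketch below illustrates the two-drug CI calculation with placeholder doses, not the raw dose-response data of this study.

```python
def combination_index(d1_combo, d2_combo, d1_alone, d2_alone):
    """Chou-Talalay combination index for two drugs at a given effect level:
    CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism."""
    return d1_combo / d1_alone + d2_combo / d2_alone

# Placeholder doses (µg/ml) achieving the same inhibition level; these are
# illustrative values, not the measured IC50 data of the study.
ci = combination_index(d1_combo=3.6, d2_combo=3.6,   # DOX and CUR in the 1:1 gel
                       d1_alone=22.0, d2_alone=18.0) # each drug delivered alone
print(f"CI = {ci:.2f}")   # < 1 would indicate synergism
```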
Cytotoxicity of DOX and CUR formulations in HT-29 cancer cells. a Cell viability of HT-29 cells after treatment with different doses of the non-drug-loaded nanocarriers. b The IC50 comparison of the different drug formulations in HT-29 cells. c Cell viability of HT-29 cells after being exposed to different doses of free drugs, single drug-loaded hydrogel, DOX/CUR-HGs, and DOX/CUR-NGs. Comparison among groups was conducted by one-way ANOVA, *p < 0.05, **p < 0.01, ***p < 0.001
Study of induced apoptosis using DAPI staining
Morphological alterations induced by apoptosis in HT-29 cells were examined by DAPI staining. The chromatin morphological changes and the density of the nuclei were observed by fluorescence microscopy after 48 h of treatment. The morphological changes in cells treated with the different drug formulations were compared with the morphology of untreated cells (control group). As depicted in Fig. 9a, b, the cells treated with free DOX and free CUR showed almost the same morphological changes and only slight signs of apoptosis. In contrast, according to Fig. 9d, e, the cells treated with single drug-loaded nanocarriers underwent chromatin condensation and nuclear fragmentation with greater intensity. Furthermore, the dual drug formulations induced more significant changes, including chromatin condensation, cell shrinkage, and strong fragmentation of the nuclei, at the same dosages than the free drugs and single drug-loaded nanocarriers (Fig. 9f, g). Finally, it is noteworthy that the combination of DOX and CUR in nanocarriers, with its synergistic therapeutic effect, had a more pronounced influence on the morphological changes of HT-29 cells than the other drug formulations, which can be related to the enhanced cytotoxicity of the combination formulations.
Nuclear morphology changes and apoptotic cell proportion in HT-29 cells. Fluorescence microscopy images of nuclear morphology in HT-29 cells after 48 h exposure to a untreated cells (control), b free DOX, c free CUR, d DOX-HGs, e CUR-HGs, f DOX/CUR-NGs, g DOX/CUR-HGs. h The percentages of apoptotic cell death in HT-29 cells after exposure to free drugs and single/dual drug-loaded nanocarriers. To determine the proportion of apoptotic cells, more than 100 stained cells were counted. As depicted in the diagram, dual drug-loaded nanocarriers induced highly significant apoptosis (p < 0.001) in comparison to single drug-loaded nanocarriers. The images were processed using ImageJ software [43]. Comparison among groups was conducted by one-way ANOVA, *p < 0.05, **p < 0.01, ***p < 0.001
The combined effects of DOX and CUR on cell cycle distribution
Flow cytometry, applied in cell cycle blocking studies using DNA staining, indicates the percentage of cells in each phase of the cell cycle [76]. Herein, cell cycle analysis was conducted to investigate the cell cycle distribution of HT-29 cells after treatment with different formulations of DOX and CUR. The drug formulations induce apoptosis via different pathways, arresting cells within distinct phases of the cell cycle [29, 77]. Both drugs induce the accumulation of HT-29 cells in the G2/M phase [78,79,80]. Controlling the growth and proliferation of cancer cells at the G2/M transition could provide a useful checkpoint in cell cycle progression and facilitate their apoptotic death [80]. As depicted in Fig. 10a, in the cells treated with the free CUR and CUR-HGs formulations, the percentage of cells in the G0-G1 phase (10.3% and 1.39%, respectively) decreased in comparison with the untreated cells (66.4%), while the percentage of cells in the G2/M phase (66.2% and 95.4%, respectively) increased compared to the untreated cells (18.7%). Similarly, for the cells treated with the free DOX and DOX-HGs formulations, the percentage of cells in the G0-G1 phase (4.72% and 6.33%, respectively) decreased, whereas the percentage of cells in the G2/M phase (81.5% and 60.4%, respectively) increased compared with the untreated cells. Interestingly, the HT-29 cells treated with DOX/CUR-HGs and DOX/CUR-NGs for 48 h showed an increase in the percentage of cells in the G2/M phase (57.2% and 63.1%, respectively) and a decrease in the percentage in the G0-G1 phase (7.25% and 6.57%, respectively), which is consistent with previous studies describing DOX and CUR as agents that arrest cell cycle progression in the G2/M phase.
Cell cycle arrest analysis of HT-29 cells treated with different formulations of DOX and CUR. a Flow cytometry evaluation of DNA content in HT-29 cells after incubation with various formulations of drugs including free drugs, single/dual drug loaded-hydrogels, and nanogels for 48 h in concentrations around their IC50 values. b The proportion of cell cycle phase (%) and DNA distribution percentages in different cell cycle phases (subG1, G0-G1, S, and G2/M) for various formulations after DAPI staining in HT-29 cells. One-way ANOVA, followed by Tukey's HSD analysis, was used to determine p-values for different phases of the cell cycle. The difference was considered significant at *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001
In this work, smart nanogels based on P(NIPAAm-co-DMAEMA) were successfully developed and studied for the controlled and efficient delivery of two model drugs, DOX and CUR, to HT-29 colon cancer cells. The resulting delivery system was characterized and shown to have the desired structure. The advantages of such nanogel systems include their simplicity of formulation, their swelling and collapse properties, optimal loading capacity, and efficient release of drugs. The fabricated nanogels were used as pH/thermo-responsive carriers that exhibited an LCST around 40 °C. The in vitro release studies showed that the nanocarriers released their payload in an acid- and temperature-facilitated manner, so that the acidic pH and elevated temperature of cancer cells promoted the release of the drugs from the nanocarrier. The results of the cytotoxicity study revealed that DOX and CUR could synergistically induce apoptosis in HT-29 colon cancer cells. Moreover, cell cycle analysis and DAPI staining studies confirmed the successful induction of apoptosis by the dual drug-loaded nanocarriers. In summary, the resulting smart nanogels could serve as a suitable candidate for the simultaneous delivery of hydrophilic and hydrophobic drugs and could achieve efficient therapeutic activity in combined cancer therapy.
The data required to reproduce these findings are available for any research.
Asghar K, et al. Investigation on a smart nanocarrier with a mesoporous magnetic core and thermo-responsive shell for co-delivery of doxorubicin and curcumin: a new approach towards combination therapy of cancer. RSC Adv. 2017;7(46):28802–18.
Gulbake A, et al. Insight to drug delivery aspects for colorectal cancer. World J Gastroenterol. 2016;22(2):582.
Lehár J, et al. Synergistic drug combinations tend to improve therapeutically relevant selectivity. Nat Biotechnol. 2009;27(7):659–66.
Mi Y, Zhao J, Feng S-S. Targeted co-delivery of docetaxel, cisplatin and herceptin by vitamin E TPGS-cisplatin prodrug nanoparticles for multimodality treatment of cancer. J Control Release. 2013;169(3):185–92.
Al-Lazikani B, Banerji U, Workman P. Combinatorial drug therapy for cancer in the post-genomic era. Nat Biotechnol. 2012;30(7):679–92.
Fang J, Nakamura H, Maeda H. The EPR effect: unique features of tumor blood vessels for drug delivery, factors involved, and limitations and augmentation of the effect. Adv Drug Deliv Rev. 2011;63(3):136–51.
Gupta M, Agrawal GP, Vyas SP. Polymeric nanomedicines as a promising vehicle for solid tumor therapy and targeting. Curr Mol Med. 2013;13(1):179–204.
Mishra B, et al. Hydrogels: an introduction to a controlled drug delivery device, synthesis and application in drug delivery and tissue engineering. Austin J Biomed Eng. 2017;4:1037.
Ahmed EM. Hydrogel: preparation, characterization, and applications: a review. J Adv Res. 2015;6(2):105–21.
Oh JK, et al. The development of microgels/nanogels for drug delivery applications. Prog Polym Sci. 2008;33(4):448–77.
Larsson M, et al. Nanocomposites of polyacrylic acid nanogels and biodegradable polyhydroxybutyrate for bone regeneration and drug delivery. J Nanomat. 2014. https://doi.org/10.1155/2014/371307.
Kabanov AV, Vinogradov SV. Nanogels as pharmaceutical carriers: finite networks of infinite capabilities. Angew Chem Int Ed. 2009;48(30):5418–29.
Neamtu I, et al. Basic concepts and recent advances in nanogels as carriers for medical applications. Drug Delivery. 2017;24(1):539–57.
Vinogradov SV. Polymeric nanogel formulations of nucleoside analogs. Expert Opin Drug Deliv. 2007;4(1):5–17.
Choi WI, et al. Efficient skin permeation of soluble proteins via flexible and functional nano-carrier. J Control Release. 2012;157(2):272–8.
Beningo KA, Wang Y-L. Fc-receptor-mediated phagocytosis is regulated by mechanical properties of the target. J Cell Sci. 2002;115(4):849–56.
Hasegawa U, et al. Nanogel-quantum dot hybrid nanoparticles for live cell imaging. Biochem Biophys Res Commun. 2005;331(4):917–21.
Ahmad Z, et al. Pharmacokinetic and pharmacodynamic behaviour of antitubercular drugs encapsulated in alginate nanoparticles at two doses. Int J Antimicrob Agents. 2006;27(5):409–16.
Cheng R, et al. Dual and multi-stimuli responsive polymeric nanoparticles for programmed site-specific drug delivery. Biomaterials. 2013;34(14):3647–57.
Peng C-L, et al. Development of thermosensitive poly (n-isopropylacrylamide-co-((2-dimethylamino) ethyl methacrylate))-based nanoparticles for controlled drug release. Nanotechnology. 2011;22(26):265608.
Motaali S, et al. Synthesis and characterization of smart N-isopropylacrylamide-based magnetic nanocomposites containing doxorubicin anti-cancer drug. Artificial Cells Nanomed Biotechnol. 2017;45(3):560–7.
Wang B, et al. Synthesis and properties of pH and temperature sensitive P (NIPAAm-co-DMAEMA) hydrogels. Colloids Surf B. 2008;64(1):34–41.
Hinrichs W, et al. Thermosensitive polymers as carriers for DNA delivery. J Control Release. 1999;60(2–3):249–59.
Orakdogen N. Design and synthesis of dual-responsive hydrogels based on N, N-dimethylaminoethyl methacrylate by copolymerization with N-isopropylacrylamide. Macromol Res. 2014;22(1):32–41.
Keddie DJ. A guide to the synthesis of block copolymers using reversible-addition fragmentation chain transfer (RAFT) polymerization. Chem Soc Rev. 2014;43(2):496–505.
Davaran S, et al. Novel dual stimuli-responsive ABC triblock copolymer: RAFT synthesis, "schizophrenic" micellization, and its performance as an anticancer drug delivery nanosystem. J Colloid Interface Sci. 2017;488:282–93.
Jovančić P, Vílchez A, Molina R. Synthesis of thermo-sensitive hydrogels from free radical copolymerization of NIPAAm with MBA initiated by atmospheric plasma treatment. Plasma Processes Polym. 2016;13(7):752–60.
Safajou-Jahankhanemlou M, Abbasi F, Salami-Kalajahi M. Synthesis and characterization of thermally expandable PMMA-based microcapsules with different cross-linking density. Colloid Polym Sci. 2016;294(6):1055–64.
Misra R, Sahoo SK. Coformulation of doxorubicin and curcumin in poly (D, L-lactide-co-glycolide) nanoparticles suppresses the development of multidrug resistance in K562 cells. Mol Pharm. 2011;8(3):852–66.
Sesarman A, et al. Co-delivery of curcumin and doxorubicin in PEGylated liposomes favored the antineoplastic C26 murine colon carcinoma microenvironment. Drug Delivery Translat Res. 2019;9(1):260–72.
Mizuta Y, et al. Sodium thiosulfate prevents doxorubicin-induced DNA damage and apoptosis in cardiomyocytes in mice. Life Sci. 2020;257:118074.
Zhao X, et al. Codelivery of doxorubicin and curcumin with lipid nanoparticles results in improved efficacy of chemotherapy in liver cancer. Int J Nanomed. 2015;10:257.
Shang Y-J, et al. Antioxidant capacity of curcumin-directed analogues: structure–activity relationship and influence of microenvironment. Food Chem. 2010;119(4):1435–42.
Chen Y, et al. Preparation of curcumin-loaded liposomes and evaluation of their skin permeation and pharmacodynamics. Molecules. 2012;17(5):5972–87.
Wilken R, et al. Curcumin: a review of anti-cancer properties and therapeutic activity in head and neck squamous cell carcinoma. Mol Cancer. 2011;10(1):12.
Um Y, et al. Synthesis of curcumin mimics with multidrug resistance reversal activities. Bioorg Med Chem. 2008;16(7):3608–15.
He S, et al. Single-stimulus dual-drug sensitive nanoplatform for enhanced photoactivated therapy. Biomacromol. 2016;17(6):2120–7.
Moghaddam SV, et al. Lysine-embedded cellulose-based nanosystem for efficient dual-delivery of chemotherapeutics in combination cancer therapy. Carbohyd Polym. 2020;250:116861.
Salehi R, Rasouli S, Hamishehkar H. Smart thermo/pH responsive magnetic nanogels for the simultaneous delivery of doxorubicin and methotrexate. Int J Pharm. 2015;487(1):274–84.
Amatya S, et al. Drug release testing methods of polymeric particulate drug formulations. J Pharm Investigat. 2013;43(4):259–66.
Chou T-C. Theoretical basis, experimental design, and computerized simulation of synergism and antagonism in drug combination studies. Pharmacol Rev. 2006;58(3):621–81.
Rahimi M, et al. Biocompatible magnetic tris (2-aminoethyl) amine functionalized nanocrystalline cellulose as a novel nanocarrier for anticancer drug delivery of methotrexate. New J Chem. 2017;41(5):2160–8.
Schneider CA, Rasband WS, Eliceiri KW. NIH Image to ImageJ: 25 years of image analysis. Nat Methods. 2012;9(7):671–5.
Shalviri A, et al. Novel modified starch–xanthan gum hydrogels for controlled drug delivery: Synthesis and characterization. Carbohyd Polym. 2010;79(4):898–907.
Kipcak AS, et al. Modeling and investigation of the swelling kinetics of acrylamide-sodium acrylate hydrogel. J Chem. 2014. https://doi.org/10.1155/2014/281063.
Chen Y, et al. Microporous PDMAEMA-based stimuli-responsive hydrogel and its application in drug release. J Appl Polym Sci. 2017;134(38):45326.
Echeverría C, et al. Thermoresponsive poly (N-isopropylacrylamide-co-dimethylaminoethyl methacrylate) microgel aqueous dispersions with potential antimicrobial properties. Polymers. 2019;11(4):606.
Ribeiro CA, et al. Electrochemical preparation and characterization of PNIPAM-HAp scaffolds for bone tissue engineering. Mater Sci Eng C. 2017;81:156–66.
Gharatape A, et al. A novel strategy for low level laser-induced plasmonic photothermal therapy: the efficient bactericidal effect of biocompatible AuNPs@(PNIPAAM-co-PDMAEMA, PLGA and chitosan). RSC Adv. 2016;6(112):110499–510.
Spěváček J, Konefał R, Čadová E. NMR study of thermoresponsive block copolymer in aqueous solution. Macromol Chem Phys. 2016;217(12):1370–5.
Huang Y, et al. Micellization and gelatinization in aqueous media of pH-and thermo-responsive amphiphilic ABC (PMMA 82-b-PDMAEMA 150-b-PNIPAM 65) triblock copolymer synthesized by consecutive RAFT polymerization. RSC Adv. 2017;7(46):28711–22.
Sevimli S, et al. Synthesis, self-assembly and stimuli responsive properties of cholesterol conjugated polymers. Polymer Chem. 2012;3(8):2057–69.
Gürdağ GL, Kurtulus B. Synthesis and characterization of novel poly(N-isopropylacrylamide-co-N,N′-dimethylaminoethyl methacrylate sulfate) hydrogels. Ind Eng Chem Res. 2010;49(24):12675–84.
Xue W, Champ S, Huglin MB. Thermoreversible swelling behaviour of hydrogels based on N-isopropylacrylamide with a zwitterionic comonomer. Eur Polymer J. 2001;37(5):869–75.
Karg M, et al. Temperature, pH, and ionic strength induced changes of the swelling behavior of PNIPAM− poly (allylacetic acid) copolymer microgels. Langmuir. 2008;24(12):6300–6.
Moselhy J, et al. Characterization of complexation of poly(N-isopropylacrylamide-co-2-(dimethylamino) ethyl methacrylate) thermoresponsive cationic nanogels with salmon sperm DNA. Int J Nanomed. 2009;4:153.
Liu X, et al. Adaptive amphiphilic dendrimer-based nanoassemblies as robust and versatile siRNA delivery systems. Angew Chem Int Ed. 2014;53(44):11822–7.
Zeinali E, Haddadi-Asl V, Roghani-Mamaqani H. Nanocrystalline cellulose grafted random copolymers of N-isopropylacrylamide and acrylic acid synthesized by RAFT polymerization: effect of different acrylic acid contents on LCST behavior. RSC Adv. 2014;4(59):31428–42.
Salehi R, et al. pH-Controlled multiple-drug delivery by a novel antibacterial nanocomposite for combination therapy. RSC Adv. 2015;5(128):105678–91.
Guan Y, et al. "On-Off" thermoresponsive coating agent containing salicylic acid applied to maize seeds for chilling tolerance. PLoS ONE. 2015;10(3):e0120695.
Bischofberger I, et al. Hydrophobic hydration of poly-N-isopropyl acrylamide: a matter of the mean energetic state of water. Sci Rep. 2014;4(1):1–7.
Zhang J, et al. The targeted behavior of thermally responsive nanohydrogel evaluated by NIR system in mouse model. J Control Release. 2008;131(1):34–40.
Wang W, Yu W. Preparation and characterization of CS-g-PNIPAAm microgels and application in a water vapour-permeable fabric. Carbohyd Polym. 2015;127:11–8.
Massoumi B, Ghamkhari A, Agbolaghi S. Dual stimuli-responsive poly (succinyloxyethylmethacrylate-b-N-isopropylacrylamide) block copolymers as nanocarriers and respective application in doxorubicin delivery. Int J Polym Mater Polym Biomater. 2018;67(2):101–9.
Francis R, et al. Synthesis of poly (N-isopropylacrylamide) copolymer containing anhydride and imide comonomers–A theoretical study on reversal of LCST. Polymer. 2007;48(22):6707–18.
Ghamkhari A, Massoumi B, Salehi R. A new style for synthesis of thermo-responsive Fe3O4/poly (methylmethacrylate-b-N-isopropylacrylamide-b-acrylic acid) magnetic composite nanosphere and theranostic applications. J Biomater Sci Polym Ed. 2017;28(17):1985–2005.
Hong L, et al. One-step formation of w/o/w multiple emulsions stabilized by single amphiphilic block copolymers. Langmuir. 2012;28(5):2332–6.
Xu J, et al. Controllable microfluidic production of drug-loaded PLGA nanoparticles using partially water-miscible mixed solvent microdroplets as a precursor. Sci Rep. 2017;7(1):1–12.
Idris H. Experimental studies of the two phase flow and monodispersed Pickering emulsions stabilized by LaponiteRD in a microfluidic T-junction. Institutt for fysikk; 2014.
Murphy NP, Lampe KJ. Fabricating PLGA microparticles with high loads of the small molecule antioxidant N-acetylcysteine that rescue oligodendrocyte progenitor cells from oxidative stress. Biotechnol Bioeng. 2018;115(1):246–56.
Agrawal A, Kulkarni S, Sharma S. Recent advancements and applications of multiple emulsions. Int J Adv Pharm. 2016;4(6):94–103.
Naderinezhad S, Amoabediny G, Haghiralsadat F. Co-delivery of hydrophilic and hydrophobic anticancer drugs using biocompatible pH-sensitive lipid-based nano-carriers for multidrug-resistant cancers. RSC Adv. 2017;7(48):30008–19.
Yang X, et al. Preparation of magnetite and tumor dual-targeting hollow polymer microspheres with pH-sensitivity for anticancer drug-carriers. Polymer. 2010;51(12):2533–9.
Zhao Z, et al. A nano-in-nano polymer–dendrimer nanoparticle-based nanosystem for controlled multidrug delivery. Mol Pharm. 2017;14(8):2697–710.
Zhang Y, et al. Co-delivery of doxorubicin and curcumin by pH-sensitive prodrug nanoparticle for combination therapy of cancer. Sci Rep. 2016;6(1):1–12.
Gordon JL, Brown MA, Reynolds MM. Cell-based methods for determination of efficacy for candidate therapeutics in the clinical Management of Cancer. Diseases. 2018;6(4):85.
Sa G, Das T. Anti cancer effects of curcumin: cycle of life and death. Cell Div. 2008;3(1):14.
Xia Y, et al. Functionalized selenium nanoparticles for targeted delivery of doxorubicin to improve non-small-cell lung cancer therapy. Int J Nanomed. 2018;13:6929.
Atashpour S, et al. Quercetin induces cell cycle arrest and apoptosis in CD133+ cancer stem cells of human colorectal HT29 cancer cell line and enhances anticancer effects of doxorubicin. Iranian J Basic Med Sci. 2015;18(7):635.
Chuah LH, et al. Cellular uptake and anticancer effects of mucoadhesive curcumin-containing chitosan nanoparticles. Colloids Surf B. 2014;116:228–36.
This work was supported by the Department of Organic Chemistry, Faculty of Pharmaceutical Chemistry, Tehran Medical Sciences. The Liver and Gastrointestinal Disease Research Center, Tabriz University of Medical Sciences, also supported this project (project no. 59723). In addition, the authors would like to thank the Drug Applied Research Center, Tabriz University of Medical Sciences, for its cooperation in this project.
Department of Organic Chemistry, Faculty of Pharmaceutical Chemistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
Fatemeh Abedi & Malak Hekmati
Drug Applied Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
Fatemeh Abedi, Soodabeh Davaran & Sevil Vaghefi Moghaddam
Department of Medicinal Chemistry, Faculty of Pharmacy, Tabriz University of Medical Science, Tabriz, Iran
Soodabeh Davaran
Department of Medical Nanotechnology, Faculty of Advanced Medical Sciences, Tabriz University of Medical Sciences, Tabriz, Iran
Abolfazl Akbarzadeh
Universal Scientific Education and Research Network (USERN), Tabriz, Iran
Immunology Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
Behzad Baradaran
FA: Investigation, Methodology, Project administration, Writing-original draft. SD: Supervision, Validation, Writing-review & editing. MH: Supervision, Validation. AA: Validation, Writing-review & editing. BB: Validation, Writing-review & editing. SVM: Investigation, Methodology, Writing-original draft. All authors read and approved the final manuscript.
Correspondence to Soodabeh Davaran.
Ethical approval and consent were not needed in this study.
All authors agree to publication.
Additional file 1: Figure S1.
Calibration curves of DOX and CUR at pH 7.4 and at pH 5.4. The calibration curves of DOX and CUR at the two pH values (7.4 and 5.4) were determined by measuring the absorption of DOX and CUR solutions of known concentration using a Shimadzu 1650 PC UV–Vis spectrophotometer. The absorptions as a function of DOX and CUR concentrations were recorded to construct the calibration curves.
Abedi, F., Davaran, S., Hekmati, M. et al. An improved method in fabrication of smart dual-responsive nanogels for controlled release of doxorubicin and curcumin in HT-29 colon cancer cells. J Nanobiotechnol 19, 18 (2021). https://doi.org/10.1186/s12951-020-00764-6
pH/thermo-responsive
Nanogels
Dual-drug delivery
Controlled release | CommonCrawl |
Journal of Economic Structures
December 2016, 5:4
Matching global cobalt demand under different scenarios for co-production and mining attractiveness
Alexandre Tisserant
Stefan Pauliuk
Part of the following topical collections:
MRIO for Global Resource Policy
Many new and efficient technologies require 'critical metals' to function. These metals are often extracted as by-products of another metal, and their future supply is therefore dependent on the mining developments of the host metal. Supply of critical metals can also be constrained by political instability, discouraging mining policies, or trade restrictions. Scenario analyses of future metal supply that take these factors into account would provide policy makers with information about possible supply shortages. We provide a scenario analysis of demand and supply for cobalt, a potentially critical metal used mainly in high-performance alloys but also in lithium-ion batteries and catalysts. Cobalt is mainly extracted as a by-product of copper and nickel.
A multiregional input–output (MRIO) model for 20 world regions and 163 commodities was built from the EXIOBASE v2.2.0 multiregional supply and use table with the commodity technology construct. This MRIO model was hybridized by disaggregating cobalt flows from the nonferrous metal sector. Future cobalt demand in different world regions from 2007 to 2050 was then estimated, assuming region- and sector-specific GDP growth, constant technology, and constant background import shares. A dynamic stock model of regional reserves for seven different types of copper, cobalt, and nickel resources, augmented with optimization-based region-specific mining capacity estimates, was used to determine future cobalt supply. The investment attractiveness index developed by the Fraser Institute specifically for mining industry entered the optimization routine as a measure of the regional attractiveness of mining.
The baseline scenario shows no cobalt supply constraints over the considered period 2007–2050; recovering about 60 % of the cobalt content of the mined copper and nickel ore flows would be sufficient to match global cobalt demand. When simulating a hypothetical sudden supply dropout in Africa during the period 2020–2035, however, we found that shortages in cobalt supply might occur.
Cobalt, Copper, Nickel, Scenario analysis, Critical metals, Dynamic input–output analysis, Hybrid multiregional input–output model, Companion metals, By-product
IAI: investment attractiveness index
MRIO: multi-regional input–output
MR-SUT: multiregional supply-and-use table
SUT: supply and use tables
USGS: United States Geological Survey
The online version of this article (doi: 10.1186/s40008-016-0035-x) contains supplementary material, which is available to authorized users.
Almost all elements of the periodic table are used in modern technology, especially for renewable energy and communication technologies. Graedel et al. assessed the performance of potential substitutes for all major applications of the different elements (Greenfield and Graedel 2013). Their central finding is that no element can be completely replaced by others, making each element a unique and important contributor to modern technology.
These specialty metals may face supply constraints in the future, not just because of limited mineral resources, but also because of mismatch between demand and available production capacity (Gerling et al. 2004). Moreover, they may be subject to trade restrictions due to export limitations imposed by individual countries. As resources are getting depleted and ore grades decline, the costs of extracting a mineral will increase, though future scarcity is often not reflected in commodity prices at present (Prior et al. 2012). Regional resource scarcity, regional production capacity, regional policies, and international political relations strongly influence future availability and trade patterns for critical materials (European Commission 2010). To anticipate possible future development, one needs prospective assessment of the anthropogenic cycles of specialty metals (Pauliuk and Hertwich 2015), without attempting to predict the future. A scenario analysis of future supply of critical metals is a more modest and scientific approach than a prediction; it enables researchers and policy makers to study the consequences of future economic development on metal reserves and possible futures and supply shortages in different regions. Such a scenario analysis could provide the basis for dynamic assessment of material criticality, as proposed by (Roelich et al. 2014; Knoeri et al. 2013), by applying the criticality framework developed by (Graedel et al. 2012; Nuss et al. 2014) in a prospective model.
1.1 Existing approaches to estimate future metal flows and resource depletion
Estimating the time when current mineral resources will be depleted requires a scenario analysis of metal demand on the global scale and with a long-term perspective. While new for critical metals, this type of analysis is commonly applied to bulk materials including steel, cement, and aluminum. Some integrated assessment models (IAM), for example, consider energy-relevant bulk materials like steel and cement, but not critical materials. This can be seen from the 5th AR of the IPCC, where scenarios for GHG emissions from material production generated by IAMs are presented, but criticality aspects are not quantified (IPCC 2014).
Dynamic stock modeling and material flow analysis (MFA) can be combined to produce prospective studies of future global demand for bulk materials. It has been applied to steel (Hatayama et al. 2010; Pauliuk et al. 2013) and aluminum (Liu et al. 2012). The approach was refined to allow for the study of critical materials (Busch et al. 2014), but it has not yet been applied to potentially critical metals on the global scale. For specific sectors, especially electricity generation from renewable sources, prospective studies for total metal demand exist for silver, gallium, germanium, selenium, indium, tellurium, neodymium, and dysprosium (Zuser and Rechberger 2011; Elshkaki and Graedel 2013, 2014; Roelich et al. 2014; Løvik et al. 2015). These studies found potential supply shortages in different deployment scenarios for silver, tellurium, indium, and germanium.
Interconnections between host and by-product metals are increasingly recognized as a main component of criticality (Graedel et al. 2012; Frenzel et al. 2015; Peiró et al. 2013; Mudd et al. 2013b; Graedel and Reck 2015; Løvik et al. 2015). The previously mentioned MFA studies focused on quantifying flows of critical metals and potential imbalances between co-products of mining, but are lacking a clear link to models of total available resources and mining capacity. We believe that this link is of particular interest for critical metals, which are mainly extracted as by-product, because their future supply might still be bound to the mining developments of the host metal(s). Of particular concern are situations where new deposits enter production for their respective main metals, which contain no or insignificant amounts of the critical metal of interest.
Multiregional input–output analysis (MRIO) contains comprehensive information on international trade, and it can be used to trace materials through the global supply chain, both in demand-driven models (the critical material footprint of final demand) and in supply-driven models (the distribution of critical materials across final products). Examples of the former include the study of the neodymium, cobalt, and platinum footprint of Japanese households (Shigetomi et al. 2015); examples of the latter include the approach by Moran et al. (2014), who use a supply-driven model to trace a conflict mineral through the world economy. MRIO is suitable for demand scenario analyses because it contains 'recipes' based on coefficients denoting the amount of input required per unit of demand. From this information, product-specific demand scenarios and scenarios for the critical metal intensity of production can be created.
1.2 Mining development
The location and type of future mines of host metals will determine the supply of critical metals that are not mined as main product, because a new project for extraction of the host metal does not necessarily provide enough quantities of the companion metals to help match demand.
Factors that can influence the choice of where to expand capacities include ore grades, available infrastructure, environmental issues, relations with the local communities, and political and social stability (Mudd et al. 2013b; Prior et al. 2012). These factors can be broadly divided into two categories: (1) the mineralogy of the deposit, which determines the grades, the recovery rate of by-products, and environmental issues. Cobalt recovery rates as a by-product of copper and nickel mining can vary from 25 to 80 % depending mostly on the type of deposit, while potential mines where cobalt could be recovered as the main product would rely on arsenic-rich ores that would have to be managed carefully (Mudd et al. 2013b). (2) The location and surroundings of the deposit, which include accessibility, land ownership rights, and political stability. For cobalt, the Democratic Republic of the Congo has been the main supplier, but its instability is known to have influenced the cobalt price (Seddon 2001), and electricity shortages are known to influence the amount of cobalt refined in Congo (USGS 2014).
Future production of fossil fuels is usually modeled using 'ultimately recoverable resources' models, and a similar methodology has recently been applied to copper (Northey et al. 2014). The model by Northey et al. only takes into account the ore grade when deciding where to install new mining capacity. We believe that the attractiveness of the deposit location should also be taken into account. The Fraser Institute yearly publishes a survey of mining companies from which the Investment Attractiveness Index (IAI) is derived, which can be used to rank countries and jurisdictions. It has two components: (1) the Policy Perception Index that looks at policy factors influencing investments and (2) the Best Practices Mineral Potential Index that rates the pure attractiveness of the jurisdiction's geology (Cervantes et al. 2014).
1.3 Research gap and goal
There is a lack of prospective modeling tools to estimate future global demand for critical materials in different regions. Models to indicate how fast known resources may be depleted in different regions are available but the investment attractiveness of different regions has not been taken into account.
The goal of this work is to demonstrate the usefulness of IO-based techniques for quantifying the future flows of critical metals, and to combine IO with dynamic stock models so that it respects physical boundaries that are absent in standard Leontief IO modeling. We show how a static MRIO table can be used to estimate future critical material demand and develop an optimization routine to determine the location of new mines under geological and policy constraints.
This paper focuses on the global demand for and supply of cobalt, a potentially critical metal mainly used in high performance alloys but also in lithium-ion batteries and catalysts.
A hybrid MRIO model based on the EXIOBASE v2.2.0 multiregional supply and use table was built to estimate cobalt demand in different regions (Wood et al. 2014). Extraction of cobalt can happen in several regions of the world. To match supply and demand for cobalt, we apply an optimization routine that determines in which regions and in which mine types resources are extracted and where new mining capacity will be installed. To do justice to the by-product nature of cobalt (Co), which is extracted from copper (Cu) and nickel (Ni) ore, we consider seven types of resources: one for each combination of the three metals that can be present in a deposit at the same time (Co, Cu, Ni, Co–Cu, Co–Ni, Cu–Ni, and Co–Cu–Ni). The optimization model determines where the mining of cobalt is most likely to happen by considering regional mining capacity by mine type and the IAI from the Fraser Institute as a measure of mining risk.
The rest of the paper is structured as follows. Sect. 2 introduces the model approach and the data used. Sect. 3 presents the results for several scenarios. Sect. 4 discusses the findings and the model and Sect. 5 concludes.
2 Methods
Cobalt is used in many different applications and often in small quantities. To link future human development with cobalt demand one needs a model that allows us to trace the flows of cobalt through the entire world economy to link final consumption with resource demand. This task is commonly solved using multiregional input–output analysis (MRIO) (Miller and Blair 2009). Different MRIO models, including WIOD, Eora, and EXIOBASE are available (Tukker and Dietzenbacher 2013) and for the purpose of this study we need an MRIO model that contains detailed information about the metal industries. We chose EXIOBASE because it contains six different metal production sectors (cobalt is part of the 'other non-ferrous metal' sector) and seven different metal mining sectors, including separate sectors for copper and nickel ore mining.
A general disadvantage of the MRIO approach when applied to critical materials is that tracing critical materials through IO tables requires gross assumptions about homogeneous product mixes, as the aggregation level of I/O is usually so high that it does not allow one to distinguish specific critical metals from the bulk of nonferrous metals. The source of product inhomogeneity is twofold. First, final products cover a very wide spectrum of devices, which are aggregated into a few sectors. EXIOBASE2, for example, contains only nine types of manufactured goods (Wood et al. 2014). Second, the critical materials are commonly aggregated into the non-ferrous metal sector, or at even higher levels. Disaggregation of individual metal sectors in IO tables, as shown by Hawkins et al. (2007), Nakamura et al. (2008), Nakajima et al. (2013), and Ohno et al. (2014), has been done at the country level, but requires very detailed data that are usually not available for critical metals on the global scale. Moreover, MRIO tables represent one-year snapshots of the world economy or short historic time series only, and there is no standard procedure for how to extrapolate these tables into the future.
The big advantage of tracing critical metals in MRIO tables is that the magnitude of their flows is usually much smaller than the magnitude of the aggregated nonferrous metal flows they are part of. In our case for example, the share of cobalt in the total output of other nonferrous metals is less than 2.6 %.1 Hence, critical material flows can be considered as perturbation or extension of the nonferrous metal sectors, and instead of properly disaggregating the A-matrix, it is sufficient to hybridize it by adding the data on the physical cobalt requirements of the different cobalt-consuming sectors and to calibrate the hybridized model so that the figures for cobalt use for the reference year are met.
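A minimal sketch of this hybridization-as-extension idea is shown below: a vector of physical cobalt-use coefficients is attached to a toy monetary A-matrix, and total cobalt demand follows from the standard Leontief calculation. The dimensions, coefficients, and final demand values are illustrative and do not represent EXIOBASE data.

```python
import numpy as np

# Toy dimensions: regions x sectors flattened into n producing sectors.
n = 6
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(n, n))
A = 0.5 * A / A.sum(axis=0)          # column sums of 0.5 keep the economy productive
y = rng.uniform(10.0, 100.0, size=n) # final demand (M.EUR), illustrative

# Physical foreground extension: kg of refined cobalt used per M.EUR of output
# in each sector (zero where no cobalt is used). Values are made up.
k_cobalt = np.array([0.0, 120.0, 0.0, 45.0, 0.0, 300.0])

L = np.linalg.inv(np.eye(n) - A)     # Leontief inverse
x = L @ y                            # gross output by sector
total_cobalt_kg = k_cobalt @ x       # total cobalt demand driven by y
print(f"Total cobalt demand: {total_cobalt_kg:,.0f} kg")
```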
There is no standard procedure for extrapolating MRIO tables into the future; still, scenario development for A-matrices is common in the literature (Leontief and Duchin 1986; De Koning et al. 2015; Hertwich et al. 2015; Gibon et al. 2015). To build scenarios for future cobalt demand, we start with the calibrated hybridized model for 2007 and extrapolate it into the future by scaling up final demand according to existing scenarios for regional average GDP growth rates published by the OECD (2015). These are broken down by distributing the global growth rates into the different sectors using historic sector-specific income elasticities for the period 1995–2011. This procedure does not take into account substantial changes in technology, demand structure, and trade patterns, which can be expected for the future. It therefore only provides a rough indication of possible future cobalt demand. A similar pathway was taken by Nakamura et al. (2014), who assumed a constant use pattern of steel throughout the 21st century in the absence of detailed steel usage scenarios. To compensate for the simplicity of our basic scenario assumptions, we perform a sensitivity analysis on the global economic growth rate and on the copper intensity of the world economy, which directly impacts cobalt supply from copper mines.
To understand how the metal demand derived from our hybrid-MRIO model can be matched, a mining optimization routine is developed to determine in which region and from which deposit the extraction shall happen. It is based on linear programming and assumes that mine production should happen preferably in regions that are perceived as less risky by mining companies.
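The sketch below illustrates the kind of linear program meant here, reduced to a single metal and a handful of regions: extraction is allocated so that risk-weighted production is minimized while global demand is met and regional capacities are respected. The risk scores, capacities, and demand are placeholders; the full model additionally covers three metals, seven deposit types, twenty regions, and endogenous capacity expansion.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative regions with mining risk scores (higher = less attractive,
# e.g. derived from the IAI) and annual extraction capacities (kt of metal).
risk     = np.array([10.0, 25.0, 40.0, 60.0])   # objective weights
capacity = np.array([60.0, 45.0, 30.0, 80.0])   # upper bounds per region
demand   = 120.0                                # global demand to be met (kt)

# minimize  risk . x   subject to  sum(x) = demand,  0 <= x_i <= capacity_i
res = linprog(c=risk,
              A_eq=np.ones((1, len(risk))), b_eq=[demand],
              bounds=[(0.0, c) for c in capacity],
              method="highs")

print(res.x)   # extraction is allocated preferentially to low-risk regions
```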
2.1 Scenario development: economy, technology, and trade
Table 1 gives an overview of the assumed development for economy, technology, trade, mining risk, cobalt intensity, and mining that are used to build the scenarios for cobalt demand and supply between 2007 and the year 2050. The different parameter changes are explained below.
Table 1 Overview of scenario development
Economy: Economic growth follows the baseline long-term GDP projection from the OECD. Scenario 5 tests the sensitivity of metal demand to the growth rate
Technology: A constant technology A-matrix is assumed. Scenario 4 tests the sensitivity of cobalt supply to the primary copper intensity of the economy; less copper mining could mean less cobalt extracted
Trade: A constant trade pattern is assumed
Mining risk: Mining risk is region- and mine type-specific. Scenario 2 tests the sensitivity of metal extraction to mining risk by setting it equal for all regions and all mines. Scenario 3 assumes a drop-out of Africa from metal supply because of instability in the Democratic Republic of the Congo, which supplies a significant share of the studied metals
Cobalt intensity: The cobalt intensity of manufactured products remains the same as in 2007
Mining: Constant regional and mine type-specific ore grades are assumed, corresponding to the deposit-wide average for each mine type in each region. Mining capacity is increased endogenously
Cut-off grade: The cut-off grade, the minimum grade at which the metal can be economically extracted, is an important parameter for the mining industry and determines the amount of metal that can be extracted from the known reserves. The cut-off grade therefore determines new refining capacity (optimized for the given cut-off grade) and the regional distribution of extractable resources, both of which should be included in the model, since capacity is bound to a given concentration of metal in ore and global mining risk is bound to the resulting distribution of extractable resources. The model, however, assumes that each region has a constant ore grade for its resources, since all identified deposits (under extraction or not) are aggregated to model one mine at the regional level. This means that marginal mining capacity and production are brought on line at the regional average ore grade
Table 2 gives an overview of the scenarios we defined. Scenario 1 considers a region-specific mining risk for determining mining output, whereas this component is ignored in scenario 2. Both scenarios come in two versions: (a) a version where cobalt is considered a by-product and cobalt demand does not enter the optimization as a constraint, and (b) a version where Cu and Ni mining capacities are installed by taking Co supply into account.
Table 2 Scenario definition for cobalt supply
Scenario 1: Mining risk is different in each region. (a) Cobalt is only extracted as a by-product (it is not included in the optimization routine); (b) cobalt needs to be supplied (it is included in the optimization routine and global extraction of cobalt shall equal global cobalt demand)
Scenario 2: Mining risk is the same in all regions. Variants (a) and (b) as in Scenario 1
Scenario 3: Mining risk in RoW Africa is set to 100 during 2020–2035, with a ramp-down of capacity from the 2020 level to 5 % of the 2020 level over 2023–2029 and a ramp-up back to 80 % of the 2020 capacity in 2035
Scenario 4: Primary copper demand is modified in the A-matrix (cobalt is considered a by-product only and mining risk differs in each region). (a) Copper demand is reduced by 20 % in 2050; (b) copper demand is increased by 20 % in 2050
Scenario 5: GDP growth is slowed down or sped up (cobalt is considered a by-product only and mining risk differs in each region). (a) The growth rate is slowed down by 20 % in 2050; (b) the growth rate is sped up by 20 % in 2050
2.2 Details of model computation
The model to link final demand for products and services to the depletion of cobalt resources consists of several parts (Fig. 1). Below, each step is explained in detail. For a full description of the methodology we refer to the Additional file 1.
Model structure. 1 Estimating future final demand by using exogenous GDP projections and breaking them down into sectors using historic growth rates for the spending in different sectors. 2 Hybridizing the 20 regions MRIO model to separate cobalt demand (physical foreground matrix Acobalt) from demand for non-ferrous metals. 3 Determining the total demand of cobalt by region and year using the Leontief I/O model and assuming a constant A-matrix. 4 Solve a linear program to determine extraction patterns for Cu, Ni, and Co that are maximally attractive for investors. 5 Use a dynamic stock model of the known cobalt resources to determine their depletion over time
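Step 5 relies on simple stock-keeping of the known resources; a minimal sketch of such a reserve draw-down is shown below, with placeholder regions, reserve sizes, and extraction rates standing in for the allocation that the step-4 optimization would deliver.

```python
# Minimal reserve draw-down bookkeeping for step 5; regions, reserve sizes,
# and the fixed extraction rates are placeholders, not the model's data.
reserves = {"RoW Africa": 3500.0, "Asia-Pacific": 1200.0, "Americas": 500.0}     # kt
extraction_rate = {"RoW Africa": 60.0, "Asia-Pacific": 25.0, "Americas": 15.0}   # kt/yr

for year in range(2007, 2051):
    for region in reserves:
        # In the full model, yearly extraction comes out of the step-4 optimization.
        ext = min(extraction_rate[region], reserves[region])
        reserves[region] -= ext
        if ext > 0.0 and reserves[region] == 0.0:
            print(f"{region}: known resources exhausted in {year}")
```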
2.3 Step 1: Estimating future final demand across all sectors and regions
Our simple scenario to project future cobalt demand is based on a final demand increase according to GDP growth projections. These projections were retrieved from the OECD until 2050 and aggregated if necessary following the regional aggregation of the MRIO model (OECD 2015). Some historical values of GDP were also retrieved from the World Bank (2015). Some countries/regions were not listed in the OECD dataset. Their GDP growth was assumed to be the same as the average growth of non-OECD countries.
Using the time series for multiregional final demand for the years 1995 and 2011 (EU DESIRE Project 2013), we determined historic sector-specific growth rates over these 16 years, which were used as proxy to determine future income elasticities to distribute the overall GDP growth in a country across the 163 sectors of the MRIO model (cf. section S1-1 in the Additional file 1).
We believe that keeping a product-specific growth rate is better than growing demand with GDP alone, because some sectors grow faster than others as countries get richer, and this affects metal demand. At the same time, it is hard to say which product demands will grow more or less, because this depends not only on new technology but also on lifestyle, 'level of development' and income level. For these reasons, we kept the product-specific growth rates at the world average to take these effects into account while attenuating them.
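The following sketch illustrates one possible implementation of this step. It is not the authors' code; the names (project_final_demand, sector_growth_9511, gdp_growth_rates) are hypothetical, and the rescaling step is our own simplification of how historic sector growth can be used to distribute projected GDP growth across sectors.

```python
import numpy as np

def project_final_demand(y_base, sector_growth_9511, gdp_growth_rates):
    """Illustrative sketch of Step 1 (hypothetical names, not the authors' code).

    y_base              : final demand by sector in the base year 2007 (length-S vector)
    sector_growth_9511  : sector-specific growth factors observed 1995-2011 (length-S vector)
    gdp_growth_rates    : {year: projected annual GDP growth rate} for the years to project
    """
    # Annualize the observed 16-year growth per sector; the relative pattern
    # serves as a proxy for income elasticities.
    annual_sector_rate = sector_growth_9511 ** (1.0 / 16.0) - 1.0

    projection = {2007: y_base.astype(float).copy()}
    y = y_base.astype(float).copy()
    for year in sorted(gdp_growth_rates):
        target_total = y.sum() * (1.0 + gdp_growth_rates[year])
        y = y * (1.0 + annual_sector_rate)        # grow sectors with their historic rates
        y = y * (target_total / y.sum())          # rescale so total demand follows GDP
        projection[year] = y.copy()
    return projection
```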
2.4 Steps 2 and 3: The hybrid MR-SUT and hybrid MRIO model
The core of the supply chain model is a MRIO model (Miller and Blair 2009). The MRIO model was built from the EXIOBASE multiregional supply-and-use table (MR-SUT) v2.2.0 (Wood et al. 2014). The reference year is 2007, the unit is million EUR, the 48 countries and regions were aggregated into 20 world regions, and the number of commodities and industries per region is 163.
The hybrid model is built starting at the supply and use level. To the MR-SUT, one industry was added for each region: Refined cobalt production, supplying one main product: Refined cobalt, as shown in Fig. 1. Regional and sector-specific use of cobalt is estimated for 2007 using balancing algorithms and regional estimates provided by the Cobalt Development Institute and the United States Geological Survey (USGS) to match the global demand and global use pattern for cobalt. For more details regarding the procedure, we refer to section S1-4.1.1 in the Additional file 1. Use of cobalt is denoted at the intersection between the domestic refined cobalt product and the domestic cobalt using industries. And supply of cobalt is denoted on the diagonal as each producer of refined cobalt supplies its total amount of cobalt to the domestic markets. We did not determine any trade pattern for cobalt, since we are only interested in the global demand for cobalt and a matching level of supply.
To avoid double counting, the background economic data would need to be corrected for the amount of disaggregated production happening in the foreground, using cobalt price information, and the inputs to cobalt production would also need to be taken out of the background. This is not done here, for three reasons: (1) the main issue is that the SUT structure does not perfectly match the estimated use of cobalt; for example, certain industries that we know use cobalt do not require any 'Other non-ferrous metal' input in the SUT, and hence the cobalt flow cannot be disaggregated. (2) Estimating the price of a single commodity flow is difficult: the price information that can be found for refined cobalt is a market price, which differs from the basic-price valuation of the SUT. (3) The global value of refined cobalt represents only about 2.5 % of the global value of the 'Other non-ferrous metal' sectors, and hence the error introduced by the hybridization is very small.
A detailed description of the hybridization is contained in section S1-4.3 in the Additional file 1.
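As a minimal sketch of Step 3, total cobalt requirements can be read off a standard Leontief quantity model applied to the hybridized, constant A-matrix. The names below (cobalt_demand_by_year, cobalt_rows) are our own placeholders, not identifiers from the authors' implementation; because the cobalt rows are kept in physical units in the hybrid table, the corresponding entries of the output vector can be interpreted directly as physical cobalt demand.

```python
import numpy as np

def cobalt_demand_by_year(A, final_demand_by_year, cobalt_rows):
    """Sketch of Step 3: total cobalt output required per year (hypothetical names).

    A                    : hybridized, constant technology matrix (n x n, incl. cobalt rows)
    final_demand_by_year : {year: length-n final demand vector}
    cobalt_rows          : indices of the 'Refined cobalt' products of the 20 regions
    """
    n = A.shape[0]
    leontief_inverse = np.linalg.inv(np.eye(n) - A)   # L = (I - A)^-1

    demand = {}
    for year, y in final_demand_by_year.items():
        x = leontief_inverse @ y                      # total output by product and region
        demand[year] = x[cobalt_rows].sum()           # global cobalt requirement (physical units)
    return demand
```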
2.5 Step 4: Extraction model
A linear program is applied to determine which mines will be exploited to supply the metals. It aims at minimizing the global mining risk of supplying the metals. We define the mining risk as the complement of the investment attractiveness index (IAI) provided by the Fraser Institute (Cervantes et al. 2014), which measures the attractiveness of a jurisdiction for mining companies. The core equations of the model are shown below.
$$\begin{aligned} &\text{minimize:} && \sum_{m} C_{m}^{T} \cdot P_{m} && \forall m \in M \\ &\text{subject to:} && \sum_{m} G_{m} \cdot P_{m} = D && \forall m \in M \\ & && P_{m} > 0 && \forall m \in M \\ & && P_{m} \le L_{m} && \forall m \in M \end{aligned}$$
where M is the set of mine types. C_m is a column vector containing the mining risk of mine type m in each region; its length equals the number of regions. P_m is the mine production vector determined by the linear program; it gives the amount of ore extracted from mine type m in the different regions and has the same length as C_m. G_m is the ore-grade matrix of mine type m; it gives the average ore grade of the different metals (cobalt, copper, and nickel in our case) in the ore of mine type m in the different regions and has dimension number of metals studied times number of regions. D represents the demand for the different metals that need to be mined. The last two conditions set the bounds for the production vector: it has to be positive and must be lower than or equal to the mining capacity L_m, a vector of the maximum amount of ore that can be extracted from mine type m in the different regions. Additional file 2 provides the values of all parameters used in this work.
We are interested in assessing the future supply of cobalt, which is mostly extracted as by-product of copper and nickel mining. To reflect this by-product nature of cobalt, the model shown in equation 1 is solved by only considering copper and nickel demand. The obtained output vector P is then multiplied with the cobalt grade of the different mines to determine the amount of cobalt that can be supplied. The resulting cobalt output is compared with cobalt demand. An alternative scenario would be to say that in the future, more emphasis will be put on meeting cobalt demand upfront. In that case, cobalt demand enters the linear program along with copper and nickel demand.
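A compact numerical sketch of this optimization and the subsequent by-product calculation is given below. It uses scipy.optimize.linprog with randomly generated placeholder data; the dimensions, parameter values, and variable names are illustrative assumptions and not the data or code used in this study.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative dimensions and random placeholder data -- not the study's inputs.
n_regions, n_mine_types, n_metals = 20, 7, 2   # metals in the demand constraint: Cu, Ni
rng = np.random.default_rng(42)

C = rng.uniform(0, 100, (n_mine_types, n_regions))                  # mining risk C_m per region
G = rng.uniform(0.001, 0.05, (n_mine_types, n_metals, n_regions))   # ore grades G_m (t metal / t ore)
co_grade = rng.uniform(0.0, 0.005, (n_mine_types, n_regions))       # cobalt grade of the same ore
L = rng.uniform(1e6, 1e8, (n_mine_types, n_regions))                # capacity L_m (t ore / yr)
D = np.array([2.0e7, 2.5e6])                                        # Cu and Ni demand (t metal / yr)

# Stack all P_m into one decision vector (mine type major, region minor).
c = C.reshape(-1)                                      # objective: total mining risk
A_eq = np.hstack([G[m] for m in range(n_mine_types)])  # sum_m G_m P_m = D
bounds = [(0.0, ub) for ub in L.reshape(-1)]           # 0 <= P_m <= L_m

res = linprog(c, A_eq=A_eq, b_eq=D, bounds=bounds, method="highs")
P = res.x.reshape(n_mine_types, n_regions)             # ore extraction by mine type and region

# Cobalt that comes along as by-product of the risk-minimal Cu/Ni extraction pattern.
cobalt_by_product = float((co_grade * P).sum())
```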
It is worth noting that average ore grades also influence the choice of extraction sites: if the ore grades of two mines producing only copper differ by a factor of ten, but both mines show the same risk per ton of ore extracted, the model will choose the higher-grade mine, since it can meet the copper demand at one-tenth of the risk of the low-grade mine.
2.6 Step 5: Cobalt, copper, and nickel resources and the capacity constraint
The only operating mine with cobalt as its main product is the Bou Azzer mine in Morocco. Cobalt supply is therefore highly dependent on demand for nickel and copper, as these metals represent the main revenue streams for the mining companies exploiting the deposits. This by-product nature couples the amount of cobalt that can be extracted to the demand for copper and nickel and might lead to imbalances between supply and demand. To model the resources of copper, nickel, and cobalt, seven types of deposits/mines were defined: deposits containing only cobalt, only copper or only nickel; deposits with two co-products (cobalt–copper, cobalt–nickel, and copper–nickel); and deposits that allow extraction of all three metals together.
The assessment of resources for these mine types is based on the extensive data gathered by Mudd and Jowitt (2014) and Mudd et al. (2013), which consist of detailed information on all deposits, whether currently exploited or not, that contain nickel and copper, respectively. The two databases should overlap when copper and nickel are both present; however, the names and deposit sizes do not always correspond between the two datasets. In case of conflict, we use the information from the nickel database, as it contains more recent data. The mines/deposits in the databases are split into the seven groups defined above. The amount of each of the three metals in each deposit is determined from the reported amount of ore and the ore grades of the different metals present.
Data gathered at the deposit/mine level in each country are aggregated to the regions defined by the MRIO model, and average concentrations of each metal are calculated for each region and each mine type. This inventory allows us to build, for each mine type m, the grade matrices G_m, which give the amount of metal that can be extracted per kg of ore mined in each region, and the ore reserves R_m of each mine type m in each region.
Each year, resource depletion is determined by subtracting the mine production of the previous year from the reserves. New mine capacity is installed following some simple rules. First, each year, each mine type in each region increases its capacity by 3 %. Furthermore, if the capacity utilization rate of a mine is higher than 80 % and the mine has more than 20 years of operating time left at current capacity, then this mine is allowed to increase capacity by 20 %. Finally, we make sure that the mining capacity cannot be bigger than the remaining ore reserves.
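These rules translate into a simple annual update, sketched below. The threshold values (3 %, 80 %, 20 years, 20 %) are taken from the text; the function name and array layout are our own assumptions.

```python
import numpy as np

def update_reserves_and_capacity(capacity, production, reserves,
                                 base_growth=0.03, util_threshold=0.8,
                                 min_years_left=20, expansion=0.2):
    """One annual update of ore reserves and mining capacity (illustrative sketch).

    All arguments are float arrays of ore tonnages indexed by mine type and region.
    """
    # Deplete reserves by last year's mine production.
    reserves = reserves - production

    # Baseline capacity growth of 3 % per year for every mine type in every region.
    capacity = capacity * (1.0 + base_growth)

    # Extra 20 % expansion where utilization exceeded 80 % and more than
    # 20 years of operation remain at current capacity.
    utilization = np.divide(production, capacity,
                            out=np.zeros_like(capacity), where=capacity > 0)
    years_left = np.divide(reserves, capacity,
                           out=np.zeros_like(capacity), where=capacity > 0)
    capacity = np.where((utilization > util_threshold) & (years_left > min_years_left),
                        capacity * (1.0 + expansion), capacity)

    # Capacity can never exceed the remaining ore reserves.
    capacity = np.minimum(capacity, reserves)
    return capacity, reserves
```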
3.1 World cobalt demand
For the basic growth scenario, global annual cobalt demand increases from 50 kt/yr in 2007 to 110 kt/yr in 2030 and 190 kt/yr in 2050, which is a roughly fourfold increase over the modeling period (Fig. 2). According to this scenario, the Chinese economy will remain the largest cobalt consumer, and its share in global cobalt consumption will increase from about 20 % in 2007 to about 35 % in 2060. As constant technology was assumed, cobalt demand scales proportionally to GDP growth. Other economic regions such as Africa or Indonesia experience annual growth rates of up to 8 % and 6.5 %, respectively, in 2010, but with the A-matrix we used, their cobalt demand will be mostly satisfied by imports of cobalt-containing products. The cumulative cobalt demand in ore that needs to be extracted for the period 2007–2050 amounts to about 6300 kt, which is about 40 % of the estimated total cobalt reserves of 16 Mt that are known and recoverable, mostly from copper and nickel mines (Mudd et al. 2013).
Economy-wide demand for cobalt, by region, for the baseline scenario (sector-specific weighted economic growth and constant technology). Black-dashed lines show the global cobalt demand under high and low economy growth (scenarios 5a and 5b)
3.2 Projections of future supply of cobalt, copper, and nickel
We present results for only a selection of the most interesting scenarios; the complete set of figures and results for all scenarios can be found in Additional file 3. While there is enough known cobalt in the ground to meet demand until at least 2050, a significant share of it is likely to come from politically unstable regions, such as the Democratic Republic of the Congo, which increases the supply risk of cobalt (Fig. 3).
Supply of cobalt, copper, and nickel for three selected scenarios, by region. The three scenarios 1a, 1b, and 3a consider a country-specific mining risk. Left Cobalt demand does not enter the optimization. Middle Cobalt demand is included in the optimization. Right RoW Africa drops out as metal ore supplier between 2020 and 2035, cobalt demand does not enter the optimization
According to the solution of the linear program for metal supply for scenario 1a, RoW Africa, of which the Democratic Republic of the Congo is a part, supplies about as much cobalt as all other world regions together (Fig. 3 left). In this scenario, there is ample supply of cobalt as unconstrained by-product from copper and nickel mining for the years after 2015, which means that not all by-products from copper and nickel mining have to undergo Co-recovery. If the mining output solution is constrained to exactly meet cobalt demand (Fig. 3 middle), the contribution of RoW America to global cobalt supply shifts towards the end of the modeling period, and RoW Africa and RoW Asia become the largest suppliers for 2015–2045.
A drop-out of one major cobalt supplier, here RoW Africa, would lead to a supply shortage of cobalt of about 20 % of global demand, provided the mining mix in the other world regions did not change and previously unused tailings from Cu and Ni mining were not used to produce cobalt (Fig. 3 right).
While the contributions of the different regions to global cobalt supply change significantly between the three scenarios, there is not much change in the regional pattern for Cu and Ni supply. Cu supply is dominated by RoW America, especially from Chile, and Ni supply is split rather evenly across RoW Asia, RoW America, Indonesia, Russia, and others. The drop-out of RoW Africa in scenario 3a is hardly noticeable in the Cu supply mix.
Cobalt is a typical accompanying metal. Future cobalt supply will be almost entirely met from the by-products of copper, nickel, or copper–nickel mines. More than 90 % of the cobalt will come from cobalt–copper and cobalt–nickel mines, while a fraction of 5–10 % comes from cobalt–copper–nickel mines (Fig. 4).
Supply of cobalt, copper, and nickel by mine type and scenario. Same scenarios as shown in Fig. 3
While the cobalt–nickel and cobalt–copper–nickel mines will account for 30–60 % of global nickel supply in the different scenarios, the mines with co-production of metals play almost no role in global copper supply. In Fig. 4, copper and nickel supply always meets demand because during the entire modeling time there is always sufficient supply from mines which have copper and nickel as their respective main product.
In the scenarios where cobalt supply is not part of the optimization routine, only a fraction of the cobalt needs to be recovered from the by-products of copper and nickel mining. This fraction lies around 60 % for most of the modeling period (Fig. 5) and is slowly declining, because cobalt-rich mines tend to be located in regions with higher mining risk, which are exploited in later years. Mudd et al. (2013) estimate that about 67 % of the Co contained in the deposits with reliable data can be economically recovered, which would mean that in our demand scenarios 1a, 2a, and 3a, all cobalt-containing by-product fractions would have to undergo cobalt recovery. Between 2046 and 2050, depending on the scenario, there are not enough copper resources left to justify the capacity increase required to match copper demand, and the model stops. For scenario 4b, in which the world economy gradually becomes more copper-intensive, this happens already in 2046 (see Additional file 3).
Minimum cobalt recovery rate from the by-products of copper and nickel mining required to match cobalt demand in the scenarios where cobalt supply is not part of the optimization routine. Dashed lines show periods when cobalt supply does not suffice to match demand
Dashed lines in Fig. 5 indicate supply shortages because the determined 'low risk' extraction pattern does not contain enough Co in the extracted ore to match global demand. However, from the mine capacity plot for scenario 3a (in Additional file 3), we can see that even in the period of shortage of cobalt supply, the installed mining capacity can provide enough cobalt to match demand. That means that there is enough capacity to rearrange the extraction pattern to match global demand, albeit at higher mining risk.
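In our reading, the minimum recovery rate plotted in Fig. 5 can be expressed as the ratio of cobalt demand to the cobalt contained in the ore extracted under the risk-minimal Cu/Ni pattern; the symbols below reuse those of the linear program and are our own shorthand, not the authors' notation:

$$r_{\min}(t) = \frac{D_{\text{Co}}(t)}{\sum_{m} G_{m}^{\text{Co}} \cdot P_{m}(t)}$$

where D_Co(t) is the global cobalt demand in year t and G_m^Co is the cobalt row of the grade matrix G_m. Values above 100 % then correspond to the dashed segments, where the ore extracted under the risk-optimal pattern does not contain enough cobalt.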
4.1 Comparison of MRIO-scenario with the actual development for 2007–2012
Table 3 shows the world refinery production figures compiled by the USGS together with our own model results (USGS 2014). It indicates that the hybrid MRIO model seems to underestimate future cobalt demand when compared with the values reported by the USGS.
Table 3 Comparison of cobalt consumption for the world: world refinery production (tons cobalt) and world cobalt demand growth (%/year), by year, as reported by the USGS and as calculated in this work (individual table values not reproduced here)
Furthermore, a brochure for a cobalt market forecast report by Roskill presents a growth estimate of more than 6 % per year for worldwide cobalt demand up to 2018. The authors of this brochure also expect cobalt demand to reach over 110,000 tons/year in 2018 (Roskill 2014), whereas our base scenario only estimates a demand of around 76,000 tons/year. Comparing our results with both the USGS and Roskill's numbers leads to the same conclusion: our projection of cobalt demand may underestimate growth in the near future, and the depletion of known resources might proceed even faster than we anticipate. A main reason for the accelerated use of cobalt could be its increasing application in new technologies, as pointed out in the introduction. Accelerated growth could be reproduced by the MRIO model by adjusting the coefficients of the A-matrix, as demonstrated by Leontief and Duchin, for example (Leontief and Duchin 1986).
4.2 Cobalt as a by-product of copper and nickel mining
Cobalt supply depends on sufficiently high copper and nickel demand. Our scenario calculations show that in a setting where cobalt supply is of no concern, the extraction pattern with minimal global political risk leads to ample supply of cobalt in the by-products of copper and nickel mining. If major resources like the ones in the Democratic Republic of the Congo cease to supply the world markets due to political unrest, however, this is likely to affect copper, nickel, and cobalt supply alike, and new supply patterns with potentially higher political and investment risk will have to be found to ensure that demand for these three metals can be met. While we could not find evidence for problems with cobalt supply in the next three decades, mainly because of the correspondingly high growth in copper and nickel demand in our scenarios, this may not be the case for accelerated cobalt use, for example, due to a massive upscaling of the production of cobalt-containing magnets or lithium-ion batteries. The future supply of specialty metals requires close monitoring of demand trends, a comprehensive assessment of mineral resources, and refined modeling techniques to generate scenarios for metal supply as a basis for investment and resource policy.
4.3 Limits of the demand model
EXIOBASE is the only one of the available MRIO models that contains separate sectors for copper and nickel mining; only cobalt demand had to be disaggregated from other nonferrous metals. Since all three studied metals are traded on global markets, their prices can be expected to vary across narrow ranges only and we therefore believe that the use of a monetary IO model as basis for modeling physical metal demand can be justified. The larger challenge lies in the creation of meaningful scenarios for the MRIO A-matrix and final demand. Our simple attempt shall give a first impression of possible future cobalt demand under the assumption that the present structure of the world economy is preserved. Our estimate for final demand could be refined using sector-specific income elasticities. Scenarios for the future A-matrix are relevant for a very wide range of applications, not only for our study. We argue that there should be a systematic, transparent, and inclusive effort by the IO community to generate scenarios for future MRIO tables, including data generated by integrated assessment models and studies of the possible efficiency and material requirements of future technology. This development is already ongoing, and first results include the THEMIS model for renewable energy supply (Hertwich et al. 2015) and the scenario work on climate change mitigation by Koning et al. (2015). Scenarios for future trade patterns, which enter the multiregional A-matrix, can be determined using the gravity model of trade (Tinbergen 1962) or, to determine trade patterns that follow a certain objective, the world trade model (Duchin et al. 2015; Duchin 2005; Strømman and Duchin 2006; Duchin and Stephen 2015).
Better data on the share of cobalt in the nonferrous metal consumption of different economic sectors are needed to produce more trustworthy demand estimates. Since this information is not part of macroeconomic statistics, it could be estimated using process inventory databases like ecoinvent (www.ecoinvent.org), the bilateral flows of cobalt embedded in different commodity groups determined by Nansai et al. (2014), and the end-use sector split estimated by Harper et al. (2012). As resource scarcity and material criticality become more established on the political agenda, more detailed statistics on inter-industrial metal use may become compulsory in the future, as is already the case for the use of potential conflict minerals in the US (Securities and Exchange Commission 2012).
4.4 Limits of the resource supply model
We used an aggregated representation of copper, nickel, and cobalt resources with 20 regions and seven resource types to determine regional patterns of resource extraction. More sophisticated and site-specific extraction models, like the one developed by Northey et al. (2014), can give a more detailed picture and may contain a more realistic, site-specific representation of capacity extension and the development of new mining projects. Such a model would also allow us to take the cut-off grade into account when determining the economically recoverable amount of resources. This is particularly important for cobalt, as deposits with higher copper or nickel grades, which would probably be exploited first, do not necessarily have high cobalt grades at the same time; what is true at an aggregated level might therefore not hold at the deposit scale. Investment and supply risks have a site-specific component, too. They depend not only on the political climate of the hosting country but also on how the operator interferes with local ecosystems and how local communities react to pollution, land use change, or relocation of people. A better index taking country- and site-specific considerations into account could therefore be developed, which could also include the mineralogy of the deposit and the associated environmental risk. Examples of environmental risks include the extraction of radioactive by-products (thorium in the case of dysprosium) and the arsenide nature of ores where cobalt is the main metal (Elshkaki and Graedel 2014; Mudd et al. 2013).
4.5 Scenarios for cobalt trade
Next to the location of cobalt-producing mines, the trade pattern of cobalt, which is outside the scope of this work, is another determinant of global cobalt supply chains and the supply risk of individual countries. What determines trade relationships? Srivastava and Green concluded that political instability reduces a country's exports, and that trade flows are especially large between countries with past colonial relations (Srivastava and Green 1986). Morrow et al. found that democracy and common geopolitical interests increase trade, but that strategic alliances do not necessarily do so. Pollins argues along the same lines, saying that trade is significantly influenced by broader political relations (Morrow et al. 1998, 1999; Pollins 1989). More recently, Umana Dajud classified the political proximity of countries according to the correlation of their votes in the UN, differences in forms of government, and the ideological distance of citizens, and found that political differences impact trade relations (Umana Dajud 2013). Another important aspect is the increasing willingness of the international community to stop the protracting effect of specialty metal ore exports on regional conflicts, for example, the export of tantalum-containing minerals from the Democratic Republic of the Congo (Moran et al. 2014), which may alter global trade patterns.
Future trade modeling needs to bridge the gap between these qualitative findings and their application in quantitative trade models such as the gravity and the world trade models (Duchin et al. 2015; Duchin 2005; Strømman and Duchin 2006; Duchin and Stephen 2015).
Currently, our model does not contain the refining stage of cobalt because we believe the location of refining to be of secondary importance, as refiners are much easier to relocate and expand than mines. Therefore, in our model, we did not consider any restrictions or risks associated with the trade of both ore and refined cobalt.
4.6 Recycling and dynamic modeling of the use phase of cobalt
Recycling is central to decoupling resource demand from economic development in regions with mature in-use stocks. With the high end-of-life recycling rate of cobalt [68 % were reported for the US (Graedel et al. 2011)] and because in-use stocks of cobalt have been growing continuously, one can expect rising amounts of Co-containing scrap in the future. Comprehensive long-term scenarios for metal cycles need to address recycling and therefore need to include dynamic stock models not only for the mineral resources of cobalt, but also for the use phase of cobalt. The combination of dynamic stock models and IO models has been demonstrated already (Nakamura et al. 2014; Kagawa et al. 2015), and it was applied in an MRIO context by Hertwich et al. to estimate the turnover rates of energy installations (Hertwich 2015). The combination of dynamic material stock models with input–output models with a by-product-technology construct, which would allow researchers to study how secondary materials can replace primary production, has to our knowledge not been attempted yet.
Estimating future resource depletion and its impact on the economy in different world regions is a complicated endeavor with uncertain results. Rather than trying to be complete in our assessment, we presented several elements that could eventually become part of a very comprehensive analysis of future material criticality. The elements we included are the global multiregional long-term scope, the consideration of both factor endowments (mineral resources) and mining risk, the decision of actors to expand or maintain production capacity, the coupling between main and accompanying metals, and the consequences of possible drop-outs of major suppliers on global supply chains. We showed that already with a stylized approach as used here, one can quantify the impact of economic growth and supply chain disruptions on resource depletion. Moreover, we extended existing methodology by combining MRIO with dynamic stock models of metal resources and by incorporating mining risk into a resources depletion model. These extensions may be relevant for applications in a different context.
World refined cobalt production in 2007 was 53,300 tons at a metal price of 30.55 $/lb (49.2 €/kg at the 2007 exchange rate) (USGS 2010). The total global supply of other nonferrous metals in EXIOBASE is 102,796 M€.
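As a back-of-the-envelope check (our own arithmetic, not part of the original text), these two figures are consistent with the roughly 2.5 % share of refined cobalt in the 'Other non-ferrous metal' sectors quoted in the hybridization discussion:

$$\frac{53{,}300\ \text{t}\times 49.2\ \text{€/kg}}{102{,}796\ \text{M€}} \approx \frac{2{,}622\ \text{M€}}{102{,}796\ \text{M€}} \approx 2.6\,\%$$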
SP designed the research and built the MRIO model from the CREEA MR-SUT. AT collected the foreground data and built the hybrid model, the linear program, and the dynamic stock model, and performed the analysis. Both AT and SP wrote the paper. Both authors read and approved the final manuscript.
The work of SP was funded by the European Commission under the DESIRE Project (Grant number 308552). The research was conducted without involvement of the funding source. The authors thank Richard Wood for his help with parsing the CREEA MR-SUT and for advice regarding the hybridization of the MRIO, Guillaume Majeau-Bettez for advice and programming assistance on the treatment of exclusive by-products in a multiregional SUT, and Yasushi Kondo for making us aware of the economic freedom index of the Fraser Institute. AT thanks Richard Wood for the time given to complete this side project. The authors thank the journal's editor and two anonymous reviewers for their constructive comments.
40008_2016_35_MOESM1_ESM.pdf (2 mb)
Additional file 1: Additional file that contains the details regarding the model and the MRIO framework used (aggregation of products and regions) and that also contains some additional results.
40008_2016_35_MOESM2_ESM.xlsx (39 kb)
Additional file 2: Additional file providing the set of input data used for the hybridization of the MRIO, economic growth and the extraction model.
40008_2016_35_MOESM3_ESM.zip (10.3 mb)
Additional file 3: Additional file providing the complete set of figures and results for all scenarios as defined in Table 2.
Busch J, Steinberger JK, Dawson DA, Purnell P, Roelich KE (2014) Managing critical materials with a technology-specific stocks and flows model. Environ Sci Technol 48(2):1298–1305
Cervantes M, Green KP, Wilson A (2014) Survey of Mining Companies: 2013/2014. http://www.fraserinstitute.org/research-news/display.aspx?id=20902
Duchin F (2005) A world trade model based on comparative advantage with m regions, n goods, and k factors. Econ Syst Res 17(2):141–162. http://www.tandfonline.com/doi/abs/10.1080/09535310500114903
Duchin F, Stephen SH (2015) Sustainable use of global resources: combining multiregional input-output analysis with a world trade model for evaluating scenarios. Part 2: implementation. J Ind Ecol 00:1–9
Duchin F, Stephen SH, Strømman AH (2015) Sustainable use of global resources: combining multiregional input-output analysis with a world trade model for evaluating scenarios. Part 1: conceptual framework. J Ind Ecol 00:1–8
Elshkaki A, Graedel TE (2013) Dynamic analysis of the global metals flows and stocks in electricity generation technologies. J Clean Prod 59:260–273
Elshkaki A, Graedel TE (2014) Dysprosium, the balance problem, and wind power technology. Appl Energy 136:548–559
EU DESIRE Project (2013) EU DESIRE Project
European Commission (2010) Critical raw materials for the EU, report of the Ad hoc working group on defining critical raw materials. Eucom 39(July):1–84
Frenzel M, Tolosana-Delgado R, Gutzmer J (2015) Assessing the supply potential of high-tech metals – a general method. Res Policy 46:45–58. http://linkinghub.elsevier.com/retrieve/pii/S0301420715000781
Gerling JP, Wellmer F, Gerling JP (2004) Raw material availability - with a focus on fossil energy resources. World Min Surf Undergr 56(4):254–262
Gibon T, Wood R, Arvesen A, Bergesen JD, Suh S, Hertwich EG (2015) A methodology for integrated, multiregional life cycle assessment scenarios under large-scale technological change. Environ Sci Technol 49(18):11218–11226. http://pubs.acs.org/doi/10.1021/acs.est.5b01558
Graedel TE, Allwood J, Birat JP, Buchert M, Hagelüken C, Reck BK, Sibley SF, Sonnemann G (2011) What do we know about metal recycling rates? J Ind Ecol 15(3):355–366. http://doi.wiley.com/10.1111/j.1530-9290.2011.00342.x
Graedel TE, Reck BK (2015) Six years of criticality assessments: what have we learned so far? J Ind Ecol. http://doi.wiley.com/10.1111/jiec.12305
Graedel TE, Barr R, Chandler C, Chase T, Choi J, Christoffersen L, Friedlander E et al (2012) Methodology of metal criticality determination. Environ Sci Technol 46(2):1063–1070
Greenfield A, Graedel TE (2013) The omnivorous diet of modern technology. Resour Conserv Recycl 74:1–7
Harper EM, Kavlak G, Graedel TE (2012) Tracking the metal of the goblins: cobalt's cycle of use. Environ Sci Technol 46(2):1079–1086
Hatayama H, Daigo I, Matsuno Y, Adachi Y (2010) Outlook of the world steel cycle based on the stock and flow dynamics. Environ Sci Technol 44(16):6457–6463
Hawkins T, Hendrickson C, Higgins C, Matthews HS, Suh S (2007) A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis. Environ Sci Technol 41(3):1024–1031
Hertwich EG, Gibon T, Bouman EA, Arvesen A, Suh S, Heath GA, Bergesen JD, Ramirez A, Vega MI, Shi L (2015) Integrated life-cycle assessment of electricity-supply scenarios confirms global environmental benefit of low-carbon technologies. Proc Natl Acad Sci USA 112(20):6277–6282. http://www.pnas.org/lookup/doi/10.1073/pnas.1312753111
IPCC (2014) Mitigation of climate change: the Working Group III (WGIII) contribution to the Fifth Assessment Report on mitigation of climate change. Geneva, Switzerland
Kagawa S, Nakamura S, Kondo Y, Matsubae K, Nagasaka T (2015) Forecasting replacement demand of durable goods and the induced secondary material flows. J Ind Ecol 19(1):10–19
Knoeri C, Wäger PA, Stamp A, Althaus HJ, Weil M (2013) Towards a dynamic assessment of raw materials criticality: linking agent-based demand with material flow supply modelling approaches. Sci Total Environ 461–462:808–812
de Koning A, Huppes G, Deetman S, Tukker A (2015) Scenarios for a 2°C world: a trade-linked input–output model with high sector detail. Climate Policy 3062(December):1–17. http://www.tandfonline.com/doi/abs/10.1080/14693062.2014.999224
Leontief W, Duchin F (1986) The future impact of automation on workers. Oxford University Press, New York
Liu G, Bangs CE, Müller DB (2012) Stock dynamics and emission pathways of the global aluminium cycle. Nat Clim Change 2(10):1–5
Løvik AN, Restrepo E, Müller DB (2015) The global anthropogenic gallium system: determinants of demand, supply and efficiency improvements. Environ Sci Technol 49(9):5704–5712. http://www.ncbi.nlm.nih.gov/pubmed/25884251
Miller RE, Blair PD (2009) Input-output analysis: foundations and extensions, 2nd edn. Cambridge University Press
Moran D, McBain D, Kanemoto K, Lenzen M, Geschke A (2014) Global supply chains of coltan: a hybrid life cycle assessment study using a social indicator. J Ind Ecol 00:1–9
Morrow JD, Siverson RM, Tabares TE, Stanford JDM (1998) The political determinants of international trade: the major powers. Am Political Sci Rev 92(3):649–661
Morrow JD, Siverson RM, Tabares TE (1999) Correction to "The political determinants of international trade". Am Political Sci Rev 93(4):931–933
Mudd G, Weng Z, Jowitt S, Turnbull ID, Graedel TE (2013b) Quantifying the recoverable resources of by-product metals: the case of cobalt. Ore Geol Rev 55(C):87–98. http://dx.doi.org/10.1016/j.oregeorev.2013.04.010
Mudd G, Jowitt S (2014) A detailed assessment of global Ni resource trends and endowments. Econ Geol 109:1813–1841
Mudd G, Weng Z, Jowitt S (2013) A detailed assessment of global Cu resource trends and endowments. Econ Geol 108:1163–1183
Nakajima K, Ohno H, Kondo Y, Matsubae K, Takeda O, Miki T, Nakamura S, Nagasaka T (2013) Simultaneous material flow analysis of nickel, chromium, and molybdenum used in alloy steel by means of input-output analysis. Environ Sci Technol 47(9):4653–4660
Nakamura S, Murakami S, Nakajima K, Nagasaka T (2008) Hybrid input-output approach to metal production and its application to the introduction of lead-free solders. Environ Sci Technol 42(10):3843–3848
Nakamura S, Kondo Y, Kagawa S, Matsubae K, Nakajima K, Nagasaka T (2014) MaTrace: tracing the fate of materials over time and across products in open-loop recycling. Environ Sci Technol 48(13):7207–7214
Nansai K, Nakajima K, Kagawa S, Kondo Y, Suh S, Shigetomi Y, Oshita Y (2014) Global flows of critical metals necessary for low-carbon technologies: the case of neodymium, cobalt, and platinum. Environ Sci Technol 48(3):1391–1400
Northey S, Mohr S, Mudd G, Weng Z, Giurco D (2014) Modelling future copper ore grade decline based on a detailed assessment of copper resources and mining. Resour Conserv Recycl 83:190–201. doi:10.1016/j.resconrec.2013.10.005
Nuss P, Harper EM, Nassar NT, Reck BK, Graedel TE (2014) Criticality of iron and its principal alloying elements. Environ Sci Technol 48(7):4171–4177
OECD (2015) GDP long-term forecast. OECD. https://data.oecd.org/gdp/gdp-long-term-forecast.htm. Accessed 3 Nov 2015
Ohno H, Matsubae K, Nakajima K, Nakamura S, Nagasaka T (2014) Unintentional flow of alloying elements in steel during recycling of end-of-life vehicles. J Ind Ecol 18(2):242–253
Pauliuk S, Hertwich EG (2015) Prospective models of society's future metabolism: what industrial ecology has to contribute. In: Roland C, Angela D (eds) Taking stock of industrial ecology. Springer, Dordrecht (in press)
Pauliuk S, Milford RL, Müller DB, Allwood J (2013) The steel scrap age. Environ Sci Technol 47(7):3448–3454
Peiró LT, Méndez GV, Ayres RU (2013) Material flow analysis of scarce metals: sources, functions, end-uses and aspects for future supply. Environ Sci Technol 47(6):2939–2947
Pollins BM (1989) Conflict, cooperation and commerce: the effect of international political interactions on bilateral trade flows. Am J Polit Sci 33(3):737–761
Prior T, Giurco D, Mudd G, Mason L, Behrisch J (2012) Resource depletion, peak minerals and the implications for sustainable resource management. Glob Environ Change 22(3):577–587
Roelich KE, Dawson DA, Purnell P, Knoeri C, Revell R, Busch J, Steinberger JK (2014) Assessing the dynamic material criticality of infrastructure transitions: a case of low carbon electricity. Appl Energy 123:378–386
Roskill (2014) Cobalt: market outlook to 2018. Roskill
Securities and Exchange Commission (2012) 17 CFR Parts 240 and 249b. http://www.sec.gov/rules/final/2012/34-67716.pdf
Seddon M (2001) The cobalt market: current volatility versus future stability? Appl Earth Sci 110(2):71–74
Shigetomi Y, Nansai K, Kagawa S, Tohno S (2015) Trends in Japanese households' critical-metals material footprints. Ecol Econ 119:118–126. http://linkinghub.elsevier.com/retrieve/pii/S0921800915003468
Srivastava RK, Green RT (1986) Determinants of bilateral trade flows. J Bus 59(4):623–640
Strømman AH, Duchin F (2006) A world trade model with bilateral trade based on comparative advantage. Econ Syst Res 18(3):281–297
Tinbergen J (1962) Shaping the world economy: suggestions for an international economic policy. Twentieth Century Fund, New York
Tukker A, Dietzenbacher E (2013) Global multiregional input–output frameworks: an introduction and outlook. Econ Syst Res 25(1):1–19. http://www.tandfonline.com/doi/abs/10.1080/09535314.2012.761179
Umana Dajud C (2013) Political proximity and international trade. Econ Pol 25(3)
USGS (2014) Minerals yearbook: cobalt 2012
Wood R, Stadler K, Bulavskaya T, Lutter S, Giljum S, de Koning A, Kuenen J et al (2014) Global sustainability accounting: developing EXIOBASE for multi-regional footprint analysis. Sustainability 7(1):138–163
World Bank (2015) GDP (current US$). World Bank. http://databank.worldbank.org/data/reports.aspx?source=2&type=metadata&series=NY.GDP.MKTP.CD. Accessed 3 Nov 2015
Zuser A, Rechberger H (2011) Considerations of resource availability in technology development strategies: the case study of photovoltaics. Resour Conserv Recycl 56(1):56–65
© Tisserant and Pauliuk. 2016
1. Industrial Ecology Programme, Department of Energy and Process Engineering, Norwegian University of Science and Technology, Trondheim, Norway
2. Faculty of Environment and Natural Resources, University of Freiburg, Freiburg, Germany
Tisserant, A. & Pauliuk, S. Economic Structures (2016) 5: 4. https://doi.org/10.1186/s40008-016-0035-x
Received 24 April 2015
Revised 04 December 2015
Publisher Name Springer Berlin Heidelberg
Advanced materials and technologies for supercapacitors used in energy conversion and storage: a review
M. I. A. Abdel Maksoud (ORCID: orcid.org/0000-0001-7708-9646)1,
Ramy Amer Fahim2,
Ahmed Esmail Shalan3,4,
M. Abd Elkodous5,6,
S. O. Olojede7,
Ahmed I. Osman (ORCID: orcid.org/0000-0003-2788-7839)8,
Charlie Farrell9,10,
Ala'a H. Al-Muhtaseb11,
A. S. Awed12,
A. H. Ashour1 &
David W. Rooney8
Environmental Chemistry Letters, volume 19, pages 375–439 (2021)
Supercapacitors are increasingly used for energy conversion and storage systems in sustainable nanotechnologies. Graphite is a conventional electrode utilized in Li-ion-based batteries, yet its specific capacity of 372 mA h g−1 is not adequate for supercapacitor applications. Interest in supercapacitors is due to their high energy capacity, storage over shorter periods and longer lifetime. This review compares the following materials used to fabricate supercapacitors: spinel ferrites, e.g., MFe2O4, MMoO4 and MCo2O4, where M denotes a transition metal ion; perovskite oxides; transition metal sulfides; carbon materials; and conducting polymers. The application window of perovskites can be controlled by the cations in the sublattice sites. Cations increase the specific capacitance because they possess large-orbital valence electrons, which increase the oxygen vacancies. Electrodes made of transition metal sulfides, e.g., ZnCo2S4, display a high specific capacitance of 1269 F g−1, which is four times higher than that of transition metal oxides, e.g., Zn–Co ferrite, of 296 F g−1. This is explained by the low charge-transfer resistance and the high ion diffusion rate of transition metal sulfides. Composites made of magnetic oxides or transition metal sulfides with conducting polymers or carbon materials have the highest capacitance activity and cyclic stability. This is attributed to oxygen and sulfur active sites, which foster electrolyte penetration during cycling and, in turn, create new active sites.
The rising global population and the global energy crisis have led to concerns regarding electrical energy generation and consumption. There is therefore a need for an alternative energy storage device with a higher capacity than current technologies. Until now, the storage of electrical energy has been based almost exclusively on batteries and capacitors. Batteries have been the most utilized and preferred candidate, owing to their high energy capacity coupled with the modest power delivered. However, when substantial energy is required at high power, capacitors remain the suitable device to date. Despite their benefits, both batteries and capacitors are inadequate for storing the high energy and power density required for effective consumption and performance of renewable energy systems (Najib and Erdem 2019). Inventors and innovators in the field have been encountering bottlenecks with current solutions, such as the short lifecycles and shelf lives associated with batteries. This was the case until revolutionary trends brought about applications of nanotechnology in the manufacturing of electrical appliances and large-capacity storage devices (Burke and Zhao 2015). Nanotechnology is an advancement in the field of technology that deals with the manipulation and regulation of substances on the nanoscale, employing scientific skills from diverse biomedical and industrial approaches (Soares et al. 2018). Nanoparticles, nano-sized objects with three external nanoscale dimensions, are the fundamental constituents of nanotechnology, while nanomaterials are materials with interior or exterior structures on the nanoscale dimension (Anu and Saravanakumar 2017; Jeevanandam et al. 2018). Nanomaterials possess unique chemical and physical characteristics that offer advantages and promote them as appropriate candidates for extensive utilization in fields such as electronics (Kang et al. 2015) and supercapacitors, where the storage of energy is required (Saha et al. 2018). It is now evident that energy storage systems are an important way to offer a solution to the rising demand in world energy generation and consumption (Nocera 2009).
Supercapacitors are electrochemical energy storage devices possessing both great power density and energy density, with a long lifecycle and fast charging/discharging (Sun et al. 2018a). These properties are the reason for the high energy-storage ability exhibited by supercapacitors for technological advancement (Chen and Dai 2013). Supercapacitors have been described as capacitors that offer a storage capacity larger than other capacitors, with low internal resistance, bridging the gap between rechargeable cells and conventional capacitors. In addition to high power capacity and longevity, low weight, a wide operating temperature range of − 40 °C to 70 °C, ease of packaging and affordable maintenance are the main advantages supercapacitors have over other energy storage devices (Wang et al. 2009). The components of supercapacitors are an electrolyte, two electrodes and a separator which electrically isolates the two electrodes. The electrodes represent the most essential and fundamental constituent of supercapacitors (Pope et al. 2013; Iro et al. 2016); hence, the performance of a supercapacitor largely depends on the electrochemical properties of the electrodes, the voltage range and the electrolyte. Iro et al. (2016) reported that applications of supercapacitors, such as complementing battery power during emergency power supply and in electric vehicle power systems, largely depend on these useful attributes. The wide usefulness of supercapacitors has been described in fuel cell vehicles, low-emission hybrid vehicles, electric vehicles, forklifts, power quality upgrading and load cranes (Miller and Simon 2008; Cai et al. 2016). The fabrication of supercapacitors using printing technology has utilized diverse nanomaterials such as conductive polymers, electrolytes, transition metal carbides, transition metal dichalcogenides, nitrides and hydroxides (Sun et al. 2018a).
Magnetic metal oxide nanoparticles represent an attractive type of materials among inorganic solids because they are cheap and easy to prepare in large quantities (Masala and Seshadri 2004). Among different magnetic materials, spinel ferrites and inorganic perovskite oxides have superior performance as an electrode in supercapacitor applications. The emerging evidence has revealed that spinel ferrites of different elements are currently applicable in the design of supercapacitor energy storage devices. Spinel ferrite nanomaterials possess a high energy density, durability and good capacitance retention, high power and effective long-term stability (Elkholy et al. 2017; Liang et al. 2020). Recently, manganese zinc ferrite (MnZnFe2O4) nanoneedles were successfully synthesized, with higher specific capacitance than that of MnFe2O4 and ZnFe2O4. More so, the nanoneedles fabricated were found to exhibit a high surface area, powerful long-term stability and very high columbic effectiveness, which makes it suitable for supercapacitors application (Ismail et al. 2018). Perovskite oxides are functional nanomaterials that have received great attention to potential applications, and it has been widely employed in the fabrication of anion-intercalation supercapacitors. These nanomaterials are greatly influenced by valence state of B-site element, surface area and internal resistance. More importantly, research on energy and power densities of perovskite oxides are scanty (Nan et al. 2019; Ding et al. 2017). Design of La-based perovskite with high density, wide voltage window and high energy capacity for a flexible supercapacitor application was reported in the literature (Ma et al. 2019a). Although, the transition metal oxides have relatively poor conductivity and thus poor capacitance. Therefore, an oxygen replacement with sulfur was recently performed which led to transition metal sulfides. They have been viewed as materials capable of application in the fabrication of supercapacitors owing to their characteristics such as good electrical conductivity, high specific capacitance, electrochemical redox sites and minimal electronegativity, which led to the synthesis of ternary nanostructures like Co0.33Fe0.67S2 in supercapacitors application (Liu et al. 2018a). In addition, the highly flexible, lightweight asymmetric supercapacitor "graphene fibers/NiCo2S4" was fabricated with an extremely high value of both energy density and volumetric capacity (Cai et al. 2016). This was in search for a more durable and efficient energy storage device with high volumetric capacity, high energy density and wide voltage window. The partially substituting Co by the transition metals (i.e., Zn, Mn, Ni, and Cu) in the Co3O4 lattice leads to produce an inverse spinel structure, in which the external cation occupies the B-sites, while cobalt occupies both the A- and B-sites (Kim et al. 2014). This presents effective channels for ion diffusion enrichment toward charge carriers (electrons or holes) that jump into the A-site and B-site for high electrical conduction (Liu et al. 2018b). ZnCo2O4 nanoparticles show the specific capacitance values of 202, 668 and 843, 432 F g−1 (Bhagwan et al. 2020). The electrochemical characteristics of transition metal sulfides are much better than the electrochemical properties of transmission metal oxides. This can be explained by the presence of sulfur atoms instead of oxygen atoms. 
Hence, the lower electronegativity of sulfur compared with oxygen makes electron transfer in the metal sulfide structure easier than in the metal oxide form. Replacing oxygen with sulfur thus provides more flexibility for nanomaterial synthesis and fabrication (Jiang et al. 2016). Li et al. (2019a) found that the ZnCo2S4 electrode displays an extraordinary specific capacitance of ~ 1269 F g−1, which is four times that of the Zn–Co ferrite electrode (~ 296 F g−1), because the ZnCo2S4 electrode has a lower charge-transfer resistance and, likewise, an exceptional ion diffusion rate compared with the ZnCo2O4 electrode.
Furthermore, graphene and carbon nanotubes are carbon-derived nanomaterials that have received great attention for their potential application as efficient electrode materials in the design of supercapacitors, owing to their high mechanical strength, great specific surface area and, most importantly, competent electrical properties (Chen and Dai 2013). Other forms of carbon nanomaterials, such as carbon derivatives, xerogels, carbon fibers, activated carbon and templated carbon, have likewise been applied in the design of supercapacitors, where they also serve as supercapacitor electrodes. These materials possess long lifecycles, durable power density, lasting cycle durability and desirable coulombic reliability (Yin et al. 2014). Carbon-based nanomaterials are relatively cheap, readily accessible and very common, with a characteristic permeability which enables easy penetration of electrolytes into the electrodes and thereby boosts the capacitance of the supercapacitors. Besides, their huge surface area and effective electrical conductance make them applicable in electric double-layer supercapacitors (Yang et al. 2019a; Cheng et al. 2020a). In the same context, an extraordinary specific surface area and conductivity are demanded to secure excellent capacitive performance of the electrodes. Therefore, mineral oxides, two-dimensional carbon composites and polymer composites that possess high conductivity are normally utilized in high-performance electric devices. In particular, two-dimensional carbon composites improve capacitive performance by enhancing surface area, porosity and electrical conductivity. Notwithstanding this, the efficiency of ZnCo2O4 needs further improvement through morphological and chemical modifications (Kathalingam et al. 2020). Hence, the incorporation of nitrogen-doped graphene oxide and polyaniline with ZnCo2O4 affects the electrochemical performance: the prepared electrode exhibited a high capacitance of about 720 F g−1 and retained ~ 96 % of its original capacitance over 10 × 103 cycles (Kathalingam et al. 2020). Also, the fabricated ZnCo2S4@hydrothermal carbon spheres/Fe2O3@pyrolyzed polyaniline nanotube electrode unveiled a high capacity of about ~ 150 mA h g−1, retained 82 % of its original capacity after 6 × 103 cycles and confirmed a huge energy density (~ 85 W h kg−1) at a moderate power density of 460 W kg−1 (Hekmat et al. 2020).
Conducting polymers are pseudo-capacitive materials with poor lifecycles compared with carbon-based materials (Snook et al. 2010). Owing to numerous good properties such as flexibility, conductivity, ease of synthesis, financial viability and high pseudo-capacitance, conducting polymers such as polythiophene, polypyrrole and polyaniline have received great attention for potential supercapacitor applications. Despite these good properties, pure conducting polymers exhibit poor cycling stability and lower power and energy densities (Huang et al. 2017a).
This review focuses on spinel ferrites MFe2O4, MMoO4 and MCo2O4, where M denotes a transition metal ion. Additional focus areas include perovskite oxides, transition metals sulfides, carbon materials and conducting polymer materials, as materials that have been extensively and widely employed in the fabrication of supercapacitors to establish loopholes in some of these nanomaterials. This would ultimately offer guidelines on how to design better energy storage devices with a higher power, density and sufficient storage ability.
Supercapacitor-based on spinel ferrites
Spinel ferrites constitute metal oxide compounds of minute classes of transition metals that are originally obtained from magnetite (Fe3O4). The spinel ferrites exhibit good magnetic and electrical characteristics, which has brought about its broad applications in high-density data storage, water remediation, drug delivery, sensors, spintronics, immunoassays using magnetic labeling, hyperthermia of cancer cells, optical limiting, magnetocaloric refrigeration and magnetic resonance imaging (Farid et al. 2017; Dar and Varshney 2017; Amirabadizadeh et al. 2017; Pour et al. 2017; Alcalá et al. 2017; Yan and Luo 2017; Sharma et al. 2017; Winder 2016; Samoila et al. 2017; Niu et al. 2017; Anupama et al. 2017; El Moussaoui et al. 2016; Patil et al. 2016; Ghafoor et al. 2016; Ashour et al. 2018; Amiri and Shokrollahi 2013; Ouaissa et al. 2015; Houshiar et al. 2014; Maksoud et al. 2020a, b; Abdel Maksoud et al. 2020a; Hassan et al. 2019; Patil et al. 2018; Žalnėravičius et al. 2018; Thiesen and Jordan 2008; Koneracká et al. 1999; Arruebo et al. 2007; Basuki et al. 2013; Gupta and Gupta 2005a, b; Jain et al. 2008; Liu et al. 2005; Abdel Maksoud et al. 2020b). Besides these applications, raising attention in energy storage research via dissemination is due to the fast-growing demand for electronic devices that are manufactured to be smaller, lighter and relatively cheaper. Therefore, an all-in-one device demands effective energy storage components which will fit into such design criteria with enhanced energy performance (Reddy et al. 2013; Zhu et al. 2015; Hao et al. 2015). The crystal structure of some oxides such as ionic oxides, specifically oxides of Fe, permits visibility of complex composition of magnetic ordering. The type of such magnetic ordering is known as ferrimagnetism. The structure of these materials has two spins (up and down), and also, the net magnetic moment of all the directions is not zero (Reitz et al. 2008). For the various neighboring sublattices, the atoms' magnetic moments are opposed to each other, nevertheless, the opposing moments are unbalanced (O'handley 2000; Cullity and Graham 2011).
Spinel ferrites are distinguished via the nominal composition MFe2O4, where M denotes divalent cations possessing an ionic radius within 0.6 and 1 Å, such examples are magnesium, copper, nickel, manganese, zinc, cobalt, etc. Also, M can be substituted by any different metal ions. The ferric ions can be substituted via extra trivalent cations such as aluminum, chromium, etc. The spinel structure originates from the MgAl2O4 which owns a cubic structure. This crystal was first discovered by Bragg and by Nishikawa (Ashour et al. 2014).
In the spinel lattice, each unit cell is cubic and comprises eight MFe2O4 formula units. The large O2− ions form a face-centered cubic (FCC) lattice. The cubic cell contains two types of interstitial sites: (1) tetrahedral sites surrounded by 4 oxygen anions (A-sites), and (2) octahedral sites surrounded by 6 oxygen anions (B-sites) (Shah et al. 2018; Yadav et al. 2018; Kefeni et al. 2020). Figure 1 shows the tetrahedral and octahedral positions in the FCC lattice (Cullity and Graham 2011; Ajmal 2009; Vijayanand 2010; Bhame 2007; Sachdev 2006).
Adapted with permission from Kefeni et al. (2020), Copyright 2020, Elsevier
Spinel ferrite structure showing oxygen (red), tetrahedral (yellow) and octahedral (blue) sites.
On the basis of the cation distribution, spinel ferrites can be subdivided into three classes. The possible distribution of the metal ions can be represented by the general formula (Cullity and Graham 2011):
$$\left( {\text{M}}_{\delta }^{2 + } \,{\text{Fe}}_{1 - \delta }^{3 + } \right)\left[ {\text{M}}_{1 - \delta }^{2 + } \,{\text{Fe}}_{1 + \delta }^{3 + } \right]{\text{O}}_{4}$$
where δ is the degree of inversion. The ions inside the brackets () are located in tetrahedral sites, while those inside the brackets [] occupy the octahedral sites. According to this distribution, there are three categories of spinel ferrites:
Normal spinel (δ = 1): the formula becomes (M2+) [Fe2] O4 and the divalent metal ions occupy the tetrahedral sites. ZnFe2O4 and CdFe2O4 are examples of normal spinel ferrites.
Inverse spinel (δ = 0): the formula becomes (Fe3+) [M2+Fe3+] O4. In this case the divalent metal ions occupy only octahedral sites, while the iron is divided equally between the tetrahedral and octahedral sites. NiFe2O4 and CoFe2O4 are examples of inverse spinel ferrites.
Intermediate (mixed) spinel (0 < δ < 1): the M2+ and Fe3+ ions are distributed over both the tetrahedral and octahedral sites. MnFe2O4 is an example of an intermediate ferrite (Cullity and Graham 2011); the short sketch after this list illustrates how the site occupancies follow from δ.
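To make the role of δ in the general distribution formula concrete, the minimal sketch below (an illustrative aid, not taken from the cited works) prints the tetrahedral- and octahedral-site occupancies per formula unit for the three cases discussed above; the value δ = 0.2 used for the mixed case is purely hypothetical.

```python
# Illustrative sketch: site occupancies in (M_d Fe_{1-d})[M_{1-d} Fe_{1+d}]O4
# for a chosen distribution parameter d (delta). The delta values are examples only.

def spinel_occupancy(delta: float) -> dict:
    """Return M2+/Fe3+ fractions on the tetrahedral (A) and octahedral (B) sites."""
    return {
        "A-site (tetrahedral)": {"M2+": delta, "Fe3+": 1.0 - delta},
        "B-site (octahedral)":  {"M2+": 1.0 - delta, "Fe3+": 1.0 + delta},
    }

for label, delta in [("normal (e.g. ZnFe2O4)", 1.0),
                     ("inverse (e.g. NiFe2O4)", 0.0),
                     ("mixed (hypothetical delta)", 0.2)]:
    print(label, spinel_occupancy(delta))
```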
For anode materials, three charge-storage mechanisms are generally considered: alloying–de-alloying, intercalation–deintercalation and conversion reactions (Park et al. 2010; Zhang 2011; Kumar et al. 2004). As transition metal oxides, spinel ferrites store charge through the conversion-reaction mechanism. During the initial discharge, the spinel crystal structure is broken down into metal nanoparticles together with Li2O; the metal particles then promote the electrochemical reaction through the reversible formation/decomposition of Li2O, which provides the route for the conversion reaction (Jiang et al. 2013; Yuvaraj et al. 2016). To obtain Li-ion batteries with outstanding power and energy densities, electrode materials with high specific capacities, suitable cell voltages and high Li-ion diffusion coefficients are necessary. Since the work of Poizot et al. (2000), transition metal-oxide nanoparticles have been examined as possible electrodes for Li-ion batteries. They exhibit remarkable electrochemical characteristics, reaching 700 mA h g−1 with no loss of the initial capacity over 100 cycles at specific charging rates. This superior electrochemical reactivity of spinel ferrites makes them promising candidates for the further development of such batteries.
Spinel MFe2O4 where M is Co, Zn and Mn
In the past few years, attention has shifted toward the application of spinel ferrites and their derived composites (Shin et al. 2018; Reddy and Yun 2016). Spinel ferrites with the nominal composition MFe2O4, where M is magnesium, zinc, copper, manganese, nickel or cobalt, present notable discharge capacities up to 1000 mA h g−1, roughly three times that of commercial graphite anodes (Yuvaraj et al. 2016; Yin et al. 2013).
Cobalt ferrite CoFe2O4 nanoparticles
Cobalt ferrite CoFe2O4 nanoparticles are a well-known ferrimagnetic material. CoFe2O4 has an inverse spinel structure in which the Co2+ ions are located on the B-sites and the Fe3+ ions are distributed over both the A- and B-sites, as in the formula (Fe3+) [Co2+Fe3+] O4. The ferrite lattice is an interlacing arrangement of positively charged metal ions and negatively charged divalent oxygen ions. CoFe2O4 is a likely candidate for sensing devices as well as active and passive microwave devices owing to its remanence, coercivity and high resistance (Sharifi et al. 2012; Yin et al. 2006). CoFe2O4 is cubic, belonging to the Fd3m space group, and is an insulator (ρ ≈ 10⁵ Ω m) with a saturation magnetization of 90 A m2 kg−1 and a magnetic moment µ = 3.7 µB. Millimetre-sized single crystals of CoFe2O4 show an almost negligible coercive field, whereas at 300 K CoFe2O4 samples with crystallite sizes of 120 and 40 nm exhibit coercive fields of about 453 and 4650 Oe, respectively (Amiri and Shokrollahi 2013; Ouaissa et al. 2015; Houshiar et al. 2014). CoFe2O4 stores Li-ions via a conversion reaction and theoretically possesses a high specific capacity (> 900 mA h g−1). However, it has critical disadvantages such as a large volume change, which causes pulverization and agglomeration of the active material, and a high resistivity, which leads to reduced cycling stability and low rate capability (Lavela et al. 2009; Kumar et al. 2014). Recently, Hennous et al. (2019) studied the 57Fe Mössbauer spectra of CoFe2O4 as a function of temperature (Fig. 2). Each spectrum shows magnetic splitting (roughly six lines) with line broadening attributed to Fe ions experiencing magnetic hyperfine fields at several inequivalent sites. The overlapping sextets arise from the different numbers of cobalt and iron neighbors around the tetrahedral and octahedral sites. At low temperature the tetrahedral site has a magnetic hyperfine field of about 50 T, which decreases steadily with rising temperature (to about 40 T at 227 °C), while the octahedral site has a larger hyperfine field that likewise decreases with increasing temperature. CoFe2O4 nanoparticles can enhance the capacitance of a composite electrode and show considerable electrochemical activity, which improves the energy and power densities of a supercapacitor. Recently, Elseman et al. (2020) established a facile one-step route to synthesize a CoFe2O4/carbon spheres nanocomposite as a novel electrode: glucose (as the carbon-sphere source) was combined directly with CoFe2O4 via a solvothermal approach under specific conditions. The electrode delivered a significantly increased electrochemical capacitance of 600 F g−1, losing only 5.9% of its initial capacitance over 5000 cycles, and exhibited an energy density of 27.08 W h kg−1 and a power density of 750 W kg−1. This is attributed to its hierarchical structure, which allows high electrical conductivity. These results show that the prepared composite electrode combines a high specific capacity with excellent retention, making it a very attractive candidate for supercapacitor materials. Reddy et al. (2018a) also used ZnO to enhance the electrochemical properties of CoFe2O4.
The electrochemical analyses showed that the ZnO@CoFe2O4 nanocomposite electrode in 3 M aqueous KOH delivered a large specific capacitance (4050 F g−1) with an excellent energy density of about 77 W h kg−1. The electrode presented excellent cycling stability, retaining about 91% of its specific capacitance after 1000 cycles. By comparison, the unmodified CoFe2O4 electrode exhibited a lower specific capacitance (~ 3500 F g−1) and poorer cycling stability (~ 50%) than the ZnO@CoFe2O4 nanocomposite electrode. These outcomes confirm the nanocomposite as a promising electrode for next-generation supercapacitors.
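For orientation, figures of merit such as those quoted above (specific capacitance, energy density and power density) are commonly estimated from galvanostatic charge–discharge (GCD) data using the generic relations C = IΔt/(mΔV), E = CΔV²/(2 × 3.6) and P = 3600E/Δt. The minimal sketch below applies these relations to purely hypothetical inputs; it is not a reproduction of the calculations in Elseman et al. (2020) or Reddy et al. (2018a), and reported values also depend on whether capacitance refers to a single electrode or a full device.

```python
# Generic relations often used to report supercapacitor metrics from
# galvanostatic charge-discharge (GCD) data. All inputs below are hypothetical.

def gcd_metrics(current_a: float, discharge_time_s: float,
                mass_g: float, voltage_window_v: float):
    """Return (C in F/g, E in Wh/kg, P in W/kg) from a single GCD discharge step."""
    c_spec = current_a * discharge_time_s / (mass_g * voltage_window_v)   # F/g
    energy = c_spec * voltage_window_v ** 2 / (2 * 3.6)                   # Wh/kg
    power = energy * 3600 / discharge_time_s                              # W/kg
    return c_spec, energy, power

# Hypothetical example: 2 mg of active material discharged at 2 mA
# over a 1.0 V window in 600 s.
print(gcd_metrics(current_a=2e-3, discharge_time_s=600,
                  mass_g=2e-3, voltage_window_v=1.0))
```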
Adapted with permission from Hennous et al. (2019), Copyright 2019, Royal Society of Chemistry
57Fe Mössbauer spectra of CoFe2O4 as a function of temperature. The figure illustrates the magnetic splitting of CoFe2O4, with broadened lines assigned to iron ions experiencing hyperfine magnetic fields at several inequivalent sites.
Zinc ferrite ZnFe2O4
Zn ferrite is a common material for electrochemical applications owing to its eco-friendly nature, abundant resources, cost-effectiveness, strong redox activity and extraordinary theoretical capacitance of 2600 F g−1 (Vadiyar et al. 2015, 2016a; Raut and Sankapal 2016; Zhang et al. 2018a). However, its low conductivity, volume fluctuations during charge–discharge cycling and poor cycling stability make it unsuitable on its own for efficient supercapacitors. To overcome these disadvantages, conducting polymers or other conductive materials have been added to Zn ferrite to enhance the electronic conductivity and improve the cycling stability (Yang et al. 2018; Qiao et al. 2018). Israr et al. (2020) synthesized a series of Zn ferrite/graphene-nanoplatelet nanocomposites. The cyclic voltammetry curves of the as-synthesized electrodes are displayed in Fig. 3. The curve shape is retained even at higher scan rates, indicating a high rate capability. The conducting graphene network created during formation of the nanocomposite is the main reason for the high specific capacity and excellent rate capability: the high conductance of the graphene nanoplatelets within the nanocomposite enables efficient charge transport and improves the electrode's rate performance. The synthesized nanocomposites can be applied in electrochemical capacitors, delivering an excellent capacitance of 314 F g−1 with good rate performance and a loss of only about 22.6% of the initial capacitance upon cycling.
Adapted with permission from Israr et al. (2020), Copyright 2020, Elsevier
Cyclic voltammetry curves of (ZFO)1−x(GNPs)x electrodes, where ZFO refers to Zn ferrite and GNPs to graphene nanoplatelets.
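As a point of reference, specific capacitance is frequently extracted from cyclic voltammograms like those in Fig. 3 by integrating the voltammetric charge, C = ∮|i| |dV| / (2 m ν ΔV). The sketch below illustrates that generic integration on synthetic data only; it does not use, and makes no claims about, the actual data of Israr et al. (2020).

```python
# Generic estimate of specific capacitance from one full CV cycle:
# C = (integral of |i| over the swept potential) / (2 * m * scan_rate * window).
# The synthetic current trace below is purely illustrative.
import numpy as np

def capacitance_from_cv(potential_v, current_a, mass_g, scan_rate_v_per_s):
    """Specific capacitance (F/g) from arrays describing one full CV cycle."""
    v = np.asarray(potential_v, dtype=float)
    i = np.asarray(current_a, dtype=float)
    # trapezoidal integration of |i| against |dV| so both sweep directions add up
    charge = np.sum(0.5 * (np.abs(i[1:]) + np.abs(i[:-1])) * np.abs(np.diff(v)))
    window = v.max() - v.min()
    return charge / (2 * mass_g * scan_rate_v_per_s * window)

# Hypothetical rectangular CV: 0 -> 0.5 V -> 0 at 10 mV/s, +/- 1 mA, 1 mg of material
v_fwd = np.linspace(0.0, 0.5, 200)
v = np.concatenate([v_fwd, v_fwd[::-1]])
i = np.concatenate([np.full(200, 1e-3), np.full(200, -1e-3)])
print(capacitance_from_cv(v, i, mass_g=1e-3, scan_rate_v_per_s=0.01))  # ~100 F/g
```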
In the same context, Yao et al. (2017) successfully synthesized a carbon-coated Zn ferrite/graphene composite by a general multistep strategy. During the anodic process, one broad peak appears at ~ 1.50–2.10 V, representing the oxidation of metallic zinc (Zn0 to Zn2+) and iron (Fe0 to Fe3+). The electrochemical analyses confirm that the electrode offers an initial discharge capacity of 1235 mA h g−1 and loses about 465 mA h g−1 over 150 cycles, retaining a high capacity and good cycling performance. The microstructural stability and the very low agglomeration of the hierarchical spheres of the electrode are the main reasons for suitable ion/electron transport, leading to this enhanced electrochemical performance. The results are also influenced by the novel carbon-layer architecture (~ 3–6 nm) and the ultrathin graphene nanosheets. The studied electrode can therefore be applied in Li-ion batteries as a high-performance alternative anode.
Manganese ferrite MnFe2O4
Spinel MnFe2O4 is characterized by rapid valence-state changes and high electrochemical activity, and it is cheap, readily available and eco-friendly. Spinel Mn ferrite nanoparticles have therefore recently been examined as suitable electrodes for lithium- and sodium-ion batteries, metal–air batteries and supercapacitors (Xiao et al. 2013; Sankar and Selvan 2014, 2015; Lin and Wu 2011). However, Mn ferrite shows reduced rate capability and cycling stability because of its inferior electrical conductivity and the detrimental effects of ion insertion/extraction during charging/discharging (Cheng et al. 2011; Guan et al. 2015; Wang et al. 2014a). Given the advantages of quantum dots, it can be assumed that when the size of spinel Mn ferrite is decreased to the quantum scale, the accessible surface area and the number of electrochemically active sites will greatly increase, together with rapid surface-controlled pseudocapacitive behavior and shortened ion-transport paths (Su et al. 2018). Such an electrode also shows excellent rate performance owing to the combination of high capacitance and extraordinary cycling stability. Su et al. (2018) demonstrated the preparation of Mn ferrite@nitrogen-doped graphene via a solvothermal method. The prepared electrode displays an extraordinary capacitance of about 517 F g−1. Furthermore, carbon encapsulation is promising for improving the rate and cycling performance, providing a satisfactory capacitance (~ 150 F g−1) as well as an excellent cycle life of up to 65,000 cycles. These results make the prepared materials suitable electrodes for energy storage applications.
The influence of the electrolyte type on the electrochemical performance of Mn ferrite has also been evaluated. Vignesh et al. (2018) documented a facile synthesis of Mn ferrite by a co-precipitation technique and examined its electrochemical behavior in various electrolytes, namely potassium hydroxide, lithium phosphate and lithium nitrate (Fig. 4). Specific capacitances of 173 F g−1 in potassium hydroxide, 31 F g−1 in lithium nitrate and 430 F g−1 in lithium phosphate were achieved.
Adapted with permission from Vignesh et al. (2018), Copyright 2018, Elsevier
Cyclic voltammetry profiles and specific capacitance as a function of current density for MnFe2O4 electrodes in aqueous KOH (a–c), lithium nitrate (d–f) and lithium phosphate (g–i) electrolytes. Specific capacitances of 173 F g−1 (potassium hydroxide), 31 F g−1 (lithium nitrate) and 430 F g−1 (lithium phosphate) were achieved.
Among these electrolytes, potassium hydroxide showed a loss of about 40% of the original capacitance but the highest rate performance, owing to the high surface accessibility, synergistic effects and improved electronic conductivity of Mn ferrite. In addition, symmetric cells assembled with Mn ferrite as the electrode material and potassium hydroxide as the electrolyte delivered a power density, specific capacitance and energy density of 1207 W kg−1, 245 F g−1 and 12.6 W h kg−1, respectively. Moreover, the Mn ferrite retains more than 105% of its original capacity after 10,000 cycles.
Spinel metal molybdates
The binary metal molybdates (NiMoO4, CoMoO4, FeMoO4, etc.) have gained significant interest in energy-related research compared with simple metal oxides, hydroxides and sulfides. This is due to their low cost, environmental friendliness, abundant resources, and electrical, electrochemical and mechanical properties suitable for high-capacity supercapacitors (Zhang et al. 2019a; Huang et al. 2016a). Lately, researchers have focused on improving metal molybdates as electrode materials for supercapacitor applications.
Nickel molybdate NiMoO4
Nickel molybdate NiMoO4 has gained significant attention in recent years as a suitable electrode material for supercapacitors owing to its low cost, abundant sources, strong and well-defined redox activity and environmental compatibility (Guo et al. 2014; Yin et al. 2015a). Nickel molybdate can adopt many crystal morphologies, depending on the synthesis technique and the annealing temperature, as illustrated in Fig. 5 (Kumar et al. 2020; Liu et al. 2013a; Chen et al. 2015; Hussain et al. 2020).
Nickel molybdate adopts many crystal morphologies depending on the synthesis technique and annealing temperature. a–d Scanning electron microscopy images of a nanoflowers, adapted with permission from Kumar et al. (2020), Copyright 2020, Royal Society of Chemistry; b nanorods, adapted with permission from Liu et al. (2013a), Copyright 2013, Royal Society of Chemistry; c nanowires, adapted with permission from Chen et al. (2015), Copyright 2015, Elsevier; d nanogravel, adapted with permission from Hussain et al. (2020), Copyright 2020, Elsevier; e the crystal structure, adapted with permission from Huang et al. (2018a), Copyright 2018, Royal Society of Chemistry; f, g EDX spectra and elemental mapping images of nickel molybdate, adapted with permission from Kumar et al. (2020), Copyright 2020, Royal Society of Chemistry
The specific capacitance and cycling stability of nickel molybdate depend on the crystal morphology. Ajay et al. (2015) observed that two-dimensional nickel molybdate nanoflakes synthesized via a rapid microwave-assisted route achieved a specific capacitance of 1739 F g−1 at a scan rate of 1 mV s−1, while Huang et al. (2015a) found that three-dimensional interconnected nickel molybdate nanoplate arrays show a specific capacitance as high as 2138 F g−1 at a current density of 2 mA cm−2 and outstanding cyclability, losing only 13% of the original capacity over 3000 cycles. Cai et al. (2013) synthesized nickel molybdate nanospheres and nanorods via simple hydrothermal techniques; the nanospheres displayed a higher specific capacitance and better cycling stability and rate capability than the nanorods, possibly because of their larger surface area and good electrical conductivity. The nanospheres delivered a specific capacitance of ~ 974 F g−1, compared with ~ 945 F g−1 for the nanorods. In another study, Cai et al. (2014a) observed that mesoporous nickel molybdate nanosheets displayed a higher specific capacitance and cycling stability than nickel molybdate nanowires.
Notwithstanding these benefits, nickel molybdate, like other metal oxides, suffers from low cycling stability attributed to structural degradation induced by the harsh redox reactions. Furthermore, breakdown of the nanostructure caused by large volume changes, particle agglomeration and an unstable solid–electrolyte interface leads to a severe reduction in capacity (Budhiraju et al. 2017). To overcome these defects, fabricating electrodes by coating highly conductive materials onto the active materials has proved effective (Wang et al. 2017a). In particular, conductive polymers display useful properties as electrode materials owing to their excellent electrical conductivity, plasticity and simple fabrication (Huang et al. 2016b). Yi et al. (2020) reported the rational design of Ni-oxide@nickel molybdate porous spheres coated with polypyrrole. The results reveal that the highly conductive nickel molybdate/polypyrrole shell reduces the charge-transfer resistance of the Ni oxide and thus improves its electrochemical kinetics. The initial capacitance of the Ni-oxide/nickel molybdate/polypyrrole electrode is 941.6 F g−1 at 20 A g−1. Notably, at 30 A g−1 the electrode delivers a capacitance of 850.2 F g−1 and still retains 655.2 F g−1, a high retention of 77.1%, even after 30,000 cycles.
Cobalt molybdate CoMoO4 nanoparticles
Cobalt molybdate CoMoO4 offers many of the same advantages as nickel molybdate, such as cost-effectiveness, eco-friendliness and high electrochemical performance (Mai et al. 2011a). One-dimensional CoMoO4 nanorod structures exhibited exceptional stability together with high specific capacitance (Liu et al. 2013b). CoMoO4 synthesized by a simple sonochemical technique gave a capacitance of ~ 133 F g−1 at a current density of 1 mA cm−2 (Veerasubramani et al. 2014). Furthermore, CoMoO4 nanoplate arrays produced a maximum capacity of 227 μA h cm−2 at 2.5 mA cm−2 and showed superior cycling stability and energy density in an operating voltage window of 1.5 V (Veerasubramani et al. 2016). Nevertheless, metal oxides naturally have short electrolyte diffusion distances and low electrical conductivity, which restricts their application as pseudocapacitor electrodes. The high surface area and electrical conductivity of graphene enable its use as a supercapacitor electrode (Sun et al. 2011), but graphene supercapacitors have low energy density, which limits their use in several significant applications. CoMoO4@graphene composites possess large electroactive areas that promote easy access of OH− ions and fast charge transport (Xia et al. 2013). Jinlong et al. (2017) reported the synthesis of CoMoO4@reduced graphene-oxide nanocomposites via the hydrothermal method. The nanocomposite electrode showed a remarkable capacitance of ~ 856 F g−1 at 1 A g−1 and retained about 94.5% of its original capacitance over 2000 cycles. The nanocomposite also presented higher electrochemical conductivity than pristine CoMoO4. This improvement is attributed to the composite's larger specific surface area and average pore size compared with pristine CoMoO4 nanoparticles. The CoMoO4 nanoflakes promoted electrolyte transport during charging/discharging and presented numerous active sites for electrochemical reactions, and the synergetic effect between the reduced graphene oxide and CoMoO4 further increased the supercapacitor performance.
Iron(II) molybdate FeMoO4
Iron(II) molybdate FeMoO4 is one of several notable metal molybdates and is assumed to offer richer redox chemistry owing to the combination of Fe and Mo cations. To date, iron(II) molybdate has been widely utilized as a promising electrode for Li-ion batteries (Wang et al. 2014b). Wang et al. (2014b) reported combining iron(II) molybdate with graphene via a simple hydrothermal route. The results confirmed that the iron(II) molybdate/reduced graphene-oxide composite possesses a specific capacitance of 135 F g−1 at 1 A g−1, larger than that of iron(II) molybdate alone (96 F g−1) or reduced graphene oxide alone (66 F g−1). The capacitance of the composite decayed gradually, reaching a 29.6% loss after 500 cycles. Recently, Nam et al. (2020) successfully synthesized iron(II) molybdate (FMO) nanosheets via a chemical bath deposition procedure. The outcomes demonstrate that the FMO electrode is well suited to supercapacitor applications: it shows excellent electrochemical performance, with a specific capacity of about 158 mA h g−1 at 2 A g−1 and only a 9% loss of its original capacity over 4000 cycles.
Spinel cobaltites
Significant research to date has promoted spinel cobalt oxide Co3O4 because of its inexpensive constituents, natural abundance, excellent redox activity and extraordinary theoretical specific capacitance (Zhai et al. 2017). Nevertheless, owing to the high electrical resistivity arising from its semiconducting nature, the electrochemical performance of most reported Co3O4 electrodes is still far from expectations, with limited specific capacitances and moderate power densities (Lu et al. 2017; Zhang et al. 2015a). Considerable effort is therefore being focused on partially substituting Co with more eco-friendly and affordable metals to form ternary spinel cobaltites, which collectively offer excellent reversible capacities, improved electrical conductivity and interesting redox chemistry (Liu et al. 2016a; Hui et al. 2016). Intrinsically, Co3O4 adopts a normal spinel structure, in which the Co2+ and Co3+ ions occupy the A-sites and B-sites, respectively (Gao et al. 2016a). Partially substituting Co with transition metals (e.g., Zn, Mn, Ni and Cu) in the Co3O4 lattice produces an inverse spinel structure, in which the foreign cation occupies the B-sites while Co occupies both the A- and B-sites (Kim et al. 2014). This provides effective channels for ion diffusion and allows charge carriers (electrons or holes) to hop between the A- and B-sites, giving high electrical conduction (Liu et al. 2018b).
Nickel cobaltite (NiCo2O4)
As a metal oxide, NiCo2O4 represents a suitable candidate for energy storage owing to its high specific capacity, outstanding electrical conduction and excellent stability (Xu et al. 2018a; Yuan et al. 2020). Nickel cobaltite nanoparticles were first reported as exceptional electrode candidates for electrochemical capacitors by Wei et al. (2010). Subsequently, several nickel cobaltite structures with various morphologies have exhibited increased capacitive performance compared with the bulk material. Searches on Web of Science reveal that about 1000 articles on the application of nickel cobaltite materials in electrochemical capacitors have been published to date. Composites of nickel cobaltite nanoparticles grown on conductive substrates are used in capacitor applications. Recent research has confirmed that incorporating other elements into nickel cobaltite nanoparticles leads to excellent capacity and durability (Lin and Lin 2017). This enhanced electrochemical performance is attributed to the creation of additional transport channels that ease charge motion and thus improve the electrical conduction (Cheng et al. 2020b).
Spinel nickel cobaltite nanoparticles adopt an inverse spinel structure in which the Ni cations occupy the B-sites and the Co ions are distributed equally between the A- and B-sites. Spinel nickel cobaltite is a p-type semiconductor with a narrow bandgap (~ 2.1 eV) and suitable electrical conduction. It has attracted many researchers because of its cost-effectiveness and eco-friendliness compared with other metal oxide materials. The basic reactions can be written as follows (Cheng et al. 2020b):
$$\begin{aligned} & {\text{NiCo}}_{2} {\text{O}}_{4 } + {\text{OH}}^{ - } + {\text{H}}_{2} {\text{O}} \Leftrightarrow {\text{NiOOH}} + 2{\text{CoOOH}} + {\text{e}}^{ - } \\ & 2{\text{CoOOH}} + {\text{OH}}^{ - } \Leftrightarrow {\text{CoO}}_{2} + {\text{H}}_{2} {\text{O}} + {\text{e}}^{ - } \\ \end{aligned}$$
During charge–discharge cycling, the redox reactions occur only at the surface of the electrode material. It has been observed that the specific capacitance of spinel nickel cobaltite improves over many hundreds of cycles up to a limiting value, owing to its exceptional morphologies and the activation of the electrode (Cheng et al. 2020b).
Yang et al. (2019b) synthesized spinel nickel cobaltite nanoparticles with a nanoneedle morphology via a hydrothermal technique. The nickel cobaltite nanoneedles changed to a nanoflake morphology when templated on the surface of self-assembled graphene oxide/multiwall carbon nanotubes. The template/substrate acted as a seed layer, providing nucleation sites that allowed the spinel nickel cobaltite to grow on its surface and promoting a nanoneedle-like array morphology. The composite electrode showed extraordinary specific capacitances of 1525 F g−1 at 1 A g−1 and 1081 F g−1 at 100 A g−1. When the prepared composite electrodes were used as both anode and cathode, the supercapacitor showed a maximum power density of 5151 W kg−1 and a maximum energy density of 25.2 W h kg−1. It also displayed superior cycling stability, losing only 0.4% of the initial capacitance over 7000 cycles, affirming its suitability for supercapacitor applications.
Both spinel nickel cobaltite nanoparticles and MnO2 are attractive owing to their natural abundance, high theoretical capacitance and cost-effectiveness (Yuan et al. 2017). Xu et al. (2018a) first reported that hierarchical spinel nickel cobaltite@manganese dioxide core–shell nanowire arrays show exceptional characteristics for electrochemical capacitors. The excellent performance was associated with the distinctive core–shell form and the synergistic contributions of the porous nickel cobaltite core and the thin manganese dioxide shell. Zhang et al. (2016a) used galvanostatic electrodeposition to attach manganese dioxide nanoflakes onto two-dimensional nickel cobaltite structures grown on a steel mesh. The resulting electrode offers a specific capacitance of 914 F g−1 at 0.5 A g−1 with a loss of 12.9% after 3000 cycles.
Zinc cobaltite ZnCo2O4
Spinel-type ZnCo2O4 is a member of the spinel transition-oxide family and a characteristic cobaltite in which Zn2+ ions occupy the A-sites of spinel Co3O4 (Wu et al. 2015a). The eco-friendly, low-cost and abundant Zn and Co atoms show high electrochemical activity, so the material is widely applied in energy storage. Zhou et al. (2014) reported one-dimensional spinel-type ZnCo2O4 porous nanotubes that exhibit an extraordinary specific capacitance of 770 F g−1 at 10 A g−1. Venkatachalam et al. (2017) used a hydrothermal technique to prepare hexagonal spinel-type ZnCo2O4 nanostructures, showing 845.7 F g−1 at a current density of 1 A g−1. Finally, Kathalingam et al. (2020) prepared a spinel-type ZnCo2O4@nitrogen-doped graphene oxide/polyaniline hybrid nanocomposite via a hydrothermal approach; the highest specific capacitance was 720 F g−1 at 10 mV s−1, with 96.4% capacity retention after 10,000 cycles. This enhanced performance of the composite electrode was ascribed to its reinforced porosity characteristics.
The underlying mechanism as influenced by various cation substitutions (Mn, Ni and Cu) has also been discussed (Fig. 6). Liu et al. (2018b) presented a systematic examination to clarify the impact of metal substitution on the pseudocapacitive performance of spinel Co3O4. Replacing Co with transition metals in the Co3O4 lattice can simultaneously increase charge transfer and ion diffusion, thereby improving the electrochemical properties. MnCo2O4 gives a magnificent specific capacitance of ~ 2145 F g−1 at 1 A g−1 and retains more than 92% of its initial capacitance after 5000 cycles. In addition, a MnCo2O4/activated carbon electrode produces an exceptional energy density (~ 56 W h kg−1) at a power density of about 800 W kg−1.
Adapted with permission from Liu et al. (2018b). Copyright 2018, Royal Society of Chemistry
Field-emission scanning electron microscopy (FESEM) images of a, d, g MnCo2O4, b, e, h NiCo2O4 and c, f, i CuCo2O4 nanowires at different magnifications. The figure shows that the MCo2O4 nanowires are well separated and symmetrically arranged, which should aid ion transport to redox-active sites and thereby enhance the electrochemical features.
Inorganic perovskite-type oxides
The inorganic perovskite-type oxides show special physicochemical characteristics in ferroelectricity (Pontes et al. 2017; Rana et al. 2020; Cao et al. 2017), piezoelectricity (Perumal et al. 2019; Vu et al. 2015; Xie et al. 2019), dielectric behavior (Arshad et al. 2020; Zhou et al. 2019; Boudad et al. 2019), ferromagnetism (Yakout et al. 2019; Ravi and Senthilkumar 2017; Alvarez et al. 2016), magnetoresistance (Wang et al. 2015a; Liu et al. 2007; Dwivedi et al. 2015) and multiferroicity (Li et al. 2019b; Zhang et al. 2016b; Pedro-García et al. 2019). They are interesting nanomaterials for broad applications in catalysis (Grabowska 2016; Yang and Guo 2018; Hwang et al. 2019; Xu et al. 2019a; Ramos-Sanchez et al. 2020), fuel cells (Kaur and Singh 2019; Sunarso et al. 2017; Jiang 2019), ferroelectric random access memory (Gao et al. 2020; Chen et al. 2016a; Wang et al. 2019a), electrochemical sensing and actuators (Govindasamy et al. 2019a; Deganello et al. 2016; Atta et al. 2019; Zhang and Yi 2018; Rosa Silva et al. 2019), and supercapacitors (Song et al. 2020; Salguero Salas et al. 2019; Lang et al. 2019; George et al. 2018). Furthermore, these materials have the significant advantage of a simple crystalline structure and a low preparation cost in monocrystalline or polycrystalline form. Small modifications of their typical crystal structure and chemical composition can produce unique transport (Choudhary et al. 2020), magnetic (Abbas et al. 2019), catalytic (Abirami et al. 2020), thermochemical (Gokon et al. 2019), mechanical (Wang et al. 2016a) and electrochemical (Baharuddin et al. 2019) properties. Recently, research groups worldwide have increasingly concentrated on optimizing the physical properties of perovskite-structured compounds. Most investigations aim to establish the correlation between the crystalline structure and the chemical stoichiometry of the major components, which has led to enhancements in the functional properties of the perovskites (Rendón-Angeles et al. 2016).
The atomic arrangement of perovskites relates to the prototype mineral perovskite, CaTiO3, with the general formula ABO3, where B is a small transition metal cation and A is a larger cation. The unit cell of CaTiO3 can be pictured with Ca2+ ions at the corners of a cube, a Ti4+ ion at the body center and O2− ions at the centers of the faces (Schaak and Mallouk 2002).
To describe the relationship between the A, B and O ions, the ideal ABO3 perovskite possesses a cubic crystal structure with tolerance factor \(\tau\) = 1, defined as \(\tau\) = (rA + rO)/\(\surd 2\)(rB + rO), where rA, rB and rO are the ionic radii of the A, B and oxygen ions, respectively. Goldschmidt showed that the cubic perovskite structure is stable only for tolerance factors close to unity (roughly 0.9 < \(\tau\) < 1), with distorted perovskite structures of orthorhombic or rhombohedral symmetry forming over a somewhat wider range. Substituting different cations onto the A- or B-sites can change the symmetry of the pristine structure and, consequently, the physical and chemical properties (Zhang et al. 2016c). These changes in symmetry can be accomplished with relatively small distortions of the crystal structure. This is evident in compounds whose tolerance factors deviate from unity, which leads to tilting of the BO6 octahedra to fill space. For orthorhombic structures the tilting is about the b and c axes, and for rhombohedral structures the tilting is about all axes. This tilting decreases the coordination number of the A ions, the B ions or both. In addition to tilting, displacement of the cations can also lead to structural distortion.
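As a worked illustration of the Goldschmidt criterion quoted above, the sketch below evaluates \(\tau\) for the prototype CaTiO3 and for SrTiO3. The ionic radii used are approximate Shannon reference values assumed here purely for illustration; they are not taken from the works cited in this section.

```python
# Goldschmidt tolerance factor: tau = (r_A + r_O) / (sqrt(2) * (r_B + r_O)).
# The Shannon ionic radii below (in angstroms) are approximate reference values
# used only for illustration.
from math import sqrt

def tolerance_factor(r_a: float, r_b: float, r_o: float = 1.40) -> float:
    return (r_a + r_o) / (sqrt(2.0) * (r_b + r_o))

examples = {
    "CaTiO3 (A = Ca2+ ~1.34 A, B = Ti4+ ~0.605 A)": (1.34, 0.605),
    "SrTiO3 (A = Sr2+ ~1.44 A, B = Ti4+ ~0.605 A)": (1.44, 0.605),
}
for name, (r_a, r_b) in examples.items():
    print(name, "-> tau =", round(tolerance_factor(r_a, r_b), 3))
# tau near 1 favours the cubic aristotype (SrTiO3), while the smaller value for
# CaTiO3 is consistent with its tilted, orthorhombically distorted structure.
```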
The structure of rare-earth manganite perovskites RMnO3 (R = rare-earth element) is strongly affected by the internal structural distortions present in the compound (Chen et al. 2007; Dabrowski et al. 2005). The structure is formed by corner-sharing MnO6 octahedra surrounding the rare-earth ions. The perovskite lattice is usually distorted by (1) octahedral tilting and/or (2) Jahn–Teller deformation (Siwach et al. 2008). Nandy et al. (2017) reported the influence of Na+ substitution on the internal lattice deformation of EuMnO3. The typical atomic arrangement of Eu1−xNaxMnO3 samples is presented in Fig. 7: six oxygen atoms at the face centers and one manganese atom at the body center of the pseudo-cubic cell outline the MnO6 octahedron, while the corners are occupied by europium and sodium atoms. The lattice is subject to deformation via MnO6 octahedral tilting and the Jahn–Teller effect. The possibility of various substitutions on the cation sites is the principal feature of perovskites, which results in large families of compounds with different cations on the B-site (ABxB′1−xO3), different cations on the A-site (AxA′1−xBO3), and substitutions on both cation positions (AxA′1−xByB′1−yO3) (Assirey 2019).
Adapted with permission from Nandy et al. (2017). Copyright 2017, Elsevier
a, b Atomic arrangement and MnO6 tilting and c angles between asymmetric bonds in Eu1−xNaxMnO3 samples. Six oxygen atoms at the face centers and one manganese atom at the body center of the pseudo-cubic cell outline the MnO6 octahedron, while the corners are occupied by europium and sodium atoms.
The phases of perovskite oxides have been classified into two categories (Assirey 2019):
The ternary perovskite-type oxides, divided into A1+B5+O3, A2+B4+O3 and A3+B3+O3 types together with oxygen- and cation-deficient phases. The oxygen- and cation-deficient phases are taken to be those containing large concentrations of vacancies, not phases that are only slightly non-stoichiometric. Several of these contain B ions of a single element in two valence states and should not be confused with the complex perovskite compounds, which contain different elements in various valence states (Assirey 2019; Pan and Zhu 2016; Galasso 2013).
The complex perovskite-type compounds \({\text{A}}\left( {{\text{B}}_{x}^{{\prime }} {\text{B}}_{y}^{\prime \prime } } \right){\text{O}}_{3}\), which can be classified into four groups (Galasso 2013; Modeshia and Walton 2010):
Compounds possessing twice as many lower-valence-state B ions as higher-valence-state B ions, \({\text{A}}\left( {{\text{B}}_{0.67}^{\prime } {\text{B}}_{0.33}^{\prime \prime } } \right){\text{O}}_{3}\).
Compounds possessing twice as many higher-valence-state B ions as lower-valence-state B ions, \({\text{A}}\left( {{\text{B}}_{0.33}^{\prime } {\text{B}}_{0.67}^{\prime \prime } } \right){\text{O}}_{3}\).
Compounds possessing equal proportions of the two B elements, \({\text{A}}\left( {{\text{B}}_{0.5}^{\prime } {\text{B}}_{0.5}^{\prime \prime } } \right){\text{O}}_{3}\).
Compounds with oxygen-deficient phases, \({\text{A}}\left( {{\text{B}}_{x}^{\prime } {\text{B}}_{y} } \right){\text{O}}_{3 - z}\).
Potassium niobate (KNbO3) presents various crystal structures depending on temperature, as compiled in Fig. 8. Above its Curie temperature TC = 708 K, it loses its ferroelectric properties and becomes cubic, while below TC it transforms successively into tetragonal, orthorhombic and then rhombohedral phases as the temperature decreases (Grabowska 2016; Zhang et al. 2013a, 2016c; Hirel et al. 2015).
Adapted with permission from Hirel et al. (2015)
Crystal structures of cubic, tetragonal, orthorhombic and rhombohedral KNbO3. Green spheres represent Nb, red spheres oxygen and purple spheres K.
Orthorhombic KNbO3 has lattice parameters a = 3.973, b = 5.693 and c = 5.721 Å and belongs to space group Amm2; cubic KNbO3 has a lattice parameter a = 4.022 Å with space group Pm3m, while tetragonal KNbO3 belongs to space group P4mm (Magrez et al. 2006).
As promising and crucial devices for energy storage/conversion, supercapacitors have gained wide interest owing to their fast charge–discharge cycles, long cycle life, high power density and safe operation (Lang et al. 2017). Investigating novel electrode materials, particularly coating electrodes with conductive matter, is one of the most effective ways to enhance conductivity. It was not until 2014 that studies on perovskites as anodes for supercapacitors emerged, when Mefford et al. (2014) examined the electrochemical properties of LaMnO3 for supercapacitors and suggested oxygen-anion intercalation as the charge-storage mechanism. For supercapacitors and hybrid supercapacitors, perovskites have certain advantages when used as anodes: they contain a large number of oxygen vacancies and have a metallic character in the ground state, because the B-cation 3d and O 2p orbitals contribute to the total density of states at the Fermi level (Liu et al. 2018c). Hence, the large oxygen-vacancy (Ovacancy) content and remarkable conductivity enable extraordinary energy densities. Moreover, perovskites store charge by oxygen intercalation, and the excellent diffusion pathways along crystal domain boundaries promote a high diffusion rate (Nan et al. 2019).
La-based perovskite oxides possess numerous merits, such as enhanced electronic conduction, a broad voltage window and excellent stability of the charge/discharge pathway. A well-known way to increase the electronic conduction (or decrease the resistance) of LaBO3-based perovskites is the complete or partial substitution of other cations (Ca2+, Sr2+, etc.) for the La3+ species on the A-site, which introduces a larger number of oxygen vacancies into the structure (Nan et al. 2019; Ma et al. 2020). For LaMnO3, the charge imbalance after substitution is partially offset by oxidation of Mn3+ (d4) to Mn4+ (d3) on the B-site, which, together with the Jahn–Teller effect of the Mn3+ ions, leads to deformation of the perovskite structure (Louca et al. 1997). The perovskite structure is assumed to have a significant impact on the Ovacancy concentration, the O2− diffusivity and the electrochemical behavior (Liu et al. 2016b).
Hence, future research should pay more attention to the quantity of Ovacancy required (Nan et al. 2019). Studies on the application of perovskite oxides in supercapacitors remain insufficient. The next sections therefore review the impact of cation substitution on perovskite supercapacitors and the resulting changes in their electrochemical performance.
Influence of cation substitution on the A-site of perovskite oxides
Ma et al. (2020) examined the influence of A-site substitution of LaMnO3 perovskite with calcium (Ca2+) or strontium (Sr2+) ions. The La0.85Ca0.15MnO3 and La0.85Sr0.15MnO3 samples were synthesized via the sol–gel method. Schematic diagrams of the oxygen intercalation process in the crystal structures (orthorhombic/rhombohedral) of the studied samples are presented in Fig. 9. Following the relation between oxygen-octahedron distortion and the Jahn–Teller effect detailed by Mefford et al., step R1 describes the oxidation of Mn2+ to Mn3+: one Ovacancy is filled by O2− intercalation, accompanied by the oxidation of two Mn2+ ions to Mn3+, as shown in the following equation:
$${\text{La}}_{0.85} {\text{A}}_{0.15} \left[ {\text{Mn}}_{2\delta }^{2 + } ;{\text{Mn}}_{1 - 2\delta }^{3 + } \right]{\text{O}}_{2.925 - \delta } + 2\delta {\text{OH}}^{ - } \leftrightarrow {\text{La}}_{0.85} {\text{A}}_{0.15} {\text{Mn}}^{3 + } {\text{O}}_{2.925} + 2\delta {\text{e}}^{ - } + \delta {\text{H}}_{2} {\text{O}}$$
Nevertheless, La0.85A0.15Mn3+O2.925 is still oxygen-deficient even when all of the Mn2+ has been oxidized to Mn3+. The following step therefore involves the oxidation of Mn3+ to Mn4+, as shown in the next equation:
$${\text{La}}_{0.85} {\text{A}}_{0.15} {\text{Mn}}^{3 + } {\text{O}}_{2.925} + 2\delta {\text{OH}}^{ - } \leftrightarrow {\text{La}}_{0.85} {\text{A}}_{0.15} \left[ {\text{Mn}}_{2\delta }^{4 + } ;{\text{Mn}}_{1 - 2\delta }^{3 + } \right]{\text{O}}_{2.925 + \delta } + 2\delta {\text{e}}^{ - } + \delta {\text{H}}_{2} {\text{O}}$$
This last reaction proceeds in two steps. For \(\delta\) ≤ 0.075, O2− is continuously captured to fill the residual Ovacancy and the Mn3+ ions are converted to Mn4+ (R2-1 in Fig. 9); once the Ovacancy has completely diffused to the surface of the material, \({\text{La}}_{0.85} {\text{A}}_{0.15} \left[ {\text{Mn}}_{0.15}^{4 + } ;{\text{Mn}}_{0.85}^{3 + } \right]{\text{O}}_{3}\) is formed. In the second step, the Mn3+ ions are further converted to Mn4+, yielding the oxygen-excess product \({\text{La}}_{0.85} {\text{A}}_{0.15} \left[ {\text{Mn}}_{2\delta }^{4 + } ;{\text{Mn}}_{1 - 2\delta }^{3 + } \right]{\text{O}}_{2.925 + \delta } (\delta > 0.075)\) (R2-2 in Fig. 9).
Adapted with permission from Ma et al. (2020). Copyright 2020, Elsevier
Crystal structures and oxygen intercalation pathways of the A-site-substituted compositions a La0.85Ca0.15MnO3 and b La0.85Sr0.15MnO3. The La0.85Ca0.15MnO3 and La0.85Sr0.15MnO3 samples, with higher intrinsic Ovacancy contents, display better capacitance characteristics than LaMnO3 and store more energy through the Ovacancy.
Therefore, the La0.85Ca0.15MnO3 and La0.85Sr0.15MnO3 samples, with their higher intrinsic Ovacancy contents, display better capacitance characteristics than LaMnO3 and store more energy through Ovacancy-mediated redox pseudocapacitance. The capacitances achieved are ~ 33.0, 129.0 and 140.5 mF cm−2 for LaMnO3, La0.85Sr0.15MnO3 and La0.85Ca0.15MnO3, respectively. The La0.85Ca0.15MnO3 electrode shows the best capacitance behavior owing to its lower ion-diffusion impedance, the highest concentration of Ovacancy and the fullest exploitation of the perovskite bulk structure.
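To put the role of δ in the intercalation equations above into perspective, the back-of-the-envelope sketch below estimates the gravimetric charge that a change in oxygen stoichiometry can store, using the generic relation Q = 2ΔδF/M (two electrons per intercalated O2−). The molar mass is computed from standard atomic masses, and the Δδ swing used is an illustrative assumption; neither value is reported by Ma et al. (2020).

```python
# Rough, illustrative estimate of the charge stored by oxygen-anion intercalation:
# each intercalated O2- transfers two electrons, so Q ~ 2 * d_delta * F / M.
# The d_delta value and the use of the nominal O3 stoichiometry are assumptions.

F = 96485.0  # Faraday constant, C/mol

# Approximate atomic masses (g/mol)
masses = {"La": 138.91, "Ca": 40.08, "Mn": 54.94, "O": 16.00}

def molar_mass_la085ca015mno3() -> float:
    """Nominal molar mass of La0.85Ca0.15MnO3 (oxygen taken as O3 for simplicity)."""
    return 0.85 * masses["La"] + 0.15 * masses["Ca"] + masses["Mn"] + 3 * masses["O"]

def intercalation_capacity_mah_per_g(d_delta: float) -> float:
    """Gravimetric charge (mAh/g) for a change d_delta in oxygen content per formula unit."""
    q_per_mol = 2.0 * d_delta * F                      # C per mole of formula units
    return q_per_mol / molar_mass_la085ca015mno3() / 3.6  # C/g -> mAh/g

# Hypothetical full swing from O_2.925 to O_3.075 (d_delta = 0.15)
print(round(intercalation_capacity_mah_per_g(0.15), 1), "mAh/g (illustrative only)")
```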
Mo et al. (2018) also prepared Ca-doped perovskite lanthanum manganite via the sol–gel technique. Among the fabricated samples, La0.5Ca0.5MnO3 exhibited a low intrinsic resistance of 2.13 Ω cm2 and an extraordinary specific surface area of 23.0 m2 g−1, and the highest specific capacitance achieved was 170 F g−1 at 1 A g−1. Nevertheless, La–Ca–MnO3 suffered serious leaching of its elements, resulting in poor cycling stability and thereby restricting its application as a supercapacitor electrode material; Ca-doped lanthanum manganite samples are therefore not attractive candidates for supercapacitor applications. Overall, improving the electrochemical performance of manganite electrodes requires effective strategies to prevent cation leaching in Ca-doped perovskite lanthanum manganite. Wang et al. (2019b) fabricated nanofibers of LaxSr1−xFeO3 oxides via electrospinning. The resulting La0.7Sr0.3FeO3 nanofibers exhibited outstanding performance as a supercapacitor electrode, including an increased specific surface area of ~ 28.0 m2 g−1 and a uniquely high porosity. The LaxSr1−xFeO3 (x = 0.3) nanofibers exhibited an extraordinary capacitance of around 520 F g−1, higher than the other samples, together with superior rate capability and cycling stability (~ 84% of the initial capacitance) over 5000 cycles at 20 A g−1. Cao et al. (2015a) synthesized nanofibers of LaxSr1−xCo0.1Mn0.9O3−δ oxides via the electrospinning technique and examined the impact of Sr substitution on the A-site. They found that strontium substitutes onto the La site and thereby affects the morphology of the LaxSr1−xCo0.1Mn0.9O3−δ nanofibers: with increasing Sr2+ content their roughness and diameters decrease, whereas the surface area and grain size increase significantly. Moreover, the Mn–O bond angles and bond lengths are significant parameters that strongly influence the double exchange of electrons and the electrical conduction, and hence the electrochemical performance of perovskites. The electrochemical activity of the LaxSr1−xCo0.1Mn0.9O3−δ nanofibers is significantly enhanced when the Mn–O bond length is considerably reduced and the Mn–O–Mn angle approaches 180°. The influence of A-site cation substitution was further investigated by Wang et al. (2020a), who used electrospinning and calcination to fabricate porous nanofibers of gadolinium-substituted SrNiO3 (Fig. 10). Some diffraction peaks of the Gd-substituted SrNiO3 (x = 0.5 and 0.7) are slightly broadened and weakened owing to the lattice deformation caused by the Sr substitution. The NiO6 octahedra and the Ni–O bond angle are distorted by the occupancy ratio of the A-site elements, which arises from the different radii of the Gd3+ and Sr2+ ions. A Jahn–Teller effect appears as a result of the imbalance among the A-site cations, stretching and distorting the nominally cubic crystal system along the c-axis and weakening the crystallinity of the lattice. Hence, gadolinium(III) ions, with a smaller ionic radius than lanthanum, are incorporated as A-site ions, while the larger strontium(II) ions preferentially occupy the A-site as well.
Adapted with permission from Wang et al. (2020a). Copyright 2020, Elsevier
a Schematic of the preparation of GdxSr1−xNiO3 (GSN) nanofibers, b GSN cyclic voltammetry curves, c GSN galvanostatic charge–discharge curves, d GSN capacitance versus scan rate, and e GSN capacitance versus current density.
The synthesized GdxSr1−xNiO3 perovskite has more Ovacancies and ionic defects. It is worth remarking that the Ovacancies in GdxSr1−xNiO3 are easy to create and to transfer, because of the weak bond between the octahedral-site cation and oxygen and the correspondingly low formation energy; this promotes charge transport and yields outstanding performance in electrochemical energy storage. The Gd-substituted SrNiO3 with x = 0.7 shows the best performance when utilized as a supercapacitor electrode, which is strongly related to its high surface area of approximately 16 m2 g−1 and a reasonable pore radius of 3.7 nm. It exhibits a wide voltage window and outstanding capacitance, with specific capacitances of 929 F g−1 in 1 M sodium sulfate and 764 F g−1 in 1 M potassium hydroxide. Moreover, a device based on the x = 0.7 sample exhibits an excellent energy density of about 54 W h kg−1 at a power density of 1 kW kg−1 (1 A g−1), and at 20 A g−1 it delivers a remarkable power density of 20 kW kg−1 with an energy density of 19 W h kg−1.
In summary, cation substitution on the A-site of a perovskite plays a prominent role in controlling the grain size and thus obtaining a large surface area. It also bends the metal–O2− bond angle and consequently changes the metal–O2− bond length. Through this pathway, the electrical conduction and the O2− diffusion rate of perovskites are likewise improved because of the Ovacancy. A suitable degree of A-site cation substitution can therefore yield perovskites with enhanced capacitive performance (Nan et al. 2019).
Influence of cation substitution on the octahedral site of perovskite oxides
Various studies on anion-intercalation supercapacitors have considered that a suitable choice of octahedral-site (B-site) cation can enhance the Ovacancy content or decrease the inherent resistivity (Elsiddig et al. 2017; Zhu et al. 2016; Li et al. 2017a); indeed, the electrochemical performance depends strongly on the octahedral-site elements. Liu et al. (2020) investigated how B-site cation substitution affects the stability window of Sr2CoMoO6−δ. The successful substitution of Ni2+ into the Sr2CoMoO6−δ lattice at various contents, i.e., Sr2CoMo1−x/100Nix/100O6−δ, was confirmed by X-ray diffraction. A small increase in the lattice constants was seen when Ni was substituted at the expense of molybdenum, explained by the ionic radius of Ni2+ (0.69 Å) being larger than that of Mo6+ (0.59 Å) on the octahedral site. The cyclic voltammetry curves of the Ni2+-substituted Sr2CoMoO6 electrodes confirm that the predominant charge-storage mechanism is intercalation pseudocapacitance. The nickel-substituted Sr2CoMoO6 samples contained NiO and Co3O4 nanoparticle phases alongside the perovskite oxide phase, and these together provide the overall capacity. The Ovacancy formation energy of the studied perovskite upon nickel and cobalt incorporation was also examined by density-functional theory calculations: the generation of oxygen vacancies was promoted once the B-site cations were released from the oxide lattice of the perovskite. With increasing scan rate, the oxidation peaks shifted positively while the reduction peaks shifted in the opposite direction, implying fast redox reactions and excellent reversibility in the electrodes. Tomar et al. (2018) enhanced the oxygen vacancies of strontium cobaltite SrCoO3 via Mo doping, i.e., SrCo0.9Mo0.1O3−δ. The sol–gel method was used to synthesize SrCoO3 and SrCo0.9Mo0.1O3−δ as oxygen-anion-intercalated charge-storage materials. An extremely high diffusion coefficient is characteristic of the efficient accessibility of OH− ions inside the SrCo0.9Mo0.1O3−δ electrode. At 1 A g−1, the specific capacitance of SrCo0.9Mo0.1O3−δ is around 1220.0 F g−1, and it exhibits excellent capacitance retention at high current density. At 10 A g−1, the SrCo0.9Mo0.1O3−δ electrode exhibited excellent cycling stability and coulombic efficiency, losing only 6.48% of its original capacitance over five thousand cycles. Furthermore, SrCo0.9Mo0.1O3−δ outperforms SrCoO3, which is ascribed to its higher oxygen-vacancy content and structural stability. From these outcomes, we deduce that substituting cations on the B-site enhances the Ovacancies and improves the capacitance.
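For context, diffusion coefficients such as the one mentioned for the SrCo0.9Mo0.1O3−δ electrode are often estimated from the scan-rate dependence of CV peak currents via the Randles–Sevcik relation, i_p = 2.69 × 10⁵ n^{3/2} A D^{1/2} C ν^{1/2} (at 25 °C). The sketch below shows this generic fitting procedure on synthetic data; whether Tomar et al. (2018) used exactly this approach is not stated above, and all numerical inputs are hypothetical.

```python
# Generic Randles-Sevcik estimate of a diffusion coefficient from CV peak currents:
# i_p = 2.69e5 * n^(3/2) * A * sqrt(D) * C * sqrt(nu)
# (25 C; i_p in A, A in cm^2, D in cm^2/s, C in mol/cm^3, nu in V/s).
# All numerical inputs below are hypothetical.
import numpy as np

def diffusion_coefficient(peak_currents_a, scan_rates_v_s, n, area_cm2, conc_mol_cm3):
    """Fit i_p versus sqrt(nu) and return the diffusion coefficient D in cm^2/s."""
    slope = np.polyfit(np.sqrt(scan_rates_v_s), peak_currents_a, 1)[0]
    return (slope / (2.69e5 * n ** 1.5 * area_cm2 * conc_mol_cm3)) ** 2

# Synthetic data generated with an assumed D = 1e-9 cm^2/s, n = 1, A = 1 cm^2, C = 1e-3 mol/cm^3
nu = np.array([0.005, 0.01, 0.02, 0.05, 0.1])                 # scan rates, V/s
ip = 2.69e5 * 1.0 * 1.0 * np.sqrt(1e-9) * 1e-3 * np.sqrt(nu)  # peak currents, A
print(diffusion_coefficient(ip, nu, n=1, area_cm2=1.0, conc_mol_cm3=1e-3))  # ~1e-9
```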
In conclusion, the potential window of a perovskite can be controlled via cation substitution on the octahedral site. Moreover, when the substituent cations possess a larger number of valence electrons, the Ovacancies increase and the specific capacity or specific capacitance rises accordingly (Nan et al. 2019). Table 1 summarizes the electrochemical characteristics of some recently reported supercapacitors based on magnetic oxides and their composites.
Table 1 Electrochemical performance of magnetic oxides and their composites
Transition metal sulfide-based nanocomposite electrodes for supercapacitor applications
Transition metal sulfides, such as MoS, CoS, NiS, MnS and FeS, represent potential materials for energy storage applications owing to their excellent electrochemical characteristics (Zhang et al. 2020b). The electrochemical characteristics of transition metal sulfides are much better than those of transition metal oxides, which can be explained by the presence of sulfur atoms instead of oxygen atoms: the lower electronegativity of sulfur compared with oxygen makes electron transfer easier in the metal sulfide structure than in the corresponding oxide. Replacing oxygen with sulfur thus provides more flexibility for nanomaterial synthesis and fabrication (Jiang et al. 2016).
Transition metal sulfides have attracted interest in many fields of research, including supercapacitors, solar cells and lithium-ion batteries, because of their distinctive optical and electrical characteristics, especially when combined with other materials to form nanocomposite structures (Rao 2020).
The main advantages of using nanostructured transition metal sulfides as electrode materials in electrochemical supercapacitors stem from their excellent electrochemical behavior: distinctive crystal-lattice structures, ultra-high specific capacitance, excellent electrical conductivity, strong redox activity and low electronegativity (Geng et al. 2018; Yu and David Lou 2018). These superior electrical characteristics are mainly related to their specific forms and surface morphologies, which include unique shapes such as nano-flowers, nano-rods, kelp-like, nano-wires, flaky, hierarchical and nano-honeycomb-like structures (Li et al. 2020).
Nickel sulfide
Nickel sulfide (NiS) is a semiconductor that exists in many different compositions. It can be incorporated into a range of interesting applications including supercapacitors, dye-sensitized solar cells and quantum dots. Many NiS-based electrode materials have been studied to assess their suitability for supercapacitors. NiS nanocomposites have exceptional physicochemical properties, with excellent ion transport over the electrode surface (Rao 2020), and their high electrochemical performance allows them to be widely applied as catalysts, in pseudocapacitors and in dye-sensitized solar cells (Kim et al. 2016). Despite all these interesting properties, NiS nanocomposites still have drawbacks such as limited cycling stability and questionable cycle life (Ikkurthi et al. 2018).
For example, Xu et al. (2017) hydrothermally synthesized a nanocomposite electrode based on NiS and NiCo2S4; the synthesis process is presented schematically in Fig. 11. They used activated carbon as the negative electrode and NiCo2S4/NiS as the positive one. The resulting nickel cobaltite sulfide/nickel sulfide supercapacitor had a large active surface area and enhanced electrochemical characteristics: at a power density of 160 W kg−1 it delivered an energy density of 43.7 W h kg−1, and at a current density of 1 mA cm−2 the specific capacitance reached its maximum value of 123 F g−1.
Hydrothermal synthesis of the nickel cobaltite sulfide/nickel sulfide nanocomposite
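Device-level figures of merit such as those quoted above (specific capacitance, and energy density at a given power density) are linked by the standard relations E = ½CV²/3.6 (in W h kg−1, with C in F g−1 and V in volts) and P = 3600·E/Δt. The sketch below illustrates the arithmetic; the input values are hypothetical and merely of the same order as the device discussed above, not a reproduction of the reported measurement.

```python
def energy_density_Wh_per_kg(C_F_per_g: float, V_window: float) -> float:
    """E = 1/2 * C * V^2, converted from J g-1 to W h kg-1 (divide by 3.6)."""
    return 0.5 * C_F_per_g * V_window**2 / 3.6

def power_density_W_per_kg(E_Wh_per_kg: float, discharge_time_s: float) -> float:
    """P = E / t, with E converted back to J kg-1 (multiply by 3600)."""
    return E_Wh_per_kg * 3600.0 / discharge_time_s

# Hypothetical device values, chosen only for illustration.
E = energy_density_Wh_per_kg(C_F_per_g=120.0, V_window=1.6)   # ~42.7 W h kg-1
print(E, power_density_W_per_kg(E, discharge_time_s=960.0))   # ~160 W kg-1
```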
Cobalt sulfide
Cobalt sulfide (CoS2) has many advantages in the field of supercapacitors: it is made from readily available raw materials, is easy to synthesize and environmentally friendly, and offers high electrical conductance with plenty of sites available for redox reactions (Li et al. 2016a). Several nanostructured CoS-based electrode materials have been prepared for energy storage and supercapacitor applications. Recently, Govindasamy et al. (2019b) used the hydrothermal method to grow nanostructured nickel cobaltite sulfide/cobalt sulfide on a piece of carbon cloth in a two-step process, as shown in Fig. 12. The prepared nickel cobaltite sulfide/cobalt sulfide exhibits a good specific capacitance of 1565 F g−1 at a current density of 1 A g−1 and retained 91% of its initial specific capacitance after 8000 cycles at the same current density. At a power density of 242.8 W kg−1, the energy density was 17 W h kg−1.
Adapted with permission from Govindasamy et al. (2019b). Copyright (2019) Elsevier
Fabrication of nickel cobaltite sulfide/cobalt sulfide coated on carbon cloth: a piece of carbon cloth was immersed in the homogeneous solution, hydrothermally heated at 120 °C, and then gradually cooled. The cobalt tetraoxide (Co3O4)/carbon cloth was washed and dried overnight. Finally, the samples were calcined at 200 °C.
Iron sulfide
Being reasonably priced, exhibiting very good electrical conductivity and possessing an abundance of active sites, iron sulfide (FeS2) has attracted the interest of many researchers for potential use in energy storage applications (Zhao et al. 2017a; Pham et al. 2018; Yu et al. 2018). A large number of supercapacitors based on FeS2 nanocomposite electrodes have been prepared with a variety of interesting morphologies and structures. For example, Balakrishnan et al. (2019) hydrothermally fabricated a hybrid supercapacitor based on FeS2 and reduced graphene oxide. The hybrid electrode showed a much greater areal capacitance than pure iron sulfide (a difference of 21.28 mF cm−2 under the same conditions). Moreover, at a current density of 0.3 mA cm−2, it retained 90% of its initial specific capacitance after 10,000 cycles. Figure 13 shows scanning electron microscopy images of the prepared materials.
Adapted with permission from Balakrishnan et al. (2019). Copyright (2019) Elsevier
Scanning electron microscopy (SEM) images of a, b micro flowers of FeS2 and c, d microspheres of reduced graphene oxide/iron sulfide hybrid.
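Cycling metrics such as the 90% retention after 10,000 cycles quoted above are simple ratios taken from repeated galvanostatic cycling, and coulombic efficiency at constant current reduces to the ratio of discharge to charge time. A minimal sketch of both calculations follows; the numbers are hypothetical and the function names are illustrative only.

```python
def capacitance_retention(C_cycle_n: float, C_cycle_1: float) -> float:
    """Retention (%) = capacitance at cycle n / initial capacitance * 100."""
    return 100.0 * C_cycle_n / C_cycle_1

def coulombic_efficiency(t_discharge_s: float, t_charge_s: float) -> float:
    """At constant current, efficiency (%) = discharge time / charge time * 100."""
    return 100.0 * t_discharge_s / t_charge_s

# Hypothetical values for illustration only.
print(capacitance_retention(C_cycle_n=19.2, C_cycle_1=21.3))        # ~90 %
print(coulombic_efficiency(t_discharge_s=118.0, t_charge_s=120.0))  # ~98 %
```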
Molybdenum sulfide
Molybdenum disulfide (MoS2) is cheap, easily prepared in nanosheet form, and offers a very high surface area and excellent conductivity (Liu et al. 2016c; Palsaniya et al. 2018). Owing to these excellent properties, MoS2 and its nanocomposites have been extensively studied in many fields and applications such as catalysis, energy storage, supercapacitors, and Li-ion batteries (Osman et al. 2018).
As an example, Yang et al. (2017) used a glucose-assisted hydrothermal route to manufacture an asymmetric supercapacitor consisting of hierarchical NiS arrays on MoS2 nanosheets grown on a carbon nanotube backbone, as shown in Fig. 14. The prepared electrode demonstrated a specific capacitance of 676.4 F g−1 at 1 A g−1 and retained 100% of its capacitance after 2000 cycles at a current density of 5 A g−1.
Adapted with permission from Yang et al. (2017). Copyright (2017) Elsevier
a Synthesis process of nickel sulfide/molybdenum disulfide/carbon nanotube. b Pathways of electron transport in the nickel sulfide/molybdenum disulfide/carbon nanotube supercapacitor.
Another example is the hydrothermal synthesis of a novel flower-like nanocomposite supercapacitor of molybdenum disulfide and graphitic carbon nitride (g-C3N4/MoS2) by Xu et al. (2019b). The specific capacitance of this supercapacitor was 532.7 F g−1 at 1 A g−1, and it retained 88.6% of its initial capacitance after 1000 cycles. These superior electrochemical characteristics may be attributed to the synergetic action between the flower-like MoS2 and the graphitic carbon nitride nanosheets (see Fig. 15), which facilitates the charge-transfer process.
Adapted with permission from Xu et al. (2019b). Copyright (2019) Elsevier
Morphology of the graphitic carbon nitride g-C3N4/MoS2 nanocomposite. a SEM image, b TEM image, c HR-TEM image, d sketch of the graphitic carbon nitride/MoS2 nanocomposite structure. As observed in the figure, a rather uniform and smooth molybdenum disulfide structure formed without aggregation. TEM confirms that most of the molybdenum disulfide is grown on the surface of the graphitic carbon nitride, meaning that the graphitic carbon nitride sheets provide beneficial sites for the growth of the molybdenum disulfide. SEM: scanning electron microscopy, TEM: transmission electron microscopy, HR: high resolution.
Recently, Manuraj et al. (2020) synthesized a heterostructured nanocomposite comprising MoS2 nanowires and RuO2 nanoparticles via hydrothermal and chemical-reduction procedures. In a three-electrode configuration, the MoS2–RuO2 hybrid electrode showed a specific capacitance of 972 F g−1 at 1 A g−1, while in the two-electrode configuration it delivered 719 F g−1, as presented in Fig. 16. Furthermore, the symmetric supercapacitor based on the composite electrodes showed high cycling stability, retaining about 100% of its initial capacitance after 10,000 cycles. The MoS2–RuO2 hybrid electrode also showed a high energy density of 35.92 W h kg−1 at a power density of 0.6 kW kg−1.
Adapted with permission from Manuraj et al. (2020), Copyright (2020) Elsevier
Cyclic voltammetry curves of a molybdenum disulfide and b molybdenum disulfide/ruthenium oxide, c capacitance vs. scan rate. Galvanostatic charge/discharge curves of d molybdenum disulfide and e molybdenum disulfide/ruthenium oxide, f capacitance versus current density. The figures display two redox peaks, indicating the high performance of the material corresponding to the insertion and extraction of electrons. At higher scan rates the peaks shift because the ions may only reach the outer electrode surface, whereas at lower scan rates the ions can efficiently migrate into the internal active sites.
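The gravimetric capacitances quoted from galvanostatic charge/discharge curves such as those in Fig. 16 are normally extracted with C = I·Δt/(m·ΔV). A minimal sketch of that calculation is given below; the electrode mass, current, discharge time and voltage window are hypothetical, chosen only to land in the same order of magnitude as the values discussed above.

```python
def specific_capacitance_gcd(current_A: float, discharge_time_s: float,
                             mass_g: float, voltage_window_V: float) -> float:
    """C (F g-1) = I * dt / (m * dV) from a galvanostatic discharge branch."""
    return current_A * discharge_time_s / (mass_g * voltage_window_V)

# Hypothetical electrode: 2 mg of active mass discharged at 2 mA over 0.8 V in 780 s.
print(specific_capacitance_gcd(0.002, 780.0, 0.002, 0.8))  # ~975 F g-1
```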
Tin sulfides
Many studies have aimed to enhance the electrochemical activity of tin sulfides (SnS and SnS2) using numerous approaches, including doping with metal or non-metal ions, use of a carbon matrix, and engineering of tin sulfides and their nanocomposites into nanostructured forms for application as electrochemical capacitors (Mishra et al. 2017; Wang et al. 2015b). Recently, Parveen et al. (2018) synthesized SnS2 in different nanostructured shapes: ellipsoid (EL-SnS2), flower-like (FL-SnS2), and sheet-like (SL-SnS2). The flower-like tin sulfide was the most promising, with small pore size and larger surface area, exhibiting a specific capacitance of 432 F g−1 at 1 A g−1.
Manganese sulfide
Manganese sulfide (MnS) is also a cheap, naturally abundant and environmentally friendly compound, and it theoretically possesses a high capacitance and electrical conductivity owing to its various oxidation states ranging from +2 to +7 (Palaniyandy et al. 2019). Moreover, MnS exists in three polymorphic states: α (cubic), β (cubic), and γ (hexagonal) (Yu et al. 2016). A summary of some of the most recent work on MnS is given in Table 2.
Table 2 Electrochemical characteristics of transition metals sulfide-based nanocomposite electrodes for supercapacitor applications
Tungsten sulfide
Tungsten sulfide (WS2) is likewise abundant in nature and is found as hexagonal crystals belonging to the space group P63/mmc (Eftekhari 2017). WS2 crystals form relatively brittle, restacked nanosheets with low electrical conductivity, which restricts their application in supercapacitors (Xia et al. 2018). Hence, many approaches have been followed to enhance the electrochemical performance of WS2, such as doping with binary metals, non-metals, carbon materials and conducting polymers (Xia et al. 2018).
Choudhary et al. (2016) prepared tungsten(VI) oxide (WO3) nanowires and combined them with tungsten sulfide in a WO3/WS2 core/shell structure. They applied KOH to the surface of a tungsten foil to promote its oxidation at 650 °C, forming hexagonal single-crystal WO3 (h-WO3), followed by a sulfurization process to finally form h-WO3/WS2 nanowires, as illustrated in Fig. 17. The synthesized hybrid supercapacitor demonstrated superior electrochemical characteristics and lost only a negligible fraction of its initial capacity after 30,000 cycles.
Adapted with permission from Choudhary et al. (2016). Copyright (2016) American Chemical Society
a Fabrication of tungsten oxide/tungsten sulfide composites. b Photograph of the studied system and an image of the nanowires. The nanowire-like structures of crystalline tungsten(VI) oxide are sulfurized in a furnace under a sulfur-containing atmosphere via chemical vapor deposition, which transforms the outer surface of the tungsten(VI) oxide into two-dimensional tungsten sulfide.
Carbon materials for supercapacitor applications
Carbon-derived materials offer numerous benefits, such as abundant raw materials, thermal stability, value-added chemicals, and ease of processing and modification. Consequently, they have attracted considerable attention and show high potential in different energy-related applications (Wang et al. 2008, 2018a; Meng et al. 2014; Li et al. 2016c; Jiang et al. 2012; Osman et al. 2019a, b, 2020a, b; Osman 2020; Chen et al. 2019b). Mesoporous carbon materials are considered promising candidates for advanced applications because of their exceptional features, which have attracted worldwide interest over the last few decades (Qiang et al. 2017; Zhang et al. 2017c; Sevilla et al. 2017; Wang et al. 2006; Hooch Antink et al. 2018). Mesoporous carbons can be prepared in several physical arrangements, including nanoparticles (Górka and Jaroniec 2010; Lee et al. 2011), nanosheets (Wang et al. 2018a; Li et al. 2017b; Ding et al. 2013), nanotubes (Osman et al. 2019a, 2020a, b; Guo et al. 2011) and nanofibers (Wu et al. 2015b), which can be adapted to several categories of industrial applications. Additionally, their nanostructures contain different pore sizes, including micropores, mesopores and macropores, which is of noteworthy importance for their supercapacitor application.
Several preparation pathways, including nanocasting and direct synthesis strategies, have been studied to obtain mesoporous carbon materials with different particle structures via several reaction routes (Fig. 18), each with its own advantages and disadvantages (Li et al. 2016d).
Adapted with permission from Ref. Li et al. (2016d). Copyright 2016 Springer Nature
Mesoporous carbonaceous materials derived from various routes. Interestingly, mesoporous inorganic substances can replicate their internal structures in the nanoporous carbon framework with well-distributed mesoporosity. The nanocasting techniques for creating mesoporous carbons involve two main procedures: the hard- and soft-templating approaches.
Compared with direct synthesis methods, the nanocasting method has shown the best ability to prepare uniformly dispersed mesopores in carbon materials, exploiting highly ordered mesoporous inorganic solids as templates for energy storage applications. Interestingly, mesoporous inorganic substances can replicate their internal structures in the nanoporous carbon framework with well-distributed mesoporosity. The nanocasting techniques for creating mesoporous carbons involve two main procedures: the hard- and soft-templating approaches. In general, nanocasting is a relatively predictable templating process. Although the resulting mesoporous carbons have unique physical and chemical features, their large-scale production still has quite a few drawbacks.
High-performance supercapacitor electrode materials via 3D carbon nanosheets
Because of the high cost of graphene and its derivatives, three-dimensional porous carbon nanosheets synthesized via facile methods have received attention for large-scale applications thanks to their largely opened layered structure, excellent electronic transport ability and high specific surface area. The results obtained for the prepared bark-based carbon demonstrate specific features pointing to a remarkable role in energy storage: the as-fabricated bark-based carbon-700 supercapacitors exhibit an attractive capacitance, exceptional capacitance retention and an appealing energy density for supercapacitor systems. The general method of preparing a carbon nanosheet from bark, which is part of a tree's structure, is considered environmentally friendly (as schematically shown in Fig. 19) (Li et al. 2019e) and can be very concise, as the bark contains the periderm as well as lignin-bound, oriented hollow-tube cellulose fibers (Keränen et al. 2013; Sun et al. 2018b; Chen et al. 2018b).
Adapted with permission from Li et al. (2019e) Copyright© 2019, American Chemical Society
Preparation of the 3D porous carbon nanosheet. The general method of preparing a carbon nanosheet from bark, which is part of a tree's structure, is considered environmentally friendly.
Additionally, Fig. 20a illustrates the structure of untreated bark, confirming the presence of abundant pores of different sizes in the raw material. In a related approach, pollen can be activated and its spherical porous structure preserved by using copper salts in the preparation pathway to synthesize the carbon nanosheet (Liu et al. 2018g). The SEM images of the bark-based carbon prepared at 700 °C are shown in Fig. 20b, c, confirming the formation of a typical flower-like carbon structure with an outstanding three-dimensional vertical arrangement throughout the carbon nanosheet. The TEM image (Fig. 20d) further confirms the texture of the obtained bark-based carbon samples, in which the thin nanosheet structure of the as-prepared material is clearly visible. In addition, N2 adsorption–desorption measurements (Fig. 20e) were used to probe the microstructure of the obtained samples: the hysteresis loops located at 0.4–0.9 P/P0 disclose the existence of mesopores (Chen et al. 2019c). The pore-size distribution curves calculated by density-functional theory are shown in Fig. 20f and reveal a similar pore structure for the samples, with pore sizes principally centered at 0.8 and 1.2 nm. It can reasonably be concluded that both the treatment temperature and the hard template are indispensable factors for obtaining porous carbon nanosheets from biomass.
Adapted with permission from Li et al. (2019e) Copyright © 2019, American Chemical Society
a–c SEM images of bark, bark-based carbon prepared at 700 °C, and the flower-like carbon, respectively, which confirm the formation of a typical flower-like carbon structure with an outstanding three-dimensional vertical arrangement throughout the carbon nanosheet; d TEM image of the bark-based carbon prepared at 700 °C; e N2 adsorption/desorption (BET) curves confirming the mesoporous nature; and f pore-size distribution of the bark-based carbon.
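Surface areas quoted from N2 adsorption–desorption isotherms such as those in Fig. 20e are typically obtained by BET linearization over roughly 0.05–0.30 P/P0. The following is a minimal sketch of that extraction, assuming adsorbed volumes in cm3 (STP) g−1 and the usual N2 cross-section of 0.162 nm2; the isotherm points are hypothetical, not the published data.

```python
import numpy as np

# Hypothetical (P/P0, adsorbed N2 volume in cm3 STP g-1) points in the BET range.
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
v_ads = np.array([106., 122., 134., 144., 155., 167.])

# BET linearization: 1/[v((P0/P)-1)] = (c-1)/(vm*c) * (P/P0) + 1/(vm*c)
y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))
slope, intercept = np.polyfit(p_rel, y, 1)
v_m = 1.0 / (slope + intercept)   # monolayer capacity, cm3 STP g-1
S_BET = v_m * 4.353               # m2 g-1, using the N2 cross-section of 0.162 nm2
print(f"BET surface area ~ {S_BET:.0f} m2 g-1")   # ~520 m2 g-1 for these points
```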
The performance of the as-prepared carbon nanosheets was assessed by electrochemical measurements with these materials applied in a supercapacitor. Figure 21a shows the cyclic voltammetry curves of the bark-based carbon prepared at 700 °C, which point to its outstanding capacitive behavior. Moreover, the galvanostatic charge/discharge and specific capacitance results were used to evaluate the capacity of the material implemented as an electrode (Fig. 21b, c). The results indicate that the bark-based carbon prepared at 700 °C displays an exceptional capacitance of around ~340.0 F g−1, compared with ~290 F g−1 for the bark-based carbon prepared at 600 °C and 309 F g−1 for the bark-based carbon prepared at 800 °C. Likewise, Fig. 21d shows the electrochemical impedance spectroscopy analysis of the bark-based carbon samples, which yields similar plot profiles containing a semicircle at high frequency and a nearly vertical line at low frequency, indicative of good supercapacitor behavior. Thus, it can be established that the bark-based carbon prepared at 700 °C has the lowest resistance, about 0.26 Ω, indicating the exceptional electrochemical performance of the 3D porous carbon nanosheet.
a Cyclic voltammetry and b galvanostatic charge/discharge curves of the bark-based carbon at various current densities, c capacitances and d Nyquist plots of the bark-based carbon samples. The results indicate that the bark-based carbon prepared at 700 °C displays an exceptional capacitance compared with that of the bark-based carbon prepared at 600 °C.
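The series resistance of about 0.26 Ω quoted above corresponds to the high-frequency intercept of the Nyquist plot with the real axis. A minimal sketch of reading that value from tabulated impedance data follows; the spectrum below is hypothetical and only mimics the typical semicircle-plus-vertical-line shape.

```python
import numpy as np

# Hypothetical impedance spectrum: frequency (Hz), Z' (ohm), -Z'' (ohm), high to low f.
freq   = np.array([1e5, 5e4, 1e4, 1e3, 1e2, 1e1, 1e0])
z_real = np.array([0.26, 0.28, 0.35, 0.55, 0.90, 1.60, 3.10])
z_imag = np.array([0.01, 0.03, 0.10, 0.25, 0.60, 2.00, 6.50])  # -Im(Z)

# Equivalent series resistance: the real-axis value where -Im(Z) is smallest,
# i.e. the high-frequency intercept of the Nyquist plot.
esr = z_real[np.argmin(z_imag)]
print(f"Equivalent series resistance ~ {esr:.2f} ohm")
```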
Graphene-based nanocomposites for supercapacitor applications
Graphene, which exists as a hexagonal assembly, can be defined as a two-dimensional single layer of sp2-hybridized carbon atoms. The number and stacking arrangement of graphene layers determine its electronic characteristics; in addition, the interlayer ordering and the layer number (and hence thickness) affect the chemical and physical characteristics of graphene.
Graphene has received great research attention owing to its extraordinary features, for instance its powerful mechanical strength, porosity, large specific surface area, improved conductivity, and electrochemically active nature. Different physical and chemical pathways can be used to obtain graphene, as well as composites between graphene and other compounds, making graphene suitable for improving the electrochemical activity of different materials for numerous applications such as lithium-ion batteries and supercapacitors. Graphene-derived materials possess monumental potential for applications in broad areas such as energy conversion, electronics, energy storage and catalysis (Sun et al. 2011; Chen and Hsu 2011; Liu et al. 2012; Yu et al. 2012; Shih et al. 2013; Zhang et al. 2012; Hou et al. 2013; Wang et al. 2013a; Girishkumar et al. 2010; Jin et al. 2013; Hassoun et al. 2012; Pan et al. 2013; Yang et al. 2013; Gao et al. 2012; Wang et al. 2013b; Zhang et al. 2013b; Zhu et al. 2012; Luo et al. 2012; Xu et al. 2013; Lin et al. 2013; Huang et al. 2012; Wang et al. 2011). Scheme 1 summarizes the characteristics of graphene that enable its wide range of applications.
Graphene materials along with their unique properties and various applications. Graphene-derived materials possess monumental potential for applications in broad areas such as energy conversion, electronics, energy storage and catalysis (Mahmood et al. 2014)
Graphene and its composites have been widely employed for progress in supercapacitors, where graphene has received significant attention owing to its exceptionally high surface area of ~2542.0 m2 g−1 and its unique electrical conduction characteristics. A single graphene layer also exhibits an extraordinary areal capacitance of around ~20.0 μF cm−2, larger than that of other carbon-based composites. The maximum energy density of a supercapacitor depends on various parameters, namely the electrode nature, current collectors, separators, the type and density of the electrolyte, the working voltage window of the cell, and the retention performance (El-Kady et al. 2016). Graphene, as an electrode material, greatly enhances supercapacitor performance. It takes numerous distinct shapes across all dimensionalities, such as quantum dots, wires (one dimensional), films (two dimensional), and monoliths (three dimensional), in addition to the four-dimensional self-healing structures (Yadav and Devi 2020).
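The areal and gravimetric figures quoted above are connected by a simple conversion: multiplying the intrinsic areal capacitance by the accessible specific surface area gives an upper-bound gravimetric capacitance for fully exposed single-layer graphene. A short arithmetic sketch using the values quoted in this section (taking the areal capacitance in μF cm−2, the conventional unit for areal capacitance):

```python
# Convert intrinsic areal capacitance to a theoretical gravimetric capacitance.
areal_capacitance = 20e-6          # F cm-2 (value quoted in the text)
specific_surface  = 2542.0 * 1e4   # m2 g-1 converted to cm2 g-1
theoretical_C = areal_capacitance * specific_surface
print(f"~{theoretical_C:.0f} F g-1 for fully accessible single-layer graphene")
# ~508 F g-1; practical electrodes fall well below this because of restacking
# and incomplete surface accessibility.
```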
Graphene oxide and reduced graphene oxide are examined as possible electrode materials for supercapacitors because of their remarkably large specific surface area, superior electrical conductivity, and exceptional mechanical properties (Wang et al. 2009; Ke and Wang 2016). Michael et al. synthesized an asymmetric supercapacitor device based on graphene oxide via a simple screen-printing method; the capacitance increased from 0.82 to 423 F g−1 after graphene oxide incorporation, and the device exhibited a power density of about 13.9 kW kg−1 at an energy density of up to 11.6 W h kg−1. Also, Zhang et al. (2016f) successfully synthesized a reduced graphene oxide/nickel foam electrode via flame-induced reduction of dry graphene oxide onto nickel foam. The produced composite offers a specific capacitance that reaches 228.6 F g−1 at 1 A g−1 and retains high cycling stability of up to 94.7% after 10,000 cycles. The excellent performance is ascribed to the cross-linked disordered network along with the random distribution of the resulting pores, which allows fast transport of ions to the active sites (Zhang et al. 2016f). Recently, Sahoo et al. (2016) synthesized a novel porous ternary nanohybrid based on NiMn2O4, reduced graphene oxide and polyaniline as an excellent supercapacitor electrode material. The NiMn2O4/reduced graphene oxide/polyaniline shows a specific capacitance of 757 F g−1 at 1 A g−1. Further, the electrode presented a high energy density of 70 W h kg−1 and retained about 93% of its capacitance after 2000 cycles (Fig. 22).
Adapted with permission from Sahoo et al. (2016), Copyright (2016) Elsevier
Preparation of NiMn2O4/reduced graphene oxide/polyaniline, showing the synthesis mechanism of the ternary nanocomposite. First, hydrothermal conditions induced the formation of NiMn2O4 on the surface of graphene. Then, an in situ polymerization method was conducted to fabricate polyaniline on the binary composite.
Mariappan et al. (2019) synthesized ternary hybrid nanocomposites with varying weight fractions of reduced graphene oxide/polypyrrole/Co ferrite and reduced graphene oxide/polypyrrole/Fe3O4 by a hydrothermal procedure (Fig. 23). The specific capacitances of 37 wt% reduced graphene oxide/58 wt% polypyrrole/5 wt% Fe3O4 (FO5), 32 wt% reduced graphene oxide/54 wt% polypyrrole/14 wt% Fe3O4 (FO14), 37 wt% reduced graphene oxide/58 wt% polypyrrole/5 wt% Co ferrite (CFO5), and 32 wt% reduced graphene oxide/54 wt% polypyrrole/14 wt% Co ferrite (CFO14) reached 261, 141, 108 and 68 F g−1 at 1 A g−1, respectively. Among the studied samples, FO5 presents the highest specific capacitance with excellent rate capacitance (163 F g−1). Accordingly, the FO5//AC cell shows a specific capacitance of 39 F g−1 with superior rate capability and excellent cycling performance. The energy density ranges between 18 and 4.2 W h kg−1 at power densities between 0.3 and 10.5 kW kg−1, respectively.
Adapted with permission from Mariappan et al. (2019), Copyright (2019) Elsevier
Capacitive and diffusion-controlled contributions to the measured capacitance for the synthesized ternary hybrid nanocomposites with varying weight fractions of reduced graphene oxide/polypyrrole/Co ferrite and reduced graphene oxide/polypyrrole/Fe3O4: a FO5, b FO14, c CFO5, and d CFO14. e, f Trasatti plots for evaluating the specific capacitance contribution of the external surface of the electrode for all nanocomposites.
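The Trasatti analysis referred to in Fig. 23 separates the "outer" (surface, rate-independent) capacitance from the total capacitance by extrapolating C versus ν^(-1/2) to infinite scan rate and 1/C versus ν^(1/2) to zero scan rate; the "inner" (diffusion-limited) contribution is the difference. A minimal sketch of that procedure with hypothetical capacitance data follows; it is not the authors' analysis script.

```python
import numpy as np

# Hypothetical total capacitances (F g-1) measured at several scan rates (mV s-1).
scan_rates = np.array([5., 10., 20., 50., 100.])
C_total    = np.array([250., 230., 210., 180., 160.])

# Trasatti analysis:
#  C vs v^-1/2  -> intercept (v -> infinity) gives the "outer" surface capacitance
#  1/C vs v^1/2 -> intercept (v -> 0) gives the reciprocal of the total capacitance
_, C_outer   = np.polyfit(scan_rates**-0.5, C_total, 1)
_, inv_C_tot = np.polyfit(scan_rates**0.5, 1.0 / C_total, 1)
C_max = 1.0 / inv_C_tot
print(f"outer ~ {C_outer:.0f} F g-1, total ~ {C_max:.0f} F g-1, "
      f"inner ~ {C_max - C_outer:.0f} F g-1")
```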
Doping graphene with nitrogen is also an efficient route to enhance its properties, and it has therefore been used in lithium-ion batteries and supercapacitors. When a nitrogen atom is doped into graphene, three common bonding arrangements within the carbon lattice are observed, namely pyridinic N, pyrrolic N, and graphitic (quaternary) N (Fig. 24) (Wang et al. 2012; Yadav and Dixit 2017).
Bonding configurations of nitrogen atoms doped into graphene. When a nitrogen atom is doped into graphene, three common bonding arrangements form within the carbon lattice (Yadav and Dixit 2017)
In pyridinic N, a nitrogen atom replaces a carbon atom at the graphene edge, bonds with two carbon atoms and donates one p-electron to the π system. Pyrrolic N is so named because the nitrogen atom donates two p-electrons to the π system and bonds within a five-membered ring with its carbon neighbors. Finally, quaternary (graphitic) nitrogen atoms replace carbon atoms in the hexagonal ring. Among these N types, pyrrolic N is sp3 hybridized while the other two are sp2 hybridized (Yadav and Devi 2020). N-doped graphene displays different properties compared with pristine graphene; for example, the spin density and charge distribution of the carbon atoms are affected by the neighboring nitrogen substituents, which produces activated regions on the graphene surface (Wang et al. 2012). Chen et al. (2013) synthesized an N-doped graphene hydrogel via a hydrothermal approach. The fabricated electrode exhibited an extraordinary power density of 205 kW kg−1 and retained about 92.5% of its capacitance after 4000 cycles at 100 A g−1. Recently, Rezanezhad et al. (2020) synthesized Mn–Nd co-doped LaFeO3 perovskite nanoparticles via the hydrothermal technique (Fig. 25) and subsequently incorporated them with N-doped graphene oxide nanosheets. The La0.8Nd0.2Fe0.8Mn0.2O3 sample shows a higher specific capacitance of 158 F g−1, and the incorporation of N-doped graphene oxide markedly improves the specific capacitance of the nanocomposite, increasing it up to 1060 F g−1. Additionally, the composite exhibited exceptional capacity retention of 92.4% after 10,000 cycles, higher than that of the La0.8Nd0.2Fe0.8Mn0.2O3 sample (85.37%).
Adapted with permission from Rezanezhad et al. (2020) Copyright (2020) Elsevier
Fabrication of N-graphene oxide from graphene oxide by hydrothermal technique.
Xu et al. (2019c) synthesized a NiS/MoS2@N-doped reduced graphene oxide composite through a hydrothermal approach. Employed as an electrode, the NiS/MoS2@N-reduced graphene oxide hybrid exhibits an extraordinary specific capacity of 2225 F g−1 at 1 A g−1 and a high rate capability of 1347.3 F g−1 at 10 A g−1. The NiS/MoS2@N-reduced graphene oxide also demonstrates a unique capacitive performance of 1028 F g−1 at 1 A g−1 and delivers a high energy density of up to 35.69 W h kg−1 at a power density of 601.8 W kg−1. Besides, it possesses excellent cycling stability, retaining about 94.5% of its original capacitance after 50,000 cycles (Fig. 26).
Adapted with permission from Xu et al. (2019c), Copyright (2019) Elsevier
a Schematic fabrication of the three-dimensional NiS/MoS2@N-reduced graphene oxide composites, b cyclic voltammetry curves at various scan rates, c galvanostatic charge/discharge curves at various current densities, d capacitances versus current density, and e Ragone plots.
Conducting polymers
Conducting polymer hydrogels have been extensively utilized in the field of energy storage as supercapacitors owing to many promising and useful attributes such as excellent electrochemical activity, good electrical conductivity, a distinctive solid–liquid interface, high stretchability, unique elastic resilience and good energy and power densities (Li et al. 2018; Xu et al. 2020; Ma et al. 2019b; Qin et al. 2017; Wang et al. 2018b, 2019c). In this regard, the rationale for supercapacitors based on conducting polymer hydrogels, current challenges and future directions are explained here in light of many recent research reports.
Stretchable supercapacitors with good mechanical properties are seen as very promising power supplies for electronic devices (Wang et al. 2019c). Zhaokun Yang et al. used a phytic acid-assisted molecular bridge to fabricate supercapacitors with high electrochemical activity and good mechanical properties by combining two kinds of conducting polymers, poly(3,4-ethylene dioxythiophene) and polyaniline (Yang et al. 2019c). Phytic acid enabled the transition from the benzoid to the quinoid structure. Thanks to the molecular interaction between poly(3,4-ethylene dioxythiophene) and polyaniline, the obtained hydrogel possessed largely improved mechanical characteristics compared with poly(3,4-ethylene dioxythiophene) alone. The recorded energy density was about 0.25 mW h cm−3 at a power density of 107.14 mW cm−3. This good activity was attributed to several factors, including the partial removal of polystyrene sulfonate from poly(3,4-ethylene dioxythiophene), its conversion from the benzoid to the quinoid structure, and the interaction between the employed polymers, which allowed sustained electron and ion transfer and provided quick and reversible redox reactions. Another asymmetric supercapacitor, based on manganese oxide nanoflakes loaded on polypyrrole nanowires, was reported by Weidong He et al. via a simple and eco-friendly method (He et al. 2017). The prepared core–shell structure had a large surface area and permitted efficient ion transfer owing to the decreased ion-transmission distance. The synergistic effect of MnO2 and polypyrrole led to a relatively high specific capacitance of 276 F g−1 at 2 A g−1. In addition, a retained capacitance of about 72.5% (200 F g−1) was recorded under harsh charge/discharge conditions at 20 A g−1. Moreover, good flexibility and mechanical stability indicated by minimal capacitance reduction, a high energy density (25.8 W h kg−1 at a power density of 901.7 W kg−1), unique cycling stability of 90.3% after 6000 cycles at 3 A g−1, and a high voltage window of 1.8–2 V were obtained. The electrochemical characteristics of the prepared MnO2@polypyrrole flexible supercapacitor are collected in Fig. 27.
Adapted with permission from He et al. (2017), Copyright 2017, Elsevier
a Cyclic voltammetry curves at various scan rates, b galvanostatic charge/discharge at various current densities, c cycling stability, d Ragone plot, e cycling activity at various bending states, and f galvanostatic charge/discharge of single and double supercapacitors of the prepared MnO2@polypyrrole.
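Asymmetric cells such as the MnO2@polypyrrole device above are normally charge-balanced so that the positive and negative electrodes store equal charge, q+ = q−, which fixes the mass ratio as m+/m− = (C−·ΔV−)/(C+·ΔV+). A minimal sketch of that design rule follows; the capacitance and voltage-window values are illustrative only and are not taken from the cited device.

```python
def mass_ratio_positive_to_negative(C_pos: float, dV_pos: float,
                                    C_neg: float, dV_neg: float) -> float:
    """Charge balance q+ = q-  =>  m+/m- = (C- * dV-) / (C+ * dV+)."""
    return (C_neg * dV_neg) / (C_pos * dV_pos)

# Illustrative values: pseudocapacitive positive electrode vs. an activated-carbon
# negative electrode, each assumed to span a 1.0 V window.
ratio = mass_ratio_positive_to_negative(C_pos=276.0, dV_pos=1.0,
                                        C_neg=120.0, dV_neg=1.0)
print(f"m+/m- ~ {ratio:.2f}")   # ~0.43: the higher-capacitance electrode needs less mass
```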
To achieve further flexibility, Panpan Li et al. reported a macromolecular self-assembly-based method to develop a 3D polyaniline/graphene hydrogel. The fabricated 3D hybrid exhibited powerful interconnectivity and improved mechanical properties (Li et al. 2018). The suggested device tolerated high strain (around 40%) and achieved a considerable energy density of 8.80 mW h cm−3 at a power density of 30.77 mW cm−3. In addition, the proposed supercapacitor could avoid short-circuiting and effectively withstand large structural deformation.
Another comparative study on the role of conducting polymers in supercapacitors was carried out by Zichen Xu et al., in which four different polymers, polyaniline, polypyrrole, poly(3,4-ethylene dioxythiophene) and polythiophene, were loaded on a composite of zinc sulfide and reduced graphene oxide, as shown in Fig. 28 (Xu et al. 2020). The investigated samples were fabricated via polymerization of the conducting polymers on the ZnS/reduced graphene oxide composite, which was prepared by a hydrothermal route. All the employed conducting polymers increased the specific capacitance and cycling stability of the prepared composite; however, the results showed that the ZnS/reduced graphene oxide/polyaniline composite possessed the highest capacitance and cycling stability. In the two-electrode configuration, the recorded stability and specific capacitance after 1000 cycles were 76.1% and 722 F g−1 at 1 A g−1, respectively, while in the three-electrode system the obtained specific capacitance and stability were 1045.3 F g−1 and 160% under the same conditions. In addition, the maximum power and energy densities were 18 kW kg−1 and 349.7 W h kg−1, respectively. This superior performance of the ZnS/reduced graphene oxide/polyaniline composite was attributed to its N and S active sites, which fostered electrolyte penetration during cycling and provided additional active sites.
Adapted with permission from Xu et al. (2020), Copyright 2020, Royal Society of chemistry
Synthesis of conducting polymers loaded onto the ZnS/reduced graphene oxide composite. The ZnS/reduced graphene oxide was dispersed in deionized water, and a solution of 3,4-ethylenedioxythiophene in acetonitrile was added dropwise in the presence of ammonium persulfate while stirring in an ice bath.
Highly flexible, conducting polymer-based supercapacitors were fabricated by Qingqing Qin et al. by employing polybenzimidazole with a tensile strength of 100 megapascals (Qin et al. 2017). In their study, activated carbon-coated graphite paper was integrated with the polybenzimidazole conducting polymer. The obtained device showed low series resistance and very high capacitance retention of more than 90% after 10,000 cycles. Besides, the electrochemical performance of the tested supercapacitors remained stable after twisting, bending and rolling, indicating their unique flexibility and resistance to mechanical damage.
Stretchable electrodes are the basis of stretchable supercapacitors. Xi Wang et al. reported the fabrication of stretchable electrodes based on polyaniline or poly(1,5-diaminoanthraquinone) polymers supported on an acrylate rubber/multi-wall carbon nanotube composite (Wang et al. 2018b). The prepared poly(1,5-diaminoanthraquinone)-loaded and polyaniline-loaded acrylate rubber/multi-wall carbon nanotube electrodes exhibited large volumetric capacitances at 1 mA cm−2 of about 20.2 F cm−3 and 17.2 F cm−3, respectively, as shown in Fig. 29. A unique energy density of about 2.14 mW h cm−3 was obtained after assembling an asymmetric supercapacitor employing the poly(1,5-diaminoanthraquinone)-loaded acrylate rubber/multi-wall carbon nanotubes as the anode and the polyaniline-loaded acrylate rubber/multi-wall carbon nanotubes as the cathode. Moreover, capacitance retention of 86% at 30 mA cm−2 and good cycling stability under harsh strain conditions were achieved.
Adapted with permission from Ref. Wang et al. (2018b), Copyright 2018, Royal Society of chemistry
a Cyclic voltammetry curves measured at 10 mV s−1, b the galvanostatic charge/discharge curves, c capacitance vs current density and d capacitance versus cycle number of the fabricated acrylate rubber/multi-wall carbon nanotubes/poly (1,5-diaminoanthraquinone).
Carbon nanotubes allow the uniform distribution of conducting polymers without any need for binding compounds or linkers; besides, they possess excellent conducting and mechanical properties. Frackowiak et al. (2006) reported the fabrication of three different composites made of multiwall carbon nanotubes with polyaniline, polypyrrole and poly(3,4-ethylene dioxythiophene) conducting polymers. The prepared composites exhibited both pseudocapacitance and electrostatic attraction. The employed multiwall carbon nanotubes provided good mechanical properties and protected the active material of the tested conducting polymers from mechanical deformation during long cycling measurements. A range of capacitance values from 100 to 330 F g−1 was obtained at cell voltages of 0.6–1.8 V using various asymmetric configurations. This unique performance was attributed to the presence of multiwall carbon nanotubes, which allowed high charge/discharge rates through enhanced charge transfer.
A similar study was conducted employing reduced graphene oxide sheets: Jintao Zhang et al. reported the in situ polymerization of poly(3,4-ethylene dioxythiophene), polyaniline and polypyrrole on the surface of reduced graphene oxide (Zhang and Zhao 2012). Owing to the synergistic effect of the conducting polymers and the reduced graphene oxide sheets, the prepared nanocomposites retained above 80% of their capacitance after 1000 cycles. In addition, the reduced graphene oxide@polyaniline composite showed a specific capacitance of 361 F g−1 at a current density of 0.3 A g−1, while specific capacitances of 248 F g−1 and 108 F g−1 were recorded for the reduced graphene oxide@polypyrrole and reduced graphene oxide@poly(3,4-ethylene dioxythiophene) composites, respectively, as shown in Fig. 30.
Adapted with permission from Ref. Zhang and Zhao (2012), Copyright 2012, American Chemical Society
Cyclic voltammograms of a reduced graphene oxide@poly(3,4-ethylene dioxythiophene) composite, b reduced graphene oxide@polypyrrole composite and c reduced graphene oxide@polyaniline composite, d charge/discharge pattern of reduced graphene oxide@poly(3,4-ethylene dioxythiophene) composite, e reduced graphene oxide@polypyrrole composite and f reduced graphene oxide@polyaniline composite.
Based on the electrostatic attraction between positively charged surfactants and negatively charged graphene oxide sheets, Zhang et al. reported a simple and cost-effective method for the preparation of a graphene oxide@polypyrrole sandwich structure (Zhang et al. 2010). The prepared composite showed a unique performance with a capacitance of 500 F g−1, and high cycling stability was also achieved. The reported properties were attributed to several factors: the exfoliated graphene oxide provided many active sites for conjugation of polypyrrole on both sides, the prepared 3D structure enabled cycling stability, the graphene oxide reduced the resistance, and the polypyrrole contributed effectively to the overall capacitance.
Similarly, Wang et al. (2005) used an electrochemical route to synthesize a carbon nanotube@polypyrrole composite. The composite was prepared via polypyrrole plating into the pores of a host membrane. High conductivity (from the I–V relation) and stability were obtained, as shown in Fig. 31.
Adapted with permission from Ref. Wang et al. (2005), Copyright 2004, American Chemical Society
Cyclic voltammetry curves of a carbon nanotubes and Cl−-doped polypyrrole nanowires b polypyrrole films.
Another configuration based on the conducting polymer poly(N-phenylglycine) was reported by Vedi Kuyil et al.; it was synthesized via in situ polymerization and electrodeposition of N-phenylglycine on exfoliated graphite sheets (Muniraj et al. 2020). The electrochemical performance of the investigated device showed a unique specific capacitance of 367 mF cm−2 at 10 mV s−1. Interestingly, an outstanding energy of 8.36 μW h cm−2 was recorded at a power density of 1.65 mW cm−2 using a 1.1 V potential window.
Dirican et al. (2020) reported an electrodeposition- and electrospinning-based method for the fabrication of polyaniline@MnO2@porous carbon nanofibers for supercapacitors. The proposed device combined the good cycling stability of porous carbon nanofibers, the large conductivity of polyaniline, and the high pseudocapacitance of MnO2 nanoparticles. As a result, the prepared device exhibited a high capacitance of about 289 F g−1 and a large retained capacitance of 91% after 1000 cycles, as shown in Fig. 32. Besides, the asymmetric cell configuration showed an enhanced energy density of 119 W h kg−1 at a power density of 322 W kg−1.
Adapted with permission from Dirican et al. (2020), Copyright 2020, Elsevier
a Galvanostatic charge/discharge patterns of polyaniline@MnO2@porous carbon nanofibers, MnO2@porous carbon nanofibers and porous carbon nanofibers, b specific capacitances of porous carbon nanofibers, MnO2@porous carbon nanofibers and polyaniline@MnO2@porous carbon nanofibers and c retained capacitance of polyaniline@MnO2@porous carbon nanofibers, MnO2@porous carbon nanofibers and porous carbon nanofibers. The prepared device exhibited high capacitance (289 F g−1) and largely retained capacitance.
Recent studies on polymer-based supercapacitors are summarized in Table 3.
Table 3 Recent studies on polymer-based supercapacitors
Bibliometric analysis
Prior to the bibliometric analysis, preliminary Web of Science results showed that there were only two publications in the last three years using the search criteria TOPIC: ("supercapacitor") AND TOPIC: ("transition metal") AND TOPIC: (spinel ferrites); Timespan: last 5 years; Indexes: SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-SSH, ESCI; document type: research articles. This indicates a significant gap in the literature regarding spinel ferrites and transition metal ions (oxide or sulfide). On the other hand, the search criteria TOPIC: ("supercapacitor") AND TOPIC: ("conducting polymer") over a similar time frame returned 364 results, which clearly shows an abundant amount of research regarding conducting polymers as supercapacitors. Among these results there are 323 research articles along with 28 review articles.
The bibliometric mapping of supercapacitors over the last 5 years showed 964 results using the search criteria (from the Web of Science Core Collection) TOPIC: (supercapacitor transition metal) OR TOPIC: (supercapacitor). Again, as seen in Fig. 33, most of the research output concerns conducting polymers and graphene in the energy storage field. Another identified cluster (shown in green) is the growing field of composite materials used as supercapacitors. As seen in the density visualization map (Fig. 34), derived from the bibliometric results, prominent keywords dominate the existing research, including but not limited to graphene, nanostructure and Ni foam. Interestingly, composites fall slightly outside the dense region.
Bibliometric network mapping of the supercapacitors research field in the last 5 years
Bibliometric density visualization mapping of the supercapacitors research field (2015–2020)
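Network and density maps such as those in Figs. 33 and 34 are built from keyword co-occurrence counts across the exported records. The following is a minimal, tool-agnostic sketch of that counting step, assuming each record's author keywords are available as a semicolon-separated string; the sample records are purely illustrative and do not reproduce the actual Web of Science export.

```python
from collections import Counter
from itertools import combinations

# Each record is represented here by its author-keyword string; these samples
# are illustrative only, not actual Web of Science export data.
records = [
    "supercapacitor; graphene; conducting polymer",
    "supercapacitor; transition metal sulfide; composite",
    "supercapacitor; graphene; Ni foam",
]

pair_counts = Counter()
for rec in records:
    keywords = sorted({k.strip().lower() for k in rec.split(";") if k.strip()})
    pair_counts.update(combinations(keywords, 2))   # co-occurrence within one record

# The most frequent keyword pairs become the strongest links in the network map.
for (a, b), n in pair_counts.most_common(5):
    print(f"{a} -- {b}: {n}")
```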
Supercapacitors were traditionally employed for routine applications such as memory protection and internal battery backup. In recent years, however, the application area has widened significantly toward hybrid vehicles, smartphones, and energy harvesting. The latest technologies on the horizon are pushing supercapacitors into direct competition with rechargeable batteries.
In this review, we selected various electrode materials, namely spinel ferrites, perovskite oxides, transition metal sulfides, carbon materials, and conducting polymers, evaluated their performance, and outlined their advantages and disadvantages for supercapacitor applications. The review highlights the available literature on the electrochemical activity of nanostructured forms of the selected materials and their composites, and possible approaches to implementing these materials in Li-ion batteries in the near future.
Spinel ferrite- and perovskite oxide-based materials present notable discharge capacities of 1000 mA h g−1, which is two to three times higher than those obtained with graphite anodes (Yuvaraj et al. 2016; Yin et al. 2013). In magnetic oxides, during the initial discharge cycle the crystal structure is broken down into metal particles accompanied by the production of Li2O. The as-formed metal particles promote the electrochemical reaction through the formation/decomposition of Li2O, which provides the route for the conversion-reaction mechanism. The magnetic oxides adopt many crystal shapes depending on the synthesis technique and the annealing temperature, and their specific capacitance and cycling stability depend on the crystal shape (Ajay et al. 2015). Also, substituting multiple cations onto the A- or B-sites can change the symmetry of the pristine structure and, consequently, the physical and chemical properties (Zhang et al. 2016c). As anode materials, the magnetic oxides (spinel ferrites and perovskite oxides) hold an edge for supercapacitors and hybrid supercapacitors (Liu et al. 2018c): their immense content of oxygen vacancies (Ovacancy) and remarkable conductivity allow extraordinary energy densities. In addition, perovskites store charge by oxygen intercalation, and the excellent diffusion pathways along crystal domain boundaries promote the diffusion rate (Nan et al. 2019). Transition metal sulfides, meanwhile, are promising materials for energy storage applications because of their excellent electrochemical characteristics, which are much better than those of transition metal oxides; this can be explained by the presence of sulfur atoms instead of oxygen atoms. The lower electronegativity of sulfur compared with oxygen makes electron transfer through the metal sulfide structure easier than through the metal oxide, and replacing oxygen with sulfur provides more flexibility for nanomaterial synthesis and fabrication (Jiang et al. 2016).
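Conversion-type discharge capacities of the order quoted above follow from Faraday's law, Q = n·F/(3.6·M) in mA h g−1, where n is the number of electrons exchanged per formula unit and M the molar mass. A short illustrative check is given below for a representative spinel oxide (Fe3O4 with an assumed full 8-electron conversion); this example is ours, not a calculation from the cited works.

```python
F = 96485.0  # C mol-1, Faraday constant

def theoretical_capacity_mAh_per_g(n_electrons: float, molar_mass_g: float) -> float:
    """Q = n*F / (3.6 * M): charge per gram converted from C g-1 to mA h g-1."""
    return n_electrons * F / (3.6 * molar_mass_g)

# Illustrative example: full conversion of Fe3O4 (8 e- per formula unit, M = 231.5 g mol-1).
print(f"{theoretical_capacity_mAh_per_g(8, 231.5):.0f} mA h g-1")  # ~926 mA h g-1
```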
However, the low conductivity, poor cycling stability and volume change during charge/discharge cycles of metal oxides and transition metal sulfides make them insufficient on their own for high-performance supercapacitors. To overcome these disadvantages, conducting polymers or conducting materials are added to the magnetic oxides or transition metal sulfides to boost the electronic conductivity and enhance the cycling stability (Yang et al. 2018; Qiao et al. 2018). Conducting polymer hydrogels have been extensively used for supercapacitor production in the field of energy storage owing to many promising properties, such as powerful electrochemical activity, improved electrical conductivity, a distinctive solid–liquid interface, high stretchability, unique elastic resilience and good power and energy densities (Li et al. 2018; Xu et al. 2020; Ma et al. 2019b; Qin et al. 2017; Wang et al. 2018b, 2019c). Graphene has likewise received great research attention owing to its extraordinary features, such as high conductivity, powerful mechanical strength, large specific area, porosity, and electrochemically active nature. The results showed that composites comprising magnetic oxides or transition metal sulfides with conducting polymers or conducting materials possess the highest capacitance and cycling stability. These superior characteristics are attributed to the oxygen and sulfur active sites of the composites, which foster electrolyte penetration during cycling and provide additional active sites (Xu et al. 2020).
In brief, it is deduced that the electrochemical performance of magnetic oxides or transition metal sulfides can be improved by the following strategies: designing magnetic oxides or transition metal sulfides with considerable surface areas and high porosity, and forming composites with carbonaceous materials (core–shells and graphene) and/or conducting polymers, which decrease the irreversible capacity loss and enable the production of stable supercapacitors. Hence, mixed magnetic oxides or transition metal sulfides and their composites are ideal prospective materials for the next generation of energy-storage applications.
Abbas YM et al (2019) Investigation of structural and magnetic properties of multiferroic La1−xYxFeO3 Perovskites, prepared by citrate auto-combustion technique. J Magn Magn Mater 482:66–74. https://doi.org/10.1016/j.jmmm.2019.03.056
Abdel Maksoud MIA et al (2020a) Insight on water remediation application using magnetic nanomaterials and biosorbents. Coord Chem Rev 403:213096. https://doi.org/10.1016/j.ccr.2019.213096
Abdel Maksoud MIA et al (2020b) Nanostructured Mg-substituted Mn–Zn ferrites: a magnetic recyclable catalyst for outstanding photocatalytic and antimicrobial potentials. J Hazard Mater. https://doi.org/10.1016/j.jhazmat.2020.123000
Abirami R et al (2020) Synthesis and characterization of ZnTiO3 and Ag doped ZnTiO3 perovskite nanoparticles and their enhanced photocatalytic and antibacterial activity. J Solid State Chem 281:121019. https://doi.org/10.1016/j.jssc.2019.121019
Acharya J et al (2020) Facile one pot sonochemical synthesis of CoFe2O4/MWCNTs hybrids with well-dispersed MWCNTs for asymmetric hybrid supercapacitor applications. Int J Hydrog Energy 45:3073–3085. https://doi.org/10.1016/j.ijhydene.2019.11.169
Ajay A et al (2015) 2 D amorphous frameworks of NiMoO4 for supercapacitors: defining the role of surface and bulk controlled diffusion processes. Appl Surf Sci 326:39–47. https://doi.org/10.1016/j.apsusc.2014.11.016
Ajmal M (2009) Fabrication and physical characterization of Ni1−xZnxFe2O4 and Cu1−xZnxFe2O4 ferrites. Quaid-i-Azam University, Islamabad
Alamro T, Ram MK (2017) Polyethylenedioxythiophene and molybdenum disulfide nanocomposite electrodes for supercapacitor applications. Electrochim Acta 235:623–631. https://doi.org/10.1016/j.electacta.2017.03.102
Alcalá O et al (2017) Toroidal cores of MnxCo1−xFe2O4/PAA nanocomposites with potential applications in antennas. Mater Chem Phys 192:17–21. https://doi.org/10.1016/j.matchemphys.2017.01.035
Alvarez G et al (2016) About room temperature ferromagnetic behavior in BaTiO3 perovskite. J Magn Magn Mater 401:196–199. https://doi.org/10.1016/j.jmmm.2015.10.031
Amirabadizadeh A et al (2017) Synthesis of ferrofluids based on cobalt ferrite nanoparticles: Influence of reaction time on structural, morphological and magnetic properties. J Magn Magn Mater 434:78–85. https://doi.org/10.1016/j.jmmm.2017.03.023
Amiri S, Shokrollahi H (2013) The role of cobalt ferrite magnetic nanoparticles in medical science. Mater Sci Eng, C 33:1–8. https://doi.org/10.1016/j.msec.2012.09.003
Anitha T et al (2019) Facile synthesis of ZnWO4@WS2 cauliflower-like structures for supercapacitors with enhanced electrochemical performance. J Electroanal Chem 841:86–93. https://doi.org/10.1016/j.jelechem.2019.04.034
Ansari SA et al (2017) Mechanically exfoliated MoS2 sheet coupled with conductive polyaniline as a superior supercapacitor electrode material. J Colloid Interface Sci 504:276–282. https://doi.org/10.1016/j.jcis.2017.05.064
Anu M, Saravanakumar M (2017) A review on the classification, characterisation, synthesis of nanoparticles and their application. IOP Conf Ser Mater Sci Eng. https://doi.org/10.1088/1757-899x/263/3/032019
Anupama M et al (2017) Investigation on impedance response and dielectric relaxation of Ni–Zn ferrites prepared by self-combustion technique. J Alloys Compd 706:554–561. https://doi.org/10.1016/j.jallcom.2017.02.241
Arruebo M et al (2007) Magnetic nanoparticles for drug delivery. Nano Today 2:22–32. https://doi.org/10.1016/S1748-0132(07)70084-1
Arsalani N et al (2018) Novel PANI/MnFe2O4 nanocomposite for low-cost supercapacitors with high rate capability. J Mater Sci: Mater Electron 29:6077–6085. https://doi.org/10.1007/s10854-018-8582-6
Arshad M et al (2020) Fabrication, structure, and frequency-dependent electrical and dielectric properties of Sr-doped BaTiO3 ceramics. Ceram Int 46:2238–2246. https://doi.org/10.1016/j.ceramint.2019.09.208
Arul NS et al (2018) Facile synthesis of ZnS/MnS nanocomposites for supercapacitor applications. J Solid State Electrochem 22:303–313. https://doi.org/10.1007/s10008-017-3782-1
Asen P et al (2019) One step synthesis of SnS2–SnO2 nano-heterostructured as an electrode material for supercapacitor applications. J Alloys Compd 782:38–50. https://doi.org/10.1016/j.jallcom.2018.12.176
Ashour A et al (2014) Electrical and thermal behavior of PS/ferrite composite. J Magn Magn Mater 369:260–267. https://doi.org/10.1016/j.jmmm.2014.06.005
Ashour A et al (2018) Antimicrobial activity of metal-substituted cobalt ferrite nanoparticles synthesized by sol–gel technique. Particuology 40:141–151. https://doi.org/10.1016/j.partic.2017.12.001
Assirey EAR (2019) Perovskite synthesis, properties and their related biochemical and industrial application. Saudi Pharm J 27:817–829. https://doi.org/10.1016/j.jsps.2019.05.003
Atta NF et al (2019) Effect of B-site doping on Sr2PdO3 perovskite catalyst activity for non-enzymatic determination of glucose in biological fluids. J Electroanal Chem 852:113523. https://doi.org/10.1016/j.jelechem.2019.113523
Awasthi GP et al (2018) Layer—structured partially reduced graphene oxide sheathed mesoporous MoS2 particles for energy storage applications. J Colloid Interface Sci 518:234–241. https://doi.org/10.1016/j.jcis.2018.02.043
Awata R et al (2020) High performance supercapacitor based on camphor sulfonic acid doped polyaniline/multiwall carbon nanotubes nanocomposite. Electrochim Acta 347:136229. https://doi.org/10.1016/j.electacta.2020.136229
Baharuddin NA et al (2019) Structural, morphological, and electrochemical behavior of titanium-doped SrFe1−xTixO3−δ (x = 0.1–0.5) perovskite as a cobalt-free solid oxide fuel cell cathode. Ceram Int 45:12903–12909. https://doi.org/10.1016/j.ceramint.2019.03.216
Balakrishnan B et al (2019) Facile synthesis of pristine FeS2 microflowers and hybrid rGO-FeS2 microsphere electrode materials for high performance symmetric capacitors. J Ind Eng Chem 71:191–200. https://doi.org/10.1016/j.jiec.2018.11.022
Bandyopadhyay P et al (2020) Zinc–nickel–cobalt oxide@NiMoO4 core–shell nanowire/nanosheet arrays for solid state asymmetric supercapacitors. Chem Eng J 384:123357. https://doi.org/10.1016/j.cej.2019.123357
Barakzehi M et al (2020) MOF-modified polyester fabric coated with reduced graphene oxide/polypyrrole as electrode for flexible supercapacitors. Electrochim Acta 336:135743. https://doi.org/10.1016/j.electacta.2020.135743
Barik R et al (2019) Stannous sulfide nanoparticles for supercapacitor application. Appl Surf Sci 472:112–117. https://doi.org/10.1016/j.apsusc.2018.03.172
Basuki JS et al (2013) Using fluorescence lifetime imaging microscopy to monitor theranostic nanoparticle uptake and intracellular doxorubicin release. ACS Nano 7:10175–10189. https://doi.org/10.1021/nn404407g
Bhagwan J et al (2020) Aqueous asymmetric supercapacitors based on ZnCo2O4 nanoparticles via facile combustion method. J Alloys Compd 815:152456. https://doi.org/10.1016/j.jallcom.2019.152456
Bhame SD (2007) Structural, magnetic, and magnetostrictive properties of substituted lanthanum manganites and spinel ferrites. CSIR-National Chemical Laboratory, Pune. http://dspace.ncl.res.in:8080/xmlui/bitstream/handle/20.500.12252/2592/TH1590.pdf?sequence=1. Accessed 23/07/2020
Bhaumik M et al (2020) High-performance supercapacitors based on S-doped polyaniline nanotubes decorated with Ni(OH)2 nanosponge and onion-like carbons derived from used car tyres. Electrochim Acta 342:136111. https://doi.org/10.1016/j.electacta.2020.136111
Boudad L et al (2019) Structural, morphological, spectroscopic, and dielectric properties of SmFe0.5Cr0.5O3. Mater Today Proc 13:646–653. https://doi.org/10.1016/j.matpr.2019.04.024
Budhiraju VS et al (2017) Structurally stable hollow mesoporous graphitized carbon nanofibers embedded with NiMoO4 nanoparticles for high performance asymmetric supercapacitors. Electrochim Acta 238:337–348. https://doi.org/10.1016/j.electacta.2017.04.039
Burke A, Zhao H (2015) Applications of supercapacitors in electric and hybrid vehicles. In: ITS
Cai D et al (2013) Comparison of the electrochemical performance of NiMoO4 nanorods and hierarchical nanospheres for supercapacitor applications. ACS Appl Mater Interfaces 5:12905–12910. https://doi.org/10.1021/am403444v
Cai D et al (2014a) Enhanced performance of supercapacitors with ultrathin mesoporous NiMoO4 nanosheets. Electrochim Acta 125:294–301. https://doi.org/10.1016/j.electacta.2014.01.049
Cai F et al (2014b) Hierarchical CNT@NiCo2O4 core–shell hybrid nanostructure for high-performance supercapacitors. J Mater Chem A 2:11509–11515. https://doi.org/10.1039/C4TA01235F
Cai W et al (2016) Transition metal sulfides grown on graphene fibers for wearable asymmetric supercapacitors with high volumetric capacitance and high energy density. Sci Rep 6:26890. https://doi.org/10.1038/srep26890
Cai Y-Z et al (2019) Tailoring rGO-NiFe2O4 hybrids to tune transport of electrons and ions for supercapacitor electrodes. J Alloys Compd 811:152011. https://doi.org/10.1016/j.jallcom.2019.152011
Cao Y et al (2015a) Structure, morphology and electrochemical properties of LaxSr1−xCo0.1Mn0.9O3−δ perovskite nanofibers prepared by electrospinning method. J Alloys Compd 624:31–39. https://doi.org/10.1016/j.jallcom.2014.10.178
Cao Y et al (2015b) Sr-doped lanthanum nickelate nanofibers for high energy density supercapacitors. Electrochim Acta 174:41–50. https://doi.org/10.1016/j.electacta.2015.05.131
Cao X et al (2017) Structural, optical and ferroelectric properties of KNixNb1−xO3 single crystals. J Solid State Chem 256:234–238. https://doi.org/10.1016/j.jssc.2017.08.032
Cao M et al (2020) Lignin-based multi-channels carbon nanofibers@SnO2 nanocomposites for high-performance supercapacitors. Electrochim Acta 345:136172. https://doi.org/10.1016/j.electacta.2020.136172
Chandel M et al (2018) Synthesis of multifunctional CuFe2O4-reduced graphene oxide nanocomposite: an efficient magnetically separable catalyst as well as high performance supercapacitor and first-principles calculations of its electronic structures. RSC Adv 8:27725–27739. https://doi.org/10.1039/C8RA05393F
Chandrasekaran NI et al (2018) High-performance asymmetric supercapacitor from nanostructured tin nickel sulfide (SnNi2S4) synthesized via microwave-assisted technique. J Mol Liq 266:649–657. https://doi.org/10.1016/j.molliq.2018.06.084
Chang C et al (2017) Layered MoS2/PPy nanotube composites with enhanced performance for supercapacitors. J Mater Sci: Mater Electron 28:1777–1784. https://doi.org/10.1007/s10854-016-5725-5
Chao J et al (2018) Sandwiched MoS2/polyaniline nanosheets array vertically aligned on reduced graphene oxide for high performance supercapacitors. Electrochim Acta 270:387–394. https://doi.org/10.1016/j.electacta.2018.03.072
Chauhan H et al (2017) Development of SnS2/RGO nanosheet composite for cost-effective aqueous hybrid supercapacitors. Nanotechnology 28:025401. https://doi.org/10.1088/1361-6528/28/2/025401
Chen T, Dai L (2013) Carbon nanomaterials for high-performance supercapacitors. Mater Today 16:272–280. https://doi.org/10.1016/j.mattod.2013.07.002
Chen J-T, Hsu C-S (2011) Conjugated polymer nanostructures for organic solar cell applications. Polym Chem 2:2707–2722. https://doi.org/10.1039/C1PY00275A
Chen Y et al (2007) Crystal growth and magnetic property of orthorhombic RMnO3 (R = Sm–Ho) perovskites by mild hydrothermal synthesis. J Cryst Growth 305:242–248. https://doi.org/10.1016/j.jcrysgro.2007.03.052
Chen P et al (2013) Hydrothermal synthesis of macroscopic nitrogen-doped graphene hydrogels for ultrafast supercapacitor. Nano Energy 2:249–256. https://doi.org/10.1016/j.nanoen.2012.09.003
Chen Y et al (2015) Flexible all-solid-state asymmetric supercapacitor assembled using coaxial NiMoO4 nanowire arrays with chemically integrated conductive coating. Electrochim Acta 178:429–438. https://doi.org/10.1016/j.electacta.2015.08.040
Chen JH et al (2016a) Mixed-phase Ni–Al as barrier layer against perovskite oxides to react with Cu for ferroelectric memory with Cu metallization. J Alloys Compd 666:197–203. https://doi.org/10.1016/j.jallcom.2016.01.100
Chen J et al (2016b) Pyrite FeS2 nanobelts as high-performance anode material for aqueous pseudocapacitor. Electrochim Acta 222:172–176. https://doi.org/10.1016/j.electacta.2016.10.181
Chen Y et al (2017a) In situ growth of polypyrrole onto three-dimensional tubular MoS2 as an advanced negative electrode material for supercapacitor. Electrochim Acta 246:615–624. https://doi.org/10.1016/j.electacta.2017.06.102
Chen JS et al (2017b) Rational design of self-supported Ni3S2 nanosheets array for advanced asymmetric supercapacitor with a superior energy density. ACS Appl Mater Interfaces 9:496–504. https://doi.org/10.1021/acsami.6b14746
Chen X et al (2018a) Preparation of a MoS2/carbon nanotube composite as an electrode material for high-performance supercapacitors. RSC Adv 8:29488–29494. https://doi.org/10.1039/c8ra05158e
Chen L et al (2018b) Two-dimensional porous carbon nanosheets from exfoliated nanopaper-like biomass. Mater Lett 232:187–190. https://doi.org/10.1016/j.matlet.2018.08.111
Chen C et al (2019a) Reduced ZnCo2O4@NiMoO4H2O heterostructure electrodes with modulating oxygen vacancies for enhanced aqueous asymmetric supercapacitors. J Power Sources 409:112–122. https://doi.org/10.1016/j.jpowsour.2018.10.066
Chen H et al (2019b) Upcycling food waste digestate for energy and heavy metal remediation applications. Resour Conserv Recycl X 3:100015. https://doi.org/10.1016/j.rcrx.2019.100015
Chen X et al (2019c) Natural plant template-derived cellular framework porous carbon as a high-rate and long-life electrode material for energy storage. ACS Sustain Chem Eng 7:5845–5855. https://doi.org/10.1021/acssuschemeng.8b05777
Chen Y et al (2020) Excellent performance of flexible supercapacitor based on the ternary composites of reduced graphene oxide/molybdenum disulfide/poly(3,4-ethylenedioxythiophene). Electrochim Acta 330:135205. https://doi.org/10.1016/j.electacta.2019.135205
Cheng Q et al (2011) Graphene and nanostructured MnO2 composite electrodes for supercapacitors. Carbon 49:2917–2925. https://doi.org/10.1016/j.carbon.2011.02.068
Cheng F et al (2020a) Boosting the supercapacitor performances of activated carbon with carbon nanomaterials. J Power Sources 450:227678. https://doi.org/10.1016/j.jpowsour.2019.227678
Cheng JP et al (2020b) Recent research of core–shell structured composites with NiCo2O4 as scaffolds for electrochemical capacitors. Chem Eng J 393:124747. https://doi.org/10.1016/j.cej.2020.124747
Choudhary N et al (2016) High-performance one-body core/shell nanowire supercapacitor enabled by conformal growth of capacitive 2D WS2 layers. ACS Nano 10:10726–10735. https://doi.org/10.1021/acsnano.6b06111
Choudhary N et al (2020) Correlation between magnetic and transport properties of rare earth doped perovskite manganites La0.6R0.1Ca0.3MnO3 (R = La, Nd, Sm, Gd, and Dy) synthesized by Pechini process. Mater Chem Phys 242:122482. https://doi.org/10.1016/j.matchemphys.2019.122482
Chu H et al (2018) Ni, Co and Mn doped SnS2-graphene aerogels for supercapacitors. J Alloys Compd 767:583–591. https://doi.org/10.1016/j.jallcom.2018.07.126
Cui X et al (2017) Dopamine adsorption precursor enables N-doped carbon sheathing of MoS2 nanoflowers for all-around enhancement of supercapacitor performance. J Alloys Compd 693:955–963. https://doi.org/10.1016/j.jallcom.2016.09.173
Cullity BD, Graham CD (2011) Introduction to magnetic materials. Wiley, New York
Dabrowski B et al (2005) Structural, transport, and magnetic properties of RMnO3 perovskites (R = La, Pr, Nd, Sm, 153Eu, Dy). J Solid State Chem 178:629–637. https://doi.org/10.1016/j.jssc.2004.12.006
Dar M, Varshney D (2017) Effect of d-block element Co2+ substitution on structural, Mössbauer and dielectric properties of spinel copper ferrites. J Magn Magn Mater 436:101–112. https://doi.org/10.1016/j.jmmm.2017.04.046
Das T, Verma B (2019) Synthesis of polymer composite based on polyaniline-acetylene black-copper ferrite for supercapacitor electrodes. Polymer 168:61–69. https://doi.org/10.1016/j.polymer.2019.01.058
Deganello F et al (2016) Electrochemical properties of Ce-doped SrFeO3 perovskites-modified electrodes towards hydrogen peroxide oxidation. Electrochim Acta 190:939–947. https://doi.org/10.1016/j.electacta.2015.12.101
Deshagani S et al (2019) Nickel cobaltite@poly(3,4-ethylenedioxypyrrole) and carbon nanofiber interlayer based flexible supercapacitors. Nanoscale 11:2742–2756. https://doi.org/10.1039/C8NR08645A
Deshagani S et al (2020) Altered crystal structure of nickel telluride by selenide doping and a poly(N-methylpyrrole) coating amplify supercapacitor performance. Electrochim Acta 345:136200. https://doi.org/10.1016/j.electacta.2020.136200
Ding J et al (2013) Carbon nanosheet frameworks derived from peat moss as high performance sodium ion battery anodes. ACS Nano 7:11004–11015. https://doi.org/10.1021/nn404640c
Ding R et al (2017) Perovskite KNi 0.8 Co 0.2 F 3 nanocrystals for supercapacitors. J Mater Chem A 5:17822–17827. https://doi.org/10.1039/C7TA05209J
Dirican M et al (2020) Polyaniline/MnO2/porous carbon nanofiber electrodes for supercapacitors. J Electroanal Chem 861:113995. https://doi.org/10.1016/j.jelechem.2020.113995
Dutta S, De S (2018) MoS2 Nanosheet/rGO hybrid: an electrode material for high performance thin film supercapacitor, vol 5. Elsevier, Amsterdam, pp 9771–9775. https://doi.org/10.1016/j.matpr.2017.10.165
Dwivedi GD et al (2015) Low temperature magnetic and transport properties of LSMO–PZT nanocomposites. RSC Adv 5:30748–30757. https://doi.org/10.1039/C5RA04101E
Eftekhari A (2017) Tungsten dichalcogenides (WS2, WSe2, and WTe2): materials chemistry and applications, vol 5. Royal Society of Chemistry, London, pp 18299–18325. https://doi.org/10.1039/C7TA04268J
El Moussaoui H et al (2016) Synthesis and magnetic properties of tin spinel ferrites doped manganese. J Magn Magn Mater 405:181–186. https://doi.org/10.1016/j.jmmm.2015.12.059
El-Kady MF et al (2016) Graphene for batteries, supercapacitors and beyond. Nat Rev Mater 1:16033. https://doi.org/10.1038/natrevmats.2016.33
Elkholy AE et al (2017) Nanostructured spinel manganese cobalt ferrite for high-performance supercapacitors. RSC Adv 7:51888–51895. https://doi.org/10.1039/C7RA11020K
Elseman AM et al (2020) CoFe2O4@carbon spheres electrode: a one-step solvothermal method for enhancing the electrochemical performance of hybrid supercapacitors. ChemElectroChem 7:526–534. https://doi.org/10.1002/celc.202000005
Elsiddig ZA et al (2017) Modulating Mn4+ ions and oxygen vacancies in nonstoichiometric LaMnO3 perovskite by a facile sol–gel method as high-performance supercapacitor electrodes. Electrochim Acta 253:422–429. https://doi.org/10.1016/j.electacta.2017.09.076
Fan LQ et al (2015) Facile one-step hydrothermal preparation of molybdenum disulfide/carbon composite for use in supercapacitor. Int J Hydrog Energy 40:10150–10157. https://doi.org/10.1016/j.ijhydene.2015.06.061
Fang L et al (2017) Flower-like nanoarchitecture assembled from Bi2S3 nanorod/MoS2 nanosheet heterostructures for high-performance supercapacitor electrodes. Colloids Surf A 535:41–48. https://doi.org/10.1016/j.colsurfa.2017.09.022
Fang L et al (2018) Three-dimensional flower-like MoS2–CoSe2 heterostructure for high performance superccapacitors. J Colloid Interface Sci 512:282–290. https://doi.org/10.1016/j.jcis.2017.10.072
Farid MT et al (2017) Magnetic and electric behavior of praseodymium substituted CuPryFe2–yO4 ferrites. J Magn Magn Mater 422:337–343. https://doi.org/10.1016/j.jmmm.2016.09.016
Frackowiak E et al (2006) Supercapacitors based on conducting polymers/nanotubes composites. J Power Sources 153:413–418. https://doi.org/10.1016/j.jpowsour.2005.05.030
Galal A et al (2018) Enhancing the specific capacitance of SrRuO3 and reduced graphene oxide in NaNO3, H3PO4 and KOH electrolytes. Electrochim Acta 260:738–747. https://doi.org/10.1016/j.electacta.2017.12.026
Galasso FS (2013) Structure, properties and preparation of perovskite-type compounds: international series of monographs in solid state physics. Elsevier, Amsterdam. https://doi.org/10.1016/C2013-0-02117-2
Gao H et al (2012) High-performance asymmetric supercapacitor based on graphene hydrogel and nanostructured MnO2. ACS Appl Mater Interfaces 4:2801–2810. https://doi.org/10.1021/am300455d
Gao S et al (2016a) Ultrathin Co3O4 layers realizing optimized CO2 electroreduction to formate. Angew Chem Int Ed 55:698–702. https://doi.org/10.1002/anie.201509800
Gao L et al (2016b) A coaxial yarn electrode based on hierarchical MoS2 nanosheets/carbon fiber tows for flexible solid-state supercapacitors. RSC Adv 6:57190–57198. https://doi.org/10.1039/C6RA10178J
Gao Y-P et al (2018a) High-performance symmetric supercapacitor based on flower-like zinc molybdate. J Alloys Compd 731:1151–1158. https://doi.org/10.1016/j.jallcom.2017.10.161
Gao YP et al (2018b) MoS2 nanosheets assembling three-dimensional nanospheres for enhanced-performance supercapacitor. J Alloys Compd 741:174–181. https://doi.org/10.1016/j.jallcom.2018.01.110
Gao J et al (2018c) Free-standing WS2-MWCNTs hybrid paper integrated with polyaniline for high-performance flexible supercapacitor. J Nanopart Res. https://doi.org/10.1007/s11051-018-4409-x
Gao W et al (2020) A review of flexible perovskite oxide ferroelectric films and their application. J Materiomics 6:1–16. https://doi.org/10.1016/j.jmat.2019.11.001
Ge M et al (2020) Hierarchical nanocomposite that coupled nitrogen-doped graphene with aligned PANI cores arrays for high-performance supercapacitor. Electrochim Acta 330:135236. https://doi.org/10.1016/j.electacta.2019.135236
Geng P et al (2018) Transition metal sulfides based on graphene for electrochemical energy storage. Adv Energy Mater 8:1703259. https://doi.org/10.1002/aenm.201703259
George G et al (2018) Effect of doping on the performance of high-crystalline SrMnO3 perovskite nanofibers as a supercapacitor electrode. Ceram Int 44:21982–21992. https://doi.org/10.1016/j.ceramint.2018.08.313
Ghafoor A et al (2016) Structural and electromagnetic studies of Ni0.7Zn0.3Ho2xFe2 − 2xO4 ferrites. Ceram Int 42:14252–14256. https://doi.org/10.1016/j.ceramint.2016.06.054
Girishkumar G et al (2010) Lithium-air battery: promise and challenges. J Phys Chem Lett 1:2193–2203. https://doi.org/10.1021/jz1005384
Gokon N et al (2019) Thermochemical behavior of perovskite oxides based on LaxSr1−x(Mn, Fe, Co)O3−δ and BaySr1−yCoO3−δ redox system for thermochemical energy storage at high temperatures. Energy 171:971–980. https://doi.org/10.1016/j.energy.2019.01.081
Gong H et al (2018) Preparation and supercapacitive property of molybdenum disulfide (MoS2) nanoflake arrays-tungsten trioxide (WO3) nanorod arrays composite heterojunction: a synergistic effect of one-dimensional and two-dimensional nanomaterials. Electrochim Acta 263:409–416. https://doi.org/10.1016/j.electacta.2018.01.072
Gopi CVVM et al (2020) Co9S8–Ni3S2/CuMn2O4–NiMn2O4 and MnFe2O4–ZnFe2O4/graphene as binder-free cathode and anode materials for high energy density supercapacitors. Chem Eng J 381:122640. https://doi.org/10.1016/j.cej.2019.122640
Górka J, Jaroniec M (2010) Tailoring adsorption and framework properties of mesoporous polymeric composites and carbons by addition of organosilanes during soft-templating synthesis. J Phys Chem C 114:6298–6303. https://doi.org/10.1021/jp9117858
Govindasamy M et al (2019a) Facile sonochemical synthesis of perovskite-type SrTiO3 nanocubes with reduced graphene oxide nanocatalyst for an enhanced electrochemical detection of α-amino acid (tryptophan). Ultrason Sonochem 56:193–199. https://doi.org/10.1016/j.ultsonch.2019.04.004
Govindasamy M et al (2019b) Fabrication of hierarchical NiCo2S4@CoS2 nanostructures on highly conductive flexible carbon cloth substrate as a hybrid electrode material for supercapacitors with enhanced electrochemical performance. Electrochim Acta 293:328–337. https://doi.org/10.1016/j.electacta.2018.10.051
Grabowska E (2016) Selected perovskite oxides: characterization, preparation and photocatalytic properties—a review. Appl Catal B Environ 186:97–126. https://doi.org/10.1016/j.apcatb.2015.12.035
Guan C et al (2015) Iron oxide-decorated carbon for supercapacitor anodes with ultrahigh energy density and outstanding cycling stability. ACS Nano 9:5198–5207. https://doi.org/10.1021/acsnano.5b00582
Guo B et al (2011) Soft-templated mesoporous carbon–carbon nanotube composites for high performance lithium-ion batteries. Adv Mater 23:4661–4666. https://doi.org/10.1002/adma.201102032
Guo D et al (2014) High performance NiMoO4 nanowires supported on carbon cloth as advanced electrodes for symmetric supercapacitors. Nano Energy. 8:174–182. https://doi.org/10.1016/j.nanoen.2014.06.002
Guo P et al (2017) Electrochemical properties of colloidal nanocrystal assemblies of manganese ferrite as the electrode materials for supercapacitors. J Mater Sci 52:5359–5365. https://doi.org/10.1007/s10853-017-0778-2
Gupta AK, Gupta M (2005a) Synthesis and surface engineering of iron oxide nanoparticles for biomedical applications. Biomaterials 26:3995–4021. https://doi.org/10.1016/j.biomaterials.2004.10.012
Gupta AK, Gupta M (2005b) Cytotoxicity suppression and cellular uptake enhancement of surface modified magnetic nanoparticles. Biomaterials 26:1565–1573. https://doi.org/10.1016/j.biomaterials.2004.05.022
Han C et al (2018) Vertical crosslinking MoS2/three-dimensional graphene composite towards high performance supercapacitor. Chin Chem Lett 29:606–611. https://doi.org/10.1016/j.cclet.2018.01.017
Hao J et al (2015) Facile Synthesis of 3D hierarchical flower-like Co3 − xFexO4 ferrite on nickel foam as high-performance electrodes for supercapacitors. Electrochim Acta 152:13–18. https://doi.org/10.1016/j.electacta.2014.11.104
Hassan HS et al (2019) Assessment of zinc ferrite nanocrystals for removal of 134Cs and 152 + 154Eu radionuclides from nitric acid solution. J Mater Sci: Mater Electron. https://doi.org/10.1007/s10854-019-02678-y
Hassoun J et al (2012) A metal-free, lithium-ion oxygen battery: a step forward to safety in lithium-air batteries. Nano Lett 12:5775–5779. https://doi.org/10.1021/nl303087j
Hatui G et al (2017) Template-free single pot synthesis of SnS2@Cu2O/reduced graphene oxide (rGO) nanoflowers for high performance supercapacitors. New J Chem 41:2702–2716. https://doi.org/10.1039/c6nj02965e
He W et al (2017) Flexible and high energy density asymmetrical supercapacitors based on core/shell conducting polymer nanowires/manganese dioxide nanoflakes. Nano Energy 35:242–250. https://doi.org/10.1016/j.nanoen.2017.03.045
Hekmat F et al (2020) Hybrid energy storage device from binder-free zinc–cobalt sulfide decorated biomass-derived carbon microspheres and pyrolyzed polyaniline nanotube-iron oxide. Energy Storage Mater 25:621–635. https://doi.org/10.1016/j.ensm.2019.09.022
Hennous M et al (2019) Synthesis, structure and magnetic properties of multipod-shaped cobalt ferrite nanocrystals. New J Chem 43:10259–10269. https://doi.org/10.1039/C9NJ02237F
Hirel P et al (2015) Theoretical and experimental study of the core structure and mobility of dislocations and their influence on the ferroelectric polarization in perovskite KNbO3. Phys Rev B 92:214101. https://doi.org/10.1103/PhysRevB.92.214101
Hooch Antink W et al (2018) Recent progress in porous graphene and reduced graphene oxide-based nanomaterials for electrochemical energy storage devices. Adv Mater Interfaces 5:1701212. https://doi.org/10.1002/admi.201701212
Hou J et al (2013) A new method for fabrication of graphene/polyaniline nanocomplex modified microbial fuel cell anodes. J Power Sources 224:139–144. https://doi.org/10.1016/j.jpowsour.2012.09.091
Hou X et al (2018) Metal organic framework derived core–shell structured Co9S8@N-C@MoS2 nanocubes for supercapacitor. ACS Appl Energy Mater 1:3513–3520. https://doi.org/10.1021/acsaem.8b00773
Houshiar M et al (2014) Synthesis of cobalt ferrite (CoFe2O4) nanoparticles using combustion, coprecipitation, and precipitation methods: a comparison study of size, structural, and magnetic properties. J Magn Magn Mater 371:43–48. https://doi.org/10.1016/j.jmmm.2014.06.059
Huang X et al (2012) Graphene-based composites. Chem Soc Rev 41:666–686. https://doi.org/10.1039/C1CS15078B
Huang KJ et al (2013a) Layered MoS2–graphene composites for supercapacitor applications with enhanced capacitive performance. Int J Hydrog Energy 38:14027–14034. https://doi.org/10.1016/j.ijhydene.2013.08.112
Huang KJ et al (2013b) Synthesis of polyaniline/2-dimensional graphene analog MoS2 composites for high-performance supercapacitor. Electrochim Acta 109:587–594. https://doi.org/10.1016/j.electacta.2013.07.168
Huang L et al (2015a) 3D interconnected porous NiMoO4 nanoplate arrays on Ni foam as high-performance binder-free electrode for supercapacitors. J Mater Chem A 3:22081–22087. https://doi.org/10.1039/C5TA05644F
Huang KJ et al (2015b) Synthesis of molybdenum disulfide/carbon aerogel composites for supercapacitors electrode material application. J Electroanal Chem 752:33–40. https://doi.org/10.1016/j.jelechem.2015.06.005
Huang L et al (2016a) Hierarchical core–shell NiCo2O4@NiMoO4 nanowires grown on carbon cloth as integrated electrode for high-performance supercapacitors. Sci Rep 6:31465. https://doi.org/10.1038/srep31465
Huang Y et al (2016b) Nanostructured polypyrrole as a flexible electrode material of supercapacitor. Nano Energy 22:422–438. https://doi.org/10.1016/j.nanoen.2016.02.047
Huang Y et al (2016c) Enhanced cycling stability of NiCo2S4AtNiO core–shell nanowire arrays for all-solid-state asymmetric supercapacitors. Sci Rep 6:1–10. https://doi.org/10.1038/srep38620
Huang F et al (2017a) One-step hydrothermal synthesis of Ni3S4@MoS2 nanosheet on carbon fiber paper as a binder-free anode for supercapacitor. J Mater Sci: Mater Electron 28:12747–12754. https://doi.org/10.1007/s10854-017-7100-6
Huang L et al (2017b) Ultrahigh-performance pseudocapacitor based on phase-controlled synthesis of MoS2 nanosheets decorated Ni3S2 hybrid structure through annealing treatment. Appl Surf Sci 425:879–888. https://doi.org/10.1016/j.apsusc.2017.06.334
Huang Y et al (2018a) NiMoO4 nanorod deposited carbon sponges with ant-nest-like interior channels for high-performance pseudocapacitors. Inorg Chem Front 5:1594–1601. https://doi.org/10.1039/C8QI00247A
Huang F et al (2018b) One-step hydrothermal synthesis of a CoS2@MoS2 nanocomposite for high-performance supercapacitors. J Alloys Compd 742:844–851. https://doi.org/10.1016/j.jallcom.2018.01.324
Hui KN et al (2016) Hierarchical chestnut-like MnCo2O4 nanoneedles grown on nickel foam as binder-free electrode for high energy density asymmetric supercapacitors. J Power Sources 330:195–203. https://doi.org/10.1016/j.jpowsour.2016.08.116
Hussain S et al (2020) Novel gravel-like NiMoO4 nanoparticles on carbon cloth for outstanding supercapacitor applications. Ceram Int 46:6406–6412. https://doi.org/10.1016/j.ceramint.2019.11.118
Hwang J et al (2019) Tuning perovskite oxides by strain: electronic structure, properties, and functions in (electro)catalysis and ferroelectricity. Mater Today. https://doi.org/10.1016/j.mattod.2019.03.014
Ikkurthi KD et al (2018) Synthesis of nanostructured metal sulfides via a hydrothermal method and their use as an electrode material for supercapacitors. New J Chem 42:19183–19192. https://doi.org/10.1039/C8NJ04358B
Iro ZS et al (2016) A brief review on electrode materials for supercapacitor. Int J Electrochem Sci 11:10628–10643. https://doi.org/10.20964/2016.12.50
Ismail FM et al (2018) Mesoporous spinel manganese zinc ferrite for high-performance supercapacitors. J Electroanal Chem. https://doi.org/10.1016/j.jelechem.2018.04.002
Israr M et al (2020) A unique ZnFe2O4/graphene nanoplatelets nanocomposite for electrochemical energy storage and efficient visible light driven catalysis for the degradation of organic noxious in wastewater. J Phys Chem Solids 140:109333. https://doi.org/10.1016/j.jpcs.2020.109333
Jain TK et al (2008) Magnetic nanoparticles with dual functional properties: drug delivery and magnetic resonance imaging. Biomaterials 29:4012–4021. https://doi.org/10.1016/j.biomaterials.2008.07.004
Jang K et al (2015) Intense pulsed light-assisted facile and agile fabrication of cobalt oxide/nickel cobaltite nanoflakes on nickel-foam for high performance supercapacitor applications. J Alloys Compd 618:227–232. https://doi.org/10.1016/j.jallcom.2014.08.166
Jeevanandam J et al (2018) Review on nanoparticles and nanostructured materials: history, sources, toxicity and regulations. Beilstein J Nanotechnol 9:1050–1074. https://doi.org/10.3762/bjnano.9.98
Jia Y et al (2017) Hierarchical nanosheet-based MoS2/graphene nanobelts with high electrochemical energy storage performance. J Power Sources 354:1–9. https://doi.org/10.1016/j.jpowsour.2017.04.031
Jia H et al (2019) Controlled synthesis of MOF-derived quadruple-shelled CoS2 hollow dodecahedrons as enhanced electrodes for supercapacitors. Electrochim Acta 312:54–61. https://doi.org/10.1016/j.electacta.2019.04.192
Jiang SP (2019) Development of lanthanum strontium cobalt ferrite perovskite electrodes of solid oxide fuel cells—a review. Int J Hydrog Energy 44:7448–7493. https://doi.org/10.1016/j.ijhydene.2019.01.212
Jiang H et al (2012) Mesoporous carbon incorporated metal oxide nanomaterials as supercapacitor electrodes. Adv Mater 24:4197–4202. https://doi.org/10.1002/adma.201104942
Jiang J et al (2013) Diffusion-controlled evolution of core–shell nanowire arrays into integrated hybrid nanotube arrays for Li-ion batteries. Nanoscale 5:8105–8113. https://doi.org/10.1039/C3NR01786A
Jiang Y et al (2016) Nickel silicotungstate-decorated Pt photocathode as an efficient catalyst for triiodide reduction in dye-sensitized solar cells. Dalton Trans 45:16859–16868. https://doi.org/10.1039/C6DT03190K
Jin J et al (2013) Flexible self-supporting graphene–sulfur paper for lithium sulfur batteries. RSC Adv 3:2558–2560. https://doi.org/10.1039/C2RA22808D
Jinlong L et al (2017) Synthesis of CoMoO4@RGO nanocomposites as high-performance supercapacitor electrodes. Microporous Mesoporous Mater 242:264–270. https://doi.org/10.1016/j.micromeso.2017.01.034
Kandula S et al (2018) Fabrication of a 3D hierarchical sandwich Co9S8/α-MnS@N-C@MoS2 nanowire architectures as advanced electrode material for high performance hybrid supercapacitors. Small 14:1800291. https://doi.org/10.1002/smll.201800291
Kang C et al (2015) Three-dimensional carbon nanotubes for high capacity lithium-ion batteries. J Power Sources 299:465–471. https://doi.org/10.1016/j.jpowsour.2015.08.103
Kathalingam A et al (2020) Nanosheet-like ZnCo2O4@nitrogen doped graphene oxide/polyaniline composite for supercapacitor application: effect of polyaniline incorporation. J Alloys Compd 830:154734. https://doi.org/10.1016/j.jallcom.2020.154734
Kaur P, Singh K (2019) Review of perovskite-structure related cathode materials for solid oxide fuel cells. Ceram Int. https://doi.org/10.1016/j.ceramint.2019.11.066
Kazemi SH et al (2016) Binder-free electrodes of NiMoO4/graphene oxide nanosheets: synthesis, characterization and supercapacitive behavior. RSC Adv 6:111170–111181. https://doi.org/10.1039/C6RA23076H
Ke Q, Wang J (2016) Graphene-based materials for supercapacitor electrodes—a review. J Materiomics 2:37–54. https://doi.org/10.1016/j.jmat.2016.01.001
Kefeni KK et al (2020) Spinel ferrite nanoparticles and nanocomposites for biomedical applications and their toxicity. Mater Sci Eng, C 107:110314. https://doi.org/10.1016/j.msec.2019.110314
Keränen A et al (2013) Preparation of novel anion exchangers from pine sawdust and bark, spruce bark, birch bark and peat for the removal of nitrate. Chem Eng Sci 98:59–68. https://doi.org/10.1016/j.ces.2013.05.007
Khawula TNY et al (2016) Symmetric pseudocapacitors based on molybdenum disulfide (MoS2)-modified carbon nanospheres: correlating physicochemistry and synergistic interaction on energy storage. J Mater Chem A 4:6411–6425. https://doi.org/10.1039/C6TA00114A
Kim TW et al (2014) Electrochemical synthesis of spinel type ZnCo2O4 electrodes for use as oxygen evolution reaction catalysts. J Phys Chem Lett 5:2370–2374. https://doi.org/10.1021/jz501077u
Kim HJ et al (2016) Densely packed zinc sulfide nanoparticles on TiO2 for hindering electron recombination in dye-sensitized solar cells. New J Chem 40:9176–9186. https://doi.org/10.1039/C6NJ02493A
Kim DY et al (2017) Chemical synthesis of hierarchical NiCo2S4 nanosheets like nanostructure on flexible foil for a high performance supercapacitor. Sci Rep 7:1–10. https://doi.org/10.1038/s41598-017-10218-z
Koneracká M et al (1999) Immobilization of proteins and enzymes to fine magnetic particles. J Magn Magn Mater 201:427–430. https://doi.org/10.1016/S0304-8853(99)00005-0
Kumar TP et al (2004) Tin-filled carbon nanotubes as insertion anode materials for lithium-ion batteries. Electrochem Commun 6:520–525. https://doi.org/10.1016/j.elecom.2004.03.009
Kumar PR et al (2014) Enhanced properties of porous CoFe2O4-reduced graphene oxide composites with alginate binders for Li-ion battery applications. New J Chem 38:3654–3661. https://doi.org/10.1039/C4NJ00419A
Kumar YA et al (2020) A MoNiO4 flower-like electrode material for enhanced electrochemical properties via a facile chemical bath deposition method for supercapacitor applications. New J Chem 44:522–529. https://doi.org/10.1039/C9NJ05529K
Kumuthini R et al (2017) Electrochemical properties of electrospun MoS2@C nanofiber as electrode material for high-performance supercapacitor application. J Alloys Compd 705:624–630. https://doi.org/10.1016/j.jallcom.2017.02.163
Kwon J et al (2017) Facile hydrothermal synthesis of cubic spinel AB2O4 type MnFe2O4 nanocrystallites and their electrochemical performance. Appl Surf Sci 413:83–91. https://doi.org/10.1016/j.apsusc.2017.04.022
Lacerda GRBS et al (2020) Development of nanohybrids based on carbon nanotubes/P(EDOT-co-MPy) and P(EDOT-co-PyMP) copolymers as electrode materials for aqueous supercapacitors. Electrochim Acta 335:135637. https://doi.org/10.1016/j.electacta.2020.135637
Lalwani S et al (2019) Layered nanoblades of iron cobaltite for high performance asymmetric supercapacitors. Appl Surf Sci 476:1025–1034. https://doi.org/10.1016/j.apsusc.2019.01.184
Lamberti A (2018) Flexible supercapacitor electrodes based on MoS2-intercalated rGO membranes on Ti mesh. Mater Sci Semicond Process 73:106–110. https://doi.org/10.1016/j.mssp.2017.06.046
Lang X et al (2017) Supercapacitor performance of perovskite La1−xSrxMnO3. Dalton Trans 46:13720–13730. https://doi.org/10.1039/C7DT03134C
Lang X et al (2019) Ag nanoparticles decorated perovskite La0.85Sr0.15MnO3 as electrode materials for supercapacitors. Mater Lett 243:34–37. https://doi.org/10.1016/j.matlet.2019.02.002
Lavela P et al (2009) 57Fe Mossbauer spectroscopy study of the electrochemical reaction with lithium of MFe2O4 (M = Co and Cu) electrodes. J Phys Chem C 113:20081–20087. https://doi.org/10.1021/jp9056362
Lee HI et al (2011) Spontaneous phase separation mediated synthesis of 3D mesoporous carbon with controllable cage and window size. Adv Mater 23:2357–2361. https://doi.org/10.1002/adma.201003599
Lee H et al (2017) Yolk–shell polystyrene@microporous organic network: a smart template with thermally disassemblable yolk to engineer hollow MoS2/C composites for high-performance supercapacitors. ACS Omega 2:7658–7665. https://doi.org/10.1021/acsomega.7b01426
Li X et al (2015) Fabrication of γ-MnS/rGO composite by facile one-pot solvothermal approach for supercapacitor applications. J Power Sources 282:194–201. https://doi.org/10.1016/j.jpowsour.2015.02.057
Li Z et al (2016a) Flaky CoS2 and graphene nanocomposite anode materials for sodium-ion batteries with improved performance. RSC Adv 6:70632–70637. https://doi.org/10.1039/C6RA12563H
Li L et al (2016b) Hierarchical carbon@Ni3S2@MoS2 double core–shell nanorods for high-performance supercapacitors. J Mater Chem A 4:1319–1325. https://doi.org/10.1039/c5ta08714g
Li M et al (2016c) Ultrafine jagged platinum nanowires enable ultrahigh mass activity for the oxygen reduction reaction. Science 354:1414–1419. https://doi.org/10.1126/science.aaf9050
Li W et al (2016d) Mesoporous materials for energy conversion and storage devices. Nat Rev Mater 1:1–17. https://doi.org/10.1038/natrevmats.2016.23
Li Z et al (2017a) Controlled synthesis of perovskite lanthanum ferrite nanotubes with excellent electrochemical properties. RSC Adv 7:12931–12937. https://doi.org/10.1039/C6RA27423D
Li X et al (2017b) Supercapacitor electrode materials with hierarchically structured pores from carbonization of MWCNTs and ZIF-8 composites. Nanoscale 9:2178–2187. https://doi.org/10.1039/C6NR08987A
Li P et al (2018) Stretchable all-gel-state fiber-shaped supercapacitors enabled by macromolecularly interconnected 3D graphene/nanostructured conductive polymer hydrogels. Adv Mater 30:1800124. https://doi.org/10.1002/adma.201800124
Li H et al (2019a) Zinc cobalt sulfide nanoparticles as high performance electrode material for asymmetric supercapacitor. Electrochim Acta 319:716–726. https://doi.org/10.1016/j.electacta.2019.07.033
Li J et al (2019b) Dielectric, multiferroic and magnetodielectric properties of (1 − x)BaTiO3–xSr2CoMoO6 solid solution. Ceram Int 45:16353–16360. https://doi.org/10.1016/j.ceramint.2019.05.163
Li J et al (2019c) Cladding nanostructured AgNWs–MoS2 electrode material for high-rate and long-life transparent in-plane micro-supercapacitor. Energy Storage Mater 16:212–219. https://doi.org/10.1016/j.ensm.2018.05.013
Li D et al (2019d) A general self-template-etched solution route for the synthesis of 2D γ-manganese sulfide nanoplates and their enhanced supercapacitive performance. New J Chem 43:4674–4680. https://doi.org/10.1039/c8nj06143b
Li Y et al (2019e) Bark-based 3D porous carbon nanosheet with ultrahigh surface area for high performance supercapacitor electrode material. ACS Sustain Chem Eng 7:13827–13835. https://doi.org/10.1021/acssuschemeng.9b01779
Li T et al (2020) Advances in transition-metal (Zn, Mn, Cu)-based MOFs and their derivatives for anode of lithium-ion batteries. Coord Chem Rev 410:213221. https://doi.org/10.1016/j.ccr.2020.213221
Lian M et al (2017) Hydrothermal synthesis of polypyrrole/MoS2 intercalation composites for supercapacitor electrodes. Ceram Int 43:9877–9883. https://doi.org/10.1016/j.ceramint.2017.04.171
Liang A et al (2018) Robust flexible WS2/PEDOT:PSS film for use in high-performance miniature supercapacitors. J Electroanal Chem 824:136–146. https://doi.org/10.1016/j.jelechem.2018.07.040
Liang G et al (2020) Developing high-voltage spinel LiNi0.5Mn1.5O4 cathodes for high-energy-density lithium-ion batteries: current achievements and future prospects. J Mater Chem. https://doi.org/10.1039/D0TA02812F
Lin L-Y, Lin L-Y (2017) Material effects on the electrocapacitive performance for the energy-storage electrode with nickel cobalt oxide core/shell nanostructures. Electrochim Acta 250:335–347. https://doi.org/10.1016/j.electacta.2017.08.074
Lin Y-P, Wu N-L (2011) Characterization of MnFe2O4/LiMn2O4 aqueous asymmetric supercapacitor. J Power Sources 196:851–854. https://doi.org/10.1016/j.jpowsour.2010.07.066
Lin Y et al (2013) Graphene/semiconductor heterojunction solar cells with modulated antireflection and graphene work function. Energy Environ Sci 6:108–115. https://doi.org/10.1039/C2EE23538B
Lin T-W et al (2018) Ternary composite nanosheets with MoS2/WS2/graphene heterostructures as high-performance cathode materials for supercapacitors. ChemElectroChem 5:1024–1031. https://doi.org/10.1002/celc.201800043
Liu Z et al (2005) A phenol biosensor based on immobilizing tyrosinase to modified core–shell magnetic nanoparticles supported at a carbon paste electrode. Anal Chim Acta 533:3–9. https://doi.org/10.1016/j.aca.2004.10.077
Liu JW et al (2007) Magnetic and electric properties of the colossal magnetoresistance manganite Sm1.4Sr1.2Ca0.4Mn2O7. Solid State Commun 141:341–343. https://doi.org/10.1016/j.ssc.2006.11.004
Liu X et al (2012) Nanostructure-based WO3 photoanodes for photoelectrochemical water splitting. Phys Chem Chem Phys 14:7894–7911. https://doi.org/10.1039/C2CP40976C
Liu M-C et al (2013a) Facile synthesis of NiMoO4·xH2O nanorods as a positive electrode material for supercapacitors. Rsc Adv 3:6472–6478. https://doi.org/10.1039/C3RA22993A
Liu M-C et al (2013b) Facile fabrication of CoMoO4 nanorods as electrode material for electrochemical capacitors. Mater Lett 94:197–200. https://doi.org/10.1016/j.matlet.2012.12.057
Liu S et al (2016a) Vertically stacked bilayer CuCo2O4/MnCo2O4 heterostructures on functionalized graphite paper for high-performance electrochemical capacitors. J Mater Chem A 4:8061–8071. https://doi.org/10.1039/C6TA00960C
Liu Y et al (2016b) Design of perovskite oxides as anion-intercalation-type electrodes for supercapacitors: cation leaching effect. ACS Appl Mater Interfaces 8:23774–23783. https://doi.org/10.1021/acsami.6b08634
Liu Y et al (2016c) Design, synthesis, and energy-related applications of metal sulfides. Mater Horiz 3:402–421. https://doi.org/10.1039/C6MH00075D
Liu P et al (2017a) A high-performance electrode for supercapacitors: silver nanoparticles grown on a porous perovskite-type material La0.7Sr0.3CoO3−δ substrate. Chem Eng J 328:1–10. https://doi.org/10.1016/j.cej.2017.06.150
Liu C et al (2017b) 3D porous nanoarchitectures derived from SnS/S-doped graphene hybrid nanosheets for flexible all-solid-state supercapacitors. Small 13:1603494. https://doi.org/10.1002/smll.201603494
Liu W et al (2018a) Ternary transition metal sulfides embedded in graphene nanosheets as both the anode and cathode for high-performance asymmetric supercapacitors. Chem Mater 30(1055–1068):10. https://doi.org/10.1021/acs.chemmater.7b04976
Liu S et al (2018b) Effect of cation substitution on the pseudocapacitive performance of spinel cobaltite MCo2O4 (M = Mn, Ni, Cu, and Co). J Mater Chem A 6:10674–10685. https://doi.org/10.1039/C8TA00540K
Liu Y et al (2018c) Highly defective layered double perovskite oxide for efficient energy storage via reversible pseudocapacitive oxygen-anion intercalation. Adv Energy Mater 8:1702604. https://doi.org/10.1002/aenm.201702604
Liu W et al (2018d) Synthesis of dense MoS2 nanosheet layers on hollow carbon spheres and their applications in supercapacitors and the electrochemical hydrogen evolution reaction. Inorg Chem Front 5:2198–2204. https://doi.org/10.1039/c8qi00562a
Liu MC et al (2018e) Electrostatically charged MoS2/graphene oxide hybrid composites for excellent electrochemical energy storage devices. ACS Appl Mater Interfaces 10:35571–35579. https://doi.org/10.1021/acsami.8b09085
Liu H et al (2018f) CuS/MnS composite hexagonal nanosheet clusters: synthesis and enhanced pseudocapacitive properties. Electrochim Acta 271:425–432. https://doi.org/10.1016/j.electacta.2018.03.048
Liu S et al (2018g) Large-scale synthesis of porous carbon via one-step CuCl2 activation of rape pollen for high-performance supercapacitors. J Mater Chem A 6:12046–12055. https://doi.org/10.1039/C8TA02838A
Liu Q et al (2019) 3D sandwiched nanosheet of MoS2/C@RGO achieved by supramolecular self-assembly method as high performance material in supercapacitor. J Alloys Compd 777:1176–1183. https://doi.org/10.1016/j.jallcom.2018.11.108
Liu Y et al (2020) Activation-free supercapacitor electrode based on surface-modified Sr2CoMo1−xNixO6−δ perovskite. Chem Eng J 390:124645. https://doi.org/10.1016/j.cej.2020.124645
Louca D et al (1997) Local Jahn–Teller distortion in La1–xSrxMnO3 observed by pulsed neutron diffraction. Phys Rev B 56:R8475. https://doi.org/10.1103/PhysRevB.56.R8475
Lu Y et al (2017) Nanowire-assembled Co3O4@NiCo2O4 architectures for high performance all-solid-state asymmetric supercapacitors. J Mater Chem A 5:24981–24988. https://doi.org/10.1039/C7TA06437C
Lü J et al (2015) A preliminary study of the pseudo-capacitance features of strontium doped lanthanum manganite. RSC Adv 5:5858–5862. https://doi.org/10.1039/C4RA13583K
Luo B et al (2012) Chemical approaches toward graphene-based nanomaterials and their applications in energy-related areas. Small 8:630–646. https://doi.org/10.1002/smll.201101396
Article CAS | CommonCrawl |
Evidence for X-Ray Emission in Excess to the Jet-afterglow Decay 3.5 yr after the Binary Neutron Star Merger GW 170817: A New Emission Component
https://doi.org/10.3847/2041-8213/ac504a
Hajela, A.; Margutti, R.; Bright, J. S.; Alexander, K. D.; Metzger, B. D.; Nedora, V.; Kathirgamaraju, A.; Margalit, B.; Radice, D.; Guidorzi, C.; Berger, E.; MacFadyen, A.; Giannios, D.; Chornock, R.; Heywood, I.; Sironi, L.; Gottlieb, O.; Coppejans, D.; Laskar, T.; Cendes, Y.; Duran, R. Barniol; Eftekhari, T.; Fong, W.; McDowell, A.; Nicholl, M.; Xie, X.; Zrake, J.; Bernuzzi, S.; Broekgaarden, F. S.; Kilpatrick, C. D.; Terreran, G.; Villar, V. A.; Blanchard, P. K.; Gomez, S.; Hosseinzadeh, G.; Matthews, D. J.; Rastinejad, J. C. (The Astrophysical Journal Letters, 927(1), article L17)
For the first ∼3 yrs after the binary neutron star merger event GW 170817, the radio and X-ray radiation has been dominated by emission from a structured relativistic off-axis jet propagating into a low-density medium with n < 0.01 cm⁻³. We report on observational evidence for an excess of X-ray emission at δt > 900 days after the merger. With Lx ≈ 5 × 10³⁸ erg s⁻¹ at 1234 days, the recently detected X-ray emission represents a ≥3.2σ (Gaussian equivalent) deviation from the universal post-jet-break model that best fits the multiwavelength afterglow at earlier times. In the context of JetFit afterglow models, current data represent a departure with statistical significance ≥3.1σ, depending on the fireball collimation, with the most realistic models showing excesses at the level of ≥3.7σ. A lack of detectable 3 GHz radio emission suggests a harder broadband spectrum than the jet afterglow. These properties are consistent with the emergence of a new emission component such as synchrotron radiation from a mildly relativistic shock generated by the expanding merger ejecta, i.e., a kilonova afterglow. In this context, we present a set of ab initio numerical relativity binary neutron star (BNS) merger simulations that show that an X-ray excess supports the presence of a high-velocity tail in the merger ejecta, and argues against the prompt collapse of the merger remnant into a black hole. Radiation from accretion processes on the compact-object remnant represents a viable alternative. Neither a kilonova afterglow nor accretion-powered emission have been observed before, as detections of BNS mergers at this phase of evolution are unprecedented.
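As a rough orientation for the numbers quoted above, the sketch below converts the reported X-ray luminosity into an observed flux using the standard inverse-square relation F = L / (4πd²). The ~40 Mpc distance is an assumed, commonly quoted value for GW 170817 and is not taken from the summary itself.

```python
import math

# Illustration only: map the quoted isotropic-equivalent X-ray luminosity to an
# observed flux, assuming a luminosity distance of ~40 Mpc to GW 170817.
MPC_IN_CM = 3.086e24           # centimetres per megaparsec
d_cm = 40.0 * MPC_IN_CM        # assumed distance (not stated in the summary)
L_x = 5e38                     # erg/s, value quoted in the abstract at 1234 days

flux = L_x / (4.0 * math.pi * d_cm ** 2)      # F = L / (4 * pi * d^2)
print(f"Implied X-ray flux: {flux:.1e} erg s^-1 cm^-2")   # ~2.6e-15, a very faint source
```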
Discovery and confirmation of the shortest gamma ray burst from a collapsar
Ahumada, Tomas; Singer, Leo P.; Anand, Shreya; Coughlin, Michael W.; Kasliwal, Mansi M.; Ryan, Geoffrey; Andreoni, Igor; Cenko, S. Bradley; Fremling, Christoffer; Kumar, Harsh; et al. (May 2021, arXiv.org)
Gamma-ray bursts (GRBs) are among the brightest and most energetic events in the universe. The duration and hardness distribution of GRBs has two clusters, now understood to reflect (at least) two different progenitors. Short-hard GRBs (SGRBs; T90 < 2 s) arise from compact binary mergers, while long-soft GRBs (LGRBs; T90 > 2 s) have been attributed to the collapse of peculiar massive stars (collapsars). The discovery of SN 1998bw/GRB 980425 marked the first association of a LGRB with a collapsar, and AT 2017gfo/GRB 170817A/GW170817 marked the first association of a SGRB with a binary neutron star merger, which also produced gravitational waves (GWs). Here, we present the discovery of ZTF20abwysqy (AT2020scz), a fast-fading optical transient in the Fermi Satellite and the InterPlanetary Network (IPN) localization regions of GRB 200826A; X-ray and radio emission further confirm that this is the afterglow. Follow-up imaging (at rest-frame 16.5 days) reveals excess emission above the afterglow that cannot be explained as an underlying kilonova (KN), but is consistent with being the supernova (SN). Despite the GRB duration being short (rest-frame T90 of 0.65 s), our panchromatic follow-up data confirm a collapsar origin. GRB 200826A is the shortest LGRB found with an associated collapsar; it appears to sit on the brink between a successful and a failed collapsar. Our discovery is consistent with the hypothesis that most collapsars fail to produce ultra-relativistic jets.
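As a purely illustrative aside on the "rest-frame T90" quoted above: cosmological time dilation stretches observed durations by a factor (1 + z), so the rest-frame duration is T90,obs/(1 + z). The redshift and observed duration below are assumed values chosen only to show the arithmetic; they are not taken from the summary.

```python
# Illustration only: observed vs rest-frame burst duration under time dilation.
z_assumed = 0.75          # hypothetical redshift, chosen purely for illustration
t90_observed = 1.14       # seconds, hypothetical observed duration
t90_rest = t90_observed / (1.0 + z_assumed)
print(f"Rest-frame T90 ~ {t90_rest:.2f} s")   # ~0.65 s, i.e. formally "short"
```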
Early-time searches for coherent radio emission from short GRBs with the Murchison Widefield Array
https://doi.org/10.1017/pasa.2021.58
Tian, J.; Anderson, G. E.; Hancock, P. J.; Miller-Jones, J. C.; Sokolowski, M.; Rowlinson, A.; Williams, A.; Morgan, J.; Hurley-Walker, N.; Kaplan, D. L.; et al. (January 2022, Publications of the Astronomical Society of Australia)
Many short gamma-ray bursts (GRBs) originate from binary neutron star mergers, and there are several theories that predict the production of coherent, prompt radio signals either prior to, during, or shortly following the merger, as well as persistent pulsar-like emission from the spin-down of a magnetar remnant. Here we present a low frequency (170–200 MHz) search for coherent radio emission associated with nine short GRBs detected by the Swift and/or Fermi satellites using the Murchison Widefield Array (MWA) rapid-response observing mode. The MWA began observing these events within 30–60 s of their high-energy detection, enabling us to capture any dispersion-delayed signals emitted by short GRBs for a typical range of redshifts. We conducted transient searches at the GRB positions on timescales of 5 s, 30 s, and 2 min, resulting in the most constraining flux density limits on any associated transient of 0.42, 0.29, and 0.084 Jy, respectively. We also searched for dispersed signals at a temporal and spectral resolution of 0.5 s and 1.28 MHz, but none were detected. However, the fluence limit of 80–100 Jy ms derived for GRB 190627A is the most stringent to date for a short GRB. Assuming the formation of a stable magnetar for this GRB, we compared the fluence and persistent emission limits to short GRB coherent emission models, placing constraints on key parameters including the radio emission efficiency of the nearly merged neutron stars ($\epsilon_r \lesssim 10^{-4}$), the fraction of magnetic energy in the GRB jet ($\epsilon_B \lesssim 2\times10^{-4}$), and the radio emission efficiency of the magnetar remnant ($\epsilon_r \lesssim 10^{-3}$). Comparing the limits derived for our full GRB sample (along with those in the literature) to the same emission models, we demonstrate that our fluence limits only place weak constraints on the prompt emission predicted from the interaction between the relativistic GRB jet and the interstellar medium for a subset of magnetar parameters. However, the 30-min flux density limits were sensitive enough to theoretically detect the persistent radio emission from magnetar remnants up to a redshift of $z \sim 0.6$. Our non-detection of this emission could imply that some GRBs in the sample were not genuinely short or did not result from a binary neutron star merger, the GRBs were at high redshifts, these mergers formed atypical magnetars, the radiation beams of the magnetar remnants were pointing away from Earth, or the majority did not form magnetars but rather collapsed directly into black holes.
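To make the rapid-response strategy concrete, the sketch below evaluates the standard cold-plasma dispersion delay, Δt ≈ 4.15 ms × DM × (ν/GHz)⁻², across the 170–200 MHz band used in this search. The dispersion measure is a fiducial, assumed value and is not a number reported in the summary above.

```python
# Rough illustration (not from the paper): the dispersion delay that makes a
# ~30-60 s rapid response useful at MWA frequencies.
# Delay relative to infinite frequency: dt ~ 4.15 ms * DM * (nu / GHz)**-2,
# with DM in pc cm^-3.  The DM value below is an assumed, fiducial number.

def dispersion_delay_s(dm_pc_cm3: float, freq_mhz: float) -> float:
    """Arrival-time delay (seconds) of a dispersed radio signal."""
    return 4.15e-3 * dm_pc_cm3 * (freq_mhz / 1000.0) ** -2

dm_fiducial = 500.0                  # pc cm^-3, plausible for a short GRB at z ~ 0.5 (assumed)
for freq in (170.0, 185.0, 200.0):   # edges and centre of the 170-200 MHz band
    print(f"{freq:.0f} MHz: delay ~ {dispersion_delay_s(dm_fiducial, freq):.0f} s")
# ~72 s at 170 MHz, ~61 s at 185 MHz, ~52 s at 200 MHz -> a signal emitted at
# merger time would still arrive after the telescope is already on source.
```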
Dynamical ejecta synchrotron emission as a possible contributor to the changing behaviour of GRB170817A afterglow
https://doi.org/10.1093/mnras/stab2004
Nedora, Vsevolod; Radice, David; Bernuzzi, Sebastiano; Perego, Albino; Daszuta, Boris; Endrizzi, Andrea; Prakash, Aviral; Schianchi, Federico (August 2021, Monthly Notices of the Royal Astronomical Society)
Over the past 3 yr, the fading non-thermal emission from GW170817 has remained generally consistent with the afterglow powered by synchrotron radiation produced by the interaction of the structured jet with the ambient medium. Recent observations by Hajela et al. indicate a change in temporal and spectral behaviour in the X-ray band. We show that the new observations are compatible with the emergence of a new component due to non-thermal emission from the fast tail of the dynamical ejecta of ab initio binary neutron star merger simulations. This provides a new avenue to constrain binary parameters. Specifically, we find that equal-mass models with soft equations of state (EOSs) and high-mass-ratio models with stiff EOSs are disfavoured, as they typically predict afterglows that peak too early to explain the recent observations. Models with moderate stiffness and mass ratio, instead, tend to be in good overall agreement with the data.
Production of Very Light Elements and Strontium in the Early Ejecta of Neutron Star Mergers
https://doi.org/10.3847/1538-4357/ac3751
Perego, Albino; Vescovi, Diego; Fiore, Achille; Chiesa, Leonardo; Vogl, Christian; Benetti, Stefano; Bernuzzi, Sebastiano; Branchesi, Marica; Cappellaro, Enrico; Cristallo, Sergio; et al. (January 2022, The Astrophysical Journal)
We study the production of very light elements (Z < 20) in the dynamical and spiral-wave wind ejecta of binary neutron star mergers by combining detailed nucleosynthesis calculations with the outcome of numerical relativity merger simulations. All our models are targeted to GW170817 and include neutrino radiation. We explore different finite-temperature, composition-dependent nuclear equations of state, and binary mass ratios, and find that hydrogen and helium are the most abundant light elements. For both elements, the decay of free neutrons is the driving nuclear reaction. In particular, ∼0.5–2 × 10⁻⁶ M⊙ of hydrogen are produced in the fast expanding tail of the dynamical ejecta, while ∼1.5–11 × 10⁻⁶ M⊙ of helium are synthesized in the bulk of the dynamical ejecta, usually in association with heavy r-process elements. By computing synthetic spectra, we find that the possibility of detecting hydrogen and helium features in kilonova spectra is very unlikely for fiducial masses and luminosities, even when including nonlocal thermodynamic equilibrium effects. The latter could be crucial to observe helium lines a few days after merger for faint kilonovae or for luminous kilonovae ejecting large masses of helium. Finally, we compute the amount of strontium synthesized in the dynamical and spiral-wave wind ejecta, and find that it is consistent with (or even larger than, in the case of a long-lived remnant) the one required to explain early spectral features in the kilonova of GW170817.
The effect of jet–ejecta interaction on the viewing angle dependence of kilonova light curves
https://doi.org/10.1093/mnras/stab042
Klion, Hannah; Duffell, Paul C.; Kasen, Daniel; Quataert, Eliot (January 2021, Monthly Notices of the Royal Astronomical Society)
The merger of two neutron stars produces an outflow of radioactive heavy nuclei. Within a second of merger, the central remnant is expected to also launch a relativistic jet, which shock-heats and disrupts a portion of the radioactive ejecta. Within a few hours, emission from the radioactive material gives rise to an ultraviolet, optical, and infrared transient (a kilonova). We use the endstates of a suite of 2D relativistic hydrodynamic simulations of jet–ejecta interaction as initial conditions for multidimensional Monte Carlo radiation transport simulations of the resulting viewing angle-dependent light curves and spectra starting at $1.5\, \mathrm{h}$ after merger. We find that on this time-scale, jet shock heating does not affect the kilonova emission for the jet parameters we survey. However, the jet disruption to the density structure of the ejecta does change the light curves. The jet carves a channel into the otherwise spheroidal ejecta, revealing the hot, inner regions. As seen from near (≲30°) the jet axis, the kilonova is brighter by a factor of a few and bluer. The strength of this effect depends on the jet parameters, since the light curves of more heavily disrupted ejecta are more strongly affected. The light curves and spectra are also more heavily modified in the ultraviolet than in the optical.
Factors that influence the implementation of sustainable land management practices by rural households in Tigrai region, Ethiopia
Haftu Etsay (ORCID: orcid.org/0000-0002-8116-5928), Teklay Negash & Metkel Aregay
Ecological Processes volume 8, Article number: 14 (2019)
Sustainable land management is considered one of the most useful approaches to combating the threat of various forms of land degradation in Ethiopia. Despite this, there is scant information on how households decide to implement sustainable land management practices. This paper therefore examines the determinants of the continued use and choice of sustainable land management practices by smallholder farmers, and their productivity effect, in three randomly chosen districts of Tigrai region, Ethiopia. The study uses data from a household survey and key informant interviews. It employs a binary logit model to analyze the determinants of the decision to continue using sustainable land management practices, and a multivariate probit model to analyze the simultaneous adoption of these practices, using cross-sectional data collected from 230 randomly selected households. The impact of sustainable land management practices was also evaluated using propensity score matching.
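The paragraph above names the three estimation steps (binary logit, multivariate probit, propensity score matching). The sketch below shows how the first and third of these steps could look in practice; the data, variable names, and covariates are simulated placeholders, the multivariate probit for joint adoption choices is not reproduced, and the authors' actual specification and matching algorithm may differ.

```python
# Minimal sketch of the estimation steps described above: a binary logit for the
# continued-use decision and nearest-neighbour propensity score matching for the
# productivity effect.  All data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 230                                    # sample size reported in the abstract
df = pd.DataFrame({
    "age": rng.normal(45, 10, n),          # hypothetical household covariates
    "livestock_tlu": rng.gamma(2.0, 2.0, n),
    "plot_slope": rng.uniform(0, 30, n),
})
# Simulated adoption decision and crop value (purely illustrative).
p = 1 / (1 + np.exp(-(-2 + 0.03 * df["age"] + 0.2 * df["livestock_tlu"])))
df["slm_user"] = rng.binomial(1, p)
df["crop_value"] = 5000 + 2000 * df["slm_user"] + rng.normal(0, 800, n)

# 1) Binary logit: probability of continued SLM use given household covariates.
X = sm.add_constant(df[["age", "livestock_tlu", "plot_slope"]])
logit_fit = sm.Logit(df["slm_user"], X).fit(disp=False)
df["pscore"] = logit_fit.predict(X)

# 2) Propensity score matching: match each user to the nearest non-user on the
#    fitted probability, then compare mean crop value (ATT).
users, nonusers = df[df["slm_user"] == 1], df[df["slm_user"] == 0]
matched = [nonusers.loc[(nonusers["pscore"] - ps).abs().idxmin(), "crop_value"]
           for ps in users["pscore"]]
att = users["crop_value"].mean() - np.mean(matched)
print(f"Estimated ATT on crop value (users vs matched non-users): {att:.0f}")
```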
Farming techniques, wealth status, agro-ecological variations, and plot-level characteristics were found to be associated with rural households' decisions to implement sustainable land management practices. In addition, institutional support and access to basic infrastructure influenced both the overall continued use of sustainable land management practices and households' preferences among these practices. The study also finds that the value of crop production of sustainable land management users was on average 77–100% higher than that of non-users.
The results of the current study confirm that the implementation of the various sustainable land management practices is influenced by the farming technologies deployed by rural households, agro-ecological variations, plot characteristics, and institutional support. The findings also affirm that most of the sustainable land management practices are complementary to one another, and implementing two or more of them on a given plot is strongly associated with a higher value of crop production. Such complementarity highlights that the productivity effect of a given sustainable land management practice is enhanced by the use of the others.
Land degradation has been a critical challenge for Sub-Saharan African (SSA) countries. The causes of land degradation are complex and vary from place to place. The major drivers of land degradation are generally grouped into two categories: proximate and underlying causes (Belay et al. 2015; Pingali et al. 2014). The proximate causes are more or less natural factors such as biophysical conditions, topographic and climatic conditions, and inappropriate land management practices, whereas the underlying factors are mostly anthropogenic and include population growth, land tenure, and other socioeconomic and policy-related factors (Belay et al. 2015; Pingali et al. 2014).
An FAO (2011) report shows that Africa loses over 50 tons of soil per hectare and nearly 4 million hectares of forest land annually, largely in humid and sub-humid West Africa. This evidence indicates that the natural resources of the continent have been excessively utilized, resulting in land degradation, which in turn affects the livelihoods of African farmers, as the majority of them rely on the direct use of natural resources for their very survival. The key drivers of land degradation in Africa in general, and in sub-Saharan Africa in particular, are similar to those at the global scale and include high demographic growth, weak incentive policies, poor legal and institutional frameworks, limited availability of grazing land, and poor knowledge regarding the environment (Diagana 2003; Hurni et al. 2010). Especially in countries with limited cultivable land and high population growth rates, fallow periods are no longer sufficient to allow soil fertility to be restored. Kenya, Ethiopia, Malawi, Burundi, and Rwanda are examples where crop yields have consequently fallen. In response, farmers have been forced either to bring increasingly marginal lands into cultivation or to migrate into tropical forest areas, exacerbating problems of land degradation and deforestation (FAO 2011). The economic consequences of land degradation are also severe in Eastern Africa, since nearly 65% of the population is rural and the main livelihood of about 90% of this rural population relies on subsistence-based agriculture (Kirui and Mirzabaev 2015).
The level of degradation in many SSA countries, including Ethiopia, is even more severe. Moreover, addressing the proximate and underlying causes of the prevailing land degradation problems remains a critical policy challenge for Ethiopia, since its economy relies heavily on subsistence agriculture. The major drivers of land degradation in Ethiopia include land shortage and lack of alternative livelihoods (induced by high population growth), forest clearance and high removal of vegetation cover, unsustainable cultivation practices, and overgrazing (FAO 2011). Soil erosion and deforestation are the two most severe forms of land degradation that contribute to the poor performance of the subsistence agriculture sector in Ethiopia (Bekele and Drake 2003; Bewket 2003). These land degradation problems also have far-reaching economic, social, and environmental consequences (Pender and Gebremedhin 2007). With regard to the cost of land degradation, various estimates show that it amounts to a considerable proportion of the country's national income. In Ethiopia, for instance, the cost of land degradation was about 3% of the total agricultural GDP in 1994 (Bojo and Cassells 1995). Sustainable land management is, therefore, of utmost importance to Ethiopia, where about 80% of the population is directly supported by the agriculture sector. It addresses land degradation and enhances the productive capacity of the natural resource base. In addition, in the absence of effective sustainable land management (SLM) practices, poverty is unlikely to be eradicated (von Braun et al. 2014).
A number of studies have addressed important influencing factors that explain the adoption decision behavior of smallholder farm households toward various land conservation measures. For instance, a study conducted in the north-western part of Ethiopia by Adugna and Bekele (2007) revealed that economic variables such as plot ownership, livestock holding, family size, and land-to-labor ratio influence the adoption of land conservation practices. Furthermore, the major socioeconomic factors that influence households' decisions to adopt soil and water conservation measures in the Ethiopian highlands include the sex and education level of the household head, availability of labor force, cattle holding, and off/non-farm income (Adimassu and Kessler 2012; Amsalu and de Graaff 2007; Bekele and Drake 2003). On the other hand, biophysical characteristics of plots, topography, and agro-ecological variations also influence the adoption decision of soil and water conservation and other sustainable land management practices (de Graaff et al. 2008; Miheretu and Yimer 2017). World Bank (2007) and Yirga (2007) also reported that institutional factors such as land insecurity, access to credit, proximity to all-weather roads, and market access were likely to influence the adoption of, and investments in, sustainable land management practices in Ethiopia. The adoption of SLM practices by farm households has also been hindered by wealth-related factors (von Braun et al. 2013; Bewket 2007; Genanew and Alemu 2012; Shiferaw and Holden 1998). Furthermore, Amsalu and de Graaff (2007) revealed that the adoption level of SLM practices by self-motivated farmers remains very low and has yet to bring the intended results in terms of improving the livelihoods of rural households.
With regard to the effectiveness of sustainable land management practices, mixed results have been reported, particularly regarding their impact on the crop yield of farm plots. A study by Pender and Gebremedhin (2006), for instance, reported that farm plots treated with stone terraces experience a significant yield increment. Besides, an impact evaluation study conducted at the household level in Northern Ethiopia revealed that those who introduced stone bunds on their private plots experienced a higher value of crop production compared to those who did not (Kassie et al. 2008). Nevertheless, other studies revealed that the outcome of the series of conservation measures introduced in Ethiopia, usually involving physical and biological structures such as terraces, bunds, and tree planting, among others, is less than desired (Berry 2003; Eyasu 2003). Besides, an inverse relationship between adoption of SLM practices and crop yield was found in areas characterized by high rainfall in the western part of the Amhara regional state of Ethiopia (Kassie et al. 2008).
There is also destruction of soil and water conservation structures in many parts of Ethiopia (Kassie 2009; Tadesse and Belay 2004), which poses a critical challenge to the sustainability of the already introduced land conservation measures. Such discrepancies in findings show that the impact of SLM practices on the productivity of farm plots, and their level of acceptance, vary across different landscapes and agro-ecological zones. The effectiveness of the introduced SLM practices on farmlands has been challenged by many factors, such as inappropriate implementation approaches, too much focus on technical solutions, too little focus on addressing the proximate and underlying causes of land degradation, and poor extension systems (Adimassu et al. 2016; Adimassu and Kessler 2012; Kassie 2009; Bewket 2007; Bekele 2003). Additional contributors to the ineffectiveness in terms of attaining the required results include a top-down planning methodology, lack of community input, and low implementation capacity at local levels (Tongul and Hobson 2013). There is also evidence that policy-related challenges have contributed to the failure of land conservation efforts to achieve the intended objectives in different parts of the country. For example, the findings of Nkonya et al. (2013) and von Braun et al. (2013) indicate that lack of strong policy action and a low level of evidence-based policy framework are considered to be the critical challenges for the effectiveness of SLM practices.
As reviewed earlier, despite the abundance of research on SLM and its crop productivity effect, the studies are largely oriented towards initial adoption, with little consideration of the continued use and multiple adoption of SLM practices. Most of the previous studies modeled the adoption of SLM practices as a binary choice: adopters and non-adopters. Such modeling makes it difficult to analyze the preferences of households towards various SLM practices and their simultaneous adoption decisions. Therefore, studying the simultaneous adoption behavior of farmers and the intensity of the use of SLM practices would add to the existing body of knowledge. This is true since farmers are more likely to use a combination of SLM practices to deal with the land degradation problems they face rather than adopting only a single conservation practice. The adoption decision is, therefore, explained in the form of preferences from a set of land conservation options. To this effect, a multivariate approach is employed to model the adoption decision instead of a bivariate approach, which would exclude useful information contained in the interdependent and simultaneous adoption decisions. This paper, therefore, examines the factors affecting households' decisions to implement multiple SLM practices and their productivity effect on farmlands, as this helps to better understand households' decision behavior towards land management practices on farm plots as well as the institutional and biophysical factors that affect such decisions.
The current study was conducted in three randomly selected districts (Footnote 1) of Tigrai region, namely Atsibi-wenberta, Hintalo-wajerat, and Kola-tembien, representing the highland, midland, and lowland agro-ecological zones respectively, as shown in Fig. 1. The first step in the random sampling procedure of the districts was obtaining the list of districts in the region based on their agro-ecological classification. Thereafter, one district from each agro-ecological zone (three districts in total) was randomly selected using a lottery system. The study communities, including the catchment areas within each selected district, were purposively chosen using the predefined criteria stated below. Lastly, the respondents from the selected catchments, both treated and untreated, were randomly drawn using a lottery system. The region has 34 rural districts in which sustainable land management practices have been implemented over the past few decades; the three districts were selected randomly (by lottery) so as to represent the three agro-ecological zones (highland, midland, and lowland).
Location of study sites
The study sites are spatially distributed across three districts of the region to capture heterogeneous data on both socioeconomic and plot level biophysical attributes. The study sites are also characterized by various climatic and topographic domains ranging from altitude differences to temperature and rainfall variations as well as cropping patterns.
Kola-tembien is topographically located with a range of 1501 to 2500 m above sea level. The estimated annual rainfall ranges from 500 to 800 mm, while mean annual temperature varies between 25 and 30 °C. The wereda is administratively divided into 27 tabias. The total population of the wereda is 148,282 and the total area is estimated at 147,427 ha. On the other hand, Atsibi-wenberta is subdivided into 16 administrative tabias with a total population of 112,341. The elevation of Atsibi-wenberta wereda varies significantly which ranges from 918 to 3069 m above sea level. The third study site (Hintalo-wajerat wereda) has a total population of 153,505 with 34,360 households and an area of 2864.79 km2. This wereda is situated at an altitude range of 1500 to 2540 m above sea level. In addition, the wereda is divided into 20 administrative tabias.
The farming system was observed to be a mix of livestock and crop production and is fairly similar in the three study sites. The dominant crops grown by smallholder farmers in Kola-tembien district, for instance, are teff, sorghum, maize, and finger millet. In Hintalo-wajerat district, the staple crops grown are wheat, barley, and teff. Similarly, the dominant crops grown in Atsibi-wenberta district are wheat, barley, and pulses. The livestock production system is also fairly similar across the three study weredas. It is mainly characterized by a traditional husbandry system with small per capita holdings of cattle, sheep, and goats, and to some extent poultry production. The production system for both crops and livestock is characterized by low input and low output, indicating that the farming system has remained very traditional and subsistence-oriented.
Data and sampling procedures
The current study selected three catchment areas as treated observations and another three catchment areas as control observations using multistage sampling techniques. In the first stage of the sampling procedure, the three districts were randomly selected using a lottery system from a list of all districts in the region. Thereafter, using predefined criteria (Footnote 2), a total of three tabias (Footnote 3), one from each selected district that best fit the criteria, were purposively chosen with the support of experts from the office of natural resource management of the study districts. Lastly, a model catchment area from each selected tabia was purposively selected based on the stated criteria. The list of selected tabias and catchment areas is presented in Table 1. For comparison purposes, one catchment area (Footnote 4) from each tabia, considered to be poorly conserved by SLM practices, was also selected. The target population for this study was households who introduced SLM practices on their plots in the absence of any external incentive and who also continue to maintain the conservation measures. A representative sample size was finally determined using Eq. 1, and respondents were selected through a lottery system of simple random sampling. The lottery system was implemented with the help of Microsoft Excel, which enables random numbers to be generated from the sampling frame. The distribution of the sample size across the study sites was proportionate to their relative share of the total sampling frame (target population), as shown in Table 1.
Table 1 Distribution of respondents by study tabia and catchment (districts in parentheses)
This paper is based on a survey of 230 households randomly drawn from a list of household heads in three tabias and six catchments. The required data were collected from the selected household heads using a structured questionnaire. The study prepared two separate sets of questions: one for the household survey (structured questionnaire) and one for the key informant interviews (a checklist and a few unstructured open-ended questions). The structured questionnaire was designed to elicit information on demographic, socioeconomic, infrastructure, and plot-level characteristics from the households, whereas the key informant interview (KII) was designed to gather qualitative data on the challenges of maintaining conservation structures, the benefits of sustainable land management practices, and institutional support to promote SLM.
A checklist was used to gather data from the key informants, who included natural resource management experts, development agents, and tabia leaders of the study sites. The participants of the KII were development agents and community leaders from the three tabias included in this study who are well informed and can best describe the sustainable land management practices in their localities. A pre-test survey was conducted prior to the actual survey in each study site to incorporate unforeseen variables and also for acclimatization purposes. Following this, training on the questionnaire and overall data collection was provided to the enumerators. The secondary data were obtained mainly from unpublished reports of the offices of natural resources management of the study sites.
$$ n = \frac{p(1-p)}{\frac{e^2}{Z^2} + \frac{p(1-p)}{N}} $$
where n is the sample size, N is the population size (171), Z is the standard normal value at the 95% confidence level (Z = 1.96), p is the estimated population proportion (50%), and e is the precision level (0.06). The total representative sample size was found to be approximately 115 households, and a reasonable sample size was taken from each catchment in proportion to its share of the total target population (Table 1). In addition, 115 households who did not introduce SLM measures on their plots were randomly selected as control observations. The comparison catchment areas (from which the control observations were selected) are located adjacent to the catchment areas where continued users of SLM practices reside. An attempt was also made to include catchments (for the treated observations) that had been treated at least 2 years prior to the survey, with the intention that this time lag provides adequate time for households to develop the experience needed to operate and manage SLM practices and, at the same time, to experience the benefits of the continued use of SLM practices on farmlands.
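For readers who want to reproduce this calculation, the minimal sketch below implements Eq. 1 in Python with the parameter values stated above; the exact figure obtained depends on the precision level and rounding convention used, and the catchment frame sizes shown are placeholders rather than the actual counts in Table 1.

```python
import math

def cochran_finite(N, p=0.5, e=0.06, z=1.96):
    """Sample size for a finite population, as in Eq. 1."""
    return (p * (1 - p)) / (e**2 / z**2 + p * (1 - p) / N)

def allocate_proportionally(n_total, frame_sizes):
    """Split the total sample across catchments in proportion to their
    share of the sampling frame, rounding up each allocation."""
    total = sum(frame_sizes.values())
    return {k: math.ceil(n_total * v / total) for k, v in frame_sizes.items()}

# Parameters as stated in the text; frame sizes below are hypothetical.
n = cochran_finite(N=171, p=0.5, e=0.06, z=1.96)
print(round(n))
print(allocate_proportionally(round(n), {"catchment_A": 60, "catchment_B": 55, "catchment_C": 56}))
```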
Both descriptive and inferential statistical methods of data analysis were employed. In particular, means, standard deviations, t tests, and chi-square tests were used to analyze the data collected from the sample households. Binary logit and multivariate probit models were used to analyze the drivers of households' decisions on the continued use of SLM practices and on the choice among SLM practices, respectively. Propensity score matching was also deployed to evaluate the impact of the introduced SLM practices on the value of crop production. The data collected from the key informants were qualitatively analyzed using content analysis, and the results are integrated with the empirical (quantitative) results, as the main objective of their inclusion is to support the empirical findings with a qualitative approach.
Binary logit model
The determinant factors for the continued use of SLM practices were estimated using a binary logit regression. Following Garson (2008), in which maximum likelihood estimation is applied after transforming the dependent variable into a logit, households were classified into a binary outcome, continued user and non-user, based on their past experiences with SLM practices (Table 2). The dependent variable, which is the natural log of the odds (logit), is binary as shown in Eq. 2. Households whose farm plot(s) are well conserved and regularly maintained with the introduced terraces and other modern conservation measures were considered continued users in this analysis. On the other hand, households who are reluctant to maintain the conservation structures previously introduced through project assistance or mass mobilization were labeled as non-continued users. The binary choices in this case are households that adopted and are also continuously maintaining the introduced terraces (Y = 1) and households that had removed, or are reluctant to maintain, conservation measures built in the past (Y = 0).
$$ \ln\left(\frac{p}{1-p}\right) = a + bx, \qquad p = \frac{e^{a+bx}}{1+e^{a+bx}} $$
Table 2 Classification of SLM practices implemented on farmlands
Where P denotes the probability of the event occurring,
Xi denotes the independent variables,
e is the base of the natural logarithm, and
a and b are the parameters of the model.
A dummy variable Y was used to identify whether each sampled household is a continued user of SLM practice or not.
Y = 1 for the continued user and Y = 0 otherwise
Xi denotes the independent variables (explanatory variables that might affect the households' decision to continually use SLM techniques).
The reduced form used in this logistic regression model is shown in Eq. 3.
$$ Y = \ln(\mathrm{odds}(\mathrm{event})) = \ln\left(\frac{\mathrm{prob}(\mathrm{event})}{\mathrm{prob}(\mathrm{nonevent})}\right) = \ln\left(\frac{\mathrm{prob}(\mathrm{event})}{1-\mathrm{prob}(\mathrm{event})}\right) = b_0 + b_1X_1 + b_2X_2 + b_3X_3 + \dots + b_nX_n + \varepsilon_i $$
where b0 is the constant and Y denotes continued use of SLM technologies (Y = 1 if a household chooses to continually practice SLM technologies, 0 otherwise).
b1…bn are the estimated coefficients, and εi is an error term
X1…Xn = vectors of explanatory variables included in the model
The full list of explanatory variables included (X1…Xn) in the binary logit and multivariate probit models along with their descriptions are presented in Table 3. It is important to note that some of the classifications regarding the households' perception towards plot level attributes could be relatively weak due to subjectivity of respondents, such as categorization of soil fertility into good, medium, and poor as well as slope (steep, medium and gentle). We suspect it can have some influence on the precision level of the results.
Table 3 Description of explanatory variables included in the binary logit and MVP models
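As an illustration of how a model of the form of Eq. 3 can be estimated in practice, the sketch below fits a binary logit with statsmodels and converts the coefficients to odds ratios; the file name and column names are hypothetical stand-ins for the survey data and the variables in Table 3, not the actual dataset used in this study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per household; 'continued_user' is the 0/1 outcome and the
# remaining columns are hypothetical names for Table 3 variables.
df = pd.read_csv("slm_household_survey.csv")

X = sm.add_constant(df[["labor_force_ae", "plot_size_tsimad",
                        "input_expenditure_etb", "dist_plot_km",
                        "dist_extension_km"]])
y = df["continued_user"]

logit = sm.Logit(y, X).fit()
print(logit.summary())

# Odds ratios, i.e., exp(coefficient), the quantities interpreted in Table 6.
print(np.exp(logit.params))
```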
Multivariate probit
Following Cappellari and Jenkins (2003), the current study used a multivariate probit model to analyze the determinants of the choice of SLM practices using Eq. 4. The multivariate probit (MVP) (Footnote 5) was chosen on the premise that farmers use a combination of SLM practices, rather than relying on a single conservation practice, to reduce their land degradation problems, and that the SLM options can be complements or substitutes for one another (Kassie et al. 2013; Teklewold et al. 2013).
The current study grouped the various sustainable land management options implemented on farm plots into four major categories, because the large number of individual SLM practices makes it difficult to analyze farmers' choices among them separately. The details of the grouping, along with descriptions, are presented in Table 2.
$$ y_{im}^{\ast} = \beta_m x_{im} + \varepsilon_{im}, \qquad y_{im} = 1 \ \text{if} \ y_{im}^{\ast} > 0 \ \text{and} \ 0 \ \text{otherwise} $$
Equation 4 is based on the assumption that a rational ith farmer has a latent variable $y_{im}^{\ast}$ which captures unobserved preferences associated with the mth choice of SLM measures (m indexes the four SLM practices used in this study); $\beta_m$ is the set of parameters that reflect the impact of changes in the vector of explanatory variables $x_i$ on the farmer's preference toward the mth SLM practice; $x_{im}$ represents the vector of observed variables that are expected to explain each type of SLM practice; and $\varepsilon_{im}$ represents error terms following a multivariate normal distribution, each with a mean of zero and a variance-covariance matrix with values of 1 on the leading diagonal and non-zero correlations as off-diagonal elements.
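The paper estimates Eq. 4 by simulated maximum likelihood following Cappellari and Jenkins (2003), which is a Stata routine; purely as a lighter-weight illustration of the latent-variable structure, the sketch below simulates correlated latent utilities with NumPy and shows how the four binary adoption indicators arise. The covariates, coefficients, and correlation matrix are assumed values for demonstration, not estimates from this study.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 230, 4                        # households, SLM practice groups (Table 2)
x = rng.normal(size=(n, 3))          # hypothetical explanatory variables
beta = rng.normal(size=(3, m))       # one coefficient vector per practice

# Errors are multivariate normal with unit variances and non-zero
# off-diagonal correlations (assumed values, not estimated ones).
corr = np.array([[ 1.00, 0.30, 0.20, -0.20],
                 [ 0.30, 1.00, 0.25, -0.15],
                 [ 0.20, 0.25, 1.00, -0.10],
                 [-0.20, -0.15, -0.10, 1.00]])
eps = rng.multivariate_normal(np.zeros(m), corr, size=n)

y_star = x @ beta + eps              # latent preferences (Eq. 4)
y = (y_star > 0).astype(int)         # observed adoption of each practice
print(y.mean(axis=0))                # share adopting each SLM category
```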
The present study took a closer look at the impact of the introduced conservation practices on the value of crop production at the household level, since the ultimate objective of conserving private farm plots is to enhance their productivity. To this end, propensity score matching was used to compute the impact of the SLM measures on the value of production for users compared with non-users. The study employed four matching algorithms, namely nearest neighbor, radius, kernel, and stratification, to evaluate household- and plot-level impacts of the introduced SLM practices on the value of crop production. Monetary value was used as a standard unit to measure the impact on crop yield, as households cultivate more than one crop, which makes it difficult to see the effect on the aggregate physical quantity of crop yield. We denote the outcome of continued users of SLM practices as Y1 and of non-continued users as Y0, whereby the impact of SLM practices is the difference in the value of crop production between continued and non-continued users (Δ = Y1 − Y0). The treatment D is a binary variable indicating whether a household is a continued SLM user or not: D = 1 for continued SLM user households and D = 0 otherwise. We then estimate the average impact of the SLM practices on the value of crop production, known in the propensity score matching (PSM) literature as the average treatment effect on the treated (ATT), using Eq. 5.
$$ \mathrm{ATT} = E\left(\Delta \mid p(x), D=1\right) = E\left(y_1 \mid p(x), D=1\right) - E\left(y_0 \mid p(x), D=0\right) $$
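As a minimal sketch of the matching step, the code below estimates propensity scores with a logistic regression and computes the ATT with nearest-neighbor matching only (the radius, kernel, and stratification estimators reported in Table 9 are not reproduced here); the file and column names are assumed placeholders, and scikit-learn is assumed to be available.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("slm_household_survey.csv")          # hypothetical file
covs = ["labor_force_ae", "plot_size_tsimad", "tlu", "asset_value_etb"]
D = df["continued_user"].to_numpy()                    # 1 = continued SLM user
Y = df["crop_value_etb"].to_numpy()                    # value of crop production

# Step 1: propensity scores p(x) = Pr(D = 1 | x).
ps = LogisticRegression(max_iter=1000).fit(df[covs], D).predict_proba(df[covs])[:, 1]

# Step 2: one-to-one nearest-neighbor matching on the propensity score,
# with replacement, pairing each treated household with its closest control.
treated, control = np.where(D == 1)[0], np.where(D == 0)[0]
matches = control[np.argmin(np.abs(ps[treated][:, None] - ps[control][None, :]), axis=1)]

# Step 3: ATT = mean outcome gap between treated units and their matches (Eq. 5).
att = np.mean(Y[treated] - Y[matches])
print(f"ATT (ETB per household): {att:.0f}")
```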
Description of the respondents
The mean and percentage values of the socioeconomic and demographic characteristics of the surveyed households are presented in Table 4. The two-sample t tests confirmed a significant difference in asset holding and livestock ownership, measured in tropical livestock units (TLU), between continued users and non-users of SLM practices. This indicates that farmers with relatively higher asset and livestock holdings are more likely to adopt SLM practices than those with smaller holdings. On the other hand, the majority of sociodemographic attributes of the two groups, such as age, sex composition, level of educational attainment, family size, and land holding, show no statistically significant differences. Male-headed households accounted for about 78.3% of the total respondents, while female-headed households accounted for about 21.7%, with no significant difference between SLM users and non-users (p = 0.7). The average family size of the surveyed households was six, with no significant difference between users and non-users.
Table 4 Description on the profile of surveyed households
The average ages of continued-user and non-user respondents were 45.5 and 44.5 years respectively, while the average for all respondents was 45 years. The average total asset value of respondents was 67,135.5 ETB, with a statistically significant difference (p < 0.01) between the continued users and non-users of SLM practices (Table 4). About 80.1% of the total respondents were married, and the remaining 19.9% were divorced, widowed, or single. The average years of schooling was 3, with no significant difference between the two groups. On average, the household heads of the surveyed respondents attended 3 years of schooling, which indicates that the majority of them can at least read and write. Table 4 also shows that there was a statistically significant difference (p < 0.05) in cattle holding between the continued users and non-users of SLM practices, at 4.7 and 3.4 TLU respectively, while the average for the total sample was 4. This implies that households with larger livestock holdings are more willing to continually use SLM practices than those with relatively smaller holdings. This might be because some of the conservation practices introduced on farmlands, such as grasses and forage trees, can be a source of feed for livestock. The socioeconomic description also shows that the average land holding of households is roughly three tsimad, with no significant difference between the two groups. This further indicates that land size is not associated with a household's decision to continually use SLM practices in the study area.
SLM practices implemented on farmlands
Table 5 presents the level of participation of the respondents in the sustainable land management practices across the study sites. In Atsibi-wenberta wereda, for instance, the majority of the respondents (80%) implemented physical soil and water conservation such as stone bunds and terraces with small trenches, while 43% of the surveyed households used agronomic measures, mainly manure application. As also evidenced in the same table, 25.5% of the respondents implemented more than one SLM practice on the same plot. However, in this wereda, only a small proportion (2.5%) of the respondents implemented agroforestry on their farmlands. Similarly, in Hintalo-wajerat wereda, households who introduced physical structures, agroforestry, and agronomic measures account for 20.2, 2, and 40.5% of the total respondents, respectively. In this wereda, households tend to use more than one SLM practice compared to the other two weredas, i.e., 35.7% of the surveyed households implemented two or more conservation practices (Table 5). The households in Kola-tembien showed much interest in implementing agronomic practices such as manure application (67% of the respondents).
Table 5 SLM practices implemented by households on farmlands
The figures depicted above give useful insights into the types of SLM practices implemented in the different agro-ecological zones. For instance, households in the highlands tend to practice stronger conservation measures, mainly physical soil and water conservation, compared to those in the lowlands. This might be because the topography of the highlands is full of rugged terrain where acute soil erosion is evident as a result of excessive runoff. In the lowlands, where the land is dominantly flat, agronomic conservation is the prioritized conservation approach. The result also shows that a significant proportion of the total respondents implement at least two conservation practices on a given plot, which of course is very important for augmenting farmland productivity since one conservation practice complements the other. Nonetheless, the use of agroforestry practices on farmlands by the respondents of all study sites remains very low, as depicted in Table 5.
Factors affecting the continued use of SLM practices
The binary logistic regression of the present study confirms that the model fits well and is highly significant (Prob > chi2 = 0.001). Furthermore, the Hosmer-Lemeshow goodness-of-fit test also fails to reject the null hypothesis, which signals that the model fits the data (Table 6). The results of the binary logit regression show that 10 out of the 22 variables included in the model significantly affected the continued use of SLM practices by rural households. Households' resource endowments, mainly availability of labor force, land holding, crop production, and farm input utilization, were found to influence the continued use of SLM practices. Besides, plot-level characteristics such as soil fertility status, slope of plots, and location of the plot influence the continued use of SLM practices. In particular, the study shows that the uptake of farm inputs, plot location, and distance to agricultural extension services are the most important predictors for the continued use of SLM practices in the study sites; their odds ratios are interpreted in the subsequent paragraphs.
Table 6 Binary logit results on determinants for the continued use of SLM practices
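For readers less familiar with odds ratios, the relation used throughout this section is simply the exponentiated logit coefficient; the worked example restates the labor force result reported below rather than introducing new numbers.

$$ \mathrm{OR}_k = e^{b_k}, \qquad \left.\frac{p}{1-p}\right|_{x_k + 1} = \mathrm{OR}_k \times \left.\frac{p}{1-p}\right|_{x_k} $$

Thus an odds ratio of 1.2 for labor force means that each additional adult-equivalent multiplies the odds of continued SLM use by 1.2, holding the other covariates constant.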
The availability of labor force was found to have a significant positive influence on farmers' decision to continuously use conservation measures on private farm plots. Table 6 shows that as the labor force increases by one person (adult equivalent), the odds that a household continually conserves its plots increase by a factor of 1.2 (p < 0.02).
The effect of the size of farm plots owned by a household on the decision to conserve plots was statistically significant (p < 0.1). An increase in the size of a farm plot by one tsimad reduces the odds of a household continuously conserving his/her plots by a factor of 0.77 (Table 6).
The amount of modern farm input utilization, particularly fertilizers, was also positively associated with the continued use of SLM practices by self-motivated farm households. As the expenditure on farm inputs increases by one Ethiopian Birr (the currency of Ethiopia), the odds of a farm plot being conserved increase by a factor of 1.0 (p < 0.01), as shown in Table 6. The location of plots, particularly their proximity to the residence of the household, also has an effect on the continued use of SLM practices by rural households. Plots located near the residence of the owners were found to have a higher chance of being conserved. Table 6 shows that the odds in favor of conserving a plot decrease by a factor of 0.976 for each 1 km increase in the distance between the house and the farm plot (p < 0.01).
The result of this study indicates that the continuity of SLM practices varies across the study sites (agro-ecological zones). The households in Kaal-amin (highland) and Hintalo (midland) were more likely to continually use various conservation measures compared to households in Begashka (lowland). The odds of continued use of SLM practices in Kaal-amin and Hintalo were higher by factors of 3.68 and 4.5, respectively, compared to those in Begashka (Table 6).
The distance between households' residences and the extension service office was used as a proxy variable to analyze the association between extension services and the continued use of sustainable land management practices. The binary logit result shows that the odds in favor of continued use of SLM practices decrease by a factor of 0.979 for each 1 km increase in the distance between the extension office and the household's residence (p < 0.01).
Determinants for the choices of SLM practices
The estimated correlations among the dependent variables (the four SLM practices) show that there is interdependence among the SLM practices implemented by rural households (Table 7). For instance, there is a negative correlation between indigenous conservation and the remaining three land conservation types (physical, agroforestry, and agronomic), which implies that the former can be substituted by the latter. In contrast, a positive correlation was found among physical, agroforestry, and agronomic practices, which attests to their complementarity (Table 7). It is also important to note that a farmer can introduce multiple SLM practices on a given plot. For this reason, the study adopted a multivariate probit model, and the results are presented in Table 8. The availability of labor force is shown to have a positive influence on the choice of physical conservation, significant at the 10% level but not at 5% or less, whereas it was negatively associated with the use of agroforestry practices (p < 0.05) (Table 8). More specifically, households with a greater labor force tend to prefer physical soil and water conservation measures such as terraces but are less interested in agroforestry practices.
Table 7 Correlation coefficient among the four SLM practices
Table 8 Coefficient estimates of the multivariate probit model (p values in parentheses)
The preference towards the physical conservation practice was found to be influenced by the size of farm plots operated by smallholder farmers. Table 8 shows that households who operate relatively larger plot size were more likely to practice physical conservation structures (p < 0.05) and less likely to practice indigenous conservation measures (p < 0.05).
The utilization of farm input was found to be very important in terms of explaining the choice of households towards various sustainable land management practices. It was found to positively influence the choice of smallholder farmers toward practicing physical conservation structures, agroforestry practice, and agronomic practice but with different levels of significance. Households who spend more money to acquire inputs are more likely to prefer physical conservation practices, agronomic practices (p < 0.1), and agroforestry practices as well (p < 0.05). On the other hand, households with less farm input expenditure were found to choose more of indigenous conservation measures (p < 0.05).
Households who practice zero grazing were found to choose physical conservation structures compared to households who practice free grazing (p < 0.01) as shown in Table 8. The study also finds that households with irrigation access are in favor of implementing agroforestry practice. The positive association between irrigation access and the use of agroforestry practices shows that farmers are more interested to grow multipurpose trees, which are perennial, on their plots if they have access to irrigation water.
The study explored the role of agro-ecological variations in the preferences of smallholder households towards the sustainable land management practices. For this purpose, the study sites were purposively chosen from the three agro-ecological zones (highland, midland, and lowland) not only to ensure data heterogeneity but also to predict its influence. Physical conservation measures were found to be more preferred (p < 0.01), and indigenous conservation measures less likely to be practiced (p < 0.01), in the lowlands (Kola-tembien district) than in the midlands. Moreover, households in highland areas (Atsibi-wenberta district) are more likely to prefer physical conservation measures (p < 0.01) and less interested in agroforestry (p < 0.1) and indigenous conservation practices (p < 0.01) (Table 8).
Plot-level characteristics, mainly slope, soil type, and soil quality, were included in the model to explain their association with rural households' choices among the various SLM practices. Plots characterized by a gentle slope were found to be treated more with agroforestry (p < 0.05) and agronomic measures (p < 0.05) and were less likely to be conserved with physical conservation structures (p < 0.1) as compared to plots with a steep slope (Table 8). The same can also be said for plots characterized by a medium slope as compared to plots with a steep slope, except for the differences in the level of significance.
Table 8 shows that households located far from the farmers' training center are more likely to practice indigenous conservation options (p < 0.01) and less likely to implement physical conservation practices (p < 0.05). Access to credit also influences farmers' choice of SLM practices, as it carries a positive coefficient for the use of agroforestry (p < 0.1) and agronomic practices (p < 0.05). In contrast, access to credit was found to have a negative influence on introducing physical conservation structures (p < 0.05) and indigenous conservation practices (p < 0.1).
Impact of SLM practice on crop production
The PSM result presented in Table 9 shows that the SLM practices introduced on farm plots have a significant influence on the productivity of farmlands. The annual value of crop production (Footnote 6) of continued users of the SLM practices was on average higher by ETB 17,199, 25,501, 23,450, and 16,457 using the nearest neighbor, radius, kernel, and stratification methods, respectively, as compared to non-continued users. This means that the continued users of SLM practices achieved, on average, annual benefits 77% to 100% higher than the non-continued users of SLM practices.
Table 9 Impacts of SLM practices on the crop production at household level
The empirical findings of the current study show that farmers' decision towards the continued use and the preferences to the SLM practices are influenced by various factors. The significant predictors that explain the continued use and choice of SLM practices are discussed as follows.
Availability of labor carried a positive coefficient for the continued use of SLM practices, which indicates that households with a larger family size are relatively more willing to continue using SLM practices. Such households can, therefore, allocate enough labor to sustain the conservation measures by carrying out maintenance work and even by regularly introducing new conservation practices. However, there are cases where the majority of household members are dependents, mainly children and elders. In such conditions, households with a large family size but a small labor force may tend to allocate much of their time to generating daily income, such as off/non-farm income, to cover their daily subsistence instead of investing their time and labor in conservation, since the benefits from conserving their plots are not realized immediately. The result of the current study is consistent with the findings of Wagayehu and Drake (2003) and Pender et al. (2001), who reported that in a family with a greater number of mouths to feed, much attention is given to immediate food requirements and less attention to soil conservation activities on the farmlands.
The availability of labor force also determines the preference of households toward the SLM options. Households with a larger labor force were found to choose physical conservation over the other SLM practices. The positive effect of an abundant labor force on choices in favor of physical conservation structures, particularly terraces and bunds, is probably due to the fact that physical conservation practices usually demand a substantial labor force and are labor-intensive.
This finding was substantiated by the fact that about 75% of the respondents stated that the labor-intensive nature of the soil and water conservation structures hinders its adoption and continued use. The same view was also stressed by the key informants by describing some of the SLM practices as very tiresome. However, the negative effect of labor force on the choice to agroforestry may be attributed to the less favorable environment to introduce agro-forestry practices on private plots. Farmers may refrain from implementing agroforestry over the other conservation measures such as physical and agronomic conservation practices. This is in agreement with the findings from different regions of Ethiopia and other developing regions (Asrat et al. 2004; Clay et al. 1998; Gebremedhin and Swinton 2003; Jara-Rojas et al. 2012; Pender and Gebremedhin 2007) who reported a positive relationship between availability of labor force and continued use of stone bunds and other terraces.
Plot size (crop field) was found to negatively influence households' decisions on the overall continued use of SLM practices. A household that operates a larger area of farmland needs considerable labor and time to keep the introduced conservation measures well maintained and also to improve the fertility status of the farmland (for compost and manure application). These activities may demand a significant labor force and place a heavy burden on farm households, since they are busy with various farm and other social activities, and such pressure may force them to discontinue the use of some of the sustainable land management practices. Regarding preferences, households who possess a larger plot size favored physical conservation over the other conservation measures. The positive correlation between large plots and choosing physical conservation measures is probably because most physical SLM practices take proportionally more space on small plots, and the benefit from conservation on such plots may not be enough to compensate for the decline in production due to the loss of area devoted to conservation structures. Similar results have been reported from other regions of the country (Bekele and Drake 2003; Birhanu and Meseret 2013; Enki et al. 2001; Mengstie 2009; Tesfaye et al. 2014; Teshome 2014) in which plot size was negatively associated with the implementation of soil and water conservation structures.
Farming systems were also found to be very instrumental in determining the continued use of, and preferences towards, the set of SLM practices. The results of the current study suggest that aspects of the farming system, mainly the amount of farm inputs deployed and the practice of zero grazing, are very helpful for the continued use and choice of SLM practices. The highly significant influence of farm input expenditure on continued use signifies the complementarity between modern farm input application and the other SLM practices implemented at the plot level. Farm input utilization, mainly the application of chemical fertilizer and improved seeds, is usually supported by soil and water conservation so as to boost crop yield. Furthermore, the positive association between zero grazing and the overall continuity of SLM practices is among the most interesting findings of this study. The zero-grazing policy adopted by the government of Ethiopia could directly help in promoting SLM practices since it limits the mobility of livestock in the conserved areas, which could otherwise destroy the introduced SLM practices. In line with the findings of the current study, Kassie et al. (2008) reveal that households who practice zero grazing tend to continually use SLM practices. Regarding the effect of modern farm input utilization on the choice of SLM practices, the results imply that households who spend more to acquire farm inputs are also willing to use physical structures, agronomic practices, and agroforestry practices. This means that the effect of farm input utilization on the preference among the set of SLM practices is indifferent, which also means that the practices are equally preferred by such households.
This study shows mixed results regarding the influence of plot-level characteristics on both the continued use and the choice of SLM practices by rural households. For instance, the topographic location of plots was found to have an influence on the choice of which type of SLM practice to deploy, but shows no significant influence on the continued use of SLM practices. The results also indicate that plots located on flat and moderately flat topography are less likely to be conserved using physical conservation structures compared to farmlands located on steep topography. This means that farm plots with gentle and medium slopes are less likely to be treated with physical conservation structures compared to plots with steep slopes. This is probably because plots characterized by a steep slope are more vulnerable to soil erosion emanating from the high speed of runoff over the rugged terrain. In order to deter such erosion problems, farmers may prefer physical soil and water conservation structures, particularly bunds and terraces. Moreover, the relatively gentle and medium-slope farm plots were found to be more likely to receive agronomic and agroforestry conservation, which signals that conservation-based agriculture is practiced in fairly flat and undulating locations. The current results parallel the finding of Kassie et al. (2008), who reported that conservation-based agricultural practices are mostly implemented on plots with moderate and gentle slopes.
The positive influence of the proximity of farm plots to the owner's residence on the continued use of SLM indicates that plots located near the residence have a higher chance of frequent visits and follow-up, and thereby a higher chance of being treated with conservation structures regularly. Previous studies in this regard have reported mixed results. Amsalu and de Graaff (2007) and Kassie et al. (2009) asserted that practicing soil and water conservation measures was positively associated with the distance of plots to the residence. This is consistent with the findings of the current study. In contrast, a negative association between distance of plots and adoption of SLM practices, mainly agronomic conservation practices, has been reported by Benin (2006), Mengstie (2009), Pender and Gebremedhin (2007), and Teklewold et al. (2013).
The influence of access to infrastructure, particularly credit, extension services, and irrigation facilities, was found to be very effective in explaining households' preferences towards SLM practices but not the overall continued use. The positive effect of credit access on the choice of agroforestry and agronomic practices implies that farmers tend to allocate borrowed money to buy inputs such as improved varieties of fruit trees for agroforestry, improved seed of cereal crops, and fertilizers. Besides, the positive association between agroforestry and access to irrigation signals that, most of the time, perennial crops/fruit trees are grown for agroforestry purposes, and these need to be supplemented by irrigation during the long dry season in the study sites. Other studies in this regard reported similar results. In Chile, for instance, access to credit positively affected the use of soil and water conservation activities (Jara-Rojas et al. 2012). Similarly, extension services have a positive influence on the continuity of SLM on individual farm plots in central Ethiopia (Moges and Taye 2017; Bonger et al. 2004), which parallels the results of the current study.
Looking at the productivity impact of SLM practices, the present study finds a significant variation in the value of crop production between continued and non-continued users of SLM practices. The PSM results show that the value of crop production of SLM users was, on average, 77–100% higher than that of non-continued users. The descriptive statistics also substantiate this finding. The average crop yield (2016/17 production year) of SLM user households was 14 quintals/household, while it was 10.5 quintals for the non-continued users on average. It was observed that the average crop yield of continued users of SLM practices was 33.3% higher than that of non-continued users. Introducing more than one conservation measure on a given farm plot was also associated with a higher crop yield. For instance, it was found that the average crop yield (2016/17 production year) of households who practice multiple land conservation was 15 quintals/household, which is significantly higher than the average yield of all SLM user respondents. This means that the crop yield of households who practiced multiple SLM was higher by 42.8% compared to the non-continued users of SLM practices. Such a considerable yield increase gives the impression that the productivity effect of one conservation measure is enhanced by the use of the others, which in turn confirms the presence of complementarity among the SLM practices. The strong positive association between crop yield and continued use of SLM practices could perhaps be due to the fact that households who produce more are ready to invest in the conservation of farm plots to keep productivity as high as possible. In addition, the benefits from farmland conservation may be enough to compensate for the costs incurred in implementing some of the SLM practices. The result of the current study is consistent with the findings of Kassie et al. (2008), who reported a significant crop yield increment as a result of introducing soil and water conservation practices on farm plots.
The results of the current study confirm that the implementation of the various sustainable land management practices is influenced by the farming technologies deployed by rural households, agro-ecological variations, plot characteristics, and institutional support. The findings further affirm that most of the SLM practices are complementary to one another, and practicing two or more SLM practices on a given plot is strongly associated with a higher value of crop production. Such complementarity highlights that the productivity effect of a given SLM practice is enhanced by the use of the others. This in turn provides an incentive for the multiple use of SLM practices on farm plots. More importantly, a considerable increase in the value of crop production was observed on plots treated with multiple SLM practices. This may also provide a considerable incentive for rural households to conserve their plots.
The findings of this study suggest that the implementation approaches for SLM practices should be as diverse as the farming techniques, household attributes, and plot-level features, so that the SLM practices can be integrated into the day-to-day farming operations of households. This will eventually create self-motivated individuals who persistently conserve their farmlands even in the absence of public support for the costs of SLM implementation.
District and wereda are used interchangeably throughout the paper. District is synonymous with what is locally known as wereda, the second-smallest administrative unit in Tigrai, Ethiopia
The first criterion for selecting the study catchments was the size of farm plots that are well treated with various SLM practices. The second criterion was the experience and availability of self-motivated farmers who continue to maintain terraces on their private plots
The smallest administrative unit in Tigrai region in rural settings
Households for control observations were randomly drawn from other three selected adjacent catchments in which plots are poorly treated from respective study sites.
To better understand the nature and application of the multivariate probit model, including its application to preferences for SLM practices, see Dorfman (1996), Greene (2003), Aurier and Mejia (2014), and Cappellari and Jenkins (2003)
The value of crop production was computed by multiplying the total produce of each crop by its prevailing local market price.
Adimassu Z, Kessler A (2012) Farmers' investments in land management practices in the central Rift Valley of Ethiopia. In: Paper presented to the 8th international symposium agro environ 1–4 may 2012, Wageningen, Netherlands
Adimassu Z, Langan S, Johnston R (2016) Understanding determinants of farmers' investments in sustainable land management practices in Ethiopia: review and synthesis. Environ Dev Sustain 18:1005–1023
Adugna G, Bekele W (2007) Determinants of land degradation in the Lake Tana Basin and its implications for sustainable land management: the case of Angereb and Gish-Abbay watersheds
Amsalu A, de Graaff J (2007) Determinants of adoption and continued use of stone terraces for soil and water conservation in an Ethiopian highland watershed. Ecol Econ 6:294–302
Asrat P, Belay K, Hamito D (2004) Determinants of farmers' willingness to pay for soil conservation practices in the southeastern highlands of Ethiopia. Land Degrad Dev 15:423–438
Aurier P, Mejia V (2014) Multivariate Logit and Probit models for simultaneous purchases: presentation, uses, appeal and limitations. https://doi.org/10.1177/2051570714535531
Bekele E (2003) Causes and consequences of environmental degradation in Ethiopia. In: Gedion A (ed) Environment and environmental change in Ethiopia. Consultation Papers on Environment No. 1. Forum for Social Studies, Addis Ababa, pp 24–31
Bekele W, Drake L (2003) Soil and water conservation decision behaviour of subsistence farmers in the eastern highlands of Ethiopia: a case study of the Hunde-Lafto area. Ecol Econ 46:437–451
Belay KT, Van Rompaey A, Poesen J, Van Bruyssel S, Deckers J, Amare K (2015) Spatial analysis of land cover changes in eastern Tigray (Ethiopia) from 1965 to 2007: are there signs of a Forest transition. Land Degrad Dev. https://doi.org/10.1002/ldr.2275
Benin S (2006) Policies and programmes affecting land management practices, input use, and productivity in the highlands of Amhara region, Ethiopia. In: Pender J, Place F, Ehui S (eds) Strategies for sustainable land management in the East African highlands. International Food Policy Research Institute, Washington, DC
Berry L (2003) Land degradation in Ethiopia: its extent and impact. A study commissioned by the GM with WB support
Bewket W (2003) Towards integrated watershed management in Highland Ethiopia: the Chemoga watershed case study. Tropical Resource Management Papers, No. 44. Wageningen University, Wageningen, Netherlands
Bewket W (2007) Soil and water conservation intervention with conventional technologies in northwestern highlands of Ethiopia: acceptance and adoption by farmers. Land Use Policy 24:404–416
Birhanu A, Meseret D (2013) Structural soil and water conservation practices in Farta District, North Western Ethiopia: an investigation on factors influencing continued use. Science, Technology and Arts Research Journal 2(4):114–121
Bojo J, Cassells D (1995) Land degradation and rehabilitation in Ethiopia: a reassessment. The World Bank, Washington, DC
Bonger T, Ayele G, Kumsa T (2004) Agricultural extension, adoption, and diffusion in Ethiopia, Ethiopian Development Research Institute (EDRI) Research Report, No. 1. EDRI, Addis Ababa
Cappellari L, Jenkins SP (2003) Multivariate probit regression using simulated maximum likelihood. Stata J 3:278–294
Clay DC, Reardon T, Kangasniemi J (1998) 'Sustainable intensification in the highland tropics: Rwandan farmers' investments in land conservation and soil fertility. Econ Dev Cult Chang 46(2):351–378
Diagana B (2003) Working paper on Land Degradation in Sub Saharan Africa: What Explains the Widespread Adoption of Unsustainable Farming Practices. Montana State University
Dorfman JH (1996) Modeling multiple adoption decisions in a joint framework. Am J Agric Econ 78:547–557
Enki M, Kassa Belay K, Dadi L (2001) Determinants of adoption of physical soil conservation measures in central highlands of Ethiopia the case of three districts of North-Shewa. Agricultural Economics Research, Policy and Practice in Southern Africa 40(3):293–315
Eyasu E (2003) National assessment on environmental roles of agriculture in Ethiopia. Unpublished Research Report Submitted to EEA, Addis Ababa
FAO (2011) Sustainable land management practices. Available at: www.fao.org/docrep/014/i1861e/i1861e.pdf. Accessed September 2017
Garson D (2008) Logistic regression. North Carolina State University. Available at: http://www2.chass.ncsu.edu/garson/PA765/logistic.htm. Accessed Aug 2017
Gebremedhin B, Swinton S (2003) Investment in soil conservation in northern Ethiopia: the role of land tenure security and public programs. Agric Econ 29:69–84
Genanew BW, Alemu M (2012) Investments in land conservation in the Ethiopian highlands: a household plot-level analysis of the roles of poverty, tenure security, and market incentives. Discussion Paper Series. Environment for Development Available at: http://www.efdinitiative.org/sites/default/files/efd-dp-10-09.pdf
Gesellschaft für Internationale Zusammenarbeit (GIZ) (2014) Lessons and experiences in sustainable land management. GIZ Ethiopia, Addis Ababa, Ethiopia
de Graaff J, Amsalu A, Bodnar F, Kessler A, Posthumus H, Tenge A (2008) Factors influencing adoption and continued use of long-term soil and water conservation measures in five developing countries. Appl Geogr 28:271–280
Greene W (2003) Econometric analysis, fifth edn. Pearson Education Ltd, New Jersey
Hurni H, Wiesmann U, and with an international group of co-editors (2010) Global Change and Sustainable Development: A Synthesis of Regional Experiences from Research Partnerships. Berne, Switzerland: Geographica Bernensia. University of Bern.
Jara-Rojas R, Boris E, Ureta B, Díaz J (2012) Adoption of water conservation practices: a socioeconomic analysis of small-scale farmers in Central Chile. Agric Syst 110:54–62
Kassie M (2009) Policy brief on where does sustainable land management practices work: a comparative study. Environment for Development Initiative, Addis Ababa
Kassie M, Jaleta M, Shiferaw B, Mmbando F, Mekuria M (2013) Adoption of interrelated sustainable agricultural practices in smallholder systems: evidence from rural Tanzania. Technol Forecasting Soc 80:525–540
Kassie M, Zikhali P, Manjur K, Edwards S (2009) Adoption of sustainable agriculture practices: evidence from a semi-arid region of Ethiopia. Nat Res Forum 33:189–198
Kassie et al (2008) Sustainable land management practices improve agricultural productivity. Policy brief: Environmental Economics, Policy Forum for Ethiopia, Addis Ababa
Kirui OK, Mirzabaev A (2015) Drivers of land degradation and adoption of multiple sustainable land management practices in Eastern Africa. 29th International Conference of Agricultural Economists, Milan
Mengstie FA (2009) Assessment of adoption behavior of soil and water conservation practices in the Koga watershed, highlands of Ethiopia. Unpublished M.Sc. Thesis, Cornell University Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.473.7094&rep=rep1&type=pdf
Miheretu BA, Yimer AA (2017) Determinants of farmers' adoption of land management practices in Gelana subwatershed of northern highlands of Ethiopia. Ecol Process 6:19
Moges DM, Taye AA (2017) Determinants of farmers' perception to invest in soil and water conservation technologies in the North-Western highlands of Ethiopia. International Soil and Water Conservation Research 5:56–61
Nkonya E, Von Braun J, Alisher M, Bao Le Q, Ho Young K, Kirui O, Edward K (2013) Economics of land degradation initiative: methods and approach for global and National Assessments. ZEF- discussion papers on development policy no. 183, Bonn
Pender J, Gebremedhin B (2006) Land management, crop production and household income in the highlands of Tigray, northern Ethiopia: an econometric analysis. In: Pender J, Place F, Ehui S (eds) Strategies for sustainable land management in the east African highlands. International Food Policy Research Institute, Washington, DC
Pender J, Gebremedhin B (2007) Determinants of agricultural and land management practices and impacts on crop production and household income in the highlands of Tigrai, Ethiopia. J Afr Econ 17:395–450
Pender JG, Berhanu G, Benin S, Ehui S (2001) Strategies for sustainable development in the Ethiopian highlands. Am J Agric Econ 83(5):1231–1240
Pingali P, Schneider K, Zurek M (2014) Poverty, Agriculture and the Environment: The Case of Sub-Saharan Africa. In: Marginality. Springer, Berlin
Shiferaw B, Holden TS (1998) Resource degradation and adoption of land conservation technologies in the highlands of Ethiopia: a case study of Andit Tid, north Sheawa. Agric Econ 18:233–247
Tadesse M, Belay K (2004) Factors influencing adoption of soil conservation measures in southern Ethiopia: the case of Gununo area. J Agric Rural Dev Trop Subtrop 105:49–62
Teklewold H, Kassie M, Shiferaw B (2013) Adoption of multiple sustainable agricultural practices in rural Ethiopia. J Agric Econ 64:597–623
Tesfaye A, Negatu W, Brouwer R, Van der Zaag P (2014) Understanding soil conservation decision of farmers in the Gedeb watershed, Ethiopia. Land Degrad Dev 25:71–79
Teshome A (2014) Tenure security and soil conservation investment decisions: empirical evidence from East Gojam, Ethiopia. J Dev Agric Econ 6(1):22–32
Tongul H, Hobson M (2013) Scaling up an integrated watershed management approach through social protection programmes in Ethiopia: the MERET and PSNP schemes
von Braun J, Gerber N, Mirzabaev A, Nkonya EM (2013) The economics of land degradation (No. 147910). University of Bonn, Center for Development Research (ZEF), Bonn
von Braun J, Algieri B, Kalkuhl M (2014) World food system disruptions in the early 2000s: causes, impacts and cures. World Food Policy 1(1):1–22
Wagayehu B, Drake L (2003) Soil and water conservation decision behaviour of subsistence farmers in the eastern highlands of Ethiopia: a case study of the Hunde-Lafto area. Department of Economics, Swedish University of Agricultural Sciences, Uppsala, pp 437–451
WOCAT (2005) World overview of conservation approaches and technologies. Available at http://www.wocat.net/about1.asp
World Bank (2007) Review on the determinants of the adoption of sustainable land management practices and their impacts in the Ethiopian highlands, New York
Yirga C (2007) The dynamics of soil degradation and incentives for optimal management in central highlands of Ethiopia. Unpublished Ph.D. thesis. Department of Agricultural Economics, Extension, and Rural Development. University of Pretoria, South Africa Available at: https://repository.up.ac.za/bitstream/handle/2263/25333/Complete.pdf?sequence=6
This research work was funded by Mekelle University (project registration number CRPO/CoDANR/SM/005/09), and we gratefully acknowledge this support. We are also very thankful to the respondents who participated in the survey.
The fund for data collection was obtained from Mekelle University.
The raw data will be made available upon request.
Department of Agricultural and Resource Economics, College of Drylands Agriculture and Natural Resources, Mekelle University, Mekelle, Ethiopia
Haftu Etsay, Teklay Negash & Metkel Aregay
HE developed the concept, designed the study, interpreted the results, and wrote the manuscript. TN participated in the study design, data analysis, and write-up. MA participated in developing the data collection tools and technically supported the data analysis and the write-up. The authors read and approved the final manuscript.
Correspondence to Haftu Etsay.
Etsay, H., Negash, T. & Aregay, M. Factors that influence the implementation of sustainable land management practices by rural households in Tigrai region, Ethiopia. Ecol Process 8, 14 (2019) doi:10.1186/s13717-019-0166-8
Continued use
Plot level
Productivity effect
Sustainable land management practice
Tigrai | CommonCrawl |
El Olivo Azul | Negocios
frequency deviation formula in frequency modulation
In frequency modulation (FM), the frequency of the carrier varies in accordance with the modulating signal while the amplitude and phase of the carrier remain constant; the variation of the instantaneous carrier frequency is proportional to the amplitude of the modulating signal. The amount by which the carrier swings above and below its unmodulated frequency is the frequency deviation, Δf. When the deviation is, say, 3 kHz up and down, it is written as ±3 kHz. Frequency deviation should not be confused with frequency drift, which is an unintended offset of an oscillator from its nominal frequency. The total excursion from the lowest to the highest instantaneous frequency is the carrier swing:

Carrier swing = 2 × frequency deviation = 2 × Δf

The deviation sensitivity k_f of a frequency modulator is its output-versus-input transfer function: the change in output frequency per unit change of the modulating voltage, so the instantaneous frequency deviation is k_f·v_m(t) rad/s. If the modulating signal is sinusoidal, s(t) = A_m cos(ω_m t), the instantaneous phase deviation of the modulated signal is

φ(t) = (k_f A_m / ω_m) sin(ω_m t)

and the single-tone FM wave can be written as

y(t) = A_c cos(2π f_c t + m_f sin(2π f_m t))

where f_c is the carrier frequency, f_m the modulating frequency, and m_f = Δf / f_m is the modulation index (originally known as the modulation factor, hence the symbol m_f): the peak carrier deviation divided by the modulating frequency. The closely related deviation ratio is the ratio of the maximum carrier frequency deviation to the highest audio modulating frequency. Worked example: if the maximum frequency deviation of the carrier is ±25 kHz and the maximum modulating frequency is 10 kHz, then m_f = 25/10 = 2.5 rad.

Percent modulation in FM refers to the ratio of the actual frequency deviation to the maximum allowable frequency deviation, so 100% modulation corresponds to 75 kHz for the commercial FM broadcast band and to 25 kHz for television. When the modulation index is small (less than about π/2), the bandwidth of the FM signal does not depend on the frequency deviation; this is narrowband FM, in which only the Bessel coefficients J0 and J1 are significant and the spectrum consists of the carrier plus two sideband lines. When β ≫ 1 there are many significant sideband lines, which is why FM signals are inherently wider than AM signals carrying the same intelligence bandwidth; at high modulation index, 3 to 5 sidebands may carry significant power. Because the instantaneous frequency deviation of the angle-modulated signal is sinusoidal, the spectrum can be obtained relatively easily.

Commercial FM stations broadcast between 88 and 108 MHz, with base frequencies ending in 0.1, 0.3, 0.5, 0.7, or 0.9, and the modulated signal typically stays within 100 kHz of the base frequency; deviation limits matter equally to Amateur Radio operators transmitting on the VHF and UHF bands. The same ideas carry over to digital FM. For FSK schemes the deviation is usually expressed through the modulation index H and the modulation alphabet size M (M = 2 for 2FSK/2GFSK); for 2FSK/2GFSK the symbol rate equals the data rate and there is only one deviation. C4FM stands for "compatible 4-level frequency modulation"; in its simplest terms it is a special type of 4FSK developed for the TIA/EIA-102 standard, and P25 uses it to transmit digital information in the form of 1s and 0s. On the receive side, a frequency demodulator, also called a frequency discriminator, converts the instantaneous frequency variations into linear voltage changes.

The same quantities appear directly on test equipment and in software. MATLAB's y = fmmod(x, Fc, Fs, freqdev) returns a frequency-modulated signal y for the message x, carrier frequency Fc, sampling rate Fs, and frequency deviation freqdev. A function generator such as the Keysight 33600A can be set up to simulate an FM signal: in the setup illustrated in the original post the center frequency is 500 kHz and the FM deviation is selected as 425 kHz, the modulation depth (AM) or frequency deviation (FM) is controlled by the signal level on the rear-panel Modulation In connector, and the external modulation input has a -3 dB bandwidth of 100 kHz. In the AM case, with 100% modulation depth a +5 V modulating signal drives the output to its maximum amplitude. A simpler demonstration (Figure 3 of the post) modulates a 1 kHz sine wave with a 10 Hz sine wave at a peak frequency deviation of 100 Hz. The post also points to the Chapter 4 self-test on frequency modulation from Louis E. Frenzel's Communications Electronics as a reviewer for Communications Engineering (one multiple-choice item asks for the modulation index and maximum frequency deviation, with the answer keyed as choice (c): 10, 2465.9 Hz). A further exercise asks to recalculate the frequency deviation and modulation index when the audio modulating voltage is increased to 8 V with the modulating frequency unchanged, and again when it is increased to 12 V with the modulating frequency reduced to 400 Hz.
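For readers who want to reproduce the single-tone example numerically, here is a minimal sketch (my own illustration, not from the original post) using the closed-form expression y(t) = cos(2πf_c t + m_f sin(2πf_m t)) with the 1 kHz carrier, 10 Hz modulation, and 100 Hz peak deviation quoted above:

```python
import numpy as np

# Single-tone FM example matching the demonstration described above:
# 1 kHz carrier, 10 Hz modulating sine, 100 Hz peak frequency deviation.
fc = 1_000.0       # carrier frequency, Hz
fm = 10.0          # modulating frequency, Hz
delta_f = 100.0    # peak frequency deviation, Hz
beta = delta_f / fm  # modulation index (10 in this case)

fs = 50_000.0                     # sample rate, Hz
t = np.arange(0, 0.2, 1.0 / fs)   # 200 ms of signal

# y(t) = cos(2*pi*fc*t + beta*sin(2*pi*fm*t))
y = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

# Instantaneous frequency from the phase derivative:
# f_inst(t) = fc + delta_f*cos(2*pi*fm*t), swinging between 900 Hz and 1100 Hz.
f_inst = fc + delta_f * np.cos(2 * np.pi * fm * t)
print(f"carrier swing = {f_inst.max() - f_inst.min():.0f} Hz")  # 2 * delta_f = 200 Hz
```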
This entry was posted in Negocios on December 5, 2020.
Negocios @en | CommonCrawl |
Discrete Optimization: Mathematics, Algorithms, and Computation
Institute for Computational and Experimental Research in Mathematics (ICERM)
https://icerm.brown.edu/programs/sp-s23/
8:30 - 9:30 am EST
11th Floor Collaborative Space
Current Themes of Discrete Optimization: Boot-camp for early-career researchers
9:50 - 10:00 am EST
11th Floor Lecture Hall
Brendan Hassett, ICERM/Brown University
10:00 - 11:00 am EST
Matching Theory and School Choice
Seminar - 11th Floor Lecture Hall
Yuri Faenza, Columbia University
Jon Lee, University of Michigan
Many questions in resource allocation can be formulated as matching problems, where nodes represent the agents/goods, and each node corresponding to an agent is endowed with a preference profile on the (sets of) its neighbors in the graph. Starting with the classical marriage setting by Gale and Shapley, we will investigate algorithmic and structural properties of these models, and discuss applications to the problem of allocating seats in public schools.
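As a concrete companion to the classical marriage setting mentioned above, here is a compact sketch of the Gale-Shapley deferred acceptance procedure (an illustrative one-to-one implementation assuming complete preference lists and equal numbers on both sides; not code from the course):

```python
def gale_shapley(proposer_prefs, receiver_prefs):
    """Deferred acceptance: proposers propose, receivers tentatively accept.

    proposer_prefs[p] lists receivers in p's order of preference;
    receiver_prefs[r] lists proposers in r's order of preference.
    Returns a dict mapping each receiver to its matched proposer.
    """
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next receiver each proposer will try
    match = {}                                    # receiver -> proposer (tentative)
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:      # r prefers the new proposer
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)
    return match

prefs_students = {"s1": ["A", "B"], "s2": ["A", "B"]}
prefs_schools = {"A": ["s2", "s1"], "B": ["s1", "s2"]}
print(gale_shapley(prefs_students, prefs_schools))  # {'A': 's2', 'B': 's1'}
```

The matching returned is the stable matching that is optimal for the proposing side, which is one reason deferred-acceptance variants are used in school-seat allocation.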
12:30 - 2:30 pm EST
Lunch/Free Time
2:30 - 3:30 pm EST
Binary polynomial optimization: theory, algorithms, and applications
Aida Khajavirad, Lehigh University
Marcia Fampa, Federal University of Rio de Janeiro
In this mini-course, I present an overview of some recent advances in the theory of binary polynomial optimization together with specific applications in data science and machine learning. First utilizing a hypergraph representation scheme, I describe the connection between hypergraph acyclicity and the complexity of unconstrained binary polynomial optimization. As a byproduct, I present strong linear programming relaxations for general binary polynomial optimization problems and demonstrate their impact via extensive numerical experiments. Finally, I focus on two applications from data science, namely, Boolean tensor factorization and higher-order Markov random fields, and demonstrate how our theoretical findings enable us to obtain efficient algorithms with theoretical performance guarantees for these applications.
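For context, a common baseline in this area is the so-called standard linearization of each monomial; for a single hyperedge $e$ (generic notation, not taken from the talk), for binary $x$ the variable $z_e=\prod_{i\in e}x_i$ is captured exactly by
$$ z_e \le x_i \ \ (i\in e), \qquad z_e \ge \sum_{i\in e} x_i - |e| + 1, \qquad z_e \ge 0, \qquad x\in\{0,1\}^n, $$
and relaxing $x\in\{0,1\}^n$ to $[0,1]^n$ yields a linear programming relaxation whose tightness is closely tied to the structure (for example, the acyclicity) of the underlying hypergraph.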
Approximation Algorithms for Network Design Problems
Vera Traub, University of Bonn
Laura Sanità, Bocconi University of Milan
The goal of network design is to construct cheap networks that satisfy certain connectivity requirements. A celebrated result by Jain [Combinatorica, 2001] provides a 2-approximation algorithm for a wide class of these problems. However, even for many very basic special cases nothing better is known. In this lecture series, we present an introduction and some of the new techniques underlying recent advances in this area. These techniques led for example to a new algorithm for the Steiner Tree Problem and to the first better-than-2 approximation algorithm for Weighted Connectivity Augmentation.
Problem Session
Poster Session / Coffee Break
Poster Session - 11th Floor Collaborative Space
Polynomial optimization on finite sets
Mauricio Velasco, Universidad de Los Andes
Jesús De Loera, University of California, Davis
If $X\subseteq \mathbb{R}^n$ is a finite set then every function on $X$ can be written as the restriction of a polynomial in n-variables. As a result, polynomial optimization on finite sets is literally the same as general (nonlinear) optimization on such sets. Thinking of functions as polynomials, however, provides us with plenty of additional structures which can be leveraged for constructing better (or at least different) optimization algorithms. In these lectures, we will overview some of the key problems and results coming from this algebraic point of view. Specifically, we will discuss:
(1) How to prove that a polynomial function is nonnegative on a finite set $X$? What kind of algebraic certificates (proofs) are available and what can we say about their size and complexity?
(2) If the set $X$ has symmetries, can we leverage them in some systematic way that is useful for optimization?
(3) Characterizing the affine linear functions that are nonnegative on $X$ gives a description of the polytope $P={\rm Conv}(X)$. Stratifying such functions by the degree of their nonnegativity certificates leads to (semidefinite) hierarchies of approximation for the polytope $P$, and it is natural to ask about their speed of convergence and its relationship with the combinatorics of $P$.
Finally, if time permits, we will discuss some recent ideas combining the above methods with reinforcement learning as a way to improve scalability for combinatorial optimization problems. The results in (1), (2), (3) above are due to Blekherman, Gouveia, Laurent, Nie, Parrilo, Saunderson, Thomas, and others. These lectures intend to be a self-contained introduction to this vibrant and exciting research area.
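A tiny numerical illustration of the opening claim that every function on a finite set is the restriction of a polynomial (my own example for X = {0,1}^2, not material from the lectures): solve an interpolation system for the multilinear coefficients.

```python
import numpy as np
from itertools import product

# Any function on X = {0,1}^2 equals a multilinear polynomial
# a0 + a1*x1 + a2*x2 + a12*x1*x2 on X: solve a 4x4 interpolation system.
X = list(product([0, 1], repeat=2))
f = {(0, 0): 3.0, (0, 1): -1.0, (1, 0): 2.0, (1, 1): 5.0}   # arbitrary values

A = np.array([[1, x1, x2, x1 * x2] for (x1, x2) in X], dtype=float)
b = np.array([f[x] for x in X])
coeffs = np.linalg.solve(A, b)   # [a0, a1, a2, a12]
print(coeffs)                    # polynomial agreeing with f at every point of X
```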
Problem Sheet
ICERM Director and Organizer Welcome
Welcome - 11th Floor Lecture Hall
Volker Kaibel, Otto-von-Guericke Universität Magdeburg
Informal Tea
Coffee Break - 11th Floor Collaborative Space
Director/Organizer Meeting
Meeting - 11th Floor Conference Room
Postdoc/Graduate Student Meeting with ICERM Director
Meeting - 11th Floor Lecture Hall
Meet to organize the mother of all seminar(s)
Postdoc/ Grad Introductions
Professional Development: Ethics I
Professional Development - 11th Floor Lecture Hall
Professional Development: Ethics II
Linear and Non-Linear Mixed Integer Optimization
On Constrained Mixed-Integer DR-Submodular Minimization
Simge Küçükyavuz, Northwestern University
Diminishing Returns (DR)-submodular functions encompass a broad class of functions that are generally non-convex and non-concave. We study the problem of minimizing any DR-submodular function, with continuous and general integer variables, under box constraints and possibly additional monotonicity constraints. We propose valid linear inequalities for the epigraph of any DR-submodular function under the constraints. We further provide the complete convex hull of such an epigraph, which, surprisingly, turns out to be polyhedral. We propose a polynomial-time exact separation algorithm for our proposed valid inequalities, with which we first establish the polynomial-time solvability of this class of mixed-integer nonlinear optimization problems. This is joint work with Kim Yu.
Semidefinite Optimization with Eigenvector Branching
Kurt Anstreicher, University of Iowa
Semidefinite programming (SDP) problems typically utilize the constraint that X-xx' is positive semidefinite to obtain a convex relaxation of the condition X=xx', where x is an n-vector. We consider a new hyperplane branching method for SDP based on using an eigenvector of X-xx'. This branching technique is related to previous work of Saxena, Bonami and Lee, who used such an eigenvector to derive a disjunctive cut. We obtain excellent computational results applying the new branching technique to difficult instances of the two-trust-region subproblem.
A Breakpoints Based Method for the Maximum Diversity and Dispersion Problems
Dorit Hochbaum, University of California, Berkeley
The maximum diversity, or dispersion, problem (MDP) is to select from a given set a subset of elements of given size (budget), so that the sum of pairwise distances, or utilities, between the selected elements is maximized. We introduce here a method, called the Breakpoints (BP) algorithm, based on a technique proposed in Hochbaum (2009), to generate the concave piecewise linear envelope of the solutions to the relaxation of the problem for all values of the budget. The breakpoints in this envelope are provably optimal for the respective budgets and are attained using a parametric cut procedure that is very efficient. The problem is then solved, for any given value of the budget, by applying a greedy-like method to add or subtract nodes from adjacent breakpoints. This method works well if, for the given budget, there are breakpoints that are "close". However, for many data sets and budgets this is not the case, and the breakpoints are sparse. We introduce a perturbation technique applied to the utility values in cases where there is a paucity of breakpoints, and show that this results in denser collections of breakpoints. Furthermore, each optimal perturbed solution is quite close to an optimal non-perturbed solution. We compare the performance of our breakpoints algorithm to leading methods for these problems: the metaheuristic OBMA, which was recently shown to perform better than GRASP, neighborhood search, and tabu search, and Gurobi, an integer programming solver. It is demonstrated that our method dominates these methods in terms of computation speed, with comparable or better solution quality.
Approximating integer programs with monomial orders
Akshay Gupte, University of Edinburgh
We consider the problem of maximizing a function over integer points in a compact set. Inner- and outer-approximations of the integer feasible set are obtained using families of monomial orders over the integer lattice. The convex hull is characterized when the monomial orders satisfy some properties. When the objective function is submodular or subadditive, we provide a theoretical guarantee on the quality of the inner-approximations in terms of their gap to the optimal value. An algorithm is proposed to generate feasible solutions, and it is competitive with a commercial solver in numerical experiments on benchmark test instances for integer LPs.
Solving ACOPF problems
Daniel Bienstock, Columbia University
In this talk we will detail our recent experience in solving the ACOPF problem, a notorious MINLP. We will do this from two perspectives. First, we will detail our experience in the recent, and on-going GO competition for solving modern, large-scale versions of ACOPF which include scenario constraints and integer variables. Second, we will outline challenges to state-of-the-art MINLP solvers based on spatial branch-and-bound that arise in ACOPF instances. Finally we will discuss some fundamental issues related to numerical precision.
Maximal quadratic free sets: basic constructions and steps towards a full characterization
Gonzalo Muñoz, Universidad de O'Higgins
In 1971, Balas introduced intersection cuts as a method for generating cutting planes in integer optimization. These cuts are derived from convex S-free sets, and inclusion-wise maximal S-free sets yield the strongest intersection cuts. When S is a lattice, maximal S-free sets are well-studied from theoretical and computational standpoints. In this talk, we focus on the case when S is defined by a general quadratic inequality and show how to construct basic maximal quadratic-free sets. Additionally, we explore how to generalize the basic procedure to construct a plethora of new maximal quadratic-free sets for homogeneous quadratics. Joint work with Joseph Paat and Felipe Serrano.
From micro to macro structure: a journey in company of the Unit Commitment problem
Antonio Frangioni, Università di Pisa
The fact that "challenging problems motivate methodological advances", as obvious as it may seem, is nonetheless very true. I was drawn long ago to Unit Commitment problems because of a specific methodology, but studying it led us to interesting results for entirely different ones. This talk will summarise on (the current status of) a long journey of discovery that ebbed and flowed between different notions of structure, starting from the "macro" one of spatial decomposition and its algorithmic implications, descending to the "micro" one of the Perspective Reformulation of tiny fragments of the problem, putting both back together to full-problem size with the definition of strong but large formulations (and the nontrivial trade-offs they entail), and finally skyrocketing to large- and huge-scale problems (stochastic UC, stochastic reservoirs optimization, long-term energy system design) where UC (and its sub-structures) is but one of the multiple nested forms of structure. The talk will necessarily have to focus on a few of the results that hopefully have broader usefulness than just UC, among which a recent one on the Convex Hull of Star-Shaped MINLPs, but it will also try to give a broad-brush of the larger picture, with some time devoted to discussing the nontrivial implications of actually implementing solution methods for huge-scale problems with multiple nested form of heterogeneous structure and the (surely partial and tentative) attempts at tackling these issues within the SMS++ modelling system.
Ricardo Fukasawa, University of Waterloo
Network Design Queueing MINLP: Models, Reformulations, and Algorithms
Miguel Lejeune, George Washington University
Merve Bodur, University of Toronto
We present several queueing-based optimization models to design networks in which the objective is to minimize the response time. The networks are modelled as collections of interdependent M/G/1 or M/G/K queueing systems with fixed and mobile servers. The optimization models take the form of nonconvex MINLP problems with fractional and bilinear terms. We derive a reformulation approach and propose a solution method that features a warm-start component, new optimality-based bound tightening (OBBT) techniques, and an outer approximation algorithm. In particular, we propose new MILP and feasibility OBBT models that can derive multiple variable bounds at once. The proposed approach is applied to the drone-based delivery of automated external defibrillators to out-of-hospital cardiac arrests (OHCA) and naloxone to opioid overdoses. The computational experiments are based on real-life data from Virginia Beach, and ascertain the computational efficiency of the approach and its impact on the response time and the probability of survival of patients.
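As background for the M/G/1 building blocks mentioned above, here is a minimal sketch of the Pollaczek-Khinchine formula for the mean wait in an M/G/1 queue (standard queueing theory, not code from the talk):

```python
def mg1_mean_wait(arrival_rate, service_mean, service_second_moment):
    """Pollaczek-Khinchine mean waiting time in queue for an M/G/1 system.

    W_q = lambda * E[S^2] / (2 * (1 - rho)),  with rho = lambda * E[S] < 1.
    """
    rho = arrival_rate * service_mean
    if rho >= 1:
        raise ValueError("queue is unstable: utilization rho must be < 1")
    return arrival_rate * service_second_moment / (2.0 * (1.0 - rho))

# Example: Poisson arrivals at 0.5/min, exponential service with mean 1 min
# (so E[S^2] = 2): W_q = 0.5 * 2 / (2 * 0.5) = 1 minute.
print(mg1_mean_wait(0.5, 1.0, 2.0))
```

Expressions of this kind are what make the response-time objective a nonconvex fractional function of the design variables.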
Virtual Speaker
Matthias Köppe, UC Davis
Minimizing quadratics over integers
Alberto Del Pia, University of Wisconsin-Madison
Nick Sahinidis, Georgia Institute of Technology
Mixed integer quadratic programming is the problem of minimizing a quadratic polynomial over points in a polyhedral region with some integer components. It is a natural extension of mixed integer linear programming and it has a wide array of applications. In this talk, I will survey some recent theoretical developments in mixed integer quadratic programming, with a focus on complexity, algorithms, and fundamental properties.
Optimizing for Equity in Urban Planning
Emily Speakman, University of Colorado - Denver
In the Environmental Justice literature, the Kolm-Pollak Equally Distributed Equivalent (EDE) is the preferred metric for quantifying the experience of a population. The metric incorporates both the center and the spread of the distribution of the individual experiences, and therefore, captures the experience of an "average" individual more accurately than the population mean. In particular, the mean is unable to measure the equity of a distribution, while the Kolm-Pollak EDE is designed to penalize for inequity. In this talk, we explore the problem of finding an optimal distribution from various alternatives using the Kolm-Pollak EDE to quantify optimal. Unfortunately, optimizing over the Kolm-Pollak EDE in a mathematical programming model is not trivial because of the nonlinearity of the function. We discuss methods to overcome this difficulty and present computational results for practical applications. Our results demonstrate that optimizing over the Kolm-Pollak EDE in a standard facility location model has the same computational burden as optimizing over the population mean. Moreover, it often results in solutions that are significantly more equitable while having little impact on the mean of the distribution, versus optimizing over the mean directly. Joint work with Drew Horton, Tom Logan, and Daphne Skipper
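For readers unfamiliar with the metric, here is a small sketch of the Kolm-Pollak EDE as it is commonly written (the sign convention for the inequality-aversion parameter kappa varies across papers, so treat this as an illustrative form rather than the talk's exact definition):

```python
import numpy as np

def kolm_pollak_ede(x, kappa):
    """Kolm-Pollak equally distributed equivalent of the values in x.

    EDE = -(1/kappa) * log( mean( exp(-kappa * x) ) ).
    For a desirable quantity use kappa > 0 (EDE <= mean, inequity is penalized);
    for an undesirable quantity such as travel distance use kappa < 0 (EDE >= mean).
    """
    x = np.asarray(x, dtype=float)
    return -np.log(np.mean(np.exp(-kappa * x))) / kappa

distances = [1.0, 1.0, 1.0, 9.0]           # one person is much worse off
print(np.mean(distances))                   # 3.0
print(kolm_pollak_ede(distances, -0.5))     # > 3.0: the EDE penalizes the unequal spread
```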
Explicit convex hull description of bivariate quadratic sets with indicator variables
We obtain an explicit description for the closure of the convex hull of bivariate quadratic sets with indicator variables in the space of the original variables. We present a simple separation algorithm that can be incorporated into branch-and-cut based solvers to enhance the quality of existing relaxations.
12:25 - 12:30 pm EST
Matrix Completion over GF(2) with Applications to Index Coding
Jeff Linderoth, University of Wisconsin-Madison
We discuss integer-programming-based approaches to doing low-rank matrix completion over the finite field of two elements. We are able to derive an explicit description for the convex hull of an individual matrix element in the decomposition, using this as the basis of a new formulation. Computational results showing the superiority of the new formulation over a natural formulation based on McCormick inequalities with integer-valued variables, and an extended disjunctive formulation arising from the parity polytope are given in the context of linear index coding.
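As a small self-contained illustration of the arithmetic involved (not the formulation from the talk), the rank over GF(2) that such completions target can be computed by Gaussian elimination modulo 2:

```python
def rank_gf2(rows):
    """Rank over GF(2) of a 0/1 matrix given as a list of rows (lists of 0/1)."""
    rows = [r[:] for r in rows]
    n_cols = len(rows[0]) if rows else 0
    rank, pivot_row = 0, 0
    for col in range(n_cols):
        # find a row at or below pivot_row with a 1 in this column
        pivot = next((i for i in range(pivot_row, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        for i in range(len(rows)):
            if i != pivot_row and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[pivot_row])]
        pivot_row += 1
        rank += 1
    return rank

# Over the reals this matrix has rank 3, but over GF(2) its rank is 2
# (the third row is the mod-2 sum of the first two):
M = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(rank_gf2(M))  # 2
```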
Dantzig-Wolfe Bound by Cutting Planes
Oktay Gunluk, Cornell University
Yuan Zhou, University of Kentucky
Dantzig-Wolfe (DW) decomposition is a well-known technique in mixed-integer programming for decomposing and convexifying constraints to obtain potentially strong dual bounds. We investigate Fenchel cuts that can be derived using the DW decomposition algorithm and show that these cuts can provide the same dual bounds as DW decomposition. We show that these cuts, in essence, decompose the objective function cut one can simply write using the DW bound. Compared to the objective function cut, these Fenchel cuts lead to a formulation with lower dual degeneracy, and consequently a better computational performance under the standard branch-and-cut framework in the original space. We also discuss how to strengthen these cuts to improve the computational performance further. We test our approach on the Multiple Knapsack Assignment Problem and show that the proposed cuts are helpful in accelerating the solution time without the need to implement branch and price.
Number of inequalities in integer-programming descriptions of a set
Gennady Averkov, Brandenburg Technical University
I am going to present results obtained jointly with Manuel Aprile, Marco Di Summa, Christopher Hojny and Matthias Schymura. Assume you want to describe a set X of integer points as the set of integer solutions of a linear system of inequalities, and you want to use a system for X with the minimum number of inequalities. Can you compute this number algorithmically? The answer is not known in general! Does the choice of the coefficient field (such as the field of real numbers versus the field of rational numbers) have any influence on the number you get as the answer? Surprisingly, it does! On a philosophical level, should we do integer programming over rational or over real coefficients? That's actually not quite clear, but for some aspects there is a difference, so it might be interesting to reflect on this point and weigh pros and cons.
Reciprocity between tree ensemble optimization and multilinear optimization
Mohit Tawarmalani, Purdue University
We establish a polynomial equivalence between tree ensemble optimization and optimization of multilinear functions over the Cartesian product of simplices. Using this, we derive new formulations for tree ensemble optimization problems and obtain new convex hull results for multilinear polytopes. A computational experiment on multi-commodity transportation problems with costs modeled using tree ensembles shows the practical advantage of our formulation relative to existing formulations of tree ensembles and other piecewise-linear modeling techniques. We then consider piecewise polyhedral relaxation of multilinear optimization problems. We provide the first ideal formulation over non-regular partitions. We also improve the relaxations over regular partitions by adding linking constraints. These relaxations significantly improve performance of ALPINE and are included in the software.
Integer Semidefinite Programming - a New Perspective
Renata Sotirov, Tilburg University
Fatma Kılınç-Karzan, Carnegie Mellon University
Integer semidefinite programming can be viewed as a generalization of integer programming where the vector variables are replaced by positive semidefinite integer matrix variables. The combination of positive semidefiniteness and integrality allows various optimization problems to be formulated as integer semidefinite programs (ISDPs). Nevertheless, ISDPs have received attention only very recently. In this talk we show how to extend the Chvátal-Gomory (CG) cutting-plane procedure to ISDPs. We also show how to exploit CG cuts in a branch-and-cut framework for ISDPs. Finally, we demonstrate the practical strength of the CG cuts in our branch-and-cut algorithm. Our results provide a new perspective on ISDPs.
Alper Atamturk, University of California - Berkeley
Markov Chain-based Policies for Multi-stage Stochastic Integer Linear Programming
Pietro Belotti, Politecnico di Milano
We introduce a novel aggregation framework to address multi-stage stochastic programs with mixed-integer state variables and continuous local variables (MSILPs). Our aggregation framework imposes additional structure to the integer state variables by leveraging the information of the underlying stochastic process, which is modeled as a Markov chain (MC). We present a novel branch-and-cut algorithm integrated with stochastic dual dynamic programming as an exact solution method to the aggregated MSILP, which can also be used in an approximation form to obtain dual bounds and implementable feasible solutions. Moreover, we apply two-stage linear decision rule (2SLDR) approximations and propose MC-based variants to obtain high-quality decision policies with significantly reduced computational effort. We test the proposed methodologies in a novel MSILP model for hurricane disaster relief logistics planning.
On practical first order methods for LP
Daniel Espinoza, University of Chile
Solving linear programs is nowadays an everyday task, carried out even in embedded systems as well as on very large hardware. However, solving very large models has remained a major challenge, either because most successful algorithms require more than linear space to solve such models, or because they become extremely slow in practice. Although first-order methods and potential-function methods have been around for a long time, they have failed to be broadly applicable, or no competitive implementations are widely available. In this talk we will motivate the need for such a class of algorithms, share some (known) evidence that such schemes have worked in special situations, and present results of experiments running one such algorithm (PDLP) both on standard benchmark models and on very large models arising from network planning, as well as results on a first-order method for general LPs using an exponential potential function to deal with unstructured constraints.
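For reference, here is a bare-bones sketch of the primal-dual hybrid gradient (PDHG) iteration that first-order LP solvers of this kind build on, written for the standard-form LP min c'x subject to Ax = b, x >= 0 (an illustrative toy, not the PDLP implementation, which layers many practical enhancements on top of this core iteration):

```python
import numpy as np

def pdhg_lp(c, A, b, iters=5000, tau=None, sigma=None):
    """Primal-dual hybrid gradient for  min c'x  s.t.  Ax = b, x >= 0.

    Iterates toward a saddle point of L(x, y) = c'x - y'(Ax - b).
    Step sizes are chosen so that tau * sigma * ||A||^2 < 1.
    """
    m, n = A.shape
    norm_A = np.linalg.norm(A, 2)
    tau = tau or 0.9 / norm_A
    sigma = sigma or 0.9 / norm_A
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)   # projected primal step
        y = y + sigma * (b - A @ (2 * x_new - x))           # dual step with extrapolation
        x = x_new
    return x, y

# Tiny example:  min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum x = (1, 0)).
c = np.array([1.0, 2.0]); A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x, y = pdhg_lp(c, A, b)
print(np.round(x, 3))  # should be close to [1, 0]
```

Each iteration needs only matrix-vector products with A and A', which is what makes such methods attractive for models too large for factorization-based simplex or interior-point codes.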
Ideal polyhedral relaxations of non-polyhedral sets
Andres Gomez, University of Southern California
Algorithms for mixed-integer optimization problems are based on the sequential construction of tractable relaxations of the discrete problem, until the relaxations are sufficiently tight to guarantee optimality of the resulting solution. For classical operational and logistics problems, which can be formulated as mixed-integer linear optimization problems, it is well-known that such relaxations should be polyhedral. Thus, there has been a sustained stream of research spanning several decades on constructing and exploiting linear relaxations. As a consequence, mixed-integer linear optimization problems deemed intractable 30 years ago can be solved to optimality in seconds or minutes nowadays. Modern statistical and decision-making problems call for mixed-integer nonlinear optimization (MINLO) formulations, which inherently lead to non-polyhedral relaxations. There has been substantial progress in extending and adapting techniques from both the mixed-integer linear and continuous nonlinear optimization literatures, but there may be a fundamental limit on the effectiveness of such approaches, as they fail to exploit the specific characteristics of MINLO problems. In this talk, we discuss recent progress in studying the fundamental structure of MINLO problems. In particular, we show that such problems have a hidden polyhedral substructure that captures the non-convexities associated with discrete variables. Thus, by exploiting this substructure, convexification theory and methods based on polyhedral theory can naturally be used to study non-polyhedral sets. We also provide insights into how to design algorithms that tackle the ensuing relaxations.
On Dantzig-Wolfe Relaxation of Rank Constrained Optimization: Exactness, Rank Bounds, and Algorithms
Weijun Xie, Georgia Institute of Technology
This paper studies the rank constrained optimization problem (RCOP) that aims to minimize a linear objective function over intersecting a prespecified closed rank constrained domain set with two-sided linear matrix inequalities. The generic RCOP framework exists in many nonconvex optimization and machine learning problems. Although RCOP is, in general, NP-hard, recent studies reveal that its Dantzig-Wolfe Relaxation (DWR), which refers to replacing the domain set by its closed convex hull, can lead to a promising relaxation scheme. This motivates us to study the strength of DWR. Specifically, we develop the first-known necessary and sufficient conditions under which the DWR and RCOP are equivalent. Beyond the exactness, we prove the rank bound of optimal DWR extreme points. We design a column generation algorithm with an effective separation procedure. The numerical study confirms the promise of the proposed theoretical and algorithmic results.
3:30 - 4:00 pm EDT
9:00 - 10:00 am EDT
Professional Development: Hiring Process
Professional Development: Papers and Journals
Professional Development: Job Applications
Professional Development: Grant Proposals
All event times are listed in ICERM local time in Providence, RI (Eastern Standard Time / UTC-5).
| CommonCrawl
Laplace transform of product of signal and impulse train
I'm reading 'Discrete Time Control Systems' book by Ogata and came across a few statements (specifically, (3-1) and (3-2)) which I have not been able to understand.
It is said that any continuous signal can be sampled and the output represented as $$y(t) = \sum_{n=- \infty}^{+\infty}x(nT)\delta(t-nT) $$
Now, taking the Laplace transform: $$\begin{align} Y(s) &= \sum_{n=- \infty}^{+\infty}x(nT)\mathscr{L}\{\delta(t-nT)\} \\ &= \sum_{n=- \infty}^{+\infty}x(nT)e^{-nTs} \\ \end{align}$$
Now I have a confusion:
Is the $\delta(t)$ function
the Dirac delta function, so that $\mathscr{L}\{\delta(t-nT)\} = e^{-nTs}$, but then the signal representation makes no sense as there is infinite amplitude in the output signal at multiples of $nT$
or is it the unit impulse function (value $1$ at $t=0$ and value $0$ everywhere else) in which case how exactly has $Y(s)$ been evaluated?
discrete-signals sampling continuous-signals z-transform laplace-transform
Anant Joshi
since no one else seems to have said it, if the ideally-sampled $x(t)$ is defined as
$$x_\text{s}(t) \triangleq \sum_{n=-\infty}^{+\infty}x(nT)\delta(t-nT) $$
and we define discrete-time samples as $x[n] \triangleq x(nT)$, then the Laplace transform of $x_\text{s}(t)$ is
$$\begin{align} X_\text{s}(s) &= \sum_{n=- \infty}^{+\infty}\mathscr{L}\{x(nT) \delta(t-nT)\} \\ &= \sum_{n=- \infty}^{+\infty}x[n] \mathscr{L}\{\delta(t-nT)\} \\ &= \sum_{n=- \infty}^{+\infty}x[n] e^{-nTs} \\ &= \sum_{n=- \infty}^{+\infty}x[n]z^{-n} \\ &= \mathcal{Z}\{x[n]\} \Bigg|_{z=e^{sT}} \\ \end{align}$$
or, if I abuse the notation a little and change the meaning of $X(\cdot)$, the Z-Transform of $x[n]$ is related to the Laplace Transform of $x_\text{s}(t)$ by
$$ \mathcal{Z}\{x[n]\}\Bigg|_{z=e^{sT}} = X(z)\Bigg|_{z=e^{sT}} = \mathscr{L}\{x_\text{s}(t)\} $$
So the Z-Transform of a discrete-time signal is nothing other than the Laplace Transform of the corresponding ideally-sampled continuous-time signal.
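A quick numerical sketch makes this equivalence concrete; the decaying exponential $x[n]=a^n$, the sampling period and the complex frequency below are arbitrary choices for illustration:

    import numpy as np

    # arbitrary example signal: x[n] = a^n for n >= 0, truncated at N terms
    a, T, N = 0.7, 1e-3, 500
    s = 2.0 + 1j * 2 * np.pi * 50          # an arbitrary complex frequency
    n = np.arange(N)
    x = a ** n

    # Laplace transform of the ideally sampled signal: sum of x[n] e^{-nTs}
    X_laplace = np.sum(x * np.exp(-n * T * s))

    # closed-form Z-transform of a^n, X(z) = 1/(1 - a z^{-1}), evaluated at z = e^{sT}
    z = np.exp(s * T)
    X_z = 1.0 / (1.0 - a / z)

    print(abs(X_laplace - X_z))            # ~0, up to truncation and rounding error

The truncated sum and the closed-form Z-transform agree to machine precision, which is just the identity $e^{-nTs}=\left(e^{sT}\right)^{-n}$ at work.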
robert bristow-johnson
$\delta(t)$ is indeed the Dirac delta impulse, which is not defined by its values (because it is no ordinary function), but by its properties under an integral.
The first expression in your question is a standard model for sampling a continuous-time signal, and since it's a mathematical model it does not represent actual physical sampling. You can replace the Dirac delta impulse by other impulse-like functions. If you use a rectangular impulse you get a zero-order hold. However, if you're not interested in any effects of non-ideal sampling, multiplication with a Dirac comb is a convenient and useful model for sampling.
Matt L.
Interactive Gallery of Quadric Surfaces
The hyperboloid of one sheet
Quadric surfaces
Cross sections of a surface
Equation: $\displaystyle\frac{x^2}{A^2}+\frac{y^2}{B^2} - \frac{z^2}{C^2} = 1$
The hyperboloid of one sheet is possibly the most complicated of all the quadric surfaces. For one thing, its equation is very similar to that of a hyperboloid of two sheets, which is confusing. (See the page on the two-sheeted hyperboloid for some tips on telling them apart.) For another, its cross sections are quite complex.
Having said all that, this is a shape familiar to any fan of the Simpsons, or even anybody who has only seen the beginning of the show. A hyperboloid of one sheet looks an awful lot like a cooling tower at the Springfield Nuclear Power Plant.
Below, you can see the cross sections of a simple one-sheeted hyperboloid with $A=B=C=1$. The horizontal cross sections are ellipses -- circles, even, in this case -- while the vertical cross sections are hyperbolas. The reason I said they are so complex is that these hyperbolas can open up and down or sideways, depending on what values you choose for $x$ and $y$. Check the example and see for yourself. Yikes! If you do these cross sections by hand, you have to check an awful lot of special cases.
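To see exactly where those special cases come from, here is the algebra for the simple hyperboloid $x^2+y^2-z^2=1$ from the first applet; nothing below is new, it is just the equation rearranged. Slicing with a horizontal plane $z=z_0$ gives
$$x^2+y^2 = 1+z_0^2,$$
a circle of radius $\sqrt{1+z_0^2}$, so the horizontal cross sections never disappear, no matter how large $z_0$ is. Slicing with a vertical plane $x=x_0$ gives
$$y^2-z^2 = 1-x_0^2.$$
When $|x_0|<1$ the right-hand side is positive and the hyperbola opens sideways, in the $y$-direction; when $|x_0|>1$ it is negative and the hyperbola opens up and down, in the $z$-direction; and when $|x_0|=1$ the cross section degenerates into the pair of lines $z=\pm y$.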
Hyperboloid of one sheet cross sections. The hyperboloid of one sheet $x^2+y^2-z^2=1$ is plotted along with its cross sections. You can drag the blue points on the sliders to change the location of the different types of cross sections.
The constants $A$, $B$, and $C$ once again affect how much the hyperboloid stretches in the x-, y-, and z-directions. You can see this for yourself in the second applet. Notice how quickly the hyperboloid grows, particularly in the $z$-direction. When $C=2$, a relatively small number, the surface already stretches from -8 to +8 on the $z$-axis.
Hyperboloid of one sheet coefficients. The hyperboloid of one sheet $\frac{x^2}{A^2}+\frac{y^2}{B^2} - \frac{z^2}{C^2} = 1$ is plotted. You can drag the blue points on the sliders to change the coefficients $A$, $B$, and $C$.
One caveat: the applet only shows a small portion of the hyperboloid, but it continues on forever. So adjusting the value of $C$ doesn't really make the surface taller -- it's already "infinitely" tall -- but it certainly does affect the shape and slope of the surface. If you know something about partial derivatives, you could investigate how quickly $z$ changes with respect to $x$ and $y$ for different values of $C$. You could also explore why adjusting $C$ seems to have a more dramatic effect than changing $A$ and $B$.
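One way to start that investigation, as a sketch: on the upper half of the surface, where $z>0$, you can solve for $z$ and differentiate:
$$z = C\sqrt{\frac{x^2}{A^2}+\frac{y^2}{B^2}-1}, \qquad \frac{\partial z}{\partial x} = \frac{C\,x}{A^2\sqrt{\frac{x^2}{A^2}+\frac{y^2}{B^2}-1}} = \frac{C^2\,x}{A^2\,z}.$$
At any fixed $(x,y)$ the slope in the $x$-direction is proportional to $C$, which is one reason the $C$ slider seems to change the surface so much more dramatically than the $A$ and $B$ sliders.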
Here are a few more points for you to consider.
Once again, the sliders don't go all the way to 0. Why not? Make all of them as small as possible and zoom in to see the resulting hyperboloid.
Look at the equation. What should happen when $x=A$ or $x=-A$? Check this in the first applet; recall that $A=1$ there.
Does there always have to be a "hole" through the hyperboloid, or could the sides touch at the origin? In other words, could the cross section given by $z=0$ ever be a point instead of an ellipse? Experiment with the second applet; be sure to look directly from the top and zoom in before just assuming that the hole is gone.
Rogness J, "The hyperboloid of one sheet." From Math Insight. http://mathinsight.org/hyperboloid_one_sheet
Keywords: cross section, quadric surface, surface, visualization
The hyperboloid of one sheet by Jon Rogness is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us.
This page is from the Interactive Gallery of Quadric Surfaces by Jon Rogness.
"…and she hated every stitch"
How to recover lost files added to Git but not committed
Neckbeards and other notes on "The Magnificent Ambersons"
Steph Curry: fluke or breakthrough?
Thackeray's illustrations for Vanity Fair
The sage and the seven horses
Two things about git
[ Disclaimer: I know very little about basketball. I think there's a good chance this article contains at least one basketball-related howler, but I'm too ignorant to know where it is. ]
Randy Olson recently tweeted a link to a New York Times article about Steph Curry's new 3-point record. Here is Olson's snapshot of a portion of the Times' clever and attractive interactive chart:
(Skip this paragraph if you know anything about basketball. The object of the sport is to throw a ball through a "basket" suspended ten feet (3 meters) above the court. Normally a player's team is awarded two points for doing this. But if the player is sufficiently far from the basket—the distance varies but is around 23 feet (7 meters)—three points are awarded instead. Carry on!)
The chart demonstrates that Curry this year has shattered the single-season record for three-point field goals. The previous record, set last year, is 286, also by Curry; the new record is 406. A comment by the authors of the chart says
The record is an outlier that defies most comparisons, but here is one: It is the equivalent of hitting 103 home runs in a Major League Baseball season.
(The current single-season home run record is 73, and !!\frac{406}{286}·73 \approx 103!!.)
I found this remark striking, because I don't think the record is an outlier that defies most comparisons. In fact, it doesn't even defy the comparison they make, to the baseball single-season home run record.
In 1919, the record for home runs in a single season was 29, hit by Babe Ruth. The 1920 record, also by Ruth, was 54. To make the same comparison as the authors of the Times article, that is the equivalent of hitting !!\frac{54}{29}·73 \approx 136!! home runs in a Major League Baseball season.
No, far from being an outlier that defies most comparisons, I think what we're seeing here is something that has happened over and over in sport, a fundamental shift in the way the game is played; in short, a breakthrough. In baseball, Ruth's 1920 season was the end of what is now known as the dead-ball era. The end of the dead-ball era was caused by the confluence of several trends (shrinking ballparks), rule changes (the spitball), and one-off events (Ray Chapman, the Black Sox). But an important cause was simply that Ruth realized that he could play the game in a better way by hitting a crapload of home runs.
The new record was the end of a sudden and sharp upward trend. Prior to Ruth's 29 home runs in 1919, the record had been 27, a weird fluke set way back in 1887 when the rules were drastically different. Typical single-season home run records in the intervening years were in the 11 to 16 range; the record exceeded 20 in only four of the intervening 25 years.
Ruth's innovation was promptly imitated. In 1920, the #2 hitter hit 19 home runs and the #10 hitter hit 11, typical numbers for the nineteen-teens. By 1929, the #10 hitter hit 31 home runs, which would have been record-setting in 1919. It was a different game.
Takeru Kobayashi
For another example of a breakthrough, let's consider competitive hot dog eating. Between 1980 and 1990, champion hot-dog eaters consumed between 9 and 16 hot dogs in 10 minutes. In 1991 the time was extended to 12 minutes and Frank Dellarosa set a new record, 21½ hot dogs, which was not too far out of line with previous records, and which was repeatedly approached in the following decade: through 1999 five different champions ate between 19 and 24½ hot dogs in 12 minutes, in every year except 1993.
But in 2000 Takeru Kobayashi (小林 尊) changed the sport forever, eating an unbelievably disgusting 50 hot dogs in 12 minutes. (50. Not a misprint. Fifty. Roman numeral Ⅼ.) To make the Times' comparison again, that is the equivalent of hitting !!\frac{50}{24\frac12}·73 \approx 149!! home runs in a Major League Baseball season.
At that point it was a different game. Did the record represent a fundamental shift in hot dog gobbling technique? Yes. Kobayashi won all five of the next five contests, eating between 44½ and 53¾ each time. By 2005 the second- and third-place finishers were eating 35 or more hot dogs each; had they done this in 1995 they would have demolished the old records. A new generation of champions emerged, following Kobayashi's lead. The current record is 69 hot dogs in 10 minutes. The record-setters of the 1990s would not even be in contention in a modern hot dog eating contest.
Bob Beamon
It is instructive to compare these breakthroughs with a different sort of astonishing sports record, the bizarre fluke. In 1967, the world record distance for the long jump was 8.35 meters. In 1968, Bob Beamon shattered this record, jumping 8.90 meters. To put this in perspective, consider that in one jump, Beamon advanced the record by 55 cm, the same amount that it had advanced (in 13 stages) between 1925 and 1967.
Progression of the world long jump record
The cliff at 1968 is Bob Beamon
Did Beamon's new record represent a fundamental shift in long jump technique? No: Beamon never again jumped more than 8.22m. Did other jumpers promptly imitate it? No, Beamon's record was approached only a few times in the following quarter-century, and surpassed only once. Beamon had the benefit of high altitude, a tail wind, and fabulous luck.
Another bizarre fluke is Joe DiMaggio's hitting streak: in the 1941 baseball season, DiMaggio achieved hits in 56 consecutive games. For extensive discussion of just how bizarre this is, see The Streak of Streaks by Stephen Jay Gould. ("DiMaggio's streak is the most extraordinary thing that ever happened in American sports.") Did DiMaggio's hitting streak represent a fundamental shift in the way the game of baseball was played, toward high-average hitting? Did other players promptly imitate it? No. DiMaggio's streak has never been seriously challenged, and has been approached only a few times. (The modern runner-up is Pete Rose, who hit in 44 consecutive games in 1978.) DiMaggio also had the benefit of fabulous luck.
Is Curry's new record a fluke or a breakthrough?
I think what we're seeing in basketball is a breakthrough, a shift in the way the game is played analogous to the arrival of baseball's home run era in the 1920s. Unless the league tinkers with the rules to prevent it, we might expect the next generation of players to regularly lead the league with 300 or 400 three-point shots in a season. Here's why I think so.
Curry's record wasn't unprecedented. He's been setting three-point records for years. (Compare Ruth's 1920 home run record, foreshadowed in 1919.) He's continuing a trend that he began years ago.
Curry's record, unlike DiMaggio's streak, does not appear to depend on fabulous luck. His 402 field goals this year are on 886 attempts, a 45.4% success rate. This is in line with his success rate every year since 2009; last year he had a 44.3% success rate. Curry didn't get lucky this year; he had 40% more field goals because he made almost 40% more attempts. There seems to be no reason to think he couldn't make the same number of attempts next year with equal success, if he wants to.
Does he want to? Probably. Curry's new three-point strategy seems to be extremely effective. In his previous three seasons he scored 1786, 1873, and 1900 points; this season, he scored 2375, an increase of 475, three-quarters of which is due to his three-point field goals. So we can suppose that he will continue to attempt a large number of three-point shots.
Is this something unique to Curry or is it something that other players might learn to emulate? Curry's three-point field goal rate is high, but not exceptionally so. He's not the most accurate of all three-point shooters; he holds the 62nd–64th-highest season percentages for three-point success rate. There are at least a few other players in the league who must have seen what Curry did and thought "I could do that". (Kyle Korver maybe? I'm on very shaky ground; I don't even know how old he is.) Some of those players are going to give it a try, as are some we haven't seen yet, and there seems to be no reason why some shouldn't succeed.
A number of things could sabotage this analysis. For example, the league might take steps to reduce the number of three-point field goals, specifically in response to Curry's new record, say by moving the three-point line farther from the basket. But if nothing like that happens, I think it's likely that we'll see basketball enter a new era of higher offense with more three-point shots, and that future sport historians will look back on this season as a watershed.
[ Addendum 20160425: As I feared, my Korver suggestion was ridiculous. Thanks to the folks who explained why. Reason #1: He is 35 years old. ]
[Other articles in category /games] permanent link
A classic puzzle of mathematics goes like this:
A father dies and his will states that his elder daughter should receive half his horses, the son should receive one-quarter of the horses, and the younger daughter should receive one-eighth of the horses. Unfortunately, there are seven horses. The siblings are arguing about how to divide the seven horses when a passing sage hears them. The siblings beg the sage for help. The sage donates his own horse to the estate, which now has eight. It is now easy to portion out the half, quarter, and eighth shares, and having done so, the sage's horse is unaccounted for. The three heirs return the surplus horse to the sage, who rides off, leaving the matter settled fairly.
(The puzzle is, what just happened?)
It's not hard to come up with variations on this. For example, picking three fractions at random, suppose the will says that the eldest child receives half the horses, the middle child receives one-fifth, and the youngest receives one-seventh. But the estate has only 59 horses and an argument ensues. All that is required for the sage to solve the problem is to lend the estate eleven horses. There are now 70, and after taking out the three bequests, !!70 - 35 - 14 - 10 = 11!! horses remain and the estate settles its debt to the sage.
But here's a variation I've never seen before. This time there are 13 horses and the will says that the three children should receive shares of !!\frac12, \frac13,!! and !!\frac14!!. respectively. Now the problem seems impossible, because !!\frac12 + \frac13 + \frac14 \gt 1!!. But the sage is equal to the challenge! She leaps into the saddle of one of the horses and rides out of sight before the astonished heirs can react. After a day of searching the heirs write off the lost horse and proceed with executing the will. There are now only 12 horses, and the eldest takes half, or six, while the middle sibling takes one-third, or 4. The youngest heir should get three, but only two remain. She has just opened her mouth to complain at her unfair treatment when the sage rides up from nowhere and hands her the reins to her last horse.
[Other articles in category /math] permanent link
Last month I finished reading Thackeray's novel Vanity Fair. (Related blog post.) Thackeray originally did illustrations for the novel, but my edition did not have them. When I went to find them online, I was disappointed: they were hard to find and the few I did find were poor quality and low resolution.
The illustrations are narratively important. Jos Sedley dies suspiciously; the text implies that Becky has something to do with it. Thackeray's caption for the accompanying illustration is "Becky's Second Appearance in the Character of Clytemnestra". Thackeray's depiction of Miss Swartz, who is mixed-race, may be of interest to scholars.
I bought a worn-out copy of Vanity Fair that did have the illustrations and scanned them. These illustrations, originally made around 1848 by William Makepeace Thackeray, are in the public domain. In the printing I have (George Routledge and Sons, New York, 1886) the illustrations were 9½ cm × 12½ cm. I have scanned them at 600 dpi.
Large thumbnails
(ZIP file .tgz file)
Unfortunately, I was only able to find Thackeray's full-page illustrations. He also did some spot illustrations, chapter capitals, and so forth, which I have not been able to locate.
Share and enjoy.
[ Addendum 20180116: Evgen Stepanovych Stasiuk has brought to my attention that this set is incomplete; the original edition of Vanity Fair had 38 full-page plates. I don't know whether these were missing from the copy I scanned, or whether I just missed them, but in any case I regret the omission. The Internet Archive has a scan of the original 1848 edition, complete with all 38 plates and the interior illustrations also. ]
[Other articles in category /book] permanent link
A few days ago, I wrote:
If you lose something [in Git], don't panic. There's a good chance that you can find someone who will be able to hunt it down again.
I was not expecting to have a demonstration ready so soon. But today I finished working on a project, I had all the files staged in the index but not committed, and for some reason I no longer remember I chose that moment to do git reset --hard, which throws away the working tree and the staged files. I may have thought I had committed the changes. I hadn't.
If the files had only been in the working tree, there would have been nothing to do but to start over. Git does not track the working tree. But I had added the files to the index. When a file is added to the Git index, Git stores it in the repository. Later on, when the index is committed, Git creates a commit that refers to the files already stored. If you know how to look, you can find the stored files even before they are part of a commit.
(If they are part of a commit, the problem is much easier. Typically the answer is simply "use git-reflog to find the commit again and check it out". The git-reflog command is probably the first thing anyone should learn on the path from being a Git beginner to becoming an intermediate Git user.)
Each file added to the Git index is stored as a "blob object". Git stores objects in two ways. When it's fetching a lot of objects from a remote repository, it gets a big zip file with an attached table of contents; this is called a pack. Getting objects from a pack can be a pain. Fortunately, not all objects are in packs. When you just use git-add to add a file to the index, git makes a single object, called a "loose" object. The loose object is basically the file contents, compressed with zlib, with a header attached. At some point Git will decide there are too many loose objects and assemble them into a pack.
To make a loose object from a file, the contents of the file are checksummed, and the checksum is used as the name of the object file in the repository and as an identifier for the object, exactly the same as the way git uses the checksum of a commit as the commit's identifier. If the checksum is 0123456789abcdef0123456789abcdef01234567, the object is stored in
.git/objects/01/23456789abcdef0123456789abcdef01234567
The pack files are elsewhere, in .git/objects/pack.
So the first thing I did was to get a list of the loose objects in the repository:
cd .git/objects
find ?? -type f | perl -lpe 's#/##' > /tmp/OBJ
This produces a list of the object IDs of all the loose objects in the repository:
00f1b6cc1dfc1c8872b6d7cd999820d1e922df4a
0093a412d3fe23dd9acb9320156f20195040a063
01f3a6946197d93f8edba2c49d1bb6fc291797b0
ffd505d2da2e4aac813122d8e469312fd03a3669
fff732422ed8d82ceff4f406cdc2b12b09d81c2e
There were 500 loose objects in my repository. The goal was to find the eight I wanted.
There are several kinds of objects in a Git repository. In addition to blobs, which represent file contents, there are commit objects, which represent commits, and tree objects, which represent directories. These are usually constructed at the time the commit is done. Since my files hadn't been committed, I knew I wasn't interested in these types of objects. The command git cat-file -t will tell you what type an object is. I made a file that related each object to its type:
for i in $(cat /tmp/OBJ); do
echo -n "$i ";
git type $i;
done > /tmp/OBJTYPE
The git type command is just an alias for git cat-file -t. (Funny thing about that: I created that alias years ago when I first started using Git, thinking it would be useful, but I never used it, and just last week I was wondering why I still bothered to have it around.) The OBJTYPE file output by this loop looks like this:
00f1b6cc1dfc1c8872b6d7cd999820d1e922df4a blob
0093a412d3fe23dd9acb9320156f20195040a063 tree
01f3a6946197d93f8edba2c49d1bb6fc291797b0 commit
fed6767ff7fa921601299d9a28545aa69364f87b tree
ffd505d2da2e4aac813122d8e469312fd03a3669 tree
fff732422ed8d82ceff4f406cdc2b12b09d81c2e blob
Then I just grepped out the blob objects:
grep blob /tmp/OBJTYPE | f 1 > /tmp/OBJBLOB
The f 1 command throws away the types and keeps the object IDs. At this point I had filtered the original 500 objects down to just 108 blobs.
Now it was time to grep through the blobs to find the ones I was looking for. Fortunately, I knew that each of my lost files would contain the string org-service-currency, which was my name for the project I was working on. I couldn't grep the object files directly, because they're compressed, but the command git cat-file disgorges the contents of an object:
for i in $(cat /tmp/OBJBLOB) ; do
git cat-file blob $i |
grep -q org-service-curr \
&& echo $i;
done > /tmp/MATCHES
The git cat-file blob $i produces the contents of the blob whose ID is in $i. The grep searches the contents for the magic string. Normally grep would print the matching lines, but this behavior is disabled by the -q flag—the q is for "quiet"—and tells grep instead that it is being used only as part of a test: it yields true if it finds the magic string, and false if not. The && is the test; it runs echo $i to print out the object ID $i only if the grep yields true because its input contained the magic string.
So this loop fills the file MATCHES with the list of IDs of the blobs that contain the magic string. This worked, and I found that there were only 18 matching blobs, so I wrote a very similar loop to extract their contents from the repository and save them in a directory:
&& git cat-file blob $i > /tmp/rescue/$i;
Instead of printing out the matching blob ID number, this loop passes it to git cat-file again to extract the contents into a file in /tmp/rescue.
The rest was simple. I made 8 subdirectories under /tmp/rescue representing the 8 different files I was expecting to find. I eyeballed each of the 18 blobs, decided what each one was, and sorted them into the 8 subdirectories. Some of the subdirectories had only 1 blob, some had up to 5. I looked at the blobs in each subdirectory to decide in each case which one I wanted to keep, using diff when it wasn't obvious what the differences were between two versions of the same file. When I found one I liked, I copied it back to its correct place in the working tree.
Finally, I went back to the working tree and added and committed the rescued files.
It seemed longer, but it only took about twenty minutes. To recreate the eight files from scratch might have taken about the same amount of time, or maybe longer (although it never takes as long as I think it will), and would have been tedious.
But let's suppose that it had taken much longer, say forty minutes instead of twenty, to rescue the lost blobs from the repository. Would that extra twenty minutes have been time wasted? No! The twenty minutes spent to recreate the files from scratch is a dead loss. But the forty minutes to rescue the blobs is time spent learning something that might be useful in the future. The Git rescue might have cost twenty extra minutes, but if so it was paid back with forty minutes of additional Git expertise, and time spent to gain expertise is well spent! Spending time to gain expertise is how you become an expert!
Git is a core tool, something I use every day. For a long time I have been prepared for the day when I would try to rescue someone's lost blobs, but until now I had never done it. Now, if that day comes, I will be able to say "Oh, it's no problem, I have done this before!"
So if you lose something in Git, don't panic. There's a good chance that you can find someone who will be able to hunt it down again.
[Other articles in category /prog] permanent link
Last week I read Booth Tarkington's novel The Magnificent Ambersons, which won the 1919 Pulitzer Prize but today is chiefly remembered for Orson Welles' 1942 film adaptation.
(It was sitting on the giveaway shelf in the coffee shop, so I grabbed it. It is a 1925 printing, discarded from the Bess Tilson Sprinkle library in Weaverville, North Carolina. The last due date stamped in the back is May 12, 1957.)
The Ambersons are the richest and most important family in an unnamed Midwestern town in 1880. The only grandchild, George, is completely spoiled and grows up to ruin the lives of everyone connected with him with his monstrous selfishness. Meanwhile, as the automobile is invented and the town evolves into a city the Amberson fortune is lost and the family dispersed and forgotten. George is destroyed so thoroughly that I could not even take any pleasure in it.
I made a few marginal notes as I read.
It was a hairier day than this. Beards were to the wearer's fancy … and it was possible for a Senator of the United States to wear a mist of white whisker upon his throat only, not a newspaper in the land finding the ornament distinguished enough to warrant a lampoon.
I wondered who Tarkington had in mind. My first thought was Horace Greeley:
His neckbeard fits the description, but, although he served as an unelected congressman and ran unsuccessfully for President, he was never a Senator.
Then I thought of Hannibal Hamlin, who was a Senator:
But his neckbeard, although horrifying, doesn't match the description.
Gentle Readers, can you help me? Who did Tarkington have in mind? Or, if we can't figure that out, perhaps we could assemble a list of the Ten Worst Neckbeards of 19th Century Politics.
I was startled on Page 288 by a mention of "purple haze", but a Google Books search reveals that the phrase is not that uncommon. Jimi Hendrix owns it now, but in 1919 it was just purple haze.
George's Aunt Fanny writes him a letter about his girlfriend Lucy:
Mr. Morgan took your mother and me to see Modjeska in "Twelfth Night" yesterday evening, and Lucy said she thought the Duke looked rather like you, only much more democratic in his manner.
Lucy, as you see, is not entirely sure that she likes George. George, who is not very intelligent, is not aware that Lucy is poking fun at him.
A little later we see George's letter to Lucy. Here is an excerpt I found striking:
[Yours] is the only girl's photograph I ever took the trouble to have framed, though as I told you frankly, I have had any number of other girls' photographs, yet all were passing fancies, and oftentimes I have questioned in years past if I was capable of much friendship toward the feminine sex, which I usually found shallow until our own friendship began. When I look at your photograph, I say to myself "At last, at last here is one that will not prove shallow."
The arrogance, the rambling, the indecisiveness of tone, and the vacillation reminded me of the speeches of Donald Trump, whom George resembles in several ways. George has an excuse not available to Trump; he is only twenty.
Addendum 20160413: John C. Calhoun seems like a strong possibility:
Recently the following amusing item was going around on Twitter:
I have some bad news and some good news. First the good news: there is an Edith-Anne. Her name is actually Patty Polk, and she lived in Maryland around 1800.
Now the bad news: the image above is almost certainly fake. It may be a purely digital fabrication (from whole cloth, ha ha), or more likely, I think, it is a real physical object, but of recent manufacture.
I wouldn't waste blog space just to crap on this harmless bit of fake history. I want to give credit where it is due, to Patty Polk who really did do this, probably with much greater proficiency.
Why I think it's fake
I have not looked into this closely, because I don't think the question merits a lot of effort. But I have two reasons for thinking so.
The main one is that the complaint "Edith-Anne … hated every Stitch" would have taken at least as much time and effort as the rest of the sampler, probably more. I find it unlikely that Edith-Anne would have put so much work—so many more hated stitches—into her rejection.
Also, the work is implausibly poor. These samplers were stitched by girls typically between the ages of 7 and 14, and their artisanship was much, much better than either section of this example. Here is a sampler made by Lydia Stocker in 1798 at the age of 12:
Here's one by Louisa Gauffreau, age 8:
Compare these with Edith-Anne's purported cross-stitching. One tries to imagine how old she is, but there seems to be no good answer. The crooked stitching is the work of a very young girl, perhaps five or six. But the determination behind the sentiment, and the perseverance that would have been needed to see it through, belong to a much older girl.
Of course one wouldn't expect Edith-Anne to do good work on her hated sampler. But look at the sampler at right, wrought by a young Emily Dickinson, who is believed to have disliked the work and to have intentionally done it poorly. Even compared with this, Edith-Anne's claimed sampler doesn't look like a real sampler.
Patty Polk
Web search for "hated every stitch" turns up several other versions of Edith-Anne, often named Polly Cook [1] or Mary Pitt [2] ("This was done by Mary Pitt / Who hated every stitch of it") but without any reliable source.
However, Patty Polk is reliably sourced. Bolton and Coe's American Samplers [3] describes Miss Polk's sampler:
POLK, PATTY. [Cir. 1800. Kent County, Md.] 10 yrs. 16"×16". Stem-stitch. Large garland of pinks, roses, passion flowers, nasturtiums, and green leaves; in center, a white tomb with "G W" on it, surrounded by forget-me-nots. "Patty Polk did this and she hated every stitch she did in it. She loves to read much more."
The description was provided by Mrs. Frederic Tyson, who presumably owned or had at least seen the sampler. Unfortunately, there is no picture. The "G W" is believed to refer to George Washington, who died in 1799.
There is a lively market in designs for pseudo-vintage samplers that you can embroider yourself and "age". One that features Patty Polk is produced by Falling Star Primitives:
Thanks to Lee Morrison of Falling Star Primitives for permission to use her "Patty Polk" design.
1. Parker, Rozsika. The Subversive Stitch: Embroidery and the Making of the Feminine. Routledge, 1989. p. 132.
2. Wilson, Erica. Needleplay. Scribner, 1975. p. 67.
3. Bolton, Ethel Stanwood and Eva Johnston Coe. American Samplers. Massachusetts Society of the Colonial Dames of America, 1921. p. 210.
[ Thanks to several Twitter users for suggesting gender-neutral vocabulary. ]
[ Addendum: Twitter user Kathryn Allen observes that Edith-Anne hated cross-stitch so much that she made another sampler to sell on eBay. Case closed. ]
[ Addendum: Ms. Allen further points out that the report by Mrs. Tyson in American Samplers may not be reliable, and directs me to the discussion by J.L. Bell, Clues to a Lost Sampler. ]
[ Addendum 20160619: Edith-Anne strikes again!. For someone who hated sewing, she sure did make a lot of these things. ]
[ Addendum 20200801: More about this by Emily Wells, who cites an earlier Twitter thread by fashion historian Hilary Davidson that makes the same points I did: "no matter how terribly you sewed in 1877, it would have been impossible to sew badly like this for a middle-class sampler". ]
[Other articles in category /misc] permanent link
I'm becoming one of the people at my company that people come to when they want help with git, so I've been thinking a lot about what to tell people about it. It's always tempting to dive into the technical details, but I think the first and most important things to explain about it are:
Git has a very simple and powerful underlying model. Atop this model is piled an immense trashheap of confusing, overlapping, inconsistent commands. If you try to just learn what commands to run in what order, your life will be miserable, because none of the commands make sense. Learning the underlying model has a much better payoff because it is much easier to understand what is really going on underneath than to try to infer it, Sherlock-Holmes style, from the top.
One of Git's principal design criteria is that it should be very difficult to lose work. Everything is kept, even if it can sometimes be hard to find. If you lose something, don't panic. There's a good chance that you can find someone who will be able to hunt it down again. And if you make a mistake, it is almost always possible to put things back exactly the way they were, and you can find someone who can show you how to do it.
One exception is changes that haven't been committed. These are not yet under Git's control, so it can't help you with them. Commit early and often.
[ Addendum 20160415: I wrote a detailed account of a time I recovered lost files. ]
[ Addendum 20160505: I don't know why I didn't mention it before, but if you want to learn Git's underlying model, you should read Git from the Bottom Up (which is what worked for me) or Git from the Inside Out which is better illustrated. ]
For more than money: willingness of health professionals to stay in remote Senegal
Ayako Honda1,
Nicolas Krucien2,
Mandy Ryan2,
Ibrahima Ska Ndella Diouf3,
Malick Salla3,
Mari Nagai4 and
Noriko Fujita4
Human Resources for Health 2019, 17:28
Received: 29 April 2017
Accepted: 22 March 2019
Poor distribution of already inadequate numbers of health professionals seriously constrains equitable access to health services in low- and middle-income countries. The Senegalese Government is currently developing policy to encourage health professionals to remain in areas defined as 'difficult'. Understanding health professional's preferences is crucial for this policy development.
Working with the Senegalese Government, a choice experiment (CE) was developed to elicit the job preferences of physicians and non-physicians. Attributes were defined using a novel mixed-methods approach, combining interviews and best-worst scaling (Case 1). Six attributes were categorised as 'individual (extrinsic) incentive' attributes ('type of contract', 'provision of training opportunities', 'provision of an allowance' and 'provision of accommodation') or 'functioning health system' attributes ('availability of basic equipment in health facilities' and 'provision of supportive supervision by health administrators'). Using face-to-face interviews, the CE was administered to 55 physicians (3909 observations) and 246 non-physicians (17 961 observations) randomly selected from those working in eight 'difficult' regions in Senegal. Conditional logit was used to analyse responses. This is the first CE both to explore the impact of contract type on rural retention and to estimate the value of attributes in terms of willingness to stay (WTS) in the current rural post.
For both physicians and non-physicians, a permanent contract is the most important determinant of rural job retention, followed by availability of equipment and provision of training opportunities. Retention probabilities suggest that policy reform affecting only a single attribute is unlikely to encourage health professionals to remain in 'difficult' regions. The relative importance of an allowance is low; however, the level of such financial incentives requires further investigation.
Contract type is a key factor impacting on retention. This has led the Senegalese Health Ministry to introduce a new rural assignment policy that recruits permanent staff from the pool of annually contracted healthcare professionals on the condition that they take up rural posts. While this is a useful policy development, further efforts to retain rural health workers, considering both personal incentives and the functioning of health systems, are necessary to ensure health worker numbers are adequate to meet the needs of rural communities.
Rural job retention
Low- and middle-income countries
Discrete choice experiment
The health workforce plays a key role in healthcare service delivery. Equitable distribution of a quality health workforce contributes to ensuring the availability of healthcare services, irrespective of location, and to progressing towards the Universal Health Coverage (UHC) goal by facilitating access to quality healthcare services to all [1, 2].
In most countries, the geographical distribution of health workers is skewed towards urban and wealthier areas [3]. While approximately one half of the global population lives in rural areas [4], the rural population are served by only one quarter of the world's doctors and by less than one third of the world's nurses [5]. Inequitable geographical distribution of the health workforce has more severe implications for low- and middle-income countries (LMICs), which suffer from critical shortages of doctors, nurses and midwives [2]. The 36 sub-Saharan African countries bear approximately 24% of the global burden of disease and have only 3% of the global health workforce [3].
The inequitable distribution of already inadequate numbers of qualified health workers is a critical barrier to providing health services in LMICs and is often a serious constraint to ensuring fair access to essential health services and achieving health system goals. Links between the number of health workers in a country and both service delivery and health outcomes have been clearly demonstrated [5, 6]. Consequently, while the issue of geographical health inequity is multidimensional, requiring consideration of both the number of health workers and the quality of services [6], the absolute number of health workers in rural and remote areas in LMICs is low, and concerted efforts are required to address health worker retention in those areas to create a better geographical balance in the distribution of skilled health workers [1].
Senegal is a lower middle-income country, located in sub-Saharan Africa, with a population of 15.9 million [4]. Rural residents accounted for 55.6% of the population in 2017, slightly decreasing from 58.5% in 2007 [4]. In Senegal, the physician to population ratio was 0.1 per 1000 people and the ratio for nurses and midwives was 0.3 per 1000 people in 2016 [7]. The figures are lower than sub-Saharan African averages (0.3 physicians per 1000 people; 1.1 nurses and midwives per 1000 people in 2016) and countries with a similar economic status (1.6 physicians per 1000 people; 2.3 nurses and midwives per 1000 people in 2016) [8]. The shortage of health workers is even more severe in rural Senegal [9]. In 2012, 66% of all physicians in Senegal (667 of 1011 physicians) were located in the Dakar region, which houses the nation's capital [5], while 76% of the population live outside Dakar [4]. In addition to the geographical inequitable distribution of inadequate numbers of health professionals, Senegal's health system also suffers from widespread health professional absenteeism and poor-quality healthcare services [10].
Over the past decade, the Senegalese Health Ministry has made efforts to address the inequitable distribution of qualified health professionals, including the introduction of measures to improve posting and recruitment processes for health workers in rural and remote areas [9]. Currently, the Senegalese Government is developing policy that aims to encourage health professionals to remain in rural posts, particularly in areas that the Government defines as 'difficult' regions. The Human Resources Department of the Senegalese Health Ministry has a working definition of 'difficult' regions which includes geographical areas that constrain professional, personal and family growth and are characterised by a set of geographical, security, infrastructure and social service criteria [11]. While there is political momentum to improve conditions for those in 'difficult' regions, the Government is restricted by poor resource availability and hopes to identify priority areas for reform. This study determines how different aspects of working conditions encourage health workers to stay in rural areas.
The study employed the choice experiment (CE) methodology. This approach is increasingly used to elicit health preferences in a range of areas [12], including health worker preferences [13]. CEs ask individuals to state preferences for hypothetical alternatives, each described by several attributes. They are a favoured technique in preference research because they allow estimation not just of what is important, but of how important it is. By asking individuals to make trade-offs between attributes, and analysing responses in a random utility framework, researchers can estimate marginal rates of substitution between attributes (MRS; how much of one attribute is needed to compensate for the reduction in another attribute) and the probability of accepting a given job.
This study extends the current literature applying CE to job choices in LMICs in two ways. Firstly, a novel mixed-methods approach was used to develop attributes and levels, combining interviews with a best-worst scaling (BWS) (Case 1) experiment. This is the first study to use the BWS (Case 1) method to reduce the number of attributes to a manageable level. Secondly, a 'period of assignment' attribute was included to estimate trade-offs, allowing the influence of other attributes on intended time in a post to be determined. To the authors' knowledge, this is the first study to include such an attribute to estimate trade-offs.
Defining attributes and levels
A novel two-stage approach was used to develop attributes and levels. Interviews were first undertaken with 176 healthcare professionals working in remote, rural and urban areas (31 physicians, 94 nurses, 51 midwives) to explore factors influencing the retention of healthcare professionals in rural posts [14]. In-depth interviews with eight health administrators in the Senegalese Health Ministry were also undertaken. Thematic analysis identified an initial list of factors that motivated or demotivated healthcare workers in rural areas. The factors were categorised as pre-service and in-service education, regulatory systems, financial and non-financial incentive schemes and professional and personal support. Subsequent analysis identified 14 factors that influenced the retention of healthcare professionals in rural posts.
A BWS (Case 1) approach [15, 16] was used to reduce the 14 attributes to a manageable number for use within a CE (5–7 attributes) [12]. Respondents to the BWS study were 266 health professionals, comprising 170 nurses, 68 midwives and 28 clinicians working in the 'difficult' regions of Senegal. Locally trained interviewers administered the best-worst tasks using face-to-face interviews. Respondents were given 14 best-worst tasks (Fig. 1). In each task, respondents were asked to select the attributes that were most and least likely to influence their decisions to stay in rural posts.
Example of best-worst task
The BWS data were analysed using both count and choice analysis, and the results were compared to confirm validity [15, 16]. The results from the count analysis are shown in Table 1. The choice analysis results were consistent with the count analysis and are available from the authors upon request. Items ranked highly in the BWS were proposed for inclusion in the CE, with careful consideration given to whether the items were relevant to both policy and the working conditions of physicians and non-physicians in Senegal. Specifically, the five most valued items in the BWS were considered for inclusion as attributes in the CE. A series of discussions with team members who were experienced in data collection helped to clarify and refine the items from the BWS for use in the CE. For example, 'support for career development' used in the BWS was discussed in the context in Senegal to develop an implementable policy action, and re-defined as 'provision of training opportunities' (the detailed definition of a training opportunity is provided in Table 2). Also, as discussed later in this section, the inclusion of the period of assignment attributes was discussed together with the BWS results to establish the final set of attributes. Discussion with the Health Ministry assisted in the finalisation of the attributes and levels (Table 2) for use in the CE.
Count analysis of best-worst data at a sample level
Ratio score
Rescaled ratio score
Improved professional mobility
− 0.540
Management of human resources at the regional level
Promote participation in social events
Development of inter-professional exchange
Help to get scholarship
Help to get public contract
Provision of professional support
Financial incentive (based on distance)
Financial incentive (based on skills)
Provision of accommodation
Guaranteed access to medical equipment
Guaranteed access to drugs
Guaranteed access to utilities
Support of career development
3 726/3 738
Attributes and levels
Regression labels
Period of assignment
The total number of years of assignment to a rural/remote job
1. 2 years
Provision of skills/qualification-based allowance or rural/remote job allowance
1. No allowance
2. Rural job allowance provided
3. Skill-based allowance provided (for physicians)
Functioning of health system
Availability of equipment at the health facility that allows the provision of a basic package of health care services
1. Inadequate: Medical equipment at the facility does not allow the provision of a basic package of health services
2. Adequate: Medical equipment at the facility allows the provision of a basic package of health services
While working in a rural area, the employer provides free accommodation that is appropriate for marital/family status
ACCOMMOD
1. No provision of accommodation
2. Accommodation provided
Types of contract
Either permanently contracted government workers, temporary contract with MoH; temporary contract with health facilities; or temporary contract with local authorities
1. Permanent: permanently contracted government workers
2. Temporary (MoH): Temporary appointment by MoH
3. Temporary (Health facility): Temporary appointment by health facilities
4. Temporary (Local): Temporary appointment by local authorities
Provision of further training offered outside the work place (excluding further education for degree purposes)
1. No provision of training opportunities
2. Training opportunities provided
(for non-physicians)
Either no support; supportive supervision by health administrators; or clinical advice and support from peer health professionals
1. No support
2. Managerial support: Supportive supervision by health administrators
3. Clinical support: Clinical advice and support from peer health professionals
The CE attributes were classified as factors relating to (1) individual (extrinsic) incentive benefits (type of contract, provision of training opportunities, provision of an allowance, and provision of accommodation) and (2) functioning of health systems (availability of basic equipment in health facilities and provision of supportive supervision by health administrators). As the working conditions for physicians and non-physicians can be diverse, including in the types of allowance provided (e.g. physicians receive a skills-based allowance, non-physicians receive a rural allowance), different CEs were given to each group of healthcare workers.
Most CEs eliciting job preferences use a salary attribute to determine how much health workers need compensating to accept a reduction in working conditions, i.e. working in a rural rather than urban location [17]. However, discussions with local interviewers and staff in the Senegalese Health Ministry revealed that salary-related questions were culturally sensitive and could make respondents feel uncomfortable and reluctant to respond to the survey. Furthermore, the Health Ministry did not plan to increase the salaries of health professionals, except through the provision of a rural allowance. Consequently, we used an assignment period attribute, number of years of assignment to a rural post, to determine the 'willingness to stay' (WTS, i.e. how long health workers would stay in difficult areas if certain working conditions were improved). To the best of our knowledge this approach has not been previously employed in LMICs.
Defining choice tasks
A D-efficient design was used to identify the choice tasks to present to respondents. This approach minimises the standard errors (SEs) of parameters [18], thus ensuring more precise parameter estimates. Assuming null interaction effects between attributes (i.e. the preferences for one attribute do not depend on the level of another attribute) and using non-informative priors (no a priori information on preference parameters), the approach generated 15 choice tasks for physicians and 16 choice tasks for non-physicians. Each choice task presented two rural job options and asked respondents to choose their preferred option. A subsequent question asked if they would prefer to remain in their current position rather than take up the chosen option (opt-out response). Figure 2 presents an example choice task. The questionnaire also collected information on socio-demographic characteristics of respondents. A copy of the questionnaire is available from the authors upon request.
Example of choice task
Sample, setting and data collection
The study elicited the job preferences of physician and non-physician health workers (specialised nurses, nurses and midwives). Data collection took place in the eight 'difficult' regions of Senegal (of a total of 14 regions). Using the Health Ministry's human resources database, participants were randomly selected from a list of physician and non-physician health professionals working in clinics and hospitals in the regions of interest. If a pre-determined health professional was unavailable, or had been transferred, a health worker of the same professional type and at the same health facility was substituted in their place. Louviere et al's [19] sample size calculator determined that a minimum of 42 physician and 44 non-physician participants was necessary for the study (choice probability = 40%, confidence level = 95%, accuracy level = 90%, attrition rate = 10%, number of tasks = 15 or 16).
Face-to-face interviews were used to collect data [20]. A locally trained team of 10 interviewers and two field coordinators collected the data. Prior to the commencement of data collection, the questionnaire was pilot tested on 16 respondents in two health centres. After the pilot testing, the definitions of some attributes were changed and the levels of some attributes revised.
Data from the CE allowed ranking of the three jobs (jobs A, B, and current), as well as estimation of the probability of a particular job being best (ranked first) or worst (ranked last). We applied partial rank ordering when a respondent answered A (or B) in both questions, with only the best choice data used. This approach maximised the information obtained from the CE.
The data were analysed within the random utility maximisation (RUM) framework [21, 22], which assumes that in each choice task (t = 1,…,T), each health professional (n = 1,…,N) derives utility U_ntj from each job (j = 1,…,J) and chooses the option yielding the highest utility, subject to an error term ε_ntj. Assuming the errors are independently and identically distributed type 1 extreme values (IID EV1), the choice model takes the form of a multinomial logit (MNL) regression.
The regression equation for physicians is:
$$ U_{ntj}^{\mathrm{PHYS}}=\beta_0\,\mathrm{CURRENT}_{ntj}+\beta_1\,\mathrm{ALLOWANCE\_1}_{ntj}+\beta_2\,\mathrm{ALLOWANCE\_2}_{ntj}+\beta_3\,\mathrm{EQUIP}_{ntj}+\beta_4\,\mathrm{ACCOMOD}_{ntj}+\beta_5\,\mathrm{CONTRACT\_1}_{ntj}+\beta_6\,\mathrm{CONTRACT\_2}_{ntj}+\beta_7\,\mathrm{CONTRACT\_3}_{ntj}+\beta_8\,\mathrm{TRAINING}_{ntj}+\beta_9\,\mathrm{PERIOD}_{ntj}+\varepsilon_{ntj} $$
And the regression equation for non-physicians is:
$$ U_{ntj}^{\mathrm{NOPHYS}}=\beta_0\,\mathrm{CURRENT}_{ntj}+\beta_1\,\mathrm{ALLOWANCE}_{ntj}+\beta_2\,\mathrm{EQUIP}_{ntj}+\beta_3\,\mathrm{ACCOMOD}_{ntj}+\beta_4\,\mathrm{CONTRACT\_1}_{ntj}+\beta_5\,\mathrm{CONTRACT\_2}_{ntj}+\beta_6\,\mathrm{CONTRACT\_3}_{ntj}+\beta_7\,\mathrm{TRAINING}_{ntj}+\beta_8\,\mathrm{PERIOD}_{ntj}+\beta_9\,\mathrm{SUPPORT\_1}_{ntj}+\beta_{10}\,\mathrm{SUPPORT\_2}_{ntj}+\varepsilon_{ntj} $$
where all regression labels are defined in Table 2.
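As a rough illustration of how such a conditional (multinomial) logit model can be estimated by maximum likelihood, the sketch below simulates choices over a small set of generic attributes and recovers the coefficients with numpy and scipy. The data, attribute coding and dimensions are hypothetical placeholders and do not correspond to the study's questionnaire or to Table 2.

```python
# Minimal conditional-logit (MNL) estimation sketch on simulated data.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, X, chosen, n_alts):
    """X: (n_tasks * n_alts, k) attribute matrix stacked task by task;
    chosen: index of the selected alternative within each task."""
    V = (X @ beta).reshape(-1, n_alts)          # deterministic utilities
    V = V - V.max(axis=1, keepdims=True)        # numerical stability
    log_p = V - np.log(np.exp(V).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(chosen)), chosen].sum()

# toy data: 200 choice tasks, 3 alternatives (job A, job B, current), 4 attributes
rng = np.random.default_rng(0)
n_tasks, n_alts, k = 200, 3, 4
X = rng.normal(size=(n_tasks * n_alts, k))
true_beta = np.array([1.0, -0.5, 0.8, 0.3])     # hypothetical preference weights
U = (X @ true_beta).reshape(n_tasks, n_alts) + rng.gumbel(size=(n_tasks, n_alts))
chosen = U.argmax(axis=1)                        # utility-maximising choices

res = minimize(neg_log_likelihood, np.zeros(k), args=(X, chosen, n_alts),
               method="BFGS")
print("estimated coefficients:", res.x.round(2))  # should be close to true_beta
```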
Willingness to stay (WTS) for a marginal improvement in an attribute was estimated as the ratio of the coefficient of interest to the negative of the coefficient on the assignment period attribute. Associated confidence intervals were computed for all attributes using the delta method [23]. The overall WTS for a defined job was calculated as the sum of the WTS values for the job's various features.
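The following sketch shows the mechanics of the WTS ratio and a delta-method confidence interval; the coefficient values and covariance matrix are made-up numbers, not the study's estimates.

```python
# Willingness-to-stay (WTS) point estimate and delta-method confidence interval.
import numpy as np
from scipy.stats import norm

def wts_delta_ci(beta_attr, beta_period, cov, level=0.95):
    """WTS = beta_attr / (-beta_period); cov is the 2x2 covariance matrix of
    (beta_attr, beta_period)."""
    wts = beta_attr / (-beta_period)
    # gradient of WTS with respect to (beta_attr, beta_period)
    grad = np.array([-1.0 / beta_period, beta_attr / beta_period**2])
    se = np.sqrt(grad @ cov @ grad)
    z = norm.ppf(0.5 + level / 2)
    return wts, (wts - z * se, wts + z * se)

# hypothetical example: a permanent-contract coefficient and the PERIOD coefficient
cov = np.array([[0.04, 0.001], [0.001, 0.0025]])
print(wts_delta_ci(beta_attr=1.2, beta_period=-0.35, cov=cov))
```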
Results from the MNL regression model were used to predict the probability P_ntj of health workers remaining in a pre-defined (baseline) rural job j.
$$ {P}_{ntj}=\frac{\exp \left({U}_{ntj}-{\varepsilon}_{ntj}\right)}{\sum \limits_j\exp \left({U}_{ntj}-{\varepsilon}_{ntj}\right)} $$
The baseline scenario reflected current working conditions in the 'difficult' regions of Senegal: 4-year assignment period, no allowance, inadequate equipment at the health facility, no accommodation, temporary contract with the Health Ministry, no training, and no supportive supervision (non-physicians only). The retention rate for the baseline scenario was compared with retention rates for job contracts offering improvements in attributes.
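A minimal sketch of this retention simulation is given below: the logit formula above is applied to a hypothetical baseline job and to the same job with one attribute improved. All utility values are illustrative only and are not the study's estimates.

```python
# Predicted retention probability under a baseline and an improved job profile.
import numpy as np

def choice_probs(utilities):
    """Logit probabilities over a set of alternatives."""
    v = np.asarray(utilities, dtype=float)
    v = v - v.max()                      # numerical stability
    e = np.exp(v)
    return e / e.sum()

# deterministic utilities for (stay in the rural job, return to current job)
baseline_rural = -2.0                    # poor conditions: low utility (hypothetical)
current_job = 1.2                        # hypothetical
improved_rural = baseline_rural + 1.5    # e.g. adding a permanent-contract effect

print("baseline retention:", choice_probs([baseline_rural, current_job])[0].round(3))
print("with permanent contract:", choice_probs([improved_rural, current_job])[0].round(3))
```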
Respondent characteristics
The study included 55 physicians and 246 non-physicians. The physician group comprised 37 general practitioners (GPs) and 18 specialists, and the non-physician group comprised 153 nurses, 83 midwives and 11 specialised nurses. Of the physician respondents, 96.4% were male, while 60.2% of the non-physicians were female. The skewed gender distribution in physician respondents reflects actual patterns in the gender distribution of physicians working in the 'difficult' regions [7]. Health 'posts' are the smallest type of health facility operating in Senegal and are run by non-physicians. Consequently, health 'posts' did not have physician respondents. The amount of time spent working in rural/remote areas averaged 5.3 years for physician respondents (minimum less than 1 year; maximum 17 years; SD 4.3) and 7.4 years for non-physicians (minimum less than 1 year; maximum 38 years; SD 8.0). A range of contract types was used: 61.8% of physicians were permanently contracted government employees, 7.3% were annually contracted by the Health Ministry and 30.9% hired either by health facilities or local authorities. For the non-physicians, 60.7% were permanent government employees, 17.4% were annually contracted by the Health Ministry, and 21.9% were locally hired.
Preferences for job attributes
All attributes, except professional support for non-physicians, were statistically significant, indicating that they affect the probability of health professionals staying in a rural job (Table 3). The constant term for non-physicians was statistically significant with a negative coefficient, indicating a general preference not to remain in 'difficult' regions.
Table 3. Conditional logit model estimates for physicians and non-physicians (model parameters including the current job condition, contract type (temporary with MoH, contract with health facility, contract with local authorities) and managerial support, and model statistics: log likelihoods of −905.98 and −5,053.37, Bayesian information criteria of 1,886.40 and 10,214.49). Robust standard errors adjusted for clustering on individual participants.
For physicians, a change in contract from temporary to permanent has the greatest impact on the probability of staying in a rural post, followed by the provision of training opportunities and the availability of equipment at a health facility. Similarly, for non-physicians, the provision of a permanent contract was the most valued attribute, followed by the availability of equipment and the provision of further training opportunities.
Both physicians and non-physicians ranked attributes relating to the functioning of health systems (availability of equipment at a health facility and the provision of supportive supervision) higher than some of the individual benefit attributes (provision of an allowance and accommodation).
Willingness to stay (WTS) in 'difficult' regions
WTS estimates for job improvements are shown in Fig. 3. The WTS associated with improvements in the three most valued attributes (permanent contract, availability of equipment in health facilities and provision of further training opportunities) is 10.7 years for physicians and 5.4 years for non-physicians. However, if a permanent contract is not provided and improvements are made only to the availability of equipment and the provision of training opportunities, the overall WTS is −0.3 years for physicians and −3.4 years for non-physicians. These negative figures suggest that, given the opportunity, both physicians and non-physicians would choose not to complete their assignments to the post.
Willingness to stay (WTS) estimates
Probability of staying in 'difficult' regions
Figure 4 presents retention probabilities for different policy options. The baseline (current) job contract has a 1.5% probability of retaining physicians in the position for the 4-year assignment period and 4.2% probability of retaining non-physicians in current positions. Provision of a permanent contract increases the probability of retaining a physician to 11.4% and non-physicians to 16.7%. Availability of adequate equipment increases the retention rate to 6.1% for physicians and 12.3% for non-physicians. Further training opportunities increase the retention rate to 9.3% for physicians and 8.1% for non-physicians.
Effects of different policy options on retention probabilities
As with the WTS results, the retention probabilities suggest that policy reform affecting only a single attribute is unlikely to ensure health workers remain in 'difficult' regions, and retention policies should consider a combination of reforms. For instance, the retention rate for rural posts offering the three most preferred job conditions (permanent contract, availability of adequate equipment and further training opportunities) increases to 79.0% for physicians, an improvement of 77.5 percentage points above the baseline, and 55.9% for non-physicians, an improvement of 51.7 percentage points above the baseline.
Combinations of individual benefit incentives and aspects of health system functioning were also considered. The retention rates for rural posts offering the three most preferred individual benefit incentives (i.e. permanent contract, further training opportunities and provision of an allowance) are estimated to be 65.1% for physicians and 40.5% for non-physicians. These retention rates increase to 89.0% for physicians and 79.9% for non-physicians if factors relating to the functioning of health systems are also improved (i.e. availability of basic equipment at health facilities for physicians, and availability of basic equipment and provision of supportive supervision for non-physicians). This finding supports the importance of improving health system functioning to increase the likelihood of health professionals remaining in rural posts.
This study contributes to the literature on the retention of health workers in the rural areas of LMICs in a number of ways. The study contains two methodological innovations. Firstly, the study employed a BWS (Case 1) experiment to short-list factors identified in qualitative interview data and used a ranking score to determine policy options for inclusion in the CE. While most recent CE studies in LMICs have established attributes and assigned attribute levels using a qualitative approach, such as group discussions and in-depth interviews, this is the first CE in LMICs, and indeed the first CE in any context, to apply BWS to finalise attributes after qualitative work was undertaken. BWS (Case 1) was a useful approach to reducing the number of attributes in a CE to a manageable level. Secondly, to ensure the cultural acceptance and policy relevance of the attributes, WTS, instead of willingness to pay, was estimated using the period of assignment to determine trade-offs. This allowed estimation of how long respondents would be willing to stay in a rural post if there were improvements in other aspects of the contract attributes.
At the applied level, while CEs have been extensively used to investigate human resource issues in sub-Saharan Africa [24–31], the number of CEs in Western Africa is limited. Indeed, our study is only the third in Western Africa [32, 33], and the first in Senegal. Our results highlight the challenges of retaining the health workforce in rural areas of Senegal. The statistically significant negative constant term for non-physicians suggests that health workers are unlikely to want to remain in current roles in 'difficult' regions for the term of their posts. Policy makers must promptly respond to rural job retention issues, being mindful that policy reforms addressing a single attribute are unlikely to improve retention rates. Retention policy should include a combination of reforms, including both individual incentives and factors relating to how health systems function.
To the authors' knowledge, this is the first CE study in LMICs that has included type of contract as an attribute, though two studies have included attributes on the number of years of service before obtaining a permanent post or promotion to permanent staff member [34, 35]. For both physicians and non-physicians, provision of a permanent contract was the factor which most affected the likelihood of job retention. In Senegal, while permanently contracted government workers, including public sector medical doctors and nurses, are employed and paid by the Ministry of Public Services, other health professionals are contracted annually by the Health Ministry [36]. Annually contracted health professionals do not receive government employee social benefit packages, such as pensions; however, the base salary for annually contracted health professionals is slightly higher than that of permanent health professionals. Renewal of annual contracts is unpredictable, depending on the availability of the Health Ministry budget [36]. Given that the key differences in job conditions between the two types of contract are the length of job security and the provision of social security entitlements, our results suggest that respondents value stability in employment and/or the entitlements associated with permanent employment.
In 2006, a program called "Plan Cobra" was introduced in Senegal, enabling the Health Ministry to hire health professionals using annual contracts. The plan aimed to address human resource shortages, particularly in rural areas, in a timelier manner than the lengthy process of hiring government workers through the Ministry of Public Services. While Plan Cobra used annual contracts to help distribute human resources to rural posts when it started in 2006 [9], over time, the nature of the contracts with the Health Ministry changed and, currently, annual contracts are used to employ health professionals regardless of geographical location. While in 2016, 48.9% of public sector health professionals, including doctors, nurses and midwives, were permanent government employees [36], the proportion in our sample (healthcare professionals working in difficult regions) was around 60%. Our results suggest that a short-term contract policy will not be effective in rural retention of healthcare professionals in the context of Senegal. Indeed, the Health Ministry has used the results from this study to introduce a rural assignment policy to recruit permanent staff from the pool of annually contracted healthcare professionals on the condition that they are assigned to rural posts. The results of this new policy intervention in Senegal require on-going monitoring. An 'emergency-hire' project for the recruitment of rural staff in Kenya saw most of those hired through the scheme leave rural areas after they were absorbed into the Government of Kenya's public service [37]. Such experiences in other contexts suggest that it is important to further examine the aspects of permanent contracts that facilitate the retention of healthcare professionals in rural areas.
Current evidence on the effectiveness of monetary incentives (either salary increases or bonus payments) on rural retention is mixed: while monetary incentives can enhance the motivation and retention of health professionals in rural and remote areas, the provision of non-monetary incentives can be equally important [13, 38]. The results revealed that monetary incentives (rural or skills-based allowances) had a relatively small impact on retention in the study context. However, the study did not specify the level of the rural allowance to be provided. While a small rural allowance is likely to have little impact, a larger allowance may have a greater impact. Thus, further investigation is warranted. In addition, the qualitative study undertaken prior to the choice experiment indicated the importance of fair, transparent administration of salary and/or allowance payments [14], which also suggests that the study results on payment of allowances must be carefully interpreted and further investigation of various aspects associated with payment is required.
Our results show the importance of improving the functioning of health systems, which includes ensuring the availability of basic equipment at health facilities and the provision of supportive supervision. A number of studies have found that healthcare professionals strongly value the availability of equipment and infrastructure [39–41]. Given that less than 50% of the clinics in rural Senegal have access to basic equipment, and less than 30% of rural health facilities have access to electricity, water and sanitation [10], our results suggest that there will be difficulty in retaining health workers even if individual incentives are provided. Retention may be improved using innovative approaches at the community level if the government cannot find immediate solutions due to limited resource capacity [42].
Our results suggest a small proportion of staff want to see out their contracts under the current arrangements. This contradicts the actual longer periods that respondents have already served (averaging 5.3 and 7.4 years for physicians and non-physicians, respectively). Given the Senegalese context, where fiscal constraints can prevent the appointment of public sector health workers and where many health professionals are unemployed [36], this contradiction suggests employed healthcare professionals do not abandon their current posts, perhaps for fear of joining the ranks of the unemployed. Alternatively, those who have served long periods in difficult areas may be more able to cope, or have learned to cope, and so are more likely to stay even if their grievances are similar to those who have left the difficult areas. This requires further investigation.
Our CE was administered to those currently working in 'difficult' regions. It did not examine the preferences of those who had left rural posts or those studying to be health professionals. These groups may have different job preferences for work in rural posts than those interviewed in our study. Although this may limit the generalizability of the study results to all healthcare professionals in Senegal, a qualitative study undertaken prior to the CE, which included interviews with those currently working in Dakar who had previously worked in 'difficult' regions, did not find differences in factors affecting job retention in 'difficult' regions between those in Dakar with experience in 'difficult' regions and those currently working in 'difficult' regions [14].
The study used a CE to elicit the job-related preferences of physician and non-physician health workers in 'difficult' regions of Senegal. For both groups, provision of a permanent contract, the availability of equipment in health facilities and the provision of training opportunities are the most valued rural work conditions. This is the first study to look at the impact of different contract types on the retention of health workers in rural areas. Contract type—either permanent or non-permanent—was found to be a key factor in rural job retention. Indeed, this result has led the Senegalese Health Ministry to introduce a rural assignment policy that recruits permanent staff from the pool of annually contracted healthcare professionals on the condition that the workers are assigned to rural posts. While our results suggest that this is a useful policy development, they also suggest that further policy development is required to ensure sufficient numbers of health workers in underserved areas to guarantee equitable access to quality healthcare for the people in those communities. A combination of individual incentives and health system improvements would facilitate the retention of health professionals in rural jobs.
BWS: Best-worst scaling
CE: Choice experiment
GP: General practitioner
LMICs: Low- and middle-income countries
MNL: Multinomial logit
SE: Standard error
UHC: Universal Health Coverage
WTS: Willingness to stay
The authors would like to express their profound gratitude to the fieldwork team and to the health professionals who responded to the survey questionnaire. Thanks also to four reviewers whose comments have improved the paper.
The study was funded through a Research Grant for International Health, H25-11, from the Ministry of Health, Welfare and Labour, Japan, and undertaken as part of the project Réseau Vision Tokyo 2010, funded by the Japan International Cooperation Agency.
The datasets used and/or analysed in the study are available from the corresponding author on reasonable request.
NK, MR and AH developed the study design and data collection tools in consultation with MS, ID, MN and NF. MS supervised the field data collection in discussion with ID and AH. AH, NK and MR undertook the data analysis. All authors contributed to the preparation of the paper. All authors read and approved the final manuscript.
The study received ethics approval from the ethics committee of the Ministère de la Santé et de l'Action Sociale du Sénégal, Dakar, Senegal. Study participants received information on the background, objectives and contribution of the study as well as details on how the collected information would be used. Prior to the survey, participants were asked to sign a consent form if they agreed to participate in the research.
Department of Economics, Sophia University, 7-1 Kioi-cho, Chiyoda-ku, Tokyo 102-8554, Japan
Health Economics Research Unit, University of Aberdeen, Scotland, UK
Ministère de la Santé et de l'Action Sociale du Sénégal, Dakar, Senegal
National Centre for Global Health and Medicine, Tokyo, Japan
Global Health Workforce Alliance. Health workforce 2030 – towards a global strategy on human resources for health [Synthesis paper]. Geneva: World Health Organization; 2015.
World Health Organization. Health workforce requirements for universal health coverage and the sustainable development goals. Geneva: World Health Organization; 2016.
Araújo EC, Maeda A. How to recruit and retain health workers in rural and remote areas in developing countries: a guidance note. Washington, DC: World Bank; 2013.
World Bank. World development indicators. 2017.
Buchan J, Couper ID, Tangcharoensathien V, Thepannya K, Jaskiewicz W, Perfilieva G, Dolea C. Early implementation of WHO recommendations for the retention of health workers in remote and rural areas. Bull World Health Organ. 2013;91(11):834–40.
Serneels P. Internal geographical imbalances: the role of human resources quality and quantity. In: Culyer AJ, editor. Encyclopedia of Health Economics. Amsterdam: Elsevier; 2014.
Ministère de la Santé et de l'Action Sociale du Sénégal. Profil des Ressources Humaines du Sénégal. Dakar: Direction des Ressources Humaines, Ministère de la Santé et de l'Action Sociale du Sénégal; 2012.
World Bank. World development indicators. 2015.
Zurn P, Codjia L, Sall FL, Braichet JM. How to recruit and retain health workers in underserved areas: the Senegalese experience. Bull World Health Organ. 2010;88(5):386–9.
World Bank. Service delivery indicators: Senegal. Washington, DC: World Bank; 2013.
Ministère de la Santé et de l'Action Sociale du Sénégal. Atelier de Partage et de Validation des Stratégies de Couverture des Zones Difficiles en Personnels de Santé: Rapport de Synthèse. Dakar: Direction des Ressources Humaines, Ministère de la Santé et de l'Action Sociale du Sénégal; 2010.
De Bekker-Grob E, Ryan M, Gerard K. Discrete choice experiments in health economics: a review of the literature. Health Econ. 2012;21(2):145–72.
Mandeville KL, Lagarde M, Hanson K. The use of discrete choice experiments to inform health workforce policy: a systematic review. BMC Health Serv Res. 2014;14:367.
Nagai M, Fujita N, Diouf IS, Salla M. Retention of qualified healthcare workers in rural Senegal: lessons learned from a qualitative study. Rural Remote Health. 2017;17(3):4149.
Flynn TN, Louviere JJ, Peters TJ, Coast J. Best-worst scaling: what it can do for health care research and how to do it. J Health Econ. 2007;26(1):171–89.
Louviere J, Lings I, Islam T, Gudergan S, Flynn T. An introduction to the application of (case 1) best–worst scaling in marketing research. Int J Res Mark. 2013;30(3):292–303.
Ryan M, Kolstad J, Rockers P, Dolea C. User guide with case studies: how to conduct a discrete choice experiment for health workforce recruitment and retention in remote and rural areas. Washington, DC: World Bank; 2012.
Ryan M, Gerard K, Amaya-Amaya M, editors. Using discrete choice experiments to value health and health care. Dordrecht: Springer; 2008.
Louviere J, Hensher D, Swait J. Stated choice methods: analysis and application. 1st ed. Cambridge: Cambridge University Press; 2000.
Bennett J, Birol E, editors. Choice experiments in developing countries: implementation, challenges and policy implications. Cheltenham and Northampton: Edward Elgar; 2010.
McFadden D. Conditional logit analysis of qualitative choice behavior. In: Zarembka P, editor. Frontiers in econometrics. New York: Academic Press; 1974. p. 105–42.
Manski C. The structure of random utility models. Theor Decis. 1977;8(3):229–54.
Hole AR. WTP: Stata module to estimate confidence intervals for WTP measures; 2007.
Takemura T, Kielmann K, Blaauw D. Job preferences among clinical officers in public sector facilities in rural Kenya: a discrete choice experiment. Hum Resour Health. 2016;14:1.
Mandeville KL, Ulaya G, Lagarde M, Muula AS, Dzowela T, Hanson K. The use of specialty training to retain doctors in Malawi: a discrete choice experiment. Soc Sci Med. 2016;169:109–18.
Honda A, Vio F. Incentives for non-physician health professionals to work in the rural and remote areas of Mozambique: a discrete choice experiment for eliciting job preferences. Hum Resour Health. 2015;13:23.
Rockers PC, Jaskiewicz W, Wurts L, Kruk ME, Mgomella GS, Ntalazi F, Tulenko K. Preferences for working in rural clinics among trainee health professionals in Uganda: a discrete choice experiment. BMC Health Serv Res. 2012;12:212.
Kolstad J. How to make rural jobs more attractive to health workers. Findings from a discrete choice experiment in Tanzania. Health Econ. 2011;20(2):196–211.
Kruk ME, Johnson JC, Gyakobo M, Agyei-Baffour P, Asabir K, Kotha SR, Kwansah J, Nakua E, Snow RC, Dzodzomenyo M. Rural practice preferences among medical students in Ghana: a discrete choice experiment. Bull World Health Organ. 2010;88(5):333–41.
Mangham L, Hanson K. Employment preferences of public sector nurses in Malawi: results from a discrete choice experiment. Trop Med Int Health. 2008;13(12):1433–41.
Hanson K, Jack W. Health worker preferences for job attributes in Ethiopia: results from a discrete choice experiment. Washington, DC: World Bank; 2008.
Robyn PJ, Shroff Z, Zang OR, Kingue S, Djienouassi S, Kouontchou C, Sorgho G. Addressing health workforce distribution concerns: a discrete choice experiment to develop rural retention strategies in Cameroon. Int J Health Policy Manag. 2015;4(3):169–80.
Yaya Bocoum F, Koné E, Kouanda S, Yaméogo WME, Bado AR. Which incentive package will retain regionalized health personnel in Burkina Faso: a discrete choice experiment. Hum Resour Health. 2014;12(Suppl 1):S7.
Miranda JJ, Diez-Canseco F, Lema C, Lescano AG, Lagarde M, Blaauw D, Huicho L. Stated preferences of doctors for choosing a job in rural areas of Peru: a discrete choice experiment. PLoS One. 2012;7(12):e50567.
Rockers PC, Jaskiewicz W, Kruk ME, Phathammavong O, Vangkonevilay P, Paphassarang C, Phachanh IT, Wurts L, Tulenko K. Differences in preferences for rural job postings between nursing students and practicing nurses: evidence from a discrete choice experiment in Lao People's Democratic Republic. Hum Resour Health. 2013;11:22.
Réseau Vision Tokyo 2010. L'analyse situationnelle des ressources humaines en santé des membres du Réseau Vision Tokyo 2010. Tokyo: Réseau Vision Tokyo 2010; 2017.
Vindigni SM, Riley PL, Kimani F, Willy R, Warutere P, Sabatier JF, Kiriinya R, Friedman M, Osumba M, Waudo AN, et al. Kenya's emergency-hire nursing programme: a pilot evaluation of health service delivery in two districts. Hum Resour Health. 2014;12(1):16.
Lagarde M, Blaauw D. A review of the application and contribution of discrete choice experiments to inform human resources policy interventions. Hum Resour Health. 2009;7(1):62.
Smitz M-F, Witter S, Lemiere C, Eozenou PH-V, Lievens T, Zaman RU, Engelhardt K, Hou X. Understanding health workers' job preferences to improve rural retention in Timor-Leste: findings from a discrete choice experiment. PLoS One. 2016;11(11):e0165940.
Vujicic M, Shengelia B, Alfano M, Thu HB. Physician shortages in rural Vietnam: using a labor market approach to inform policy. Soc Sci Med. 2011;73(7):970–7.
Ageyi-Baffour P, Rominski S, Nakua E, Gyakobo M, Lori JR. Factors that influence midwifery students in Ghana when deciding where to practice: a discrete choice experiment. BMC Med Educ. 2013;13:64.
Alhassan RK, Nketiah-Amponsah E, Spieker N, Arhinful DK, Rinke de Wit TF. Assessing the impact of community engagement interventions on health worker motivation and experiences with clients in primary health facilities in Ghana: a randomized cluster trial. PLoS One. 2016;11(7):e0158541.
The math behind ANN (ANN- Part 2)
Introduction
This is the second in the series of blogs about neural networks. In this blog, we will discuss the back propagation algorithm. In the previous blog, we saw how a single perceptron works when the data is linearly separable. Here, we will look at the working of a multi-layered perceptron (with theory) and understand the maths behind back propagation.
Multi-layer perceptron
An MLP is composed of one input layer, one or more layers of perceptrons called hidden layers, and one final perceptron layer called the output layer. Every layer except the output layer includes a bias neuron and is fully connected to the next layer.
Perceptron
In the previous blog, we saw a perceptron with a single TLU. A perceptron with two inputs and three outputs is shown below. Generally, an extra bias feature is added as input; it comes from a particular type of neuron called a bias neuron, which always outputs one. This layer of TLUs is called a perceptron.
In the above perceptron, the inputs are x1 and x2 and the outputs are y1, y2 and y3. Θ (or f) is the activation function. In the last blog, the step function was used as the activation function. Other common activation functions include the following (a short code sketch follows the list):
Sigmoid function: It is S-shaped, continuous and differentiable where the output ranges from 0 to 1.
$$ f(z)=\frac{1}{1+e^{-z}} $$
Hyperbolic tangent function: It is S-shaped, continuous and differentiable, and its output ranges from -1 to 1. $$f\left(z\right)=\tanh\left(z\right)$$
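The following is a short numpy sketch of these two activation functions together with the derivatives f'(z) that the back-propagation equations below rely on (standard textbook formulas, layer-agnostic).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)          # f'(z) = f(z) (1 - f(z))

def tanh(z):
    return np.tanh(z)

def tanh_prime(z):
    return 1.0 - np.tanh(z) ** 2  # f'(z) = 1 - f(z)^2

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), sigmoid_prime(z))
print(tanh(z), tanh_prime(z))
```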
Training
Training an ANN has three stages: feedforward of the input training pattern, back propagation of the error, and adjustment of the weights. Let us understand it using a simple example.
Consider a simple two-layered perceptron as shown:
Nomenclature

| Symbol | Meaning | Symbol | Meaning |
| --- | --- | --- | --- |
| \(X_i\) | Input neuron | \(Y_k\) | Output neuron |
| \(x_i\) | Input value | \(y_k\) | Output value |
| \(Z_j\) | Hidden neuron | \(z_j\) | Output of a hidden neuron |
| \(\delta_k\) | Portion of error correction for weight \(w_{jk}\) | \(\delta_j\) | Portion of error correction for weight \(v_{ij}\) |
| \(w_{jk}\) | Weight from j to k | \(v_{ij}\) | Weight from i to j |
| \(\alpha\) | Learning rate | \(t_k\) | Target (actual) output |
| f | Activation function | | |
During feedforward, each input unit \(X_i\) receives input and broadcasts the signal to each of the hidden units \(Z_1\ldots Z_j\). Each hidden unit then computes its activation and sends its signal (\(z_j\)) to each output unit. Each output unit \(Y_k\) computes its activation (\(y_k\)) to form the response to the input pattern.
During training, each output unit \(Y_k\) compares its computed output \(y_k\) with its target output \(t_k\) to determine the error associated with that unit. Based on this error, \(\delta_k\) is computed, and it is used to distribute the error at the output unit back to all units in the previous layer. Similarly, \(\delta_j\) is computed for every hidden unit \(Z_j\) and propagated back towards the input layer.
The \(\delta_k\) and \(\delta_j\) are used to update the weights \(w_{jk}\) and \(v_{ij}\) respectively. The weight adjustment is based on gradient descent and is dependent on error gradient (\(\delta\)), learning rate (\(\alpha\)) and input to the neuron.
Mathematically this means the following:
Feedforward loop
Each input unit (\(X_i\)) receives the input \(x_i\) and broadcasts this signal to all units to the hidden layers \(Z_j\).
Hidden layer
Each hidden unit (\(Z_j\)) sums its weighted input signals \(z\_in_j=v_{0j}+\sum_{i} x_i\times v_{ij}\)
The activation function is applied to this weighted sum to get the output. \(z_j=f\left(z\_in_j\right)\) (where f is the activation function).
Each hidden layer sends this signal (\(z_j\)) to the output layers.
Output layer
Each output unit (\(Y_k\)) sums its weighted input signals \(y\_in_k=w_{0k}+\sum_{j}z_j\times w_{jk}\)
The activation function is applied to this weighted sum to get the output: \(y_k=f\left(y\_in_k\right)\). A short numpy sketch of this feedforward pass follows.
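A minimal numpy sketch of the feedforward pass for a single training example; the layer sizes and random weights are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden, n_out = 2, 3, 1
v = rng.normal(scale=0.5, size=(n_in, n_hidden))   # weights v_ij
v0 = np.zeros(n_hidden)                            # hidden-layer biases v_0j
w = rng.normal(scale=0.5, size=(n_hidden, n_out))  # weights w_jk
w0 = np.zeros(n_out)                               # output-layer biases w_0k

x = np.array([0.6, -1.0])
z_in = v0 + x @ v          # weighted sums at the hidden units
z = sigmoid(z_in)          # hidden activations z_j
y_in = w0 + z @ w          # weighted sums at the output units
y = sigmoid(y_in)          # network output y_k
print(y)
```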
Back propagation of error
The error information term (\(\delta_k\)) is computed at every output unit (\(Y_k\)). $$ \delta_k=\left(t_k-y_k\right)f^\prime\left(y\_in_k\right) $$
This error is propagated back to the hidden layer. (later weights will be updated using this \(\delta\))
Each hidden unit (\(Z_j\)) sums its weighted error from the output layer $$ \delta\_in_j=\sum_{k}\delta_k\times w_{jk} $$
The derivative of the activation function is multiplied by this weighted sum to get the error information term at the hidden layer: $$ \delta_j=\delta\_in_j \times f^\prime\left(z\_in_j\right) $$ (where f is the activation function).
This error is propagated back to the initial layer.
Update weights and biases
The weights are updated based on the error information terms $$ w_{jk}\left(new\right)=w_{jk}\left(old\right)+\Delta w_{jk} $$ where $ \Delta w_{jk}=\alpha\times\delta_k\times z_j $
$$ v_{ij}\left(new\right)=v_{ij}\left(old\right)+\Delta v_{ij} $$ where $ \Delta v_{ij}=\alpha\times\delta_j\times x_i $
These feedforward, back propagation and weight-update steps are repeated for each training pattern in every epoch until a stopping criterion is met. A compact numpy implementation of one such training loop is sketched below.
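The sketch below is a minimal, self-contained illustration of the three stages for a single hidden layer with sigmoid activations, using the XOR problem as a toy example; it follows the update rules above but is only a sketch, not code from the original post.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)           # targets t_k

n_in, n_hidden, n_out, alpha = 2, 4, 1, 0.5
v = rng.normal(scale=0.5, size=(n_in, n_hidden));  v0 = np.zeros(n_hidden)
w = rng.normal(scale=0.5, size=(n_hidden, n_out)); w0 = np.zeros(n_out)

for epoch in range(10000):
    for x, t in zip(X, T):
        # feedforward
        z_in = v0 + x @ v;  z = sigmoid(z_in)
        y_in = w0 + z @ w;  y = sigmoid(y_in)
        # back propagation of error
        delta_k = (t - y) * y * (1 - y)                   # f'(y_in) = y(1-y)
        delta_in_j = w @ delta_k
        delta_j = delta_in_j * z * (1 - z)                # f'(z_in) = z(1-z)
        # update weights and biases
        w += alpha * np.outer(z, delta_k);  w0 += alpha * delta_k
        v += alpha * np.outer(x, delta_j);  v0 += alpha * delta_j

# after training the outputs should be close to the XOR targets [0, 1, 1, 0]
print(sigmoid(w0 + sigmoid(v0 + X @ v) @ w).round(2))
```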
Derivation of learning rules
In every loop while training, we are changing the weights (\(v_{ij}\) and \(w_{jk}\)) to find the optimal solution. What we want to do is to find the effect of changing the weights on the error, and minimise the error using gradient descent.
The error that has to be minimised is given by: $$E=\frac{1}{2}\sum_{k}\left(t_k-y_k\right)^2 $$ The effect of changing an outer-layer weight (\(w_{jk}\)) on the error is given by: $$\frac{\partial E}{\partial w_{jk}}=\frac{\partial}{\partial w_{jk}}\frac{1}{2}\sum_{k}\left(t_k-y_k\right)^2 $$ $$ =\left(y_k-t_k\right)\frac{\partial}{\partial w_{jk}}f\left(y\_in_k\right) $$ $$ =\left(y_k-t_k\right)\times z_j\times f^\prime\left(y\_in_k\right) $$ Since the weight is adjusted in the direction that reduces the error, $$ \Delta w_{jk}=-\alpha\frac{\partial E}{\partial w_{jk}}=\alpha\times\left(t_k-y_k\right)\times z_j\times f^\prime\left(y\_in_k\right)={\alpha\times\delta}_k\times z_j $$
The effect of changing a hidden-layer weight (\(v_{ij}\)) on the error is given by:
$$ \frac{\partial E}{\partial v_{ij}}=\sum_{k}\left(y_k-t_k\right)\frac{\partial}{\partial v_{ij}}f\left(y\_in_k\right) $$ $$ =\sum_{k}\left(y_k-t_k\right)f^\prime\left(y\_in_k\right)\times w_{jk}\times\frac{\partial}{\partial v_{ij}}f\left(z\_in_j\right) $$ $$ =-\sum_{k}\delta_k\times w_{jk}\times f^\prime\left(z\_in_j\right)\times x_i=-\delta_j\times x_i $$ Therefore $$ \Delta v_{ij}=-\alpha\frac{\partial E}{\partial v_{ij}}={\alpha\times\delta}_j\times x_i $$ This way, for any number of layers, we can find the error information terms. Using gradient descent, we can minimise the error and find optimal weights for the ANN. In the next blog, we will implement an ANN on the Titanic problem and compare it with logistic regression.
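One quick way to validate such hand-derived gradients is a finite-difference check: perturb each weight by a small h and compare the numerical slope of E with the analytic expression. The sketch below does this for the outer-layer weights of a tiny random network; sizes and values are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
x = rng.normal(size=3)                     # one input pattern
t = np.array([1.0, 0.0])                   # targets t_k
v = rng.normal(size=(3, 4)); w = rng.normal(size=(4, 2))

def forward(v, w):
    z = sigmoid(x @ v)
    y = sigmoid(z @ w)
    return z, y

def error(v, w):
    _, y = forward(v, w)
    return 0.5 * np.sum((t - y) ** 2)      # E = 1/2 sum_k (t_k - y_k)^2

# analytic gradient for the outer layer: dE/dw_jk = -(t_k - y_k) f'(y_in_k) z_j
z, y = forward(v, w)
analytic = -np.outer(z, (t - y) * y * (1 - y))

# numerical gradient by central differences
h, numerical = 1e-6, np.zeros_like(w)
for j in range(w.shape[0]):
    for k in range(w.shape[1]):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[j, k] += h; w_minus[j, k] -= h
        numerical[j, k] = (error(v, w_plus) - error(v, w_minus)) / (2 * h)

print(np.max(np.abs(analytic - numerical)))   # should be tiny (around 1e-9 or less)
```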
Fausett, L., 1994. Fundamentals of neural networks: architectures, algorithms, and applications. Prentice-Hall, Inc.
Miaomiao Liu* , Jingfeng Guo** and Jing Chen**
Community Discovery in Weighted Networks Based on the Similarity of Common Neighbors
Abstract: In view of the deficiencies of existing weighted similarity indexes, a hierarchical clustering method, initialize-expand-merge (IEM), is proposed based on the similarity of common neighbors for community discovery in weighted networks. Firstly, the similarity of a node pair is defined based on the attributes of their common neighbors. Secondly, the most closely related nodes are quickly clustered according to their similarity to form initial communities, and the communities are then expanded. Finally, communities are merged by maximizing the modularity so as to optimize the division results. Experiments carried out on several weighted networks verify the effectiveness of the proposed algorithm. The results show that IEM is superior to weighted common neighbor (CN), weighted Adamic-Adar (AA) and weighted resource allocation (RA) when the weighted modularity is used as the evaluation index. Moreover, the proposed algorithm achieves a more reasonable community division for weighted networks than the cluster-recluster-merge algorithm (CRMA).
Keywords: Common Neighbors , Community Discovery , Similarity , Weighted Networks
Community discovery in social networks has theoretical significance and practical value for understanding the topology and behavior patterns of the network. However, edges in real networks always have weights. For example, the closeness of relationships between individuals in social networks is different. If we use the weighted network to describe such a system, it can better express these relationships. Weighted networks are networks in which edges have weight attributes. The weight can not only express whether there is a relationship between two nodes, but can also express the closeness of this relationship. For example, the weight in the air transport network represents the number of flights between two airports and the weight in the communication network represents the talking time between two users. The weight can better express real systems and help to understand its nature. It also has practical significance for community discovery.
At present, there have been some researches on community discovery in weighted networks. Newman replaced the edge betweenness with the weighted edge betweenness and proposed weighted Girvan- Newman (WGN) algorithm [1]. Subramani et al. [2] proposed a community mining method based on the variable density to cluster nodes. However, experimental results were not good and only a small number of communities were detected. A study of Liu et al. [3] put forward the attractiveness-based community detection (ABCD) algorithm for the clustering of large weighted networks based on the attractiveness between communities. Sharma [4] proposed automatic graph mining algorithm (AGMA) which can divide the weighted signed graph into several communities according to the link type and weights. Lu et al. [5] proposed intra-centrality and inter-centrality [TeX:] $$\left(\mathrm{I}^{2} \mathrm{C}\right)$$ algorithm based on conductance which joined the edge that had the greatest degree of membership into the community and used the community conductivity to determine whether a new community would form. Wang et al. [6] proposed a central cluster algorithm based on similarity which selected the node with the largest center degree as the center of the community and achieved community discovery in weighted networks based on the degree of ownership of the node. Lin et al. [7] proposed a hierarchical community discovery method based on parallel decomposition of weighted graphs. Wang [8] proposed a splitting algorithm based on greedy selection strategy. However, experiments on weighted networks of karate club and dolphins showed that there were some deviations between the division results and the real datasets. Zhan [9] proposed an algorithm to find local communities in weighted networks, which used the node with the maximum local weight as starting node and found the local community by gradually adding nodes into it. Zhao and An [10] realized division of the service community in weighted networks by calculating the optimal path tree, similarity index and dispersion index of the community between mobile nodes. Yao [11] proposed a community discovery method in weighted short message network. Guo et al. [12] improved AGMA algorithm and proposed CRMA algorithm for community discovery in weighted networks.
Overall, most existing algorithms are suitable for community discovery in traditional social networks in which the weight of every edge is 1, and there are relatively few studies on community discovery in weighted social networks. Additionally, community discovery in weighted networks should consider not only whether nodes are connected but also the closeness of these relationships, so the weight should be an important factor in the clustering process. However, existing weighted similarity indexes such as weighted common neighbor (CN) and weighted Adamic-Adar (AA) only consider the influence of the weight information of the CNs on the similarity, while ignoring the effect of the degree and strength of the CNs. As a result, these algorithms perform poorly on networks in which most node pairs have few common neighbors, such as the US Airports network. Besides, hierarchical clustering based solely on the modularity is computationally expensive. Moreover, a good algorithm should meet two requirements at the same time, namely high accuracy and low complexity; however, it is difficult for most existing algorithms to achieve both.
For all these reasons, the initialize-expand-merge (IEM) algorithm is proposed in order to achieve a higher quality of community division in weighted networks while keeping the time cost feasible. Firstly, the similarity between nodes is defined based on their CNs so as to complete the clustering quickly. Then communities are merged with the goal of maximizing the modularity. Lastly, the effectiveness and correctness of the algorithm are verified through experiments. The remainder of the paper first reviews related research, then presents the main idea, definitions and description of the IEM algorithm, and finally gives the experiments and the conclusion.
2. Ideas and Preliminaries
2.1 Main Ideas
Community division for weighted networks should ensure that nodes in the same community are densely and closely connected. Additionally, according to the hierarchical clustering algorithms, the more similarity the two nodes have, the greater possibility of their belonging to the same community is. So the key of the algorithm is to effectively capture topological properties which affect the similarity and reasonably define the similarity index to complete the clustering and community discovery. In order to reduce the complexity, only the effects of the degree, the strength and the weight on the similarity of the two nodes are taken into account in our algorithm. We think that if the two nodes are not directly connected, their similarity is 0. Otherwise, if they are directly connected, their similarity depends on the contribution of their common neighbors.
Firstly, when measuring the contribution of CNs to the similarity of two nodes, we consider that the more CNs the two nodes have, the higher their similarity. Moreover, in weighted networks the strengths of two nodes with the same degree are not necessarily the same, and vice versa. Therefore, a CN with a lower degree and a higher strength contributes more to the similarity than a CN with a higher degree and a lower strength. Based on this, the unit weight of a node is defined to measure its similarity contribution to its neighbors; it is proportional to the node's strength and inversely proportional to its degree. In other words, the greater the unit weight of a CN, the greater its contribution to the similarity of the two nodes.
Secondly, when measuring the contribution of weights of the two edges that connect the two nodes to their CN, we think that the greater ratio of the sum of weights of these two edges to the sum of weights of these two nodes is, the higher similarity these two nodes have. Based on this, the effect coefficient of the CN is defined to be used to measure the contribution extent of this CN compared with all neighbors of these two nodes. Moreover, on the basis of the above two definitions, the concept of the joint strength of the neighbor node is proposed, which equals the product of the unit weight of the CN and its effect coefficient. The higher the value is, the more contribution of the CN is.
Finally, in terms of two nodes, we take the sum of the joint strength of all their common neighbors as their total similarity. Here, there is a special situation that the two nodes have no CNs. Then the concept of the edge weight strength of the node pair is introduced as the similarity measurement. It is defined as the ratio of the weight of the edge that connects these two nodes to the sum of weights of all edges that connect with these two nodes. The larger edge weight strength means the two nodes are more closely connected and they have higher similarity.
Based on the above definitions, we can quickly cluster nodes and their neighbors by calculating the similarity so as to obtain initial communities. In the expanding phase, if, for one node of a pair, the node with the maximal similarity to it is exactly the other node of the pair, the two nodes are clustered together to form a community. If there are many such node pairs in the network, many small communities form, leading to a lower modularity. So we further optimize the division results by gradually merging communities, provided that each merger increases the modularity.
2.2 Relevant Definitions
Let $G=(V, E, W)$ represent an undirected and weighted network, where V is the node set, E is the edge set and W is the set of weights. For $x, y \in V$, $\Gamma(x)$ denotes the neighbor set of x, $e_{xy}$ denotes the edge that connects x and y, and $w_{xy}$ denotes the weight of $e_{xy}$. Let $s(x)$ denote the strength of x, namely $s(x)=\sum_{z \in \Gamma(x)} w_{xz}$.

Definition 2.1 (Unit weight of the node). For $x \in V$, the unit weight of x is defined as the average weight of all edges connected to x, denoted by $u(x)$.

$$ u(x)=\frac{\sum_{z \in \Gamma(x)} w_{xz}}{|\Gamma(x)|} $$

Definition 2.2 (Effect coefficient of the node). For $x, y \in V$ and $z \in \Gamma(x) \cap \Gamma(y)$, the effect coefficient of z on the node pair <x,y> is defined as the ratio of the sum of $w_{xz}$ and $w_{zy}$ to the sum of the weights of all edges connected to x and y, denoted by $\varepsilon_{z}^{CN}(x, y)$.

$$ \varepsilon_{z}^{CN}(x, y)=\frac{w_{xz}+w_{zy}}{s(x)+s(y)-w_{xy}} $$

Definition 2.3 (Joint strength of the common neighbor). For $x, y \in V$ and $z \in \Gamma(x) \cap \Gamma(y)$, the joint strength of z with respect to the node pair <x,y> is defined as the product of the unit weight of z and its effect coefficient on <x,y>, denoted by $\operatorname{sim}_{z}^{CN}(x, y)$.

$$ \operatorname{sim}_{z}^{CN}(x, y)=u(z)\,\varepsilon_{z}^{CN}(x, y) $$

Definition 2.4 (Edge weight strength of the node pair). For $x, y \in V$, the edge weight strength of the node pair <x,y> is defined as the ratio of $w_{xy}$ to the sum of the weights of all edges connected to x and y, denoted by $\operatorname{sw}(x, y)$.

$$ \operatorname{sw}(x, y)=\frac{w_{xy}}{s(x)+s(y)-w_{xy}} $$

Definition 2.5 (Weighted similarity based on common neighbors). For $x, y \in V$, the weighted similarity of the node pair <x,y> based on their common neighbors is defined as the sum of the joint strengths of all their common neighbors, denoted by $\operatorname{sim}_{xy}^{IEM}$.

$$ \operatorname{sim}_{xy}^{IEM}=\begin{cases} 0, & e_{xy} \notin E \\ \operatorname{sw}(x, y), & e_{xy} \in E \wedge \Gamma(x) \cap \Gamma(y)=\varnothing \\ \sum_{z \in \Gamma(x) \cap \Gamma(y)} \operatorname{sim}_{z}^{CN}(x, y), & e_{xy} \in E \wedge \Gamma(x) \cap \Gamma(y) \neq \varnothing \end{cases} $$
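As an illustration, the definitions above translate directly into a small Python helper; the graph is stored as a dict-of-dicts adjacency map, and the toy graph at the end is made-up data used only to show the calculation.

```python
def strength(adj, x):                      # s(x) = sum of weights of edges at x
    return sum(adj[x].values())

def unit_weight(adj, x):                   # Definition 2.1
    return strength(adj, x) / len(adj[x])

def iem_similarity(adj, x, y):             # Definition 2.5
    if y not in adj[x]:
        return 0.0
    denom = strength(adj, x) + strength(adj, y) - adj[x][y]
    common = set(adj[x]) & set(adj[y])
    if not common:                         # Definition 2.4: edge weight strength
        return adj[x][y] / denom
    total = 0.0
    for z in common:
        effect = (adj[x][z] + adj[z][y]) / denom      # Definition 2.2
        total += unit_weight(adj, z) * effect         # Definition 2.3
    return total

adj = {"a": {"b": 2.0, "c": 1.0}, "b": {"a": 2.0, "c": 3.0},
       "c": {"a": 1.0, "b": 3.0, "d": 1.0}, "d": {"c": 1.0}}
print(iem_similarity(adj, "a", "b"))       # a and b share the common neighbour c
print(iem_similarity(adj, "c", "d"))       # no common neighbour: edge weight strength
```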
3. IEM Algorithm
3.1 Description of IEM algorithm
Based on the above definitions, the community division algorithm IEM is proposed which mainly consists of three parts, namely, forming the initial community, expanding the community and merging communities. The description of IEM is as follows.
1) Forming the initial community: Calculate the similarity of every node pair in the network and store the values in a matrix. With each node initially forming its own community, a node is selected randomly from the network as the starting node and set as the current node. Find the node $v_j$ that has the largest similarity with the current node and merge the community containing $v_j$ with the community containing the current node. Then take the merged community as the current community.
2) Expanding the community: Find the node $v_k$ that has the largest similarity with $v_j$ and take $v_k$ as the next node to be clustered. If $v_k$ does not belong to the current community, the current initial community has formed; in this case, set $v_k$ as the current node and continue to look for the node with the largest similarity to $v_k$ so as to form the next new community. Otherwise, $v_k$ has already been clustered into the current community; in this case, a node that has not been visited is randomly selected from the network, taken as the current node, and clustering continues to form the next community. These steps are repeated until all nodes have been visited, which means all initial communities have been expanded.
3) Merging communities: Calculate the modularity of the network. On the basis of the current community structure, calculate the modularity of the network that would result from merging any two communities, and initialize the modularity matrix Q accordingly; that is, the element $Q_{ij}$ of Q equals the modularity of the network that would result from merging community i and community j. If $\max(Q_{ij})$ is higher than the modularity of the current network, merge community i and community j and update the community structure of the network. These operations are repeated until no two communities can be merged, that is, until the modularity would not increase no matter which two communities are merged. The resulting structure is the final community division.
3.2 Implementation of IEM Algorithm
The implementation of the IEM algorithm follows the three phases described in Section 3.1.
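The following self-contained Python sketch only illustrates the three phases described in Section 3.1 and is not the authors' original listing: the initialization/expansion phase is simplified to attaching each node to the community of its most similar neighbour, the similarity function is passed in as a parameter (for the full algorithm this would be the Definition 2.5 similarity sketched in Section 2.2), and the toy graph and the stand-in similarity in the demo are hypothetical.

```python
def strength(adj, x):
    return sum(adj[x].values())

def weighted_modularity(adj, labels):
    """Weighted modularity Q_w as defined in Section 4.2."""
    W = sum(strength(adj, v) for v in adj) / 2.0
    q = 0.0
    for i in adj:
        for j in adj:
            if labels[i] == labels[j]:
                q += adj[i].get(j, 0.0) - strength(adj, i) * strength(adj, j) / (2 * W)
    return q / (2 * W)

def iem_cluster(adj, sim):
    # phases 1-2: attach every node to the community of its most similar neighbour
    labels = {v: v for v in adj}
    for v in sorted(adj):
        best = max(adj[v], key=lambda u: sim(adj, v, u), default=None)
        if best is not None and sim(adj, v, best) > 0:
            old, new = labels[v], labels[best]
            labels = {u: (new if c == old else c) for u, c in labels.items()}
    # phase 3: merge two communities whenever this increases Q_w
    while True:
        q_now = weighted_modularity(adj, labels)
        comms = sorted(set(labels.values()))
        best_gain, best_pair = 0.0, None
        for a in comms:
            for b in comms:
                if a < b:
                    trial = {u: (a if c == b else c) for u, c in labels.items()}
                    gain = weighted_modularity(adj, trial) - q_now
                    if gain > best_gain:
                        best_gain, best_pair = gain, (a, b)
        if best_pair is None:
            return labels
        a, b = best_pair
        labels = {u: (a if c == b else c) for u, c in labels.items()}

if __name__ == "__main__":
    # two weighted triangles joined by a single weak edge (hypothetical data)
    edges = [("a", "b", 3), ("b", "c", 3), ("a", "c", 2),
             ("d", "e", 3), ("e", "f", 3), ("d", "f", 2), ("c", "d", 1)]
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, {})[v] = float(w)
        adj.setdefault(v, {})[u] = float(w)
    edge_sim = lambda g, x, y: g[x].get(y, 0.0)      # stand-in similarity
    labels = iem_cluster(adj, edge_sim)
    print(labels, round(weighted_modularity(adj, labels), 4))
```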
4. Experiments and Analysis
Experiments were done on several weighted networks, which show that IEM algorithm is superior to other algorithms for community division in weighted networks with the higher accuracy and relatively lower complexity.
4.1 Datasets
Five real weighted networks were got from the network (http://konect.uni-koblenz.de/networks/) and descriptions of these datasets are as follows.
(1) Zachary's Karate club: It is a relationship network between members of a karate club in a university of America. There are 34 nodes and 78 edges in the network where a node represents a member, an edge represents the close relationship between the two members and the weight represents the close degree of the two members.
(2) Les Misérables: It is a character relationship network originated from the novel of Les Misérables. There are 77 nodes and 254 edges in the network where a node represents a character, an edge represents the appearance of the two characters in the same scene and the weight represents the times they appeared simultaneously.
(3) Madrid Train Bombing: This is a terrorist network in the train bombings in Madrid, Spain in 2004. There are 64 nodes and 243 edges in the network where a node represents a terrorist, an edge represents the cooperation or communication between the two terrorists in train bombings, and the weight represents the frequency of their contact.
(4) US Airport: It is a US air transport network that has 332 nodes and 2,126 edges. In this network a node represents an airport, an edge represents there is a route between the two airports and the weight represents the number of flights between these two airports.
(5) Net Science: This is a network of scientists that published papers cooperatively. There are 379 nodes and 914 edges in the network where a node represents a scientist, an edge represents the two scientists have worked together and the weight represents the number of their cooperation.
4.2 Evaluation Index
Modularity Q is a commonly used standard to evaluate the community division quality of algorithms. For a certain division of the network, the larger modularity always means the more reasonable division of the network. Usually, the value of Q is between 0.3 and 0.7. So many algorithms try to optimize the community division results of the network by maximizing the modularity function.
In this paper, the weighted modularity $Q_w$ [11] is used as the evaluation index. Its definition is as follows.

$$ Q_{w}=\frac{1}{2W} \sum_{ij}\left(w_{ij}-\frac{w_{i} w_{j}}{2W}\right) \delta\left(C_{i}, C_{j}\right) $$

where $v_i, v_j \in V$ and $w_{ij}$ represents the weight of $e_{ij}$; $w_i=\sum_{j} w_{ij}$ and $w_j=\sum_{i} w_{ij}$ represent the strengths of $v_i$ and $v_j$, respectively; $W=\sum_{ij} w_{ij}$ represents the sum of the weights of all edges in the network; and $\delta(C_i, C_j)$ is a function that equals 1 if $v_i$ and $v_j$ are in the same community and 0 otherwise.
4.3 Weighted Similarity Index
In measuring the similarity between nodes, the similarity can be defined according to the local attributes of the nodes or the topological information of the network. In general, similarity-based algorithms fall into three families: those based on common neighbors, on node degree, and on network paths. The following is a brief introduction to the three classical weighted similarity indexes used in our experimental comparison, namely weighted CN (written $S_{xy}^{W\text{-}CN}$), weighted AA (written $S_{xy}^{W\text{-}AA}$) and weighted RA (written $S_{xy}^{W\text{-}RA}$), in which $S_{xy}$ represents the similarity between nodes $v_x$ and $v_y$, $w_{xy}$ represents the weight of the edge connecting $v_x$ and $v_y$, $\Gamma(x)$ represents the set of neighbors of $v_x$, and $s(x)$ represents the strength of $v_x$, as mentioned above.

$$ S_{xy}^{W\text{-}CN}=\sum_{z \in \Gamma(x) \cap \Gamma(y)} \frac{w_{xz}+w_{zy}}{2} $$

$$ S_{xy}^{W\text{-}AA}=\sum_{z \in \Gamma(x) \cap \Gamma(y)} \frac{w_{xz}+w_{zy}}{\log (1+s(z))} $$

$$ S_{xy}^{W\text{-}RA}=\sum_{z \in \Gamma(x) \cap \Gamma(y)} \frac{1}{s(z)} $$
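For comparison, these three baseline indexes can be written out for the same dict-of-dicts adjacency representation used in the earlier sketches; the natural logarithm is assumed in the AA denominator, and the example values are illustrative only.

```python
import math

def strength(adj, x):
    return sum(adj[x].values())

def weighted_cn(adj, x, y):
    return sum((adj[x][z] + adj[z][y]) / 2.0
               for z in set(adj[x]) & set(adj[y]))

def weighted_aa(adj, x, y):
    return sum((adj[x][z] + adj[z][y]) / math.log(1.0 + strength(adj, z))
               for z in set(adj[x]) & set(adj[y]))      # natural log assumed

def weighted_ra(adj, x, y):
    return sum(1.0 / strength(adj, z) for z in set(adj[x]) & set(adj[y]))

adj = {"a": {"b": 2.0, "c": 1.0}, "b": {"a": 2.0, "c": 3.0},
       "c": {"a": 1.0, "b": 3.0}}
print(weighted_cn(adj, "a", "b"), weighted_aa(adj, "a", "b"), weighted_ra(adj, "a", "b"))
```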
4.4 Comparison of Experimental Results
For the above datasets, we compared the IEM algorithm with the three classical weighted similarity indexes described in [11], namely weighted CN, weighted AA and weighted RA, and also with the CRMA algorithm. Experimental results are shown in Table 1, where the first column lists the five networks, the second column lists the number of nodes and edges of each network, and the five algorithms are listed from the third column to the seventh column; community findings are expressed as the number of communities and the network modularity, presented as p / Q_w.
Table 1. Community discovery results of five algorithms on five datasets (each cell gives $p/Q_w$)
Network | $|V| / |E|$ | weighted CN | weighted AA | weighted RA | CRMA | IEM
Karate Club | 34 / 78 | 2 / 0.4547 | 2 / 0.4547 | 2 / 0.4547 | 2 / 0.4547 | 4 / 0.4950
Les Misérables | 77 / 254 | 1 / 0.0350 | 3 / 0.4185 | 3 / 0.4577 | 9 / 0.5222 | 5 / 0.5427
Train Bombing | 64 / 243 | 1 / 0.0303 | 4 / 0.3626 | 4 / 0.3604 | 4 / 0.4420 | 5 / 0.4579
US Airport | 332 / 2,126 | 2 / 0.0174 | 3 / 0.0987 | 3 / 0.1039 | 4 / 0.1347 | 4 / 0.1932
Net Science | 379 / 914 | 8 / 0.6045 | 19 / 0.8453 | 18 / 0.8499 | 21 / 0.8430 | 19 / 0.8512
(1) For the Karate Club network, the first four algorithms produced the same division, splitting the network into 2 communities as shown in Fig. 1, while the IEM algorithm divided the network into 4 communities as shown in Fig. 2, improving the modularity by 11.11% over the other four algorithms. Note that in all figures of this paper, nodes in different communities are drawn in different colors according to the division results so that the results can be read clearly. It should also be emphasized that the modularity value and the number of communities in a division depend on the algorithm and on its implementation. In general, a larger modularity indicates a more accurate number of communities and a community structure closer to that of the real network.
CRMA for Karate Club.
IEM for Karate Club.
(2) For the Les Misérables and Train Bombing networks, the five algorithms all produced different divisions. Among them, the weighted CN algorithm performed worst, because the weighted CN index only considers the weights of the edges connecting the two nodes to their common neighbors. In the Les Misérables network the weight counts how many times two characters appear in the same scene, and in the Train Bombing network the weight records the contact frequency between terrorists; as a result, most edge weights in these two networks equal 1, so the weighted CN similarities are all the same (equal to 1). During clustering, a neighbor then has to be selected at random and assigned to the community, which biases the community division. Overall, the differences among the latter four algorithms are relatively small, with CRMA and IEM performing best. For these two networks, the community divisions produced by CRMA and IEM are shown in Figs. 3–6.
CRMA for Les Misérables.
IEM for Les Misérables.
CRMA for Train Bombing.
IEM for Train Bombing.
(3) For the US Airport network, all five algorithms produced poor divisions. In this network, 59.7% of node pairs have no common neighbors, and among the node pairs that do have CNs, 46.5% have only one. This lowers the modularity achieved by every algorithm and makes the divisions of weighted CN, weighted AA and weighted RA particularly poor, since these three indexes only account for the degree or strength of the common neighbors. The IEM algorithm, in contrast, uses the edge weight strength of a node pair to handle the case of no common neighbors, so its community division quality is the highest. For this network, the community divisions of CRMA and IEM are shown in Figs. 7 and 8.
(4) For the Net Science network, although the network is sparse, the average weighted degree of nodes is 2.583 and the average clustering coefficient is about 0.798, so all five algorithms perform comparatively well. Among them, weighted CN is the worst, with a large gap in both the number of communities and the modularity compared with the other four algorithms, whose divisions differ only slightly. The community divisions of CRMA and IEM are shown in Figs. 9 and 10. Overall, the community division quality of IEM is better than that of the other four algorithms, which further supports the correctness of the IEM algorithm and its weighted similarity, defined by combining the edge weight, the degree, the strength and the common neighbors of the two nodes.
CRMA for US Airport.
IEM for US Airport.
CRMA for Net Science.
IEM for Net Science.
The experimental results show that IEM outperforms the other three weighted similarity indices, which further supports its correctness and higher division quality. IEM is also better than the CRMA algorithm. CRMA is an improvement of the AGMA algorithm that takes the sign of each edge into account in order to obtain better divisions in signed networks. Although CRMA is still applicable to traditional networks containing only positive links, each algorithm was designed with its own goal in mind, which naturally yields different results on different networks. For community division in traditional weighted networks, IEM is more reasonable and effective than CRMA.
Overall, although the community divisions produced by IEM on these datasets differ from those of the other four algorithms, its divisions always achieve the highest modularity, which demonstrates its superiority. Moreover, the divisions produced by IEM on the Karate Club, US Airport and Net Science networks are basically consistent, in both the number of communities and the modularity, with the fast greedy, betweenness and other classical algorithms described in [13]. This further supports the correctness of the IEM algorithm.
4.5 Complexity Analysis
For a network $G=(V, E, W)$ with $|V|=n$, directly clustering the nodes by optimizing the modularity would be very expensive. The algorithm proposed in this paper therefore first clusters the nodes quickly using the weighted similarity index, which yields an initial division consisting of p communities, and then refines this division by merging communities so as to maximize the modularity of the network and obtain the best performance.
In the IEM algorithm, we first calculate the similarity of all node pairs and store the values in a matrix, with computational complexity O(m). Second, we traverse the matrix and use a list (i, arraylist) to store, for each node $v_i$, the label of the node most similar to it, with complexity $O(n\log_2 n)$. Finally, we compute the modularity of the network after merging each pair of the p communities, with complexity $O(p^2)$. Compared with other algorithms, the computational complexity of IEM is slightly higher because of the additional community-merging step; however, for large-scale networks p is far smaller than n, so the algorithm remains feasible and efficient in running time while achieving higher accuracy.
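To make the $O(p^2)$ merge stage concrete, the following Python sketch evaluates the weighted modularity after merging each pair of the p communities and keeps the best merge. The names, and the choice to repeat the merge while an improving pair exists, are my assumptions rather than the authors' implementation; `modularity` stands for any routine computing $Q_w$, such as the one sketched in Section 4.2.

```python
# A minimal sketch (assumed names, not the authors' implementation) of the
# O(p^2) merge stage described above: try merging every pair of the p
# initial communities, keep the merge that raises the weighted modularity
# the most, and repeat while an improving merge exists.
from itertools import combinations

def merge_communities(weights, community, modularity):
    best_q = modularity(weights, community)
    improved = True
    while improved:
        improved = False
        labels = sorted(set(community.values()))
        best_pair, best_gain = None, 0.0
        for a, b in combinations(labels, 2):       # O(p^2) candidate merges
            trial = {v: (a if c == b else c) for v, c in community.items()}
            gain = modularity(weights, trial) - best_q
            if gain > best_gain:
                best_pair, best_gain = (a, b), gain
        if best_pair is not None:
            a, b = best_pair
            community = {v: (a if c == b else c) for v, c in community.items()}
            best_q += best_gain
            improved = True
    return community, best_q
```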
Existing weighted similarity indices only consider the influence of the weight information of common neighbors on the similarity, which may lead to poor community divisions for some special networks. In view of this, a new algorithm, IEM, was proposed to achieve a more reasonable division of weighted networks. The algorithm consists of three stages: forming the initial community, expanding communities and merging them. In the first two stages, we focus on the influence of common neighbors on the similarity of two nodes, and define a weighted similarity based on the degree, the strength and the weight information of their common neighbors. The case of two nodes having no CNs is also handled by defining the edge weight strength as their similarity. The most closely related nodes are then clustered quickly according to their similarity to form and expand the initial communities. In the third stage, the small communities of only two nodes that may emerge in the first two stages are merged by maximizing the weighted modularity of the network, yielding a more reasonable and accurate community division. The proposed weighted similarity index improves on weighted CN, weighted AA and weighted RA. In addition, for traditional weighted networks containing only positive links, the IEM algorithm is more efficient than CRMA. The experimental results show its effectiveness and the high quality of its community divisions on weighted networks. Reducing the computational complexity so as to improve the efficiency of the algorithm on large-scale networks is left for future research.
This paper is supported by Science Foundation for Young Scientists of Northeast Petroleum University (No. 2018QNQ-01) and Heilongjiang Natural Science Foundation (No. LH2019F042).
Miaomiao Liu
She was born in 1982 and is currently an associate professor at Northeast Petroleum University, China. She received her master's degree from Ocean University of China in 2006 and her doctorate from Yanshan University in 2017. Her main research interests include community discovery and link prediction in social networks.
Jingfeng Guo
He is currently a professor and doctoral supervisor at Yanshan University, China. His research interests include database theory, data mining and social network analysis.
Jing Chen
She is currently an associate professor and master's supervisor at Yanshan University, China. Her research interests include community discovery and information dissemination in social networks.
[1] M. E. J. Newman, "Analysis of weighted networks," Physical Review E, vol. 70, no. 5, 2004.
[2] K. Subramani, A. Velkov, I. Ntoutsi, P. Kroger, H. P. Kriegel, "Density-based community detection in social networks," in Proceedings of 2011 IEEE 5th International Conference on Internet Multimedia Systems Architecture and Application, Bangalore, India, 2011, pp. 1-8.
[3] R. Liu, S. Feng, R. Shi, W. Guo, "Weighted graph clustering for community detection of large social networks," Procedia Computer Science, vol. 31, pp. 85-94, 2014. doi: 10.1016/j.procs.2014.05.248.
[4] T. Sharma, "Finding communities in weighted signed social networks," in Proceedings of 2012 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, Istanbul, Turkey, 2012, pp. 978-982.
[5] Z. Lu, Y. Wen, G. Cao, "Community detection in weighted networks: algorithms and applications," in Proceedings of 2013 IEEE International Conference on Pervasive Computing and Communications (PerCom), San Diego, CA, 2013, pp. 179-184.
[6] K. Wang, G. H. Lv, Z. W. Liang, M. Y. Ye, "Detecting community in weighted complex network based on similarities," Journal of Sichuan University (Natural Science Edition), vol. 51, no. 6, pp. 1170-1176, 2014.
[7] W. Q. Lin, F. S. Lu, Z. Y. Ding, Q. Y. Wu, B. Zhou, Y. Jia, "Parallel computing hierarchical community approach based on weighted-graph," Journal of Software, vol. 6, no. 23, pp. 1517-1530, 2012.
[8] S. Wang, "Community detection based on the interaction modularity on weighted graphs," Yunnan University, Kunming, China, 2014.
[9] P. Zhan, "Implementation of parallelized method for local community detection in weighted complex networks," South China University of Technology, Guangzhou, China, 2013.
[10] J. Zhao, J. An, "Community detection algorithm for directed and weighted network," Application Research of Computers, vol. 31, no. 12, pp. 3795-3799, 2014.
[11] Z. Yao, "The analysis and prediction of weighted complex networks," Qingdao Technological University, Qingdao, China, 2012.
[12] J. Guo, M. Liu, L. Liu, X. Chen, "An improved community discovery algorithm in weighted social networks," ICIC Express Letters, vol. 10, no. 1, pp. 35-41, 2016.
[13] X. Liu, "Community structure detection in complex networks via objective function optimization," National University of Defense Technology, Changsha, China, 2012.
Received: March 3 2017
Revision received: April 18 2017
Accepted: May 12 2017
Corresponding Author: Miaomiao Liu* ([email protected])
Miaomiao Liu*, Northeast Petroleum University, Daqing, China, [email protected]
Jingfeng Guo**, College of Information Science and Engineering, Yanshan University, Qinhuangdao, China, [email protected]
Jing Chen**, College of Information Science and Engineering, Yanshan University, Qinhuangdao, China, [email protected]
Parabolic reaction-diffusion systems with nonlocal coupled diffusivity terms
Homogenization of trajectory attractors of 3D Navier-Stokes system with randomly oscillating force
May 2017, 37(5): 2395-2430. doi: 10.3934/dcds.2017104
Stability of pyramidal traveling fronts in the degenerate monostable and combustion equations Ⅰ
Zhen-Hui Bu and Zhi-Cheng Wang
School of Mathematics and Statistics, Lanzhou University, Lanzhou, Gansu 730000, China
* Corresponding author: Z.-C. Wang
Received July 2016 Revised December 2016 Published February 2017
This paper is concerned with traveling curved fronts in reaction-diffusion equations with degenerate monostable and combustion nonlinearities. For a given admissible pyramid in three-dimensional space, the existence of a pyramidal traveling front was proved recently by Wang and Bu [30]. By constructing new supersolutions and developing the arguments of Taniguchi [25] for the Allen-Cahn equation, in this paper we first characterize the pyramidal traveling front as a combination of planar fronts on the lateral surfaces, and then establish the uniqueness and asymptotic stability of such three-dimensional pyramidal traveling fronts under the condition that the given perturbations decay at infinity.
Keywords: Pyramidal traveling front, reaction diffusion equation, degenerate monostable nonlinearity, combustion nonlinearity, stability.
Mathematics Subject Classification: Primary: 35B35, 35K57; Secondary: 35K55.
Citation: Zhen-Hui Bu, Zhi-Cheng Wang. Stability of pyramidal traveling fronts in the degenerate monostable and combustion equations Ⅰ. Discrete & Continuous Dynamical Systems - A, 2017, 37 (5) : 2395-2430. doi: 10.3934/dcds.2017104
D. G. Aronson and H. F. Weinberger, Multidimensional nonlinear diffusions arising in population genetics, Adv. Math., 30 (1978), 33-76. doi: 10.1016/0001-8708(78)90130-5. Google Scholar
A. Bonnet and F. Hamel, Existence of non-planar solutions of a simple model of premixed Bunsen flames, SIAM J. Math. Anal., 31 (1999), 80-118. doi: 10.1137/s0036141097316391. Google Scholar
Z.-H. Bu and Z.-C. Wang, Curved fronts of monostable reaction-advection-diffusion equations in space-time periodic media, Commun. Pure Appl. Anal., 15 (2016), 139-160. doi: 10.3934/cpaa.2016.15.139. Google Scholar
Z. -H. Bu and Z. -C. Wang, Global stability of V-shaped traveling fronts in combustion and degenerate monostable equations, submitted. Google Scholar
Z. -H. Bu and Z. -C. Wang, Stability of pyramidal traveling fronts in degenerate monostable and combustion equations Ⅱ, preprint. Google Scholar
M. El Smaily, F. Hamel and R. Huang, Two-dimensional curved fronts in a periodic shear flow, Nonlinear Anal., 74 (2011), 6469-6486. doi: 10.1016/j.na.2011.06.030. Google Scholar
D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer, Berlin, 2001. Google Scholar
F. Hamel, Bistable transition fronts in $\mathbb{R}^{N}$, Adv. Math., 289 (2016), 279-344. doi: 10.1016/j.aim.2015.11.033. Google Scholar
F. Hamel and R. Monneau, Solutions of semilinear elliptic equations in $\mathbb{R}^{N}$ with conicalshaped level sets, Comm. Partial Differential Equations, 25 (2000), 769-819. doi: 10.1080/03605300008821532. Google Scholar
F. Hamel and N. Nadirashvili, Travelling fronts and entire solutions of the Fisher-KPP equation in $\mathbb{R}^{N}$, Arch. Ration. Mech. Anal., 157 (2001), 91-163. doi: 10.1007/PL00004238. Google Scholar
F. Hamel, R. Monneau and J.-M. Roquejoffre, Stability of conical fronts in a model for conical flames in two space dimensions, Ann. Sci. École Normale Sup., 37 (2004), 469-506. doi: 10.1016/j.ansens.2004.03.001. Google Scholar
F. Hamel, R. Monneau and J.-M. Roquejoffre, Existence and qualitative properties of multidimensional conical bistable fronts, Discrete Contin. Dyn. Syst., 13 (2005), 1069-1096. doi: 10.3934/dcds.2005.13.1069. Google Scholar
F. Hamel, R. Monneau and J.-M Roquejoffre, Asymptotic properties and classification of bistable fronts with Lipschitz level sets, Discrete Contin. Dyn. Syst., 14 (2006), 75-92. doi: 10.3934/dcds.2006.14.75. Google Scholar
M. Haragus and A. Scheel, Corner defects in almost planar interface propagation, Ann. Inst. H. Poincaré Anal. Non Linéaire, 23 (2006), 283-329. doi: 10.1016/j.anihpc.2005.03.003. Google Scholar
M. Haragus and A. Scheel, Almost planar waves in anisotropic media, Comm. Partial Differential Equations, 31 (2006), 791-815. doi: 10.1080/03605300500361420. Google Scholar
R. Huang, Stability of travelling fronts of the Fisher-KPP equation in $\mathbb{R}^{N}$, Nonlinear Diff. Eq. Appl., 15 (2008), 599-622. doi: 10.1007/s00030-008-7041-0. Google Scholar
Y. Kurokawa and M. Taniguchi, Multi-dimensional pyramidal traveling fronts in Allen-Cahn equations, Proc. Roy. Soc. Edinburgh Sect. A, 141 (2011), 1031-1054. doi: 10.1017/S0308210510001253. Google Scholar
J. A. Leach, D. J. Needham and A. L. Kay, The evolution of reaction-diffusion waves in a class of scalar reaction-diffusion equations: Algebraic decay rates, Phys. D, 167 (2002), 153-182. doi: 10.1016/S0167-2789(02)00428-1. Google Scholar
W.-M. Ni and M. Taniguchi, Traveling fronts of pyramidal shapes in competition-diffusion systems, Netw. Heterog. Media, 8 (2013), 379-395. doi: 10.3934/nhm.2013.8.379. Google Scholar
H. Ninomiya and M. Taniguchi, Global stability of traveling curved fronts in the Allen-Cahn equations, Discrete Contin. Dyn. Syst., 15 (2006), 819-832. doi: 10.1016/j.jde.2004.06.011. Google Scholar
H. Ninomiya and M. Taniguchi, Existence and global stability of traveling curved fronts in the Allen-Cahn equations, J. Differential Equations, 213 (2005), 204-233. doi: 10.1016/j.jde.2004.06.011. Google Scholar
D. H. Sattinger, Monotone methods in nonlinear elliptic and parabolic boundary value problems, Indiana Univ. Math. J., 21 (1972), 979-1000. Google Scholar
W.-J. Sheng, W.-T. Li and Z.-C. Wang, Periodic pyramidal traveling fronts of bistable reaction-diffusion equations with time-periodic nonlinearity, J. Differential Equations, 252 (2012), 2388-2424. doi: 10.1016/j.jde.2011.09.016. Google Scholar
M. Taniguchi, Traveling fronts of pyramidal shapes in the Allen-Cahn equations, SIAM J. Math. Anal., 39 (2007), 319-344. doi: 10.1137/060661788. Google Scholar
M. Taniguchi, The uniqueness and asymptotic stability of pyramidal traveling fronts in the Allen-Cahn equations, J. Differential Equations, 246 (2009), 2103-2130. doi: 10.1016/j.jde.2008.06.037. Google Scholar
M. Taniguchi, Multi-dimensional traveling fronts in bistable reaction-diffusion equations, Discrete Contin. Dyn. Syst., 32 (2012), 1011-1046. doi: 10.3934/dcds.2012.32.1011. Google Scholar
M. Taniguchi, An $(N-1)$-dimensional convex compact set gives an $N$-dimensional traveling front in the Allen-Cahn equation, SIAM J. Math. Anal., 47 (2015), 455-476. doi: 10.1137/130945041. Google Scholar
M. Taniguchi, Convex compact sets in $\mathbb{R}^{N-1}$ give traveling fronts of cooperation-diffusion systems in $\mathbb{R}^{N}$, J. Differential Equations, 260 (2016), 4301-4338. doi: 10.1016/j.jde.2015.11.010. Google Scholar
A. I. Volpert, V. A. Volpert and V. A. Volpert, Traveling Wave Solutions of Parabolic Systems 140, Amer. Math. Soc. , Providence, RI, 1994. Google Scholar
Z.-C. Wang and Z.-H. Bu, Nonplanar traveling fronts in reaction-diffusion equations with combustion and degenerate Fisher-KPP nonlinearity, J. Differential Equations, 260 (2016), 6405-6450. doi: 10.1016/j.jde.2015.12.045. Google Scholar
Z.-C. Wang, W.-T. Li and S. Ruan, Existence, uniqueness and stability of pyramidal traveling fronts in reaction-diffusion systems, Sci. China Math., 59 (2016), 1869-1908. doi: 10.1007/s11425-016-0015-x. Google Scholar
Z.-C. Wang, W.-T. Li and S. Ruan, Existence and stability of traveling wave fronts in reaction advecion diffusion equations with nonlocal delay, J. Differential Equations, 238 (2007), 153-200. doi: 10.1016/j.jde.2007.03.025. Google Scholar
Z.-C. Wang, H.-L. Niu and S. Ruan, On the existence of axisymmetric traveling fronts in the Lotka-Volterra competition-diffusion system in $\mathbb{R}^{3}$, Discrete Contin. Dyn. Syst -B, 22 (2017), 1111-1144. doi: 10.3934/dcdsb.2017055. Google Scholar
Z.-C. Wang and J. Wu, Periodic traveling curved fronts in reaction-diffusion equation with bistable time-periodic nonlinearity, J. Differential Equations, 250 (2011), 3196-3229. doi: 10.1016/j.jde.2011.01.017. Google Scholar
Z.-C. Wang, Traveling curved fronts in monotone bistable systems, Discrete Contin. Dyn. Syst., 32 (2012), 2339-2374. doi: 10.3934/dcds.2012.32.2339. Google Scholar
Z.-C. Wang, Cylindrically symmetric traveling fronts in reaction-diffusion equations with bistable nonlinearity, Proc. Roy. Soc. Edinburgh Sect. A, 145 (2015), 1053-1090. doi: 10.1017/S0308210515000268. Google Scholar
Y.-P. Wu and X.-X. Xing, Stability of traveling waves with critical speeds for p-degree Fisher-type equations, Discrete Contin. Dyn. Syst., 20 (2008), 1123-1139. doi: 10.3934/dcds.2008.20.1123. Google Scholar
Y.-P. Wu, X.-X. Xing and Q.-X. Ye, Stability of traveling waves with algebraic decay for n-degree Fisher-type equations, Discrete Contin. Dyn. Syst., 16 (2006), 47-66. doi: 10.3934/dcds.2006.16.47. Google Scholar
Shi-Liang Wu, Yu-Juan Sun, San-Yang Liu. Traveling fronts and entire solutions in partially degenerate reaction-diffusion systems with monostable nonlinearity. Discrete & Continuous Dynamical Systems - A, 2013, 33 (2) : 921-946. doi: 10.3934/dcds.2013.33.921
Rui Huang, Ming Mei, Yong Wang. Planar traveling waves for nonlocal dispersion equation with monostable nonlinearity. Discrete & Continuous Dynamical Systems - A, 2012, 32 (10) : 3621-3649. doi: 10.3934/dcds.2012.32.3621
Zhen-Hui Bu, Zhi-Cheng Wang. Global stability of V-shaped traveling fronts in combustion and degenerate monostable equations. Discrete & Continuous Dynamical Systems - A, 2018, 38 (5) : 2251-2286. doi: 10.3934/dcds.2018093
Michaël Bages, Patrick Martinez. Existence of pulsating waves in a monostable reaction-diffusion system in solid combustion. Discrete & Continuous Dynamical Systems - B, 2010, 14 (3) : 817-869. doi: 10.3934/dcdsb.2010.14.817
Michio Urano, Kimie Nakashima, Yoshio Yamada. Transition layers and spikes for a reaction-diffusion equation with bistable nonlinearity. Conference Publications, 2005, 2005 (Special) : 868-877. doi: 10.3934/proc.2005.2005.868
Maho Endo, Yuki Kaneko, Yoshio Yamada. Free boundary problem for a reaction-diffusion equation with positive bistable nonlinearity. Discrete & Continuous Dynamical Systems - A, 2019, 0 (0) : 0-0. doi: 10.3934/dcds.2020033
Shi-Liang Wu, Tong-Chang Niu, Cheng-Hsiung Hsu. Global asymptotic stability of pushed traveling fronts for monostable delayed reaction-diffusion equations. Discrete & Continuous Dynamical Systems - A, 2017, 37 (6) : 3467-3486. doi: 10.3934/dcds.2017147
Shi-Liang Wu, Wan-Tong Li, San-Yang Liu. Exponential stability of traveling fronts in monostable reaction-advection-diffusion equations with non-local delay. Discrete & Continuous Dynamical Systems - B, 2012, 17 (1) : 347-366. doi: 10.3934/dcdsb.2012.17.347
Xiaojie Hou, Yi Li, Kenneth R. Meyer. Traveling wave solutions for a reaction diffusion equation with double degenerate nonlinearities. Discrete & Continuous Dynamical Systems - A, 2010, 26 (1) : 265-290. doi: 10.3934/dcds.2010.26.265
Hongmei Cheng, Rong Yuan. Multidimensional stability of disturbed pyramidal traveling fronts in the Allen-Cahn equation. Discrete & Continuous Dynamical Systems - B, 2015, 20 (4) : 1015-1029. doi: 10.3934/dcdsb.2015.20.1015
Yuri Latushkin, Roland Schnaubelt, Xinyao Yang. Stable foliations near a traveling front for reaction diffusion systems. Discrete & Continuous Dynamical Systems - B, 2017, 22 (8) : 3145-3165. doi: 10.3934/dcdsb.2017168
Fengxin Chen. Stability and uniqueness of traveling waves for system of nonlocal evolution equations with bistable nonlinearity. Discrete & Continuous Dynamical Systems - A, 2009, 24 (3) : 659-673. doi: 10.3934/dcds.2009.24.659
Tong Li, Jeungeun Park. Stability of traveling waves of models for image processing with non-convex nonlinearity. Communications on Pure & Applied Analysis, 2018, 17 (3) : 959-985. doi: 10.3934/cpaa.2018047
Xiaojie Hou, Wei Feng. Traveling waves and their stability in a coupled reaction diffusion system. Communications on Pure & Applied Analysis, 2011, 10 (1) : 141-160. doi: 10.3934/cpaa.2011.10.141
Lianzhang Bao, Zhengfang Zhou. Traveling wave solutions for a one dimensional model of cell-to-cell adhesion and diffusion with monostable reaction term. Discrete & Continuous Dynamical Systems - S, 2017, 10 (3) : 395-412. doi: 10.3934/dcdss.2017019
Kota Ikeda, Masayasu Mimura. Traveling wave solutions of a 3-component reaction-diffusion model in smoldering combustion. Communications on Pure & Applied Analysis, 2012, 11 (1) : 275-305. doi: 10.3934/cpaa.2012.11.275
Ming Mei, Yau Shu Wong. Novel stability results for traveling wavefronts in an age-structured reaction-diffusion equation. Mathematical Biosciences & Engineering, 2009, 6 (4) : 743-752. doi: 10.3934/mbe.2009.6.743
Claude-Michel Brauner, Josephus Hulshof, Luca Lorenzi, Gregory I. Sivashinsky. A fully nonlinear equation for the flame front in a quasi-steady combustion model. Discrete & Continuous Dynamical Systems - A, 2010, 27 (4) : 1415-1446. doi: 10.3934/dcds.2010.27.1415
Wei-Ming Ni, Masaharu Taniguchi. Traveling fronts of pyramidal shapes in competition-diffusion systems. Networks & Heterogeneous Media, 2013, 8 (1) : 379-395. doi: 10.3934/nhm.2013.8.379
A note on network repair crew scheduling and routing for emergency relief distribution problem
Coordinating the supplier-retailer supply chain under noise effect with bundling and inventory strategies
Ata Allah Taleizadeh 1, Leopoldo Eduardo Cárdenas-Barrón 2, and Roya Sohani 3
School of Industrial Engineering, College of Engineering, University of Tehran, Tehran, Iran
School of Engineering and Sciences, Tecnológico de Monterrey, E. Garza Sada 2501 Sur, C.P. 64849, Monterrey, Nuevo León, México
Department of Industrial Engineering, Islamic Azad University, South Tehran Branch, Tehran, Iran
* Corresponding author: Tel. +52 81 83284235, Fax +52 81 83284153. E-mail address:[email protected] (L.E. Cárdenas-Barrón)
Received May 2017 Revised January 2018 Published October 2019 Early access August 2018
In the current competitive market, product variety is high and demand is highly uncertain, and coordinating the supply chain helps reduce these uncertainties. A supply chain is typically managed from one of two viewpoints: 1) a centralized supply chain or 2) a decentralized supply chain, and coordination is possible in both. In the centralized supply chain, a global decision maker takes all decisions so as to maximize the profit of the whole supply chain, and the information required to make the best decisions is available to all members of the chain. In the decentralized supply chain, by contrast, each member decides separately and sequentially how to maximize its own profit. To coordinate the supply chain efficiently, the supplier and the retailer enter a coordination contract that allows the decentralized decisions to maximize the profit of the entire supply chain. In this context, the supplier-retailer chain faces a two-stage decision model. In the first stage, the supplier, based on prior knowledge of the market, decides how much production capacity to reserve for the retailer. In the second stage, after the demand information is updated, the retailer determines the bundle price and the quantity of bundles to order. This paper considers a supply chain comprised of one supplier and one retailer in which two complementary fashion products are manufactured and sold as a bundle. The bundle has a short selling season and a stochastic, price-dependent demand with a high level of uncertainty; the demand rates are therefore modeled as uncertain and dependent on the selling price and on a random noise affecting the market. Profit-maximization models are developed for the centralized and decentralized supply chains to determine the production capacity reservation, the order quantity of bundled products and the bundle selling price. The applicability of the developed models and solution method is illustrated with a numerical example.
Keywords: Pricing, inventory, supply chain coordination, risk and profit sharing.
Mathematics Subject Classification: 90B05.
Citation: Ata Allah Taleizadeh, Leopoldo Eduardo Cárdenas-Barrón, Roya Sohani. Coordinating the supplier-retailer supply chain under noise effect with bundling and inventory strategies. Journal of Industrial & Management Optimization, 2019, 15 (4) : 1701-1727. doi: 10.3934/jimo.2018118
M. Armstrong and J. Vickers, Competitive non-linear pricing and bundling, The Review of Economic Studies, 77 (2010), 30-60. doi: 10.1111/j.1467-937X.2009.00562.x. Google Scholar
R. Arora, Price bundling and framing strategies for complementary products, Journal of Product and Brand Management, 17 (2008), 475-484. Google Scholar
M. Banciu and F. Ødegaard, Optimal product bundling with dependent valuations: The price of independence, European Journal of Operational Research, 255 (2016), 481-495. doi: 10.1016/j.ejor.2016.05.022. Google Scholar
D. Barnes-Schuster, Y. Bassok and R. Anupindi, Coordination and flexibility in supply contracts with options, Manufacturing and Service Operations Management, 4 (2002), 171-207. Google Scholar
R. J. Bennett and P. J. Robson, Exploring the market potential and bundling of business association services, Journal of Services Marketing, 15 (2001), 222-239. Google Scholar
H. K. Bhargava, Retailer-driven product bundling in a distribution channel, Marketing Science, 31 (2012), 1014-1021. Google Scholar
G. R. Bitran and J. C. Ferrer, On pricing and composition of bundles, Production and Operations Management, 16 (2007), 93-108. Google Scholar
D. Brito and H. Vasconcelos, Interfirm bundling and vertical product differentiation, The Scandinavian Journal of Economics, 117 (2015), 1-27. Google Scholar
Z. Bulut, Ü. Gürler and A. Sen, Bundle pricing of inventories with stochastic demand, European Journal of Operational Research, 197 (2009), 897-911. doi: 10.1016/j.ejor.2006.09.106. Google Scholar
P. G. Cachon, Supply chain coordination with contracts. In: Graves, S., de Kok, T. (Eds.), Handbooks in Operations Research and Management Science. North Holland Press, 11 (2003), 229-340. Google Scholar
G. P. Cachon and M. A. Lariviere, Supply chain coordination with revenue-sharing contracts: strengths and limitations, Management Science(1), 51 (2005), 30-44. Google Scholar
A. Chakravarty, A. Mild and A. Taudes, Bundling decisions in supply chains, European Journal of Operational Research, 231 (2013), 617-630. Google Scholar
H. Chen, Y. F. Chen, C. H. Chiu, T. M. Choi and S. Sethi, Coordination mechanism for the supply chain with leadtime consideration and price-dependent demand, European Journal of Operational Research, 203 (2010), 70-80. Google Scholar
J. Chen and P. C. Bell, Coordinating a decentralized supply chain with customer returns and price-dependent stochastic demand using a buyback policy, European Journal of Operational Research, 212 (2011), 293-300. doi: 10.1016/j.ejor.2011.01.036. Google Scholar
K. L. Donohue, Efficient supply contracts for fashion goods with forecast updating and two production modes, Management Science, 46 (2000), 1397-1411. Google Scholar
J. C. Eckalbar, Closed-form solutions to bundling problems, Journal of Economics and Management Strategy, 19 (2010), 513-544. Google Scholar
H. Estelami, Consumer savings in complementary product bundles, Journal of Marketing Theory and Practice, 7 (1999), 107-114. Google Scholar
J. C. Ferrer, H. Mora and F. Olivares, On pricing of multiple bundles of products and services, European Journal of Operational Research, 206 (2010), 197-208. Google Scholar
J. S. Gans and S. P. King, Paying for loyalty: Product bundling in oligopoly, The Journal of Industrial Economics, 54 (2006), 43-62. Google Scholar
R. N. Giri, S. K. Mondal and M. Maiti, Bundle pricing strategies for two complementary products with different channel powers, Annals of Operations Research, (2017), 1-25. doi: 10.1007/s10479-017-2632-y. Google Scholar
M. Girju, A. Prasad and B. T. Ratchford, Pure components versus pure bundling in a marketing channel, Journal of Retailing, 89 (2013), 423-437. Google Scholar
J. P. Guiltinan, The price bundling of services: A normative framework, The Journal of Marketing, (1987), 74-85. Google Scholar
Ü. Gürler, S. Öztop and A. Şen, Optimal bundle formation and pricing of two products with limited stock, International Journal of Production Economics, 118 (2009), 442-462. Google Scholar
R. Glenn Hubbard, A. Saha and J. Lee, To bundle or not to bundle: Firms' choices under pure bundling, International Journal of the Economics of Business, 14 (2007), 59-83. Google Scholar
M. Li, H. Feng, F. Chen and J. Kou, Numerical investigation on mixed bundling and pricing of information products, International Journal of Production Economics, 144 (2013), 560-571. Google Scholar
P. P. Mathur and J. Shah, Supply chain contracts with capacity investment decision: Two-way penalties for coordination, International Journal of Production Economics, 114 (2008), 56-70. Google Scholar
C. Matutes and P. Regibeau, Compatibility and bundling of complementary goods in a duopoly, The Journal of Industrial Economics, 40 (1992), 37-54. Google Scholar
K. F. McCardle, K. Rajaram and C. S. Tang, Bundling retail products: Models and analysis, European Journal of Operational Research, 177 (2007), 1197-1217. Google Scholar
S. K. Mukhopadhyay, X. Yue and X. Zhu, A Stackelberg model of pricing of complementary goods under information asymmetry, International Journal of Production Economics, 134 (2011), 424-433. Google Scholar
B. Nalebuff, Bundling as an entry barrier, The Quarterly Journal of Economics, 119 (2004), 159-187. Google Scholar
H. Oppewal and B. Holyoake, Bundling and retail agglomeration effects on shopping behavior, Journal of Retailing and Consumer Services, 11 (2004), 61-74. Google Scholar
E. C. Rosenthal, J. L. Zydiak and S. S. Chaudhry, Vendor selection with bundling, Decision Sciences, 26 (1995), 35-48. Google Scholar
M. Sheikhzadeh and E. Elahi, Product bundling: Impacts of product heterogeneity and risk considerations, International Journal of Production Economics, 144 (2013), 209-222. Google Scholar
S. Sheng, A. M. Parker and K. Nakamoto, The effects of price discount and product complementarity on consumer evaluations of bundle components, Journal of Marketing Theory and Practice, 15 (2007), 53-64. Google Scholar
B. L. Simonin and J. A. Ruth, Bundling as a strategy for new product introduction: Effects on consumers' reservation prices for the bundle, the new product, and its tie-in, Journal of Business Research, 33 (1995), 219-230. Google Scholar
A. A. Taleizadeh, S. T. A. Niaki, M. B. Aryanezhad and A. F. Tafti, A genetic algorithm to optimize multiproduct multiconstraint inventory control systems with stochastic replenishment intervals and discount, The International Journal of Advanced Manufacturing Technology, 51 (2010), 311-323. Google Scholar
A. A. Taleizadeh and M. Noori-daryan, Pricing, manufacturing and inventory policies for raw material in a three-level supply chain, International Journal of Systems Science, 47 (2016), 919-931. doi: 10.1080/00207721.2014.909544. Google Scholar
A. A. Taleizadeh, M. Noori-Daryan and K. Govindan, Pricing and ordering decisions of two competing supply chains with different composite policies: A Stackelberg game-theoretic approach, International Journal of Production Research, 54 (2016), 2807-2836. Google Scholar
A. A. Taleizadeh, M. Noori-daryan and R. Tavakkoli-Moghaddam, Pricing and ordering decisions in a supply chain with imperfect quality items and inspection under buyback of defective items, International Journal of Production Research, 53 (2015), 4553-4582. Google Scholar
A. A. Taleizadeh and D. W. Pentico, An economic order quantity model with a known price increase and partial backordering, European Journal of Operational Research, 228 (2013), 516-525. doi: 10.1016/j.ejor.2013.02.014. Google Scholar
A. A. Taleizadeh, D. W. Pentico, M. S. Jabalameli and M. Aryanezhad, An economic order quantity model with multiple partial prepayments and partial backordering, Mathematical and Computer Modelling, 57 (2013), 311-323. doi: 10.1016/j.mcm.2012.07.002. Google Scholar
T. A. Taylor, Supply chain coordination under channel rebates with sales effort effects, Management Science, 48 (2002), 992-1007. Google Scholar
A. Vamosiu, Optimal bundling under imperfect competition, International Journal of Production Economics, 195 (2018), 45-53. Google Scholar
A. G. Vaubourg, Differentiation and discrimination in a duopoly with two bundles, International Journal of Industrial Organization, 24 (2006), 753-762. Google Scholar
R. Venkatesh and W. Kamakura, Optimal bundling and pricing under a monopoly: Contrasting complements and substitutes from independently valued products, Journal of Business, 76 (2003), 211-231. Google Scholar
Q. Wang, Discount pricing policies and the coordination of decentralized distribution systems, Decision Sciences, 36 (2005), 627-646. Google Scholar
Y. Wang, L. Sun, R. Qu and G. Li, Price and service competition with maintenance service bundling, Journal of Systems Science and Systems Engineering, 24 (2015), 168-189. Google Scholar
A. Wäppling, C. Strugnell and H. Farley, Product bundling strategies in Swedish markets: links to business orientation and perceived effects on consumer influence, International Journal of Consumer Studies, 34 (2010), 19-27. Google Scholar
R. Yan, Managing channel coordination in a multi-channel manufacturer-retailer supply chain, Industrial Marketing Management, 40 (2011), 636-642. Google Scholar
R. Yan and S. Bandyopadhyay, The profit benefits of bundle pricing of complementary products, Journal of Retailing and Consumer Services, 18 (2011), 355-361. Google Scholar
R. Yan, C. Myers, J. Wang and S. Ghose, Bundling products to success: The influence of complementarity and advertising, Journal of Retailing and Consumer Services, 21 (2014), 48-53. Google Scholar
R. Yan and Z. Pei, Retail services and firm profit in a dual-channel market, Journal of Retailing and Consumer Services, 16 (2009), 306-314. Google Scholar
X. Yue, S. K. Mukhopadhyay and X. Zhu, A Bertrand model of pricing of complementary goods under information asymmetry, Journal of Business Research, 59 (2006), 1182-1192. Google Scholar
Figure 1. Impact of $a_1$ between two products on the retailer's pricing strategy
Figure 2. Impact of $a_1$ between two products on the wholesale pricing strategy
Table 1. Some recent works related to bundling strategy
Literature Strategies Selling price Demand rate Situation
Chakravarty et al. [12] Bundling Bundle price Selling price Decentralized supply chains
Li et al. [25] Mix bundling Bundle price Selling price Bi-level programming
Yan et al. [51] Bundle pricing and advertising Bundle price Selling price Product complementary and advertisement of bundle product
Wang et al. [47] Service bundling — Service and Price bundling Duopoly competitive environment
Banciu and Ødegaard [3] Different bundling — — Simulation technique
Giri et al. [20] Pricing Bundling price Linearly dependent on price Duopoly market
Vamosiu [43] Imperfect Competition Mixed bundling — Pure bundling
This paper Bundling Bundle selling price Uncertain, selling price and random noise effect on market Centralized and decentralized supply chains
Table 2. Effects of basic demand size $a_1$ to the contract for product 1 when $Q_{1}^{c} <M_{1}^{c} = 487$
$a_1$ $p_{1} $ $w_{1} $ $d_{1} $ $\alpha _{1} $ $Q_{1}^{c} $ $F(s_{1} )$
500 237 151 133.50 2.64 316 0.887
700 326 192 178.00 -0.50 424 0.915
819.9 380 218 205.00 -2.36 487 0.927
Table 3. Effects of basic demand size $a_1$ to the contract for product 1, $ Q_{1}^{c} = M_{1}^{c} = 487$
$a_1$ $p_{1} $ $w_{1} $ $d_{1} $ $\alpha _{1} $ $F(s_{1} )$
820 405 231 217.50 -1.50 0.975
1000 541 298 285.50 -2.46 0.855
Table 4. Comparison between coordination contract vs price-only contract for profit of product 1, Coordination contract: $M_{1}^{c} $ = 487 and total profit = 279270
$w_{1} $ Capacity Supplier profit Retailer profit Total profit
163 449 142000 123210 265210
250 401 10954 90740 101694
Table 5. Effects of basic demand size to the contract under bundling policy, $ Q_{B}^{c} <M_{B}^{c} = 867 $
$a_1$ $p_{1B} $ $w_{B} $ $d_{B} $ $\alpha _{B} $ $Q_{B}^{c} $ $F(s_{B} )$
Table 6. Effects of basic demand size on the contract under bundling policy when, $ Q_{B}^{c} = M_{B}^{c} = 867 $
$a_1$ $p_{2B} $ $w_{B} $ $d_{B} $ $\alpha _{B} $ $F(s_{B} )$
850 474 298 272.0 -5.90 0.750
Table 7. Profit with bundling policy, Proposed contract: $M_{B}^{c} $ = 867 and total profit = 260621
$w_{B} $ Capacity Supplier profit Retailer profit Total profit
259 794 126730 87871 214601
Table 8. The results in numerical analysis
Percent change $p_{1} $ $p_{2} $ $p_{B} $ $F(s_{B} )$ $Q_{B}^{c} $ $w_{B} $ $\alpha _{B} $ $d_{B} $ Retailer profit Supplier profit
$a_{2} =0.5$ +50 52.8 52.48 38.44 6.02 -29.97 33.09 -279.44 33 -7.07 -4.66
+25 25.83 26.03 18.63 3.12 -12.82 15.83 -121.39 15.99 4.00 3.47
+15 15.42 15.70 11.08 2.04 -7.05 9.35 -68.06 9.51 3.99 3.13
-15 -15.42 -15.29 -10.61 -2.28 6.09 -8.99 51.94 -9.11 -7.31 -5.91
-25 -25.42 -25.62 -17.22 -4.08 9.29 -14.03 90.83 -14.98 -14.60 -9.97
-50 Infeasible
$\theta=0.25$ +50 10.42 10.33 1.65 0.36 2.08 1.08 17.78 1.42 4.89 3.53
+25 5.00 4.96 0.94 0.24 0.96 0.72 8.33 0.81 2.29 1.86
-15 -3.33 -3.31 -0.71 0.00 -0.32 -0.36 -3.06 -0.61 -1.91 -0.08
-25 -5.42 -4.96 -0.94 -0.12 -0.96 -0.72 -6.39 -0.81 -2.32 -1.05
-50 -10 -10.33 -1.65 -0.24 -1.06 -1.08 -14.72 -1.42 -4.72 -3.42
$\lambda =0.35$ +50 Infeasible
+25 0.00 0.00 13.21 2.40 -7.05 11.15 -66.39 11.34 6.83 5.81
+15 0.00 0.00 8.25 1.56 -4.01 6.83 -39.44 6.88 5.11 4.10
-15 0.00 0.00 -8.25 -1.68 4.17 -6.47 36.67 -7.29 -7.63 -4.22
-25 0.00 0.00 -13.44 -3.00 5.61 -11.15 55.83 -11.74 -12.06 -8.73
-50 0.00 0.00 -25.71 -6.83 8.33 -20.86 91.67 -22.06 -27.45 -19.74
Ali Naimi Sadigh, S. Kamal Chaharsooghi, Majid Sheikhmohammady. A game theoretic approach to coordination of pricing, advertising, and inventory decisions in a competitive supply chain. Journal of Industrial & Management Optimization, 2016, 12 (1) : 337-355. doi: 10.3934/jimo.2016.12.337
Sushil Kumar Dey, Bibhas C. Giri. Coordination of a sustainable reverse supply chain with revenue sharing contract. Journal of Industrial & Management Optimization, 2022, 18 (1) : 487-510. doi: 10.3934/jimo.2020165
Sanjoy Kumar Paul, Ruhul Sarker, Daryl Essam. Managing risk and disruption in production-inventory and supply chain systems: A review. Journal of Industrial & Management Optimization, 2016, 12 (3) : 1009-1029. doi: 10.3934/jimo.2016.12.1009
Benrong Zheng, Xianpei Hong. Effects of take-back legislation on pricing and coordination in a closed-loop supply chain. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021035
Yanhua Feng, Xuhui Xia, Lei Wang, Zelin Zhang. Pricing and coordination of competitive recycling and remanufacturing supply chain considering the quality of recycled products. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021089
Kai Kang, Taotao Lu, Jing Zhang. Financing strategy selection and coordination considering risk aversion in a capital-constrained supply chain. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021042
Bin Chen, Wenying Xie, Fuyou Huang, Juan He. Quality competition and coordination in a VMI supply chain with two risk-averse manufacturers. Journal of Industrial & Management Optimization, 2021, 17 (5) : 2903-2924. doi: 10.3934/jimo.2020100
Prasenjit Pramanik, Sarama Malik Das, Manas Kumar Maiti. Note on : Supply chain inventory model for deteriorating items with maximum lifetime and partial trade credit to credit risk customers. Journal of Industrial & Management Optimization, 2019, 15 (3) : 1289-1315. doi: 10.3934/jimo.2018096
Juliang Zhang, Jian Chen. Information sharing in a make-to-stock supply chain. Journal of Industrial & Management Optimization, 2014, 10 (4) : 1169-1189. doi: 10.3934/jimo.2014.10.1169
Juliang Zhang. Coordination of supply chain with buyer's promotion. Journal of Industrial & Management Optimization, 2007, 3 (4) : 715-726. doi: 10.3934/jimo.2007.3.715
Na Song, Ximin Huang, Yue Xie, Wai-Ki Ching, Tak-Kuen Siu. Impact of reorder option in supply chain coordination. Journal of Industrial & Management Optimization, 2017, 13 (1) : 449-475. doi: 10.3934/jimo.2016026
Jun Pei, Panos M. Pardalos, Xinbao Liu, Wenjuan Fan, Shanlin Yang, Ling Wang. Coordination of production and transportation in supply chain scheduling. Journal of Industrial & Management Optimization, 2015, 11 (2) : 399-419. doi: 10.3934/jimo.2015.11.399
Yeong-Cheng Liou, Siegfried Schaible, Jen-Chih Yao. Supply chain inventory management via a Stackelberg equilibrium. Journal of Industrial & Management Optimization, 2006, 2 (1) : 81-94. doi: 10.3934/jimo.2006.2.81
Nana Wan, Li Li, Xiaozhi Wu, Jianchang Fan. Risk minimization inventory model with a profit target and option contracts under spot price uncertainty. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021093
Min Li, Jiahua Zhang, Yifan Xu, Wei Wang. Effects of disruption risk on a supply chain with a risk-averse retailer. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021024
Kebing Chen, Tiaojun Xiao. Reordering policy and coordination of a supply chain with a loss-averse retailer. Journal of Industrial & Management Optimization, 2013, 9 (4) : 827-853. doi: 10.3934/jimo.2013.9.827
Chong Zhang, Yaxian Wang, Ying Liu, Haiyan Wang. Coordination contracts for a dual-channel supply chain under capital constraints. Journal of Industrial & Management Optimization, 2021, 17 (3) : 1485-1504. doi: 10.3934/jimo.2020031
Wei Chen, Fuying Jing, Li Zhong. Coordination strategy for a dual-channel electricity supply chain with sustainability. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021139
Tinghai Ren, Kaifu Yuan, Dafei Wang, Nengmin Zeng. Effect of service quality on software sales and coordination mechanism in IT service supply chain. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021165
Jonas C. P. Yu, H. M. Wee, K. J. Wang. Supply chain partnership for Three-Echelon deteriorating inventory model. Journal of Industrial & Management Optimization, 2008, 4 (4) : 827-842. doi: 10.3934/jimo.2008.4.827
Optimization of a model Fokker-Planck equation
KRM Home
On viscous quantum hydrodynamics associated with nonlinear Schrödinger-Doebner-Goldin models
September 2012, 5(3): 505-516. doi: 10.3934/krm.2012.5.505
Regularity criteria for the 3D MHD equations via partial derivatives
Xuanji Jia 1, and Yong Zhou 2,
Department of Mathematics, Zhejiang Normal University, Jinhua 321004, Zhejiang, P. R., China
Department of Mathematics, Zhejiang Normal University, Jinhua 321004, Zhejiang, China
Received December 2011 Revised February 2012 Published August 2012
In this paper, we establish two regularity criteria for the 3D MHD equations in terms of partial derivatives of the velocity field or the pressure. It is proved that if $\partial_3 u \in L^\beta(0,T; L^\alpha(\mathbb{R}^3)),~\mbox{with}~ \frac{2}{\beta}+\frac{3}{\alpha}\leq\frac{3(\alpha+2)}{4\alpha},~\alpha>2$, or $\nabla_h P \in L^\beta(0,T; L^{\alpha}(\mathbb{R}^3)),~\mbox{with}~\frac{2}{\beta}+\frac{3}{\alpha}< 3,~\alpha>\frac{9}{7},~\beta\geq 1$, then the weak solution $(u,b)$ is regular on $[0, T]$.
Keywords: regularity criteria, partial derivatives, MHD equations.
Mathematics Subject Classification: Primary: 35Q35, 35B65; Secondary: 76D0.
Citation: Xuanji Jia, Yong Zhou. Regularity criteria for the 3D MHD equations via partial derivatives. Kinetic & Related Models, 2012, 5 (3) : 505-516. doi: 10.3934/krm.2012.5.505
Xuanji Jia, Yong Zhou. Regularity criteria for the 3D MHD equations via partial derivatives. II. Kinetic & Related Models, 2014, 7 (2) : 291-304. doi: 10.3934/krm.2014.7.291
Jishan Fan, Tohru Ozawa. Regularity criteria for the magnetohydrodynamic equations with partial viscous terms and the Leray-$\alpha$-MHD model. Kinetic & Related Models, 2009, 2 (2) : 293-305. doi: 10.3934/krm.2009.2.293
Tomoyuki Suzuki. Regularity criteria in weak spaces in terms of the pressure to the MHD equations. Conference Publications, 2011, 2011 (Special) : 1335-1343. doi: 10.3934/proc.2011.2011.1335
Luigi C. Berselli, Jishan Fan. Logarithmic and improved regularity criteria for the 3D nematic liquid crystals models, Boussinesq system, and MHD equations in a bounded domain. Communications on Pure & Applied Analysis, 2015, 14 (2) : 637-655. doi: 10.3934/cpaa.2015.14.637
Guji Tian, Xu-Jia Wang. Partial regularity for elliptic equations. Discrete & Continuous Dynamical Systems - A, 2010, 28 (3) : 899-913. doi: 10.3934/dcds.2010.28.899
Jishan Fan, Tohru Ozawa. Regularity criteria for the 2D MHD system with horizontal dissipation and horizontal magnetic diffusion. Kinetic & Related Models, 2014, 7 (1) : 45-56. doi: 10.3934/krm.2014.7.45
Jiahong Wu. Regularity results for weak solutions of the 3D MHD equations. Discrete & Continuous Dynamical Systems - A, 2004, 10 (1&2) : 543-556. doi: 10.3934/dcds.2004.10.543
Sadek Gala. A new regularity criterion for the 3D MHD equations in $R^3$. Communications on Pure & Applied Analysis, 2012, 11 (3) : 973-980. doi: 10.3934/cpaa.2012.11.973
Igor Kukavica. On partial regularity for the Navier-Stokes equations. Discrete & Continuous Dynamical Systems - A, 2008, 21 (3) : 717-728. doi: 10.3934/dcds.2008.21.717
Jishan Fan, Yasuhide Fukumoto, Yong Zhou. Logarithmically improved regularity criteria for the generalized Navier-Stokes and related equations. Kinetic & Related Models, 2013, 6 (3) : 545-556. doi: 10.3934/krm.2013.6.545
Patrick Penel, Milan Pokorný. Improvement of some anisotropic regularity criteria for the Navier--Stokes equations. Discrete & Continuous Dynamical Systems - S, 2013, 6 (5) : 1401-1407. doi: 10.3934/dcdss.2013.6.1401
Zijin Li, Xinghong Pan. Some Remarks on regularity criteria of Axially symmetric Navier-Stokes equations. Communications on Pure & Applied Analysis, 2019, 18 (3) : 1333-1350. doi: 10.3934/cpaa.2019064
Kai Liu. Stationary solutions of neutral stochastic partial differential equations with delays in the highest-order derivatives. Discrete & Continuous Dynamical Systems - B, 2018, 23 (9) : 3915-3934. doi: 10.3934/dcdsb.2018117
Yu-Zhu Wang, Yin-Xia Wang. Local existence of strong solutions to the three dimensional compressible MHD equations with partial viscosity. Communications on Pure & Applied Analysis, 2013, 12 (2) : 851-866. doi: 10.3934/cpaa.2013.12.851
Yukang Chen, Changhua Wei. Partial regularity of solutions to the fractional Navier-Stokes equations. Discrete & Continuous Dynamical Systems - A, 2016, 36 (10) : 5309-5322. doi: 10.3934/dcds.2016033
Quansen Jiu, Jitao Liu. Global regularity for the 3D axisymmetric MHD Equations with horizontal dissipation and vertical magnetic diffusion. Discrete & Continuous Dynamical Systems - A, 2015, 35 (1) : 301-322. doi: 10.3934/dcds.2015.35.301
Wendong Wang, Liqun Zhang, Zhifei Zhang. On the interior regularity criteria of the 3-D navier-stokes equations involving two velocity components. Discrete & Continuous Dynamical Systems - A, 2018, 38 (5) : 2609-2627. doi: 10.3934/dcds.2018110
Jinbo Geng, Xiaochun Chen, Sadek Gala. On regularity criteria for the 3D magneto-micropolar fluid equations in the critical Morrey-Campanato space. Communications on Pure & Applied Analysis, 2011, 10 (2) : 583-592. doi: 10.3934/cpaa.2011.10.583
Juan Dávila, Olivier Goubet. Partial regularity for a Liouville system. Discrete & Continuous Dynamical Systems - A, 2014, 34 (6) : 2495-2503. doi: 10.3934/dcds.2014.34.2495
Kashif Ali Abro, Ilyas Khan. MHD flow of fractional Newtonian fluid embedded in a porous medium via Atangana-Baleanu fractional derivatives. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 377-387. doi: 10.3934/dcdss.2020021
ellipsix informatics
B meson decay confirmed!
Posted by David Zaslavsky on August 7, 2013 4:11 AM
B meson
EPS-HEP 2013
Time for a blog post that has been far too long coming! Remember the Quest for B Meson Decay? I wrote about this several months ago: the LHCb experiment had seen one of the rarest interactions in particle physics, the decay of the \(\mathrm{B}^0_s\) meson into a muon and antimuon, for the first time after 25 years of searching.
Lots of physicists were interested in this particular decay because it's unusually good at distinguishing between different theories. The standard model (which incorporates only known particles) predicts that a muon and antimuon should be produced in about 3.56 out of every billion \(\mathrm{B}^0_s\) decays — a number known as the branching ratio. But many other theories that involve additional, currently unknown particles, predict drastically different values. A precise measurement of the branching ratio thus has the ability to either rule out lots of theoretical predictions, or provide the first confirmation on Earth of the existence of unknown particles!
Naturally, most physicists were hoping for the latter possibility — having an unknown particle to look for makes things exciting. But so far, the outlook doesn't look good. Last November, LHCb announced their measurement of the branching ratio as
$$\mathcal{B}(\mathrm{B}^0_s\to\mu^+\mu^-) = (3.2^{+1.5}_{-1.2})\times 10^{-9}$$
In the months since then, the LHCb people have done a more thorough analysis that incorporates more data. They presented it a couple of weeks ago at the European Physical Society Conference on High-Energy Physics in Sweden, EPS-HEP 2013, along with a similar result from the CMS experiment. Here's the exciting thing: the two groups were able to partially combine their data into one overall result, and when they do so they pass the arbitrary \(5\sigma\) threshold that counts as a discovery!
Yes, I am about to explain what that means. :-)
In this plot from LHCb,
[Plot of selected LHCb events]
and this one from CMS,
[Plot of selected CMS events]
you can see the number of events detected in each energy range, the black dots, along with the theoretical prediction for what should be detected assuming the standard model's prediction of the decay rate is correct, represented by the blue line. The data and the prediction are clearly consistent with each other, but that by itself doesn't mean we can be sure the decay is actually happening. Maybe the prediction that would be made without \(\mathrm{B}^0_s\to\mu^+\mu^-\) decay would also be consistent with the data, in which case these results wouldn't tell you anything useful.
A more useful plot is this one from the CMS experiment, which shows the likelihood ratio statistic a.k.a. log-likelihood (on the vertical axis) as a function of the branching ratio (or branching fraction \(BF\)):
[Plot of log likelihood of strange B meson decay]
Basically, the plot tells you which values of the branching ratio are more or less likely to be the true branching ratio, using the measured value as a reference point. For example, CMS measured the branching ratio to be \(3\times 10^{-9}\), so in the absence of other information, that's the most likely true value of the branching ratio. The most likely value is no less likely than itself (just meditate on that for a moment), so the curve touches zero — the bottom of the graph — at \(3\times 10^{-9}\).
On the other hand, consider \(2.1\times 10^{-9}\). That's not the number CMS measured, so it's less likely that the true branching ratio is \(2.1\times 10^{-9}\) than \(3\times 10^{-9}\). How much less likely? Well, if you look at the plot, the value at \(2.1\times 10^{-9}\) is pretty much on the \(1\sigma\) line. But you have to be careful how you interpret that. It does not tell you the probability that the true value is less than \(2.1\times 10^{-9}\), but it does tell you that, if the true value is \(2.1\times 10^{-9}\), the probability of having measured the result CMS did (or something higher and further away from the true value) is the "one-sided" probability which corresponds to \(1\sigma\): 16%. (That number comes from the normal distribution, by the way: 68% of the probability is within one standard deviation of the mean, leaving 16% for each side.) Yes, this is kind of a confusing concept. But the important point is simple: the higher the curve, the less likely that value is to be the true value of the branching ratio.
The most interesting thing to take away from this graph is the value for a branching ratio of zero, where the curve intersects the vertical axis. That tells you how likely it is that the branching ratio is zero, given the experimental data. (Technically zero or less, except that we know it can't be less than zero because a negative number of decays doesn't make any sense!) It's labeled on the plot as \(4.3\sigma\). That corresponds to a one-sided probability of \(8.54\times 10^{-6}\), or less than a thousandth of a percent! In other words, if \(\mathrm{B}^0_s\to\mu^+\mu^-\) doesn't happen at all, the chances that CMS would have seen as many \(\mu^+\mu^-\) pairs as they did is about 0.001%. That's a pretty small probability, so it seems quite likely that \(\mathrm{B}^0_s\to\mu^+\mu^-\) is real.
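If you want to play with the sigma-to-probability conversion yourself, it's just the tail area of a standard normal distribution. Here's a tiny Python sketch (my own illustration using SciPy, not anything from the LHCb or CMS analyses) that reproduces the numbers quoted above:

```python
# Sigma-to-probability conversion for a one-sided test, using the
# standard normal distribution. This only illustrates the numbers quoted
# in the text; the real analyses use the full likelihood machinery.
from scipy.stats import norm

for n_sigma in [1.0, 4.3, 5.0]:
    p = norm.sf(n_sigma)  # survival function: P(Z > n_sigma)
    print(f"{n_sigma} sigma -> one-sided probability ~ {p:.3g}")

# Prints roughly:
#   1.0 sigma -> one-sided probability ~ 0.159    (the 16% mentioned above)
#   4.3 sigma -> one-sided probability ~ 8.54e-06 (about 0.001%)
#   5.0 sigma -> one-sided probability ~ 2.87e-07 (about 1 in 3.5 million)
```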
Let's be careful, though! Because if you do run a hundred thousand particle physics experiments, you can expect one of them to produce an outcome with a probability of 0.001%. How do we know this isn't the one? Well, we don't. But here's what we can do: estimate the number of particle physics experiments and sub-experiments that get done over a period of, say, several years, and choose a probability that's low enough so that you don't expect to get a fluke result in all those experiments. For example, let's say there are 10,000 experiments done each year. If you decide to call it a "discovery" when you see something that happens with a probability of 0.001%, you'll "discover" something that doesn't really exist every ten years or so. But if you hold off on calling it a "discovery" until you see something that has a probability of 0.00003% — that's about one in 3.5 million — you'll only make a fake discovery on average once every few centuries. Or in other words, once in the entire history of physics, since Newton! Physicists have chosen this threshold of a roughly 1-in-3.5-million probability to constitute a "discovery" for exactly that reason, and also because it corresponds to a nice round number of sigmas, namely five.
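The "once every few centuries" arithmetic is equally easy to check. A minimal sketch, assuming the made-up figure of 10,000 experiments per year from the paragraph above:

```python
# Back-of-the-envelope rate of fake "discoveries" for a given probability
# threshold, assuming some number of independent experiments per year.
# The 10,000 figure is just the made-up number from the text.
experiments_per_year = 10_000

for threshold in [1e-5, 2.87e-7]:  # 0.001%, and the 5-sigma threshold
    fakes_per_year = experiments_per_year * threshold
    print(f"p = {threshold:.2e}: one fake 'discovery' every "
          f"{1 / fakes_per_year:,.0f} years on average")

# Prints roughly:
#   p = 1.00e-05: one fake 'discovery' every 10 years on average
#   p = 2.87e-07: one fake 'discovery' every 348 years on average
```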
So in order to say the decay \(\mathrm{B}^0_s\to\mu^+\mu^-\) has officially been discovered, we would need that curve on the plot to shoot up to \(5\sigma\) by the time it hits the vertical axis. CMS isn't there yet.
LHCb also has their own, separate result showing evidence for the decay. It's natural to wonder, could we combine them? After all, if it's unlikely that one experiment sees a decay that doesn't exist, it must be way less likely that two separate experiments independently see it!
That's true to some extent, but it's not an easy thing to properly combine results from different experiments. You can't just multiply a couple of probability distributions, because the experiments will often be made of different components, work in different ways, and have different algorithms for filtering and processing their data, and all of that has to be taken into account to produce a combined result. But in this case, the LHCb and CMS collaborations have decided their detectors are similar enough that they can do an approximate combination, which they shared during the EPS-HEP conference.
[Plot of combined results]
This plot shows the individual measurements and uncertainties from LHCb,
$$\mathcal{B}_\text{LHCb}(\mathrm{B}^0_s\to\mu^+\mu^-) = (2.9^{+1.1}_{-1.0})\times 10^{-9}$$
from CMS,
$$\mathcal{B}_\text{CMS}(\mathrm{B}^0_s\to\mu^+\mu^-) = (3.0^{+1.0}_{-0.9})\times 10^{-9}$$
as well as the combined value,
$$\mathcal{B}_\text{combined}(\mathrm{B}^0_s\to\mu^+\mu^-) = (2.9\pm 0.7)\times 10^{-9}$$
and the standard model prediction in the green band,
$$\mathcal{B}_\text{SM}(\mathrm{B}^0_s\to\mu^+\mu^-) = (3.56\pm 0.30)\times 10^{-9}$$
The uncertainties are \(1\sigma\), which means the error bars indicate how far it is from the measured value to the point at which the log-likelihood curve passes the \(1\sigma\) mark. You can see that they match up in this composite of the two plots:
[Cross-referenced plots of combination with CMS log-likelihood curve]
You'll notice that the error bars on the combined result are smaller than those for either the CMS or LHCb results individually. That means the log-likelihood curve shoots up faster for the combined result, apparently fast enough that it's above the \(5\sigma\) level by the time the branching ratio reaches zero — which is exactly the criterion to say the decay \(\mathrm{B}^0_s\to\mu^+\mu^-\) is officially discovered.
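As a rough sanity check on that combined number, you can do a naive inverse-variance average of the two measurements. This is only a sketch: the official combination handles the asymmetric uncertainties and correlated systematics properly, and the discovery significance comes from the full likelihood rather than a Gaussian shortcut. Still, it lands in the right ballpark:

```python
# Naive inverse-variance combination of the LHCb and CMS measurements,
# as a rough sanity check on the quoted combined value. Don't expect
# exact agreement with the official combination.
import math

# (central value, roughly symmetrized uncertainty), in units of 1e-9
measurements = [(2.9, 1.05),  # LHCb: 2.9 +1.1 -1.0
                (3.0, 0.95)]  # CMS:  3.0 +1.0 -0.9

weights = [1.0 / sigma ** 2 for _, sigma in measurements]
combined = sum(w * x for (x, _), w in zip(measurements, weights)) / sum(weights)
combined_sigma = 1.0 / math.sqrt(sum(weights))

print(f"combined branching ratio ~ ({combined:.1f} +/- {combined_sigma:.1f}) x 10^-9")
# Prints something close to (3.0 +/- 0.7) x 10^-9, in the same ballpark
# as the official (2.9 +/- 0.7) x 10^-9.
```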
Of course, the standard model predicted that this decay would occur. So the thing that LHCb and CMS have discovered, namely that the log-likelihood is above the \(5\sigma\) level for a branching ratio of zero, isn't really unexpected. What would be much more exciting is if they find that the log-likelihood is above the \(5\sigma\) level at the value predicted by the standard model — that would indicate that the real branching ratio is not what the standard model predicts, so some kind of new physics is definitely happening! But we'll be waiting a long time to see whether that turns out to be the case.
Seita's Place
My Blog Posts, in Reverse Chronological Order
At the end of every year I have a tradition where I write summaries of the books that I read throughout the year. Here is this year's post, organized into the following rough categories:
Popular Science (6 books)
History, Government, Politics, Economics (6 books)
Biographies / Memoirs (5 books)
China (5 books)
COVID-19 (2 books)
Miscellaneous (7 books)
I read 31 books this year. You can find the other blog posts from prior years (going back to 2016) in the blog archives.
Books with asterisks are ones that I would especially recommend.
This also includes popular science, which means the authors might not be technically trained as scientists.
Who We Are and How We Got Here: Ancient DNA and the New Science of the Human Past (2018) is by famous geneticist and Harvard professor David Reich. Scientific advances in analyzing DNA have allowed better analysis of human population migration patterns. The prior model of humans migrating out of Africa and to Europe, Asia, and the Americas in a "tree-like" fashion is out of date. Instead, mixture is fundamental to who we are as populations have migrated and mixed in countless ways. Also, ancient DNA can show the genetic percentage of an ancient population (including Neanderthals) in modern-day populations. A practical benefit from these studies is the ability to identify population groups that are more at risk of certain diseases than others, but as Reich is careful to point out, there's a danger in that such studies can be exploited for nefarious ends (e.g., racial stereotypes). I believe Reich's justifications for working in this field make sense. If scientists try to avoid the question of whether there might be the slightest possibility of genetic differences among different populations, then the resulting void will be filled by racist and pseudo-scientific thinkers. Reich shows that the heavy mixture among different populations shatters beliefs held by Nazis and others regarding "pure races." Science, when properly understood, helps us better respect the diversity of humans today.
Kindred: Neanderthal Life, Love, Death and Art (2020) by Rebecca Wragg Sykes summarizes what researchers believe about Neanderthals, a species very closely related to Homo Sapiens (i.e., modern humans) who lived many thousands of years ago primarily in Europe and Asia. Neanderthals captivate our imagination since they are so much like ourselves. In fact, interbreeding was possible and did happen. But at some point, Neanderthals went extinct. Kindred reviews the cutting-edge science behind what Neanderthals were like: what did they eat, how did they live, where did they migrate to, and so on. (I was pleased to see that some of this information was also in David Reich's book Who We Are and How We Got Here.) The main takeaway I got is that we should not view Neanderthals as a "less intelligent" version of modern humans. The book is a nice overview, and I am amazed that we are able to deduce this much from so long ago.
Breath: The New Science of a Lost Art (2020) by James Nestor is about breathing. We all breathe, but breathing is not taught or discussed as widely as diet or exercise. Nestor describes an experiment where he stuffed his nose and was forced to mouth-breathe for 10 days. The result? Higher blood pressure, worse sleep, and a host of other adverse effects. Nestor also interviews historians, scientists, and those knowledgeable about breathing, to learn why humans have changed breathing habits for the worse, resulting in crooked teeth, worse sleep, and so on. The book concludes with some breathing advice: nose breathing, chewing, holding your breath, and suggesting certain breathing strategies. Written instructions for breathing can be hard to follow, so Nestor has a website with more information, including videos and additional expert advice. I'm not sure how much I will directly benefit from this book, given that I was already a strong nose-breather, and I don't believe I suffer from snoring or sleep apnea — any sleep issues I might have are likely due to either (a) looking at too many screens (phones, laptops, etc.), or (b) thinking about the state of the world while my brain cannot calm down. It also feels like the book might over-exaggerate breathing, but to his credit, Nestor states that breathing is not going to cure everything. At the very least, it was nice to see a reaffirmation of my basic breathing habits, and I had not thought too much of my breathing habits before reading Breath.
** What To Expect When You're Expecting Robots: The Future of Human-Robot Collaboration ** (2020) by Laura Major and Julie Shah. The authors are roboticists, and I am familiar with Julie Shah's name (she's a Professor at MIT) and her research area of human-robot interaction.1 This book frequently refers to aviation, since it was one of the fields that pioneered a balance between humans and automation (robots) in real time in a safety-critical setting. In what cases does the aviation analogy hold for robots interacting with humans on the ground? As compared to aviation settings, there is a wider diversity of things that could happen, and we do not have the luxury that aviation has with highly trained humans paired with the robot (plane); we need robots that can quickly interact with everyday people. The authors present the key concept of affordances, or designing robots so that they "make sense" to humans, similar to how we can view a variety of mugs but immediately understand the function of the handle. Thinking about other books I've read in the past, the one that comes closest to this is Our Robots, Ourselves where MIT Professor David Mindell discussed the history of aviation as it pertains to automation.
Think Again: The Power of Knowing What You Don't Know (2021) is Adam Grant's third book, following Give and Take and Originals, all of which I have read. At a time when America seems hyper-polarized, Grant shows that it is possible and better for people to be willing to change their minds. Think Again is written in his usual style, which is to present a psychological concept and back it up with research and anecdotes. Grant cites the story of Daryl Davis, a Black musician who has successfully convinced dozens of former Ku Klux Klan members to abandon their prior beliefs. While Grant correctly notes that it shouldn't be the sole responsibility of Black people like Davis to take the lead on something like this, the point is to show that such change is possible.2 Grant also mentions Harish Natarajan, an expert debater who effectively argued against a computer on a topic where he might naturally start off on the weaker end (he was asked to oppose "should we have universal preschool?"), and how Natarajan was able to force Grant to rethink some of his beliefs. Being willing to change one's mind has, in theory, the benefit of flexibility in adapting to better beliefs. Overall, I think the book was reasonable. I try to assume I am open to revising beliefs, and remind myself this: if I feel very strongly in favor of anything (whether it be a political system, a person, a hypothesis, and so on) then I should be prepared to present a list of what would cause me to change my mind. Doing that might go a long way to reduce tensions in today's society.
** Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World ** (2021) by journalist Cade Metz. He writes about AI, and I frequently see his name floated around in articles about AI. Genius Makers is about AI and Deep Learning, and where it's going. There are four main parts: the rise of Deep Learning, successes and hype (think AlphaGo), turmoil and dangers (bias in AI, militarization of AI, etc.), and the future. Throughout the book, there are stories about the key players in AI. As expected, featured players include Geoff Hinton, Yann LeCun, Yoshua Bengio, Jeff Dean, Andrew Ng, and Fei-Fei Li. The key companies include Google, Facebook, OpenAI, Microsoft, and Baidu. I follow AI news regularly, and the book contains some Berkeley-related material, so I knew much of the books' contents. Nonetheless, there was still new material. For example, I think just about everyone in AI these days is aware that Geoff Hinton is "The Man Who Didn't Sit Down" (the title of the prologue) but I didn't know that Google bid 44 million USD for his startup, beating out Baidu. While I really like this book, Genius Makers may have overlap with other AI books (see my prior book reading lists for some examples) such that those who don't want to consume dozens of books about AI may prefer other options. However, this one probably contains the most information about how the key players have interacted with each other.
History, Government, Politics, Economics
** Stamped from the Beginning: The Definitive History of Racist Ideas ** (2016) is a massive book by historian and antiracist Ibram X. Kendi. The "stamped from the beginning" term comes from former US Senator Jefferson Davis, who stated this in 1860 as the rationale for the inequality of whites and blacks. Kendi presents the history of racial inequality, with a focus on how racist ideas have persisted in America. There are five parts, each centering around a main character: Cotton Mather, Thomas Jefferson, William Lloyd Garrison, W.E.B. du Bois, and Angela Davis. Throughout each chapter, Kendi emphasizes that it was not necessarily hatred of other races that led to racism, but instead, racist thinking helped to justify existing racial disparities. He also frequently returns to three key ideas: (1) segregationist thought, (2) assimilationist thought, and (3) antiracist thought. While (1) seems obviously racist, Kendi argues that (2) is also racist. Kendi also points out inconsistencies in the way that people have treated people of different races. For example, consider Thomas Jefferson's hypocrisy in criticizing interracial relationships, while he himself had sexual relationships with his (lighter-skinned) slaves, including Sally Hemings.3 More generally, it raises the question of the most important phrase in the Declaration of Independence, that "all men are created equal." It is one that I hope we will continually strive to achieve.
** How Democracies Die ** (2018) is a well-timed, chilling, concise, and persuasive warning of how democracies can decay into authoritarianism. It's written by Harvard Professors Steven Levitsky and Daniel Ziblatt, who specialize in democracies in Europe and Latin America. During the Cold War, democracies often died in the hands of military coups. But nowadays, they are dying in a more subtle way: by elected officials who use the system to subvert it from within. Those trends in America were developing for years, and burst in 2016 with the election of Trump, who satisfies the warning signs that Levitsky and Ziblatt argue are indicative of authoritarianism: (1) weak commitment to democratic rules of the game, (2) denial of the legitimacy of political opponents, (3) toleration or encouragement of violence, (4) readiness to curtail civil liberties of opponents, including media. Levitsky and Ziblatt argue that it's not the text of the US Constitution that helped American democracy survive for years, as other countries have copied the US Constitution but still decayed into authoritarian rule. Rather, it's the enforcement of democratic norms: mutual toleration and institutional forbearance. They review the history of America and cite historical events showing those democratic norms in action (e.g., stopping FDR's court packing attempt), but admit that the times when democratic norms appeared more robust in America were at the same times when the issue of racism was de-prioritized. They ultimately hope that a multi-racial democracy can be combined with democratic norms. The book was written in 2018, and while they didn't directly predict the COVID-19 pandemic, which may have exacerbated some anti-democratic trends (for example, by inhibiting the ability of government to function), Levitsky and Ziblatt were on the money when it comes to some of their authoritarian predictors. Trump suggesting that the election could be delayed? Yes. The refusal of many politicians to accept the results of the 2020 election (highlighted by the insurrection of 01/06)? Yes. How Democracies Die reminds me of The Fifth Risk where an equally prescient Michael Lewis wrote about the dangers of what happens when people in government don't understand their duties. A commitment to democratic norms must be considered part of an elected official's duties. I will keep this in mind and urge America towards a more democratic future. I don't want to live in an authoritarian country which curtails free religion, free speech, an independent media, an independent judiciary, and where one man does the decision-making with insufficient checks and balances.
Learning from the Germans: Race and the Memory of Evil (2019) by Susan Neiman, a Jewish woman, born in 1955, who has been a philosophy professor in the United States and Israel, and has also lived in Germany. I saw this listed in the recommended reading references in a Foreign Affairs magazine. Learning from the Germans consists of (1) Germany's history of confronting its Nazi past, (2) America's history of reckoning with slavery, and (3) a discussion over monuments, reparations, and what the future may hold for America and other countries that have to face prior sins. I learned about the complex and uneven path Germany took towards providing reparations to Jews, removing Nazi memorials, and so on, with East Germany handling this process better than West Germany. Neiman believes that Germany has responded to its past in a better way than the United States (with respect to slavery).4 It's intriguing that many of the Germans who Neiman interviewed as part of her research rejected the title of the book, since they were ashamed of their country's past, and surprised that others would want to learn from it. Neiman says it's complicated to develop "moral equivalences" between events, but that ultimately what matters is how we address our past. If I were to criticize something happening in country "X", and someone from that country were to respond back to me by criticizing America's past sins, my response would be simply: "yes, you're right, America has been bad, and here is what I am doing to rectify this …". It's not a contradiction to simultaneously hold the following beliefs, as I do, that: (1) I enjoy living in America, and (2) I am very cognizant and ashamed of many historical sins of America's past (and present).
** Good Economics for Hard Times ** (2019) by Nobelists Abhijit Banerjee and Esther Duflo, both of MIT (and a married couple); see the announcement video shortly after they won the prize. They give a wonderful tour of topics in economics, but also clarify that it's not clear which policies directly lead to growth, as traditionally measured in GDP. Much of the book emphasizes that there's so much uncertainty in economics, and that given climate change, it might not be prudent to try to find the formula to maximize GDP. Rather, the goal should be to best address policies that can serve the poor and disadvantaged. Good Economics for Hard Times simultaneously was a fast read but also one that felt like it got enough of the technical information through to me. It's not super likely to change the mind of growth-obsessed people, and it comes with some critique of Trump-style Conservatism. I think it was a great book for me, and one of my favorites this year.
** The Code: Silicon Valley and the Remaking of America ** (2019) is by Margaret O'Mara, a Professor of History at the University of Washington who researches at the intersection of technology and American politics. Hence, she is the ideal person to write this kind of book, and I have high interest in the subject area, since my research is in robotics and AI more broadly, the latter of which is the topic of interest in Silicon Valley today. O'Mara starts at the end of World War II, when the leaders in tech were on the East Coast near Boston and MIT. Over the next few decades, the San Francisco Bay Area would develop tremendously and by the 1980s, would surpass the East Coast in becoming the undisputed tech capital of the world. How this happened is a remarkable story of visionaries who began tech companies, such as Steve Jobs, Mark Zuckerberg, Sergey Brin, and Larry Page (and Bill Gates and Jeff Bezos up north in Seattle, though all have heavy connections with Silicon Valley) and venture capitalists like John Doerr. However, and perhaps this is the less interesting part, the story of Silicon Valley is also one of sufficient government funding for both companies and universities (notably, Stanford University), along with immigration from talented foreigners across the world, resulting in what O'Mara calls an "only-in-America story" made possible by broader political and economic currents. O'Mara is careful to note that this prosperity was not shared widely, nor could it truly be called a true meritocracy given the sexism in the industry (as elaborated further in Emily Chang's Brotopia) and that wealth went mainly to the top few white, and then Asian, men. O'Mara brilliantly summarizes Silicon Valley's recent history in a readable tome.
** The World: A Brief Introduction ** (2020) is by Richard Haass, president of the Council on Foreign Relations, which is my go-to think tank for foreign affairs. I started this book and couldn't stop myself from finishing. It's definitely on the side of breadth instead of depth. It won't add much to those who are regular readers of Foreign Affairs, let alone foreign policy experts; Haass' goal is to "provide the basics of what you need to know about the world, to make you more globally literate." The book begins with the Treaty of Westphalia in 1648, which encoded the concept of the modern international system governed by countries. Obviously, it didn't end up creating permanent peace, as the world saw World War I, World War II, the Cold War, and then the period after the Cold War up to today, which Haass said will later be given a common name by historians upon consensus. My favorite part of the book is the second one, which covers different regions of the world. The third part is the longest and covers challenges of globalization, terrorism, nuclear proliferation, climate change, and so on. The last one is broadly titled "order and disorder." While I knew much of the material in the book, I was still able to learn aspects about worldwide finance and trade (among other topics) and I think The World does a valuable service in getting the reader on a good foundation for subsequent understanding of the world.
Biographies / Memoirs
** Shoe Dog: A Memoir by the Creator of Nike ** (2016) by Phil Knight, currently a billionaire and Nike cofounder, with Bill Bowerman. Each chapter describes a year (1962 through 1980) in Phil Knight's early days in Oregon, where he co-founded Blue Ribbon Sports (later, Nike). Shoe Dog — named after the phrase describing people who know shoes and footwear inside out — is refreshingly honest, showing the challenges Knight faced with getting shoes from factories in Japan. Initially they relied on Onitsuka, but Nike had a protracted legal challenge regarding distribution rights and switched suppliers. Furthermore, Knight had a tough time securing funding and loans from banks, who didn't believe that the company's growth rate would be enough to pay them back. Knight eventually relied on Nissho5, a Japanese guarantor, for funds. Basically, the cycle was: get loan from Nissho, make sales, pay back Nissho, and repeat. Eventually, Nike reached a size and scope comparable to Adidas and Puma, the two main competitors to Nike at that time. Nowadays, things have probably changed. Companies like Uber continually lose money, but are able to get funding, so perhaps there's more of a "Venture Capitalist mentality" these days. Also, I worry if it is necessary to cut corners in business to succeed. For example, in the early days, Knight lied to Onitsuka about having an office on the east coast, and after signing a contract with Onitsuka, Knight had to scramble to get a factory there! Things have to be different in today's faster-paced and Internet-fueled world, but hopefully the spirit of entrepreneurship lives on.
** Born a Crime: Stories from a South African Childhood ** (2016), by comedian Trevor Noah, was great. I'm aware of his work, though have never watched his comedy. He was "Born a Crime" as the son of a White (Swiss) father and a Black mother, which was illegal under South Africa's apartheid system. Noah was Colored, and could not be seen with his mother in many places without the risk of police catching him. I realized (though I'm sure I was taught this earlier but forgot it) that in South Africa's apartheid system, whites were actually a minority, but apartheid allowed whites to remain in control, and a key tactic was pitting different minority groups against each other, usually Blacks.6 Noah had a few advantages here, since he was multi-lingual and could socialize with different minority groups, and his skin color looked light on film at that time. For example, Noah and a Black friend robbed a mall, and he was caught on video. When the school principals summoned Noah, they asked him if he knew who the "white" guy was in the video. The person was Noah, but the administrators were somehow unable to tell that, blinded by certain notions of race. Apartheid formally ended during Noah's childhood, but the consequences were and still are reverberating throughout South Africa. I'm frankly amazed at what Noah overcame to be where he is today, and also at his mother, who survived attempts at near murder by an ex-husband. The answer isn't more religion and prayer, it's to remove apartheid and to ensure that police listen to women and properly punish men who commit domestic violence.
The Ride of a Lifetime: Lessons Learned from 15 Years as CEO of the Walt Disney Company (2019) by Robert Iger is a readable book on leadership and business, and provides the perspective of what it is like being a CEO at a huge international company. The first half describes his initial career before being CEO, and the second half is about his experience as CEO. Iger describes the stress throughout the selection stage to see who would become CEO after Michael Eisner, and how Iger had to balance ambition of wanting the job without actually demanding it outright. There was also the complexity of how Iger was already a Disney insider before becoming CEO, and some wanted to bring in a fresh outsider. I enjoyed his view on Steve Jobs, especially after having read Walter Isaacson's biography of Steve Jobs last year. (Jobs had a sometimes adversarial relationship with Disney.) It's also nice that there's "no price on integrity" (the title of Chapter 13) and that Iger is supportive of cracking down on sexual assault and racism. I have a few concerns, though. First, it seems like most of the "innovation" happening at Disney, at least what's featured in the book, is based on buying companies such as Pixar and Lucasfilm, rather than in-house development. It's great that Iger can check his ego and the company's ego, but it's disappointing from an innovation perspective. Second, while there is indeed "no price on integrity," how far should businesses acquiesce to governments who place far more restrictions on civil liberties than the United States government? Iger also repeatedly emphasizes how lucky he was and how important it was for others to support him, but what about others who don't have that luxury?
** The Great Successor: The Divinely Perfect Destiny of Brilliant Comrade Kim Jong Un ** (2019) by New Zealand journalist Anna Fifield. This book is extremely similar to the next book I'm listing here (by Jung H. Pak), so I'm going to combine my thoughts there.
** Becoming Kim Jong Un: A Former CIA Officer's Insights into North Korea's Enigmatic Young Dictator ** (2020) by Jung H. Pak, who used to work in the CIA and has since been at the Brookings Institution and in the US State department. I have to confess, my original objective was to read a biography of Xi Jinping. When I tried to search for one, I came across UC Irvine Professor Jeffrey Wasserstrom's article in The Atlantic saying that there weren't any good biographies of Xi.7 The same article then said there were two biographies of Kim Jong Un, and that's how I found and read these two books. I'm glad I did! Both do a good service in covering Kim Jong Un's life from North Korea royalty to Switzerland for school, then back to North Korea to get groomed for future leadership, followed by his current leadership since 2011. I vaguely remember when he first came to power, and seeing news reports questioning whether Kim Jong Un truly held power, since he was the youngest head of state at that time. But the last decade has shown that Kim's grip on power is ironclad. There are only a few differences in the topics that the books cover, and I think one of them is that near the end of Becoming Kim Jong Un, Pak ponders about how to deal with the nuclear question. She argues that rather than do a misguided first strike like John Bolton once foolishly suggested in a WSJ op-ed just before he became the US National Security Advisor for former president Trump, we have to consider a more nuanced view of Kim and realize that he will only give up nuclear weapons if maintaining them comes at too great a cost to bear. Since the book was published, COVID-19 happened, and if there's been any single event that's caused more harm to North Korea's economy, it's been this, as exemplified by how Russian diplomats had to leave North Korea by hand-pushed rail. I still maintain my view that Kim Jong Un is one of the worst leaders alive today, and I hope that the North Korea situation can improve even a tiny bit in 2021.
China
** Factory Girls: From Village to City in a Changing China ** (2008) by Leslie T. Chang, who at that time was a journalist for the Wall Street Journal. I found out about this book when it was cited by Jeffrey Wasserstrom and Maura Cunningham in their book. Chang was motivated to provide an alternative perspective from a "traditional" American media, where a lot of the focus is on dissidents and human rights (not a bad thing per se, but it's good to have balance). In this book, Chang meets and interviews multiple women who came from rural areas to work in factories, particularly those located in Dongguan, an industrial city in southern China in the Pearl River Delta region (a bit north of Hong Kong). As a reporter who also could speak in Mandarin, Chang is skillfully able to convey the women's journey and life in a highly sympathetic manner. She does not sugarcoat the difficulties of living as a factory worker; the women who she interviews have to work long hours, might see friendships end quickly, and have difficulties finding suitable husbands in a city that has far more women than men. Factory Girls also contains Chang's own exploration of her family history in China. While still interesting, my one minor comment is that I wonder if this might have diluted the book's message. Despite the 2008 publication date, the book is still readable and it seems like the rural-to-urban shift in China is still ongoing.
** Deng Xiaoping and the Transformation of China ** (2011) is a massive history tome on the former Chinese leader by the great historian Ezra F. Vogel, a long-time professor at Harvard University. (He passed away in late 2020.) There likely are many other biographies of Deng and there may be more in the future, but Vogel's book is considered the "definitive" one, and compared to later historians, Vogel will have had the advantage of interviewing Deng's direct family members and associates. The reason for studying Deng is obvious: since Deng took over the reins of China in 1978 following Mao's death in 1976 and a brief interlude afterwards, he led economic reforms that opened the world's most populous country and helped to lift millions out of poverty. The bulk of the book covers Deng's leadership from 1978 through 1992. This includes economic reforms such as the establishment of "Special Economic Zones," allowing foreign investment, and sending students abroad, largely to the United States, which also benefits from this relation, as I hope my recent blogging makes clear. It also includes foreign affairs, such as the peaceful return of Hong Kong to China and the difficulties in reuniting China and Taiwan. As a recent NY Times obituary here states, a criticism of Vogel's book is that he might have been too lenient on Deng in his reporting, I do not share that criticism. In my view the book presents a sufficiently comprehensive view of the good, bad, and questionable decisions from Deng that it's hard for me to think of a harsh criticism.8 (It is true, however, that the Chinese government censored parts of this book for the Chinese translation, and that I dislike.) Vogel's masterpiece is incredible, and I will remember it for a long time.
** China Goes Global: The Partial Superpower ** (2012) is by David Shambaugh, a professor at the Elliott School of International Affairs at the George Washington University (same department as Prof. Sean Roberts). From the 1978 reforms which opened the country up to 2012, China's been massively growing and asserting its influence on the world, but is not yet a "superpower" as would be suggested based on its population and economy. This could be due to hesitancy in taking on greater international roles, as that might require expensive interventions and undertakings that could hinder its economic growth, which is the CCP's main mandate to the Chinese people. One thing I immediately noticed: the book has the most amount of quotes, citations, or interviews with Chinese government officials or academics than any other book I've read. (This was the pre-Xi era and the country was generally more open to foreigners.) Shambaugh does a great job conveying the wide range of opinions of the Chinese foreign policy elite. Two of the most cited scholars in the book are Yan Xuetong and Wang Jisi, whose names I recognized when I later read Foreign Affairs articles from them. Another thing worth mentioning: Chinese officials have told Shambaugh that they believe the "Western" media is misinformed and does not understand China. Shambaugh recalls replying, what precisely is the misunderstanding, and the government officials were aghast that there could be any disagreement. In Shambaugh's view, the media is tough but accurate on China.9 As Shambaugh emphasizes, so many people want to know more about China (myself included, as can be obviously inferred!), and in my view this means we get both the positive and the negative. This book is a great (if somewhat dated) survey, and helps to boost my personal study of China.
China Goes Green: Coercive Environmentalism for a Troubled Planet (2020) is co-written by professors Yifei Li and Judith Shapiro. The focus in China Goes Green is to discuss the following: in today's era of accelerating climate change (or climate crisis), is China's authoritarian government system better suited to tackle environmental challenges? Some thinkers have posited that, while they may be sympathetic to liberal democracy and human rights, maybe the climate urgency of today means such debate and freedoms have to be set aside in favor of "quicker" government actions by authoritarian rule. Li and Shapiro challenge this line of reasoning. A recurring theme is that China often projects that it wants to address climate change and promote clean energy, but the policies it implements have the ultimate goal of increasing government control over citizens while simultaneously having mixed results on the actual environment. That is, instead of referring to China today as "authoritarian environmentalism", the authors argue that "environmental authoritarianism" is more accurate. The book isn't a page-turner, but it serves a useful niche in providing an understanding of how climate and government mesh in modern China.
** The War on the Uyghurs: China's Campaign Against Xinjiang's Muslims ** (2020) is by Sean Roberts, a professor at the Elliott School of International Affairs at the George Washington University (same department as Prof. David Shambaugh). The Xinjiang internment camps of China have become household names among readers of international news outlets, with reports of genocide and forced labor. Roberts explains the tense history between the ethnic Han majority in China versus the Turkic people who primarily live in the rural, western areas of the country. A key part of the book is precisely defining what "terrorism" means, as that has been the rationale for the persecution of the Uyghurs, and also other Muslim groups (including in the United States). Roberts covers the Urumqi riots and other incidents that deteriorated relations between Uyghurs and the Chinese government, and then this led to what Roberts calls a "cultural genocide" that started from 2017 and has continued today; Roberts recalled that he and other fellow academics studying the subject realized something was wrong in 2017 when it became massively harder to contact his colleagues from Xinjiang. One of the most refreshing things (in my view) is reading this knowledge from an academic who has long studied this history, instead of consuming information from politicians (of both countries) who have interests in defending their country,10 and Roberts is not shy about arguing that the United States has unintentionally assisted China in its repression, particularly in the designation of certain Muslim groups as "terrorism". Of all the news that I've read in 2021, among those with an international focus, the one that perhaps stuck the most to my mind from 2021 is Tahir Izgil's chilling story about how he escaped the camps in Xinjiang. While this is just one data point of many, I hope that in some way the international community can do what it can to provide refugee status to more Uyghurs. (I am a donor to the Uyghur Human Rights Project.)
COVID-19
** The Premonition: A Pandemic Story ** (2021) by Berkeley's Michael Lewis is the second book of his I read, after The Fifth Risk (published 2018), which served as an unfortunate prologue for the American response to COVID-19; I remembered The Fifth Risk quite well after reading How Democracies Die earlier this year. I didn't realize Lewis had another book (this one) and I devoured it as soon as I could. The US was ranked number one among all countries in terms of pandemic preparation. Let that sink in. By the time it was mid-2021, the US had the most recorded deaths of any country.11 Lewis' brilliance in his book, as in his others, is to spotlight unsung heroes, such as a California health care official and a former doctor who seemed to be more competent than the United States government or the Centers for Disease Control (CDC). Lewis is so good at connecting the reader with these characters, that when reading the book, and seeing how they were stopped and stymied at seemingly every turn from sluggish government or CDC officials, I felt complete rage. (The same goes for the World Health Organization, but the CDC is a US entity, so we have more ability to reform it.) The biggest drawback of this book is that Lewis doesn't have any endnotes or details on how he went about investigating and interviewing the people in his book. In all fairness, the officials he criticizes in this book should have the opportunity to defend themselves. Given the way the CDC acted early in the pandemic, though, and the number of recorded deaths, it would be surprising if they could mount effective defenses, but again, they should have the opportunity. One more thing, I can't resist suggesting this idea: any current and future CDC director must have a huge sign with these words: You must do what is right for public health. You cannot let a politician silence or pressure you into saying what he or she wants. This sign should be right at the desk of the CDC director, so he/she sees this on a daily basis. Check out this further summary from NPR and some commentary by Scott Aaronson on his blog.
** World War C: lessons from the COVID-19 Pandemic and How to Prepare for the Next One ** (2021) is by CNN's chief medical correspondent Dr. Sanjay Gupta, released in October 2021, and I expect it to reach a wide audience due to Dr. Gupta's position at CNN. After a brief review of the early days of the pandemic, the book covers how diseases spread, the effects of COVID, and the function of vaccines. Then, it provides guidelines for building resilience to the next pandemic. For the most part, the writing here seems reasonable, and my main disappointment doesn't really have to do with Dr. Gupta per se, but has to do with how understanding the effects of "long-haul COVID" is just going to take a lot of time and involve a lot of uncertainty. Also, and this may be a good (or not so good) thing but Dr. Gupta, while acknowledging that politics played a role in hindering the war against the pandemic (particularly in the US), tries to avoid becoming too political. His last chapter, on ensuring that humanity fights together, resonates with me. In April 2021, India was hit with a catastrophic COVID wave due to the delta variant, and at least one of Dr. Gupta's relatives died. Since the virus constantly mutates, the world essentially has to be vaccinated against it at once to mitigate its spread. As the Omicron variant was spreading as I finished up this book near the end of the year, it's imperative that we end up supporting humans throughout the world and give out as many vaccines as we can, which is one reason why I consider myself a citizen of the world.
Miscellaneous
Rest: Why You Get More Done When You Work Less (2016) by Alex Soojung-Kim Pang emphasizes the need for rest and recovery to improve productivity. This seems obvious. I mean, can you really work 16 hours a day with maximum energy? Pang argues that it's less common for people to think about "optimizing" their rest as opposed to things more directly related to productivity. As he laments: "we think of rest as simply the absence of work, not as something that stands on its own or has its own qualities." The book presents anecdotes and studies about how some of the most creative and accomplished people (such as Charles Darwin) were able to do what they did in large part due to rest, or taking breaks such as engaging in long walks. Here's an interview with the author in the Guardian. That said, while I agree with the book's general thesis, it's not clear if I actually benefited as much from reading this book as others. As I fine-tune this review in late December 2021, three months after I finished reading this book, I'm not sure how much of the details I remember, but it could be due to reading other books that convey similar themes.
** Skin in the Game: Hidden Asymmetries in Daily Life ** (2018) by Nassim Nicholas Taleb is part of his 5-book "Incerto" series. I've only read this book and I might consider reading his other books. When someone has "Skin in the Game," that person has something to lose. Consider someone making a prediction about what will happen in 2022 regarding COVID. If that person has to tie his or her prediction with significant financial backing and is thus at risk of losing money with a bad prediction, then there is "skin in the game," in contrast to someone who can make an arbitrary prediction without being held accountable. The book is thus a tour of various concepts in life that tie back to this central theme, along with resulting "hidden asymmetries." For example, one reason why Taleb is so against interventionism (e.g., the United States invading Iraq) is because it shows how so many foreign policy pundits could safely argue for such an invasion while remaining in the comfort of their suburban homes, and thus there's an asymmetry here where decisions they advocate for don't affect them personally too much, but where they affect many others. If you can get used to Taleb's idiosyncratic and pompous writing style, such as mocking people like Thomas L. Friedman as not a "weightlifter" and insulting Michiko Kakutani, then the book might be a good fit as there's actually some nice insights here.
** Measure what Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs ** (2018) by famous VC John Doerr describes the "OKR" system which stands for "Objectives and Key Results." Doerr is revered throughout Silicon Valley and is known for mentoring Google founders Larry Page and Sergey Brin. I have prior experience interning at Google (remotely) in summer 2020, and I saw a few documents that had OKRs, though I never used the system much nor did I hear much about it, but I imagine that would change if I ever joined Google full-time. The book covers diverse examples of organizations that have used OKRs (not just those in big tech), and a common theme that comes up is, well, work on what matters. The goal should be to identify just a few key objectives that will make an impact, rather than try to optimize less-important things. It's kind of an obvious point, but it's also one that doesn't always happen. While the message is obvious, I still think Doerr explains this with enough novelty to make Measure what Matters a nice read. I signed up for the corresponding email subscription, and there is also a website. Perhaps I should check those out if I have time. It might be good to map out a set of OKRs for my postdoc.
Edge: Turning Adversity into Advantage (2020) by Harvard Business School Professor Laura Huang, acknowledges that all of us have some adversity, but that it is possible for us to turn this into something advantageous. That is better than just giving up and using adversity (e.g., "I grew up poor") as an excuse to not do anything. Some of her suggestions involve trying to turn stereotypes into your favor (e.g., redirecting what people thought of her as an Asian female), and to see how unexpected behavior might be useful (e.g., as when she was able to get Elon Musk to talk to her). I think her message seems reasonable. I can imagine criticism from those who might think that this deprioritizes the role that systematic inequality play in our society, but Professor Huang makes it clear that we should also tackle those inequities in addition to turning adversity into advantage. The writing is good, though it sometimes reads more casually than I would expect, which I think was Professor Huang's intent. I also enjoyed learning about her background: her family's immigration to the United States from Taiwan, and how she became a faculty member at Harvard despite unexpected challenges (e.g., not graduating from a top PhD school with lots of papers). You can see a video summary of the book here.
Breaking the Silence Habit: A Practical Guide to Uncomfortable Conversations in the #MeToo Workplace (2020) by Sarah Beaulieu, attempts to provide a guideline for challenging conversations with regards to anything that might be relevant to "MeToo." She deliberately does not give firm answers to questions such as "can I date a work colleague" or "should I report to the manager" but emphasizes that it must be viewed in context and that there are different ways one can proceed. This might sound frustrating but it seems reasonable. Ultimately I don't know if I got too much direct usage out of this since much of it depends on actually testing and having these conversations (which, to be clear, I fully agree that we should have), which I have not had too much opportunity to engage in myself.
Skip the Line: The 10,000 Experiments Rule and Other Surprising Advice for Reaching Your Goals (2021), by serial entrepreneur and author James Altucher, uses the analogy of "skipping the line" for accelerating career progress, and not necessarily having to trudge through a long list of hierarchies or spend 10,000 hours practicing a skill (as per Malcolm Gladwell). He provides a set of guidelines, such as doing 10,000 experiments instead of 10,000 hours, and "idea sex" which is about trying to tie two ideas together to form new ones. My impression is that Altucher generally advocates for regularly engaging in (smart) risks. I won't follow all of this advice, such as when he argues to avoid reading news in favor of books (see my information diet), but I think some ideas here are worth considering for my life.
** A World Without Email: Reimagining Work in an Age of Communication Overload ** (2021) is another book by Cal Newport, and surprise surprise, one that I also enjoy (see my prior reading lists). I would say "I don't know how he publishes all these books" but in his case, we do know how since the answer lies in this and his past books (even if it's not easy to implement). Newport's key argument is that email started off as a way to facilitate easier communication, but it soon created what he calls the "hyperactive hive mind" world, characterized by being in a state of constant online presence, checking email and other messaging platforms (e.g., Slack) throughout the day (and in the evening, and on weekends…). Newport makes a convincing case that this is reducing productivity and making us miserable. For example, he makes the obvious argument that a short face-to-face conversation can better clarify information compared to many back-and-forth emails that sap time and attention away from things that produce actual value. In the second part of the book, he proposes principles for operating in a world without (or realistically, less) email. I thought these were well-argued and not anti-technology; it's a way of better using technology to create more fulfilling lives. I still think I check email too much but I enjoy the days when I can simply work and program all the way, and only check email starting around 4:00PM or so. As usual I will try to follow this book's advice, and I think even doing this moderately will help my work habits in an increasingly online world given the pandemic.
Human-robot interaction is also becoming popular at Berkeley, in large part due to the excellent 2015 hire of Professor Anca Dragan and with increasing interest from others, including Stuart Russell and one of my PhD advisors, Ken Goldberg. ↩
People have criticized Davis' techniques, but I think Davis is usually able to get around this by pointing out the number of people that he's helped to leave the KKK. ↩
Joseph J. Ellis' book "American Dialogue: The Founders and Us" discusses Thomas Jefferson's relationships with his slaves. ↩
While not a primary focus of the book, the history and treatment of Native Americans has a similar story. ↩
Nissho Iwai is now part of Sojitz Corporation. You can find some of the history here. ↩
Intriguingly, since South Africa wanted to maintain business relations with Japan, the few people who looked Japanese in South Africa were spared significant harm, and other Asians (e.g., those of Chinese descent) could avoid mistreatment by claiming that they were actually Japanese, and such tactics could sometimes work. ↩
In my 2019 reading list, Wasserstrom is the co-author of a book on China that I wrote about. However, also that year, I read Kerry Brown's book "CEO China: The Rise of Xi Jinping." I'm guessing Wasserstrom does not view that book as a compelling biography? ↩
Then again, the usual disclaimer applies: do not view me as an expert on China. If the Biden administration were to hire people like me to brief them on China … that would be disconcerting! ↩
I share this thought. I want to make the distinction between "being misinformed" versus "being informed, but disagreeing" with a political decision. Those are two distinct things. My insatiable curiosity about China means that I'm more inclined to research a topic if I feel like I am misinformed about something. ↩
For more on this point, I emphasize that it is possible to have criticism for both the US and China for various atrocities (as well as other governments). For example, I'm happy to be the first one in line to criticize the Iraq War. I am aware that it is more polite to be critical of "oneself," broadly defined, and that holding ourselves to the highest standard is extremely important. But that doesn't mean I should ignore or shy away from other atrocities going on in the world. (I also recognize that the only reason why I feel safe criticizing the US government in the US is our protection for free speech.) ↩
I recognize that this is recorded deaths, so it is likely that other countries had more deaths (such as India), but it would be hard to imagine the true count leaving the US outside of the top 5. ↩
My Information Diet
On July 03 2021, the subject of media and news sources came up in a conversation I had with someone over brunch when we were talking about media bias. I was asked: "what news do you read?" I regret that I gave a sloppy response that sounded like a worse version of: "uh, I read a variety of news …" followed by me listing a few from memory. I wish I had given a crisper response, and since that day, I have thought about that question every day.
In this blog post, I describe my information diet, referring to how I read and consume media to understand current events. Before getting to the actual list of media sources, here are a few comments to clarify my philosophy and which might also preemptively address common objections.
There are too many sources and not enough time to read all the ones I list below in detail every day. Instead, I have to be strategic. If I find that I haven't been checking one of these sources for a few days, then I mentally mark it down as a "TODO" to catch up on in the near future. Another reading strategy is that I check news during a limited time range in the evening, after work, so that I am not tempted to browse these aimlessly all day. Otherwise, I would never get "real" work done. I also prefer reading over watching, as I can cover more ground with reading.
I did not list social media style sources such as Reddit and Twitter. I get some news from these, mainly because my field of robotics and AI strangely relies on Twitter for promoting academic content, but I worry that social media is designed to only amplify voices that we believe are correct, with algorithms funneling us towards information to which we are likely to agree, which increases polarization. Furthermore, especially when people can post anonymously, discussions can get highly charged and political. That brings me to the next point…
Whenever possible, look for high quality reporting. A few questions I ask myself in this regard: (1) Are there high standards for the quality of reporting, and does the writing appear to be in-depth, detailed, empathetic, and persuasive instead of hyper-partisan and filled with ad-hominem attacks? (2) Can I verify the identity of the authors? (3) Who are the experts that get invited to provide commentary? (4) Do articles cite reputable academic work? (5) Are there easily-searchable archives, so that whatever people write remains in the permanent record?
I also strive to understand the beliefs of the people who own and fund the media source. In particular, can the outlet be critical of the people who fund it, or of the government where its headquarters is located? How much dissent is allowed? I am mindful of the difference between an opinion article and an article that describes something such as a natural disaster. While both have bias, it is more apparent in the former since it's by definition an opinion (these are often called "op-eds" for short).
Regarding bias, in my view every newspaper or media source has some set of biases (some more than others) which reflect the incentives of its organizers. Every person has biases, myself included naturally, which explains why I get suspicious whenever a person or an entity claims to be the sole arbiter of truth, "unbiased," and so on. Thus, when I read a newspaper — say a standard corporate newspaper in the United States — I consume its content while reminding myself that the choices of articles and reporting reflect biases inherent in the paper's executives or organizers. Similarly, when I read from a source that's partially or fully under the control of a government, I remind myself that such media ultimately has to protect the interests of its government.
This does not mean it is a bad idea per se to consume biased media. My main argument is that it is a bad idea to consume a small set of media that convey highly similar beliefs and messages. (I also think it is a bad idea to consume no media, as if the solution to avoiding bias is to avoid the news altogether. How else would I be able to know what goes on in the world?) I am also not saying that reading from a variety of media sources is a "solution" or a "cure" for biased news media; my claim is that it is better than the existing alternative of only limiting oneself to a small set of tightly similar media.
This means that, indeed, I read from media sources whose beliefs I might find repugnant or misguided. Maybe it's just a weird peculiarity of mine, but I like reading stuff that sends me into a rage. If anything, seeing how particular sources try to frame arguments has made it a lot easier for me to poke holes in their reasoning. In addition, people I disagree with are sometimes … not entirely wrong. I can strongly disagree with the political beliefs of a writer or broadcaster, but if they write an 800-word essay on some narrow issue, it may very well be that I agree with the contents of that essay. Of course, maybe they are wrong or misleading, in which case it's helpful to cross-reference with other media sources.
I have lost count of the number of times I have read variations of: "what the media doesn't want to tell you …" or "the media doesn't cover this…" or "the media is heavily biased…". I'm not sure it's possible to collectively assume that all the sources I list below are heavily biased together. They each have some bias on their own, but can all of them really be collectively biased against one entity, individual, government, or whatever? I don't believe that's the case, but let me know if I'm wrong. My guess is that when people say these things, they're referring to a specific group of people who consume a narrow subset of media sources. (Interestingly, when I read those variations of "the media doesn't want you to know…" it's also self-defeating because I have to first read that phrase and its associated content from a media source in the first place.) The bigger issue might be consuming media from too few sources, instead of too many sources.
I don't pay for most of these sources. Only some of these require subscriptions, and it might be possible to get subscriptions for free as part of a job perk, or to get a discount on the first year of the purchase.
Nonetheless, I highly encourage paying and supporting local newspapers. For reference, I own a subscription to the local Pittsburgh Post Gazette, and before that I read Berkeleyside (and donated on occasion). A local newspaper will tend to have the most accurate reporting for local news. Furthermore, if there is concern about bias in national news or if (geo)politics feels depressing, then the local news by definition tends to cover less of that.
I also encourage supporting press freedom. I fully recognize that I am fortunate to have the freedom to read all these sources, which I deliberately chose so that they cover a wide range of political and worldwide views. This freedom is one of the greatest and most exhilarating things about my wonderful life today.
Without further ado, here are some of the media groups, arranged in rough food groups. Within each group, the sources are listed roughly alphabetically. If a news source is listed here, then I can promise you that while I can't spend equal amounts of time reading each one, I will make an honest effort to give the source sufficient attention.
CNBC / MSNBC
Pittsburgh Post Gazette
Berkeleyside
ESPN / The Undefeated
Tehran Times
The Hoover Institute
The Council on Foreign Relations / Foreign Affairs
I hope this list is useful. This blog post is the answer that I will now give to anyone who asks me about my information diet.
My Conversations with Political Offices in Support of Chinese Scholars
Lately, I have been in touch with some of the political offices for which I am a constituent, to ask if they can consider steps that would improve the climate for Chinese international students and scholars. Now that I reside in the critical swing state of Pennsylvania, the two US Senators who represent me are Senators Bob Casey and Pat Toomey. This past week, I called their Pittsburgh offices multiple times and was able to contact a staff member for Senator Toomey.
What follows is a rough transcript of my conversation with the staff member. This is from memory, so there's obviously no way that this is all correct, and it's also a sanitized version as I probably got rid of some 'uhms' or mumbles that I experienced when having this conversation. However, I hope I was able to deliver the main points.
[Begin Transcript]
Me: Hello, is this the office of Senator Pat Toomey?
Staff Member: Yes it is, how may I help you?
Me: Thank you very much for taking my call. My name is Daniel, and I am a researcher at Carnegie Mellon University in Pittsburgh, working in robotics. I wanted to quickly talk about two main points.
Staff Member: Sure.
Me: First, I'm hoping to talk about something called the China Initiative. This is something that President Trump started and President Biden has continued. This is causing some concerns among many of us in the scientific research community, especially among those from China or even ethnic Chinese citizens of other countries. Essentially, it is trying to see if there are hostile intentions among researchers or if there are undisclosed connections with the Chinese government. Right now it seems to be unfairly targeting Chinese researchers, or at the very least assuming that there is some form of guilt associated with them. If there's any way we can look at ending, or at least scaling back, this initiative, that would be great. A number of leading American universities have asked our Attorney General to consider this request, including, I should also add, Carnegie Mellon University.
Staff Member: Yes, I understand.
Me: And so, the other thing I was hoping to bring up is the subject of visas. Many of my Chinese colleagues are on 1-year visas, whereas in the past they might have gotten 5-year visas. If there's any way we can return to giving 5-year visas, that would be great. It makes things easier on them and I think they would appreciate it and feel more welcomed here if they had longer visas.
Staff Member: I see.
Me: To be clear, I'm not discounting the need to have security. I fully understand that there has to be some layer of security around international scholars, and I also understand the current tensions between the two governments involved. And I personally have major disagreements with some things that the government of China has been doing. However, what I'm saying is that we don't necessarily want to assume that Chinese students feel the same way, or at least, we don't want to treat them under a cloud of suspicion that assumes they have malicious intents, with guilt by association.
Staff Member: Yes, I see.
Me: And more on that point, many of the Chinese students end up staying in this country out of their own desires, some of them end up staying as professors here, which overall helps to increase research quality. Or they might stay as entrepreneurs … this helps out the local community here as well.
Staff Member: Sure, I understand your concerns. This seems reasonable, and I can pass your concerns to Senator Toomey. First, may I have your last name? I didn't quite catch that.
Me: My last name is Seita. It's spelled 'S' as in … uh, Senator, 'e', 'i', 't', 'a'.
Staff Member: Thanks, and what is your phone number and address?
Me: [I provided him with this information.]
Staff Member: And what about your email?
Me: It's my first letter of the first name, followed by my last name, then 'at' andrew dot cmu dot edu. This is a CMU email but it has 'andrew' in it, I think because of Andrew Carnegie.
Staff Member: Oh! [Chuckle] I have a number of contacts from CMU and I was always wondering why they had emails that contained 'andrew' in it. Now I know why!
Me: Oh yeah, I think that's the reason.
Staff Member: Well, thank you very much. I also know that Senator Toomey will be interested in these two items that you brought up to me, so I will be sure to pass on your concerns to him, and then he can reply to you.
Me: Thank you very much.
[End Transcript]
The staff member at Pat Toomey's office seemed sincere in his interest in passing on this information to Senator Toomey himself, and I appreciate that. I am fairly new to the business of contacting politicians but hopefully this is how US Senators get word of what their constituents think.
Update December 24, 2021: Since my original conversation above, I've continued to contact Pennsylvania's US Senators along with my US Representative. Senator Pat Toomey and Senator Bob Casey, along with Representative Mike Doyle, have forms on their website where I can submit emails to voice my concerns. Here's the email template I used for contacting these politicians, with minor variations if needed:
Hello. My name is Daniel and I am a robotics researcher at Carnegie Mellon University. I wanted to ask two quick requests that I hope the Senator and his staff can investigate.
The first is the China Initiative, designed to protect America against Chinese espionage. I fully understand and respect the need for national security, and I am highly concerned about some aspects of the current government of China. However, this initiative is having a negative effect on the academic community in the United States, which by its very nature is highly international. What we don't want to do is assume without direct evidence that Chinese researchers, or researchers who appear to be ethnic Chinese, or researchers who collaborate with those from China, have nefarious intentions. A bunch of leading American universities have asked Attorney General Merrick Garland to take a look at scaling back, limiting, or eliminating outright the China Initiative, which has been continued under President Biden. If you can take a look at that, that would be great. For more context, please see: https://www.apajustice.org/end-the-china-initiative.html
The second is about visas. If someone from the Senator's staff can take a look at visas for Chinese international students, and particularly consider giving them 5 year visas instead of the 1 year visas that are becoming more common now. In the past, Chinese students have told me that they got 5-year visas, and a longer visa would make travel easier for them and would make them feel more welcomed to the country. We get a lot of Chinese students and other international students, and one reason why top American universities are the best in the world is because of talent that gets recruited across the world. Many of the Chinese students additionally end up staying in the United States as professors, entrepreneurs, and other highly-skilled employees, which benefits our country. If they wish to stay, I hope we can be as welcoming as possible. And if they choose to return to their home country, then the more welcoming we are, the more likely they might be to pass on positive words to their colleagues, friends, and family members.
(Unfortunately, Representative Doyle's website seems to not be functioning properly and I got a "The Requested Page Could Not Be Found" error, so I might need to call his office. However, I also got an automated email response thanking me for contacting his office … so I'm not sure if his office got my message? I will investigate.)
A few days later, Senator Casey's office responded with an email saying that my message had been forwarded to the relevant people on his staff who handle education and immigration. Senator Casey is on the Senate committee on Health, Education, Labor and Pensions so he and his office may be relatively better suited to handling these types of requests. I appreciated the email response, which clearly indicated that someone had actually read my email and was able to understand the two major points.
Maybe this is a lesson for me that submitting emails through the Senators' websites is easier than calling them, since each time I called one of Senator Casey's offices, I ended up leaving messages with an automated voicemail system.
What is the Right Fabric Representation for Robotic Manipulation?
As many readers probably know, I am interested in robotic fabric manipulation. It's been a key part of my research – see my Google Scholar page for an overview of prior work, or this BAIR Blog post for another summary. In this post, I'd like to discuss two of the three CoRL 2021 papers on fabric manipulation. The two I will discuss propose Visible Connectivity Dynamics (VCD) and FabricFlowNet (FFN), respectively. Both rely on SoftGym simulation, and my blog post here about the installation steps seems to be the unofficial guide for installing it. Both papers approach fabric manipulation using quasi-static pick-and-place actions.
However, in addition to these "obvious" similarities, there's also the key issue of representation learning. In this context, I view the term "representation learning" as referring to how a policy should use, process, and reason about observational data of the fabric. For example, if we have an image of the fabric, do we use it directly and propagate it through the robotic learning system? Or do we compress the image to a latent variable? Or do we use a different representation? The VCD and FFN papers utilize different yet elegant approaches for representation learning, both of which can lead to more efficient learning for robotic fabric manipulation. Let's dive into the papers, shall we?
Visible Connectivity Dynamics
This paper (arXiv) proposes the Visible Connectivity Dynamics (VCD) model for fabric manipulation. This is a model-based approach, and it uses a particle-based representation of the fabric. If the term "particle-based" is confusing, here's a representative quote from a highly relevant paper:
Our approach focuses on particle-based simulation, which is used widely across science and engineering, e.g., computational fluid dynamics, computer graphics. States are represented as a set of particles, which encode mass, material, movement, etc. within local regions of space. Dynamics are computed on the basis of particles' interactions within their local neighborhoods.
You can think of particle-based simulation as discretizing items into a set of particles or "atoms" (in simulation, they look like small round spheres). An earlier ICLR 2019 paper by the great Yunzhu Li shows simulation of particles that form liquids and rigid objects. With fabrics, a particle-based representation can mean representing fabric as a grid of particles (i.e., vertices) with bending, shearing, and stiffness constraints among neighboring particles. The VCD paper uses SoftGym for simulation, which is built upon NVIDIA Flex, which uses position-based dynamics.
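To make the particle idea more concrete, below is a minimal sketch (my own toy code, not taken from SoftGym or NVIDIA Flex) that builds a square fabric as a grid of particles and connects neighbors with structural and shear edges; the grid size and spacing are arbitrary illustrative choices.

import numpy as np

def make_fabric_grid(rows=30, cols=30, spacing=0.01):
    """Toy particle grid for a flat square fabric (illustrative only).

    Returns particle positions of shape (N, 3) and an edge list of shape
    (E, 2) connecting each particle to its right/down neighbors
    (structural constraints) and its diagonal neighbor (shear constraint).
    """
    xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
    positions = np.stack(
        [xs.ravel() * spacing, ys.ravel() * spacing, np.zeros(rows * cols)], axis=1)

    def idx(r, c):
        return r * cols + c

    edges = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:                    # structural: right neighbor
                edges.append((idx(r, c), idx(r, c + 1)))
            if r + 1 < rows:                    # structural: down neighbor
                edges.append((idx(r, c), idx(r + 1, c)))
            if r + 1 < rows and c + 1 < cols:   # shear: diagonal neighbor
                edges.append((idx(r, c), idx(r + 1, c + 1)))
    return positions, np.array(edges)

positions, edges = make_fabric_grid()
print(positions.shape, edges.shape)  # (900, 3) and (2581, 2)

A real simulator also attaches bending constraints and physical parameters (mass, stiffness) to these particles, but the basic "grid of vertices plus constraints" structure is the same.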
The VCD paper proposes to tackle fabric smoothing by constructing a dynamics model over the connectivity of the visible portion of the cloth, instead of the entire cloth (the full "mesh"). The intuition is that the visible portion will include some particles that are connected to each other, but also particles that are not connected and just happen to lie nearby due to folds or wrinkles. Understanding this connectivity structure should then be useful for planning smoothing. While this is a simplification of the full mesh prediction problem and seems like it would throw away information, it turns out this is fine for smoothing and is, in any case, much easier to learn than predicting the full mesh's dynamics.
Each fabric is represented by particles, which is then converted into a graph consisting of the standard set of nodes (vertices/particles) and edges (connections between particles), and the dynamics model over these is a graph neural network (GNN). Here is an overview of the pipeline with the GNN, which also shows a second GNN used for edge prediction:
The architecture comes from this paper, which simulates fluids, and there's a chance that this might also be a good representation for fabric in that it can accurately model dynamics.
To further expand upon the advantages of the particle-based representation, consider that the fabric representation used by the graph dynamics model does not encode information about color or texture. Hence, it seems plausible that the particle-based representation is invariant to such features, and domain randomizing over those might not be necessary. The paper also argues that particles capture the inductive bias of the system, because the real world consists of objects composed of atoms that can be modeled by particles. I'm not totally sure if this translates to accurate real world performance given that simulated particles are much bigger than atoms, but it's an interesting discussion.
Let's recap the high-level picture. VCD is model-based, so the planning at test time involves running the learned dynamics model to decide on the best actions. A dynamics model is a function $f$ that, given a state-action pair $(s_t, a_t)$, predicts the next state $s_{t+1} \approx f(s_t, a_t)$. Here, $s_t$ is not an image or a compressed latent vector, but the particle-based representation from the graph neural network.
The VCD model is trained in simulation using SoftGym. After this, the authors apply the learned dynamics model with a one-step planner (described in Section 3.4) on a single-arm Franka robot, and demonstrate effective fabric smoothing without any additional real world data. The experiments show that VCD outperforms our prior method, VisuoSpatial Foresight (VSF), and two other works from Pieter Abbeel's lab (covered in our joint blog post).
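As a rough mental model of how a learned particle dynamics model can be used for planning, here is a toy one-step planner in Python; the dynamics_model and score_fn callables, the sampling ranges, and the coverage proxy are placeholder assumptions on my part, not the paper's actual implementation.

import numpy as np

def plan_one_step(particles, dynamics_model, score_fn, num_candidates=100, rng=None):
    """Sample pick-and-place actions, predict the resulting particle positions
    with a learned dynamics model, and keep the highest-scoring action."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_action, best_score = None, -np.inf
    for _ in range(num_candidates):
        pick = particles[rng.integers(len(particles)), :2]   # (x, y) of a random particle
        place = pick + rng.uniform(-0.1, 0.1, size=2)        # a random nearby place point
        action = np.concatenate([pick, place])
        predicted = dynamics_model(particles, action)        # predicted next particle positions
        score = score_fn(predicted)                          # e.g., planar coverage of the fabric
        if score > best_score:
            best_action, best_score = action, score
    return best_action

def coverage(particles):
    """Crude smoothness proxy: area of the bounding box of the top-down projection."""
    spans = particles[:, :2].max(axis=0) - particles[:, :2].min(axis=0)
    return float(spans[0] * spans[1])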
While VCD does an excellent job at handling fabric smoothing by smoothing out wrinkles (in large part due to the particle-based representation), it does not do fabric unfolding. This follows almost by construction because the method is designed to reason only about the top layer and thus ignores the part underneath, and knowing the occluded parts seems necessary for unfolding.
FabricFlowNet
Now let us consider the second paper, FabricFlowNet (FFN) which uses the idea of optical flow as a representation for goal-conditioned fabric manipulation, for folding fabric based on targets from goal images (or subgoal images). Here is the visualization:
The goal-conditioned setup means they are trying to design a policy $\pi$ that takes in the current image $x_t$ and the current sub-goal $x_i^g$, and produces $a_t = \pi(x_t, x_i^g)$ so that the fabric as represented in $x_t$ looks closer to the one represented with $x_i^g$. They assume access to the subgoal sequence, where the final subgoal image is the ultimate goal.
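In code, the goal-conditioned rollout amounts to a loop over the provided subgoals. Here is a minimal sketch, where env and policy are hypothetical stand-ins for a SoftGym-style environment and a trained FFN-style policy (the names are mine, not from the paper's code):

def rollout_with_subgoals(env, policy, subgoal_images):
    """Execute a goal-conditioned policy over a sequence of subgoal images."""
    obs = env.reset()                  # current (e.g., depth) image x_t
    for subgoal in subgoal_images:     # the final element is the ultimate goal
        action = policy(obs, subgoal)  # a_t = pi(x_t, x_i^g), e.g., pick/place pixels
        obs = env.step(action)         # execute the action and observe the new fabric state
    return obs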
The paper does not pursue the naive approach where one inputs both the current observation and (sub)goal images and runs it through a standard deep neural network, as done in some prior goal-conditioned work such as our VisuoSpatial Foresight work and my work with Google on Goal-Conditioned Transporter Networks. The paper argues that this makes learning difficult as the deep networks have to reason about the correct action and the interplay between the current and goal observations.
Instead, it proposes a clever solution using optical flow, which is a way of measuring the relative motion of objects in an image. For the purposes of this paper, optical flow should be interpreted as: given an action on a fabric, we will have an image of the fabric before and after the action. For each pixel in the first image that corresponds to the fabric, where will it "move to" in the second image? This is finding the correspondence between two images, which suggests that there is a fundamental relationship between optical flow and dense object networks.
Optical flow is actually used twice in FFN. First, given the goal and observation image, a flow network predicts a flow image. Second, given pick point(s) on the fabric, the flow image automatically gives us the place point(s).
Both of these offer a number of advantages. First, as an input representation, optical flow can be computed with depth images alone (it does not require RGB) and will naturally be invariant to fabric color. All we care about is understanding what happens between two images via their pixel-to-pixel correspondences. Moreover, the labels for predicting optical flow can be generated entirely in simulation, in a self-supervised manner. One just has to code a simulation environment that randomly adjusts the fabric, and doing so gives us ground-truth before-and-after images as labels. We can then train the flow prediction using the standard endpoint error loss, which minimizes the Euclidean distance between the predicted and actual correspondence points.
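For reference, the endpoint error loss is just the average Euclidean distance between predicted and ground-truth flow vectors. Here is a minimal PyTorch-style sketch; the tensor shapes and the optional fabric mask are my assumptions, not necessarily how FFN implements it:

import torch

def endpoint_error(pred_flow, gt_flow, mask=None):
    """Average endpoint error between predicted and ground-truth flow.

    pred_flow, gt_flow: (B, 2, H, W) tensors of per-pixel (dx, dy) motion.
    mask: optional (B, 1, H, W) binary mask, e.g., to count fabric pixels only.
    """
    err = torch.norm(pred_flow - gt_flow, dim=1, keepdim=True)  # (B, 1, H, W)
    if mask is not None:
        return (err * mask).sum() / mask.sum().clamp(min=1)
    return err.mean()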
The second use, letting optical flow give us the placing point(s), has an obvious advantage: it is not necessary for us to design, integrate, and train yet another neural network to predict the placing point(s). In general, predicting a place point can be a challenging problem since we're regressing to a single pixel, and this can introduce more imprecision. Furthermore, the FFN system decouples the observation-goal relationship and the pick point analysis. Intuitively, this can simplify training, since the neural networks in FFN have "one job" to focus on, instead of two.
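To illustrate this second use of flow, here is a toy sketch of reading off a place pixel from a predicted flow image; the (row, col) convention, the (H, W, 2) flow layout, and the clamping are illustrative assumptions:

import numpy as np

def place_from_flow(flow, pick_pixel):
    """Given a flow image of shape (H, W, 2) and a pick pixel (row, col),
    the place pixel is simply pick + flow[pick]."""
    r, c = pick_pixel
    dr, dc = flow[r, c]                       # predicted motion of that fabric point
    H, W = flow.shape[:2]
    place = (int(np.clip(r + dr, 0, H - 1)),  # clamp so the place point stays in the image
             int(np.clip(c + dc, 0, W - 1)))
    return place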
There are a few other properties of FabricFlowNet worth mentioning:
For the picking network, FFN sub-divides the two pick points into separate networks, since the value of one pick point should affect the value of the other pick point. This is the same idea as proposed in this RSS 2020 paper, except instead of "pick-and-place," it's "pick-and-pick" here. In FFN, the networks are also fully convolutional networks, and hence do picking implicitly, unlike in that prior work.
An elegant property of the system is that it can seamlessly alternate between single-arm and bimanual manipulation, simply by checking whether the two picking points are sufficiently close to each other (see the sketch after this list). This simultaneously enforces a safety constraint by reducing the chances that the two arms collide.
The network is supervised by performing random actions in simulation using SoftGym. In particular, the picking networks have to predict heatmaps. Intuitively, the flow provides information on how to get to the goal, and the picking networks just have to "match heatmaps."
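Regarding the second bullet above, a toy version of the single-arm versus bimanual switch might look like the following; the distance threshold and the return format are made-up illustrative choices, not the paper's actual values:

def choose_arms(pick1, pick2, min_separation=0.1):
    """If the two pick points are too close, fall back to a single arm
    (which also reduces the chance of the arms colliding)."""
    dist = ((pick1[0] - pick2[0]) ** 2 + (pick1[1] - pick2[1]) ** 2) ** 0.5
    if dist < min_separation:
        return "single-arm", pick1           # one pick point is enough
    return "bimanual", (pick1, pick2)        # both arms act simultaneously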
What is the tradeoff? The system has to assume optical flow will provide a good signal for the placing point. I wonder when this would not hold? The paper also focuses on short-horizon actions (e.g., 1 or 2 actions) starting from flat fabric, but perhaps the method also works for other scenarios.
I really like the videos on the project website – they show a variety of success cases with bimanual manipulation. The experiments show that it's much better than our prior work on VisuoSpatial Foresight, along with another method that relies on an "FCN-style" approach to fabric manipulation; the idea of this is covered in my prior blog post.
I think this paper will have significant impact and will inspire future work in flow-based manipulation policies.
Both VCD and FFN show that, with clever representations, we can obtain strong fabric manipulation results, outperforming (in some contexts) our prior method VisuoSpatial Foresight, which uses perhaps the most "straightforward" representation of raw images. I am excited to see what other representations might also turn out to be useful going forward.
Xingyu Lin*, Yufei Wang*, Zixuan Huang, David Held. Learning Visible Connectivity Dynamics for Cloth Smoothing. CoRL 2021.
Thomas Weng, Sujay Bajracharya, Yufei Wang, Khush Agrawal, David Held. FabricFlowNet: Bimanual Cloth Manipulation with a Flow-based Policy. CoRL 2021.
Live Transcription on Zoom for Ubuntu
As the pandemic unfortunately continues throughout the world and is now approaching two years old, the state of affairs has at least given many of us time to adjust to using video conferencing tools. The two that I use the most, by far, are Google Meet and Zoom.
I prefer using Google Meet, but using Zoom is unavoidable since it's become the standard among my colleagues in academia. Zoom is likely used more widely than Google Meet because of access to China. (Strangely, though, I was recently on a Zoom call with someone I knew in Beijing, who told me he needed a Virtual Private Network (VPN) to use Zoom, so maybe I'm not fully understanding how VPNs work.)
The main reason why I continue using Google Meet is because of the quality of its live transcription. Just before the pandemic started, I remember getting on a virtual call with Google host Andy Zeng for what I call a "pre-interview interview." (For research scientist internships at Google, typically a host will have already pre-selected an intern in advance.) Being from Google, Andy had naturally set up a Google Meet call, and I saw that there was a "CC" button and clicked on it. Then the live transcription started appearing at the bottom of our call, and you know, it was actually pretty darn good.
When the pandemic started, I don't think Zoom supported this feature, which is why I asked to have Google Meet video calls for meetings with my involvement. It took a while, but Zoom was able to get live transcription working … but not for Ubuntu systems, until very recently. As of today (November 13, 2021) with Zoom version 5.8.3, I can launch a Zoom room on my Ubuntu 18.04 machine and enable the live transcription, and it works! For reference, I have been repeatedly trying to get live transcription on Ubuntu up until October 2021 without success.
This is a huge relief, but there are still several caveats. The biggest one is that the host must explicitly enable live transcription for participants, who can then choose to turn it on or off on their end. Since I have had to ask Zoom hosts to repeatedly enable live transcription so that I could use it, I wrote up a short document on how to do this, and I put this link near the top of my new academic website.
I don't quite understand why this restriction exists. I can see why it makes sense to have the host enable captioning if it comes from third-party software or a professional captioner, since there could be security reasons there. But I am not sure why Zoom's built-in live transcription requires the host to enable it. This seems like an unusual hassle.
Two other downsides of Zoom's live transcription, compared to Google Meet, are that (empirically) the transcription quality does not seem as good, and that Zoom's captions only span a short width of the screen, whereas Google puts more text on the screen. The former seems to be a limitation of the software, and Google might have an edge there due to their humongous expertise in AI and NLP, but the latter seems to be an API issue which should be easy to resolve. Oh well.
I'm happy that Zoom seems to have integrated live transcription support for Ubuntu systems. For now I still prefer Google Meet but it makes the Zoom experience somewhat more usable. Happy Zoom-ing!
My Evolving Research Workflow: Conda, TensorFlow, PyTorch, and Disk Space
In the past, I have written about some workflow and coding tips, such as improving my development environment with virtualenvwrapper, organizing GitHub repositories, running and saving experiments in Python and understanding (a little) about how docker works.
As I transition to my new postdoc role at CMU as of September 2021, it feels like a good time to recap my current workflow. I am constantly trying to think about how I can be more productive and whether I should learn about this or that feature (the answer is usually "no" but sometimes it is "yes").
In this blog post, I will discuss different aspects of my current workflow, with a focus on: (1) conda environments, (2) installing TensorFlow and Pytorch with CUDA, and (3) managing storage on shared machines.
In the future, I plan to update this post with additional information about my workflow. There are also parts of my prior workflow that I have gotten rid of. Looking back, I'm surprised I managed to get a PhD with some of the sloppy tactics that I employed!
When reading this post, keep in mind that the main operating system I use is Ubuntu 18.04 and that I do essentially all my programming with Python. (I keep telling myself and writing in my New Year Resolution documents that I will get back to coding with C++, but I never do so. My apologies in advance.) At some point, I may upgrade to Ubuntu 20.04, but the vast majority of research code I use these days is still tied to Ubuntu 18.04. I do use a Macbook Pro laptop, but for work contexts, that is mainly for making presentations and possibly writing papers on Overleaf. If I do "research programming" on my laptop, it is done through ssh-ing to an Ubuntu 18.04 machine.
Conda Environments
Starting in 2019, I began using conda environments. Previously, I was using virtualenvs coupled with virtualenvwrapper to make handling multiple environments easier, but it turned out to be a huge hassle to manage with various "command not found" errors and warnings. Furthermore, I was running into countless issues with CUDA and TensorFlow incompatibilities, and inspired by this October 2018 Medium article, which amusingly says that if using "pip install" commands for TensorFlow, "There is a probability of 1% that this process will go right for you!", I switched to conda environments.
Conda environments work in basically the same way as virtualenvs in that they isolate a set of Python packages independent of the system Python. Here, "conda install" plays the role of "pip install". Not all packages installable with pip are available through conda, but that's not a huge issue because you can also run normal pip install commands in a conda environment. The process can be delicate, though (see this for a warning), but I can't remember ever experiencing issues with mixing conda and pip packages.
Here's how I get the process started on new machines:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
I use miniconda instead of anaconda, but that's mainly because I prefer something more lightweight to make the process faster and take less disk space. Furthermore, anaconda comes with packages that I would normally want to install myself anyway later (such as numpy) so that I can easily control versions and dependencies.
To be clear, here's what I do when I run conda envs after that bash command. I accept the license:
I always use the default location (click enter) which is typically /home/<USER>/miniconda3. Then after that I will see:
I typically say "yes" so that miniconda automatically adds stuff to my .bashrc file. After this, I can run the "conda" command right away, but I also don't want the "base" environment to be active right away because I would rather have each new command line window start with a blank non-conda environment. Thus, after closing and re-opening the shell (typically via exiting the machine and ssh-ing again) I do:
conda config --set auto_activate_base false
This information goes into the ~/.condarc file. After refreshing with . ~/.bashrc, conda is all set up for me to use. Here are a few commands that I regularly use:
conda activate <NAME> and conda deactivate to activate or deactivate the environment. When the environment is activated, use conda install <PACKAGE>.
conda info --envs to check all my existing conda environments.
conda list: This will check the installed packages in the current conda environment. This will also conveniently clarify if any packages were installed via pip.
conda create --name <NAME> python=3.7 -y, to create conda environments with the specified Python version. You can add the "-y" argument to avoid having to explicitly approve the process.
conda env remove --name <NAME>, to remove conda environments.
We now turn to discussing how conda environments work with TensorFlow and PyTorch.
Handling TensorFlow, PyTorch, and CUDA
Migrating to TensorFlow was the original motivation for me to use conda environments due to running into incompatible CUDA/CuDNN versions with "pip install tensorflow" commands on various machines. You can find a table of TensorFlow packages and their associated CUDA and CuDNN versions here and a popular StackOverflow post here.
As of today, the latest version of TensorFlow is 2.6.0 through pip, but it's 2.4.1 through conda. A different set of maintainers package the conda TensorFlow version as compared to the one provided through the Python Package Index (PyPI) which is from the official TensorFlow developers, which is why there is some version lag (see this post for some context). Since it's rare that I absolutely require the latest TensorFlow version, I focus on TensorFlow 2.4.1 here. I run the following commands to quickly start a Python 3.7 conda environment with TensorFlow 2.4.1 installed:
conda create --name tftest python=3.7 -y && conda activate tftest
conda install ipython tensorflow-gpu==2.4.1 -y
Similar Python versions will likely work as well. These days, I use Python 3.6 at a minimum. Also, I just put in ipython since I like running it over the default Python shell. Once I run ipython on the command line, I can try:
import tensorflow as tf  # needed before the checks below
tf.config.list_physical_devices('GPU')
tf.test.is_built_with_cuda()
The tf.test.is_gpu_available() method is deprecated, so use tf.config.list_physical_devices('GPU') instead. Presumably, this should give information that is consistent with what happens when running nvidia-smi on the command line; the first one should list all GPUs and the second one should return True. If not, something went wrong.
This process consistently works for a variety of machines I have access to, and gets TensorFlow working while bundling CUDA internally within the conda environment. This means in general, the conda environment will not have the same CUDA version as the one provided from nvcc --version which is typically the one installed system-wide in /usr/local/. For the commands above, this should install cudatoolkit-10.1.243 in the conda environment. This package is 347.4 MB, and includes CuDNN. Here is another relevant StackOverflow post on this matter.
Finally, wrap things up by removing each created test environment to reduce clutter: conda env remove --name tftest.
Hopefully that helps clarify one way to install TensorFlow in conda environments for shared machines. One day I hope that TensorFlow will be simpler to install. To be clear, it's not that hard, but it could be made a little easier, as judged by the community's reception. (To put things in perspective, remember how hard it was to install CAFFE back in 2014-2015? Heh.) In new "clean" machines where one can easily control which CUDA/CuDNN versions are packaged on a machine on the fly, such as those created from Google Cloud Platform, the pip version could be relatively easy to install.
What about PyTorch? For PyTorch, the installation process is even easier, because I believe the PyTorch maintainers simultaneously maintain conda and pip packages; we have the option of selecting either one on the official installation page:
As with my TensorFlow tests, I can test PyTorch installation via:
conda create --name pttest python=3.7 -y && conda activate pttest
conda install ipython pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
As of today, this will install PyTorch 1.9.0 along with ipython. Again, ipython is not necessary but I like including it. You can then check if PyTorch is using the GPU(s) as follows:
import torch
torch.cuda.is_available()
torch.cuda.device_count()
torch.cuda.get_device_name(0)
Here is the StackOverflow reference. As with my TensorFlow test, this method of installing PyTorch will detect the GPUs and does not rely on the system's existing CUDA version, because the conda installation provides it for us. For PyTorch, the cudatoolkit-10.2.89 package (which is 365.1 MB) gets installed, which you can check with conda list. Once again, this also includes CuDNN.
Clean things up with: conda env remove --name pttest.
Managing Storage on Shared Machines
In academic research labs, students often share machines. It thus helps to have a scalable, efficient, and manageable way to store data. Here's how I typically do this for machines that I administer, where I am a "sudo" user and grant access to the machine to other lab members who may or may not be sudo (for example, I rarely make new undergrad researchers sudo unless there's a good reason). I assume that the machine is equipped with a separate SSD and HDD. The SSD is typically where users store their local data, and because it's an SSD rather than an HDD, reading and writing data is faster. The HDD is mainly for storing larger datasets, and typically has much more storage than the SSD.
For a clean machine, one of the most basic first steps is to make sure that the SSD and HDD are mounted upon startup, and accessible to all users. Usually, the SSD is automatically mounted, but the HDD might not be. I can mount these drives automatically by editing the /etc/fstab file, or by using the "disks" program, which will end up editing /etc/fstab for me. I suggest following the top answer to this AskUbuntu question. My convention is to mount the HDD under a directory named /data.
To inspect file systems, I use df -h, where the -h argument makes the sizes human-readable. Here's an example of a subset of the output when I run df -h:
/dev/nvme0n1p1 1.9T 1.2T 598G 67% /
/dev/sdb1 13T 571G 12T 5% /data
According to the above, the SSD has 1.9T of total space (of which 67 percent is used), and the HDD has 13T of total space. The output of df -h includes a bunch of other lines with information that I'm not sure how to interpret; I am guessing those correspond to other minor "drives" that are needed for Ubuntu systems to function. I only use df -h to look at the SSD and HDD, to make sure they are actually there, and to check disk space.
Incidentally, another way I check disk space is by using du -sh <directory>, which will list space recursively stored under <directory>. Depending on user privileges, the command might result in a lot of distracting "permission denied" warnings, in which case I add 2> /dev/null at the end of the command to suppress those messages. I recommend reading this article for more information and useful tips on managing disk space.
After mounting the HDD, it is typically under the control of root for both the user and the group, which you can check with ls -lh /. This is problematic if I want any user to be able to read and write to this directory. To resolve this, I usually follow the top answer to this AskUbuntu question. I typically make a new group called datausers, and then add all users to the group. I then change the ownership of the shared folder, /data. Lastly, I choose this option:
sudo chmod -R 1775 /data
According to the AskUbuntu question, this means that all users in datausers can add to /data, and can read but not write to each other's files. Furthermore, only owners of files can delete them, and users outside of datausers will be able to see the files but not change them.
From these steps, running ls -lh / shows:
user@machine:~$ ls -lh /
drwxrwxr-t 6 root datausers 4.0K Sep 17 21:43 data
As with df -h, I am only showing part of the output of the above command, just the line that lists data. This shows that it is correctly under the group "datausers."
Finally, I reboot the machine, and now users who are in the datausers group should be able to read and write to /data without sudo access. Furthermore, unless sudo privileges are involved, users cannot modify data from other users in /data.
Conclusion and Outlook
Using conda environments has been a huge help for my research workflow, and makes it easy to manage multiple Python projects. I have also been fortunate to get a better sense for how to effectively manage a finite amount of disk space among multiple users.
Some stuff that I also use in my current workflow, and would like to write more about in the future, include VSCode, vim, the Python debugger, Docker, and ssh keys. I also would like to understand how packages work for C++, to see how the process contrasts with Python packages.
My PhD Dissertation, and a Moment of Thanks
Back in May, I gave my PhD dissertation talk, which is the second-to-last major milestone in getting a PhD. The last one is actually writing it. I think most EECS PhD students give their talk and then file the written dissertation a few days afterwards. I had a summer-long gap, but the long wait is finally over. After seven (!) years at UC Berkeley, I have finally written up my PhD dissertation and you can download it here. It's been the ride of a lifetime, from the first time I set foot at UC Berkeley during visit days in 2014 to today. Needless to say, so much has changed since that day. In this post, I discuss the process of writing up my dissertation and (for fun) I share the acknowledgments.
The act of writing the dissertation was pretty painless. In my field, making the dissertation typically involves these steps:
Take 3-5 of your prior (ideally first-author) papers and stitch them back-to-back, with one paper as one chapter.
Do a find-and-replace to change all instances of "paper" to "chapter" (so that in the dissertation, the phrase "In this paper, we show…" turns to "In this chapter, we show …").
Add an introduction chapter and a conclusion chapter, both of which can be just a handful of pages long. The introduction explains the structure of the thesis, and the conclusion has suggestions for future work.
Then the little (or not so little things, in my case): add an acknowledgments section at the beginning, make sure the title and LaTeX formatting all look good, and then get signatures from your committee.
That's the first-order approximation to writing the PhD. Of course, the Berkeley Graduate Division claims that the chapters must be arranged and written in a "coherent theme" but I don't think people pay much attention to that rule in practice.
On my end, since I had already given a PhD talk, I basically knew I had the green light to write up the dissertation. My committee members were John Canny, Ken Goldberg, and Masayoshi Tomizuka, 3 of the 4 professors who were on my qualifying exam committee. I emailed them a few early drafts, and once they gave approval via email, it was a simple matter of uploading the PDF to ProQuest, as per instructions from the Berkeley Graduate Division. Unfortunately, the default option for uploading the PDF is to not have it open access (!!); making it open access requires an extra fee of USD 95.00. Yikes! Josh Tobin has a Twitter post about this, and I agree with him. I am baffled as to why this is the case. My advice, at least to Berkeley EECS PhD students, is to not pay ProQuest, because we already have a website which lists the dissertations open-access, as it should be done — thank you Berkeley EECS!
By the way, I am legitimately curious: how much money does ProQuest actually make from selling PhD theses? Does anyone pay for a dissertation??? A statistic would be nice to see.
I did pay for something that is probably a little more worthwhile: printed copies of the dissertation, just so that I can have a few books on hand. Maybe one day someone besides me will read through the content …
Well, that was how I filed the dissertation. What I wanted to do next here was restate what I wrote in the acknowledgments section of my dissertation. This section is the most personal one in the dissertation, and I enjoy reading what other students have to say. In fact, the acknowledgments are probably the most common part of theses that I read. I wrote a 9-page acknowledgments section, which is far longer than typical (but is not a record).
Without further ado, here are the acknowledgments. I hope you enjoy reading it!
When I reflect back on all these years as a PhD student, I find myself agreeing with what David Culler told me when I first came to Berkeley during visit days: "you will learn more during your years at Berkeley than ever before." This is so true for me. Along so many dimensions, my PhD experience has been a transformative one. In the acknowledgments to follow, I will do my best to explain why I owe so many people a great debt. As with any acknowledgments, however, there is only so much that I can write. If you are reading this after the fact and wish that I had written more about you, please let me know, and I will treat you to some sugar-free boba tea or keto-friendly coffee, depending on your preferred beverage.
For a variety of reasons, I had one of the more unusual PhD experiences. However, like perhaps many students, my PhD life first felt like a struggle but over time became a highly fulfilling endeavor.
When I arrived at Berkeley, I started working with John Canny. When I think of John, the following phrase comes to mind: "jack of all trades." This is often paired with the somewhat pejorative "master of none" statement, but a more accurate conclusion for John would be "master of all." John has done research in a wider variety of areas than is typical: robotics, computer vision, theory of computation, computational geometry, human computer interaction, and he has taught courses in operating systems, combinatorics, and social justice. When I came to Berkeley, John had already transitioned to machine learning. I have benefited tremendously from his advice throughout the years, first primarily on machine learning toolkits when we were working on BIDMach, a library for high throughput algorithms. (I still don't know how John, a highly senior faculty, had the time and expertise to implement state-of-the-art machine learning algorithms with Scala and CUDA code.) Next, I got advice from John for my work in deep imitation learning and deep reinforcement learning, and John was able to provide technical advice for these rapidly emerging fields. As will be highlighted later, other members of his group work in areas as diverse as computer vision for autonomous driving, video captioning, natural language processing, generating sketches using deep learning, and protein folding — it sometimes seems as if all areas of Artificial Intelligence (and many areas of Human Computer Interaction) are or were represented in his group.
A good illustration of John's qualities is what happens when asking for paper feedback. If I ask an undergrad, I expect them to point out minor typos. If I ask a graduate student, I expect minor questions about why I did not perform some small experiment. But if I ask John for feedback, he will quickly identify the key method in the paper — and its weaknesses. His advice also extended to giving presentations. In my first paper under his primary supervision, which we presented at the Conference on Uncertainty in Artificial Intelligence (UAI) in Sydney, Australia, I was surprised to see him making the long trip to attend the conference, as I had not known he was coming. Before I gave my 20-minute talk on our paper, he sat down with me in the International Convention Centre Sydney to go through the slides carefully. I am happy to have contributed one thing: right after I was handed the award for "Honorable Mention for Best Student Paper" from the conference chairs, I managed to get the room of 100-ish people to give a round of applause to John. In addition, John is helpful in fund-raising and supplying the necessary compute to his students. Towards the end of my PhD, when he served as the computer science division department chair, he provided assistance in helping me secure accommodations such as sign language interpreters for academic conferences.
I also was fortunate to work with Ken Goldberg, who would become a co-advisor and who helped me transition into a full-time roboticist. Ken is a highly energetic professor who, despite being a senior faculty with so many things demanding of his time, is able to give some of the most detailed paper feedback that I have seen. When we were doing serious paper writing to meet a deadline, I would constantly refresh my email to see Ken's latest comments, written using Notability on his iPad, and then immediately rush to address them. After he surprised me by generously giving me an iPad midway through my PhD, the first thing I thought of doing was to provide paper feedback using his style and to match his level of detail in the process. Ken also provides extremely detailed feedback on our research talks and presentations, an invaluable skill given the need to communicate effectively.
Ken's lab, called the "AUTOLab," was welcoming to me when I first joined. The Monday evening lab meetings are structured so that different lab members present on research in progress while we all enjoy good food. Such meetings were one of the highlights of my weeks at Berkeley, as were the regular lab celebrations at his house. I also appreciate Ken's assistance in networking across the robotics research community at various conferences, which has helped me feel more involved in the research community and also became the source of my collaborations with Honda and Google throughout my PhD. Ken is very active in vouching for his students and, like John, is able to supply the compute we need to do compute-intensive robot learning research. Ken was also helpful in securing academic accommodations at Berkeley and at international robotics conferences. Much of my recent, and hopefully future, research is based on what I have learned from being in Ken's lab and interacting with his students.
To John and Ken, I know I was not the easiest student to advise, and I deeply appreciate their willingness to stick with me over all these years. I hope that in the end, I was able to show my own worth as a researcher. In academic circles, I am told that professors are sometimes judged based on what their students do, so I hope that I will be able to continue working on impactful research while confidently acting as a representative example for your academic descendants.
During my first week of work at Berkeley, I arrived to my desk in Soda Hall, and in the opposite corner of the shared office of six desks, I saw Biye Jiang hunched over his laptop working. We said "hi," but this turned out to be the start of a long-time friendship with Biye. It resonated with me when I told him that because of my deafness, I found it hard to communicate with others in a large group setting with lots of background noise, and he said he sometimes felt the same but for a different reason, as an international student from China. I would speak regularly with him for four years, discussing various topics over frequent lunches and dinners, ranging from research and then to other topics such as life in China. After he left to go to work for Alibaba in Beijing, China, he gave me a hand-written note saying: "Don't just work harder, but also live better! Enjoy your life! Good luck ^_^" I know I am probably failing at this, but it is on my agenda!
Another person I spoke to in my early days at Berkeley was Pablo Paredes, who was among the older (if not the oldest!) PhD students at Berkeley. He taught me how to manage as a beginning PhD student, and gave me psychological advice when I felt like I was hitting research roadblocks. Others who I spoke with from working with John include Haoyu Chen and Xinlei Pan, both of whom would play a major role in me getting my first paper under John's primary supervision, which I had the good fortune to present at UAI 2017 in Sydney, Australia. With Xinlei, I also got the opportunity to help him with his 2019 ICRA paper on robust reinforcement learning, and was honored to give the presentation for the paper in Montreal. My enthusiasm was somewhat tempered by how difficult it was for Xinlei to get visas to travel to other countries, and it was partly through his experience that I recognized how difficult it could be for an international student in the United States, and that I would try to make the situation easier for them. I am also honored that Haoyu later gave a referral for me to interview at Waymo.
In November of 2015, when I had hit a rough patch in my research and felt like I had let everyone down, Florian Pokorny and Jeff Mahler were the first two members of Ken Goldberg's lab that I got to speak to, and they helped me to get my first (Berkeley) paper, on learning-based approaches for robotics. Their collaboration became my route to robotics, and I am forever grateful that they were willing to work with me when it seemed like I might have little to offer. In Ken's lab, I would later get to talk with Animesh Garg, Sanjay Krishnan, Michael Laskey, and Steve McKinley. With Animesh and Steve, I only wish I could have joined the lab earlier so that I could have collaborated with them more often. Near the end of Animesh's time as a PhD student, he approached me after a lab meeting. He had read a blog post of mine and told me that I should have hung out with him more often — and I agree, I wish I had. I was honored when Animesh, now a rising star faculty member at the University of Toronto, invited me to apply for a postdoc with him. Once COVID-19 travel restrictions ease up, I promise that I will make the trip to Toronto to see Animesh, and similarly, to go to Sweden to see Florian.
Among those who I initially worked with in the AUTOLab, I want to particularly acknowledge Jeff Mahler's help with all things related to grasping; Jeff is one of the leading minds in robotic manipulation, and his Dex-Net project is one of the AUTOLab's most impactful projects, showing the benefit of a hybrid analytic and learned model in an age when so many have turned to pure learning. I look forward to seeing what his startup, Ambi Robotics, is able to do. I also acknowledge Sanjay's patience with me when I started working with the lab's surgical robot, the da Vinci Research Kit (dVRK). Sanjay was effectively operating like a faculty member at that time, and had deep knowledge of the literature in machine learning, robotics, and even databases (which was technically his original background and possibly his "official" research area, but as Ken said, "he's one of the few people who can do both databases and robotics"). His patience when I asked him questions was invaluable, and I often start research conversations by thinking about how Sanjay would approach the question. With Michael Laskey, I acknowledge his help in getting me started with the Human Support Robot and with imitation learning. The bed-making project that I took over with him would mark the start of a series of fruitful research papers on deformable object manipulation. Ah, those days of 2017 and 2018 were sweet, when Jeff, Michael, and Sanjay were all in the lab. Looking back, there were Fridays when our lab "happy hours" in Etcheverry Hall were what I most looked forward to. Rumor has it that we could get reimbursed by Ken for these purchases of corn chips, salsa, and beer, but I never bothered. I would be willing to pay far more to have these meetings happen again.
After Jeff, Michael, and Sanjay came the next generation of PhD students and postdocs. I enjoyed my conversations with Michael Danielczuk, who helped to continue much of the Dex-Net and YuMi-related projects after Jeff Mahler's graduation. I will also need to make sure I never stop running so that I can inch closer and closer to his half-marathon and marathon times. I also enjoyed my conversations about research with Carolyn Matl and Matthew Matl over various lab meetings and dinners. I admire Carolyn's research trajectory and her work on manipulating granular media and dough, I look forward to seeing Matthew's leadership at Ambi Robotics, and I hope we will have more Japanese burger dinners in the future.
With Roy Fox, we talked about some of the most interesting topics in generative modeling and imitation learning. There was a time in summer 2017 in our lab when the thing I looked forward to the most was a meeting with Roy to check that my code implementations were correct. Alas, we did not get a new paper from our ideas, but I still enjoyed the conversations, and I look forward to reading about his current and future accomplishments at UC Irvine. With our other postdoc from Israel, Ron Berenstein, I enjoyed our collaboration on the robotic bed-making project, which may have marked the turning point of my PhD experience, and I appreciate him reminding me that "your time is valuable" and that I should be wisely utilizing my time to work on important research.
Along with Roy and Ron, Ken continued to show his top ability in recruiting talented postdocs to his lab. Among those I was fortunate to meet were Ajay Kumar Tanwani, Jeff Ichnowski, and Minho Hwang. My collaboration with Ajay started with the robot bed-making project, and continued for our IROS 2020 and RSS 2020 fabric manipulation papers. Ajay has deep knowledge of recent advances in reinforcement learning and machine learning, and played key roles in helping me frame the messaging in our papers. Jeff is an expert kinematician who understands how to perform trajectory optimization for robots, and we desperately needed him to improve the performance of our physical robots. I thank Minho for his help getting the da Vinci Surgical Robot back in operation with better performance than ever before. He is certainly, as Ken Goldberg proudly announced multiple times, "the lab's secret weapon," as should be evident from the large number of papers the AUTOLab has produced in recent years with the dVRK. I wish him the best as a faculty member at DGIST. I thank him for the lovely Korean tea that he gave me after our farewell sushi dinner at Akemi's! I took a picture of the kind note Minho left me with the box of tea, so that, as with Biye's note, it is part of my permanent record. During the time these postdocs were in the lab, I also acknowledge Jingyi Xu from the Technical University of Munich in Germany, who spent a half-year as a visiting PhD student, for her enthusiasm and creativity with robot grasping research.
To Ashwin Balakrishna and Brijen Thananjeyan: I'm not sure why you two are PhD students. You are already at the level of faculty! If you ever want to discuss more ideas with me, please let me know. I will need to study how you operate to understand how to mentor a wide range of projects, as should be evident from the large number of AUTOLab undergraduates working with you. During the COVID-19 work-from-home period, it seemed as if one or both of you were part of all my AUTOLab meetings. I look forward to seeing your continued collaboration in safe reinforcement learning and similar topics, and maybe one day I will start picking up tennis so that running is not my only sport.
After I submitted the robot bed-making paper, I belatedly started mentoring new undergraduates in the AUTOLab. The first undergrad I worked with was Ryan Hoque, who had quickly singled me out as a potential graduate student mentor, while mentioning his interest in my blog (this is not an uncommon occurrence). He, and then later Aditya Ganapathi, were the first two undergraduates who I felt like I had mentored at least somewhat competently. I enjoyed working on and debugging the fabric simulator we developed, which would later form the basis of much of our subsequent work published at IROS, RSS, and ICRA. I am happy that Ryan has continued his studies as a PhD student in the AUTOLab, focusing on interactive imitation learning. Regarding the fabric-related work in the AUTOLab, I also thank the scientists at Honda Research Institute for collaborating with us: Nawid Jamali, Soshi Iba, and Katsu Yamane. I enjoyed our semi-regular meetings in Etcheverry Hall where we could go over research progress and brainstorm some of the most exciting ideas in developing a domestic home robot.
While all this was happening, I was still working with John Canny, and trying to figure out the right work balance with two advisors. Over the years, John would work with PhD students David Chan, Roshan Rao, Forrest Huang, Suhong Moon, Jinkyu Kim, and Philippe Laban, along with a talented Master's student, Chen (Allen) Tang. As befits someone like John, his students work on a wider range of research areas than is typical for a research lab. (There is no official name for John Canny's lab, so we decided to be creative and called it … "the CannyLab.") With Jinkyu and Suhong, I learned more about explainable AI and its application to autonomous driving, and on the non-science side, I learned more about South Korea. Philippe taught me about natural language processing and text summarization; his "NewsLens" project resonated with me, given the wide variety of news that I read these days, and I enjoyed the backstory of why he was originally motivated to work on it. David taught me about computer vision (video captioning), Roshan taught me about proteins, and Forrest taught me about sketching. Philippe, David, Roshan, and Forrest also helped me understand Google's shiny new neural network architecture, the Transformer, as well as closely-related architectures such as OpenAI's GPT models. I also acknowledge David's work getting the servers set up for the CannyLab, and his advice on building a computer. Allen Tang's master's thesis on how to accelerate deep reinforcement learning played a key role in my final research projects.
For my whole life, I had always wondered what it was like to intern at a company like Google, and have long watched in awe as Google churned out impressive AI research results. I had applied to Google twice earlier in my PhD, but was unable to land an internship. So, when the great Andy Zeng sent me a surprise email in late 2019, after my initial shock and disbelief wore off, I quickly responded with my interest in interning with him. After my research scientist internship under his supervision, I can confirm that the rumors are true: Andy Zeng is a fantastic intern host, and I highly recommend him. The internship in 2020 was virtual, unfortunately, but I still enjoyed the work and his frequent video calls helped to ensure that I stayed focused on producing solid research during my internship. I also appreciated the other Google researchers who I got to chat with throughout the internship: Pete Florence, Jonathan Tompson, Erwin Coumans, and Vikas Sindhwani. I have found that the general rule that others in the AUTOLab (I'm looking at you, Aditya Ganapathi) have told me is a good one to follow: "think of something, and if Pete Florence and Andy Zeng like it, it's good, and if they don't like it, don't work on it." Thank you very much for the collaboration!
The last two years of my PhD have felt like the most productive of my life. During this time, I was collaborating (virtually) with many AUTOLab members. In addition to those mentioned earlier, I want to acknowledge undergraduate Haolun (Harry) Zhang for his work on dynamic cable manipulation, leading to the accurately-named paper Robots of the Lost Arc. I look forward to seeing Harry's continued achievements at Carnegie Mellon University. I was also fortunate to collaborate more closely with Huang (Raven) Huang, Vincent Lim, and many other talented newer students in Ken Goldberg's lab. Raven seems like a senior PhD student instead of one just starting out, and Vincent is far more skilled than I could have imagined from a beginning undergraduate. Both have strong work ethics, and I hope that our collaboration will one day lead to robots performing reliable lassoing and tossing. In addition, I also enjoyed my conversations with the newer postdocs in the AUTOLab, Daniel Brown and Ellen Novoseller, from whom I have learned a lot about inverse reinforcement learning and preference learning. Incoming PhD student Justin Kerr also played an enormous role in helping me work with the YuMi in my final days in the AUTOLab.
I also want to acknowledge the two undergraduates from John Canny's lab who I collaborated with the most, Mandi Zhao and Abhinav Gopal. Given the intense pressure of balancing coursework and other commitments, I am impressed that they were willing to stick with me while we finalized our work with John Canny. With Mandi, I hope we can continue discussing research ideas and US-China relations over WeChat, and with Abhinav, I hope we can pursue more research ideas in offline reinforcement learning.
Besides those who directly worked with me, my experience at Berkeley was enriched by the various people from other labs who I got to interact with somewhat regularly. Largely through Biye, I got to know a fair number of Chinese international students, among them Hezheng Yin, Xuaner (Cecilia) Zhang, Qijing (Jenny) Huang, and Isla Yang. I enjoyed our conversations over dinners and I hope they enjoyed my cooking of salmon and panna cotta. I look forward to the next chapter in all of our lives. It's largely because of my interactions with them that I decided I would do my best to learn more about anything related to China, which explains book after book on my iBooks app.
My education at Berkeley benefited a great deal from what other faculty taught me during courses, research meetings, and otherwise. I was fortunate to take classes from Pieter Abbeel, Anca Dragan, Daniel Klein, Jitendra Malik, Will Fithian, Benjamin Recht, and Michael I. Jordan. I also took the initial iteration of Deep Reinforcement Learning (RL), back when John Schulman taught it, and I thank John for kindly responding to questions I had regarding Deep RL. Among these professors, I would like to particularly acknowledge Pieter Abbeel, who has regularly served as inspiration for my research, and somehow remembers me and seems to have the time to reply to my emails even though I am not a student of his nor a direct collaborator. His online lecture notes and videos in robotics and unsupervised learning are among those that I have consulted the most.
In addition to my two formal PhD advisors, I thank Sergey Levine and Masayoshi Tomizuka for serving on my qualifying exam committee. The days leading up to that event were among the most stressful I had experienced in my life, and I thank them for taking the time to listen to my research proposal. I also enjoyed learning more about deep reinforcement learning through Sergey Levine's course and online lectures.
I also owe a great deal to the administrators at UC Berkeley. The ones who helped me the most, especially during the two times when I felt like I had hit rock bottom (in late 2015 and early 2018), were able to offer guidance and do what they could to help me stay on track to finish my PhD. I don't know all the details about what they did behind the scenes, but thank you, to Shirley Salanio, Audrey Sillers, Angie Abbatecola, and the newer administrators to BAIR. Like Angie, I am an old-timer of BAIR. I was even there when it was called the Berkeley Vision and Learning Center (BVLC), before we properly re-branded the organization to become Berkeley Artificial Intelligence Research (BAIR). I also thank them for their help in getting the BAIR Blog up and running.
My research was supported initially by a university fellowship, and then later by a six-year fellowship from Graduate Fellowships for STEM Diversity (GFSD), formerly known as the National Physical Science Consortium (NPSC). At the time I received the fellowship, I was in the middle of feeling stuck on several research projects. I don't know precisely why they granted me the fellowship, but whatever their reasons, I am eternally grateful for the decision they made. One of the more unusual conditions of the GFSD fellowship is that recipients are to intern at the sponsoring agency, which for me was the National Security Agency (NSA). I went there for one summer in Laurel, Maryland, and got a partial peek past the curtain of the NSA. By design, the NSA is one of the most secretive United States government agencies, which makes it difficult for people to acknowledge the work they do. Being there allowed me to understand and appreciate the signals intelligence work that the NSA does on behalf of the United States. Out of my NSA contacts, I would like to particularly mention Grant Wagner and Arthur Drisko.
While I was initially apprehensive about Berkeley, I have come to appreciate some of the best it has to offer. I will be thankful for the many cafes I spent time in around the city, along with the frequent running routes both on the streets and in the hills. I only wish that other areas of the country offered this many food and running options.
Alas, all things must come to an end. While my PhD itself is coming to a close, I look forward to working with my future supervisor, David Held, in my next position at Carnegie Mellon University. I also thank the other faculty who, while I was searching for a postdoc, took time out of their insanely busy schedules to engage with me and to offer research advice: Shuran Song of Columbia, Jeannette Bohg of Stanford, and Alberto Rodriguez of MIT. I am forever in awe of their research contributions, and I hope that I will be able to achieve a fraction of what they have done in their careers.
In a past life, I was an undergraduate at Williams College in rural Massachusetts, which boasts an undergraduate student body of about 2000 students. When I arrived at campus on that fall day in 2010, I was clueless about computer science and how research worked in general. Looking back, Williams must have done a better job preparing me for the PhD than I expected. Among the professors there, I owe perhaps the most to my undergraduate thesis advisor, Andrea Danyluk, as well as the other Williams CS faculty who taught me at that time: Brent Heeringa, Morgan McGuire, Jeannie Albrecht, Duane Bailey, and Stephen Freund. I will do my best to represent our department in the research world, and I hope that the professors are happy with how my graduate trajectory has turned out. One day, I shall return in person to give a research talk, and will be able to (in the words of Duane Bailey) show off my shiny new degree. I also majored in math, and I similarly learned a tremendous amount from my first math professor, Colin Adams, who emailed me right after my final exam urging me to major in math. I also appreciate other professors who have left a lasting impression on me: Steven Miller, Mihai Stoiciu, Richard De Veaux, and Qing (Wendy) Wang. I appreciate their patience during my frequent visits to their office hours.
During my undergraduate years, I was extremely fortunate to benefit from two Research Experiences for Undergraduates (REUs), the first at Bard College with Rebecca Thomas and Sven Andersen, and the second at the University of North Carolina at Greensboro, with Francine Blanchet-Sadri. I thank the professors for offering to work with me. As with the Williams professors, I don't think any of my REU advisors had anticipated that they would be helping to train a future roboticist. I hope they enjoyed working with me just as much as I enjoyed working with them. To everyone from those REUs, I am still thinking of all of you and wish you luck wherever you are.
I owe a great debt to Richard Ladner of the University of Washington, who helped me break into computer science. He and Rob Roth used to run a program called the "Summer Academy for Advancing Deaf and Hard of Hearing in Computing." I attended one of the iterations of this program, and it exposed me to what it might be like to be a graduate student. Near the end of the program, I spoke with Richard one-on-one, and asked him detailed questions about what he thought of my applying to PhD programs. I remember him expressing enthusiasm, but also some reservation: "do you know how hard it is to get into a top PhD program?" he cautioned me. I thanked him for taking the time out of his busy schedule to give me advice. In the years that followed, I always remembered to work hard in the hopes of earning a PhD. (The next time I visited the University of Washington, years later, I raced to Richard Ladner's office the minute I could.) Also, as a fun little history note, it was while I was there that I decided to start my (semi-famous?) personal blog, which seemingly everyone in Berkeley's EECS department has seen, in large part because I felt like I needed to write about computer science in order to understand it better. I still feel that way today, and I hope I can continue writing.
Finally, I would like to thank my family for helping me persevere throughout the PhD. It is impossible for me to adequately put in words how much they helped me survive. My frequent video calls with family members helped me to stay positive during the most stressful days of my PhD, and they have always been interested in the work that I do and anything else I might want to talk about. Thank you.
Reframing Reinforcement Learning as Sequence Modeling with Transformers?
The Transformer, developed by Google and presented in a NeurIPS 2017 paper, is one of the few architectures that can truly claim to have fundamentally transformed (pun intended) the field of Artificial Intelligence. Transformer networks have become the foundation of some of the most dramatic performance advances in Natural Language Processing (NLP). Two prominent examples are Google's BERT model, which uses a bidirectional Transformer, and OpenAI's line of GPT models, which use a unidirectional Transformer. Both have substantially helped out their respective companies' bottom lines: BERT has boosted Google's search capabilities to new levels, and OpenAI uses GPT-3 for automatic text generation in their first commercial product.
For a solid understanding of Transformer networks, it is probably best to read the original paper and try out sample code. However, the Transformer paper has also spawned a seemingly endless series of blog posts and tutorial articles, which can be solid references (though with high variance in quality). Two of my favorite posts are from the well-known bloggers Jay Alammar and Lilian Weng, who serve as inspirations for my current blogging habits. Of course, I am also guilty of jumping on this bandwagon, since I wrote a blog post on Transformers a few years ago.
Transformers have changed the trajectory of NLP and other fields such as protein modeling (e.g., the MSA transformer) and computer vision. OpenAI has an ICML 2020 paper which introduces Image-GPT, and the name alone should be self-explanatory. But, what about the research area I focus on these days, robot learning? It seems like Transformers have had less impact in this area. To be clear, researchers have already tried to replace existing neural networks used in RL with Transformers, but this does not fundamentally change the nature of the problem, which is consistently framed as a Markov Decision Process where states follow the Markovian property of being a function of only the prior state and action.
That might now change. Earlier this month, two groups in BAIR released arXiv preprints that use Transformers for RL, and which do away with MDPs and treat RL as one big sequence modeling problem. They propose models called Decision Transformer and Trajectory Transformer. These have not yet been peer-reviewed, but judging from the format, it's likely that both are under review for NeurIPS. Let's dive into the papers, shall we?
Decision Transformer
This paper introduces the Decision Transformer, which takes a particular trajectory representation as input, and outputs action predictions at training time, or the actual actions at test time (i.e., evaluation).
First, how is a trajectory represented? In RL, a trajectory is typically a sequence of states, actions, and rewards. In this paper, however, they consider the return-to-go:
\[\hat{R}_t = \sum_{t'=t}^{T} r_{t'}\]
resulting in the full trajectory representation of:
\[\tau = (\hat{R}_1, s_1, a_1, \hat{R}_2, s_2, a_2, \ldots, \hat{R}_T, s_T, a_T)\]
This already raises the question of why this representation is chosen. The reason is that at test time, the Decision Transformer must be paired with a desired performance, expressed as a cumulative episodic return. Given that as input, after each time step the agent gets the per-time-step reward from the environment emulator and decreases the desired performance by that amount. Then, this revised desired performance value is passed again as input, and the process repeats. The immediate question I had after this was whether it would be possible to predict the return-to-go accurately, and whether the Decision Transformer could extrapolate beyond the best return-to-go in the training data. Spoiler alert: the paper reports experiments with this, finding a strong correlation between predicted and actual return, and it is possible to extrapolate beyond the best return in the data, but only by a little bit. That's fair; it would be unrealistic to assume it could reach any return-to-go feasible in the environment emulator.
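To make this test-time conditioning loop concrete, here is a minimal Python sketch of the procedure described above. The model and env objects are hypothetical stand-ins (this is my own sketch, not the authors' code), but the key idea is visible: the desired return shrinks by each observed reward and is fed back in as conditioning.

def evaluate_decision_transformer(model, env, target_return, K=30, max_steps=1000):
    # model.predict_action is a hypothetical interface: given the K most
    # recent (return-to-go, state, action) entries, return the next action.
    state = env.reset()
    returns_to_go, states, actions = [target_return], [state], []
    total_reward = 0.0
    for _ in range(max_steps):
        action = model.predict_action(returns_to_go[-K:], states[-K:], actions[-K:])
        state, reward, done, _ = env.step(action)
        actions.append(action)
        states.append(state)
        # The desired performance is decremented by the reward just received.
        returns_to_go.append(returns_to_go[-1] - reward)
        total_reward += reward
        if done:
            break
    return total_reward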
The input to the Decision Transformer is a subset of the trajectory $\tau$ consisting of the $K$ most recent time steps, each of which consists of a tuple with three items as noted above (the return-to-go, state, and action). Note how this differs from a DQN-style method, which for each time step, takes in 4 stacked game frames but does not take in rewards or prior actions as input. Furthermore, in this paper, Decision Transformers use values such as $K=30$, so they consider a longer history.
At training time, the Decision Transformer only needs to predict actions, so it can be trained with the usual cross-entropy or mean squared error loss functions, depending on whether the actions are discrete or continuous.
Now, what is the architecture for predicting or generating actions? Decision Transformers use GPT, an auto-regressive model, meaning it handles probabilities of the form $p(x_t | x_{t-1}, \ldots, x_1)$, where the prediction at the current time is conditioned on all prior data. GPT uses this to generate (that's what the "G" stands for) by sampling the $x_t$ term. In my notation, imagine that each of the $x_i$ terms represents a data tuple of (return-to-go, state, action) – that's what the GPT model deals with, and it produces the next predicted tuple. Well, technically it only needs to predict the action, but I wonder if state prediction could be useful? From communicating with the authors, they didn't get much performance benefit from predicting states, but it is doable.
There are also various embedding layers applied on the input before it is passed to the GPT model. I highly recommend looking at Algorithm 1 in the paper, which lays this out in nicely written pseudocode. The Appendix also clarifies the code bases that they build upon, and both are publicly available. Andrej Karpathy's minGPT code looks nice and is self-contained.
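For readers who want something slightly more concrete than prose, below is a rough PyTorch sketch in the spirit of the paper's Algorithm 1. It is my own simplification, not the authors' code: I use a generic causal TransformerEncoder as a stand-in for GPT, assume continuous actions, and omit details such as input normalization and action squashing.

import torch
import torch.nn as nn

class MiniDecisionTransformer(nn.Module):
    """Sketch: embed (return-to-go, state, action) per time step, add a
    timestep embedding, interleave the tokens, run a causal Transformer,
    and predict actions from the state-token positions."""
    def __init__(self, state_dim, act_dim, h=128, max_T=1000):
        super().__init__()
        self.embed_rtg = nn.Linear(1, h)
        self.embed_state = nn.Linear(state_dim, h)
        self.embed_action = nn.Linear(act_dim, h)
        self.embed_time = nn.Embedding(max_T, h)
        layer = nn.TransformerEncoderLayer(d_model=h, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.predict_action = nn.Linear(h, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, K, 1), states: (B, K, state_dim), actions: (B, K, act_dim),
        # timesteps: (B, K) integer indices into the episode.
        B, K, _ = states.shape
        t_emb = self.embed_time(timesteps)
        tokens = torch.stack(
            [self.embed_rtg(rtg) + t_emb,
             self.embed_state(states) + t_emb,
             self.embed_action(actions) + t_emb], dim=2).reshape(B, 3 * K, -1)
        # Causal mask so each token only attends to earlier tokens.
        mask = torch.triu(torch.full((3 * K, 3 * K), float("-inf")), diagonal=1)
        out = self.encoder(tokens, mask=mask.to(tokens.device))
        # Hidden states at the state-token positions predict the next action.
        return self.predict_action(out[:, 1::3])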
That's it! Notice how the Decision Transformer does not do bootstrapping to estimate value functions.
The paper evaluates on a suite of offline RL tasks, using environments from Atari (discrete control), from D4RL (continuous control), and from a "Key-to-Door" task. Fortunately for me, I had recently done a lot of reading on offline RL, and I even wrote a survey-style blog post about it a few months ago. The Decision Transformer is not specialized towards offline RL. It just happens to be the problem setting the paper considers, because not only is it very important, it is also a nice fit in that (again) the Decision Transformer does not perform bootstrapping, which is known to cause diverging Q-values in many offline RL contexts.
The results suggest that Decision Transformer is on par with state-of-the-art offline RL algorithms. It is a little worse on Atari, and a little better on D4RL. It seems to do a lot better on the Key-to-Door task but I'm not sufficiently familiar with that benchmark. However, since the paper is proposing an approach fundamentally different from most RL methods, it is impressive to get similar performance. I expect that future researchers will build upon the Decision Transformer to improve its results.
Trajectory Transformer
Now let us consider the second paper, which introduces the Trajectory Transformer. As with the prior paper, it departs from the usual MDP assumptions, and it also does not require dynamic programming or bootstrapped estimates. Instead, it directly uses properties from the Transformer to encode all the ingredients it needs for a wide range of control and decision-making problems. As it borrows techniques from language modeling, the paper argues that the main technical innovation is understanding how to represent a trajectory. Here, the trajectories $\tau$ are represented as:
\[\tau = \{ \mathbf{s}_t^0, \mathbf{s}_t^{1}, \ldots, \mathbf{s}_t^{N-1}, \mathbf{a}_t^0, \mathbf{a}_t^{1}, \ldots, \mathbf{a}_t^{M-1}, r_t \}_{t=0}^{T-1}\]
My first reaction was that this looks different from the trajectory representation for Decision Transformers. There's no return-to-go written here, but this is a little misleading. The Trajectory Transformer paper tests three decision-making settings: (1) imitation learning, (2) goal-conditioned RL, and (3) offline RL. The Decision Transformer paper focuses on applying the framework to offline RL only. For offline RL, the Trajectory Transformer actually uses the return-to-go as an extra component in each data tuple in $\tau$. So I don't believe there is any fundamental difference in terms of the trajectory consisting of states, actions, and return-to-go. The Trajectory Transformer does also take in the current scalar reward $r_t$ as input, and it appears to use a discount factor in the return-to-go, but both differences seem minor.
Perhaps a more fundamental difference is with discretization. The Decision Transformer paper doesn't mention discretization, and from contacting the authors, I confirmed they did not discretize. So for continuous states and actions, the Decision Transformer likely just represents them as vectors in $\mathbb{R}^d$ for some suitable $d$ representing the state or action dimension. In contrast, Trajectory Transformers use discretized states and actions as input, and the paper helpfully explains how the indexing and offsets work. While this may be inefficient, the paper states, it allows them to use a more expressive model. My intuition for this phrase comes from histograms — in theory, histograms can represent arbitrarily complex 1D data distributions, whereas a 1D Gaussian must have a specific "bell-shaped" structure.
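As an illustration of what per-dimension discretization with offsets might look like, here is my own toy version, with made-up bounds and bin counts rather than the paper's exact scheme:

import numpy as np

def tokenize(x, low, high, bins=100, offset=0):
    # Map each continuous dimension to one of `bins` integers, then shift each
    # dimension into its own slice of the vocabulary so tokens never collide.
    idx = np.floor((x - low) / (high - low) * bins).astype(int)
    idx = np.clip(idx, 0, bins - 1)
    return offset + np.arange(len(x)) * bins + idx

state = np.array([0.3, -1.2, 0.7])
state_tokens = tokenize(state, low=-2.0, high=2.0)                # tokens in [0, 300)
action = np.array([0.5])
action_tokens = tokenize(action, low=-1.0, high=1.0, offset=300)  # tokens in [300, 400)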
As with the Decision Transformer, the Trajectory Transformer uses a GPT as its backbone, and is trained to optimize log probabilities of states, actions, and rewards, conditioned on prior information in the trajectory. This enables test-time prediction by sampling from the trained model using what is known as beam search. This is another core difference between the Trajectory Transformer and the Decision Transformer: the former uses beam search while the latter does not, and that's probably because, with discretization, it may be easier to do multimodal reasoning.
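Here is a generic beam search sketch to illustrate the mechanism; it is not the paper's implementation, which scores token sequences with the trained Trajectory Transformer (and, for offline RL, augments scores with reward terms). The log_prob_fn is a hypothetical model call.

import numpy as np

def beam_search(log_prob_fn, start_tokens, vocab_size, horizon, beam_width=4):
    # Keep the `beam_width` highest-scoring token sequences, extending each
    # beam by every candidate token at every step.
    beams = [(list(start_tokens), 0.0)]
    for _ in range(horizon):
        candidates = []
        for seq, score in beams:
            logp = log_prob_fn(seq)  # shape: (vocab_size,)
            for tok in range(vocab_size):
                candidates.append((seq + [tok], score + logp[tok]))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]

# Toy usage with a uniform "model" that ignores the context.
best = beam_search(lambda seq: np.log(np.ones(5) / 5), [0], vocab_size=5, horizon=3)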
For quantitative results, they again test on D4RL for offline RL experiments. The results suggest that Trajectory Transformers are competitive with prior state-of-the-art offline RL algorithms. Again, as with Decision Transformers, the results aren't significant improvements, but the fact that they're able to get to this performance for the first iteration of this approach is impressive in its own right. They also show a nice qualitative visualization where their Trajectory Transformer can produce a long sequence of predicted trajectories of a humanoid, whereas a popular state-of-the-art model-based RL algorithm known as PETS makes significantly worse predictions.
The project website succinctly summarizes the comparisons between Trajectory Transformer and Decision Transformer as follows:
Chen et al concurrently proposed another sequence modeling approach to reinforcement learning. At a high-level, ours is more model-based in spirit and theirs is more model-free, which allows us to evaluate Transformers as long-horizon dynamics models (e.g., in the humanoid predictions above) and allows them to evaluate their policies in image-based environments (e.g., Atari). We encourage you to check out their work as well.
To be clear, the idea that Trajectory Transformer is model-based and that Decision Transformer is model-free is partly because the former predicts states, whereas the latter only predicts actions.
Both papers show that we can treat RL as a sequence learning problem, where Transformers can take in a long sequence of data and predict something. The two approaches can get around the "deadly triad" in RL since bootstrapping value estimates is not necessary. The use of Transformers enables building upon an extensive literature for Transformers in other fields — and it's very extensive, even though Transformers are only 4 years old (the original paper has an absurd 22955 Google Scholar citations as of today)! The models use the same fundamental backbone, and I wonder if there are ways to merge the approaches. Would beam search, for example, be helpful in Decision Transformers, and would conditioning on return-to-go be helpful for Trajectory Transformers?
To reiterate, the results are not "out of this world" compared to current state-of-the-art RL using MDPs, but as a first step, they look impressive. Moreover, I am guessing that the research teams are busy extending the capabilities of these models. These two papers have very high impact potential. Assuming the research community is able to improve upon these models, this approach may even become the standard treatment for RL. I am excited to see what will come.
My PhD Dissertation Talk
The long wait is over. After many years, I am excited to share that I delivered my PhD dissertation talk. I gave it on May 13, 2021 via Zoom. I recorded the 45-minute talk and you can find the video above.
I had multiple opportunities to practice the PhD talk, as I gave several earlier talks with a substantial amount of overlap, such as the one "at" Toronto in March (see the blog post here). My PhD talk, like prior talks, heavily focuses on robot manipulation of deformables, and includes discussions of my IROS 2020, RSS 2020, and ICRA 2021 papers. However, I wanted the focus to be broader than deformable manipulation alone, so I structured the talk to feature "robot learning" prominently, of which "deformable manipulation" is one particular example. Then, rather than go through the "Model-Free," "Model-Based," and "Transporter Network" sections from my prior talks, I chose to title the talk sections as follows: "Simulated Interactions," "Architectural Priors," and "Curricula." This also gave me the chance to feature some of my curriculum learning work with John Canny.
The audience had some questions at the end, but overall, the questions were generally not too difficult to answer. Perhaps in years past, it was typical to have very challenging questions at the end of a dissertation talk, and students may have failed if they couldn't answer well enough. Nowadays, every Berkeley EECS PhD student who gives a dissertation talk is expected to pass. I'm not aware of anyone failing after giving the talk.
I want to thank everyone who helped me get to this point today, especially when earlier in my PhD, I thought I would never reach this point. Or at the very least, I thought I would not have as strong a research record as I now have. A proper and more detailed set of acknowledgments will come at a later date.
I am not a "Doctor" yet, since I still need to write up the actual dissertation itself, which I will do this summer by "stitching" together my 4-5 most relevant first-author papers. Nonetheless, giving this talk is a huge step forward in finishing up my PhD, and I am hugely relieved that it's out of the way.
I will also be starting a postdoc position in a few months. More on that to come later …
Inverse Reinforcement Learning from Preferences
It's been a long time since I engaged in a detailed read-through of an inverse reinforcement learning (IRL) paper. The idea is that, rather than the standard reinforcement learning problem where an agent explores to get samples and finds a policy to maximize the expected sum of discounted rewards, we are instead given data already, and must determine the reward function. After this reward function is learned, one can then learn a new policy based on it by running standard reinforcement learning, but where the reward for each state (or state-action) is determined from the learned reward function. As a side note, since this appears to be quite common and "part of" IRL, I'm not sure why IRL is often classified as an "imitation learning" algorithm when reinforcement learning has to be run as a subroutine. Keep this in mind when reading papers on imitation learning, which often categorize algorithms as supervised learning (e.g., behavioral cloning) approaches vs IRL approaches, such as in the introduction of the famous Generative Adversarial Imitation Learning paper.
In the rest of this post, we'll cover two closely-related works on IRL that cleverly and effectively rely on preference rankings among trajectories. They also have similar acronyms: T-REX and D-REX. The T-REX paper presents the Trajectory-ranked Reward Extrapolation algorithm, which is also used in the D-REX paper (Disturbance-based Reward Extrapolation). So we shall first discuss how reward extrapolation works in T-REX, and then we will clarify the difference between the two papers.
T-REX and D-REX
The motivation for T-REX is that in IRL, most approaches rely on defining a reward function which explains the demonstrator data and makes it appear optimal. But, what if we have suboptimal demonstrator data? Then, rather than fit a reward function to this data, it may be better to instead figure out the appropriate features of the data that convey information about the underlying intentions of the demonstrator, which may be extrapolated beyond the data. T-REX does this by working with a set of demonstrations which are ranked.
To be concrete, denote a sequence of $m$ ranked trajectories:
\[\mathcal{D} = \{ \tau_1, \ldots, \tau_m \}\]
where if $i<j$, then $\tau_i \prec \tau_j$, or in other words, trajectory $\tau_i$ is worse than $\tau_j$. We'll assume that each $\tau_i$ consists of a series of states, so that neither demonstrator actions nor the reward are needed (a huge plus!):
\[\tau_i = (s_0^{(i)}, s_1^{(i)}, \ldots, s_T^{(i)})\]
and we can also assume that the trajectory lengths are all the same, though this isn't a strict requirement of T-REX (since we can normalize based on length) but probably makes it more numerically stable.
From this data $\mathcal{D}$, T-REX will train a learned reward function $\hat{R}_\theta(s)$ such that:
\[\sum_{s \in \tau_i} \hat{R}_\theta(s) < \sum_{s \in \tau_j} \hat{R}_\theta(s) \quad \mbox{if} \quad \tau_i \prec \tau_j\]
To be clear, in the above equation there is no true environment reward at all. It's just the learned reward function $\hat{R}_\theta$, along with the trajectory rankings. That's it! One may, of course, use the true reward function to determine the rankings in the first place, but that is not required, and that's a key flexibility advantage for T-REX – there are many other ways we can rank trajectories.
In order to train $\hat{R}_\theta$ so the above criterion is satisfied, we can use the cross-entropy loss function. Most people probably first use the cross-entropy loss in the context of classification tasks, where the neural network outputs some "logits" and the loss function tries to "get" the logits to match a true one-hot vector distribution. In this case, the logic is similar. The output of the reward network forms the (un-normalized) probability that one trajectory is preferable to another:
\[P(\hat{J}_\theta(\tau_i) < \hat{J}_\theta(\tau_j)) \approx \frac{\exp \sum_{s \in \tau_j} \hat{R}_\theta(s) }{ \exp \sum_{s \in \tau_i}\hat{R}_\theta(s) + \exp \sum_{s \in \tau_j}\hat{R}_\theta(s) }\]
which we then use in this loss function:
\[\mathcal{L}(\theta) = - \sum_{\tau_i \prec \tau_j } \log \left( \frac{\exp \sum_{s \in \tau_j} \hat{R}_\theta(s) }{\exp \sum_{s \in \tau_i} \hat{R}_\theta(s)+ \exp \sum_{s \in \tau_j}\hat{R}_\theta(s) } \right)\]
Let's deconstruct what we're looking at here. The loss function $\mathcal{L}(\theta)$ for training $\hat{R}_\theta$ is binary cross entropy, where the two "classes" involved here are whether $\tau_i \succ \tau_j$ or $\tau_i \prec \tau_j$. (We can easily extend this to include cases when the two are equal, but let's ignore for now.) Above, the true class corresponds to $\tau_i \prec \tau_j$.
If this isn't clear, then reviewing the cross entropy (e.g., from this source), we see that between a true distribution "$p$" and a predicted distribution "$q$", it is defined as $-\sum_x p(x) \log q(x)$, where the sum over $x$ iterates through all possible classes – in this case we only have two classes. The true distribution is $p=[0,1]$ if we interpret the two components as expressing the class $\tau_i \succ \tau_j$ at index 0, or $\tau_i \prec \tau_j$ at index 1. In all cases, the true "class" is assigned to index 1 by design. The predicted distribution comes from the output of the reward function network:
\[q = \Big[1 - P(\hat{J}_\theta(\tau_i) < \hat{J}_\theta(\tau_j)), \; P(\hat{J}_\theta(\tau_i) < \hat{J}_\theta(\tau_j)) \Big]\]
and putting this together, the cross entropy term reduces to $\mathcal{L}(\theta)$ as shown above, for a single training data point (i.e., a single training pair $(\tau_i, \tau_j)$). We would then sample many of these pairs during training for each minibatch.
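Here is a minimal PyTorch sketch of this pairwise loss, under my own assumptions about shapes (each trajectory is a tensor of states with shape (T, state_dim)); it is not the authors' code, but it implements the same cross-entropy objective:

import torch
import torch.nn as nn

class RewardNet(nn.Module):
    # Minimal reward model: maps a state to a scalar reward.
    def __init__(self, state_dim, h=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, h), nn.ReLU(), nn.Linear(h, 1))

    def traj_return(self, states):          # states: (T, state_dim)
        return self.net(states).sum()       # predicted return of a trajectory

def trex_pair_loss(reward_net, tau_i, tau_j):
    # tau_i is ranked worse than tau_j, so the "true class" is index 1.
    logits = torch.stack([reward_net.traj_return(tau_i),
                          reward_net.traj_return(tau_j)]).unsqueeze(0)  # (1, 2)
    labels = torch.tensor([1])
    return nn.functional.cross_entropy(logits, labels)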
To handle cases when the ranking between two trajectories is ambiguous, one can set the "target" distribution to be $[0.5, 0.5]$. This is made explicit in this NeurIPS 2018 paper from DeepMind, which uses the same loss function.
The main takeaway is that this process will learn a reward function assigning greater total return to higher ranked trajectories. As long as there are features associated with higher return that are identifiable from the data, then it may be possible to extrapolate beyond the data.
Once the reward function is learned, T-REX then runs policy optimization with reinforcement learning, which in both papers here is Proximal Policy Optimization. This is done in an online fashion, but instead of data coming in as $(s,a,r,s')$ tuples, they will be $(s,a,\hat{R}_\theta(s),s')$, where the reward comes from the learned reward function.
This makes sense, but as usual, there are a bunch of practical tips and tricks to get things working. Here are some for T-REX:
For many environments, "trajectories" often refer to "episodes", but these can last for a large number of time steps. To perform data augmentation, one can subsample trajectories of the same length among pairs of trajectories $\tau_i$ and $\tau_j$.
Training an ensemble of reward functions for $\hat{R}_\theta$ often helps, provided the individual components have values at roughly the same scale.
The reward used for the policy optimization stage might need some extra "massaging" to it. For example, with MuJoCo, the authors use a control penalty term that gets added to $\hat{R}_\theta(s)$.
To check if reward extrapolation is feasible, one can plot a graph that shows ground-truth returns on the x-axis and predicted returns on the y-axis. If there is strong correlation between the two, then that's a sign extrapolation is more likely to happen.
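As a small illustration of the subsampling trick mentioned in the first tip above (my own sketch, with a made-up snippet length, not the papers' exact sampling rules):

import numpy as np

def sample_snippet_pair(tau_i, tau_j, length=50, rng=np.random):
    # Draw equal-length snippets from a worse trajectory tau_i and a better
    # trajectory tau_j, and reuse the full-trajectory ranking as the label.
    si = rng.randint(0, len(tau_i) - length + 1)
    sj = rng.randint(0, len(tau_j) - length + 1)
    return tau_i[si:si + length], tau_j[sj:sj + length]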
In both T-REX and D-REX, the authors experiment with discrete control and continuous control using standard environments from Atari and MuJoCo, respectively, and find that, overall, their two-stage approach of (1) finding $\hat{R}_\theta$ from preferences and (2) running PPO on top of this learned reward function works better than competing baselines such as Behavior Cloning and Generative Adversarial Imitation Learning, and that they can exceed the performance of the demonstration data.
The above is common to both T-REX and D-REX. So what's the difference between the two papers?
T-REX assumes that we have rankings available ahead of time. These can come from a number of sources. Maybe they were "ground truth" rankings based on ground truth rewards (i.e., just sum up the true reward within the $\tau_i$s), or they might be noisy rankings. An easy way to test noisy rankings is to rank trajectories based on when they occurred in an RL agent's training history, if we extract trajectories from that history. Another, but more cumbersome, way (since it relies on human subjects) is to use Amazon Mechanical Turk. The T-REX paper does a splendid job testing these different rankings – it's one reason I really like the paper.
In contrast, D-REX assumes these rankings are not available ahead of time. Instead, the approach involves training a policy from the provided demonstration data via Behavior Cloning, then taking that resulting snapshot and rolling it out in the environment with different noise levels. This naturally provides a ranking for the data, and only relies on the weak assumption that the Behavior Cloning agent will be better than a purely random policy. Then with these automatic rankings, D-REX can just do exactly what T-REX did!
D-REX makes a second contribution on the theoretical side to better understand why preferences over demonstrations can reduce reward function ambiguity in IRL.
Some Theory in D-REX
Here's a little more on the theory from D-REX. We'll follow the notation from the paper and state Theorem 1 here (see the paper for context):
If the estimated reward function is $\;\hat{R}(s) = w^T\phi(s),\;$ the true reward function is \(\;R^*(s) = \hat{R}(s) + \epsilon(s)\;\) for some error function \(\;\epsilon : \mathcal{S} \to \mathbb{R}\;\) and \(\;\|w\|_1 \le 1,\;\) then extrapolation beyond the demonstrator, i.e., \(\; J(\hat{\pi}|R^*) > J(\mathcal{D}|R^*),\;\) is guaranteed if:
\[J(\pi_{R^*}^*|R^*) - J(\mathcal{D}|R^*) > \epsilon_\Phi + \frac{2\|\epsilon\|_\infty}{1 - \gamma}\]
where \(\;\pi_{R^*}^* \;\) is the optimal policy under $R^*$, \(\;\epsilon_\Phi = \| \Phi_{\pi_{R^*}^*} - \Phi_{\hat{\pi}}\|_\infty,\;\) and \(\|\epsilon\|_\infty = {\rm sup}\{ | \epsilon(s)| : s \in \mathcal{S} \}\).
To clarify the theorem, $\hat{\pi}$ is some learned policy for which we want to outperform the average episodic return of the demonstration data, $J(\mathcal{D}|R^*)$. We begin by considering the difference in return between the optimal policy under the true reward (which can't be exceeded w.r.t. that reward, by definition) and the expected return of the learned policy (also under that true reward):
\[\begin{align} J(\pi_{R^*}^*|R^*) - J(\hat{\pi}|R^*) \;&{\overset{(i)}=}\;\; \left| \mathbb{E}_{\pi_{R^*}^*} \Big[ \sum_{t=0}^\infty \gamma^t R^*(s) \Big] - \mathbb{E}_{\hat{\pi}} \Big[ \sum_{t=0}^\infty \gamma^t R^*(s) \Big] \right| \\ \;&{\overset{(ii)}=}\;\; \left| \mathbb{E}_{\pi_{R^*}^*} \Big[ \sum_{t=0}^\infty \gamma^t (w^T\phi(s_t)+\epsilon(s_t)) \Big] - \mathbb{E}_{\hat{\pi}} \Big[ \sum_{t=0}^\infty \gamma^t (w^T\phi(s_t)+\epsilon(s_t)) \Big] \right| \\ \;&{\overset{(iii)}=}\; \left| w^T\Phi_{\pi_{R^*}^*} + \mathbb{E}_{\pi_{R^*}^*} \Big[ \sum_{t=0}^\infty \gamma^t \epsilon(s_t) \Big] - w^T\Phi_{\hat{\pi}} - \mathbb{E}_{\hat{\pi}} \Big[ \sum_{t=0}^\infty \gamma^t \epsilon(s_t) \Big] \right| \\ \;&{\overset{(iv)}\le}\;\; \left| w^T(\Phi_{\pi_{R^*}^*} -\Phi_{\hat{\pi}}) + \mathbb{E}_{\pi_{R^*}^*} \Big[ \sum_{t=0}^\infty \gamma^t \sup_{s\in \mathcal{S}} \epsilon(s) \Big] - \mathbb{E}_{\hat{\pi}} \Big[ \sum_{t=0}^\infty \gamma^t \inf_{s \in \mathcal{S}} \epsilon(s) \Big] \right| \\ \;&{\overset{(v)}=}\;\; \left| w^T(\Phi_{\pi_{R^*}^*} -\Phi_{\hat{\pi}}) + \Big( \sup_{s\in \mathcal{S}} \epsilon(s) - \inf_{s \in \mathcal{S}} \epsilon(s) \Big) \sum_{t=0}^{\infty} \gamma^t \right| \\ \;&{\overset{(vi)}\le}\;\; \left| w^T(\Phi_{\pi_{R^*}^*} -\Phi_{\hat{\pi}}) + \frac{2 \|\epsilon\|_\infty}{1-\gamma} \right| \\ \;&{\overset{(vii)}\le}\;\; \left| w^T(\Phi_{\pi_{R^*}^*} -\Phi_{\hat{\pi}})\right| + \frac{2 \|\epsilon\|_\infty}{1-\gamma} \\ \;&{\overset{(viii)}\le}\; \|w\|_1 \|\Phi_{\pi_{R^*}^*} -\Phi_{\hat{\pi}})\|_\infty + \frac{2 \|\epsilon\|_\infty}{1-\gamma} \\ &{\overset{(ix)}\le}\; \epsilon_\Phi + \frac{2\|\epsilon\|_\infty}{1 - \gamma} \end{align}\]
in (i), we apply the definition of the terms and put absolute values around them. I don't think this is necessary since the LHS must be nonnegative, but it doesn't hurt.
in (ii), we substitute $R^*$ with the theorem's assumption about both the error function and how the estimated reward is a linear combination of features.
in (iii) we move the weights $w$ outside the expectation as they are constants and we can use linearity of expectation. Then we use the paper's definition of $\Phi_\pi$ as the expected feature counts for given policy $\pi$.
in (iv) we move the two $\Phi$ terms together (notice how this matches the theorem's $\epsilon_\Phi$ definition), and we then make this an inequality by looking at the expectations and applying "sup"s and "infs" to each time step. This is saying if we have $A-B$ then let's make the $A$ term larger and the $B$ term smaller. Since we're doing this for an infinite amount of time steps, I am somewhat worried that this is a loose bound.
in (v) we see that since the "sup" and "inf" terms no longer depend on $t$, we can move them outside the expectations. In fact, we don't even need expectations anymore, since all that's left is a sum over discounted $\gamma$ terms.
in (vi) we apply the geometric series formula to get rid of the sum over $\gamma$ and then the inequality results from replacing the "sup"s and "inf"s with the \(\| \epsilon \|_\infty\) from the theorem statement – the "2" helps to cover the extremes of a large positive error and a large negative error (note the absolute value in the theorem condition, that's important).
in (vii) we apply the Triangle Inequality.
in (viii) we apply Hölder's inequality.
finally, in (ix) we apply the theorem statements.
We now take that final inequality and subtract the average demonstration data return on both sides:
\[\underbrace{J(\pi_{R^*}^*|R^*)- J(\mathcal{D}|R^*)}_{\delta} - J(\hat{\pi}|R^*) \le \epsilon_\Phi + \frac{2\|\epsilon\|_\infty}{1 - \gamma} - J(\mathcal{D}|R^*)\]
Now we finally invoke the "if" condition in the theorem. If the equation in the theorem holds, then we can replace $\delta$ above as follows since it's just reducing the LHS:
\[\epsilon_\Phi + \frac{2\|\epsilon\|_\infty}{1 - \gamma} - J(\hat{\pi}|R^*) < \epsilon_\Phi + \frac{2\|\epsilon\|_\infty}{1 - \gamma} - J(\mathcal{D}|R^*)\]
which implies:
\[- J(\hat{\pi}|R^*) < - J(\mathcal{D}|R^*) \quad \Longrightarrow \quad J(\hat{\pi}|R^*) > J(\mathcal{D}|R^*),\]
showing that $\hat{\pi}$ has extrapolated beyond the data.
What's the intuition behind the theorem? The LHS of the theorem shows the difference in return between the optimal policy and the demonstration data. By definition of optimality, the LHS is at least 0, but it can get very close to 0 if the demonstration data is very good. That's not good for extrapolation, and hence the condition for outperforming the demonstrator is less likely to hold (which makes sense). Focusing on the RHS, we see that its value is larger if the maximum error in $\epsilon$ is large. This might be a very restrictive condition, since it considers the maximum absolute error over the entire state set $\mathcal{S}$. Since there are infinitely many states in many practical applications, this means even one large error might cause the inequality in the theorem statement to fail.
The proof also relies on the assumption that the estimated reward function is a linear combination of features (that's what $\hat{R}(s)=w^T\phi(s)$ means) but $\phi$ could contain arbitrarily complex features, so I guess it's a weak assumption (which is good), but I am not sure?
Overall, the T-REX and D-REX papers are nice IRL papers that rely on preferences between trajectories. The takeaways I get from these works:
While reinforcement learning may be very exciting, don't forget about the perhaps lesser-known task of inverse reinforcement learning.
Taking subsamples of trajectories is a helpful way to do data augmentation when doing anything at the granularity of episodes.
Perhaps most importantly, I should understand when and how preference rankings might be applicable and beneficial. In these works, preferences enable them to train an agent to perform better than demonstrator data without strictly requiring ground truth environment rewards, and potentially without even requiring demonstrator actions (though D-REX requires actions).
I hope you found this post helpful. As always, thank you for reading, and stay safe.
Papers covered in this blog post:
Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum. Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations, ICML 2019.
Daniel S. Brown, Wonjoon Goo, Scott Niekum. Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations, CoRL 2019.
Research Talk at the University of Toronto on Robotic Manipulation
A video of my talk at the University of Toronto with the Q-and-A at the end.
Last week, I was very fortunate to give a talk "at" the University of Toronto in their AI in Robotics Reading Group. It gives a representative overview of my recent research in robotic manipulation. It's a technical research talk, but still somewhat high-level, so hopefully it should be accessible to a broad range of robotics researchers. I normally feel embarrassed when watching recordings of my talks, since I realize I should have done X instead of Y in so many places. Fortunately I think this one turned out reasonably well. Furthermore, and to my delight, the YouTube / Google automatic captions captured my audio with a high degree of accuracy.
My talk covers these three papers in order:
Deep Imitation Learning of Sequential Fabric Smoothing From an Algorithmic Supervisor, IROS 2020.
VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation, RSS 2020.
Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks, ICRA 2021.
We covered the first two papers in a BAIR Blog post last year. I briefly mentioned the last one in a personal blog post a few months ago, with the accompanying backstory behind how we developed it. A joint Google AI and BAIR Blog post is in progress … I promise!
Regarding that third paper (for ICRA 2021), when making this talk in Keynote, I was finally able to create the kind of animation that shows the intuition for how a Goal-Conditioned Transporter Network works. Using Google Slides is great for drafting talks quickly, but I think Keynote is better for formal presentations.
I thank the organizers (Homanga Bharadhwaj, Arthur Allshire, Nishkrit Desai, and Professor Animesh Garg) for the opportunity, and I also thank them for helping to arrange the two sign language interpreters for my talk. Finally, if you found this talk interesting, I encourage you to view the talks from the other presenters in the series.
Getting Started with SoftGym for Deformable Object Manipulation
Visualization of the PourWater environment from SoftGym. The animation is from the project website.
Over the last few years, I have enjoyed working on deformable object manipulation for robotics. In particular, it was the focus of my Google internship work, and I did some work with deformables before that, highlighted with our BAIR Blog post here. In this post, I'd like to discuss the SoftGym simulator, developed by researchers from Carnegie Mellon University in their CoRL 2020 paper. I've been exploring this simulator to see if it might be useful for my future projects, and I am impressed by the simulation quality and by its support for fluid simulation. The project website has more information and includes impressive videos. This blog post will be similar in spirit to one I wrote almost a year ago about using a different code base (rlpyt), with a focus on the installation steps for SoftGym.
Installing SoftGym
The first step is to install SoftGym. The provided README has some information but it wasn't initially clear to me, as shown in my GitHub issue report. As I stated in my post on rlpyt, I like making long and detailed GitHub issue reports that are exactly reproducible.
The main thing to understand when installing is that if you're using an Ubuntu 16.04 machine, you (probably) don't have to use Docker. (However, Docker is incredibly useful in its own right, so I encourage you to learn how to use it if you haven't done so already.) If you're using Ubuntu 18.04, then you definitely have to use Docker. However, Docker is only used to compile PyFleX, which has the physics simulation for deformables. The rest of the repository can be managed through a standard conda environment.
Here's a walk-through of my installation and compilation steps on an Ubuntu 18.04 machine, and I assume that conda is already installed. If conda is not installed, I encourage you to check another blog post which describes my conda workflow.
So far, the code has worked for me on a variety of CUDA and NVIDIA driver versions. You can find the CUDA version by running:
seita@mason:~ $ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
For example, the above means I have CUDA 10.0. Similarly, the driver version can be found from running nvidia-smi.
Now let's get started by cloning the repository and then creating the conda environment:
git clone https://github.com/Xingyu-Lin/softgym.git
cd softgym/
conda env create -f environment.yml
This command will create a conda environment that has the necessary packages with their correct versions. However, there's one more package to install, the pybind11 package, so I would install that after activating the environment:
conda activate softgym
conda install pybind11
At this point, the conda environment should be good to go.
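As a quick sanity check (my own addition, not a step from the SoftGym README), you can confirm that the activated environment sees pybind11 before moving on to the PyFleX compile:

# Run inside the activated "softgym" conda environment.
import pybind11
print(pybind11.get_include())   # prints the pybind11 header directory used when compiling bindings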
Next we have the most interesting part, where we use Docker. Here's the installation guide for Ubuntu machines in case it's not installed on your machine yet. I'm using Docker version 19.03.6. A quick refresher on terminology: Docker has images and containers. An image is like a recipe, whereas a container is an instance of it. StackOverflow has a more detailed explanation. Therefore, after running this command:
docker pull xingyu/softgym
we are downloading the author's pre-provided Docker image, and it should be listed if you type in docker images on the command line:
seita@mason:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
xingyu/softgym latest 2cbcd6a50965 3 months ago 2.44GB
If you're running into issues with requiring "sudo", you can mitigate this by adding yourself to a "Docker group" so that you don't have to type it in each time. This Ask Ubuntu post might be helpful.
Next, we have to run a command to start a container. Here, we're using nvidia-docker since compiling PyFleX requires CUDA, as one would expect given that FleX is from NVIDIA. The nvidia-docker tool is not installed along with Docker itself, so please refer to this page for installation instructions. Once that's done, to be safe, I would check that nvidia-docker -v works on your command line and that the version matches what's printed from docker -v. I don't know if it is strictly necessary for the two versions to match.
As mentioned earlier, we have to start a container. Here is the command I use:
(softgym) seita@mason:~/softgym$ nvidia-docker run \
-v /home/seita/softgym:/workspace/softgym \
-v /home/seita/miniconda3:/home/seita/miniconda3 \
-v /tmp/.X11-unix:/tmp/.X11-unix \
--gpus all \
-e DISPLAY=$DISPLAY \
-e QT_X11_NO_MITSHM=1 \
-it xingyu/softgym:latest bash
Here's an explanation:
The first -v will mount /home/seita/softgym (i.e., where I cloned softgym) to /workspace/softgym inside the Docker container's file system. Thus, when I enter the container, I can change directory to /workspace/softgym and it will look as if I am in /home/seita/softgym on the original machine. The /workspace directory seems to be the default working directory when starting these Docker containers.
A similar thing happens with the second mounting command for miniconda. In fact I'm using the same exact directory before and after the colon, which means the directory structure is the same inside the container.
The -it and bash portions will create an environment in the container which lets us type in things on the command line, like with normal Ubuntu machines. Here, we will be the root user. The Docker documentation has more information about these arguments. Note that -it is shorthand for -i -t.
The other commands are copied from the SoftGym Docker README.
Running the command means I enter a Docker container as a "root" user, and you should be able to see this container listed if you type in docker ps in another tab (outside of Docker) since that shows the active container IDs. At this point, we should go to the softgym directory and run the scripts to (1) prepare paths and (2) compile PyFleX:
root@82ab689d1497:/workspace# cd softgym/
root@82ab689d1497:/workspace/softgym# export PATH="/home/seita/miniconda3/bin:$PATH"
root@82ab689d1497:/workspace/softgym# . ./prepare_1.0.sh
(softgym) root@82ab689d1497:/workspace/softgym# . ./compile_1.0.sh
The above should compile without errors. That's it! One can then exit Docker (just type in "exit"), though I would actually recommend keeping that Docker tab/window open in your terminal, because any changes to the C++ code will require re-compiling it, and having the container already set up to compile with one command makes things easier. Adjusting the C++ code is (almost) necessary if you wish to create custom environments.
If you are using Ubuntu 16.04, the steps should be similar but also much simpler, and here is the command history that I have when using it:
git clone https://github.com/Xingyu-Lin/softgym.git
cd softgym/
. ./prepare_1.0.sh
. ./compile_1.0.sh
cd ../../..
The last change-directory command is there because sourcing the compile script leaves me several directories deep in the repository. Just go back to the softgym/ directory and you'll be ready to run.
Code Usage
Back in our normal Ubuntu 18.04 command line setting, we should make sure our conda environment is activated, and that paths are set up appropriately:
(softgym) seita@mason:~/softgym$ export PYFLEXROOT=${PWD}/PyFlex
(softgym) seita@mason:~/softgym$ export PYTHONPATH=${PYFLEXROOT}/bindings/build:$PYTHONPATH
(softgym) seita@mason:~/softgym$ export LD_LIBRARY_PATH=${PYFLEXROOT}/external/SDL2-2.0.4/lib/x64:$LD_LIBRARY_PATH
To make things easier, you can use a script like their provided prepare_1.0.sh to adjust these paths for you, so that you don't have to keep typing in the "export" commands manually.
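If you want to double check that the variables are actually set before importing anything, a tiny Python snippet like this (my own addition, nothing SoftGym-specific) does the trick:

# Print the three environment variables that the PyFleX bindings rely on.
import os
for var in ('PYFLEXROOT', 'PYTHONPATH', 'LD_LIBRARY_PATH'):
    print(var, '=', os.environ.get(var, '(not set)'))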
Finally, we have to turn on headless mode for SoftGym if running on a remote machine. This step tripped me up for a while, even though I'm usually good about remembering it after having gone through similar issues with the Blender simulator (for rendering fabric images remotely). A command like the following, which runs the chosen environment and has the agent take random actions, should hopefully work:
(softgym) seita@mason:~/softgym$ python examples/random_env.py --env_name ClothFlatten --headless 1
If you are running on a local machine with a compatible GPU, you can remove the headless option to have the animation play in a new window. Be warned, though: the size of the window should remain fixed throughout, since the code appends frames together, so don't drag or resize the window. You can right-click to change the camera angle, and use the W-A-S-D keys to navigate.
The given script might give you an error about a missing directory; if so, just create it with mkdir data/.
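For reference, here is a rough sketch of what a minimal Gym-style loop might look like once the paths are set up. This is my own sketch, not code from the repository: the import path and the constructor arguments (observation_mode, action_mode, num_picker, headless) are my best guesses from skimming the code, so treat examples/random_env.py as the authoritative version.

# Rough sketch only; the import path and constructor arguments are assumptions.
from softgym.envs.cloth_flatten import ClothFlattenEnv   # assumed module path

env = ClothFlattenEnv(
    observation_mode='cam_rgb',   # assumed option: RGB camera observations
    action_mode='picker',         # assumed option: the pick-and-place "fake gripper" actions
    num_picker=2,                 # assumed: two grippers, as in the fabric GIFs
    headless=True,                # no window, as with --headless 1 above
)
obs = env.reset()
for _ in range(10):
    action = env.action_space.sample()           # random action, as in examples/random_env.py
    obs, reward, done, info = env.step(action)   # standard Gym-style transition
print('final reward:', reward)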
Long story short, SoftGym contains one of the nicest looking physics simulators I've seen for deformable objects. I also really like the support for liquids. I can imagine future robots transporting boxes and bags of liquids.
Working and Non-Working Configurations
I've tried installing SoftGym on a number of machines. To summarize, here are all the working configurations, which I tested by running the examples/random_env.py script:
Ubuntu 16.04, CUDA 9.0, NVIDIA 440.33.01, no Docker at all.
Ubuntu 18.04, CUDA 10.0, NVIDIA 450.102.04, only use Docker for installing PyFleX.
Ubuntu 18.04, CUDA 10.1, NVIDIA 430.50, only use Docker for installing PyFleX.
Ubuntu 18.04, CUDA 11.1, NVIDIA 455.32.00, only use Docker for installing PyFleX.
To clarify, when I list the above "CUDA" versions, I am getting them from typing the command nvcc --version, and when I list the "NVIDIA" driver versions, it is from nvidia-smi. The latter command also lists a "CUDA Version" but that is for the driver, and not the runtime, and these two CUDA versions can be different (on my machines the versions usually do not match).
Unfortunately, I have run into a case where SoftGym does not seem to work:
Ubuntu 16.04, CUDA 10.0, NVIDIA 440.33.01, no Docker at all. The only difference from a working setting above is that it's CUDA 10.0 instead of 9.0. This setting results in the following error:
Waiting to generate environment variations. May take 1 minute for each variation...
*** stack smashing detected ***: python terminated
Aborted (core dumped)
I have yet to figure out how to fix this. If you've run into this issue and found a fix, it would be nice to inform the code maintainers.
The Code Itself
The code does not include their reinforcement learning benchmarks; those are in a separate code base, which as of March 2021 is now public. In SoftGym, there is a basic pick-and-place action space with fake grippers, which may be enough for preliminary usage. In the GIFs for the fabric environments, you can see these fake grippers rendered as moving white spheres.
Fortunately, the SoftGym code is fairly readable and well-structured. There's a FlexEnv class and a sensible class hierarchy for the different types of deformables supported – rope, cloth, and liquids. Here's how the classes are structured, with parent-child relationships indicated by the indentation below:
FlexEnv
RopeNewEnv
RopeFlattenEnv
RopeConfigurationEnv
ClothEnv
ClothDropEnv
ClothFlattenEnv
ClothFoldEnv
ClothFoldCrumpledEnv
ClothFoldDropEnv
FluidEnv
PassWater1DEnv
PourWaterPosControlEnv
PourWaterAmountPosControlEnv
One can generally match the environment names reported in the CoRL 2020 paper with the code classes. For example, the "FoldCloth" and "SpreadCloth" environments reported in the paper correspond to the "ClothFoldEnv" and "ClothFlattenEnv" classes.
The code maintainers responded to some questions I had in this GitHub issue report about making new environments. The summary is that (1) this appears to require knowledge of how to use a separate library, PyFleX, and (2) when we make new environments, we have to make new header files with the correct combination of objects we want, and then re-compile PyFleX.
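To make the class hierarchy above concrete, here is a purely hypothetical sketch of what the Python side of a new cloth task could look like. The base class name comes from the hierarchy listed earlier, but the method I override is an assumption about the interface, and as just noted, a genuinely new combination of objects would still require editing the C++ headers and re-compiling PyFleX.

# Hypothetical sketch only, not working code from the repository.
from softgym.envs.cloth_env import ClothEnv   # assumed module path for the ClothEnv base class

class ClothStretchEnv(ClothEnv):
    """Hypothetical task: pull the cloth taut."""
    def compute_reward(self, *args, **kwargs):
        # Placeholder: a real reward would read particle positions through the
        # PyFleX bindings and score, e.g., how spread out the cloth is.
        return 0.0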
As of November 2021, I have been using the code more and thus am more familiar with it compared to when I initially wrote this blog post. If you have questions on the code, I encourage you to file an issue report.
I hope this blog post can be of assistance when getting started with SoftGym. I am excited to see what researchers try with it going forward, and I'm grateful to be in a field where simulation for robotics is an active area of research.
July 21, 2021: updated the post to reflect some of my additional tests, and to add the separate reinforcement learning algorithms repository.
November 06, 2021: updated the post to clarify best practices with compiling, and to explain that I have been using the code.
Five New Research Preprints Available
The video for the paper "Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks."
The Fall 2020 semester was an especially busy one, since I was involved in multiple paper submissions with my outstanding collaborators. Five preprints are now available, and this post summarizes each of these, along with some of the backstories behind the papers. In all cases, arXiv should have the most up-to-date version of each paper.
Learning Dense Visual Correspondences in Simulation to Smooth and Fold Real Fabrics
Aditya Ganapathi, Priya Sundaresan, Brijen Thananjeyan, Ashwin Balakrishna, Daniel Seita, Jennifer Grannen, Minho Hwang, Ryan Hoque, Joseph Gonzalez, Nawid Jamali, Katsu Yamane, Soshi Iba, Ken Goldberg
The bulk of this work was actually done in Spring 2020, but we've made some significant improvements in the latest version on arXiv by expanding the experiments and improving the writing. The main idea in this paper is to use dense object descriptors (see my blog post here) in simulation to get correspondences between two different images of the same object, which in our case would be fabrics. If we see two images of the same fabric, but where the fabric's appearance may be different in the two images (e.g., having a fold versus no fold), we would like to know which pixels in image 1 correspond to pixels in image 2, in the sense that the correspondence will give us the same part of the fabric. We can use the learned correspondences to design robot policies that smooth and fold real fabric, and we can even do this in real environments with the aid of domain randomization.
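To make "correspondence" a bit more concrete, the test-time lookup boils down to a nearest-neighbor search in descriptor space. The sketch below is a generic illustration of that idea with made-up array names, not the code from the paper:

# Generic descriptor-matching sketch (illustrative only, not the paper's code).
# d1 and d2 are dense descriptor maps of shape (H, W, D) for images 1 and 2,
# produced by the learned network; (u, v) is a query pixel in image 1.
import numpy as np

def find_correspondence(d1, d2, u, v):
    query = d1[v, u]                                        # descriptor at the query pixel
    dist = np.linalg.norm(d2 - query, axis=-1)              # distance to every pixel of image 2
    v2, u2 = np.unravel_index(np.argmin(dist), dist.shape)  # pixel with the closest descriptor
    return u2, v2

The training signal, roughly speaking, pulls descriptors of matching pixels (which are known exactly in simulation) close together and pushes non-matching ones apart, which is what makes this simple lookup work at test time.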
I was originally hoping to include this paper in our May 2020 BAIR Blog post on fabric manipulation, but the blog authors and I decided against this, since this paper doesn't neatly fit into the "model-free" vs "model-based" categorization.
Intermittent Visual Servoing: Efficiently Learning Policies Robust to Tool Changes for High-precision Surgical Manipulation
Samuel Paradis, Minho Hwang, Brijen Thananjeyan, Jeffrey Ichnowski, Daniel Seita, Danyal Fer, Thomas Low, Joseph E. Gonzalez, Ken Goldberg
This paper proposes Intermittent Visual Servoing (IVS), a framework which uses a coarse controller in free space, but employs imitation learning to learn precise actions in regions that have the highest accuracy requirements. Intuitively, many tasks are characterized by some "bottleneck points", such as tightening a screw, and we'd like to specialize the learning portion for those areas.
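In pseudocode, the control loop is essentially a switch between two controllers. This is my own simplified sketch of the idea, not the actual IVS implementation:

# Simplified sketch of the IVS idea (mine, not the paper's code).
def ivs_step(robot_state, obs, coarse_controller, learned_policy, near_bottleneck):
    if near_bottleneck(robot_state, obs):        # e.g., the end-effector is close to a peg
        return learned_policy(obs)               # imitation-learned, high-precision actions
    return coarse_controller(robot_state)        # cheap trajectory following in free space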
To benchmark IVS, we test on a surgical robot, and train it to autonomously perform surgical peg transfer. For some context: peg transfer is a task commonly used as part of a curriculum to train human surgeons for robot surgery. Robots are commonly used in surgery today, but in all cases, these involve a human manipulating tools, which then cause the surgical robot to move in known directions. This process is specifically referred to as "teleoperation."
For our automated surgical robot on peg transfer, we show high success rates, and transferability of the learned model across multiple surgical arms. The latter is a known challenge as different surgical arm tools have different mechanical properties, so it was not clear to us if off-the-shelf IVS could work, but it did!
Superhuman Surgical Peg Transfer Using Depth-Sensing and Deep Recurrent Neural Networks
Minho Hwang, Brijen Thananjeyan, Daniel Seita, Jeffrey Ichnowski, Samuel Paradis, Danyal Fer, Thomas Low, Ken Goldberg
This paper is an extension of our ISMR 2020 and IEEE RA-Letters 2020 papers, which also experiment with surgical peg transfer. It therefore relates to the prior paper on Intermittent Visual Servoing, though I would not call it an extension of that paper, since we don't actually apply IVS here, nor do we test transferability across different surgical robot arms.
In this work, we use depth sensing, recurrent neural networks, and a new trajectory optimizer (thanks to Jeff Ichnowski) to get an automated surgical robot to outperform a human surgical resident on the peg transfer task. In this and our ISMR 2020 paper, Danyal Fer acted as the human surgical resident. For our ISMR 2020 paper, we couldn't get the surgical robot to be as good as him on peg transfer, prompting this frequent internal comment among us: Danyal, how are you so good??
Well, with the combination of these new techniques, plus terrific engineering work from postdoc Minho Hwang, we finally obtained accuracy and timing results at or better than those Danyal Fer obtained. I am looking forward to seeing how far we can push ahead in surgical robotics in 2021.
Robots of the Lost Arc: Learning to Dynamically Manipulate Fixed-Endpoint Ropes and Cables
Harry Zhang, Jeffrey Ichnowski, Daniel Seita, Jonathan Wang, Ken Goldberg
This paper shows a cool application of a UR5 arm performing high-speed dynamic rope manipulation tasks. Check out the video of the paper (on the project website), which comes complete with some Indiana Jones-style robot whipping demonstrations. We also name the proposed learning algorithm in the paper with the INDY acronym, for obvious reasons.
The first question I would have when thinking about robots whipping rope is: how do we define an action? We decided on a simple yet flexible enough approach that worked for whipping, vaulting, and weaving tasks: a parabolic action motion coupled with a prediction of the single apex point of this motion. The main inspiration for this came from the "TossingBot" paper from Andy Zeng, which used a similar idea for parameterizing a tossing action. That brings us to the fifth and final paper featured in this blog post …
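As a quick aside before getting to that final paper: to see why a single apex point is enough to pin down a parabolic motion, here is a toy example (mine, not the paper's exact parameterization). Fix the start point, put the vertex of a vertical-plane parabola at the predicted apex, and the curve is fully determined:

# Toy example: a vertical-plane parabola with its vertex at a predicted apex (xa, za),
# constrained to pass through a fixed start point (x0, z0). Not the paper's parameterization.
import numpy as np

def parabola_through_apex(x0, z0, xa, za, num_points=50):
    k = (za - z0) / (xa - x0) ** 2                   # curvature so the curve passes through the start
    xs = np.linspace(x0, 2 * xa - x0, num_points)    # sweep symmetrically past the apex
    zs = za - k * (xs - xa) ** 2                     # z(x) = za - k (x - xa)^2, vertex at the apex
    return xs, zs                                    # waypoints in the plane of the swing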
Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks
Daniel Seita, Pete Florence, Jonathan Tompson, Erwin Coumans, Vikas Sindhwani, Ken Goldberg, Andy Zeng
Here, we finally have the one paper where I'm the first author, and the one for which I expended the bulk of my research efforts. (You can imagine what my work days were like last fall, with me working on this paper in the mornings and afternoons, followed by the other papers above in the evenings.) This paper came out of my Summer 2020 virtual internship with Google Robotics, where I was hosted by the great Andy Zeng. Before the internship began, Andy and I knew we wanted to work on deformable object manipulation, and we thought it would be nice to show a robot manipulating bags, since that would be novel. But we weren't sure what method to use to train the robot.
Fortunately, at that time, Andy was hard at work on something called Transporter Networks. It ended up as one of the top papers presented at CoRL 2020. Andy and I hypothesized that Transporter Networks could work well on a wide range of deformable object manipulation tasks. So, I designed over a dozen simulated environments using PyBullet that included the full suite of 1D, 2D, and 3D deformables. We were actually thinking of using Blender before the internship, but at some point I realized that Blender would not be suitable. Pivoting to PyBullet, though painful initially, proved to be one of the best decisions we made.
While working on the project, Andy and I wanted to increase the flexibility of Transporter Networks to different task specifications. That's where the "goal-conditioned" version came from. There are multiple ways of specifying goals; here, we decided to specify an image of the desired rearrangement configuration.
Once we had the architectures and the simulated tasks set up, it was a matter of finding the necessary compute to run the experiments, and iterating upon the design and tasks.
I am very pleased with how the paper turned out, and I hope to release a more detailed blog post about it, both here and on the BAIR and Google AI blogs. I really enjoyed working with this team; I have not met any of the Google-affiliated authors in person, so I look forward to the day when the pandemic subsides.
I hope you find these papers interesting! If you have questions or would like to discuss topics in these papers further, feel free to reach out.
Every year I have a tradition where I try to write down all the books I read, and to summarize my thoughts. Despite how 2020 was quite different from years past, I was able to get away from the distractions of the world by diving into books. I have listed 40 books here:
Popular Science (6 books)
Current Events (4 books)
Business and Technology (5 books)
Race and Anti-Racism (5 books)
Countries (4 books)
Psychology and Psychiatry (4 books)
The total is similar to past years (2016 through 2019): 34, 43, 35, 37. As always you can find prior summaries in the archives. I tried to cut down on the length of the summaries this year, but I was only partially successful.
Group 1: Popular Science
Every year, I try to find a batch of books that quenches my scientific curiosity.
** Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 ** (2011) blew me away via a whirlwind tour of the future. Michio Kaku, a famous theoretical physicist and CUNY professor, attempts to predict 2100. Kaku's vision relies on (a) what is attainable subject to the laws of physics, and (b) interviews with hundreds of leading scientific experts, including many whose names I recognize. Crudely, one can think of Physics of the Future as a more general vision of Ray Kurzweil's How to Create a Mind (discussed below) in that Kurzweil specializes in AI and neuroscience, whereas Kaku focuses on a wider variety of subjects. Physics of the Future has separate chapters on: Computers, AI, Medicine, Nanotech, Energy, Space Travel, Wealth, Humanity, and then the last one is about "Day in the Life of 2100." Kaku breaks down each subject into what he thinks will happen in (a) the near future to 2030, (b) then later in 2030-2070, and (c) from 2070-2100. For example, in the chapter on computers, much discussion is spent on the limits of current silicon-based CPUs, since we are hitting the theoretical limit of how many transistors we can insert in a chip of silicon, which is why there's been much effort on going beyond Moore's Law, such as parallel programming and quantum computing. In the AI chapter, which includes robotics, there is a brief mention of learning-based versus "classical" approaches to creating AI. If Kaku had written this book just a few years later, this chapter would look very different. In biology and medicine, Kaku is correct in that we will try to build upon advances in gene therapy and extend the human lifespan, which might (and this is big "might") be possible with the more recent CRISPR technologies (not mentioned in the book, of course). While my area of expertise isn't in biology and medicine, or the later chapters on nanotechnology and energy, by the time I finished this book, I was in awe of Kaku's vision of the future, but also somewhat tempered by the enormous challenges ahead of us. For a more recent take on Kaku's perspective, here is a one-hour conversation on Lex Fridman's podcast where he mentions CRISPR-like technologies will let humans live forever by identifying "mistakes" in cells (i.e., the reason why we die). I'm not quite as optimistic as Kaku is on that prospect, but I share his excitement of science.
** How to Create a Mind: The Secret of Human Thought Revealed ** (2012) by the world's most famous futurist, Ray Kurzweil. While his most popular book is The Singularity is Near from 2005, this shorter book — a follow-up in some ways — is a pleasure to read. In How to Create a Mind Kurzweil focuses on reverse-engineering the brain by conjecturing how the brain works, and how the process could be emulated in a computer. The aspiration is obvious: if we can do this, then perhaps we can create intelligent life. If, in practice, machines "trick" people into thinking they are real brains with real thought, then Kurzweil argues that for all practical purposes they are conscious (see Chapter 9).1 There was some discussion about split-brain patients and the like, which overlaps with some material in Incognito, which I read in 2017. Throughout the book, there is emphasis on the neocortex, which, according to Wikipedia, plays a fundamental role in learning and memory. Kurzweil claims it acts as a pattern recognizer, and that there's a hierarchy to let us conduct higher-order reasoning. This makes sense, and Kurzweil spends a lot of effort describing ways we can simulate the neocortex. That's not to say the book is 100% correct or prescient. He frequently mentions Hidden Markov Models (HMMs), but I hardly ever read about them nowadays. Perhaps the last time I actually implemented HMMs was for a speech recognition homework assignment in the Berkeley graduate Natural Language Processing course back in 2014. The famous AlexNet paper appeared just a few months after this book was published, catalyzing the Deep Learning boom. Also, Kurzweil's prediction that self-driving cars would be here "by the end of the decade" was wildly off. I think it's unlikely we will see them publicly available even by the end of this new decade, in December of 2029. But he also argues that as of 2012, the trends from The Singularity is Near are continuing, with updated plots showing that once a technology becomes an information technology then the "law of accelerating returns" will kick in, creating exponential growth. There are "arguments against incredulity," as argued by the late Paul Allen. Kurzweil spends the last chapter refuting Allen's arguments. I want to see an updated 2021 edition of Kurzweil's opinions on topics in this book, just like I do for Kaku's book.
** A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution ** (2017) by Berkeley Professor Jennifer A Doudna and her former PhD student Samuel H Sternberg (now at Columbia University). The Doudna lab has a website with EIGHTEEN postdocs at the time of me reading this! I'm sure that can't be the norm, since Doudna is one of the stars of the Berkeley chemistry department and recently won the 2020 Nobel Prize in Chemistry. This book is about the revolutionary technology called CRISPR. The first half provides technical background, and the second half describes the consequences, both the good (what diseases it may cure) and the bad (ethics and dangers). In prior decades, I remember hearing about "gene therapy," but CRISPR is "gene editing" — it is far easier to use CRISPR to edit genes than any prior technology, which is one of the reasons why it has garnered widespread attention since a famous 2012 Science paper by Doudna and her colleagues. The book provides intuition showing how CRISPR works to edit genes, though as with anything, it will be easier to understand for people who work in this field. The second half of the book is more accessible and brings up the causes of concern: designer babies, eugenics, and so on. My stance is probably similar to Doudna and of most scientists in that I support investigating the technology with appropriate restrictions. A Crack in Creation was published in 2017, and already in November 2018, there was a story that broke (see MIT Review, and NYTimes articles) about the scientist He Jiankui who claimed to create the first gene-edited humans. The field is moving so fast, and reading this book made it clear the obvious similarities between CRISPR and AI technologies and how they are (a) growing so powerful and (b) require safety and ethical considerations. Sadly, I also see how CRISPR can lead to battle lines over who has credit for the technology; in AI, we have a huge problem with "flag planting" and "credit assignment" and I hope this does not damage the biochemistry field. I am also curious about the relationship between CRISPR and polygenic scores,2 which were discussed in the book Blueprint (see my thoughts here). I wish there were more books like A Crack in Creation.
** Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies ** (2017) is one of my favorites this year. By Geoffrey West, a Santa Fe Institute theoretical physicist who's more accurately described as a "jack of all trades," the book unifies the theme of "scale" across organisms, cities, and companies. It asks questions like: why aren't there mammals the size of Godzilla? Why aren't humans living for 200 years? How do income and crime scale with city size? Any reputable scientist can answer the question about Godzilla: anything Godzilla's size would not be able to support itself, unless it were somehow made of "different" living material. West's key insight is to relate these questions to an overarching theme of exponential growth and scaling. For example, consider networks and capillaries. Mammals have hearts that pump blood into areas of the body, with the vessel size decreasing up to the capillaries at the end. But across all mammals, the capillaries at the "ends" of this system are roughly the same size, and optimize the "reachability" of a system. Furthermore, this is similar to a water system in a city, so perhaps the organization and size limitations of cities are similar to those of mammals. Another key finding is that many attributes of life are constant across many organisms. Take the number of heartbeats in a mammal's lifespan. Smaller mammals have much faster heart rates, whereas bigger mammals have much slower heart rates, yet the number of heart beats is roughly the same across an enormous variety of organisms. That factor, along with mortality rates for humans, suggests a natural limit to human lifespans, so West is skeptical that humans will live far beyond the current record of 122 years. Scale is filled with charts showing various qualities that are consistent across organisms, cities, and companies, and which also demonstrate exponential growth. It reminds me of Steven Pinker's style of adding quantitative metrics to social science research. West concludes with disconcerting discussions about whether humanity can continue accelerating at the superexponential rate at which we've been living. While careful not to fall into the "Malthusian trap," he's concerned that the environment will no longer be able to support our rate of living. Scale is a great book from one of academia's brightest minds, who manages to turn the scientific details into something readable. If you don't have the time to read 450+ pages, then his 2011 TED Talk might be a useful alternative.
** The Book of Why: The New Science of Cause and Effect ** (2018) is by 2011 Turing Award winner Judea Pearl, a professor at UCLA and a leading researcher in AI, along with science writer Dana MacKenzie3. I first remember reading about Pearl's pioneering work in Bayesian Networks when I was an undergrad trying (unsuccessfully) to do machine learning research. To my delight, Bayesian Networks are featured in The Book of Why, and I have fond memories of studying them for the Berkeley AI Prelims. Ah. Pearl uses a metaphor of a ladder with three rungs that describe understanding. The first rung is where the current Deep Learning "revolution" lies, and relates to pattern matching. In the second rung, a machine must be able to determine what happens when an intervention is applied. Finally, the third and most interesting rung is on counterfactual inference: what would have happened if, instead of \(X\), we had actually done \(Y\)? It requires us to imagine a world that did not exist, and Pearl argues that this thinking is essential to create advanced forms of AI. Pearl is an outspoken skeptic of the "Big Data" trend, where one just looks at the data to find a conclusion. So this book is his way of expressing his journey through causal inference to a wider audience, where he introduces the "\(P(X | do(Y))\)" operator (in contrast to \(P(X | Y)\)), how to disentangle the effect of confounding, and how to perform counterfactual inference. What is the takeaway? If I'm judging the "Turing Award" designation correctly, it seems like Pearl's work on causality is widely accepted, or at least not vigorously opposed, by those in the community, so I guess it's been a success? I should also have anticipated that Andrew Gelman would review the book on his famous blog with some mixed reactions. To summarize (and I might share this view): while The Book of Why brings up many interesting points, it may read too much like someone reveling in his "conquering" of "establishment statisticians," which might turn off readers. Some of the text is also over-claiming: the book says causality can help with smoking, taxes, climate change, and so forth, but those analyses can arguably be done without necessarily resorting to the exact causal inference machinery.
** Human Compatible: Artificial Intelligence and the Problem of Control ** (2019) is by Berkeley computer science professor Stuart Russell and a leading authority on AI. Before the pandemic, I frequently saw Prof. Russell as our offices are finally on the same floor, and I enjoyed reading and blogging about his textbook (soon to be updated!) back when I was studying for the AI prelims. A key message from Human Compatible is that we need to be careful when designing AI. Russell argues: "machines are beneficial to the extent that their actions can be expected to achieve our objectives". In other words, we want robots to achieve our intended objectives, which is not necessarily — and usually is not! — what we exactly specified in the objective through a cost or reward function. Instead of this, the AI field has essentially been trying to make intelligent machines achieve "the machine's" objective. This is problematic in several ways, one of which is that humans are bad at specifying their intents. A popular example of this is in OpenAI's post about faulty reward functions. The BAIR blog has similar content in this post and a related post (by Stuart Russell's students, obviously). As AI becomes more powerful, mis-specified objective functions have greater potential for negative consequences, hence the need to address this and other mis-uses of AI (e.g., see Chapter 4 and lethal autonomous weapons). There are a range of possible techniques for obtaining provably beneficial AI, such as making machines "turn themselves off" and ensuring they don't block that, or having machines ask humans for assistance in uncertain cases, or having machines learn human preferences. Above all, Russell makes a convincing case for human-compatible AI discourse, and I recommend the book to my AI colleagues and to the broader public.
Group 2: Current Events
These are recent books covering current events.
** Factfulness: Ten Reasons We're Wrong About the World — and Why Things Are Better Than You Think ** (2018) by the late Hans Rosling, who died of cancer and was just able to finish this book in time with his family. Hans Rosling was a Swedish physician and academic, and from the public's view, may be best known for his data visualization techniques4 to explain why many of us in so-called "developed countries" have misconceptions about "developing countries" and the world more broadly. (Look him up online and watch his talks, for example this TED talk.) The ten reasons in Factfulness are described as "instincts": gap, negativity, straight line, fear, size, generalization, destiny, single perspective, blame, and urgency. In discussing these points, Rosling urges us to dispense with the terms "developing" and "developed" and instead to use a four-level scale, with most of the world today on "Level 2" (and the United States on "Level 4"). Rosling predicts that in 2040, most of the world will be on Level 3. Overall, this book is similar to Steven Pinker's Better Angels and Enlightenment Now so if you like those two, as I did, you will probably like Factfulness. However, there might not be as much novelty. I want to conclude with two thoughts. The criticism of "cherry-picking facts" is both correct but also unfair since any book that covers a topic as broadly as the state of the world will be forced to do so. Second, while reading this book, I think there is a risk of focusing too much on countries that have a much lower baseline of prosperity to begin with (e.g., countries on Level 1 and 2) and it would be nice to see if we can get similarly positive news for countries which are often viewed as "wealthy but stagnant" today, such as Japan and (in many ways) the United States. Put another way, can we develop a book like Factfulness that will resonate with factory workers in the United States who have lost jobs due to globalization, or people lamenting soaring income inequality?
** The Coddling of the American Mind: How Good Intentions and Bad Ideas are Setting Up a Generation for Failure ** (2018) was terrific. It's written by Greg Lukianoff, a First Amendment lawyer specializing in free speech on campuses, and Jonathan Haidt, a psychology professor at NYU and one of the best-known researchers in his field. For perspective, I was aware of Haidt before reading this book. Coddling of the American Mind is an extended version of their article in The Atlantic, which introduced their main hypothesis that the trend of protecting students from ideas they don't like is counterproductive. Lukianoff and Haidt expected a wave of criticism after their article, but it seemed like there was agreement from across the political spectrum. They emphasize how much of the debate over free speech on college campuses is a debate within the political left, given the declining proportion of conservative students and faculty. The simple explanation is that the younger generation disagrees with older liberals, the latter of whom generally favor freer speech. The book mentions both my undergrad, Williams College, and my graduate school, the University of California, Berkeley, since both institutions have faced issues with free speech and inviting conservative speakers to campus. More severe were the incidents at Evergreen State, though fortunately what happened there was far from typical. Lukianoff and Haidt also frequently reference Jean Twenge's book IGen: Why Today's Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy – and Completely Unprepared for Adulthood – and What That Means for the Rest of Us, with a long self-explanatory subtitle. I raced through The Coddling of the American Mind and will definitely keep it in mind for my own future. Like Haidt, I generally identify with the political left, but I read a fair amount of conservative writing and feel like I have significantly benefited from doing so. I also generally oppose disinviting speakers, or "cancel culture" more broadly. This book was definitely a favorite of mine this year. The title is unfortunate, as the "coddling" terminology might cause the people who would benefit the most to avoid reading it.
** The Tyranny of Merit: What's Become of the Common Good? ** (2020) by Michael J. Sandel, a Professor of Government at Harvard University who teaches political philosophy. Sandel's objective is to inform us about the dark side of meritocracy. Whereas in the past, being a high-status person in American society was mainly due to being white, male, and wealthy, nowadays America's educational system has changed to a largely merit-based one, however one defines "merit." But for all these changes, we still have low income mobility, where the children of the wealthy and highly educated are likely to remain in high status professions, and the poor are likely to remain poor. Part of this is because elite colleges and universities are still overly-represented by the wealthy. But, argues Sandel, even if we achieve true meritocracy, would that actually be a desirable thing? He warns us that this will exacerbate credentialism as "the last acceptable prejudice," where for the poor, the message we send to them is bluntly that they are poor because they are bad on the grounds of merit. That's a tough pill to swallow, which can breed resentment, and Sandel argues for this being one of the reasons why Trump won election in 2016. There are also questions about what properly defines merit, and unfortunate side effects of the race for credentialism, where "helicopter parenting" means young teenagers are trying to fight to gain admission to a small pool of elite universities. This book is more about identifying the problem rather than proposing solutions, but Sandel includes some modest approaches, such as (a) adding a lottery to admissions processes at elite universities, and (b) taxing financial transactions that add little value (though these seem quite incremental to me). Of course, he agrees, it's better to not have wealth or race be the deciding factor that determines quality of life, as Sandel opens up in his conclusion when describing how future home run record holder Hank Aaron had to practice batting using sticks and bottle caps due to racism. But that does not mean the current meritocracy status quo should be unchallenged.
** COVID-19: The Pandemic That Never Should Have Happened, and How to Stop the Next One ** (2020) by New Scientist reporter Debora MacKenzie, was quickly written in early 2020 and published in June, while the world was still in the midst of the pandemic. The book covers the early stages of the pandemic and how governments and similar organizations were unprepared for one of this magnitude despite early warnings. MacKenzie provides evidence that scientists were warning for years about the risks of pandemics, but that funding, politics, and other factors hindered the development of effective pandemic strategies. The book also provides a history of some earlier epidemics, such as the flu of 1918 and SARS in 2003, and why bats are a common source of infectious diseases. (But don't go around killing bats, that's a completely misguided way of fighting COVID-19.) MacKenzie urges us to provide better government support for research and development into vaccines, since while markets are a great thing, it is difficult for drug and pharmaceutical companies to make profits off of vaccines while investing in the necessary "R and D." She also wisely says that we need to strengthen the World Health Organization (WHO), so that the WHO has the capability to quickly and decisively state when a pandemic is occurring without fear of offending governments. I think MacKenzie hits on the right cylinders here. I support globalization when done correctly. We can't tear down the world's gigantic interconnected system, but we can at least make systems with more robustness for future pandemics and catastrophic events. As always, though, it's easier said than done, and I am well aware that many people do not think as I do. After all, my country has plenty of anti-vaxxers, and every country has its share of politicians who are hyper-nationalistic and are willing to silence their own scientists who have bad news to share.
Group 3: Business and Technology
Remote: Office Not Required (2013) is a concise primer on the benefits of remote work. It's by Jason Fried and David Hansson, cofounders of 37Signals (now Basecamp), a software company which specializes in one product (i.e., Basecamp!) to organize projects and communication. I used it once, back when I interned at a startup. Basecamp has unique work policies compared to other companies, which the authors elaborate upon in their 2017 manifesto It Doesn't Have to be Crazy At Work (discussed below). This book narrows down on the remote aspect of their workforce, reflecting how Basecamp's small group of employees works all around the world. Fried and Hansson describe the benefits of remote work: a traditional office is filled with distractions, the commute to work is generally unpleasant, talent isn't bound in specific cities, and so on. Then, they show how Basecamp manages their remote work force, essentially offering a guide to other companies looking to make the transition to remote work. I think many are making the transition if they haven't done so already. If anything, I was surprised that it's necessary to write a book on these "obvious" facts, but then again, this was published right when Marissa Mayer, then Yahoo!'s CEO, famously said Yahoo! would not permit remote work. In contrast, I was reading this book in April 2020 when we were in the midst of the COVID-19 pandemic which essentially mandated remote work. While I miss in-person work, I'm not going to argue against the benefits of some remote work.
Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley (2016) is by Antonio García Martínez. A "gleeful contrarian" who entered the world of Silicon Valley after a failed attempt at becoming a scientist (formerly a physics PhD student at UC Berkeley) and then a stint as a Goldman Sachs trader, he describes his life at Adchemy5, then as the CEO of his startup, AdGrok, and then his time at Facebook. AdGrok was a three-man startup with Martínez and two other guys, specializing in ads, and despite all their missteps, it got backed by the Y-Combinator. Was it bought by Facebook? Nope — by Twitter, and Martínez nearly screwed the whole acquisition by refusing to work for Twitter and joining Facebook, essentially betraying his two colleagues. At Facebook, he was a product manager specializing in ads, and soon got embroiled over the future ads design; Martínez was proposing a new system called "Facebook Exchange" whereas his colleagues mostly wanted incremental extensions of the existing Facebook Ads system (called "Custom Audiences"). He was eventually fired from Facebook, and then went to Twitter as an adviser, and as of 2019 he's at Branch. Here's a TL;DR opinionated summary: while I can see why people (usually men) might like this fast-paced exposé of Silicon Valley, I firmly believe there is a way to keep his good qualities — his determination, passion, focus — without the downsides of misogyny, getting women pregnant two weeks after meeting them, and flouting the law. I'll refer you to this criticism for more details, and to add on to this, while Martínez is able to effectively describe concepts in Silicon Valley and computing reasonably well, he often peppers those comments with sexual innuendos. This is absolutely not the norm among the men I work with. I wonder what his Facebook colleagues thought of him after reading this book. On a more light-hearted note, soon after reading Chaos Monkeys, I watched Michael I Jordan's excellent podcast conversation with Lex Fridman on YouTube6. Prof. Jordan discusses and criticizes Facebook's business model for failing to create a "consumer-producer ecosystem" and I wonder how much the idea of Facebook Exchange overlaps with Prof. Jordan's ideal business model.
** It Doesn't Have to Be Crazy at Work ** (2017). The authors are (again) Jason Fried and David Hansson, who wrote Remote: Office Not Required (discussed above). I raced through this book, with repeated smiles and head-nodding. Perhaps more adequately described as a rousing manifesto, it's engaging, fast-paced, and effectively conveys how Basecamp manages to avoid imposing a crazy work life. Do we really need 80-hour weeks, endless emails, endless meetings, and so on? Not according to Basecamp: "We put in about 40 hours a week most of the year […] We not only pay for people's vacation time, we pay for the actual vacation, too. No, not 9 p.m. Wednesday night. It can wait until 9 a.m. Thursday morning. No, not Sunday. Monday." Ahh … Now, I definitely don't follow what this book says word-for-word. For example, I work far more than 40 hours a week. My guess is 60 hours, and that doesn't count time spent firing off emails in the evening. But I do my best. I try to ensure that my day isn't consumed by meetings or emails, and that I have long time blocks to myself for focused work. So far I think it's working for me. I feel reasonably productive and have not burnt out. I try to continue this during the age of remote work. Basecamp has been working remotely for 20 years, and their software (and hopefully work culture) may have gotten more attention recently as COVID-19 spread through the world. Perhaps more employers will enable remote work going forward.
** Brotopia: Breaking Up the Boys' Club of Silicon Valley ** (2018) is by Emily Chang, a journalist, author, and current anchor of Bloomberg Technology. For those of us wondering why Silicon Valley continues to be heavily male-dominated despite years and years of public outcry, Chang offers a compelling set of factors. Brotopia briefly covers the early history of the tech industry and how employees were screened for certain factors that statistically favored men. She reviews the "Paypal Mafia" and why meritocracy is a myth, and then covers Google, a company which has for years had good intentions but has experienced its own share of missteps, lawsuits, and press scrutiny over its treatment of women. Then there's the chapter that Chang reportedly said was "the hardest to research by far," about secret parties hosted by Venture Capitalists and other prominent men in the tech industry, where they network and invite young women.7 Chang points out that incentives given by tech companies to employees (e.g., food, alcohol, fitness centers, etc.) often cater to the young and single, and encourage a blend of work and life, meaning that for relatively older women, work-family imbalance is a top reason why they leave the workforce at alarming numbers. The list of factors which make it difficult for women to enter and comfortably remain in tech goes on and on. After reading this book, I am constantly feeling depressed about the state of affairs here — can things really be that bad? There are, of course, things I should do given my own proximity and knowledge of the industry from an academic's viewpoint in STEM, where we have similar gender representation issues. I can at least provide a minimal promise that I will remember the history in this book and ensure that social settings are more comfortable for women.
** The Making of a Manager: What to do When Everyone Looks to You ** (2019) is by Julie Zhuo, who worked at Facebook for 14 years, and quickly rose through the ranks to become a manager at age 25, and eventually held a Vice President (VP) title. This book, rather than focusing on Zhuo's personal career trajectory, is best described as a general guide to managing with some case studies from her time at Facebook (appropriately anonymized, of course). Zhuo advises on the first few months of managing, on managing small versus large teams, the importance of feedback (both to reports and to managers), on hiring great people, and so on. A consistent theme is that the goal of managing is to increase the output of the entire team. I also liked her perspective on how to delegate tasks, because as managers rise up the hierarchy, meetings became the norm rather than the exception, and so the people who do "real work" are those lower in the hierarchy but who have to be trusted by managers. I generally view managing in the context of academia, since I am managed by my PhD advisors, and I manage several undergraduates who work with me on research projects. There is substantial overlap in the academic and industry realms, particularly with delegating tasks, and Zhuo's book — even with its focus on tech — provides advice applicable to a variety of domains. I hope that any future managers I have will be similar in spirit to Zhuo. Now, while reading, I couldn't help but think about how someone like Zhuo would manage someone like Antonio García Martínez, who wrote Chaos Monkeys (discussed earlier) and overlapped with her time at Facebook, since those two seem to be the polar opposites of each other. Whereas Zhuo clearly values empathy, honesty, diversity, support, and so on, Martínez gleefully boasts about cutting corners and having sex, including one case involving a Facebook product manager. The good news is that Martínez only lasted a few years at Facebook, whereas Zhuo was there for 14 years and left on her own accord to start Inspirit. Hopefully Inspirit will grow into something great!
Group 4: China
As usual, I find that I have an insatiable curiosity for learning more about China. Two focus on women-specific issues. (I have another one that's more American-based, near the end of this post, along with the "Brotopia" one mentioned above.)
** Leftover Women: The Resurgence of Gender Inequality in China ** (2014) is named based on the phrase derisively describing single Chinese women above a certain age (usually 25 to 27) who are pressured to marry and have families. It's written by Leta Hong Fincher, an American (bilingual in English and Chinese) who got her PhD in sociology at Tsinghua University. Leftover Women grew out of her dissertation work, which involved interviews with several hundred Chinese, mostly young well-educated women in urban areas. I had a rough sense of what gender inequality might be like, given its worldwide prevalence, but the book was able to effectively describe the issues specific to China. One major theme is housing in big cities, along with a 2011 law passed by the Chinese Supreme Court which (in practice) meant that it became more critical whose name was on the house deed. For married couples who took part in the house-buying spree over the last few decades (as part of China's well-known and massive rural-to-urban migration), usually the house deed used the man's name. This exacerbates gender inequality, as Hong Fincher repeatedly emphasizes that property and home values have soared in recent years, making those more important to consider than the salary one gets from a job. Despite these and other issues in China, Hong-Fincher reports some promising ways that grassroots organizations are attempting to fight these stereotypes for women, despite heavy government censorship and disapproval. I was impressed enough by Hong-Fincher's writing to read her follow-up 2018 book. In addition, I also noticed her Op-Ed for CNN arguing that women are disproportionately better at handling the COVID-19 pandemic.8 Her name has come up repeatedly as I continue my China education.
** Betraying Big Brother: The Feminist Awakening in China ** (2018) is the second book I read from Leta Hong Fincher. Whereas Leftover Women featured the 2011 Chinese Supreme Court interpretation of a housing deed law, this book emphasizes the Feminist Five, young Chinese women who were arrested for protesting sexual harassment. You can find an abbreviated overview with a Dissent article which is a nice summary of Betraying Big Brother. The Feminist Five women were harassed in jail and continually spied upon and followed after their release. (Their release may have been due to international pressure). It was unfortunate to see what these women had to go through, and I reminded myself that I'm lucky to live in a country where women (and men) can perform comparable protests with limited (if any) repercussions. In terms of Chinese laws, the main one relevant to this book is a recent 2016 domestic violence law, the first of its kind to be passed in China. While Fincher praises the passage of this law, she laments that enforcement is questionable and that gender inequality continues to persist. She particularly critiques Xi Jinping and the "hypermasculinity" that he and the Chinese Communist Party promotes. The book ends on an optimistic note on how feminism continues to persist despite heavy government repression. Furthermore, though this book focuses on China, Hong Fincher and the Feminist Five emphasize the need for an international movement of feminism that spans all countries (I agree). As a case in point, Hong Fincher highlights how she and other Chinese women attended the American women's march to protest Trump's election. While I didn't quite learn as much from this book compared to Leftover Women, I still found this to be a valuable item in my reading list about feminism.
** Superpower Showdown: How the Battle Between Trump and Xi Threatens a New Cold War ** (2020) by WSJ reporters Bob Davis and Lingling Wei was fantastic – I had a hard time putting this book down. It's a 450-page, highly readable account of diplomatic relations between the United States and China in recent years. The primary focus is the negotiation behind the scenes that led to the US-China Phase 1 trade deal in January 2020. As reporters, the authors had access to high-ranking officials and were able to get a rough sense of how each "side" viewed each other, not only from the US perspective but also from China's. The latter is unusual, as the Chinese government is less open with its decision-making, so it was nice to see a bit into how Chinese government officials viewed the negotiations. Davis and Wei likely split the duties by Davis reporting from the American perspective, and Wei reporting from the Chinese perspective. (Wei is a naturalized US citizen, and was among those forced to leave China when they expelled journalists in March 2020.) The authors don't editorialize too much, beyond trying to describe why they believed certain negotiations failed via listing the mistakes made on both sides — and there were a lot of failed negotiations. Don't ever say geopolitics is easy. Released in Spring 2020, Superpower Showdown was just able to get information about the COVID-19 pandemic, before it started to spread rapidly in the United States. Unfortunately, COVID-19, rather than uniting the US and China against a common enemy, instead further deteriorated diplomatic relations. Just after finishing the book, I found a closely-related Foreign Affairs essay by Trump's trade representative Robert E. Lighthizer. Consequently, I now have Foreign Affairs on my reading list.
** Blockchain Chicken Farm: And Other Stories of Tech in China's Countryside ** (2020) by Xiaowei Wang, who like me is a PhD student at UC Berkeley (in a different department, in Geography). Xiaowei is an American who has family and friends throughout China, and this book is partially a narrative of Wang's experience visiting different parts of the country. Key themes are visiting rural areas in China, rather than the big cities which get much of the attention (as China is also undergoing a rural-to-urban migration as in America), and the impact of technology towards rural areas. For example, the book mentions how chickens and pigs are heavily monitored with technology to maximize their fitness for human consumption, how police officers are increasingly turning to facial recognition software while still heavily reliant on humans in this process, and the use of Blockchain even though the rural people don't understand the technology (to be fair, it's a tricky concept). Wang cautions us that increased utilization of technology and AI will not be able to resolve every issue facing the country, and come with well-known drawbacks (that I am also aware of given the concern over AI ethics in my field) that will challenge China's leaders, so that they can continue to feed their citizens and maintain political stability. It's a nice, readable book that provides a perspective of the pervasiveness but also the limitations of technology in rural China.
Group 5: Race and Anti-Racism
** Evicted: Poverty and Profit in the American City ** (2016) is incredible. I can't add much more praise to what's already been handed to this Pulitzer Prize-winning book. Evicted is by Matthew Desmond, a professor of sociology at Princeton University. Though published in 2016, the fieldwork took place in 2008 and 2009, when Desmond was a graduate student at the University of Wisconsin and moved into a Milwaukee trailer park where poor whites lived. Desmond spent a few months following and interviewing residents and the landlord. He then repeated the process in the North Side of Milwaukee, where poor blacks lived. The result is an account of what it is like to be poor in America and face chronic eviction.9 One huge problem: these tenants often had to pay 60-80 percent of their government welfare checks in rent. I also learned how having children increases the chances of eviction, how women are more vulnerable to eviction than men, and how race plays a role. The obvious question, of course, is what kind of policy solutions can help improve the status quo. Desmond's main suggestion, posited in the epilogue, is a universal housing voucher, which might reduce the amount spent on homeless shelters. Admittedly, I understand that we need both good policies and better decision-making on the part of these tenants, so it's important to ensure that there are correct incentives for people to "graduate from welfare." Interestingly, Desmond didn't discuss rent control much, despite how common a topic it is nowadays. Another policy area relevant to this book is drug use, since pretty much every tenant here was on drugs. I generally oppose rent control and oppose widespread drug usage, but I also admit that implementing these policies would not fix the immediate problems the tenants face. Whatever your political alignments, if you haven't done so, I strongly recommend you add Evicted to your reading list. The only very minor suggestion I have for this book is to include an easy-to-find list of names and short biographies of the tenants at the start.
** White Fragility: Why It's So Hard for White People to Talk About Racism ** (2018), by Robin DiAngelo, shot up the NYTimes best-seller list earlier this year, in large part due to the racial justice protests happening in the United States. Her coined phrase "white fragility" has almost become a household term. As DiAngelo says in the introduction, she is white and the book is mainly addressed to a white audience. (I am not really the target audience, but I still wanted to read the book.) DiAngelo discusses her experience trying to lead racial training sessions among employees, and how whites often protest or push back against what she says. This is where the term "white fragility" comes from. Most whites she encounters are unwilling to have extensive dialogues that acknowledge their racial privileges, or try to end the discussion with defensive statements such as: "I am not racist, so I'm OK, someone else is the problem, end of story." I found the book to be helpful and thought-provoking, and I learned about several traps that I will avoid when thinking about race. While reading the book, I don't think I personally felt challenged or insulted; I thought it served exactly as DiAngelo intended: to help me build up knowledge and stamina for discussions over racial issues.
** So You Want to Talk About Race ** (2018), by Ijeoma Oluo, attempts to provide guidelines for how we can talk about race. Like many books falling under the anti-racist theme, it's mainly aimed at white people, to help them understand why certain topics or conduct are not appropriate in conversations on race. For example, consider chapters titled "Why can't I say the 'N' word?" and "Why can't I touch your hair?". While some of these seem like common sense to me — I mean, do people actually go around touching Black people's hair, or anyone's body? — I know there are enough people who do this that we need to have the conversation. Oluo also effectively dispels the notion that we can just talk about class instead of race, or that we'll get class out of the way first. I also appreciate her mention of Asians in the chapter on why the model minority myth is harmful. I also see that Oluo wrote in the introduction about how she wished she could have allocated more discussion to Indigenous people. I agree, but no book can contain every topic, so it's not something I would use to detract from her work. Oluo has a follow-up book titled Mediocre: The Dangerous Legacy of White Male America, which I should check out soon.
Me and White Supremacy: Combat Racism, Change the World, and Become a Good Ancestor (2020) by Layla F. Saad. This started as a 28-day Instagram challenge that went viral. It was published in January 2020, and the timing could not have been better, given that just a few months later, America would face enormous racial protests. I read this book right after reading White Fragility, whose author (Robin DiAngelo) wrote the foreword; she says that Layla F. Saad gives us a roadmap for addressing the most common question white people have after an antiracist presentation: "What do I do?" In her introduction, Saad says: "The system of white supremacy was not created by anyone who is alive today. But it is maintained and upheld by everyone who holds white privilege." Saad, an East African and Middle Eastern Black Muslim woman who lives in Qatar and is a British citizen, wants us to tackle this problem so that we leave the world a better place than it is today. Me and White Supremacy is primarily aimed at white people, but it also applies to people of color who hold "white privilege," which would apply to me. There are four parts: (1) the basics, (2) anti-blackness, racial stereotypes, and cultural appropriation, (3) allyship, and (4) power, relations, and commitments. For example, the allyship chapter mentions white apathy, white saviorism (as shown in The Blind Side and others), tokenism, and being "called out" for racism, which Saad says is inevitable if we take part in anti-racism work. Contrary to what I think Saad was expecting of readers, I didn't experience too many conflicting emotions or uncomfortable feelings when reading this book. I don't know if that's a good thing or a bad thing. It may have been because I read this after White Fragility and So You Want to Talk About Race. I will keep this book in mind, particularly the allyship section, now and in the future.
** My Vanishing Country: A Memoir ** (2020) is a memoir by Bakari Sellers, who describes his experience living in South Carolina. The value of the book is providing the perspective of Black rural working-class America, instead of the white working class commonly associated with rural America (as in J.D. Vance's Hillbilly Elegy). I read the memoir quickly and could not put it down. Here are some highlights from Sellers' life. When he was 22, freshly graduated from Morehouse College and in his first year of law school at the University of South Carolina, he was elected to the South Carolina House of Representatives.10 Somehow, he simultaneously served as a representative while also attending law school. His representative salary was only 10,000 USD, which might explain why it's hard for the poor to build a career in state-level politics. He earned attention from Barack Obama, whom Sellers asked to come to South Carolina in return for Sellers' endorsement in the primaries. Eventually, he ran for Lieutenant Governor (as a Democrat), a huge challenge in a conservative state such as South Carolina, and lost. He's now a political commentator and a lawyer. The memoir covers the Charleston massacre in 2015, his disappointment when Trump was elected president (he thought that white women would join forces with non-whites to elect Hillary Clinton), and a personal story where his wife had health complications when giving birth, but survived. Sellers credits her survival to the fact that the doctors and nurses there were Black and knew him personally, and he concludes with a call to help decrease racial inequities in health care, which persist today in maternal mortality rates, and also with lead poisoning in many predominantly Black communities such as Flint, Michigan.
Group 6: Countries
I continue utilizing the "What Everyone Needs to Know" book series. However, the batch I picked this year was probably less informative than others in the series. Still, I'm especially happy to have read the fourth book here, about Burma (not part of "What Everyone Needs to Know"), which I found through reading Foreign Affairs.
Brazil: What Everyone Needs to Know (2016) by Riordan Roett, Professor Emeritus at the Johns Hopkins University's School of Advanced International Studies (SAIS), who specializes in Latin American studies. Brazil is a country I've always wanted to know more about, given its size (in population and land area), its geopolitical situation in a region (Latin America) that I know relatively little about, and the Amazon rain forest. The book begins with the early recorded history of Brazil under Portuguese colonization, followed by the struggle for independence. It also recounts Brazil's difficulties in establishing democracy as opposed to military rule. Finally, it concludes with some thought questions about foreign affairs and Brazil's relations with the US, China, and other countries. This isn't a page-turner, but I think the bigger issue is that so much of what I want to know about Brazil relates to what has happened over the last five years, particularly the increasingly authoritarian nature of Brazil's leadership since then, under President Jair Bolsonaro.
Iran: What Everyone Needs to Know (2016), by the late historian Michael Axworthy, provides a concise overview of Iran's history. I bought it on iBooks and started reading it literally the day before the murder of Qasem Soleimani. Soleimani was widely believed to be next-in-line to succeed Ali Khamenei as the Supreme Leader of Iran; the "Supreme Leader" is the highest office in Iran. If you are interested in a recap of those events, see this NYTimes account on the events that nearly brought war between the US and Iran. The book was published in 2016 so it did not contain that information, and the last question was predictably about the future of Iran after the 2015 Nuclear Deal,11 with Axworthy noting that Iran seems to be pulled in "incompatible directions," one for liberalization and modernity, the other for conservative Islam and criticism of Israel. The book mentions the history of the people who lived in the area that is now Iran. Back then, that was the Persian Empire, and I liked how Axworthy commented on Cyrus and Darius I, since they are the two Persian leaders in the Civilization IV computer game that I used to play. Later, Axworthy mentions the Iran-Iraq war and the Revolution of 1979 which deposed the last Shah (Mohammad Reza Pahlavi) in favor of Ruhollah (Ayatollah) Khomeini. Overall, this book is OK but was boring in some areas, and is too brief. It may be better to read Axworthy's longer (but older) book about Iran.
Russia: What Everyone Needs to Know (2016) is by Timothy J. Colton, a Harvard University Professor of Government and Russian Studies. The focus of this book is Russia, which includes the Soviet Union from 1922 until its dissolution in 1991 into 15 countries, one of which was Russia itself. As usual for "What Everyone Needs to Know" books, it starts with dry early history. The book gets more interesting when it presents the Soviet Union (i.e., USSR) and its main leaders: Joseph Stalin, Nikita Khrushchev, Leonid Brezhnev, and Mikhail Gorbachev. Of those leaders, I support Gorbachev the most, due to glasnost, and oppose Stalin the most, due to the industrial-scale killing on his watch. Then there was Boris Yeltsin and, obviously, Vladimir Putin, who is the subject of much of the last chapter of the book. This book, like the one about North Korea I read last year, ponders who might succeed Vladimir Putin as the de facto leader of Russia. Putin is slated to be in power until at least 2024, and he likely won't hand power to his family, given that he has no sons. Russia faces other problems, such as alcoholism and demographics, with an aging population and a significantly lower average lifespan for males compared to other countries of Russia's wealth. Finally, Russia needs to do a better job of attracting and retaining talent in science and engineering. This is one of the key advantages the United States has. (As I said earlier, we cannot relinquish this advantage.) Final note: Colton uses a lot of advanced vocabulary in this book. I had to frequently pause my reading to refer to a dictionary.
** The Hidden History of Burma: Race, Capitalism, and the Crisis of Democracy in the 21st Century ** (2020) is Thant Myint-U's latest book on Burma (Myanmar)12. Thant Myint-U is now one of my expert sources for Burmese-related topics. He's lived there for many years, and has held American and Burmese citizenship at various points in his life. He is often asked to advise the Burmese government and frequently engages with high-level foreign leaders. His grandfather, U Thant, was the third Secretary General of the United Nations from 1961 to 1971, and I'm embarrassed I did not know that; amusingly, Wikipedia says U Thant was the first Secretary General who retired while on speaking terms with all major powers. The Hidden History of Burma discusses the British colonization, the struggle for independence, and the dynamics of the wildly diverse population (in terms of race and religion). Featured heavily, of course, is Aung San Suu Kyi, the 1991 Nobel Peace Prize13 recipient, and a woman whom I first remember learning about back in high school. She was once viewed as a beacon of democracy and human rights — until, sadly, the last few years. She is now the de facto leader of the government and has overseen one of the most brutal genocides in modern history, against the Rohingya Muslims. Exact numbers are unclear, but it's estimated that hundreds of thousands have either been killed or have fled to neighboring Bangladesh. How did this happen? The summary is that it wasn't so much that Burma (and Aung San Suu Kyi) made leaps and bounds of progress before doing a 180 sometime in 2017. Rather, the West, and other foreigners who wanted to help, visit, and invest in the country, badly miscalculated and misinterpreted the situation in Burma while wanting to view Aung San Suu Kyi as an impossibly impeccable hero. There's a lot more in the book about race, identity, and capitalism, and how these affect Burma's past, present, and future. Amusingly, I've been reading Thant Myint-U's Twitter feed, and he often fakes confusion as to whether his tweets are referring to the US or Burma: A major election? Widening income inequality? Illegal immigrants? Big bad China? Environmental degradation? Social media inspired violence? Who are we talking about here? For another perspective on the book, see this CFR review.
Group 7: Psychology and Psychiatry
** 10% Happier: How I Tamed the Voice in My Head, Reduced Stress Without Losing My Edge, and Found Self-Help That Actually Works–A True Story ** (2014) is a book by Dan Harris which (a) chronicles his experience with meditation and how it can reduce stress, and (b) attempts to present meditation as an option to many readers but without the big "PR problem" that Harris admits plagues meditation. For (a), Harris turned to meditation to reduce the anxiety and stress he was experiencing as a television reporter; he had several panic attacks on air and, for a time, turned to drugs. His news reporting got him involved with religious, spiritual, and "happiness" gurus who turned out to be frauds (Ted Haggard and James Arthur Ray), which led Harris to question the self-help industry. A key turning point in Harris' life was attending a 10-day Buddhist meditation retreat in California led by Joseph Goldstein. He entered the retreat in part due to the efforts of his famous close friend Sam Harris (no relation). After the retreat, he started practicing meditation and even developed his own "10% Happier" app with colleagues. Harris admits that meditation isn't a panacea for everything, which is one of the reasons for the wording "10% happier" in the title. I read many books, so because of sheer quantity, it's rare that I can follow through on a book's advice. I will try my best here. My field of computer science and robotics research is far different from Harris' field, but I also experience some stress in maintaining my edge due to the competitive nature of research, so hopefully I can follow this. Harris says all we need are 5 minutes a day. To start: sit comfortably, feel your breath, and each time you get lost in thought, gently return to the breath and start over.
** Misbehaving: The Making of Behavioral Economics ** (2015) by Nobel Laureate Richard Thaler of the University of Chicago, is a book that relates in many ways to Daniel Kahneman's Thinking, Fast and Slow (describing work in collaboration with Amos Tversky). If you like that book, you will probably like this one, since it covers similar themes, which shouldn't be surprising as Thaler collaborated with Kahneman and Tversky for portions of his career. Misbehaving is Thaler's personal account of his development of behavioral economics, a mix of an autobiography and "research-y" topics. It describes how economics has faced internal conflicts between those who advocate for a purely rational view of agents (referred to as "Econs" in the book) and those who incorporate elements of human psychology into their thinking, which may cause classical economic theory to fail due to irrational behavior by humans. In chapter after chapter, Thaler argues convincingly that human behavior must be considered to understand and properly predict economic behavior.
Option B: Facing Adversity, Building Resilience, and Finding Joy (2016) is co-written by Sheryl Sandberg and Adam Grant, and for clarity is told from the perspective of Ms. Sandberg. She's the well-known Chief Operating Officer of Facebook and the bestselling author of Lean In, which I read a few years ago. This book arose out of the sudden death of her former husband, Dave Goldberg, in 2015, and how she went through the aftermath. Option B acknowledges that, sometimes, people simply cannot have their top option, and must deal with the second best situation, or the third best, and so on. It also relates to Lean In to some extent; that book was criticized for being elitist in nature, and Option B emphasizes that many women may face roadblocks to career success and financial safety, and hence have to consider "second options." Option B contains anecdotes from Sandberg's experience in the years after her husband's death, and integrates other stories (such as the famous Uruguay flight which crashed, leading survivors to resort to cannibalism) and psychological studies to investigate how people can build resilience and overcome such traumatic events. As of mid-2020, it looks like Ms. Sandberg is now engaged again, so while this doesn't negate her pain of losing Dave Goldberg, she shows – both in the book and in person – that one can find joy again after tragedy.
Good Reasons for Bad Feelings: Insights from the Frontier of Evolutionary Psychiatry (2019), by Randolph M. Nesse, a professor at Arizona State University, is about psychiatry. Wikipedia provides a short intro: psychiatry is the medical specialty devoted to the diagnosis, prevention, and treatment of mental disorders. This book specializes in the evolutionary aspect of psychiatry. A key takeaway from the book is that humans did not evolve to have mental illness or disorders. Dr. Nesse has an abbreviation for this mistake: Viewing Diseases As Adaptations (VDAA), which he claims is the most common and serious error in evolutionary medicine. The correct question is, instead, why did natural selection shape traits that make us vulnerable to disease? There are intuitive explanations. For one, any personality trait exhibits itself across a spectrum of extremity. Some anxiety is necessary to help protect against harm, but having too much can be a classic sign of a mental disorder. Also, what was best for our ancestors back then is not necessarily best today, as vividly demonstrated by the surge in obesity in developed countries. Another takeaway, one that I probably should have expected, is that the science of psychiatry has had plenty of controversy. Consider the evolutionary benefits of homosexuality (if any). Dr. Nesse says it's a common question he gets, and he avoids answering because he doesn't think the science is settled. From my non-specialist perspective, this book was a readable introduction to evolutionary psychiatry.
Group 8: Miscellaneous
** The Conscience of a Liberal ** (2007, with an updated foreword in 2009) is a book by the well-known economist and NYTimes columnist Paul Krugman. The title is similar to that of Barry Goldwater's 1960 book, and of course, the 2017 version from former Senator Jeff Flake (which I read). In The Conscience of a Liberal, Krugman describes why he is a liberal, discusses the rise of modern "movement" Conservatism, and argues that a Democratic presidential administration must prioritize universal health care. The book was written in 2007, so he couldn't have known that Obama would win in 2008 and pursue Obamacare, and I know from reading Krugman's columns over the years that he's very pro-Obamacare. Many of Krugman's columns today at the NYTimes reflect the writing in this book. That's not to say the ideas are stale — much of it is due to the slow nature of government, in that it takes us ages to make progress on any issue, such as the still-unrealized universal health care. Krugman consistently argues in the book (as in his columns) for having a public option in addition to a strong private sector, rather than creating true socialized medicine, which is what Britain uses. Regarding Conservatism, Krugman gets a lot right here: he essentially predicts that Republicans can't just get rid of Obamacare due to the huge backlash, just like Eisenhower-type Republicans couldn't get rid of the New Deal. I also think he's right on race, in that Republicans have been able to forge an alliance between the wealthy pro-business, low-tax elite and the white working class, a bond which is even stronger today under Trump. My one qualm is his surprising discounting of abortion as a political issue. It's very strong in unifying the Republican party, but perhaps he'd change that in a modern edition.
** Steve Jobs ** (2011) by acclaimed writer Walter Isaacson is the definitive biography of Steve Jobs. Described as a classic "wartime CEO" by Ben Horowitz in The Hard Thing About Hard Things, Jobs co-founded Apple with Steve Wozniak, but by 1985, Jobs was forced to leave in the wake of internal disagreements. Then, after some time at another startup and at Pixar, Jobs returned to Apple in 1997 when it was on the verge of bankruptcy, and somehow in the 2010s, Apple was on its way to being the most valuable company in the world and the first to hit $1 trillion in market capitalization. While writing the biography, Isaacson had access to Steve Jobs, his family, friends, and enemies. In fact, Isaacson had explicit approval from Jobs, who asked him to write the book on the basis of Isaacson's prior biographies of Benjamin Franklin, Albert Einstein, and others. I am not sure if Jobs ever read a draft of this book, since he passed away from cancer shortly before it was published. The book is a mammoth 550-page volume, but it reads very quickly, and I often found myself wishing I could read more and more – Isaacson has a gift for tracing the life of Jobs, his upsides and downsides, and his interactions with people as part of his CEO experience. There's also a fair amount about the business aspects of Apple that made me better understand how things work. I can see why people might consider it recommended reading for MBAs. I wonder, and I hope, that there are ways to achieve his business success and talents without the downsides: angry outbursts, super-long work hours, a demand for control, and imposing unrealistic expectations (his "reality distortion field"). I would be curious to see how he contrasts with the style of other CEOs.
The Only Investment Guide You'll Ever Need by Andrew Tobias is a book with a bad title but which has reasonably good content. It was first written in 1978, but has been continually updated over the years, and the most recent version which I read was the 2016 edition. As I prepare to move beyond my graduate student days, I should use my higher salary to invest more. Why? With proper investment, the rate of return on the money should be higher than if I let it sit in a savings account accumulating interest. Of course, that depends on investing wisely. The first part of the book has advice broadly applicable to everyone: how to save money in so-called incremental ways that add up over time. While advice such as buying your own coffee instead of going to Starbucks and living slightly below your means sounds boring and obvious, it's important to get these basics out of the way. The second part dives more into investing in stocks, and covers concepts that are more foreign to me. My biggest takeaway is that one should avoid commission fees that add up, and that while it's difficult to predict stocks, in the long run, investing in stocks generally pays off. This book, being a guide, is the kind that's not necessarily meant to be read front-to-back, but one where I should return to every now and then on demand to get an opinion on an investing related topic.
Nasty Women: Feminism, Resistance, and Revolution in Trump's America (2017) is a series of about 20 essays by a diverse set of women, representing different races, religions, disabilities, sexual orientations, jobs, geographic locations, and various other qualities. It was written shortly after Trump's election, and these women unanimously oppose him. It was helpful to understand the experiences of these women, and how they felt threatened by someone who bragged about sexual assault and has some retrograde views on women. There was clear disappointment from these women towards the "53% of white women who voted for Trump," a statistic repeated countless times in Nasty Women. On the issue of race, some of the Black women writers felt conflicted about attending the Women's March, given that the original idea for these marches came from Black women. I agree with the criticism of these writers towards some liberal men, who may have strongly supported Bernie Sanders but had trouble supporting Clinton. For me, it was actually the reverse; I voted for Clinton over Sanders in the primaries. That said, I don't agree with everything. For example, one author criticized the notion of Sarah Palin calling herself a feminist, and said that we need a different definition of feminism that doesn't include someone like Palin. I think women have a wide range of beliefs, and we shouldn't design feminism to leave Conservative women out of the umbrella. Nonetheless, there's a lot of agreement between me and these authors.
The Hot Hand: The Mystery and Science of Streaks (2018) is by WSJ reporter Ben Cohen, who specializes in covering the NBA, NCAA, and other sports. "The hot hand" refers to a streak in anything. Cohen goes over the obvious: Stephen Curry is the best three-point shooter in the history of basketball, and he can get on a hot streak. But is there a scientific basis to this? Is there actually a hot hand, or does Curry just happen to hit his usual rate of shots, except that due to the nature of randomness, sometimes he will just have streaks? Besides shooting, Cohen reviews streaks in areas such as music, plays, academia, business, and Hollywood. From the first few chapters, it seems like most academics don't think there is a hot hand, whereas people who actually perform the tasks (e.g., athletes) might think otherwise. The academics include Amos Tversky and Daniel Kahneman, the two famous Israeli psychologists who revolutionized their field. However, by the time we get to the last chapter of the book, Cohen points out two things that were somehow missed in most earlier discussions of the hot hand. First, basketball shots and similar events are not "independent and identically distributed," and after controlling for the harder shots that people who believe they have "the hot hand" tend to take, they actually overperform relative to expectations. The second is slightly more involved but has to do with a subtle property of sequences of heads and tails that has profound implications for interpreting the hot hand. In fact, you can see a discussion on Andrew Gelman's famous blog. So, is there a hot hand? The book leaves the question open, which I expected, since a vague concept like this probably can't be definitively proved or disproved. Overall, it's a decent book. My main criticism is that some of the anecdotes (e.g., the search for a Swedish man in a Soviet prison and the Vincent van Gogh painting) don't really mesh well with the book's theme.
How to Do Nothing: Resisting the Attention Economy (2019) by artist and writer Jenny Odell is a manifesto about trying to move focus away from the "attention economy" as embodied by Facebook, Twitter, and other social media and websites that rely on click-throughs and advertisements for revenue. She wrote this after the Trump election, since (a) she's a critic of Trump, and (b) Trump's constant use of Twitter and other attention-grabbing comments have turned the country into a constant 24-hour news cycle. Odell cautions against using a "digital detox" as the solution, and reviews the history of several such digital detox or "utopia" experiments that failed to pan out. The book isn't the biggest page-turner but is still thought-provoking. However, I am not sure about her proposed tactics for "how to do nothing," except perhaps to focus on nature more. She supports preserving nature, along with people who protested the development of condos over preserved land, but this would continue to exacerbate the Bay Area's existing housing crisis. I see the logic, but I can't oppose more building. I do agree with reducing the need for attention, and while I use social media and support its usage, I agree there are limits to it.
Inclusify: The Power of Uniqueness and Belonging to Build Innovative Teams (2020) is a recent book by Stefanie K. Johnson, a professor at the University of Colorado Boulder's Leeds School of Business who studies leadership and diversity. Dr. Johnson defines inclusify as "to live and lead in a way that recognizes and celebrates unique and dissenting perspectives while creating a collaborative and open-minded environment where everyone feels they truly belong." She argues it helps increase sales, drives innovation, and reduces turnover, and the book is her attempt at distilling these lessons about improving diversity efforts at companies. She identifies six types of people who might be missing out on the benefits of inclusification: the meritocracy manager, the culture crusader, the team player, the white knight, the shepherd, and the optimist. I will need to keep these groups in mind to make sure I do not fall into these categories. Although I agree with the book's claims, I'm not sure how much I benefited from reading Inclusify, given that I read it after several other books this year that covered similar ground (e.g., many "anti-racist" books discuss these topics). I published this blog post a few months after reading the book, and I confess that I remember less about its contents as compared to other books.
Master of None: How a Jack-of-All-Trades Can Still Reach the Top (2020) is by Clifford Hudson, the former CEO of Sonic Drive-In, a fast food restaurant chain (see this NYTimes profile for context). This is an autobiography in which Hudson pushes back against the notion that to live an accomplished life, one needs to master a particular skill, as popularized by books such as Malcolm Gladwell's Outliers and his "10,000-Hour Rule". Hudson argues that his life has been fulfilling despite never deliberately mastering one skill. The world is constantly changing, so it is necessary to quickly adapt, to say "yes" to opportunities that arise, and to properly delegate tasks to others who know better. I think Hudson himself serves as evidence for not necessarily needing to master one skill, but the book seems well tailored for folks working in business, and I would be curious to see the discussion in an academic context, where the system is built to encourage us to specialize in one field. It's a reasonably good autobiography and a fast read. I would not call it super great or memorable. I may read David Epstein's book Range: Why Generalists Triumph in a Specialized World to follow up on this topic.
Well, that is it for 2020.
Kurzweil predicts that "we will encounter such a non-biological entity" by 2029 and that this will "become routine in the 2030s." OK, let me revisit that in a decade! ↩
As far as I know, "polygenic scores" require taking a bunch of DNA samples and predicting outcomes, while CRISPR can actually do the editing of that DNA to lead to such outcomes. I'd be curious if any biochemists or psychologists could chime in to correct my understanding. ↩
Dana MacKenzie has an interesting story about being denied tenure at Kenyon College (where he taught after leaving Duke, when it was clear he would not get tenure there either). You can find it on his website. There is also a backstory on how he and Judea Pearl got together to write the book. ↩
Personally, I first found out about Hans Rosling through a Berkeley colleague's research on data visualization. ↩
I didn't realize that Martínez knew David Kauchak during their Adchemy days. I briefly collaborated with Kauchak during my undergraduate research. ↩
I somehow did not know about Lex Fridman's AI podcast. If my book reading list feels shallower this year, then I blame his podcast for all those thrilling videos with pioneers of AI and related fields. ↩
To state the obvious, I have never been to one of these parties. Chang says: "the vast majority of people in Silicon Valley have no idea these kinds of sex parties are happening at all. If you're reading this and shaking your head […] you may not be a rich and edgy male founder or investor, or a female tech in her twenties." ↩
I rarely post status updates on my Facebook anymore, but a few days before her Op-Ed, I posted a graphic I created with pictures of leaders along with their country's COVID-19 death count and death as a fraction of population. And, yes, my self-selected countries led by female leaders have done a reasonable job controlling the outbreak. I'm most impressed with Tsai Ing-wen of Taiwan, who had to handle this while (a) being geographically close to China itself, and (b) largely ostracized by the wider international community. For an example of the second point, look at how a WHO official dodged a question about Taiwan and COVID-19. ↩
If you're curious, Desmond has a postscript at the end of the book explaining how he did this research project, including when he felt like he needed to intervene, and how the tenants treated him. It's fascinating, and I wish this section of the book were much longer, but I understand if Desmond did not want to raise too much attention to himself. In addition, there is a lot of data in the footnotes. I read all the footnotes, and recommend reading them even if it comes at the cost of some "reading discontinuity." ↩
When he ran for his state political office, he and a small group of campaigners went door-to-door and contacted people face-to-face. I don't know how this would scale to larger cities or work in the age of COVID-19. Incidentally, there isn't any discussion on COVID-19, but I suspect if Sellers had written the book just a few months later, he would discuss the pandemic's disparate impact on Blacks. ↩
I do not feel like I know enough about the Iran Nuclear Deal to give a qualified statement. I was probably a lukewarm supporter of it, but since the deal no longer appears to be active as of January 2020, I am in favor of a stronger deal (as in, one that can get U.S. congressional approval) if that is at all possible. ↩
The ruling army junta (i.e., a government led by the military) changed the English name of the country from Burma to Myanmar in 1989. ↩
She isn't the only terrible recipient of the Nobel Peace Prize. Reading the list of past recipients sometimes feels like going through one nightmare after the other. ↩
Mechanical Search in Robotics
One reason why I enjoy working on robotics is that many of the problems the research community explores are variants of tasks that we humans do on a daily basis. For example, consider the problem of searching for and retrieving a target object in clutter. We do this all the time. We might have a drawer of kitchen appliances, and may want to pick out a specific pot for cooking. Or, maybe we have a box filled with a variety of facial masks, and we want to pick the one to wear today when venturing outside (something perhaps quite common these days). In the robotics community, researchers I collaborate with have recently formalized this as the mechanical search problem.
In this blog post, I discuss four recent research papers on mechanical search, split up into two parts. The first two focus on core mechanical search topics, and the latter two propose using something called learned occupancy distributions. Collectively, these papers have appeared at ICRA 2019 and IROS 2020 (twice), and one of these is an ICRA 2021 submission.
Mechanical Search and Visuomotor Mechanical Search
The ICRA 2019 paper formalizes mechanical search as the task of retrieving a specific target object from an environment containing a variety of objects within a time limit. They frame the general problem using the Markov Decision Process (MDP) framework, with the usual states, actions, transitions, rewards, and so on. They consider a specific instantiation of the mechanical search MDP as follows:
They consider heaps of 10-20 objects at the start.
The target object to extract is specified by a set of $k$ overhead RGB images.
The observations at each time step (which a policy would consume as input) are RGB-D, where the extra depth component can enable better segmentation.
The methods they use do not use any reward signal.
They enable three action primitives: (a) push, (b) suction, and (c) grasp.
The push action is there so that the robot can rearrange the scene for better suction and grasp actions, which are the primitives that actually enable the robot to retrieve the target object (or distractor objects, for that matter). While more complex action primitives might be useful for mechanical search, this would introduce complexities due to the curse of dimensionality.
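To keep that instantiation straight in my head, here is a minimal configuration sketch in Python. The field names and the particular value of $k$ are my own placeholders rather than anything from the paper; only the structure mirrors the bullets above.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MechanicalSearchInstance:
    """Illustrative summary of the MDP instantiation described above."""
    heap_size_range: Tuple[int, int] = (10, 20)   # objects in the initial heap
    num_target_images: int = 3                    # "k" overhead RGB views of the target (value assumed)
    observation_type: str = "RGB-D"               # depth helps with segmentation
    action_primitives: List[str] = field(
        default_factory=lambda: ["push", "suction", "grasp"])
    uses_reward_signal: bool = False              # the evaluated methods do not use reward
```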
Here's the helpful overview figure from the paper (with the caption) showing their instantiation of mechanical search:
I like these type of figures, and they are standard for papers we write in Ken Goldberg's lab.
The pipeline is split up into a perception stage and a search policy stage. The perception stage first computes a set of object masks from the input RGB-D observation. It then uses a trained Siamese Network to check the "similarity" between any of these masks and the target images. (Remember, in their formulation, we assume $k$ separate images that specify the target, so we can feed all combinations of each target image with each of the computed masks.) If a match to the target is found, the search policy selects one of the three allowed action primitives, choosing the one with the highest "score." How is this value chosen? We can use off-the-shelf Dex-Net policies to compute the probability of success for each action. Please refer to my earlier blog post here about Dex-Net.
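To make the perception stage a bit more concrete, here is a rough sketch of how I picture the Siamese-style matching working; `embed` stands in for the trained encoder, and the cosine similarity measure and threshold are my own assumptions, not the paper's exact design.

```python
import numpy as np

def match_masks_to_targets(mask_crops, target_images, embed, sim_threshold=0.8):
    """Return the index of the mask crop most similar to any of the k target views,
    or None if nothing clears the (assumed) similarity threshold."""
    best_idx, best_sim = None, -1.0
    for i, crop in enumerate(mask_crops):
        z_crop = embed(crop)
        for target in target_images:              # all k target views
            z_targ = embed(target)
            sim = float(np.dot(z_crop, z_targ) /
                        (np.linalg.norm(z_crop) * np.linalg.norm(z_targ) + 1e-8))
            if sim > best_sim:
                best_idx, best_sim = i, sim
    return best_idx if best_sim >= sim_threshold else None
```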
Here are a couple of things that might not be clear upon a first read of the paper:
There's a difference between how action qualities are computed in simulation versus the real world. In simulation, grasp and suction actions both use indexed grasps from a Dex-Net 1.0 policy, which is easy to use as it avoids having to run segmentation. In addition, Dex-Net 1.0 literally contains a dataset of simulated objects plus successful grasps for each object, so we can cycle through those as needed.
In real, however, we don't have easy access to this information. Fortunately, for grasp and suction actions, we have ready-made policies from Dex-Net 2.0 and Dex-Net 3.0, respectively. We could use them in simulation as well, it's just not necessary.
To be clear, this is how to compute the action quality. But there's a hierarchy: we need an action selector that can use the computed object masks (from the perception stage) to decide which object we want to grasp using the lower-level action primitives. This is where their 5 algorithmic policies come into play, which correspond to "Action Selector" in the figure above. They test with random search, prioritizing the target object (with and without pushing), and a largest first variant (again, with and without pushing).
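As a rough illustration of that hierarchy, here is a sketch of a "prioritize the target, else clear the largest object" selector. Everything here is a hypothetical stand-in: `dex_net_grasp` and `dex_net_suction` score candidates the way off-the-shelf Dex-Net policies would, `push_planner` proposes a linear push, and the 0.5 quality threshold is an assumption.

```python
def select_action(masks, target_idx, dex_net_grasp, dex_net_suction, push_planner):
    """Minimal sketch of an algorithmic action selector for mechanical search."""
    if target_idx is not None:
        candidate = masks[target_idx]                  # go straight for the target
    else:
        candidate = max(masks, key=lambda m: m.sum())  # otherwise clear the largest object
    grasp, suction = dex_net_grasp(candidate), dex_net_suction(candidate)
    best = max([grasp, suction], key=lambda a: a.quality)
    if best.quality < 0.5:                             # low confidence: rearrange with a push instead
        return push_planner(candidate)
    return best
```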
The experiments show that, as expected, algorithmic policies that prioritize the target object and the larger objects (if the target is not visible) are better. However, a reader might argue that from looking closely at the figures in the paper, the difference in performance among the 4 algorithmic policies other than the random policy may be minor.
That being said, as a paper that introduces the mechanical search problem, they have a mandate to test the simplest types of policies possible. The conclusion correctly points out that an interesting avenue for future work is to do reinforcement learning. Did they do that?
Yes! This is good news for those of us who like to see research progress, and bad news for those who were trying to beat the authors to it. That's the purpose of their follow-up IROS 2020 paper, Visuomotor Mechanical Search. It fills in the obvious gap left by the ICRA 2019 paper: that performance is limited by algorithmic policies, which are furthermore restricted to linear pushes parameterized by an initial point and a push direction. Properly-trained learning-based policies that can perform continuous pushing strategies should be able to generalize to complex configurations better than algorithmic ones.
Since naively applying Deep RL is very sample inefficient, the paper proposes an approach combining three components:
Demonstrations. It's well-known that demonstrations are helpful in mitigating exploration issues, a topic I have previously explored on this blog.
Asymmetric Information. This is a fancy way of saying that during training, the agent can use information that is not available at test time. This can be done when using simulators (as in my own work, for example) since the simulator includes detailed information such as ground-truth object positions which are not easily accessible from just looking at an image.
Mid-Level Representations. This means providing the policy (i.e., actor) not the raw RGB image, but something "mid-level." Here, "mid-level" means the segmentation mask of the target object, plus camera extrinsics and intrinsics. These are what actually get passed as input to the mechanical search policy, and the logic for this is that the full RGB image would be needlessly complex. It is better to just isolate the target object. Note that the full depth image is passed as input — the mid-level representation just replaces the RGB component.
In the MDP formulation for visuomotor mechanical search, observations are RGBD images and the robot's end-effector pose, actions are relative end-effector changes, and the reward is shaped and hand-tuned to encourage the agent to make the target object visible. While I have some concerns about shaping rewards in general, it seems to have worked for them. While the actor policy takes in the full depth image, it simultaneously consumes the mid-level representation of the RGB observation. In simulation, one can derive the mid-level representation from ground-truth segmentation masks provided by PyBullet. They did not test on physical robots, but they claim that it should be possible to use a trained segmentation model.
Now, what about the teachers? They define three hard-coded teachers that perform pushing actions, and merge the teachers as demonstrators into the "AC-Teach" framework. This is the authors' prior paper that they presented at CoRL 2019. I read the paper in quite some detail, and to summarize, it's a way of performing training that can combine multiple teachers together, each of which may be suboptimal or only cover part of the state space. The teachers use privileged information by not using images but rather using positions of all objects, both the target and the non-target(s).
Then, with all this, the actor $\pi_\theta(s)$ and critic $Q_\phi(s, a)$ are updated using standard DDPG-style losses. Here is Figure 2 from the visuomotor mechanical search paper, which summarizes the previous points:
Remember that the policy executes these actions continuously, without retracting the arm after each discrete push, as done in the method from the ICRA 2019 paper.
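For readers who want to see what "standard DDPG-style losses" look like in code, here is a minimal PyTorch sketch. The AC-Teach machinery for mixing teacher and learner actions is omitted, and the tensor layout of `batch` is assumed.

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, target_actor, target_critic, batch,
                actor_opt, critic_opt, gamma=0.99):
    """One DDPG-style update step; `batch` holds tensors (s, a, r, s2, done)."""
    s, a, r, s2, done = batch
    with torch.no_grad():
        q_target = r + gamma * (1.0 - done) * target_critic(s2, target_actor(s2))
    critic_loss = F.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()   # ascend the critic's value estimate
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```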
They conduct all experiments in PyBullet simulation, and extensively test by ablating on various components. The experiments focus on either a single-heap or a dual-heap set of objects, which additionally tests if the policy can learn to ignore the "distractor" heap (i.e., the one without the target object in it) in the latter setting. The major future work plan is to address failure cases. I would also add that the authors could consider applying this on a physical robot.
These two papers give a nice overview of two flavors of mechanical search. The next two papers also relate to mechanical search, and utilize something known as learned occupancy distributions. Let's dive in to see what that means.
X-RAY and LAX-RAY
In an IROS 2020 paper, Danielczuk and collaborators introduce the idea of X-RAY for mechanical search of occluded objects. To be clear: there was already occlusion present in the prior works, but this work explicitly considers it. X-RAY stands for maXimize Reduction in support Area of occupancY distribution. The key idea is to use X-RAY to estimate "occupancy distributions," a fancy way of labeling each bounding box in an image with the likelihood that it contains the target object.
As with the prior works, there is an MDP formulation, but there are a few other important definitions:
The modal segmentation mask: regions of pixels in an image corresponding to a given target object which are visible.
The amodal segmentation mask: regions of pixels in an image corresponding to a given target object which are either visible or invisible. Thus, the amodal segmentation mask must contain the modal segmentation mask, as it has both the visible component, plus any invisible (occluded) parts.
Finally, the occupancy distribution $\rho \in \mathcal{P}$: the unnormalized distribution describing the likelihood that a given pixel in the observation image contains some part of the target object's amodal segmentation mask.
This enables them to utilize the following reward function to replace a sparse reward:
\[\tilde{R}(\mathbf{y}_k, \mathbf{y}_{k+1}) = |{\rm supp}(f_\rho(\mathbf{y}_{k}))| - |{\rm supp}(f_\rho(\mathbf{y}_{k+1}))|\]
where \(f_\rho\) is a function that takes in an observation \(\mathbf{y}_{k}\) (following the paper's notation) and produces the occupancy distribution \(\rho_k\) for a given bounding box, and where \(|{\rm supp}(\rho)|\) for a given occupancy distribution \(\rho\) (dropping the $k$ subscript for now) is the number of nonzero pixels in \(\rho\), i.e., the size of its support.
Why is this logical? By reducing the support of the occupancy distribution, one decreases the number of pixels that MIGHT occlude the target object, hence reducing uncertainty. Said another way, increasing this reward gives us greater certainty as to where the target object is located, which is an obvious prerequisite for mechanical search.
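In code, the reward is just the drop in the number of "nonzero" occupancy pixels between consecutive observations. Here is a minimal sketch under that reading, assuming NumPy-style arrays; the `eps` cutoff for treating a pixel as nonzero is my own assumption.

```python
def support_reduction_reward(rho_k, rho_k1, eps=1e-6):
    """Reduction in the support of the occupancy distribution between steps.

    rho_k, rho_k1: 2D NumPy arrays of (unnormalized) occupancy values for
    consecutive observations.
    """
    support_before = int((rho_k > eps).sum())   # pixels that might hide the target now
    support_after = int((rho_k1 > eps).sum())   # pixels that might hide it after the action
    return support_before - support_after       # positive when uncertainty shrinks
```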
The paper then describes (a) how to estimate $f_\rho$ in a data-driven manner, and then (b) how to use this learned $f_\rho$, along with $\tilde{R}$, to define a greedy policy.
There's an elaborate pipeline for generating the training data. Originally I was confused about their procedure for translating the target object. But after reading carefully and watching the supplementary video, I understand; it involves simulating a translation and rotation while keeping objects fixed. Basically, they pretend they can repeatedly insert the target object at specific locations underneath a pile of distractor objects, and if it results in the same occupancy distribution, then they can include such images in the data to expand the occupancy distribution to its maximum possible area (by aggregating all the amodal maps), meaning that estimates of the occupancy distribution are a lower bound on the area.
As expected, they train using a Fully Convolutional Network (FCN) with a pixel-wise MSE loss. You can think of this loss as taking the target image and the image produced from the FCN, unrolling them into long vectors \(\mathbf{x}_{\rm targ}\) and \(\mathbf{x}_{\rm pred}\), then computing
\[\|\mathbf{x}_{\rm targ} - \mathbf{x}_{\rm pred}\|_2^2\]
to find the loss. This glosses over a tiny detail: the network actually predicts occupancy distributions for different aspect ratios (one per channel in the output image), and only the channel whose aspect ratio is most similar to the input's gets considered for the loss. Not a huge deal to know if you're skimming the paper: it probably suffices to realize that it's the standard MSE.
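Here is a small PyTorch-style sketch of that loss, assuming the network output has one channel per aspect ratio and that the matching channel index has already been computed elsewhere (how that index is chosen is glossed over, as above).

```python
import torch
import torch.nn.functional as F

def occupancy_loss(pred, target, channel_idx):
    """Pixel-wise MSE on the channel matching the target's aspect ratio.

    pred: (B, C, H, W) predicted occupancy maps, one channel per aspect ratio.
    target: (B, H, W) ground-truth occupancy maps.
    channel_idx: (B,) long tensor selecting the relevant channel per example.
    """
    pred_sel = pred[torch.arange(pred.shape[0]), channel_idx]   # (B, H, W)
    return F.mse_loss(pred_sel, target)
```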
Here is the paper's key overview figure:
They propose to plan a grasp with the most amount of corresponding occupancy area. Why? A pick and place at that spot will greatly reduce the subsequent occupancy area of the target object.
It is nice that these FCNs can reasonably predict occupancy distributions for target objects unseen in training, and that it can generalize to the physical world without actually training on physical images. Training on real images would be harder since depth images would likely be noisier.
The two future works they propose are: relaxing the assumption that the target object is flat, and (again) doing reinforcement learning. This paper was concurrent with the visuomotor mechanical search paper, but that paper did not technically employ X-RAY, so I suppose there is room to merge the two.
Next, what about the follow-up work of LAX-RAY? This addresses an obvious extension in that instead of top-down grasping, one can do lateral grasping, where the robot arm moves horizontally instead of vertically. This enables application to shelves. Here's the figure summarizing the idea:
We can see that a Fetch robot has to reveal something deep in the shelf by pushing objects in front of it to either the left or the right. The robot has a long, thin board attached to its gripper; it's not the usual Fetch gripper. The task ends as soon as the target object, known beforehand, is revealed.
As with standard X-RAY, the method involves using a Fully Convolutional Network (FCN) to map from an image of the shelf to a distribution of where the target object could be. (Note: the first version of the arXiv paper says "fully connected" but I confirmed with the authors that it is indeed an FCN, which is a different term.) This produces a 2D image. Unlike X-RAY, LAX-RAY maps this 2D occupancy distribution to a 1D occupancy distribution. The paper visualizes these 1D occupancy distributions by overlaying them on depth images. The math is fairly straightforward on how to get a 1D distribution: just consider every "vertical bar" in the image as one point in the distribution, then sum over the values from the 2D occupancy distribution. That's how I visualize it.
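Under that reading, the 2D-to-1D step is essentially a one-line reduction; here is a sketch assuming NumPy-style arrays (the axis convention is my assumption).

```python
def occupancy_2d_to_1d(rho_2d):
    """Collapse a (H, W) occupancy map into a 1D distribution over image columns
    by summing each vertical bar of pixels, as described above."""
    return rho_2d.sum(axis=0)   # shape (W,)
```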
The paper proposes three policies for lateral-access mechanical search:
Distribution Area Reduction (DAR): ranks actions based on the overlap between the object mask and the predicted occupancy distribution, and picks the action that reduces the sum the most (see the sketch after this list). This policy is the most similar, in theory, to the X-RAY policy: essentially we're trying to "remove" the occupancy distribution to reduce areas where the object might be occluded.
Distribution Entropy Reduction over n Steps (DER-n): this tries to predict what the 1D occupancy distribution will look like over $n$ steps, and then picks the action whose predicted distribution has the lowest entropy. Why does this make sense? Because lower entropy means the distribution is less spread out and more concentrated toward one area, telling us where the occluded item is located. The authors also introduce this so that they can test multi-step planning.
Uniform: this tests a DAR ablation by removing the predicted occupancy distribution.
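To make the DAR ranking concrete (as promised above), here is a sketch of how I picture it working. The helper `predicted_mask_1d` is hypothetical: it stands in for whatever predicts which shelf columns a candidate push would clear.

```python
def dar_policy(rho_1d, candidate_actions, predicted_mask_1d):
    """Pick the push that removes the most occupancy mass from the 1D distribution.

    rho_1d: 1D NumPy array, the current occupancy distribution over shelf columns.
    candidate_actions: iterable of candidate push actions.
    predicted_mask_1d: hypothetical function mapping an action to a boolean
        array over columns that the action is predicted to clear.
    """
    def reduction(action):
        cleared = predicted_mask_1d(action)   # boolean array over shelf columns
        return rho_1d[cleared].sum()          # occupancy mass the action would remove
    return max(candidate_actions, key=reduction)
```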
They also introduce a First-Order Shelf Simulator (FOSS), a simulator they use for fast prototyping, before experimenting with the physical Fetch robot.
What are some of my thoughts on how they can build upon this work? Here are a few:
They can focus on grasping the object. Right now the objective is only to reveal the object, but there's no actual robot grasp execution. Suctioning in a lateral direction might require more sensitive controls to avoid pushing the object too much, as compared to top-down where gravity stops the target object from moving away.
The setup might be a bit constrained in that it assumes stuff can be pushed around. For example consider a vase with water and flowers. Those might be hard to push, and are at risk of toppling.
To summarize, here is how I view these four papers grouped together:
Paper 1: introduces and formalizes mechanical search, and presents a study of 5 algorithmic (i.e., not learned) policies.
Paper 2: extends mechanical search to use AC-Teach for training a learned policy that can execute actions continually.
Paper 3: combines mechanical search with "occupancy distributions," with the intuition being that we want the robot to check the most likely places where an occluded object could be located.
Paper 4: extends the prior paper to handle lateral access scenarios, as in shelves.
What are some other thoughts and takeaways I have?
It would be exciting to see this capability mounted onto a mobile robot, like the HSR that we used for our bed-making paper. (We also used a Fetch, and I know the LAX-RAY paper uses a Fetch, but the Fetch's base stayed put during LAX-RAY experiments.) Obviously, this would not be novel from a research perspective, so something new would have to be added, such as adjustments to the method to handle imprecision due to mobility.
It would be nice to see if we can make these apply for deformable bags, i.e., replace the bins with bags, and see what happens. I showed that we can at least simulate bagging items in PyBullet in some concurrent work.
There's also a fifth mechanical search paper, on hierarchical mechanical search, also under review for ICRA 2021. I only had time to skim it briefly and did not realize it existed until after I had drafted the majority of this blog post. I have added it in the reference list below.
Michael Danielczuk, Andrey Kurenkov, Ashwin Balakrishna, Matthew Matl, David Wang, Roberto Martín-Martín, Animesh Garg, Silvio Savarese, Ken Goldberg. Mechanical Search: Multi-Step Retrieval of a Target Object Occluded by Clutter, ICRA 2019.
Andrey Kurenkov, Joseph Taglic, Rohun Kulkarni, Marcus Dominguez-Kuhne, Animesh Garg, Roberto Martín-Martín, Silvio Savarese. Visuomotor Mechanical Search: Learning to Retrieve Target Objects in Clutter, IROS 2020.
Michael Danielczuk, Anelia Angelova, Vincent Vanhoucke, Ken Goldberg. X-Ray: Mechanical Search for an Occluded Object by Minimizing Support of Learned Occupancy Distributions, IROS 2020.
Huang Huang, Marcus Dominguez-Kuhne, Jeffrey Ichnowski, Vishal Satish, Michael Danielczuk, Kate Sanders, Andrew Lee, Anelia Angelova, Vincent Vanhoucke, Ken Goldberg. Mechanical Search on Shelves using Lateral Access X-RAY, arXiv 2020.
Andrey Kurenkov, Ajay Mandlekar, Roberto Martin-Martin, Silvio Savarese, Animesh Garg. AC-Teach: A Bayesian Actor-Critic Method for Policy Learning with an Ensemble of Suboptimal Teachers, CoRL 2019.
Andrey Kurenkov, Roberto Martín-Martín, Jeff Ichnowski, Ken Goldberg, Silvio Savarese. Semantic and Geometric Modeling with Neural Message Passing in 3D Scene Graphs for Hierarchical Mechanical Search, arXiv 2020.
Water splitting into hydrogen and oxygen by non-traditional redox inactive zinc selenolate electrocatalyst
Aditya Upadhyay, Saurav KV, Manish Kumar, Evelin Varghese, Ananda Hodage, et al.
The development of alternative energy sources is the utmost priority of the developing society. Unlike many prior homogeneous electrocatalysts that rely on a change in the oxidation state of the metal center and/or an electrochemically active ligand, the synthesized novel bimetallic zinc selenolate complex, consisting of a redox-inactive zinc metal ion and a catalytically inactive ligand, catalyzes electrochemical oxygen evolution from water with a rate constant of 7.28 s-1 at an onset potential of 1.028 V vs. NHE. On the other hand, the hydrogen evolution reaction proceeds with an observed rate constant of 47.32 s-1 and an onset potential of -0.256 V vs. NHE. DFT computations and control experiments suggest that the redox chemistry at the selenium center, the Lewis acidity, and the cooperative effect of the two zinc atoms facilitate the electrochemical oxidation and reduction of water into oxygen and hydrogen, respectively.
water splitting
non-traditional redox inactive zinc selenolate electrocatalyst
With the exponential increase in the consumption of current energy sources across the world, the biggest challenge that lies ahead is the development of alternative energy sources that are competitive with the currently available energy resources.1 Over the past decades, substantial progress has been made in electrochemical energy production, mainly involving water, hydrogen, and oxygen, wherein water is oxidized at the anode, known as the oxygen evolution reaction (OER), and the reduction of protons to hydrogen, termed the hydrogen evolution reaction (HER), occurs at the cathode.2-4 Generally, heavy transition metal (TM) catalytic systems augment OER and HER.5-12 In these catalysts, the metal ion performs the electron transfer and also interacts with the substrate during bond-forming and bond-breaking events.13-14 Consequently, the metal center is associated with the formation of metal hydride and metal oxide intermediates in HER and OER, respectively. Moreover, complexes of earth-abundant 3d transition metals such as nickel, copper, and cobalt (Scheme 1) have been prepared with an appropriate choice of organosulfur and organoselenium ligands, having selenium or sulfur at the active site to interact with the proton and to assist the formation of a metal-hydride bond by a proton-shuttling process, which is the crucial step in the reduction of protons in the hydrogen evolution reaction.15-20 Besides the organotransition metal selenolate electrocatalysts, materials derived from selenium and transition metals (Fe, Co, Ni, Cu) also provide an alternative for the electrocatalytic water oxidation and water reduction reactions (OER and HER).21
Further, for the oxygen evolution reaction (OER) from water, redox-active ligands have been employed to stabilize the high oxidation state of the central 3d transition metal in the catalytic intermediates, which enhances the reactivity of the oxidized metal oxide intermediates.22-27 Thus, the reactivity of metal-hydride and metal oxide bonds is crucial for optimum catalytic efficiency. Also, the traditional metal hydrides and metal oxides require open coordination sites and must be able to accommodate multiple two- and four-electron redox processes.28-29
In this context, it is desirable to explore new pathways in which metal hydrides are not necessarily involved in the catalytic cycle. Further, economic and sustainable alternatives are highly desirable to ease the dependency on transition metals (TMs, vide infra). In 2016, Grapperhaus and co-workers reported the electrocatalytic proton reduction of acetic acid and oxidation of hydrogen gas by a redox-active thiosemicarbazone ligand and its zinc complex, avoiding the traditional metal hydride approach (Scheme 1).30-31 Nonetheless, no zinc complex that could electrocatalyze water splitting has been reported. Sun et al. unsuccessfully attempted electrocatalytic water reduction using a zinc(II) pentapyridine complex; however, the cobalt(II) complex of the same ligand was able to reduce water electrocatalytically.32
Over the past decade, our group has been active in the synthesis and catalytic activity of organoselenium compounds.33-35 Recently, our group reported a diorgano diselenide derived from the o-aminodiselenide ligand, which could activate aerial oxygen towards the oxidation of organothiols.36-37 Inspired by nature38 and by the non-transition-metal electrocatalysts zinc thiosemicarbazone and aluminium bis(aminopyridine) (Scheme 1) for hydrogen evolution reactions (HER) from water,30-31,39-40 we report herein the synthesis and structural characterization of the novel bimetallic zinc selenolate electrocatalyst 1, which catalyzes the water-splitting reaction through ligand-assisted pathways in both directions, oxygen evolution and hydrogen evolution, without added acid or base. Mechanistic insights into the electrocatalytic activity of the bimetallic zinc selenolate complex have also been gained by synthesizing a mercury selenolate catalyst.
Electrochemical Studies
The cyclic voltammogram (CV) of catalyst 1 at 1 mM in propylene carbonate, obtained using a glassy carbon electrode, displayed a pronounced catalytic wave. Propylene carbonate was chosen as the solvent for the electrocatalysis because it has a wide potential window, with an oxidative limit of >2.0 V (vs. NHE), and weak coordinating ability in comparison with water.7,50-51 The onset of the catalytic wave was observed at 1.13 V versus the normal hydrogen electrode (NHE; the potentials presented in this study are referenced to it). The cyclic voltammogram of catalyst 1 revealed a one-electron oxidation wave at 1.14 V vs. NHE, corresponding to the oxidation of Se-2 to Se-1,52 while a weak reverse peak was also observed (Figure 2A). A solution of bimetallic zinc catalyst 1 (1 mM) in propylene carbonate containing 1% water shows an increase in anodic current of 19 µA in comparison with the water-free solution (Figure 2A).
Further, the anodic current increases up to 31 µA with increasing water concentration in the electrochemical cell, which is indicative of an electrocatalytic OER process. The catalytic current remained unaltered beyond 4% water, suggesting saturation of the OER, with a significant overall increase in the anodic current (ΔI = 16.42 µA) and kobs = 7.28 s-1 (Figure S4, SI). However, the first oxidation wave remains unaltered during the electrocatalysis, implying the stability of the ligand during water oxidation by catalyst 1. Further, a decrease in the catalytic current was observed with an increase in the scan rate (Figure S5, SI), which indicates that the current is associated with the catalytic process.
The addition of water to a solution (1 mM) of diselenide 3, which is the ligand of zinc selenolate catalyst 1, increases the current by only ΔI = 0.12 µA (Figure S28, SI). Moreover, further addition of water does not enhance the current, which implies the inability of diselenide ligand 3 to catalyze water oxidation. Similarly, a ZnCl2 solution (1 mM) does not catalyze water oxidation, as no change in the anodic current was observed (Figure S29, SI); instead, precipitation was observed in the electrochemical cell during the first cycle of electrocatalysis.
Next, we sought to study the hydrogen evolution reaction (HER) from water in propylene carbonate solution using a glassy carbon (GC) working electrode. For this, the electrochemical experiment was performed in the presence of 1 mM of catalyst 1 and 0.25% of water under cathodic (negative) potential. A successive increase of water in the cell results in an increase in the cathodic current (Figure 2C). The maximum cathodic current of 12 µA (ΔI = 9 µA) was reached at 1.5% water concentration. An onset potential of -0.256 V vs. NHE was observed, along with a rate constant (kobs) of 47.32 s-1 (Figure S10, SI). Notably, the hydrogen evolution reaction (HER) from water in the presence of catalyst 1 under acidic conditions (using aqueous acetic acid and strong trifluoroacetic acid, TFA) led to low cathodic currents of 14.8 µA (ΔI = 5.1 µA) and 3.2 µA (ΔI = 0.8 µA), respectively (Figures S14 and S16, SI), contrary to earlier reported catalysts, for which the addition of an acid increases the cathodic current. It seems that the hydrogen evolution reaction (HER) catalyzed by zinc selenolate catalyst 1 proceeds preferentially by deprotonation of water rather than of the acid, and the significantly lower rate constant in strong trifluoroacetic acid could be due to the poor stability of catalyst 1 in TFA.
To gain deeper insight into the catalytic activity, the kinetics of OER and HER were studied. A plot of icat/ip vs. [H2O]1/2 was found to be linear, indicating bimolecular first-order reaction kinetics (Figures S4 and S10, SI). Similarly, a linear plot of icat vs. [catalyst 1] was observed upon varying the concentration of zinc selenolate catalyst 1, which suggests first-order reaction kinetics with respect to catalyst 1 (Figures 2B and 2D).
The stability of zinc selenolate catalyst 1 under oxygen and hydrogen evolution reaction conditions was confirmed by constant potential electrolysis (CPE) at applied potentials of 1.34 V (vs. NHE, for OER) and -0.26 V (vs. NHE, for HER) for 2 h at the GC electrode surface, during which no significant change in current was observed (Figures 3A and 3B). Further, the stability of catalyst 1 under electrocatalysis was also confirmed by Energy Dispersive X-Ray Spectroscopy (EDXS), in which no decomposition of catalyst 1 was detected, as no residue of Zn or zinc oxide/zinc selenide was observed in the spectra (Figure 3C, EDXS, blue part); a scanning electron microscopy (SEM) study of the Pt electrode also showed no change in surface morphology (Figure 3C). A UV-Visible study was also performed on the solution from the electrolysis cell after 2 h of bulk electrolysis and before the electrolysis. The characteristic absorbance of zinc selenolate catalyst 1 at 380 nm was nearly identical (red and blue lines in Figure 3D) to the one before electrolysis (black line), confirming that the structure of catalyst 1 remains unchanged after the electrocatalysis. These results demonstrate that the bimetallic zinc selenolate complex 1 serves as a robust catalyst for water splitting in a homogeneous system. The oxygen evolved from water during electrocatalysis was also quantified by CPE, revealing a Faradaic efficiency of 79% for oxygen evolution (Figure S22, SI).
The heavier analog, mercury selenolate 4, also catalyzed the oxygen evolution reaction (Figure S23, SI), albeit with a lower current (14 µA) and rate constant (0.038 s-1) (Figure S24, SI) than the bimetallic zinc selenolate catalyst 1, presumably attributable to its monometallic nature. Further, mercury selenolate 4 failed to catalyze the hydrogen evolution reaction (HER) from water (Figure S27). Mercury selenolate 4 was found to be unstable under negative potential, as a deposit was observed on the surface of the working electrode, presumably of reduced mercury, which could be due to the more favorable standard reduction potential of the mercury ion to mercury (0.74 V vs. SHE) compared with that of the hydrogen ion to H2.
Mechanistic Studies
Mechanistic insights into the OER and HER from water catalyzed by bimetallic zinc selenolate 1 were gained from a theoretical assessment of the likely reaction pathways, supported by mass analysis and control experiments. The free energies of the intermediates along the possible HER and OER pathways, depicted in Scheme 3, were calculated at the DFT/B3LYP/def2-TZVP level of theory.
Both reactions proceed via adsorption of two hydrogen-bonded water molecules, with one coordinating to Zn(a) and the other forming a hydrogen bond to the bridging oxygen, yielding 1a (ΔG = 54.91 kcal mol-1), the formation of which was also confirmed by mass spectrometry (Figure S1, SI). In OER, the diaqua species 1a undergoes proton-coupled electron transfer (PCET) to form 1b (confirmed by mass spectrometry, Figure S2, SI) with Zn(a)-OH and Zn(b)-OH2 centers. Spin-density and NBO analysis (Figure S30, SI) indicates that the electron is lost primarily from Se(a), consistent with the Se-2/Se-1 oxidation peak seen in the CV. A second PCET, accompanied by intramolecular rearrangements, leads to selenenic acid 1c. Here selenium plays a crucial role in stabilizing the -OH bridging it to Zn(a), whereas the second -OH migrates to bridge the two Zn centers and the bridging µ-phenolic oxygen becomes terminal. The next two successive PCET steps provide the selenoxide species 1d and the intramolecular ZnO---H(N) hydrogen-bonded intermediate 1e, respectively. The poor stability of the selenoxide (Se=O) bond, attributed to the weaker π-overlap of selenium with oxygen,53 and the better leaving-group tendency of selenium in the ligand37,53 allow 1e to undergo an intramolecular rearrangement to 1f, which contains a divalent selenium center and a peroxo linkage (rO-O = 1.48 Å) between the Zn centers and a phenolic ring. Subsequently, the better ligation ability of water than of oxygen to zinc would lead to oxygen evolution from 1f with the concomitant release of zinc selenolate catalyst 1a.
The HER mechanism, also initiated by the formation of 1a, proceeds with the abstraction of a proton from water by Se(a) to form 1g under the applied negative potential. Subsequently, the H atoms from the -Se(a)H and -NH groups combine and evolve as H2, yielding the selone 1h.37 An added electron then reduces 1h to the anion radical intermediate 1i, in which the electronic charge on Se (0.24 e-) and the radical on N (0.33 e-) are in conjugation through the phenyl ring (0.39 e-), as confirmed using spin density (see inset of Scheme 3) and natural bond order (NBO) analysis. Next, the addition of a water molecule and removal of an OH- anion effectively add a proton to the system, resulting in the radical intermediate 1m stabilized by conjugation with the phenyl ring. Subsequently, the sequential addition of an electron and a proton regenerates the water-adsorbed state 1a (Scheme S3, SI).
In summary, a novel bimetallic zinc selenolate has been synthesized and structurally characterized. The bimetallic zinc selenolate electrocatalyzes both OER and HER from water without added acid or base and could complement TM catalysts and redox-active ligands. The synthesized complex catalyzes water splitting via a ligand-assisted pathway, which bypasses the formation of metal hydrides during catalysis. The ligand and ZnCl2 alone are not able to electrocatalyze water splitting; only their combination shows catalytic activity, in which the Lewis acidity of Zn(II) plays a vital role, as it binds a water molecule, helps remove a proton from the ligated water, and tunes the redox potential of the catalyst. From the mechanism, it is clear that the presence of a selenium center is crucial for the electrocatalysis of water. The reaction mainly occurs at the selenium center for the four-electron and two-electron transfers in the oxygen evolution reaction (OER) and hydrogen evolution reaction (HER), respectively, while the Lewis acidic Zn(II) center brings the water molecule near the active selenium center. Furthermore, the oxygen and nitrogen heteroatoms present in 1 are not only crucial for the isolation of the bimetallic zinc complex but also facilitate electron transfer in OER and HER: the ortho-amino group allows the ligand to serve as a two-electron donor and acceptor, while selenium remains a weak electrophile and avoids forming an undesirably stable selenium-oxygen bond. It is also evident from our DFT analysis that selenium is the crucial active site of the bimetallic zinc selenolate electrocatalyst for both OER and HER. Further progress is being made to enhance the reactivity of the ligand in bimetallic selenolate complexes to synergistically activate small molecules, as presented here.
Representative Procedure. To a stirred solution of Schiff base diselenide 2 (248 mg, 0.45 mmol, 1 equiv.) in ethanol, we added sodium borohydride (76 mg, 2.0 mmol, 4 equiv.) to generate the selenol in situ and stirred the solution for 4 h at room temperature. We then added zinc chloride (122 mg, 0.9 mmol, 2 equiv.) and stirred the solution for 2 h. After that, the solvent was removed on a rotary evaporator, and the solid residue was washed several times with aqueous sodium bicarbonate solution to afford the novel light-yellow bimetallic zinc selenolate complex 1 in 75% yield (230 mg). Crystallization from a DMSO/water (2:1) mixture afforded yellow crystals.
Electrochemistry. A potentiostat (SP-240, Bio-logic Instrument) was used for the electrochemical measurements. The three-electrode electrochemical cell consisted of a glassy carbon working electrode (3 mm diameter), a nonaqueous Ag/AgNO3 (10 mM AgNO3) reference electrode, and a platinum wire counter electrode.
The electrochemical potential was converted relative to the normal hydrogen electrode (NHE; all potentials reported in this work are referenced to the NHE) following a literature protocol. Currents and peak potentials for the catalytic waves were compared without the addition of H2O or TFA (ip) and with the addition of H2O or TFA (icat). Current ratios, icat/ip, were plotted vs. [H2O]1/2 and [TFA]1/2 to determine the first-order rate constants (k) using Eq. 1 for OER and Eq. 2 for HER.
$$\frac{i_{cat}}{i_{p}}=\frac{\left(RT\right)^{1/2}}{0.446\left(nF\nu\right)^{1/2}}\left(k_{cat}\right)^{1/2}$$
$$\frac{i_{cat}}{i_{p}}=\frac{\left(RT\right)^{1/2}}{0.446\left(nF\nu\right)^{1/2}}\,k^{1/2}\left[\mathrm{H_{2}O}\right]^{1/2}$$
Here, R, T, n, F, and ν are the universal gas constant, temperature, number of electrons transferred, Faraday constant, and scan rate, respectively.
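For reference, the observed rate constant follows from the slope of the icat/ip versus [H2O]1/2 plot by a simple rearrangement of Eq. 2; this is only an algebraic sketch of that step, not an additional expression from the original protocol:
$$\frac{i_{cat}}{i_{p}}=\underbrace{\frac{\left(RT\right)^{1/2}}{0.446\left(nF\nu\right)^{1/2}}\,k^{1/2}}_{\text{slope}}\left[\mathrm{H_{2}O}\right]^{1/2}\quad\Rightarrow\quad k=\frac{0.446^{2}\,nF\nu}{RT}\,\left(\text{slope}\right)^{2}$$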
The chronoamperometry experiments were performed in a stirred electrolyte solution in order to keep the solution free of in situ generated oxygen bubbles. Potentials of 1.34 V vs. NHE (for OER) and -0.26 V vs. NHE (for HER) were chosen for the chronoamperometry experiments.
Computational Details. All the electronic structure calculations presented in this work were performed using density functional theory implemented in the TURBOMOLE 7.4 electronic structure package. Optimization of molecular geometry in the gas phase was carried out using B3LYP hybrid functional. The electronic configuration of the atoms was described with def2-TZVP basis set. For all the calculations, resolution of identity (RI) approximation with corresponding auxiliary basis set was used to speed up the calculations. The stability of the optimized geometries was confirmed by the absence of imaginary vibrational frequencies. Thermal corrections to Gibbs free energy, including zero-point energy, were obtained from vibrational frequency analysis at the same level of theory at 298.15 K and 1 atm within ideal-gas rigid-rotor harmonic-oscillator approximations.
Data availability. The authors declare that the data supporting the findings of this study are available within the paper and the Supplementary Information, as well as from the authors upon request.
S.K. and A.U. designed the research. A.U., S.K.V. and E.V. performed the CV study, which was analyzed by A.P.; A.H. and A.U. developed the methodology; M.K. and V.S. performed the theoretical studies; S.K., A.U., A.P., M.K. and V.S. wrote the manuscript.
SK acknowledges DST-SERB (CRG/2019/000017), New Delhi, and IISER Bhopal for financial support. AUY and SKV (DST INSPIRE) acknowledge IISER Bhopal for an Institute fellowship. We thank Professor R. J. Butcher (Howard University, Washington) for resolving the crystal structure of 1 (CCDC No. 1949548). We sincerely thank the Department of Chemistry at IISER Bhopal for providing High-Performance Computing, Single Crystal XRD, and Scanning Electron Microscopy facilities.
Seh, W., Kibsgaard, J., Dickens, C. F., Chorkendorff, I., Nørskov J. K. & Jaramillo, T. F. Combining theory and experiment in electrocatalysis: Insights into materials design. Science 355, 4998 (2017).
Li, J., Guttinger, R., More ́, R., Song, F., Wan, W. & Patzke, G. R. Frontiers of water oxidation: The quest for true catalysts. Soc. Rev. 46, 6124−6147 (2017).
Symes, M. D. & Cronin, L. Decoupling hydrogen and oxygen evolution during electrolytic water splitting using an electron-coupled-proton buffer. Chem. 5, 403−409 (2013).
Wang, D., Sampaio, R. N., Troian-Gautier, L., Marquard, S. L., Farnum, B. H., Sherman, B. D., Sheridan, N. V., Dares, C. J., Meyer, G. J. & Meyer, T. J. Molecular photoelectrode for water oxidation inspired by photosystem II. Am. Chem. Soc. 141, 7926−7933 (2019).
Hetterscheid, D. G. H., Van der Vlugt, J. I., Bruin, B. de. & Reek, J. N. H. Water splitting by cooperative catalysis. Chem. Int. Ed. 48, 8178–8181 (2009)
Sala, X., Ertem, M. Z., Vigara, L., Todorova, T. K., Chen, W., Rocha, R. C., Aquilante, F., Cramer, C. J., Gagliardi, L. & Llobet, A. The cis‐[RuII(bpy)2(H2O)2]2+ water oxidation catalyst revisited. Angew Chem Int. Ed. 49, 7745−7747 (2010).
Chen, Z., Concepcion, J. J., Luo, H., Hull, J. F., Paul, A. & Meyer, T. J. Nonaqueous catalytic water oxidation. Am. Chem. Soc. 132, 17670−17673 (2010).
Karunadasa, H. I., Chang, C. J. & Long, J. R. A molecular molybdenum-oxo catalyst for generating hydrogen from water. Nature 464, 1329–1333 (2010).
Thoi, V. S., Sun, Y., Long, J. R. & Chang, C. J. Complexes of earth-abundant metals for catalytic electrochemical hydrogen generation under aqueous conditions. Soc. Rev. 42, 2388–2400 (2013).
Norris, M. R., Concepcion, J. J., Fang, Z., Templeton, J. L. & Meyer, T. J. Low‐overpotential water oxidation by a surface‐bound ruthenium‐chromophore–ruthenium‐catalyst assembly. Chem. Int. Ed. 52, 13580–13583 (2013).
Daniel, Q., Duan, L., Timmer, B. J. J., Chen, H., Luo, X., Ambre, R., Wang, Y., Zhang, B., Zhang, P., Wang, L., Li, F., Sun, J., Ahlquist, M. & Sun, L. Water oxidation initiated by in situ dimerization of the molecular Ru(pdc) catalyst ACS Catal. 8, 4375−4382 (2018).
Matheu, R., Garrido-Barros, P., Gil-Sepulcre, Ertem, M. M. Z., Sala, X., Gimbert- Suriñach, C. & Llobet, A. The development of molecular water oxidation catalysts. Rev. Chem. 3, 331–341 (2019).
Cao, R., Laia, W. & Du, P. Catalytic water oxidation at single metal sites. Energy Environ. Sci. 5, 8134–8157 (2012).
Sundstrom, E. J., Yang, X., Thoi, V. S., Karunadasa, H. I., Chang, C. J., Long, J. R. & Head-Gordon, M. Computational and experimental study of the mechanism of hydrogen generation from water by a molecular molybdenum-oxo electrocatalyst. Am. Chem. Soc. 134, 5233−5242 (2012).
Solis, B. H. & Hammes-Schiffer, S. Computational study of anomalous reduction potentials for hydrogen evolution catalyzed by cobalt dithiolene complexes. Am. Chem. Soc. 134, 15253−15256 (2012).
Downes, C. A. & Marinescu, S. C. Efficient electrochemical and photoelectrochemical H2 production from water by a cobalt dithiolene one-dimensional metal–organic surface. Am. Chem. Soc. 137, 13740−13743 (2015).
Wombwell, C., Caputo, C. A. & Reisner, E. [NiFeSe]-Hydrogenase chemistry. Chem. Res. 48, 2858−2865 (2015).
Downes, C. A. & Marinescu, S. C. Bioinspired metal selenolate polymers with tunable mechanistic pathways for efficient H2 ACS Catal. 7, 848−854 (2017).
Downes, C. A., Yoo, J. W., Orchanian, N. M., Haiges, R. & Marinescu, S. C. H2 evolution by a cobalt selenolate electrocatalyst and related mechanistic studies. Commun. 53, 7306−7309 (2017).
Koshiba, K., Yamauchi, K. & Sakai, K. A nickel dithiolate water reduction catalyst providing ligand-based proton-coupled electron-transfer pathways. Chem. Int. Ed. 56, 4247 –4251 (2017).
Xia, X., Wang, L., Sui, N., Colvinc, V. L. & Yu, W. W. Recent progress in transition metal selenide electrocatalysts for water splitting. Nanoscale 12, 12249–12262 (2020).
Su, X.-J., Zheng, C., Hu, Q.-Q., Du, H.-Y., Liao, R.-Z. & Zhang, M.-T. Bimetallic cooperative effect on O–O bond formation: Copper polypyridyl complexes as water oxidation catalyst. Dalton Trans. 47, 8670-8675 (2018).
Du, H.-Y., Chen, S.-C., Su, X.-J., Jiao, L. & Zhang, M.-T. Redox-active ligand assisted multielectron catalysis: A case of CoIII complex as water oxidation catalyst. Am. Chem. Soc. 140, 1557−1565 (2018).
Baydoun, H., Burdick, J., Thapa, B., Wickramasinghe, L., Li, D., Niklas, J., Poluektov, O. G., Schlegel, H. B. & Verani, C. N. Immobilization of an amphiphilic molecular cobalt catalyst on carbon black for ligand-assisted Water Oxidation. Chem. 57, 9748−9756 (2018).
Wang, D. & Groves, J. T. Efficient water oxidation catalyzed by homogeneous cationic cobalt porphyrins with critical roles for the buffer base. Natl. Acad. Sci. U.S.A. 110, 15579–15584 (2013).
Nakazono, T., Parentab, A. R. & Sakai, K. Cobalt porphyrins as homogeneous catalysts for water oxidation. Commun. 49, 6325–6327 (2013).
Wang, H.-Y., Mijangos, E., Ott, S. & Thapper, A. Water oxidation catalyzed by a dinuclear cobalt–polypyridine complex. Chem. Int. Ed. 53, 14499 –14502 (2014).
Blakemore, J. D., Crabtree, R. H. & Brudvig, G. W. Molecular catalysts for water oxidation. Rev. 23, 12974–13005 (2015).
Luo, G.-G., Zhang, H.-L., Tao, Y.-W., Wu, Q.-Y., Tianc, D. & Zhang, Q. Recent progress in ligand-centered homogeneous electrocatalysts for hydrogen evolution reaction. Chem. Front. 6, 343–354 (2019).
Haddad, A. Z., Garabato, B. D., Kozlowski, P. M., Buchanan, R. M. & Grapperhaus, C. A. Beyond metal-hydrides: Non-transition-metal and metal-free ligand-centered electrocatalytic hydrogen evolution and hydrogen oxidation. Am. Chem. Soc. 138, 7844−7847 (2016).
Cronin, S. P., Mamun, A. A., Toda, M. J., Mashuta, M. S., Losovyj, Y., Kozlowski, P. M., Buchanan, R. M. & Grapperhaus, C. A. Utilizing charge effects and minimizing intramolecular proton rearrangement to improve the overpotential of a thiosemicarbazonato zinc HER catalyst. Chem. 58, 12986–12997 (2019).
Sun, Y., Bigi, J. P., Piro, N. A., Tang, M. L., Long, J. R. & Chang, C. J. Molecular cobalt pentapyridine catalysts for generating hydrogen from water. Am. Chem. Soc. 133, 9212–9215 (2011).
Verma, A., Jana, S., Prasad, C. D., Yadav, A. & Kumar, S. Organoselenium and DMAP co-catalysis: Regioselective synthesis of medium-sized halolactones and bromooxepanes from unactivated alkenes. Commun. 52, 4179−4182 (2016).
Kumar, S., Yan, J., Poon, J.-f., Singh, V. P., Lu, X., Ott, M. K., Engman, L. & Kumar, S. Multifunctional antioxidants-regenerable radical-trapping and hydroperoxide decomposing ebselenols. Chem. Int. Ed. 55, 3729–3733 (2016).
Jana, S., Verma, A., Kadu, R. & Kumar, S. Visible-light-induced oxidant and metal-free dehydrogenative cascade trifluoromethylation and oxidation of 1,6-enynes with water. Sci. 8, 6633–6644 (2017).
Balkrishna, S. J., Bhakuni, B. S. & Kumar, S. Copper catalyzed/mediated synthetic methodology for ebselen and related isoselenazolones. Tetrahedron 67, 9565–9575 (2011).
Rathore, V., Upadhyay, A. & Kumar, S. A organodiselenide with dual mimic function of sulfhydryl oxidases and glutathione peroxidases: Aerial oxidation of organothiols to organodisulfides. Lett. 20, 6274–6278 (2018).
Cox, N., Pantazis, D. A., Neese, F. & Lubitz, W. Artificial photosynthesis: understanding water splitting in nature. Interface Focus 5 (2015).
Qi, S., Fan, Y., Wang, J., Song, X., Li, W. & Zhao, M. Metal-free highly efficient photocatalysts for overall water splitting: C3N5 Nanoscale 12, 306–315 (2020).
Thompson, E. J. & Berben, L. A. Electrocatalytic hydrogen production by an aluminum(III) complex: Ligand-based protonand electron transfer. Chem. Int. Ed. 54, 11642–11646 (2015).
Cheng, Y., Emge T. J. & Brennan, J. G. Polymeric Cd(Se-2-NC5H4)2 and square planar Hg(Se-2-NC5H4)2: Volatile CVD precursors to II-VI semiconductors Chem. 33, 3711–3714 (1994).
Mugesh, G., Singh, H. B., Patel, R. P. & Butcher, R. J. Synthesis and structural characterization of monomeric selenolato complexes of zinc, cadmium, and mercury. Chem. 37, 2663–2669 (1998).
Freedman, D., Kornienko, A., Emge, T. J. & Brennan, J. G. Divalent samarium compounds with heavier chalcogenolate (EPh; E = Se, Te) ligands. Chem. 39, 2168–2171 (2000).
Ritch, J. S. & Chivers, T. Coordination chemistry of a new P,Te-centred ligand: synthesis, NMR spectra and X-ray structures of M(TePPri2NPPri2)2 (M = Zn, Cd, Hg). Dalton Trans. 957–962 (2008).
Emge, T. J., Romanelli, M. D., Moore, B. F. & Brennan, J.G. Zinc, cadmium, and mercury complexes with fluorinated selenolate ligands. Chem. 49, 7304–7312 (2010).
Pöllnitz, A., Silvestru, C., Carpentierb, J.-F. & Silvestru, A. Diorganodiselenides and zinc(II) organoselenolates containing (imino)aryl groups of type 2-(RN=CH)C6H4. Dalton Trans. 41, 5060–5070 (2012,).
Sharma, R. K., Wadewala, A., Kedarnath, G., Vishwanadh, B. & Jain, V. K. Pyrimidyl-2-selenolates of cadmium and mercury: Synthesis, characterization, structures and their conversion to metal selenide nano-particles. Chim. Acta 411, 90–96 (2014).
Patel, S., Meenakshi, Hodage, A. S., Verma, A., Agrawal, S., Yadav, A. & Kumar, S. Synthesis and structural characterization of monomeric mercury (II) selenolate complexes derived from 2-phenylbenzamide ligands. Dalton Trans. 45, 4030–4040 (2016).
Byer, O., Lazebnik, B. F. & Smeltzer, D. L. Methods for Euclidean Geometry, Mathematical Association of America, Vol.37, pp. 51–52 (2010).
Coggins, M. K., Zhang, M-T, Vannucci, A. K., Dares, C. J. & Meyer, T. J. Am. Chem. Soc. 136, 5531−5534 (2014).
Gagliardi, C. J., Vannucci, A. K., Concepcion, J. J., Chen, Z. & Meyer, T. J. Energy Environ. Sci. 5, 7704–7717 (2012).
Engman, L., Perssorn, J., Andersson, C. M. & Berglund, M. Application of the hammett equation to the electrochemical oxidation of diaryl chalcogenides and aryl methyl chalcogenides. Chem. Soc., Perkin Trans. 2, 1309–1313 (1992).
Reich, H. J. & Hondal, R. J. Why nature chose selenium. ACS Chem. Biol. 11, 821–841 (2016).
Additional Declarations
There is NO Competing Interest.
How to determine whether the function is one-to-one? Differential Calculus
Determine whether the function f is one-to-one
f(t) is the number of people in line at a movie theater at time t.
calculus differential
Jamila Diamond
Don't you mean $f(t)$? Do you understand what a one-to-one function is? – Git Gud Sep 25 '13 at 16:39
Not necessarily one to one. We could have $f(t_0)=17$, and at some later time $t_1$, $f(t_1)=17$. This could happen in a couple of ways: (i) The line has not moved or (ii) It has moved, $3$ people have entered the theatre, but $3$ have joined the end of the line. – André Nicolas Sep 25 '13 at 16:41
One to one means that you can (in theory) uniquely figure out the input if you know the output. – copper.hat Sep 25 '13 at 17:01
Informally, we can think of a one-to-one function as one that maps distinct elements in the domain to distinct elements in the codomain. Or, in other words, if $f$ maps $a$ and $b$ to the same thing, then $a=b$.
Formally, a function $f:A \rightarrow B$ is called one-to-one if $f(a)=f(b)$ implies $a=b$. Equivalently, if $a \neq b$, then $f(a) \neq f(b)$.
In this question, we have a function $f:T \rightarrow \mathbb{Z}^{\geq 0}$ defined by $f(t)$ being the number of people in line at a movie theater at time $t$, where $T$ is the set of times for which "time" is defined. The task is to find two distinct times $t_1, t_2 \in T$ for which $f(t_1)=f(t_2)$.
Here's a simple mathematical answer; it assumes (a) the theater has been open longer than an instant, and (b) there are a finite number of people in existence.
Suppose there are $N<\infty$ people in existence. We pick $N+1$ distinct points of time $t_1,t_2,\ldots,t_{N+1}$ (assuming the theater has been open longer than an instant, since time is continuous, such points of time exist). Then the pigeonhole principle implies there are two points in time $t_i$ and $t_j$ in which $f(t_i)=f(t_j)$; in other words, we have $N+1$ numbers $$f(t_1),f(t_2),\ldots,f(t_{N+1})$$ that all belong to $\{1,2,\ldots,N\},$ so they can't all be distinct.
Rebecca J. Stones
$0$ is also a possible value of $f$... – user103402 Nov 20 '13 at 3:06
BMC Medical Informatics and Decision Making
Using decision fusion methods to improve outbreak detection in disease surveillance
Gaëtan Texier ORCID: orcid.org/0000-0002-9242-10181,2,
Rodrigue S. Allodji1,3,4,
Loty Diop5,
Jean-Baptiste Meynard1,6,
Liliane Pellegrin1,2 &
Hervé Chaudet1,2
BMC Medical Informatics and Decision Making volume 19, Article number: 38 (2019)
The Correction to this article has been published in BMC Medical Informatics and Decision Making 2019 19:81
When outbreak detection algorithms (ODAs) are considered individually, the task of outbreak detection can be seen as a classification problem and the ODA as a sensor providing a binary decision (outbreak yes or no) for each day of surveillance. When they are considered jointly (in cases where several ODAs analyze the same surveillance signal), the outbreak detection problem should be treated as a decision fusion (DF) problem of multiple sensors.
This study evaluated the benefit for a decision support system of using DF methods (fusing multiple ODA decisions) compared to using a single method of outbreak detection. For each day, we merged the decisions of six ODAs using 5 DF methods (two voting methods, logistic regression, CART and Bayesian network - BN). Classical metrics of accuracy, prediction and timeliness were used during the evaluation steps.
In our results, we observed the greatest gain (77%) in positive predictive value compared to the best ODA if we used DF methods with a learning step (BN, logistic regression, and CART).
To identify disease outbreaks in systems using several ODAs to analyze surveillance data, we recommend using a DF method based on a Bayesian network. This method is at least equivalent to the best of the algorithms considered, regardless of the situation faced by the system. For those less familiar with this kind of technique, we propose that logistic regression be used when a training dataset is available.
The task of outbreak detection can be considered as a classification problem, and outbreak detection algorithms (ODAs) can be viewed as classifiers or sensors providing a binary decision (outbreak yes or no) for each time step of surveillance. For specialists in charge of a disease surveillance system, with more than 120 ODAs published [1] and in the absence of a consensus among specialists, the task of choosing the best ODA remains a highly complex one [2, 3]. Indeed, ODA performance depends on several characteristics associated with the outbreak curve (shape, duration and size), the baseline (mean, variance) [4, 5] and their relationships (signal-to-noise ratio, signal-to-noise difference) [6, 7]. In this context, the hope of having a single algorithm efficient enough to detect all outbreaks in all situations faced by a disease surveillance system is probably illusory.
For that reason, certain teams in charge of disease/syndromic surveillance systems choose to work with several ODAs to analyze the same surveillance dataset [8] as a multisensor system [9], with the objective of producing correct decisions from a given amount of input information. Even if multiple sensors provide significantly more information on which to base a decision than a single sensor, using multiple classifiers or sensors can lead to several issues. Among them, as detailed in [9, 10], we can cite data conflict (agreement between classifier decisions), uncertainty, correlation, imprecision, and incompleteness, all of which make decision fusion (DF) a challenging task. Finally, all these problems call into question the true benefit of using multiple ODAs for decision-making.
If we consider ODA decisions as a whole, the outbreak detection problem should be treated as a decision fusion problem of multiple classifiers/sensors. Decision fusion methods are tailored to generate a single decision, from multiple classifiers or biometric sensor decisions [11]. Fusion also provides the advantage of compensating for the deficiencies of one sensor by using one or more additional sensors. Moreover, in the context of surveillance, most of these techniques are automatable and can be added to the decision support system integrated in a disease surveillance system.
There are numerous publications on fusion methods for outbreak detection focused on the fusion of data collected from multiple streams [12,13,14,15,16,17] using different methods, such as Bayesian Networks, to manage different sources of data potentially useable in surveillance. However, to our knowledge, only one work [18] describes a decision fusion method applied to a single data stream. This study used an approach to enhance the classifier structure and yielded ambivalent results, according to the authors. The study's limitations and the conceptual framework of Dietterich's reasons (statistical, computational and representational) [19], justifying why multiple classifiers may work better than a single one, suggest the necessity of new studies in this field.
With the aim of improving decision making for disease surveillance system users, we propose to evaluate the benefit of using DF methods fusing multiple ODA decisions versus using a single method of outbreak detection.
This study is a proof of concept that aims at evaluating the capability of DF methods to enhance the reliability of outbreak detection systems. For this purpose, we use synthetic data to control the outbreak curve characteristics, in place of real data, which do not allow the experimental controls required for this study.
Given the lack of a consensual gold standard allowing the delineation of a real outbreak within a disease surveillance series [7], the need to control precisely the onset and the end of the outbreak signal, and the need for a sufficient sample size to allow an adequate evaluation, we chose, like several authors (Buckeridge, Jackson, ...), to use synthetic data. A more complete discussion on this subject can be found in Texier et al. [20].
The simulated data sets were generated according to approaches already detailed in previous studies [4, 7, 20]. Each simulated dataset was generated by combining two components: a baseline and outbreak signals. In this work, given a minimum spacing of 15 days between two outbreaks, the outbreak signals were randomly superimposed on baseline data in order to maintain a prevalence of outbreak days of 10 ± 1% over 20 years. Five levels of baseline were generated, corresponding to expected daily incidences of 1, 3, 5, 10 and 30 cases per day. Based on a real outbreak of Norovirus that had already been published [21], we used a resampling method [4, 7, 20] to generate curves with four different outbreak magnitudes (10, 30, 50 and 100 cases) and the same duration of 12 days, corresponding to the duration of the originating real outbreak. Given the influence of the curve shape on ODA evaluation results, we considered the use of resampling methods to be the most realistic way of generating our epidemic curves (see [20] on this topic). Twenty evaluation datasets (corresponding to the different combinations of the 5 levels of baseline with the 4 levels of outbreak magnitude) were produced. We calculated the sample size required to estimate our evaluation metrics (such as the sensitivity defined by Jafarpour [22]) with a specified level of confidence and precision (maximal allowed error of 3%). To reach this objective of precision, each algorithm had to evaluate 1100 outbreaks during this study. Finally, our evaluation datasets corresponded to 146,000 simulated days of surveillance that were evaluated by each sensor.
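As an illustration of this simulation design, the sketch below assembles one such dataset in R by superimposing a resampled-style outbreak curve on a Poisson baseline. The parameter values, the toy 12-day curve and the variable names are illustrative assumptions, not the exact settings used in the study.

```r
# Minimal sketch: superimpose outbreak signals on a simulated Poisson baseline.
# 'outbreak_curve' stands in for a resampled epidemic curve; values are toy numbers.
set.seed(42)

n_days        <- 20 * 365                    # 20 years of daily counts
baseline_mean <- 5                           # expected daily incidence
baseline      <- rpois(n_days, lambda = baseline_mean)

outbreak_curve <- c(1, 2, 4, 7, 10, 8, 6, 4, 3, 2, 2, 1)   # toy 12-day outbreak

counts <- baseline
labels <- integer(n_days)                    # 1 = outbreak day, 0 = baseline day
day    <- 1
while (day + length(outbreak_curve) <= n_days) {
  idx         <- day:(day + length(outbreak_curve) - 1)
  counts[idx] <- counts[idx] + outbreak_curve
  labels[idx] <- 1
  # at least 15 outbreak-free days; spacing tuned so roughly 10% of days are outbreak days
  day <- day + length(outbreak_curve) + 15 + sample(60:150, 1)
}

surveillance_data <- data.frame(day = seq_len(n_days), count = counts, outbreak = labels)
```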
For methods requiring a learning period, we simulated data with a 5-year surveillance period. Training and evaluation datasets were generated independently but had similar characteristics in terms of baseline level, outbreak size, and prevalence. We used exactly the same training dataset for all the learning methods.
Outbreak detection algorithms
In this study, we used a set of six outbreak detection algorithms frequently used in routine disease surveillance systems [8], for which several statistical packages [23] are available and which are easily implementable. We chose the Cumulative Sum (CUSUM) chart as proposed by Rossi [24], the C-family of detection algorithms (C1, C2, and C3), which are adaptive algorithms included in the Early Aberration Reporting System (EARS) developed by the Centers for Disease Control and Prevention (CDC) [25], the Exponential Weighted Moving Average algorithms (EWMA) [6], and the Farrington algorithm, which should be applicable to various types of infections [26].
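To make the C-family concrete, the sketch below implements one common formulation of the EARS C1 and C2 statistics (7-day reference window, 3-SD threshold, 2-day guard band for C2). These defaults follow the usual CDC description and are not necessarily the exact settings of the packaged implementations used in this study.

```r
# Sketch of the EARS C1/C2 statistics: the daily count is compared with the mean
# and SD of a 7-day reference window; C2 inserts a 2-day guard band before today.
ears_stat <- function(x, t, lag = 0, window = 7) {
  ref <- x[(t - lag - window):(t - lag - 1)]
  s   <- sd(ref)
  if (s == 0) s <- 1e-6                      # avoid division by zero on flat baselines
  (x[t] - mean(ref)) / s
}

c1_alarm <- function(x, t, threshold = 3) ears_stat(x, t, lag = 0) > threshold
c2_alarm <- function(x, t, threshold = 3) ears_stat(x, t, lag = 2) > threshold

# Example: C1 alarms over the simulated series once enough history is available
alarms_c1 <- sapply(8:length(surveillance_data$count),
                    function(t) c1_alarm(surveillance_data$count, t))
```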
Decision fusion methods (DFMs)
Taxonomy and choice
Fusion of data/information can be carried out on three levels of abstraction: data fusion, feature fusion, and classifier fusion (also referred to as decision fusion or mixture of experts) [27]. Due to the large number of classifier fusion methods in the literature, we decided to base our choice of methods on a taxonomy of these techniques proposed by Ruta [28]. Based on individual classifier outputs, Ruta identified two main approaches to combining classifiers, namely classifier selection (or structure optimization) and classifier fusion. The first approach looks for the single best classifier or a selected group of classifiers and uses only their outputs to build a final decision or for further processing.
The second approach focuses on classifier outputs and combines those outputs. According to the characteristics of the combined outputs, several authors have identified three levels of aggregation [28,29,30]:
The measurement level: A classifier attributes a probability value to each label
The rank level: A classifier ranks all labels in a queue and chooses the top label
The abstract level (or single class label): A classifier only generates a single-label output (in our case, outbreak yes or no).
These three levels form an information gradient where the measurement level contains the most information and the abstract level contains the least [30].
We selected two simple and intuitive methods from the abstract level: the majority voting scheme and the weighted voting scheme.
The second level aims at reordering a class set. Logistic regression methods, which are situated at this level and are well known to epidemiologists, assign a weight to each classifier reflecting its importance in an efficient multiple sensor system. In this category, we also selected the CART Method [31].
The largest group of classifier fusion methods associated with the measurement level produces output values in the [0–1] range. These values cover all known measures of evidence (probability, possibility, necessity, belief, and plausibility) and are tailored to quantify a level of uncertainty. Indeed, all the fusion methods in this group try to reduce the level of uncertainty by maximizing a measure of evidence [28]. From this group, we selected the Bayesian Belief Networks method. A brief synopsis on each decision fusion method chosen is provided below.
Voting methods
The simplest way to combine the decisions of multiple outbreak detection algorithms is by voting, which corresponds to performing a linear combination of the prediction results of the algorithms. In the case of majority voting (MV) scheme fusion, the method gives equal weight to the decisions and returns the prediction with the highest number of votes as the result. Weighted majority voting (WMV) stems from relaxing the assumption about equal individual accuracies. We chose the area under the ROC curve (AUC) to weight the votes. Indeed, the AUC, which is based on both sensitivity and specificity, can be considered a relevant indicator of algorithm performance for weighting the votes, increasing the contribution of decisions from algorithms with high sensitivity and specificity.
The reader is referred to Rahman et al. [32] for a comprehensive examination of the subject.
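The two voting fusers reduce to a few lines of R. In the sketch below, 'decisions' is a day-by-ODA 0/1 matrix and 'auc' a vector of per-ODA AUCs estimated on training data; both names are illustrative placeholders for objects produced earlier in the pipeline.

```r
# Majority vote: strict majority of the ODA decisions (ties count as "no outbreak")
majority_vote <- function(decisions) {
  as.integer(rowSums(decisions) > ncol(decisions) / 2)
}

# AUC-weighted vote: each ODA's decision contributes proportionally to its AUC
weighted_majority_vote <- function(decisions, auc) {
  w     <- auc / sum(auc)
  score <- as.vector(decisions %*% w)        # weighted share of "outbreak" votes
  as.integer(score > 0.5)
}

# Toy example with six ODAs on three days
decisions <- matrix(c(1, 0, 1, 1, 0, 0,
                      0, 0, 1, 0, 0, 0,
                      1, 1, 1, 1, 0, 1), nrow = 3, byrow = TRUE)
auc <- c(0.73, 0.65, 0.66, 0.68, 0.70, 0.69)
majority_vote(decisions)                     # 0 0 1
weighted_majority_vote(decisions, auc)
```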
The logistic regression model relates the conditional probability of a binomially distributed event Y to a weighted combination of the values of variables X1, X2, …, Xn, which represent the decisions of the outbreak detection algorithms (for each j (1 ≤ j ≤ n), Xj = 1 or Xj = 0) [33]. Y is the response variable corresponding to the true outbreak value generated in the simulated data, while the various X's, usually called explanatory variables, are the ODAs. As with the weighted voting scheme, logistic regression can be seen as a linear combination (y = ß1 X1 + ß2 X2 + … + ßn Xn, on the logit scale) of the ODA decisions Xi, each weighted by an estimated coefficient ßi. To estimate the model coefficients (ßi), the logistic regression was run on the training dataset. The selection of the final model in the training step was based on the lowest Akaike Information Criterion (AIC). In the end, the selected model was applied to the simulated data covering a 20-year surveillance period. On any given day, the results of the ODAs provide a predicted value of Y, representing the probability of an outbreak on that day. If this predicted probability exceeds 0.5, we classify the day as an outbreak day.
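A minimal sketch of this fusion step with base R's glm follows; 'train' (six 0/1 ODA columns x1..x6 plus the true label y) and 'eval_decisions' are assumed to come from the training and evaluation datasets described above, and the column names are illustrative.

```r
# Logistic-regression fuser: fit on the 5-year training series, then threshold
# the predicted outbreak probability at 0.5 on the evaluation series.
fit_lr <- glm(y ~ x1 + x2 + x3 + x4 + x5 + x6,
              data = train, family = binomial)

# AIC-guided backward selection, mirroring the model-selection step described above
fit_lr <- step(fit_lr, direction = "backward", trace = 0)

p_outbreak <- predict(fit_lr, newdata = eval_decisions, type = "response")
lr_alarm   <- as.integer(p_outbreak > 0.5)
```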
Classification and regression trees (CART)
CART is a classification method that has been successfully used in many health care applications [34]. Although there are variants of tree-based methods with different splitting criteria, CART was selected for this study, since it is used in decision fusion [35]. The reader is directed to Breiman [36] for a comprehensive description of the CART algorithm.
The six ODA decisions are used as independent variables in our CART model. As with logistic regression, the training data sets were used for the construction of the maximal tree and for the choice of the right tree size. The rpart package of R software was used for the implementation of the CART model [37].
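A corresponding sketch with rpart is shown below, using the same assumed 'train' and 'eval_decisions' objects; the complexity-parameter handling is a standard grow-then-prune recipe rather than the study's exact tuning.

```r
library(rpart)

train$y <- factor(train$y)                   # rpart's "class" method expects a categorical outcome

# Grow a large tree, then prune to the complexity value with the lowest cross-validated error
fit_cart <- rpart(y ~ x1 + x2 + x3 + x4 + x5 + x6,
                  data = train, method = "class",
                  control = rpart.control(cp = 0.001))
best_cp  <- fit_cart$cptable[which.min(fit_cart$cptable[, "xerror"]), "CP"]
fit_cart <- prune(fit_cart, cp = best_cp)

cart_alarm <- as.integer(predict(fit_cart, newdata = eval_decisions,
                                 type = "class") == "1")
```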
Bayesian networks (BNs)
Bayesian Networks (BNs) belong to the family of directed acyclic graph models. The network structure can be described as follows: Each node in the graph represents a binary variable provided by each classifier (i.e. ODA), while the edges between the nodes represent probabilistic dependencies among the corresponding variables.
As with the two previous DF methods (logistic regression and CART), the dataset generated during a 5-year surveillance period was used to train the BN. The bnlearn R package [38, 39] and Netica [40] were used to implement the BN. To validate the Bayesian network structure from our data, we used the Hill Climbing algorithm based on the Bayesian Information Criterion (BIC) score. An estimated probability of epidemic presence is provided by the BN and a probability threshold of 50% was selected to classify the outbreak presence/absence status for a given day, as in logistic regression.
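The sketch below outlines the same workflow with bnlearn (structure learning by hill climbing with a BIC score, parameter fitting, then per-day prediction of the outbreak node). It is only an outline: the exact bnlearn calls can vary across package versions, discrete networks require all variables to be factors with matching levels between training and evaluation data, and the variable names are again illustrative.

```r
library(bnlearn)

train_f <- as.data.frame(lapply(train, factor))           # ODA decisions + outbreak label as factors
eval_f  <- as.data.frame(lapply(eval_decisions, factor))  # levels must match those seen in training

dag    <- hc(train_f, score = "bic")                      # Hill-Climbing structure search (BIC score)
fit_bn <- bn.fit(dag, data = train_f)                     # conditional probability tables

# For a binary outbreak node, picking the most probable state is equivalent to
# thresholding the posterior probability of "outbreak" at 0.5.
bn_pred  <- predict(fit_bn, node = "y", data = eval_f, method = "bayes-lw")
bn_alarm <- as.integer(bn_pred == "1")
```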
Evaluation metrics
We evaluated the performance metrics using several criteria: accuracy, prediction quality, and timeliness of outbreak detection. Accuracy was assessed by the specificity (Sp), the sensitivity (Se), and the area under the ROC (Receiver Operating Characteristic) curve (AUC) [22]. Two variants of Se were calculated in the paper: Se per day, which is the probability of correctly classifying outbreak days, and Se per outbreak, which is the ability to detect at least one outbreak day over the entire duration of the outbreak.
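As a reference for how these day-level and outbreak-level quantities are computed, a short sketch follows; 'alarm' is a fused 0/1 alarm vector, 'truth' the simulated day labels, and 'outbreak_id' a vector numbering the outbreaks (0 for baseline days), all illustrative names.

```r
# Day-level sensitivity and specificity from the confusion counts
se_sp_per_day <- function(alarm, truth) {
  tp <- sum(alarm == 1 & truth == 1); fn <- sum(alarm == 0 & truth == 1)
  tn <- sum(alarm == 0 & truth == 0); fp <- sum(alarm == 1 & truth == 0)
  c(sensitivity_per_day = tp / (tp + fn),
    specificity         = tn / (tn + fp))
}

# Sensitivity per outbreak: an outbreak counts as detected if any of its days alarms
se_per_outbreak <- function(alarm, outbreak_id) {
  ids <- setdiff(unique(outbreak_id), 0)
  mean(sapply(ids, function(i) any(alarm[outbreak_id == i] == 1)))
}
```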
The evaluation of the quality of predictions was done using positive and negative predictive values (PPV and NPV respectively). The timeliness of outbreak detection was evaluated using the time to detection, the proportion of cases required for outbreak detection, the weighted AUC and the area under the Activity Monitor Operating Characteristic (AMOC) curve. The time to detection was defined as the mean and median number of days from the beginning of each outbreak to the first alarm during the outbreak. The proportion of cases required for outbreak detection was defined as the number of cases already occurring by the moment of detection divided by the total number of cases in the outbreak. This quantity can be seen as the minimal number of outbreak cases required for outbreak detection. The area under weighted ROC (AUWROC) is an ROC curve in which each point of the curve is weighted by a timeliness measure [41] and the area under the AMOC curve represents the relationship between the timeliness of outbreak detection and the false alarm rate (1-Specificity) [42]. A timeliness score defined as the proportion of time saved by detection relative to an outbreak onset, was also calculated as follows:
$$ \mathrm{Timeliness}\ \mathrm{score}=1-\frac{\mathrm{time}\ \mathrm{detection}-\mathrm{time}\ \mathrm{onset}\ }{\mathrm{Outbreak}\ \mathrm{duration}} $$
where outbreak duration is the total outbreak length in days, time detection is the index of the day within the time series when the outbreak is detected and time onset is the index of the day on which outbreak starts [22]. The timeliness score is 1 if the outbreak is detected on the first day of occurrence and 0 when the outbreak is not detected [6].
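A direct translation of this score into R is sketched below, reusing the illustrative 'alarm' and 'outbreak_id' vectors from the previous sketch.

```r
# Mean timeliness score over all outbreaks: 1 when detection occurs on the first
# outbreak day, 0 when the outbreak is missed, as in the formula above.
timeliness_score <- function(alarm, outbreak_id) {
  ids <- setdiff(unique(outbreak_id), 0)
  scores <- sapply(ids, function(i) {
    days     <- which(outbreak_id == i)
    detected <- days[alarm[days] == 1]
    if (length(detected) == 0) return(0)              # missed outbreak
    1 - (min(detected) - min(days)) / length(days)
  })
  mean(scores)
}
```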
We also assessed the influence of the outbreak and baseline characteristics on the performance metrics of the ODAs and the DF methods. As defined in a previous study, the signal-to-noise difference (SND) was used for this evaluation [7]. In practice, three scenarios corresponding to three values of SND were considered: positive, quasi-null and negative SND. A positive SND corresponds to a higher number of cases in the outbreak than in the baseline during the outbreak period, and a negative SND to the opposite.
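For clarity, this stratification can be reproduced with a small helper that contrasts the outbreak cases with the baseline cases expected over the same window; the function below paraphrases the definition used here and its names and example values are illustrative.

```r
# Signal-to-noise difference: outbreak cases minus expected baseline cases
# over the outbreak window; its sign defines the three scenarios.
snd <- function(outbreak_cases_total, baseline_mean, outbreak_duration) {
  outbreak_cases_total - baseline_mean * outbreak_duration
}

snd(100, baseline_mean = 1,  outbreak_duration = 12)   # positive SND
snd(30,  baseline_mean = 30, outbreak_duration = 12)   # negative SND
```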
All algorithms, DF methods, and analyses were implemented with R software 3.3.0 [23] using the following packages: surveillance (for most algorithms), qcc (for EWMA), flux (for the estimations of AUCs), rpart and rpart.plot (for CART), bnlearn (Bayesian Networks).
Accuracy and quality of prediction assessment
Table 1 summarizes the accuracy metrics of the six ODAs and five DFMs in terms of detection sensitivity per outbreak and per day, specificity, PPV, NPV and AUC. The six outbreak detection algorithms had a detection sensitivity per outbreak ranging from 72 to 89%, the lowest for the C1 algorithm and the highest for the EWMA algorithm. The implementation of the DFMs showed that voting methods provided detection sensitivities per outbreak (78 to 82%) close to those of CUSUM, C3 or Farrington, while the other DFMs, such as logistic regression, CART, or BN, had on average a detection sensitivity per outbreak lower than the range indicated above. The detection sensitivity per day varied strongly, from 10 to 45%, for the ODAs. This metric was more stable among the DFMs, as it varied only from 23 to 27%.
Table 1 Performance metrics for the accuracy and prediction quality of the outbreak detection algorithms and the decision fusion methods
Concerning the quality of outbreak prediction, PPVs ranged from 36 to 51% for the outbreak detection algorithms and were higher for the five DFMs, starting at 61% and reaching more than 90% for the three DFMs using a learning step (logistic regression, CART, or Bayesian networks). Thus, where the best algorithm had one chance in two of correctly predicting the outbreak status for a given day, the best fusion methods had nine chances out of ten of not being mistaken. However, NPVs were almost identical between the outbreak detection algorithms and the fusion methods.
Our evaluation results show that the three DFMs using a learning step yielded overall accuracies that were quite close to that found for CUSUM, which consistently provided the highest accuracy (AUC =73%) among outbreak detection algorithms (see Fig. 1).
Accuracy measured by area under curve (AUC) according to outbreak detection algorithm and decision fusion method
Timeliness assessment
Timeliness is a key metric for early warning surveillance systems. It refers to the ability of the detection algorithm to detect a signal aberration early enough to enable public health authorities to respond rapidly. Among the outbreak detection algorithms, the best timeliness was achieved by the EWMA algorithm (cases required = 41%, time to detection = 5.28, proportion of delay = 38%) [Table 2]. Among the DFMs, the simplest ones react the most rapidly. In general, fusion methods were slightly slower than detection algorithms. But when we weighted timeliness by integrating accuracy metrics, to reflect the fact that a rapid false alarm is of relatively little value, DFMs produced results similar, in terms of AMOC or AUWROC, to those provided by the CUSUM algorithm, which was the fastest detection algorithm.
Table 2 Performance metrics for the timeliness of outbreak detection of the detection algorithms and decision fusion methods
The influence of signal-to-noise difference on outbreak detection performance
From our results, it is clear that the SND has a direct impact on the timeliness and the capacity of outbreak detection, whatever method was used. Firstly, when the outbreak signal is easy to detect among the baseline noise, the best performance in terms of detection is provided by the Farrington algorithm (Specificity = 100%, PPV = 99%, NPV = 95%, AUWROC = 79%) [Table 3]. Overall, fusion methods seem to perform at the same level as the best ODA when SND is positive. It should be noted that when the SND tends towards zero, fusion methods even seem to provide a slight improvement over ODAs. Then, when the outbreak signal is more difficult to detect among the baseline noise, the best performance in terms of detection is provided by the CUSUM algorithm (PPV = 96%, NPV = 91%, AUWROC = 59%) but when timeliness is considered more important than PPV, EWMA (time to detection = 5, proportion of delay = 37%, AUWROC = 54%) and the Farrington algorithm (time to detection = 5, proportion of delay = 55%, AUWROC = 56%), can be considered as a good compromise that comes at the price of a high rate of false alarms when the SND is negative (PPV = 25 to 46%).
Table 3 Influence of signal-to-noise difference (SND) characteristics on the performance metrics of the detection algorithms and the fusion methods
Evaluation of decision fusion
Majority voting
The voting method is the simplest DF method to implement, since it doesn't require a priori knowledge. Whatever the situation, to guarantee the best results for the voting method, it is better to use an odd number of independent ODAs [43]. The main qualities of this method are its timeliness (on average, only 49% of the total number of outbreak cases are required before detection, with a proportion of delay of 0.44 and detection occurring on average 5.3 days after the onset of the outbreak) and its relatively good performance as long as the SND remains positive. Another advantage is its simplicity of implementation and the possibility of changing the decision rule with the aim of optimizing detection. Here, we chose a majority voting decision rule, but others exist, such as Byzantine, unanimity, or m-out-of-n voting rules [44].
Although theoretically promising compared to the above technique (by overweighting the most efficient ODAs), weighted majority voting ultimately suffers from the limitations of voting methods without the advantage in terms of reactivity offered by the simple voting method. Xu [29] and several other authors have compared this approach with other DF methods and found that it usually underperforms, as it did in our study.
In the logistic regression method, the logit provides an estimated probability of an outbreak. In our experiment, we used the theoretical optimal threshold of 0.5 as the decision rule, as suggested by Verlinde [45], to confirm or invalidate the alarm. However, this threshold of 0.5 can be adjusted to improve sensitivity, specificity and predictive values by using another, experimentally determined threshold [46].
As explained by Verlinde, one advantage of logistic regression is the possibility of considering the βi parameters as a direct measure of the relative importance of each ODA. It minimized the total error rate (combining, with equal weight, the false alarm rate and the false negative rate), with a low rate of false alarms (0.0%) compared with the decision tree (0.3%) and majority voting (3.2%), but a higher rate of false negative days (2.7%) than majority voting (0.0%), though lower than the decision tree (7.7%). Verlinde [45] and Altmann [47] also considered logistic regression to be the best meta-classifier according to the AUC and accuracy criteria. According to these authors, logistic regression is useful when the different experts show significant differences in terms of accuracy, and it is also considered a robust method.
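As an illustration of this kind of meta-classifier, the following sketch fits a logistic regression whose features are the binary daily outputs of three ODAs, applies the 0.5 decision threshold, and reads the fitted coefficients as indicators of each ODA's relative importance. It is written in Python with scikit-learn for illustration only (the study itself used R), and all data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: columns are daily binary decisions of three ODAs,
# y_train is the (known) outbreak status of each day of the training period.
X_train = rng.integers(0, 2, size=(200, 3))
y_train = (0.8 * X_train[:, 0] + 0.5 * X_train[:, 1] + 0.1 * X_train[:, 2]
           + rng.normal(0, 0.3, 200) > 0.6).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# The coefficients play the role of the beta_i parameters: the larger the
# weight, the more influential the corresponding ODA in the fused decision.
print("ODA weights:", model.coef_[0])

# Fused decision for a new day, using the 0.5 threshold on the predicted probability.
new_day = np.array([[1, 0, 1]])
p_outbreak = model.predict_proba(new_day)[0, 1]
print("P(outbreak) =", round(p_outbreak, 3),
      "-> alarm" if p_outbreak >= 0.5 else "-> no alarm")
```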
Like logistic regression, CART can be used for ODA selection and ranking, by identifying the most important sensors (those near the root node). Because CART makes no assumption about the underlying distribution, it has an advantage over logistic regression models, particularly when the data are far from the (multivariate) normal distribution [34].
However, we agree with several authors in finding that the tree structure learned from the data is very sensitive to small changes in the training data set, which produce very different splits and ultimately make interpretation somewhat precarious [35, 48]. Moreover, depending on the type of dataset, a change in the split criterion can lead to the creation of very different trees. In addition, the different threshold parameters of the rpart algorithm did not allow us to improve prediction performance, especially in the datasets with a very low SND. According to the literature, the major reason for this instability is the hierarchical nature of the process: the effect of an error in the top split is propagated down to all of the splits below it. The performance of CART was consistently good, but slightly below that of the regression models and BN, and always more accurate than voting scheme methods. The difficulty in identifying the right settings remains a problem.
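The instability discussed above can be demonstrated with a small sketch: two shallow trees are grown on training sets that differ by only a few days, and their structures are compared. The sketch below uses scikit-learn's CART implementation in Python as an illustrative stand-in for rpart, with purely synthetic data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Synthetic training set: three ODA decisions per day and the true outbreak status.
X = rng.integers(0, 2, size=(300, 3)).astype(float)
y = ((X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 300)) > 1).astype(int)

# Grow one tree on the full set and another on a slightly perturbed subset.
tree_full = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
tree_pert = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X[10:], y[10:])

# Print the learned structures; with noisy data the top split can change,
# illustrating how an error near the root propagates to all splits below it.
print(export_text(tree_full, feature_names=["ODA1", "ODA2", "ODA3"]))
print(export_text(tree_pert, feature_names=["ODA1", "ODA2", "ODA3"]))
```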
Our evaluation results show that, whatever the outbreak and baseline characteristics, logistic regression and Bayesian networks were able to achieve detection with high accuracy (AUC = 0.70 – Table 1), similar to the best algorithm performance (AUC = 0.73). The ROC curve comparison for the prediction of "detection" presented in Fig. 1 shows that DFMs with a training step perform as well as the best ODA (CUSUM: AUC = 0.73).
Considering that an NPV around 0.93 was found for all methods (ODA and DFM), we observed a major gain (77%) in terms of positive predictive values (PPV) by using DFMs (BN, logistic regression and CART methods: PPV around 90%) compared to the best ODA (Farrington: PPV = 51%), which also requires a 5-year training period.
Bayesian methods are less reliant on the asymptotic results that frequentist methods require, a reliance that can be a hindrance in small-sample contexts [49]. Another advantage of a Bayesian model is that it makes no a priori hypothesis about the nature of the modeled relationships [50]. Like other DF "learning" methods, however, we noticed that BN performance occasionally depends on the learning step, making this method sensitive to that step. A further advantage of BN models is their capacity to enrich their "surveillance knowledge" from new cases, updating their probability tables even if surveillance practices change over time. This continuous training [47] enables the model to be updated and its predictive quality to be improved, allowing outbreak detection to be tailored to each surveillance system.
Using decision fusion for real time detection
Provided that the BN graph is adapted to the surveillance dataset, tools like NETICA© make it possible to visualize and calculate the conditional probability associated with each real-time ODA decision (Additional file 1: Table S1). Unlike other decision fusion methods, this dynamic tool also makes it possible to take into account the order in which results appear. For example, during the structure learning step of our experiment with the dataset based on a baseline of 1 and a signal of 30 for a real outbreak day, we identified three algorithms of interest: CUSUM, EWMA, and C3. We observed that when the CUSUM ODA triggers an alarm alone, while all the other ODAs remain silent, the probability of an outbreak is estimated at 81.0%. It grows to 96.8% if the second alarm is produced by EWMA and to 98.7% if the third is produced by C3. The results are modified as follows if the alarm sequence is EWMA/CUSUM/C3: 5.4, 96.8, 98.7%. However, if we take into account a new alarm (the fourth) triggered by an ODA with a non-significant link to the outbreak status, for example the C1 algorithm in this case, the probability falls to 50%, showing the importance of the training period for methods in which the contributing ODAs need to be selected.
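The sequential updating described above can be reproduced in miniature with Bayes' rule. The sketch below is a deliberately simplified, hypothetical Python example: the conditional probabilities are invented rather than learned from our data, the alarms are treated as conditionally independent given the outbreak status, and, unlike the learned network, this naive model is insensitive to the order of the alarms.

```python
# A minimal, hypothetical naive-Bayes update of the outbreak probability as
# ODA alarms arrive. The numbers below are invented for illustration only.

prior = 0.05                       # prior probability that a day is an outbreak day
# (P(alarm | outbreak), P(alarm | no outbreak)) for three hypothetical ODAs
likelihoods = {
    "CUSUM": (0.90, 0.05),
    "EWMA":  (0.85, 0.10),
    "C3":    (0.80, 0.15),
}

def update(p, oda, alarm):
    """Update P(outbreak) after observing one ODA decision (1 = alarm)."""
    p_a_out, p_a_noout = likelihoods[oda]
    l_out = p_a_out if alarm else 1 - p_a_out
    l_no = p_a_noout if alarm else 1 - p_a_noout
    return p * l_out / (p * l_out + (1 - p) * l_no)

p = prior
for oda in ["CUSUM", "EWMA", "C3"]:
    p = update(p, oda, alarm=1)
    print(f"after {oda} alarm: P(outbreak) = {p:.3f}")
```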
We agree with Jafarpour [22] that inference performed using a BN can help to develop what-if analyses in disease surveillance activity or to identify an efficient ODA configuration and combination given the desired level of detection performance. This type of tool provides insight into the features of detection methods that are important to optimize to obtain better detection.
Decision fusion: benefits and limitations
In this study, we tried to quantify the value of decision fusion (proof of concept) in disease surveillance by using a standardized simulated dataset that allows reproducible evaluation. The choice of a 20-year period was driven only by the sample size required for statistical precision in our study; this level of background information would not be required for routine implementation. Such a period is also an extreme situation because, in real-life surveillance, measurement practices and the ecology of diseases are not consistent over 20 years.
A number of extensions to this work may also improve its generalizability. First, we suggest considering, before implementation, other kinds of outbreak curves in addition to our Norovirus outbreak; indeed, we have known since Buckeridge and Jackson [4, 5] that ODA performance is influenced by curve shape. Our results were also affected by the quality of the training period for the models requiring that step. In the absence of historical data or of a simulated dataset realistic for the population under surveillance, the use of a single ODA versus a decision fusion tool still needs to be clarified and compared more precisely. That is why, before putting them into routine use, we advise epidemiologists to validate decision fusion models in their own context of use, with their own data, and especially by testing the different diseases habitually faced by their system.
As expected [7], the most informative determinant of detection performance was the SND, a parameter combining the baseline level and the peak size of the outbreak. However, one limitation in comparing surveillance and DF methods is the difficulty of choosing the evaluation metric to optimize. Indeed, according to the aim and context of surveillance, people in charge of surveillance systems need to optimize either the PPV, the NPV, the timeliness, or a mix of these metrics (AUWROC, AMOC, etc.). This limitation was addressed in our work by proposing different evaluation metrics and surveillance circumstances (surveillance scenarios).
Our results support the view that decision fusion models can decrease the risk of using a single, inappropriate ODA. Indeed, this approach does not require the prior choice of an ODA, which could be unsuitable for a specific context. In this sense, choosing decision fusion is a way to control the risk of ODA misspecification and of its limitations. In most cases, a decision fusion model outperforms a single algorithm. These results support the conceptual framework of Dietterich's reasons (statistical, computational, and representational) [19] that justify why multiple classifiers may work better than a single one.
The use of synthetic data in this work was driven only by our focus on reproducible assessment of performance across the different DF approaches; an in-depth application to real surveillance data is beyond the scope of this paper. Nevertheless, before any deployment of decision fusion methods in a real disease surveillance system using several algorithms on the same data, a confirmation step should be considered.
This work can be extended by including more decision fusion methods, such as Dempster-Shafer, fuzzy logic, and neural networks [28]/deep learning, or by using the framework of decision spaces [51].
Finally, our paper illustrates that a good decision fusion method (such as BN, logistic regression, or CART) was, in our experiment, at least equivalent to the best algorithm in terms of the compromise between early warning and the probability that a triggered alarm is false, whatever the situation faced by the system, and without the drawback of betting on the future. We therefore recommend a decision fusion model based on a Bayesian network approach to identify disease outbreaks in systems using several ODAs to analyze surveillance data. This conclusion does not take into consideration other characteristics of a surveillance system, especially its stability, its human involvement and its resulting timeliness.
Numerous tools in the field of Bayesian networks output a probability of outbreak presence/absence, making it possible to evaluate and readjust the decision threshold and to forecast in real time. For those less familiar with this kind of technique, we suggest using logistic regression when a learning dataset is available. Otherwise, when the SND is positive, a voting scheme technique can be considered in this specific circumstance.
In the future, once their parameters have been set, these statistical techniques could be integrated into decision support systems aiming to assist expert decision-making strategies during daily outbreak surveillance activities [52]. The major issue and challenge for such tools and techniques will be their adequacy to the decision-related activities of these experts in an outbreak context, described as real-setting, time-constrained, complex and uncertain situations [53, 54].
DFM: Decision fusion methods
NPV: Negative predictive value
ODA: Outbreak detection algorithm
PPV: Positive predictive value
SND: Signal-to-noise difference
Texier G, Buisson Y. From outbreak detection to anticipation. Rev Epidemiol Sante Publique. 2010;58(6):425–33.
Texier G. Evaluation methods for temporal outbreak detection algorithms in early warning surveillance. PhD. Marseille: Aix-Marseille University; 2016.
Bravata DM, McDonald KM, Smith WM, Rydzak C, Szeto H, Buckeridge DL, Haberland C, Owens DK. Systematic review: surveillance systems for early detection of bioterrorism-related diseases. Ann Intern Med. 2004;140(11):910–22.
Jackson ML, Baer A, Painter I, Duchin J. A simulation study comparing aberration detection algorithms for syndromic surveillance. BMC Med Informat Decis Making. 2007;7:6.
Buckeridge DL. Outbreak detection through automated surveillance: a review of the determinants of detection. J Biomed Inform. 2007;40(4):370–9.
Lombardo JS, Buckeridge DL. Disease surveillance: a public health informatics approach. Hoboken: Wiley; 2007.
Texier G, Farouh M, Pellegrin L, Jackson ML, Meynard JB, Deparis X, Chaudet H. Outbreak definition by change point analysis: a tool for public health decision? BMC Med Inform Decis Making. 2016;16:33.
Chen H, Zeng D, Yan P. Public health syndromic surveillance systems. In: Infectious disease informatics: syndromic surveillance for public health and BioDefense. Boston: Springer US; 2010. p. 9–31.
Fourati H, editor. Multisensor Data Fusion: From Algorithms and Architectural Design to Applications (Book). United States: Series: Devices, Circuits, and Systems, CRC Press, Taylor & Francis Group LLC; 2015.
Khaleghi B, Khamis A, Karray FO, Razavi SN. Multisensor data fusion: a review of the state-of-the-art. Information Fusion. 2013;14(1):28–44.
Li SZ. Encyclopedia of Biometrics: I-Z, vol. 1. New York: Springer Science & Business Media; 2009.
Rolka H, Burkom H, Cooper GF, Kulldorff M, Madigan D, Wong WK. Issues in applied statistics for public health bioterrorism surveillance using multiple data streams: research needs. Stat Med. 2007;26(8):1834–56.
Burkom H, Loschen W, Mnatsakanyan Z, Lombardo J. Tradeoffs driving policy and research decisions in biosurveillance. Johns Hopkins APL Tech Dig. 2008;27(4):299–312.
Burkom HS, Ramac-Thomas L, Babin S, Holtry R, Mnatsakanyan Z, Yund C. An integrated approach for fusion of environmental and human health data for disease surveillance. Stat Med. 2011;30(5):470–9.
Mnatsakanyan ZR, Burkom HS, Coberly JS, Lombardo JS. Bayesian information fusion networks for biosurveillance applications. J Am Med Inform Assoc. 2009;16(6):855–63.
Najmi AH, Magruder SF. An adaptive prediction and detection algorithm for multistream syndromic surveillance. BMC Med Inform Decis Making. 2005;5:33.
Lau EH, Cowling BJ, Ho LM, Leung GM. Optimizing use of multistream influenza sentinel surveillance data. Emerg Infect Dis. 2008;14(7):1154–7.
Jafarpour N, Precup D, Izadi M, Buckeridge D. Using hierarchical mixture of experts model for fusion of outbreak detection methods. AMIA Annu Symp Proc. 2013;2013:663–9.
Dietterich TG. Ensemble Methods in Machine Learning. In: Multiple Classifier Systems: First International Workshop, MCS 2000 Cagliari, Italy, June 21–23, 2000 Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg; 2000. p. 1–15.
Texier G, Jackson ML, Siwe L, Meynard JB, Deparis X, Chaudet H. Building test data from real outbreaks for evaluating detection algorithms. PLoS One. 2017;12(9):e0183992.
Centers for Disease Control and Prevention. Outbreaks of gastroenteritis associated with noroviruses on cruise ships--United States, 2002. MMWR Morb Mortal Wkly Rep. 2002;51(49):1112–5.
Jafarpour N, Izadi M, Precup D, Buckeridge DL. Quantifying the determinants of outbreak detection performance through simulation and machine learning. J Biomed Inform. 2015;53:180–7.
R Core Team. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2016. URL https://www.R-project.org/
Rossi G, Lampugnani L, Marchi M. An approximate CUSUM procedure for surveillance of health events. Stat Med. 1999;18(16):2111–22.
Hutwagner L, Thompson W, Seeman GM, Treadwell T. The bioterrorism preparedness and response early aberration reporting system (EARS). J Urban Health. 2003;80(2 Suppl 1):i89–96.
Farrington CP, Andrews NJ, Beale AD, Catchpole MA. A statistical algorithm for the early detection of outbreaks of infectious disease. J R Stat Soc Ser A. 1996;159(3):547.
Dasarathy BV. Sensor fusion potential exploitation-innovative architectures and illustrative applications. Proc IEEE. 1997;85(1):24–38.
Ruta D, Gabrys B. An overview of classifier fusion methods. Comput Inf Syst. 2000;7(1):1–10.
Xu L, Krzyzak A, Suen C. Methods of combining multiple classifiers and their applications to handwriting recognition. IEEE Trans Syst Man Cybernet. 1992;22(3):418–35.
Sinha A, Chen H, Danu DG, Kirubarajan T, Farooq M. Estimation and decision fusion: a survey. Neurocomputing. 2008;71(13–15):2650–6.
Jordan MI, Jacobs RA. Hierarchical mixtures of experts and the EM algorithm. Neural Comput. 1994;6(2):181–214.
Rahman AFR, Alam H, Fairhurst MC. Multiple classifier combination for character recognition: revisiting the majority voting system and its variations. In: Document analysis systems V: 5th international workshop, vol. 2002. Berlin, Heidelberg: Springer Berlin Heidelberg; 2002. p. 167–78.
Hosmer DW, Lemeshow S, Sturdivant RX. Applied logistic regression. New York: Wiley; 2013.
Harper PR. A review and comparison of classification algorithms for medical decision making. Health Policy. 2005;71(3):315–31.
Bishop CM. Pattern recognition and machine learning. Information science and statistics. New York: Springer-Verlag; 2006.
Breiman L. Classification and regression trees. Belmont: Wadsworth International Group; 1984.
Therneau T, Atkinson B, Ripley B. rpart: Recursive Partitioning and Regression Trees. R package version 4.1–10. 2015. https://CRAN.R-project.org/package=rpart.
Scutari M. Learning Bayesian networks with the bnlearn R package. J Stat Softw. 2010;35(3):22.
Nagarajan R, Scutari M, Lèbre S. Bayesian Networks in R: with Applications in Systems Biology. New York: Springer-Verlag; 2013.
Netica software. In. Vancouver, BC, Canada. Norsys Software Corporation. Available from: http://www.norsys.com/. Accessed 1 Mar 2019.
Kleinman KP, Abrams AM. Assessing surveillance using sensitivity, specificity and timeliness. Stat Methods Med Res. 2006;15(5):445–64.
Buckeridge DL, Burkom H, Campbell M, Hogan WR, Moore AW. Algorithms for rapid outbreak detection: a research synthesis. J Biomed Inform. 2005;38(2):99–113.
Lam L, Suen SY. Application of majority voting to pattern recognition: an analysis of its behavior and performance. IEEE Trans Syst Man Cybern Syst Hum. 1997;27(5):553–68.
Parhami B. Voting algorithms. IEEE Trans Reliab. 1994;43(4):617–29.
Verlinde P, Druyts P, Cholet G, Acheroy M. Applying Bayes based classifiers for Decision fusion in a multimodal identity verification system. In: International symposium on pattern recognition February 1999; Brussels, Belgium. 1999.
Ho TK, Hull JJ, Srihari SN. Decision combination in multiple classifier systems. IEEE Trans Pattern Anal Mach Intell. 1994;16(1):66–75.
Altmann A, Rosen-Zvi M, Prosperi M, Aharoni E, Neuvirth H, Schulter E, Buch J, Struck D, Peres Y, Incardona F, et al. Comparison of classifier fusion methods for predicting response to anti HIV-1 therapy. PLoS One. 2008;3(10):e3470.
Hastie T, Tibshirani R, Friedman JH. The elements of statistical learning: data mining, inference, and prediction. 2nd ed. New York: Springer-Verlag; 2009.
McNeish D. On using Bayesian methods to address small sample problems. Struct Equ Model Multidiscip J. 2016;23(5):750–73.
Ducher M, Kalbacher E, Combarnous F, Finaz de Vilaine J, McGregor B, Fouque D, Fauvel JP. Comparison of a Bayesian network with a logistic regression model to forecast IgA nephropathy. Biomed Res Int. 2013;2013:686150.
Giabbanelli PJ, Peters JG. An algebraic approach to combining classifiers. Procedia Comput Sci. 2015;51(C):1545–54.
Texier G, Pellegrin L, Vignal C, Meynard JB, Deparis X, Chaudet H. Dealing with uncertainty when using a surveillance system. Int J Med Inform. 2017;104:65–73.
Salas E, Klein G. Linking expertise and naturalistic decision making. Mahwah: Lawrence Erlbaum Associates Publishers; 2001.
Chaudet H, Pellegrin L, Bonnardel N. Special issue on the 11th conference on naturalistic decision making. Cogn Tech Work. 2015;17(3):315–8.
We are grateful to Dimanche Allo, Leonel Siwe, and Michael Jackson who were involved in building the tools used to generate the simulation datasets.
The authors received no specific funding for this work.
Data were simulated by resampling methods from real Norovirus outbreaks already published [21].
French Armed Forces Center for Epidemiology and Public Health (CESPA), SSA, Camp de Sainte Marthe, 13568, Marseille, France
Gaëtan Texier, Rodrigue S. Allodji, Jean-Baptiste Meynard, Liliane Pellegrin & Hervé Chaudet
UMR VITROME, IRD, AP-HM, SSA, IHU-Méditerranée Infection, Aix Marseille Univ, 13005, Marseille, France
CESP, Univ. Paris-Sud, UVSQ, INSERM, Université Paris-Saclay, Villejuif, France
Rodrigue S. Allodji
Cancer and Radiation Team, Gustave Roussy Cancer Center, F-94805, Villejuif, France
International Food Policy Research Institute (IFPRI), Regional Office for West and Central Africa, 24063, Dakar, Sénégal
Loty Diop
UMR 912 - SESSTIM - INSERM/IRD/Aix-Marseille Université, 13385, Marseille, France
Jean-Baptiste Meynard
GT, HC, LP, and JBM participated in data collection and study management. GT, LD, RA, and HC performed programming and simulating. GT, LD, RA, and HC participated in statistical analysis. GT, HC, LP, and JBM contributed significantly to the preparation of the study and its conception. All authors participated in the study and took part in the discussion and the writing of the article. All authors read and approved the final manuscript.
Correspondence to Gaëtan Texier.
Table S1. An example of a 25-year dataset (first 5 years as the training dataset + next 20 years as the evaluation dataset) used in this study to evaluate outbreak detection algorithms and decision fusion methods (baseline = 3 cases per day on average, total number of outbreak cases injected = 50). The baseline (Column A) corresponds to an average of 3 cases declared per day in the system, and the complete outbreak signal, corresponding to a total of 50 cases following the shape of a Norovirus outbreak, is injected (Column B) several times into the baseline. Column C indicates the first day of the outbreak (1 = start of the outbreak) and Column D all days considered epidemic (= 1). (XLSX 195 kb)
Texier, G., Allodji, R.S., Diop, L. et al. Using decision fusion methods to improve outbreak detection in disease surveillance. BMC Med Inform Decis Mak 19, 38 (2019) doi:10.1186/s12911-019-0774-3
Decision support system
Disease surveillance system
Decision fusion
Outbreak detection
Clinical decision-making, knowledge support systems, and theory
Surpassing the classical limit in magic square game with distant quantum dots coupled to optical cavities
Sinan Bugu, Fatih Ozaydin & Tetsuo Kodera
Scientific Reports volume 10, Article number: 22202 (2020)
The emergence of quantum technologies is heating up the debate on quantum supremacy, usually focusing on whether algorithms that look good on paper remain feasible in realistic settings, given the vulnerability of quantum systems to myriad sources of noise. In this vein, an interesting example of quantum pseudo-telepathy games, in which quantum mechanical resources can theoretically outperform classical resources, is the Magic Square game (MSG), where two players play against a referee. Due to noise, however, the unit winning probability of the players can drop well below the classical limit. Here, we propose a timely and unprecedented experimental setup for quantum computation with quantum dots inside optical cavities, along with ancillary photons for realizing interactions between distant dots to implement the MSG. Considering various physical imperfections of our setup, we first show that the MSG can be implemented with current technology, outperforming classical resources under realistic conditions. Next, we show that our work gives rise to a new version of the game: if the referee has information on the physical realization and strategy of the players, he can bias the game through filtered randomness and increase his winning probability. We believe our work contributes not only to quantum game theory, but also to quantum computing with quantum dots.
Quantum mechanical resources can enable tasks such as superdense coding and teleporting an unknown state1, which are impossible to realize with classical resources. Many approaches to optimizing quantum resources for efficient quantum computation and quantum communication, such as the gate model, quantum channel capacity, optimizing quantum memory, and algorithms, have been studied2,3,4,5,6,7,8,9,10,11,12. On the other hand, speeding up classically possible computational tasks, such as unsorted database search and factorization1, to an extent beyond the ability of any classical computer, together with other devoted efforts13,14,15 towards achieving supremacy, has been attracting intense attention. One of the most groundbreaking advances in quantum technologies is the recent claim of Google that they have achieved quantum supremacy16.
Surpassing the classically achievable limit in various tasks has also attracted considerable attention. For example, in quantum metrology, surpassing the classical shot noise limit has been studied extensively under various scenarios taking into account the standard decoherence channels and thermal noise17,18,19,20,21,22,23. Quantum resources also enable advantages in thermodynamics24,25,26,27. Quantum games, where "everyone wins"28, provide an interesting playground for investigating the advantages of utilizing various quantum weirdness over classical resources. Among quantum pseudo-telepathy games, where quantum mechanical resources can theoretically outperform classical resources, a widely studied one is the so-called Magic Square game (MSG), in which two players, say Alice and Bob, play against a referee. In the MSG, players are allowed to communicate, share any resources and agree on any strategy only until the game starts. The game is played on a \(3 \times 3\) square matrix with binary entries. Once the game starts, the referee gives numbers a and b to Alice and Bob, respectively, where \(a, b \in \{1, 2, 3\}\). Alice fills row a and Bob fills column b, i.e., each tells the referee the numbers to fill. They win if the sum of the numbers in row a (column b) is even (odd) and the intersecting element is the same. Otherwise, they lose, i.e., the referee wins. Let us illustrate one of the nine possible instances, in which the referee gives Alice \(a=2\) and Bob \(b=3\). They will win if they can fill the row and column as \(\{0,0,0\},\{0,0,1\}\), respectively, or \(\{0,1,1\},\{0,1,0\}\), for example, resulting in two possible winning instances (i) and (ii) given in Table 1.
Table 1 Two possible winning instances for players Alice and Bob, if they are given \(a=2\) and \(b=3\), respectively. They will win if Alice can fill the second row as \(\{0,0,0\}\) and Bob the third column as \(\{0,0,1\}\), or \(\{0,1,1\}\) and \(\{0,1,0\}\) corresponding to the final matrices shown on left (i) and right (ii), respectively.
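The classical limit discussed in the next paragraph can be checked by brute force: a deterministic classical strategy is simply a choice, for each possible question, of a valid row for Alice (even parity) and a valid column for Bob (odd parity), and shared randomness cannot do better on average than the best deterministic pair. The following Python sketch, an illustrative check rather than code from the original work, enumerates all such strategies and recovers the maximum average winning probability of 8/9.

```python
from itertools import product

# All rows Alice may answer (even parity) and all columns Bob may answer (odd parity).
even_rows = [r for r in product((0, 1), repeat=3) if sum(r) % 2 == 0]
odd_cols = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 1]

best = 0.0
# A deterministic strategy assigns one valid answer to each of the three questions.
for alice in product(even_rows, repeat=3):      # alice[a] = row filled for question a+1
    for bob in product(odd_cols, repeat=3):     # bob[b]  = column filled for question b+1
        wins = sum(alice[a][b] == bob[b][a]     # intersecting entries must agree
                   for a in range(3) for b in range(3))
        best = max(best, wins / 9)

print(best)  # 0.888... = 8/9
```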
The shortcoming of utilizing classical resources in playing the MSG is that no matter what strategy they choose, the players can win against the referee only in eight cases out of nine, resulting in the average winning probability 8/9. However, this winning probability could theoretically achieve unity if they could have shared a four-qubit entangled state given in Eq. (1), and applied an appropriate quantum strategy29.
$$\begin{aligned} |\phi \rangle = {1 \over 2} ( |0011\rangle + |1100\rangle - |0110\rangle - |1001\rangle ), \end{aligned}$$
where Alice holds the first two qubits and Bob holds the third and fourth qubits. This four qubit state is actually the composition of two EPR (Einstein–Podolsky–Rosen) pairs in the form \({1 \over \sqrt{2} }( |01\rangle - |10\rangle ) \otimes {1 \over \sqrt{2} }( |01\rangle - |10\rangle )\), each shared by Alice and Bob, such that Alice (Bob) possesses the first and third (second and fourth) qubits. The strategy they determine before the game starts is as follows. According to the row (column) number given by the referee, Alice (Bob) applies one of the three two-qubit operations \(A_a\) (\(B_b\)), where \(a, b \in \{1, 2, 3\}\), given in Eqs. (2) and (3). That is, following the above example, Alice applies \(A_2\), and Bob applies \(B_3\).
$$\begin{aligned} A_1 =\,{1 \over \sqrt{2}} \left( \begin{array}{cccc} i &{} \ \ 0 &{} 0 &{} 1 \\ 0 &{} -i &{} 1 &{} 0 \\ 0 &{} \ \ i &{} 1 &{} 0 \\ 1 &{} \ \ 0 &{} 0 &{} i \\ \end{array} \right) \!\!\! , \ \ \ \ A_2 = {1 \over 2} \left( \begin{array}{cccc} \ \ i &{} 1 &{} \ \ 1 &{} \ \ i \\ -i &{} 1 &{} -1 &{} \ \ i \\ \ \ i &{} 1 &{} -1 &{} -i \\ -i &{} 1 &{} \ \ 1 &{} -i \\ \end{array} \right) \!\! , \ \ \ \ A_3 = {1 \over 2} \left( \begin{array}{cccc} -1 &{} -1 &{} -1 &{} \ \ 1 \\ \ \ 1 &{} \ \ 1 &{} -1 &{} \ \ 1 \\ \ \ 1 &{} -1 &{} \ \ 1 &{} \ \ 1 \\ \ \ 1 &{} -1 &{} -1 &{} -1 \\ \end{array} \right) \!\! , \end{aligned}$$
$$\begin{aligned} B_1 =\, {1 \over 2} \left( \begin{array}{cccc} \ \ i &{} -i &{} \ \ 1 &{} \ \ 1 \\ -i &{} -i &{} \ \ 1 &{} -1 \\ \ \ 1 &{} \ \ 1 &{} -i &{} \ \ i \\ -i &{} \ \ i &{} \ \ 1 &{} \ \ 1 \\ \end{array} \right) \!\! , \ \ \ \ B_2 = {1 \over 2} \left( \begin{array}{cccc} -1 &{} \ \ i &{} 1 &{} \ \ i \\ \ \ 1 &{} \ \ i &{} 1 &{} -i \\ \ \ 1 &{} -i &{} 1 &{} \ \ i \\ -1 &{} -i &{} 1 &{} -i \\ \end{array} \right) \!\! , \ \ \ \ B_3 = {1 \over \sqrt{2}} \left( \begin{array}{cccc} \ \ 1 &{} 0 &{} \ \ 0 &{} 1 \\ -1 &{} 0 &{} \ \ 0 &{} 1 \\ \ \ 0 &{} 1 &{} \ \ 1 &{} 0 \\ \ \ 0 &{} 1 &{} -1 &{} 0 \\ \end{array} \right) \!\!. \end{aligned}$$
Next, measuring their qubits, each obtains two classical bits and determines the third bit according to the parity conditions. Note that the measurement of each party does not yield a single fixed result, but one of the possible results with some probability. However, thanks to the entangled state and the quantum strategy, in the ideal case where there is no noise and no experimental imperfection, the results of Alice and Bob always satisfy the parity conditions and agree on the intersecting number. Following the same example (\(a=2\), \(b=3\)), in addition to the two instances given in Table 1, the instances (iii) through (viii) given in Table 2 could occur, each with probability 1/8, summing up to unity. For a more detailed example, let us take instance (iv), in which, after applying \(A_2\) and \(B_3\), the measurement results yield two classical bits \(\{0,1\}\) for Alice and \(\{1,1\}\) for Bob. To satisfy the parity conditions, Alice extends her two-bit string to \(\{0,1,1\}\), and Bob to \(\{1,1,1\}\).
Table 2 Given \(a=2\) for Alice and \(b=3\) for Bob under ideal conditions, in addition to two possible winning instances given in Table 1, any of these six winning instances can occur each with probability 1/8 by applying \(A_2\) and \(B_3\) and performing the measurement.
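For readers who wish to reproduce the ideal-case behaviour numerically, the sketch below applies \(A_2 \otimes B_3\) to the state of Eq. (1), lists the outcome probabilities, and checks the winning condition for each outcome; under the ideal strategy every outcome with nonzero probability should be a winning one, reproducing the eight equiprobable instances of Tables 1 and 2. This is an illustrative NumPy check, not code from the original work.

```python
import numpy as np

# Two EPR pairs arranged so that Alice holds qubits 1,2 and Bob qubits 3,4 (Eq. 1).
phi = np.zeros(16, dtype=complex)
for ket, sign in [("0011", 1), ("1100", 1), ("0110", -1), ("1001", -1)]:
    phi[int(ket, 2)] = sign / 2

A2 = 0.5 * np.array([[1j, 1, 1, 1j],
                     [-1j, 1, -1, 1j],
                     [1j, 1, -1, -1j],
                     [-1j, 1, 1, -1j]])
B3 = (1 / np.sqrt(2)) * np.array([[1, 0, 0, 1],
                                  [-1, 0, 0, 1],
                                  [0, 1, 1, 0],
                                  [0, 1, -1, 0]])

probs = np.abs(np.kron(A2, B3) @ phi) ** 2   # Alice applies A2, Bob applies B3

a, b = 2, 3                                   # questions given by the referee
p_win = 0.0
for idx, p in enumerate(probs):
    if p < 1e-12:
        continue
    a1, a2, b1, b2 = (int(x) for x in format(idx, "04b"))
    row = [a1, a2, (a1 + a2) % 2]             # third entry keeps the row sum even
    col = [b1, b2, (b1 + b2 + 1) % 2]         # third entry makes the column sum odd
    if row[b - 1] == col[a - 1]:              # intersecting entries must agree
        p_win += p

print("winning probability:", round(p_win, 6))
```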
However, quantum systems are so fragile that any source of imperfection during the process might affect the performance of the task, and the MSG is no exception. The work of Gawron et al. on noise effects in the MSG30 clearly showed that if the qubits held by Alice and Bob are subject to noise, so that their four-qubit state is not the pure state in Eq. (1) but rather a mixed state, the average winning probability decreases, and with increasing noise the probability can drop well below the classical limit 8/9. This work was followed by others in various settings31,32,33,34,35. Hence, although the quantum advantage is imminent in theory, it is of great interest to design a physical system to bring this advantage to life and investigate the conditions for surpassing the classically achievable limit.
In this work, addressing this problem, we propose a setup within the reach of current technology, based on few-nanometer-sized silicon single quantum dots (SQDs)36,37,38 with a bandgap small enough to allow spin-photon interaction. Electron spins confined in quantum dots provide a promising basis for quantum computation, with potential for scaling and reasonably long coherence times39,40,41,42,43. In the basic proposal39, single spins form a logical basis with single-qubit operations via spin resonance. The silicon-based quantum dot has been studied intensively and has attracted great interest thanks to its charge offset stability and compatibility with CMOS and quantum information technology44,45,46,47,48,49. Hence, progressive approaches based on quantum dots have been proposed in various areas of quantum information, such as preparing multipartite entanglement via Pauli spin blockade in a double quantum dot system50, and coupling photonic and electronic qubits in microcavity systems51,52. What is more, coupling quantum dots to a nanophotonic waveguide53 and to an optical microcavity54 for quantum information processing has recently been demonstrated experimentally. By considering various physical imperfections, we first show that the MSG can be implemented in a quantum system outperforming classical resources under realistic conditions. Next, thanks to our physical analysis, we design a new version of the game, in which the referee, having information on the physical realization and strategy of the players, can bias the game in order to decrease their winning probability.
Extending each two-qubit operation on two spin qubits denoted as \(q_1\) and \(q_2\), to a three-qubit operation via an ancillary photon, so that any two-qubit operation could be realized on spatially separated spin qubits.
Our setup is based on quantum computation with quantum dots coupled to spatially separated optical cavities. In our setup, each spin of a quantum dot constituting a logical qubit of Alice or Bob is coupled to the optical field of its cavity. Introducing ancillary photons, quantum operations on the two distant qubits of each player are realized through photon-spin interactions. That is, as illustrated in Fig. 1, each two-qubit operation on logical qubits is extended to an equivalent three-qubit operation which is realized by only single-qubit operations on photons or spins, and two-qubit operations on photon-spin pairs. As already considered in many works55 and explained in the Methods section, our configuration realizes a controlled-phase \(CP(\pi -\theta )\) gate between spin and photon, which reduces to a controlled-Z (CZ) gate in the ideal condition \(\theta =0\), with \(CZ=CP(\pi )\). Hence, before considering physical imperfections and taking into account the effect of finite \(\theta\) on the winning probability, we first decompose each two-qubit unitary operation in terms of single-qubit gates (detailed in the Methods section) and CZ gates, and then extend the decomposed two-qubit circuits to three-qubit circuits.
As controlled-NOT (CNOT) gates together with single-qubit gates constitute a universal set, any unitary operation can be decomposed in terms of these gates1,56. What is more, a CNOT is equivalent to a \(\mathrm{CZ}\) up to two Hadamard (Had) gates applied to the target qubit before and after the \(\mathrm{CZ}\) gate, i.e. \(CNOT^{1,2} \equiv Had^2.CZ.Had^2\), where the superscript of a single-qubit gate denotes the qubit it is applied to, and we use \(\mathrm{CNOT}^{1,2}\) when the first qubit is the control and the second qubit the target, and \(\mathrm{CNOT}^{2,1}\) for the opposite case. For each operation \(A_a\) (and \(B_b\)), we find the decomposition \(A^d_a\) in terms of \(\mathrm{CNOT}\) and single-qubit gates, then the extension to a three-qubit circuit \(A^e_a\), and finally the circuit \(A^{CZ}_a\) consisting of only \(\mathrm{CZ}\) and single-qubit gates. We are now ready to present the decompositions we find for the two-qubit operations given in Eqs. (2) and (3), as
Circuit diagrams for realizing the operations of the players. Blue H gates represent Hadamard gates; purple gates represent controlled-Z (CZ) gates or \(R_x\), \(R_y\), and \(R_z\) gates, which are rotations around the x, y, and z axes, respectively. The referee gives the row number a to Alice and the column number b to Bob. Alice applies \(A_a\) and Bob applies \(B_b\), each to his/her two spin qubits (\(q_1\) and \(q_2\)) in distant optical cavities through an ancillary photon. As the photon passes through the cavity, the interaction realizes a CZ operation between the photon and the spin under ideal conditions. We used IBM qiskit57 to draw our decomposed circuit diagrams.
Proposed experimental setup for playing the game. The referee gives the number of the row a (column b) to Alice (Bob) to fill with binary entries. The initial four-qubit state given in Eq. (1) is a composition of two EPR pairs (one illustrated with red and the other with green circles) shared by Alice and Bob. Each quantum dot (red or green circle) coupled to an optical cavity (blue toroids) constitutes one logical qubit. Following the extension strategy in Fig. 1, each two-qubit operation (given in Eqs. 2 and 3) on the logical qubits (distant quantum dots) is realized via an ancillary photon traveling between the optical cavities as follows. Following a SWAP operation between the photon and the spin coupled to the first cavity, the photon is sent to the second cavity to realize the desired operation. "Operations" represent the overall operations as decomposed in Eqs. (2) and (3), each containing single-qubit operations and one or two CZ operations. Each CZ is realized through the interaction between the ancillary photon and the second spin qubit, \(q^A_2\) (or \(q^B_2\)). Each "Op" represents either an identity operator or a set of single-qubit operations on a photonic or spin qubit. After the "Operations", the photon is sent back to the first cavity for swapping back the quantum state with the spin qubit. The two spin qubits of each party are then ready to be measured for obtaining the binary entries.
$$\begin{aligned} A^d_1 = e^{ - {7\pi \over 8} i} R_z^1({\pi / 4}).CNOT^{2,1}. R_z^2({7\pi / 4}) . R_x^2({\pi / 2}). R_z^1({7\pi / 4}). CNOT^{2,1}. R_z^1({\pi / 2}). R_y^1({\pi }). R_z^2({3\pi / 2}). R_y^2({\pi }), \end{aligned}$$
$$\begin{aligned} A^d_2 = R_y^1({\pi / 2}) . R_z^2({\pi }) . CNOT^{1,2} . R_z^1({\pi / 2}). R_y^1({\pi }) . R_z^2({\pi }). R_y^2({\pi / 2}) . R_z^2({ 3 \pi / 2}), \end{aligned}$$
$$\begin{aligned} A^d_3 = R_z^1(\pi ) . R_y^1(\pi / 2) . CNOT^{1,2} . R_y^1(\pi /2). R_z^1(\pi ) . R_y^2(\pi ), \end{aligned}$$
$$\begin{aligned} B^d_1 = e^{ {7\pi \over 8} i} R_x^2(3 \pi / 2) . R_y^2(3 \pi / 4) . CNOT^{1,2} . R_z^1(\pi / 4) . R_y^1(3 \pi / 2). R_y^2(3 \pi / 2). CNOT^{1,2} . R_z^1(2 \pi ) . R_z^2(3 \pi / 2) . R_y^2(\pi ), \end{aligned}$$
$$\begin{aligned} B^d_2 = e^{i \pi } R_y^1( \pi / 2). R_y^2( \pi / 2). R_z^2(3 \pi / 2). CNOT^{1,2} . R_z^1(3 \pi / 2). R_z^2( \pi ), \end{aligned}$$
$$\begin{aligned} B^d_3 = R_y^1( \pi / 2). R_z^2( \pi ). R_y^2( \pi ). CNOT^{1,2}. R_y^1( \pi / 2). R_z^1( \pi ) . R_y^2( \pi / 2). \end{aligned}$$
Unlike the other four operations, \(A_1\) and \(B_1\) require not one but two CNOT gates in their decompositions; they will therefore play a significant role in the physical realization of the task, and give rise to a new version of the game.
For extending the decomposed two-qubit (spin–spin) circuits to three-qubit (spin–photon–spin) circuits as illustrated in Fig. 1, we make use of two-qubit SWAP gates, which can be realized as \(SWAP \equiv CNOT^{1,2}.CNOT^{2,1}.CNOT^{1,2}\). Our strategy for realizing the interaction between two spins via a three-qubit operation using only two-qubit gates is as follows. For each player, the ancillary photon is sent to the first cavity to interact several times. Before and after each interaction, which realizes a CZ gate between the photon and the spin, Hadamard gates are applied to both qubits appropriately, so that three CNOT gates equivalent to a SWAP gate are realized. That is, the quantum states of the first spin and the ancillary photon are swapped. The photon is then sent to the other cavity containing the second spin. Through interactions realizing CZ gates, and single-qubit operations on the spin and the photon, the actual operation is realized. Finally, the photon is sent back to the first cavity to swap back the quantum state with the spin. The overall operation is equivalent to the corresponding two-qubit operation of the player. We illustrate the circuit diagram for each overall operation in Fig. 2 and the corresponding experimental setup in Fig. 3.
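The gate identities used above are easy to verify numerically. The following sketch (illustrative, not taken from the original work) checks that a CNOT is a CZ conjugated by Hadamards on the target qubit, and that three alternating CNOTs realize a SWAP.

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

CNOT_12 = np.array([[1, 0, 0, 0],   # control = qubit 1, target = qubit 2
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
CNOT_21 = np.array([[1, 0, 0, 0],   # control = qubit 2, target = qubit 1
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]], dtype=float)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

# CNOT^{1,2} = (I (x) H) . CZ . (I (x) H): Hadamards on the target before and after the CZ.
print(np.allclose(np.kron(I, H) @ CZ @ np.kron(I, H), CNOT_12))   # True
# SWAP = CNOT^{1,2} . CNOT^{2,1} . CNOT^{1,2}
print(np.allclose(CNOT_12 @ CNOT_21 @ CNOT_12, SWAP))             # True
```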
We start with the initial state (in Eq. 1) and two ancillary photons each in \(|0\rangle\) state in the physical order of qubits as \(Alice^1, Ancilla^{A}, Alice^2\) and \(Bob^1,Ancilla^B,Bob^2\), which can be written as
$$\begin{aligned} |\Psi \rangle =\, (SWAP \otimes id \otimes id \otimes SWAP) .(|0\rangle ^{A} \otimes |\phi \rangle \otimes |0\rangle ^{B}), \end{aligned}$$
where id is the single-qubit identity operator. Note first that this writing is only for the sake of clarity, to explain the physical order of the qubits; hence the SWAP operations in Eq. (10) are not taken into account in the physical realization. Note also that, for simplicity in tracing out operations during the calculations, we start by swapping the ancillary photon with the first spin qubit of Alice, while we swap the ancillary photon with the second spin qubit of Bob. With \(a,b \in \{1,2,3\}\), the extended three-qubit operations \(A^e_a\) and \(B^e_b\) are defined as
$$\begin{aligned} A^e_a = (SWAP \otimes id) . (id \otimes A^d_a). (SWAP \otimes id), \end{aligned}$$
$$\begin{aligned} B^e_b = (id \otimes SWAP) . (B^d_b \otimes id). (id \otimes SWAP). \end{aligned}$$
Upon receiving the number a (b) from the referee, Alice (Bob) applies the operation \(A^e_a\) (\(B^e_b\)). The next step is to trace out the ancillary qubits, and finally to perform the measurements. Under ideal conditions, as the ancillary qubits are back in their initial \(|0\rangle\) states, separable from the logical qubits, tracing them out does not disturb the logical qubits. Note that, as these measurements are not Bell measurements, they can be performed on the distant spin qubits separately.
Physical imperfections
Neglecting imperfections such as the decoherence or absorption of photons traveling between distant cavities, the major physical imperfections we take into account in this analysis are due to the Q-factor of the chosen optical cavity, the coherence of the qubits and the coupling, all of which contribute to the imperfection of the desired operation between spin qubits and photon qubits. In general, according to the technology used, the realized operation might deviate from the ideal \(CZ \equiv CP(\pi )\) to \(CP(\pi -\theta )\). Following our decomposition and extension, it is straightforward to take this effect into account by simply replacing each CZ in the circuits with \(CP(\pi -\theta )\). This time, the final states of the ancillary qubits will not be \(|0\rangle\), and the measurement result on each qubit pair will not yield 1/8, but rather a function of \(\theta\). We plot the success probability \(P_s\) for each \(\{a,b\}\) in Fig. 4. As expected, \(a=1\) and \(b=1\) is the worst case for the players. In contrast to the other cases, the decompositions of both \(A_1\) and \(B_1\) contain not one but two CNOT gates, so they require two imperfect CZ gates in realistic settings, which leads to a potential new version of the MSG.
Following the usual scenarios in quantum games, we assumed that Alice and Bob initially shared the ideal state given in Eq. (1), and we did not take into account imperfections in preparing the state, which could slightly decrease the overall success probabilities. However, as the initial state consists of two Bell states, its preparation is straightforward58,59. On the other hand, Alice and Bob could choose to prepare the initial state with two of the four qubits being photonic qubits rather than all four being spin qubits. Hence, the first SWAP gate (i.e. the imperfect CZ gates required to realize it) could be removed, this time increasing the overall success probability.
For realizing the interaction between the incident photon and the spin qubit, not only in quantum dots but also in nitrogen-vacancy centers in diamond and in atomic qubits, various optical microcavities have been considered. Achieving ultra-high quality factors, microtoroid resonators with whispering gallery modes are promising60,61,62,63. Single-sided or double-sided cavities, even with small Q-factors64,65, have also been shown to be candidates for realizing atom-photon or spin-photon interactions with high success rate, enabling myriad quantum information processing tasks from entanglement generation to quantum teleportation (see Refs.58,59,66,67 and references therein).
Note that our particular setup is robust because, due to Eq. (18), the parameter \(\theta\) can take only two values, \(\pi\) or 0, the latter corresponding either to realizing the desired operation according to the conditions (as explained in the Methods section), or to deviating from \(\pi\) only under extremely imperfect conditions such as \(g \ll 5 \sqrt{\kappa \gamma }\) (where g is the coupling strength of the cavity to the quantum dot, \(\kappa\) is the cavity decay rate and \(\gamma\) is the quantum dot spin decay rate), which is not anticipated with high-Q resonators. However, our analysis of the effect of physical imperfections on the success probability is more general, applying to realization in any technology where the interaction yields the operation \(CP(\pi -\theta )\) with a finite \(\theta\).
Our analysis showed that, if some of the possible cases for realizing a task require more complicated operations, new versions of the task can arise. Suppose Alice and Bob are playing the MSG against a referee with the present experimental setup, following our decomposition. Then the question lying at the heart of game theory is whether the referee has information on their setup and strategy. If so, in order to decrease the winning probability of the players (that is, to increase his own winning probability), instead of drawing evenly distributed random numbers a and b for the row and column in each round, the referee can tend to always choose \(a=1\) or \(b=1\), and even \(a=b=1\).
In summary, taking into account possible physical imperfections, we proposed a physical setup for playing the MSG that is feasible with current technology. We found the limits of the imperfections for surpassing the classical winning probability. We also showed that, given partial information, the referee can bias the game to increase his/her winning probability, which gives rise to a new version of the Magic Square game.
Success probability \(P_s\) as a function of \(\theta\) for each pair of numbers \(\{a, b\}\) the referee can give Alice and Bob for filling the row and column, respectively, with binary entries. Here, \(\theta\) represents the imperfection of the interaction between the logical qubit and the ancillary qubit, i.e. realizing not the desired \(CZ \equiv CP(\pi )\) but the \(CP(\pi -\theta )\) operation. Operations \(A_1\) and \(B_1\) (corresponding to \(a=1\) and \(b=1\), respectively) are more complex than the others in that they contain more controlled operations, i.e. \(CP(\pi -\theta )\). Hence, the success probability of the players decreases faster for \(a=1\) or \(b=1\), and fastest for \(a=b=1\).
A quantum dot placed inside an optical cavity can couple to the cavity mode, and the interaction between the cavity field and the quantum dot spin is governed by the Jaynes-Cummings model with the Hamiltonian
$$\begin{aligned} H = \sum _{j=R,L} [ {\omega _{j0} \over 2} \sigma _{jz} + \omega _{jC} a^{\dagger }_j a_j + i g_j(a_j \sigma _{j+} - a^{\dagger }_j \sigma _{j-})] + H_R, \end{aligned}$$
where \(a^{\dagger }\) and a are the creation and annihilation operators of the cavity field, respectively. R and L denote the circular polarizations of the photon, associated with the optical transitions in the quantum dot (see Fig. 5), and the index j runs over R and L. \(\omega _0\) and \(\omega _C\) are the transition frequency of the electronic energy levels and the frequency of the cavity field, respectively; \(\sigma _+\), \(\sigma _-\) and \(\sigma _z\) are the raising, lowering and inversion operators of the quantum dot spin between the two corresponding levels, respectively. The Hamiltonian of the field and atomic reservoirs is denoted by \(H_R\), and we take \(\hslash =1\). Applying a magnetic field, non-zero spin level splitting can be achieved, so that R and L polarized photons receive different phase shifts upon interaction with the quantum dot-cavity system58, as explained below.
When an incident photon with frequency \(\omega _p\) is introduced to the cavity, the Langevin equations for a and \(\sigma _-\) can be obtained for the low temperature reservoir and neglecting the vacuum input field, as
$$\begin{aligned} { \text {d} a_j \over \text {d}t} = [ i (\omega _p - \omega _C) - { \kappa \over 2} ] a_j(t) - g\sigma _{j-}(t) - \sqrt{\kappa } a_{j, in} (t), \end{aligned}$$
$$\begin{aligned} { \text {d} \sigma _{j-} \over \text {d}t} = [ i (\omega _p - \omega _0) - { \gamma \over 2} ] \sigma _{j-}(t) - g\sigma _{j,z}(t) a_{j} (t), \end{aligned}$$
where g is the coupling strength of the cavity to the quantum dot, \(\kappa\) is the cavity decay rate and \(\gamma\) is the quantum dot spin decay rate. Assuming the weak excitation limit \(\langle \sigma _z \rangle = -1\) and adiabatically eliminating the cavity mode, the reflection coefficient for the input photon pulse is found as59,68
$$\begin{aligned} r(\omega _p) = { [ i (\omega _C - \omega _p) - {\kappa \over 2} ] [ i (\omega _0 - \omega _p) + {\gamma \over 2} ] + g^2 \over [ i (\omega _C - \omega _p) + {\kappa \over 2} ] [ i (\omega _0 - \omega _p) + {\gamma \over 2} ] + g^2 }. \end{aligned}$$
If the quantum dot is uncoupled from the cavity, the reflection coefficient for the input photon becomes
$$\begin{aligned} r_0(\omega _p) = { i (\omega _C - \omega _p) - {\kappa \over 2} \over i (\omega _C - \omega _p) + {\kappa \over 2} }. \end{aligned}$$
The reflection coefficients can be obtained for the resonant condition \(\omega _p=\omega _0=\omega _C\) as
$$\begin{aligned} r(\omega _p) = { - \kappa \gamma + 4 g^2 \over \kappa \gamma + 4 g^2 }, \ \ \text {and} \ \ r_0(\omega _p) = -1. \end{aligned}$$
\(\Lambda\) type optical transitions possible in a quantum dot. The transitions \(|-\rangle \leftrightarrow |e\rangle\) and \(|+\rangle \leftrightarrow |e\rangle\) are associated with the left and right polarization of the photon, denoted as \(|L\rangle\) and \(|R\rangle\) respectively.
Due to the spin-dependent optical transition rules55, as illustrated in Fig. 5, and the optical Faraday rotation, an \(|R\rangle\) polarized incident photon receives a phase shift \(e^{i \phi _0}\) because, due to the large level splitting, the spin state of the quantum dot is decoupled from the incident pulse58. However, if the incident photon is \(|L\rangle\) polarized, it will receive a phase shift \(e^{i \phi }\) (\(e^{i \phi _0}\)) depending on the spin state of the quantum dot being \(|-\rangle\) \((|+\rangle )\), where \(\phi\) and \(\phi _0\) are the arguments of \(r(\omega _p)\) and \(r_0(\omega _p)\), respectively. For the resonant condition and \(g > 5 \sqrt{\kappa \gamma }\), one approximately finds \(\phi =0\) and \(\phi _0=\pi\). Placing a \(\pi\) phase shifter in the photon reflection path, a controlled-Z gate between the electronic spin of the quantum dot and the incident photon is realized as \(|R\rangle |+\rangle \rightarrow |R\rangle |+\rangle\), \(|R\rangle |-\rangle \rightarrow |R\rangle |-\rangle\), \(|L\rangle |+\rangle \rightarrow |L\rangle |+\rangle\), \(|L\rangle |-\rangle \rightarrow -|L\rangle |-\rangle\). Single-qubit operations on spins and incident photons can be implemented effectively and with high fidelity via electric pulses69 and half-wave plates70, respectively. The one- and two-qubit operations we use in this work are
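As a quick numerical check of this phase behaviour, the sketch below evaluates the reflection coefficients of Eqs. (16) and (17) at resonance for a coupling satisfying \(g > 5 \sqrt{\kappa \gamma }\). It is an illustrative Python sketch; the parameter values are arbitrary and not measured quantities.

```python
import numpy as np

def r_coupled(w_p, w_c, w_0, g, kappa, gamma):
    """Reflection coefficient with the quantum dot coupled to the cavity (Eq. 16)."""
    num = (1j * (w_c - w_p) - kappa / 2) * (1j * (w_0 - w_p) + gamma / 2) + g**2
    den = (1j * (w_c - w_p) + kappa / 2) * (1j * (w_0 - w_p) + gamma / 2) + g**2
    return num / den

def r_empty(w_p, w_c, kappa):
    """Reflection coefficient with the quantum dot uncoupled (Eq. 17)."""
    return (1j * (w_c - w_p) - kappa / 2) / (1j * (w_c - w_p) + kappa / 2)

# Arbitrary illustrative values (in units of kappa), resonant case, g = 6*sqrt(kappa*gamma).
kappa, gamma = 1.0, 0.1
g = 6 * np.sqrt(kappa * gamma)
w = 0.0                                   # w_p = w_0 = w_C

r = r_coupled(w, w, w, g, kappa, gamma)
r0 = r_empty(w, w, kappa)
print("coupled:   |r|  = %.3f, phase = %.3f rad" % (abs(r), np.angle(r)))    # phase ~ 0
print("uncoupled: |r0| = %.3f, phase = %.3f rad" % (abs(r0), np.angle(r0)))  # phase = pi
```

With these values the coupled cavity reflects the photon with a phase close to 0, while the uncoupled cavity imparts a \(\pi\) phase, which is the conditional phase exploited to build the CZ gate described above.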
$$\begin{aligned} \text {CNOT} = \left( \begin{array}{cccc} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 1 &{} 0 \\ \end{array} \right) , \ \ \ CP( \theta ) = \left( \begin{array}{cccc} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} \exp ({i \theta })\\ \end{array} \right) , \end{aligned}$$
$$\begin{aligned} R_x( \theta ) = \left( \begin{array}{cc} \cos {\theta \over 2} &{} i \sin {\theta \over 2} \\ i \sin {\theta \over 2} &{} \cos {\theta \over 2} \\ \end{array} \right) ,\ \ \ R_y( \theta ) = \left( \begin{array}{cc} \cos {\theta \over 2} &{} \sin {\theta \over 2} \\ - \sin {\theta \over 2} &{} \cos {\theta \over 2} \\ \end{array} \right) ,\ \ \ R_z( \theta ) = \left( \begin{array}{cc} \exp ({i \theta \over 2}) &{} 0 \\ 0 &{} \exp ({ - i \theta \over 2}) \\ \end{array} \right) . \end{aligned}$$
Nielsen, M. A. & Chuang, I. L. Quantum computation and quantum information. Phys. Today 54, 60 (2001).
Gyongyosi, L. & Imre, S. A survey on quantum computing technology. Comput. Sci. Rev. 31, 51–71 (2019).
Gyongyosi, L., Imre, S. & Nguyen, H. V. A survey on quantum channel capacities. IEEE Commun. Surv. Tutor. 20, 1149–1205 (2018).
Gyongyosi, L. & Imre, S. Optimizing High-Efficiency Quantum Memory with Quantum Machine Learning for Near-Term Quantum Devices. Sci. Rep. 10, 135 (2020).
Gyongyosi, L. & Imre, S. Circuit depth reduction for gate-model quantum computers. Sci. Rep. 10, 1–17 (2020).
Gyongyosi, L. Quantum state optimization and computational pathway evaluation for gate-model quantum computers. Sci. Rep. 10, 2 (2020).
Gyongyosi, L. & Imre, S. Dense quantum measurement theory. Sci. Rep. 9, 2 (2019).
Gyongyosi, L. & Imre, S. Quantum circuit design for objective function maximization in gate-model quantum computers. Quantum Inf. Process. 18, 225 (2019).
Gyongyosi, L. & Imre, S. Training optimization for gate-model quantum neural networks. Sci. Rep. 9, 2 (2019).
Farhi, E., Gamarnik, D. & Gutmann, S. The quantum approximate optimization algorithm needs to see the whole graph: A typical case. arXiv preprint arXiv:2004.09002 (2020).
Farhi, E., Goldstone, J., Gutmann, S. & Leo, Z. The quantum approximate optimization algorithm and the sherrington-kirkpatrick model at infinite size. arXiv preprint arXiv:1910.08187 (2019).
Lloyd, S. Quantum approximate optimization is computationally universal. arXiv preprint arXiv:1812.11075 (2018).
Harrow, A. W. & Montanaro, A. Quantum computational supremacy. Nature 549, 203–209 (2017).
Neill, C. et al. A blueprint for demonstrating quantum supremacy with superconducting qubits. Science 360, 195–199 (2018).
Bremner, M. J., Montanaro, A. & Shepherd, D. J. Achieving quantum supremacy with sparse and noisy commuting quantum computations. Quantum 1, 8 (2017).
Arute, F. et al. Quantum supremacy using a programmable superconducting processor. Nature 574, 505–510 (2019).
Pezzé, L. & Smerzi, A. Entanglement, nonlinear dynamics, and the Heisenberg limit. Phys. Rev. Lett. 102, 100401 (2009).
Ma, J., Huang, Y.-X., Wang, X. & Sun, C. Quantum fisher information of the Greenberger–Horne–Zeilinger state in decoherence channels. Phys. Rev. A 84, 022302 (2011).
Erol, V., Ozaydin, F. & Altintas, A. A. Analysis of entanglement measures and locc maximized quantum Fisher information of general two qubit systems. Sci. Rep. 4, 5422 (2014).
Ozaydin, F. Phase damping destroys quantum Fisher information of W states. Phys. Lett. A 378, 3161–3164 (2014).
Altintas, A. A. Quantum Fisher information of an open and noisy system in the steady state. Ann. Phys. 367, 192–198 (2016).
Ozaydin, F. & Altintas, A. A. Quantum metrology: Surpassing the shot-noise limit with Dzyaloshinskii–Moriya interaction. Sci. Rep. 5, 16360 (2015).
Ozaydin, F. & Altintas, A. A. Parameter estimation with Dzyaloshinskii–Moriya interaction under external magnetic fields. Opt. Quantum Electron. 52, 70 (2020).
Scully, M. O., Zubairy, M. S., Agarwal, G. S. & Walther, H. Extracting work from a single heat bath via vanishing quantum coherence. Science 299, 862–864 (2003).
Türkpençe, D. & Müstecaplıoğlu, Ö. E. Quantum fuel with multilevel atomic coherence for ultrahigh specific work in a photonic carnot engine. Phys. Rev. E 93, 012145 (2016).
Tuncer, A., Izadyari, M., Dağ, C. B., Ozaydin, F. & Müstecaplıoğlu, Ö. E. Work and heat value of bound entanglement. Quantum Inf. Process. 18, 373 (2019).
Dag, C. B., Niedenzu, W., Ozaydin, F., Mustecaplıoglu, O. E. & Kurizki, G. Temperature control in dissipative cavities by entangled dimers. J. Phys. Chem. C 123, 4035–4043 (2019).
Ball, P. Everyone wins in quantum games (1999).
Brassard, G., Broadbent, A. & Tapp, A. Quantum pseudo-telepathy. Found. Phys. 35, 1877–1907 (2005).
ADS MathSciNet MATH Article Google Scholar
Gawron, P., Miszczak, J. & Sładkowski, J. Noise effects in quantum magic squares game. Int. J. Quantum Inf. 6, 667–673 (2008).
MATH Article Google Scholar
Ramzan, M. & Khan, M. Distinguishing quantum channels via magic squares game. Quantum Inf. Process. 9, 667–679 (2010).
MathSciNet MATH Article Google Scholar
Fialík, I. Noise and the magic square game. Quantum Inf. Process. 11, 411–429 (2012).
Gawron, P. & Pawela, Ł. Relativistic quantum pseudo-telepathy. Acta Phys. Pol., B 47, 1147 (2016).
Pawela, Ł, Gawron, P., Puchała, Z. & Sładkowski, J. Enhancing pseudo-telepathy in the magic square game. PLoS One 8, e64694 (2013).
Ozaydin, F. Quantum pseudo-telepathy in spin systems: The magic square game under magnetic fields and the Dzyaloshinskii–Moriya interaction. Laser Phys. 30, 025203 (2020).
Garoufalis, C., Zdetsis, A. D. & Grimme, S. High level ab initio calculations of the optical gap of small silicon quantum dots. Phys. Rev. Lett. 87, 276402 (2001).
Wilcoxon, J., Samara, G. & Provencio, P. Optical and electronic properties of Si nanoclusters synthesized in inverse micelles. Phys. Rev. B 60, 2704 (1999).
Wolkin, M., Jorne, J., Fauchet, P., Allan, G. & Delerue, C. Electronic states and luminescence in porous silicon quantum dots: The role of oxygen. Phys. Rev. Lett. 82, 197 (1999).
Loss, D. & DiVincenzo, D. P. Quantum computation with quantum dots. Phys. Rev. A 57, 120 (1998).
DiVincenzo, D. P., Bacon, D., Kempe, J., Burkard, G. & Whaley, K. B. Universal quantum computation with the exchange interaction. Nature 408, 339–342 (2000).
Taylor, J. et al. Fault-tolerant architecture for quantum computation using electrically controlled semiconductor spins. Nat. Phys. 1, 177–183 (2005).
Veldhorst, M. et al. An addressable quantum dot qubit with fault-tolerant control-fidelity. Nat. Nanotechnol. 9, 981 (2014).
Leon, R. et al. Coherent spin control of s-, p-, d-and f-electrons in a silicon quantum dot. Nat. Commun. 11, 1–7 (2020).
ADS Google Scholar
Zimmerman, N. M., Huber, W. H., Fujiwara, A. & Takahashi, Y. Excellent charge offset stability in a si-based single-electron tunneling transistor. Appl. Phys. Lett. 79, 3188–3190 (2001).
Fujiwara, A. & Takahashi, Y. Manipulation of elementary charge in a silicon charge-coupled device. Nature 410, 560–562 (2001).
Dutta, A., Oda, S., Fu, Y. & Willander, M. Electron transport in nanocrystalline Si based single electron transistors. Jpn. J. Appl. Phys. 39, 4647 (2000).
Takahashi, Y., Ono, Y., Fujiwara, A. & Inokawa, H. Silicon single-electron devices. J. Phys.: Condens. Matter 14, R995 (2002).
ADS CAS Google Scholar
Ono, Y., Fujiwara, A., Nishiguchi, K., Inokawa, H. & Takahashi, Y. Manipulation and detection of single electrons for future information processing. J. Appl. Phys. 97, 2 (2005).
Bugu, S. et al. RF reflectometry for readout of charge transition in a physically defined PMOS silicon quantum dot. arXiv preprint arXiv:2010.07566 (2020).
Bugu, S., Ozaydin, F., Ferrus, T. & Kodera, T. Preparing multipartite entangled spin qubits via pauli spin blockade. Sci. Rep. 10, 1–8 (2020).
Han, X. et al. Effective W-state fusion strategies for electronic and photonic qubits via the quantum-dot-microcavity coupled system. Sci. Rep. 5, 12790 (2015).
Li, N., Yang, J. & Ye, L. Realizing an efficient fusion gate for W states with cross-Kerr nonlinearities and QD-cavity coupled system. Quantum Inf. Process. 14, 1933–1946 (2015).
ADS MATH Article Google Scholar
Uppu, R. et al. On-chip deterministic operation of quantum dots in dual-mode waveguides for a plug-and-play single-photon source. Nat. Commun. 11, 3782 (2020).
Najer, D. et al. A gated quantum dot strongly coupled to an optical microcavity. Nature 575, 622–627 (2019).
Cheng, L.-Y., Wang, H.-F. & Zhang, S. Simple schemes for universal quantum gates with nitrogen-vacancy centers in diamond. JOSA B 30, 1821–1826 (2013).
Iten, R. et al. Introduction to universalQcompiler. arXiv preprint arXiv:1904.01072 (2019).
Cross, A. The ibm q experience and qiskit open-source quantum computing software. APS 2018, L58-003 (2018).
Hu, C., Young, A., O'Brien, J., Munro, W. & Rarity, J. Giant optical faraday rotation induced by a single-electron spin in a quantum dot: Applications to entangling remote spins via a single photon. Phys. Rev. B 78, 085307 (2008).
Hu, C., Munro, W. & Rarity, J. Deterministic photon entangler using a charged quantum dot inside a microcavity. Phys. Rev. B 78, 125318 (2008).
Wei, H.-R. & Long, G. L. Hybrid quantum gates between flying photon and diamond nitrogen-vacancy centers assisted by optical microcavities. Sci. Rep. 5, 12918 (2015).
Chen, Q., Yang, W., Feng, M. & Du, J. Entangling separate nitrogen-vacancy centers in a scalable fashion via coupling to microtoroidal resonators. Phys. Rev. A 83, 054305 (2011).
Cheng, L.-Y., Wang, H.-F., Zhang, S. & Yeon, K.-H. Quantum state engineering with nitrogen-vacancy centers coupled to low-Q microresonator. Opt. Express 21, 5988–5997 (2013).
Wei, H.-R. & Deng, F.-G. Compact quantum gates on electron-spin qubits assisted by diamond nitrogen-vacancy centers inside cavities. Phys. Rev. A 88, 042323 (2013).
An, J.-H., Feng, M. & Oh, C. Quantum-information processing with a single photon by an input-output process with respect to low-Q cavities. Phys. Rev. A 79, 032303 (2009).
Li, M., Lin, J.-Y. & Zhang, M. High-fidelity hybrid quantum gates between a flying photon and diamond nitrogen-vacancy centers assisted by low-Q single-sided cavities. Ann. Phys. 531, 1800312 (2019).
Duan, L.-M. & Kimble, H. Scalable photonic quantum computation through cavity-assisted interactions. Phys. Rev. Lett. 92, 127902 (2004).
Heo, J. et al. Implementation of controlled quantum teleportation with an arbitrator for secure quantum channels via quantum dots inside optical cavities. Sci. Rep. 7, 1–12 (2017).
Walls, D. F. & Milburn, G. J. Quantum Optics (Springer, Berlin, 2007).
Yoneda, J. et al. A quantum-dot spin qubit with coherence limited by charge noise and fidelity higher than 99.9%. Nat. Nanotechnol. 13, 102–106 (2018).
Bartkowiak, M. & Miranowicz, A. Linear-optical implementations of the iswap and controlled not gates based on conventional detectors. JOSA B 27, 2369–2377 (2010).
SB thanks Roger Colbeck and Raban Iten, and FO thanks Bilen Basarir, for fruitful discussions. SB acknowledges a Japanese Government MEXT scholarship. FO acknowledges the Personal Research Fund of Tokyo International University. This work was partially supported by JSPS KAKENHI Grant Numbers JP18K18996 and JP20H00237, JST CREST (JPMJCR1675), and the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant Number JPMXS0118069228.
Department of Electrical and Electronic Engineering, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo, 152-8552, Japan
Sinan Bugu & Tetsuo Kodera
Institute for International Strategy, Tokyo International University, 1-13-1 Matoba-kita, Kawagoe, Saitama, 350-1197, Japan
Fatih Ozaydin
Department of Information Technologies, Isik University, Sile, Istanbul, 34980, Turkey
Sinan Bugu
Tetsuo Kodera
S.B. designed the scheme and carried out the theoretical analysis under the guidance of F.O. and T.K. S.B., F.O., and T.K. reviewed the manuscript and contributed to the interpretation of the work and the writing of the manuscript.
Correspondence to Sinan Bugu.
The authors declare no competing interests.
Bugu, S., Ozaydin, F. & Kodera, T. Surpassing the classical limit in magic square game with distant quantum dots coupled to optical cavities. Sci Rep 10, 22202 (2020). https://doi.org/10.1038/s41598-020-79295-x
Factors affecting adoption of improved sorghum varieties in Tanzania under information and capital constraints
Aloyce R. Kaliba (ORCID: 0000-0002-4360-9997), Kizito Mazvimavi, Theresia L. Gregory, Frida M. Mgonja & Mary Mgonja
Agricultural and Food Economics, volume 6, Article number 18 (2018)
Low adoption of agricultural technology is among the main reasons for low farm productivity and the high incidence of poverty and food insecurity in sub-Saharan countries, including Tanzania. In this study, we examine the factors affecting adoption of improved sorghum varieties using data from 822 randomly selected sample households in northern and central Tanzania. We employ a multiple-hurdle Tobit model to assess the factors affecting adoption after controlling for both capital and information constraints. We also use t-distributed stochastic neighbor embedding to cluster farmers into homogeneous groups. The method reduces dimensionality while preserving the topology of the dataset, which increases clustering accuracy, and it is well suited to visualizing the clustering results. Results show that radio and other mass media outlets that create awareness will increase adoption among farmers who do not face capital constraints. Some farmers lack basic resources such as land and capital, and subsidies could have a high impact on these farmers. Other farmers simply need assurance about the performance of improved sorghum varieties; field days, on-farm trials, and demonstration plots could be useful in supporting them. A tailored support system, however, needs sustained investment in both the quantity and quality of services. There is therefore a need to develop pluralistic research and extension systems that encourage the use of information technologies and community-based organizations to reach specific groups of farmers.
The population of Sub-Saharan Africa is growing fast, and 70% of the population is in rural areas that depend on the agricultural sector as a source of livelihood. The sector is not growing fast enough to ensure food adequacy, and much of the agricultural growth achieved to date has come from the expansion of agricultural land area. In the face of an increasing population, agricultural land expansion has reached its geographical limits and has become a leading cause of soil fertility decline and environmental degradation (Wiggins 2000; Breisinger et al. 2011). The agricultural sector remains an important economic sector, employing over 50% of working adults and over 65% of the labor force (Gollin, Parente, and Rogerson 2002). Improving agricultural production and productivity through adoption of improved agricultural technologies is an important pathway to improving the livelihoods of the majority and enhancing food security. Adoption of new and improved practices, expansion of rural financial markets, increased capital and equipment ownership, and development of research and extension linkages could all contribute to increases in productivity, which is a prerequisite for poverty alleviation and enhanced food security (Von Braun, Ruel, and Gillespie 2010; Wesley and Faminow 2014).

While many countries in Asia, the Caribbean, and Latin America have registered production and productivity gains from adopting agricultural technologies such as hybrid seeds, inorganic fertilizer, and irrigation, in Sub-Saharan Africa the adoption of promising agricultural technologies has been far from ubiquitous and has remained particularly low. For example, Gollin, Morris, and Byerlee (2005) show that improved maize varieties accounted for 17% of the total area harvested in Sub-Saharan Africa compared to 90% in East and South East Asia and the Pacific and 57% in Latin America and the Caribbean.

Primarily cultivated by smallholder farmers for domestic consumption, sorghum thrives in harsh climates, is drought resistant, and can improve food security and mitigate the influence of climate change, especially among vulnerable populations (Ahmed, Sanders, and Nell 2000). The sorghum crop is an important source of protein and nutrients for millions of people. In West Africa, sorghum accounts for 70% of total cereal production (Atokple 2003). The adoption rates of improved sorghum varieties (ISVs) vary significantly within Sub-Saharan Africa, with Southern Africa having higher adoption rates than other parts of the region. The sorghum crop consistently accounts for more than 30% of the total cultivated land, and 23% of the total sorghum crop area is planted with improved varieties. In most parts of West Africa, the area with ISVs is less than 2% of the total cultivated land (Cline 2007; Burke, Lobell, and Guarino 2009). As discussed in Gollin, Lagakos, and Waugh (2014), there is also a large gap between what the sub-Saharan farmer produces per unit area and the production potential with the available technology.
Worldwide, recent research and extension efforts have resulted in better agricultural practices, new and improved crop varieties, and improvements in soil and water management practices. However, Meinzen-Dick et al. (2004) argue that the only way for sub-Saharan farmers to gain from these new agricultural technologies is through adoption, after perceiving them to be beneficial and profitable. To enhance adoption, several studies have focused on mapping agricultural technology adoption patterns and on finding variables associated with adopters of these technologies. This study extends the latter category by using a two-step cluster analysis to group farmers into subgroups with similar adoption patterns. The generated knowledge is important for formulating specific policies and/or targeting specific groups of farmers to promote the adoption of ISVs in Tanzania and for giving feedback to institutions involved in agricultural research and extension in similar regions in Sub-Saharan Africa.
One of the goals of this study was to quantify the factors influencing the adoption of ISVs developed by the International Crop Research Institute for Semi-Arid Tropics (ICRISAT) and tested by the Department of Research and Development (DRD) of Tanzania's Ministry of Agriculture, Livestock, and Fisheries. The results from this study will allow ICRISAT and DRD to test the validity of their new research strategies and to suggest an efficient mechanism and adoption pathways for other crops. In addition, the present study adds to the literature on the role of information and capital constraints in the adoption of ISVs. The analysis illustrates how access to information and the availability of capital jointly affect the adoption behavior of sorghum producers. We go beyond the traditional approach of assessing factors affecting adoption by using a two-step cluster analysis and t-distributed stochastic neighbor embedding (t-SNE), which allows visualization of the underlying relationships among farmers with similar adoption patterns (Burke, Lobell, and Guarino 2009). The results are key to good decision making in designing cost-effective agricultural research programs and extension advisory services.
In the following section, we present an overview of sorghum research and development in Tanzania, followed by a description of the source of the data analyzed in this study. Then, we present a conceptual framework for technology adoption in the presence of multiple binding constraints, the empirical specification of a multiple-hurdle Tobit model, and a brief review of two-step cluster analysis. In the last two sections, we present key findings and the policy implications for scaling up the adoption of ISVs in Tanzania.
Sorghum research in Tanzania
Sorghum (Sorghum bicolor (L.) Moench, or Mtama in Swahili) is one of the five most important cereal crops in the world, and because of its broad adaptation, it is one of the climate-ready crops (Association for Strengthening Agricultural Research in East and Central Africa 2013). In Tanzania, sorghum is the second most important staple food after maize, supporting more than 80% of the population (Rohrbach et al. 2002). Most farming systems in Tanzania are increasingly cultivating sorghum as the main crop to address recurring food shortages resulting from other crop failures (Kombe 2012). Sorghum research and development activities in Tanzania trace back to the early 1980s. During that period, ICRISAT began collaborating with DRD as well as some non-governmental organizations (NGOs) to test improved sorghum varieties using both on-station and on-farm trials. Early efforts led to the release of three sorghum varieties: Tegemeo, Pato, and Macia in 1978, 1997, and 1998, respectively (Mgonja et al. 2005). In 2002, the Wahi and Hakika varieties were released, and in 2008, NARCO Mtama 1 was released. Seed Co Tanzania Limited also released the Sila variety in 2008 (Monyo et al. 2004). Kilimo (2008), Kanyeka, Kamala, and Kasuga (2007), and the Association for Strengthening Agricultural Research in East and Central Africa (2013) summarize the agronomic and physical characteristics of these varieties. The varieties are drought-tolerant and are intended for human consumption. Agro-pastoralists use crop residues as animal fodder (Rohrbach and Kiriwaggulu 2007; Kombe 2012). Over the past decade, sorghum has been slowly entering the nonfood and value-added markets, with use in the baking, brewery, and animal feed industries. The focus of current research and extension efforts is on linking farmers to this nonfood market to stimulate production and scale up ISV adoption in Tanzania (Monyo et al. 2004).
The data for this analysis are from a survey conducted by Selian Agricultural Research Institute (SARI), Arusha, Tanzania, in collaboration with ICRISAT, Nairobi, Kenya. The first author of the present study developed the structured questionnaire. A 2-day enumerator-training workshop, organized by the main author, was conducted in May 2013 to review the questionnaire. Twenty-five extension agents working in major sorghum farming systems and three scientists from ICRISAT participated in the workshop. After the workshop, the questionnaire was pre-tested in the Singida Rural and Rombo Districts. Issues found during the questionnaire pre-test provided guidance for refinement of the final survey instrument used in the study.
We considered the intensity of sorghum production and the importance of sorghum in the farming system to select participating regions and districts. The sample area included the Iramba, Singida, and Manyoni Districts (Singida Region, 435 sample households), Kondoa District (Dodoma Region, 102 sample households), Babati District (Manyara Region, 110 sample households), Rombo District (Kilimanjaro Region, 57 sample households), and Kishapu District (Shinyanga Region, 118 sample households). We randomly selected two sample wards from each district and one village from each ward. Administrative subdivisions in Tanzania include regions, districts, wards, and villages; the village is therefore the lowest administrative unit (Map 1).
Location of sample households in Tanzania
To create a counterfactual (for impact assessment in another study), 60% of the responding households were adopters, that is, households that planted at least one improved sorghum variety during the 2013/2014 farming season. For statistical analysis, the sample size per village was at least 50 households. The survey covered 822 households, of which 505 were adopters (61.44%) and 317 were non-adopters (38.56%). At the village level, we first grouped farmers into adopters and non-adopters using the village register and then randomly selected sample households from each group. Previously trained enumerators collected the data from the respondents, who were knowledgeable farmers at the household level.
Modeling adoption under information and capital constraints
Theoretically, the adoption of agricultural technology occurs when the expected utility from the technology exceeds that of non-adoption (Huffman 1974; Rahm and Huffman 1984). Since utility is not observable, single or multivariate limited-dependent-variable models have been the workhorse for estimating factors affecting adoption (Huffman and Mercier 1991; Grabowski and Kerr 2013). Cragg's double-hurdle model (Cragg 1971) extends these models when a farmer faces two hurdles while deciding to adopt. Croppenstedt, Demeke, and Meschi (2003) modified Cragg's model to directly model imperfections that create multiple hurdles during the adoption process.
In this study, there are three groups of farmers. The first group passed all hurdles and adopted the improved seeds. The second group had a desired demand but lacked either information or capital; it comprised farmers with limited information on ISVs who were not constrained by capital and farmers with enough information on ISVs but not enough capital to buy improved seeds and/or complementary inputs. The third group consisted of non-adopters with access to both information and capital who did not adopt ISVs due to other unknown constraints. Given the standard utility-maximization condition for the adoption process, let \(D_i^T\) be a binary variable for the adoption decision (adoption = 1 and 0 otherwise), \(D_i^{c1}\) a binary variable representing the information constraint, and \(D_i^{c2}\) a binary variable representing the capital constraint. The multiple-hurdle Tobit model is:
$$
D_i^{\ast} = D_i^T D_i^{c1} D_i^{c2} =
\begin{cases}
> 0, & \text{if ISVs are adopted} \\
0, & \text{if ISVs are not adopted}
\end{cases}
\tag{1}
$$
In this equation, \(D_i^{\ast}\) is a latent variable standing for the unobservable intensity of adoption, measured as the proportion of cropland allotted to ISVs. The variable is positive for adopters and zero for non-adopters. Adoption occurs when three factors hold simultaneously: the discounted expected utility of profit from ISV adoption is positive, the farmer is sufficiently aware of ISVs, and the farmer has access to capital to invest in the new sorghum enterprise (Grabowski and Kerr 2013). Each constraint is independent, so the probability of allotting land to ISVs is the product of the probabilities of clearing each constraint. We could estimate Eq. (1) using joint maximum likelihood as in Jones (1992), Smith (2003), Moffatt (2005), Teklewold et al. (2006), Shiferaw et al. (2015), and Burke, Myers, and Jayne (2015). The underlying assumption is that a binomial probability model governs the binary outcome of whether the outcome variable has a zero or a positive realization. The likelihood function is therefore separable with respect to the different parameters and is the sum of the log likelihoods from two separate models: a binomial probability model and a zero-truncated model. Maximizing the different components of the log-likelihood function generates consistent, efficient, and unbiased estimates. Expressions defining farmer groups with desired demand but constrained by a lack of information and capital are as follows:
$$
D_i^{\ast} = \beta^T X_i + \mu_i; \qquad
I_i^{\ast} = G^{c1} = \alpha^T z_i + \omega_i; \qquad
S_i^{\ast} = G^{c2} = \delta^T h_i + \varepsilon_i.
\tag{2}
$$
In Eq. (2), \(D_i^{\ast}\) is the observed demand that is truncated at zero, excluding non-adopters (Tobin 1958); \(I_i^{\ast}\) and \(S_i^{\ast}\) are the unobservable demands constrained by a lack of information and capital, respectively; \(z_i\) and \(h_i\) are the vectors of covariates that affect access to agricultural information and capital, respectively; and \(\beta\), \(\alpha\), and \(\delta\) are the parameter vectors of the model. The random variable \(\mu_i\) is \(N(0, \sigma^2)\), and the random variables \(\omega_i\) and \(\varepsilon_i\) are \(N(0, 1)\).
Estimating Eq. (2) in a multiple-hurdle Tobit (Tobin 1958) framework, as explained in Feder, Just, and Zilberman (1985), Roodman (2011), and Croissant, Carlevaro, and Hoareau (2016), allows the prediction of both the intensity and the probability of adoption. The first hurdle, defining adoption and non-adoption, is modeled as a probability choice where adoption occurs with probability \(P(D_i = 1) = P(y_i^{\ast} > 0)\) and non-adoption with probability \(P(D_i = 0) = P(y_i^{\ast} \le 0) = 1 - P(y_i^{\ast} > 0)\), where \(P(\cdot)\) is the probability function and \(y_i^{\ast}\) is the latent variable representing the intensity of adoption. In the second and third hurdles, singular probability choice models replace the second and third expressions such that \(P(I_i^{\ast} = 1) = 1\) and \(P(S_i^{\ast} = 1) = 1\). To estimate Eq. (2), Smith (2003) suggests setting zero correlations between the random disturbances. The Vuong test (Vuong 1989) tests the hypothesis of no correlation between the incidence and intensity of adoption.
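Since the paper reports that all analysis was carried out in R, the estimation just described could be implemented along the lines of the sketch below. This is illustrative only: the data frame adopt_df and all variable names are hypothetical stand-ins for the covariates in Eq. (2), and the three-part formula interface and argument names assumed here for the mhurdle package (Croissant, Carlevaro, and Hoareau 2016, cited in the references) should be checked against the package documentation.

```r
# Illustrative sketch only; data frame and variable names are hypothetical.
# isv_share: proportion of cropland under ISVs (zero for non-adopters).
library(mhurdle)

# Estimate 1: independent hurdles (no correlation between disturbances).
# Assumed order of formula parts: information hurdle | intensity equation | capital hurdle.
m_indep <- mhurdle(
  isv_share ~ aware + income + education                              # z: information constraint
            | age + married + extension + research + market + wealth  # X: intensity equation
            | income + credit + market + aware,                       # h: capital constraint
  data = adopt_df, dist = "n", corr = FALSE)

# Estimate 2: dependent specification with correlated disturbances.
m_dep <- mhurdle(
  isv_share ~ aware + income + education
            | age + married + extension + research + market + wealth
            | income + credit + market + aware,
  data = adopt_df, dist = "n", corr = TRUE)

summary(m_dep)             # coefficients for the three equations
vuongtest(m_dep, m_indep)  # Vuong-type comparison (as provided by mhurdle; verify usage)
```

Fitted probabilities of clearing the hurdles and expected adoption intensities from such a model are the kinds of quantities that feed into the clustering step described later.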
The four subgroups of farmers discussed above included adopters (505 sample households), non-adopters with a desired demand who were not capital constrained but lacked enough information (150 sample households), non-adopters with capital constraints (85 sample households), and non-adopters with no desire to adopt improved sorghum varieties and no capital or information constraints (82 sample households). The average time between learning about ISVs and field testing was 3.76 years, and for the third quartile, this time was 4 years. Farmers in the desired-demand group who lacked information were either not aware of any improved sorghum varieties or, if they were aware, had been aware for less than the 4-year threshold. Farmers in the desired-demand group who were aware of ISVs were asked follow-up questions to identify reasons for non-adoption, and they identified a lack of either capital or credit as a major constraint to adoption.
There are three types of covariates to include in Eq. (2): farm and farmer associated attributes, attributes associated with the technology, and farming goals. Examples of these variables include human capital represented by the level of education of the farmer, risk and risk management strategies, and access to the institutional support systems such as marketing facilities, research and extension services, availability of credit, and transportation. Other variables include production factors, such as farm size, number of livestock, and off-farm income and income sources. Farmers may have different farming goals such as subsistence or market-oriented farming. Feder and Slade (1984); de Janvry, Fafchamps, and Elisabeth (1991); Holden, Shiferaw, and Pender (2001); and Adegbola and Gardebroekb (2007) describe these variables in detail.
Apart from finding factors affecting adoption, understanding the diversity of farmers is of critical importance for the successful development of interventions. We extended this study by grouping farmers into homogeneous subgroups with similar adoption patterns through a two-step cluster analysis. Three main procedures are commonly applied in cluster analysis: hierarchical cluster analysis, k-means cluster analysis, and two-step cluster analysis (Rousseeuw 1987). Hierarchical clustering is useful for small datasets or when examining changes (merging and emerging clusters). With k-means clustering, the number of clusters, k, is specified in advance; it is also efficient when using normally distributed continuous variables and when there is enough data to allow variability among the created clusters (Gower 1971).
Two-step clustering is suitable for large datasets, especially when there is a mixture of continuous and categorical variables (Gorgulu 2010). The goal is to automatically form several clusters based on the mix of categorical and continuous variables. Most algorithms for two-step clustering use the first step to pre-cluster the data into many small sub-clusters. The second step uses the pre-clusters to form the desired number of clusters, or, if the desired number of clusters is unknown, these algorithms will automatically find the best number of clusters. In this study, we used two-step clustering tools to group sample households into homogeneous groups. The variables used for grouping were both categorical and continuous and included the estimated probability of censoring (P(y∗ > 0)), the estimated expected value of the uncensored dependent variable (E(y ∣ y∗ > 0)), and all statistically significant variables in Eq. (2).
The first step involved calculating Gower's distance matrix to separate households into (dis)similar groups. We could not use the Euclidean distance since it is valid only for continuous variables; for the limitations of the Euclidean distance in cluster analysis, see Gower (1971) and Struyf, Hubert, and Rousseeuw (1997). After calculating Gower's distance matrix, the second step involved partitioning the (dis)similar groups around medoids (PAM) to form clusters and using the silhouette distance to determine the optimal number of clusters, as suggested in Rousseeuw (1987), Kaufman and Rousseeuw (1990), and Pollard and van der Laan (2002). This approach depends on the actual partition of the objects and not on the type of clustering algorithm. A well-suited method for visualizing the resulting clusters is t-distributed stochastic neighbor embedding, or t-SNE. Developed by van der Maaten and Hinton (2008), t-SNE is a dimension-reduction technique that tries to preserve the local structure and make clusters visible in a 2D or 3D visualization. t-SNE is a non-linear dimensionality reduction algorithm for finding patterns in the data by grouping observations into clusters based on similarities in a large dataset with many variables. It is extremely useful for visualizing high-dimensional data. It overcomes the limitations of many linear dimensionality reduction algorithms and concentrates on placing dissimilar data points far apart in a lower-dimensional representation. t-SNE is based on probability distributions with a random walk on neighborhood graphs to find the structure within the data. Bunte et al. (2012) and Donaldson (2016) show that t-SNE represents high-dimensional data in a low-dimensional space while preserving the global geometry at all measurement scales in the dataset. We conducted all analysis in the R environment (R Core Team 2017).
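A minimal sketch of this two-step procedure in R is given below. It assumes a hypothetical data frame clus_df holding the model-based inputs and the significant covariates, with categorical variables stored as factors; the column names are illustrative, not the authors' actual variable names.

```r
library(cluster)  # daisy(), pam(), silhouette()

# Hypothetical clustering inputs: model-based quantities plus covariates.
clus_df <- data.frame(
  pado    = adopt_df$pado,            # estimated P(y* > 0) from the hurdle model
  expv    = adopt_df$expv,            # estimated E(y | y* > 0)
  aware   = adopt_df$aware,           # years aware of ISVs
  wealth  = adopt_df$wealth,          # household wealth
  married = factor(adopt_df$married), # categorical variables as factors
  typehh  = factor(adopt_df$typehh))

# Step 1: Gower dissimilarities handle the mixed continuous/categorical data.
gower_d <- daisy(clus_df, metric = "gower")

# Step 2: partition around medoids for a chosen number of clusters.
pam_fit <- pam(gower_d, k = 9, diss = TRUE)
pam_fit$silinfo$avg.width   # average silhouette width for this partition
table(pam_fit$clustering)   # cluster sizes
```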
Factors affecting adoption
Table 1 presents summary statistics on the incidence and intensity of adoption. In the table, farm size is the total land area cultivated in the 2013/2014 farming season. Most farmers cultivated a single variety of sorghum rather than a combination of different varieties. The widely adopted improved sorghum variety was Macia. About 22% of the households adopted the Macia variety and 18% of the farmers adopted the Tegemeo variety. Hakika and Macia adopters had smaller land holdings in terms of cultivated land. Macia variety adopters cultivated about 1.91 ha of land and allotted about 0.94 ha to Macia variety. Adopters of the Hakika variety cultivated 1.62 ha and allotted 0.83 ha to that variety. Other households cultivated more than 2.28 ha and allotted less than 0.82 ha to ISVs. The proportion of land allotted to ISVs ranged from 13% for both Tegemeo and Pato to 26% for Macia. However, the proportion of land allotted to the Macia variety was more variable compared to others with a standard deviation of 21%. For Hakika, the proportion of land allotted to that variety was 25%, and the standard deviation was 15%. The land apportioned to other varieties was less than 17%, and the standard deviation was less than 13%. In the sample, 91% of non-adopters cultivated local varieties other than Langalanga, a variety of choice for non-adopters.
Table 1 Land allocation to improved and local varieties
Table 2 shows the results from the multiple-hurdle Tobit model; summary statistics for all covariates are presented in the Appendix. The first part of Table 2 shows the results estimated under the hypothesis that there is no correlation between the main adoption equation and the two hurdle equations (Estimate 1). The results in the second part (Estimate 2) are obtained after imposing correlation among the three equations. For each estimate, the log-likelihood ratio compares the specified model with a naive model, defined as a model without covariates. In both cases, the models with covariates performed better than the naive model at the 10% and 1% levels of significance for the independent and dependent models, respectively. The Vuong test (Vuong 1989) in the first estimate (Estimate 1) compares the presented results with a simple selection model as suggested by Heckman (1976, 1979). The test minimizes the Kullback-Leibler information criterion, and its results are used to find the best parametric model specification. In this case, the independent multiple-hurdle Tobit model without correlation performed better than a simple selection model: the estimated Vuong test statistic was 29.1980 and was significant at the less than 1% probability level, rejecting the null hypothesis that the two models were equivalent.
Table 2 Regression results on factors affecting adoption
The Vuong test also compares the specifications of the independent (Estimate 1) and dependent (Estimate 2) models, the latter imposing correlation between the main adoption equation and the hurdle equations. The dependent model was the model of choice compared to the independent model, as shown by the statistical significance of the Vuong test (p < 0.01). In addition, the estimated correlation parameters were statistically significant (p < 0.01). In particular, there was a high negative correlation between adoption and the lack of both information and capital. Although correlation is not causation, because we are modeling the intensity of adoption we can conclude that a lack of information and limited capital decrease both the incidence and the intensity of adoption.
The results in Table 2 also show a high positive correlation between lack of information and capital constraint that is associated with decreased incidence and intensity of adoption. The positive relationship implies that most farmers who lack information on ISVs are also likely to be poor. In the study area, the main source of agricultural information is from both the public agricultural research and extension systems. Their effectiveness in influencing adoption of new agricultural technologies depends on the strength of linkages between farmers, extension agents, and research scientists. These linkages are still weak, and there is no incentive or mechanism for either extension agents or research scientists to network with poor households. Poor households are also likely to be outside of the information networks such as farmer-to-farmer linkages, participation in farmer field schools, or contract farming. These variables are important during the adoption process and have a high impact on incidence and adoption intensity.
Because the dependent multiple-hurdle Tobit model results are superior, this discussion also focuses on the second part of Table 2. For the adoption equation, even though the gender of household head is not statistically significant, this parameter is positive, showing that households headed by male farmers are more likely to adopt ISVs compared to female-headed households. Most adoption studies show that gender-linked differences in the adoption of agricultural technologies are not directly attributable to a farmer being male or female but to differences in access to key requisites such as improved seeds. Female-headed households are likely to lack the resources and networks that allow male-headed households to access the primary and secondary inputs that are necessary for the adoption of agricultural technologies. However, some studies, including De Groote et al. (2002), show that the gender of farmers did not influence adoption, which is contrary to other studies such as Thomson, Gelson, and Elias (2014) that report that gender was important in explaining the adoption of improved seeds.
Other non-statistically significant variables that represented household characteristics included labor availability, education level of household members, and income level of the household. These results are inconsistent with other studies that show these variables influence the adoption of improved seeds. Labor availability is usually associated with adoption (Hoop et al. 2014). However, the labor market tends to dictate technology adoption depending on whether the area targeted with the new agricultural technology has a net labor surplus or the proposed technology is labor saving or labor intensive. The labor market also depends on the opportunity cost of off-farm labor. Due to the subsistence nature of the farming system and the lack of alternative use of surplus labor, labor may not be a major constraint in the study area (Diagne 2006). Studies including Feleke and Zegeye (2005), and Thomson, Gelson, and Elias (2014) reported a statistical influence of education on adoption of improved seeds. Education level is associated with human capital and the ability of farmers to adjust faster to new production and market conditions. Similarly, Kaliba (2004); Langyintuo and Mungoma (2008); Marra, Pannell, and Ghadimb (2003); and Awotide et al. (2012) argue that wealth is often associated with the adoption of new agricultural technologies because wealthier farmers are more likely to try new agricultural technology.
The marital status dummy variable (married = 1 and 0 otherwise) was statistically significant (p < 0.1). The results imply that married couples are more likely to adopt ISVs. The results are consistent with other studies including Peterman et al. (2010) and Kondylies and Mueller (2013) that showed that married farmers have distinct agricultural contacts that include extension agents and agro-dealers compared to divorced, widowed, or single farmers who are more dependent on other farmers as their reliable source of agricultural information. The average age variable for all household members was also statistically significant (p < 0.1) but negatively associated with incidence and intensity of adoption of ISVs. The results imply that in the study area, the adopters of ISVs were young households. Although there are studies indicating that the age of the farmer does not influence adoption (Paudel and Matsuoka 2008), the results of other studies, such as Kaliba, Verkuijl, and Mwangi (2000); Wakeyo and Gardebroek (2013); Gebrezgabher et al. (2015); and Lambert, Paudel, and Larson (2015), support these results that suggest that older farmers are likely to be more risk-averse than younger farmers.
Knowledge measured in years since the farmer was aware of ISVs has a positive and highly significant impact on adoption. As shown by Leggesse, Burton, and Ozanne (2004); Diagne (2006); Diagne and Demont (2007); and Oster and Thornton (2009) in most cases, exposure to a technology is not random, and technology awareness is an important precondition for adoption to occur. However, individual farmers need enough time to transition from old to new agricultural technology. After adopting the technology, the farmer may decide to continue using it or stop using it. This action depends on the experienced benefits and associated risks after adoption (Asuming-Brempong et al. 2011; Kabunga, Dubois, and Qaim 2012).
Other variables with positive and significant impacts on adoption included the quality of extension services, the intensity of research activities, and market participation. These three variables are related to the availability of institutional support systems. Development experts have emphasized agricultural extension and rural education as crucial to achieving agricultural development, poverty reduction, and food security (Evenson 2001; Feder, Murgai, and Quizon 2003; Ginéa and Yang 2009). Agricultural extension services are useful for incentivizing the adoption of ISVs and associated agronomic practices that increase yield, such as line planting and weeding. Similarly, increases in research activities imply that there are research-managed, farmer-managed, or on-farm trials that create awareness, which encourages others to test and eventually adopt new technologies and practices (Lambrecht et al. 2014). The promotion of the improved agricultural technologies in this study hinges on the premise that adoption of improved seeds will result in higher production and increased productivity. Increased production and productivity will allow smallholder farmers to enter the market to sell their surplus crop. However, there are limited studies that focus on the interdependencies between market participation and adoption of new agricultural technologies.
Dummy variables standing for the three regions (i.e., Kilimanjaro, Manyara, and Singida) control for the possibility that farming systems with favorable soil and climatic conditions might be more likely to have farmers who are willing to adopt ISVs. We dropped both the Dodoma and Manyara Regions from the model due to issues related to the singularity of the Hessian matrix. In general, Singida farmers were more likely to adopt ISVs than farmers in the Dodoma and Manyara Regions, and Kilimanjaro farmers were less likely to adopt ISVs than farmers in the Dodoma and Manyara Regions. There was no statistically significant difference in the incidence of adoption among farmers in the Shinyanga, Dodoma, and Manyara Regions. Comparatively, farmers in the Singida Region were dependent on sorghum production as a source of food and income; therefore, they were more likely to try new varieties that would increase the production and productivity of their available resources. In the Kilimanjaro Region, sorghum production is in its infancy, and farmers still depend on landraces with known yields.
The signs on the coefficients of all information constraint variables were negative, as expected. The results suggest that these variables tended to reduce information constraints for non-adopters with a desired demand. The statistically significant variables were income and knowledge of ISVs. This result may indicate that information delivered by extension agents in this study was not otherwise available to certain types of farmers, especially the poor. As discussed before, if a farmer is not aware that a technology exists, then adoption is not possible. In the study area, extension agents as an exogenous source of information may be neglecting the poorest farmers as discussed in Alwang and Siegel (1994) or women and female-headed households who tend to be relatively poor. Moreover, the diffusion of information related to new agricultural technologies such as ISVs is a dynamic process within social networks. Farmers learn about the profitability of the technology and about how to correctly use it from their own experience and from their peers' experiences. While learning from others is important, several factors can make social learning inefficient. Conley and Udry (2010) show poor farmers rely primarily on family, kinship, and neighbor networks for social learning. However, due to limited social networks, poor farmers are unlikely to see the decision process of peers, making it more difficult to accurately assess the available information about new agricultural technologies.
Except for the intensity of research activities variable, the signs of the coefficients of all capital constraint variables were negative, as expected. Similarly, the results suggest that these variables tended to reduce capital constraints for non-adopters with a desired demand. The variables for income, knowledge about ISVs, and market participation were statistically significant in reducing capital constraint among non-adopters with a desired demand. In the adoption literature, one of the most highlighted constraints to agricultural technology adoption is the availability of capital, which reduces both liquidity constraints faced by farmers and risk aversion. Availability of capital facilitates experimentation with new agricultural technologies and enhances diffusion of new agricultural technologies as rich farmers or farmers with access to credit are more likely to be the early adopters. A common finding is that adoption requires a set of minimum incentives and capacities from the farmer's perspective or an investment threshold that is not necessary for traditional production practices. If farmers are assured that investment in new agricultural technologies will have positive returns, then they may be encouraged to access credit from all markets. Furthermore, market participation by farmers increases the net returns from agricultural production and available resources including capital.
Groups of farmers
Figure 1 shows the relationship between the estimated average silhouette distance and the proposed number of clusters. When there are three clusters, the mean silhouette distance is 0.33 for the entire dataset. However, the shape of the graph does not taper off after three clusters, which implies that many farmers are outside or on the boundary of the three selected clusters. Tapering occurs when the number of clusters equals 11. We expected these results given the heterogeneous nature of small-scale farms. For example, using cluster analysis to study family farms in Switzerland, Hoop et al. (2014) estimated a mean silhouette distance of 0.24 for 12 optimal clusters. Gorgulu (2010) used similar techniques to classify dairy animal performance and calculated average silhouette distances that were between 0.35 and 0.52. In this study, the average silhouette distance was 0.203 for 12 clusters.
Mean silhouette distance by number of clusters
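The selection curve summarized in Fig. 1 corresponds to evaluating the average silhouette width over a grid of candidate cluster numbers; a minimal sketch, reusing the hypothetical clus_df from the earlier fragment, might look as follows.

```r
library(cluster)

gower_d <- daisy(clus_df, metric = "gower")  # as in the previous sketch

k_grid  <- 2:12
avg_sil <- sapply(k_grid, function(k)
  pam(gower_d, k = k, diss = TRUE)$silinfo$avg.width)

plot(k_grid, avg_sil, type = "b",
     xlab = "Number of clusters", ylab = "Mean silhouette width")
```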
We used visual inspection to find the number of clusters with the best results after plotting the clusters with t-SNE, which reduces the number of dimensions to two by giving each data point a location in a map while avoiding the crowding of points in the center of the map. We used the Barnes-Hut algorithm to approximate the pairwise interactions between points because it reduces the number of pairwise distance computations. Nine clusters provided the best visualization results, with few outliers and few overlaps. In Fig. 2, the whole numbers are the cluster names, and the fractions are the estimated (mean) probability of adoption within the cluster. Farmers in cluster 4 have the highest probability of adoption (0.82), and farmers in cluster 6 have the lowest probability (0.20).
Relative position of the nine clusters
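A two-dimensional map in the spirit of Fig. 2 can be obtained by passing the same Gower dissimilarities to the Barnes-Hut t-SNE implementation in the Rtsne package and coloring the embedded points by their PAM cluster. The perplexity and seed below are arbitrary illustrative choices, and clus_df and pam_fit are the hypothetical objects from the earlier sketches.

```r
library(Rtsne)
library(cluster)

gower_d <- daisy(clus_df, metric = "gower")  # as in the earlier sketches
set.seed(123)                                # t-SNE is stochastic

tsne_fit <- Rtsne(as.matrix(gower_d), dims = 2,
                  perplexity = 30,           # illustrative choice
                  theta = 0.5,               # theta > 0 enables the Barnes-Hut approximation
                  is_distance = TRUE)        # interpret the input as a distance matrix

plot(tsne_fit$Y, col = pam_fit$clustering, pch = 19,
     xlab = "t-SNE dimension 1", ylab = "t-SNE dimension 2")
```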
The polar plots in Fig. 3 illustrate the most prominent variables within each identified cluster. The variables on the polar axis represent the following in clockwise order: geometric mean age of adults in the household (age), awareness of ISVs in years (aware), credit availability (credit), weighted education level in years (edu), expected adoption intensity in hectares (expv), labor availability in labor equivalent (labor), market participation (market), estimated probability of adopting ISVs (pado), quality of government extension services (qext), intensity of research activities (rese), marital status of household head (status), gender of household head (typehh), and total wealth in Tshs (wealth). In Fig. 3, the general variables that distinguish the clusters are the wealth indicator that removes capital constraints and awareness that removes information constraints. Notice that the prominence of other variables depends on the individual clusters.
Polar plots of identified clusters
The nine identified clusters illustrate the typical characteristics of diverse groups of farmers within the sample. Ninety-nine of the sample households (12.07%) belonged to cluster 1. This cluster included mostly sample households with young farmers and with intermediate awareness of ISVs. While the probability of adoption was intermediate (0.47), the expected intensity of adoption was high due to awareness of ISVs. Since capital, rather than information, was highly limiting, we refer to this cluster as "adopters with adoption potential." Increases in available capital and awareness through increased quality of extension services and intensity of research activities could increase the adoption of ISVs among the members of this cluster. The second cluster had 170 sample households (20.73%), and it included sample households with mature family members and a labor supply that was not limiting. The probability of adoption is intermediate (0.32), with a low expected intensity of adoption. We refer to this cluster as "adverse adopters." The members of this cluster do not face both capital and information constraints and have the basic resources to adopt ISVs. They need more training and more evidence-based extension services such as field days and demonstration plots that manifest the superiority of ISVs.
The third cluster included 54 sample households (6.59%). In this cluster, all variables included in the regression model were above the third quartile. The probability of adoption was 0.77, the members were potentially married, and the household head was male. This cluster is referred to as "continuous adopters," and its members need continuous support from both research and extension services. The fourth cluster had 93 sample households (11.34%) and is similar to cluster 3, but its members included young farmers with the highest probability of adoption at 0.82. We referred to this cluster as "continuous innovators" since its members have all the characteristics of innovators. Research and extension agents could use this cluster to test new agricultural technologies related to ISVs in the study area through farmer-managed trials. The fifth cluster included 30 sample households (3.66%). Household members in this cluster were also similar to cluster 3 but included young and wealthy farmers. Cluster members were more aware of ISVs and had access to credit. The low probability of adoption (0.36) in this cluster could be attributable to low access to research and extension services. Cluster 5 is referred to as "adopters in waiting" since adoption among these clusters could be scaled up through an increase in the intensity of research and extension activities. Referred to as "typical non-adopters," cluster 6 included 105 sample households (12.80%). The members of this cluster had all the attributes that positively influence adoption with both capital and information that were not limiting. The probability of an adoption in cluster 6 was the lowest at 0.2, meaning that the characteristics of ISVs and the existing institutional support systems do not influence adoption, and non-adoption is a choice made by individual farmers.
All variables for the household members in cluster 7, which includes 35 sample households (7.93%), were between the second and third quartiles of the overall sample. The probability of adoption, however, was low at 0.33. The attributes of the members of this cluster were quite mixed; however, approximately 35% of the households in this cluster had unmarried household heads, and 45% of the households headed by females in this study belonged to cluster 7. We called this cluster the "virtual adopters" since adoption is mainly constrained by the unavailability of basic resources such as land and labor, which is magnified by a lack of capital and information about the technology. Cluster 8 had 35 sample households (4.27%), but 60% of the households were in the first quartile (based on wealth distribution), and the majority were unmarried couples and included households headed by females. Despite its members being highly aware of the technology, the probability of adoption was 0.46, which was high given the attributes of this cluster and the factors that positively influence adoption. We called this cluster the "enthusiast adopters." In defiance of resource constraints, the members of clusters 7 and 8 had the potential to use all available resources to adopt ISVs. Directing research and extension activities that focus on easing resource constraints would be beneficial for these two clusters. Cluster 9 had 169 sample households (20.61%) and contained members who were older, wealthier, and had above-average resources, including labor and credit; therefore, this cluster was referred to as the "veteran adopters." The probability of adoption for this cluster was 0.49, and awareness campaigns and/or increased research and extension activities could scale up adoption among members of this cluster. These results show that farmers are not homogeneous and need tailored research and extension messages and/or public policies to scale up the adoption of ISVs. While awareness campaigns among households in clusters 4, 6, and 7 could increase adoption, the households in clusters 7 and 8 need basic resource support systems to scale up the adoption process. Other clusters need more classroom training, field days, and demonstration trials to create confidence in and assurance of the performance of ISVs.
Adoption studies are evaluation tools aimed at generating knowledge to intensify the impact of agricultural programs. Using data from northern and central Tanzania, this study focused on finding strategies to alleviate existing constraints and scale up the adoption process. We mapped the factors influencing adoption using a multiple-hurdle Tobit model and used t-distributed stochastic neighbor embedding (t-SNE) to cluster and visualize homogeneous groups of farmers. The results showed that there is a threshold for both knowledge and capital before a farmer begins experimenting with improved sorghum varieties. Assurance that improved sorghum varieties are superior to landraces will encourage farmers to access credit from both informal and formal markets. Market participation will increase the returns from available resources and the profitability of the sorghum enterprise and will therefore increase adoption.
Demonstrating the superiority of improved sorghum varieties will be most effective when applied to households with limited networks. Learning by doing, learning from peers, and public policies such as targeted input subsidies will have a high impact. Classroom training and demonstration plots can reduce information asymmetry and help farmers reach the knowledge threshold, which will jump-start and scale up the adoption process. Evidence from this study also suggests that young farmers with resources and knowledge about improved sorghum varieties are increasingly adopting them. Mass media could play a key role in increasing awareness of the potential of improved sorghum varieties to increase productivity and create wealth. Establishing a central delivery scheme and training extension professionals in using mass media sources are highly recommended. This scheme could facilitate the delivery of well-designed, effective, and efficient agricultural extension content to sorghum farming communities. Regional television stations, radio, and hand-held electronic devices could provide a continuous and sustained means of information and education for farmers in remote villages. Due to a comparatively short crop cycle (about 6 months), mass media messages must be highly informative, intensive, and coordinated to avoid mixed messages and information overload. Studies addressing complementary factors, such as soil quality as related to organic and inorganic fertilizer use, and marketing studies analyzing the localized small-scale value-added potential of sorghum would increase both market participation and profits from sorghum enterprises.
There is also an urgent need to strengthen the ability of local government and the private sector to play a more prominent role in delivering tailored services to underserved groups including female farmers and the poor who face different production and market constraints. A strong pedagogical linkage between research, extension, and policy professionals is essential in promoting appropriate, easily accessible, and current agricultural technology. Training to incentivize scientists and extension agents and engagement of policymakers during farmer training and field days are valuable to supporting these important linkages.
ASARECA:
Association of Strengthening Agricultural Research in East and Central Africa
DRD:
Department of Research and Development
ICRISAT:
International Crop Research Institute for Semi-Arid Tropics
ISVs:
Improved sorghum varieties
NARCO:
National Agricultural Research Cooperation
NGOs:
Non-governmental organizations
PAM:
Partitioning around medoids
SARI:
Selian Agricultural Research Institute
t-SNE:
t-distributed stochastic neighbor embedding
We would like to thank the farmers who willingly participated in the study, the extension agents in Dodoma and Kilimanjaro who conducted the surveys, and two anonymous reviewers for their valuable reviews that improved this paper.
The International Crop Research Institute for Semi-Arid Tropics (ICRISAT), Nairobi, Kenya, through the Monitoring Evaluation Impact and Learning Program provided funding for this study. However, the views expressed in this paper are those of the authors and do not necessarily represent ICRISAT's view and policy.
We cannot share the data used in this study, which belong to ICRISAT. The data contain variables that could identify the participating farmers. A reduced dataset can be made available from the corresponding author on reasonable request, after obtaining permission from ICRISAT.
Southern University College, Baton Rouge, Louisiana, USA
Aloyce R Kaliba, Kizito Mazvimavi, Theresia L Gregory, Frida M Mgonja & Mary Mgonja
Aloyce R Kaliba
Kizito Mazvimavi
Theresia L Gregory
Frida M Mgonja
Mary Mgonja
All authors took part in the research design and data collection process. ARK cleaned and analyzed the data and wrote the article. All authors read and approved the final manuscript.
Correspondence to Aloyce R Kaliba.
Aloyce R Kaliba is a Professor of Economics and Statistics in the College of Business at Southern University and A&M College. He graduated from Kansas State University, Manhattan, USA, with an MSc and PhD in 1986 and 2000, respectively. He specialized in Agricultural Economics with a special interest in International Development and Policy Analysis. Between 1984 and 1997, he worked with the Ministry of Agriculture in Tanzania as an extension agent and an Agricultural Economist before joining the University of Arkansas at Pine Bluff as a Policy Analyst in 2001 and Southern University as an Associate Professor in 2007. Apart from teaching, he is also a Co-Director of the University Center for Entrepreneurial and Economic Development. His mission includes strengthening research capacity and management in developing countries and establishing collaborative research and extension activities between US and African researchers.
Kizito Mazvimavi has a PhD in Development Studies from the University of Wisconsin-Madison, USA. He is an agricultural economics expert with over 25 years of experience as a researcher, monitoring and evaluation specialist, and project manager. He has undertaken work for different development agencies, both as a development specialist and in managing impact assessments of agricultural relief and market interventions. As a Country Representative for ICRISAT in Zimbabwe and Impact Assessment Specialist for Eastern and Southern Africa, he is currently a principal investigator for various impact assessment studies and supervises the implementation of various agricultural research projects.
Theresia L Gregory is an Agricultural Economist at Selian Agricultural Research Institute (SARI), Arusha, Tanzania. The institute's mandate includes conducting crop research in northern Tanzania. She is the Lead Scientist in economic evaluation and impact assessment of new agricultural innovations introduced in the region.
Fridah M Mgonja is a Principal Agricultural Research Officer within the Crops Research Program at Selian Agricultural Research Institute (SARI). She is a lead scientist in participatory variety selection, which provides a wide choice of varieties to farmers to evaluate in their own environment using their own resources for increasing production. She is also a coordinator of the Harnessing Opportunities for Productivity Enhancement (HOPE) project that focuses on developing improved varieties and crop management practices to increase productivity under harsh, dry production environments in many parts of Sub-Saharan Africa and South Asia.
Mary Mgonja is a plant breeder who works as the director for technology and communication at Namburi Agricultural Company Limited, a private Tanzanian agricultural enterprise. She holds a Doctor of Philosophy in plant breeding and plant genetics, obtained jointly from the University of Ibadan and the International Institute of Tropical Agriculture, also located in Ibadan. Before joining Namburi, she was a country director of the Alliance for a Green Revolution in Africa (AGRA). She has served as principal scientist on the improvement of dryland cereals at the International Crops Research Institute for the Semi-Arid Tropics and as a Tanzania representative in the crop networks in the Southern African Development Community (SADC) and in the East African Community (EAC).
Table 3 Summary statistics of all covariates
Kaliba, A.R., Mazvimavi, K., Gregory, T.L. et al. Factors affecting adoption of improved sorghum varieties in Tanzania under information and capital constraints. Agric Econ 6, 18 (2018). https://doi.org/10.1186/s40100-018-0114-4
Multiple-hurdle Tobit
Two-step cluster analysis | CommonCrawl |
My big mistake about dense sets
I made a big mistake in a Math Stack Exchange answer this week. It turned out that I believed something that was completely wrong.
Here's the question, are terminating decimals dense in the reals?. It asks if the terminating decimals (that is, the rational numbers of the form !!\frac m{10^n}!!) are dense in the reals. "Dense in the reals" means that if an adversary names a real number !!r!! and a small distance !!d!!, and challenges us to find a terminating decimal !!t!! that is closer than !!d!! to point !!r!!, we can always do it. For example, is there a terminating decimal !!t!! that is within !!0.0000001!! of !!\sqrt 2!!? There is: !!\frac{14142135}{10^7} = 1.4142135!! is closer than that; the difference is less than !!0.00000007!!.
The answer to the question is 'yes' and the example shows why: every real number has a decimal expansion, and if you truncate that expansion far enough out, you get a terminating decimal that is as close as you like to the original number. This is the obvious and straightforward way to prove it, and it's just what the top-scoring answer did.
I thought I'd go another way, though. I said that it's enough to show that for any two terminating decimals, !!a!! and !!b!!, there is another one that lies between them. I remember my grandfather telling me long ago that this was a sufficient condition for a set to be dense in the reals, and I believed him. But it isn't sufficient, as Noah Schweber kindly pointed out.
(It is, of course, necessary, since if !!S!! is a subset of !!\Bbb R!!, and !!a,b\in S!! with no element of !!S!! between them, then no element of !!S!! lies at distance less than !!\frac{b-a}2!! from !!\frac{a+b}2!!. Both !!a!! and !!b!! are at exactly that distance, and no other point of !!S!! is closer.)
The counterexample that M. Schweber pointed out can be explained quickly if you know what the Cantor middle-thirds set is: construct the Cantor set, and consider the set of midpoints of the deleted intervals; this set of midpoints has the property that between any two there is another, but it is not dense in the reals. I was going to do a whole thing with diagrams for people who don't know the Cantor set, but I think what follows will be simpler.
Consider the set of real numbers between 0 and 1. These can of course be represented as decimals, some terminating and some not. Our counterexample will consist of all the terminating decimals that end with !!5!!, and before that final !!5!! have nothing but zeroes and nines. So, for example, !!0.5!!. To the left and right of !!0.5!!, respectively, are !!0.05!! and !!0.95!!.
In between (and around) these three are: $$\begin{array}{l} \color{darkblue}{ 0.005 }\\ 0.05 \\ \color{darkblue}{ 0.095 }\\ 0.5 \\ \color{darkblue}{ 0.905 }\\ 0.95 \\ \color{darkblue}{ 0.995 }\\ \end{array}$$
(Dark blue are the new ones we added.)
And in between and around these are:
$$\begin{array}{l} \color{darkblue}{ 0.0005 }\\ 0.005 \\ \color{darkblue}{ 0.0095 }\\ 0.05 \\ \color{darkblue}{ 0.0905 }\\ 0.095 \\ \color{darkblue}{ 0.0995 }\\ 0.5 \\ \color{darkblue}{ 0.9005 }\\ 0.905 \\ \color{darkblue}{ 0.9095 }\\ 0.95 \\ \color{darkblue}{ 0.9905 }\\ 0.995 \\ \color{darkblue}{ 0.9995 }\\ \end{array}$$
Clearly, between any two of these there is another one, because around !!0.????5!! we've added !!0.????05!! before and !!0.????95!! after, which will lie between !!0.????5!! and any decimal with fewer !!?!! digits before it terminates. So this set does have the between-each-two-is-another property that I was depending on.
But it should also be clear that this set is not dense in the reals, because, for example, there is obviously no number of this type that is near !!0.7!!.
(This isn't the midpoints of the middle-thirds set, it's the midpoints of the middle-four-fifths set, but the idea is exactly the same.) | CommonCrawl |
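A minimal Python sketch of the construction makes both properties easy to check numerically: it generates the members of the set down to a fixed depth and reports how close any of them gets to !!0.7!!.

```python
from itertools import product

def members(max_depth):
    """Terminating decimals whose digits are all 0s and 9s followed by a final 5."""
    yield 0.5
    for depth in range(1, max_depth + 1):
        for prefix in product("09", repeat=depth):
            yield float("0." + "".join(prefix) + "5")

pts = sorted(members(6))
print(len(pts))                        # 127 members down to depth 6
print(min(abs(p - 0.7) for p in pts))  # ~0.2: nothing in the set comes near 0.7
```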
Similarity corpus on microbial transcriptional regulation
Oscar Lithgow-Serrano (ORCID: orcid.org/0000-0003-1995-1669)1,2,
Socorro Gama-Castro1,
Cecilia Ishida-Gutiérrez1,
Citlalli Mejía-Almonte1,
Víctor H. Tierrafría1,
Sara Martínez-Luna1,
Alberto Santos-Zavaleta1,
David Velázquez-Ramírez1 &
Julio Collado-Vides1,3
Journal of Biomedical Semantics volume 10, Article number: 8 (2019)
The ability to express the same meaning in different ways is a well-known property of natural language. This amazing property is the source of major difficulties in natural language processing. Given the constant increase in published literature, its curation and information extraction would strongly benefit from efficient automatic processes, for which corpora of sentences evaluated by experts are a valuable resource.
Given our interest in applying such approaches to the benefit of curation of the biomedical literature, specifically that about gene regulation in microbial organisms, we decided to build a corpus with graded textual similarity evaluated by curators that was designed specifically for our purposes. Based on the predefined statistical power of future analyses, we defined features of the design, including sampling, selection criteria, balance, and size, among others. A non-fully crossed study design was applied. Each pair of sentences was evaluated by 3 annotators from a total of 7; the scale used in the semantic similarity assessment task within the Semantic Evaluation workshop (SEMEVAL) was adapted to our goals in four successive iterative sessions, with clear improvements in the agreed guidelines and interrater reliability results. Alternatives for such a corpus evaluation have been widely discussed.
To the best of our knowledge, this is the first similarity corpus—a dataset of pairs of sentences for which human experts rate the semantic similarity of each pair—in this domain of knowledge. We have initiated its incorporation in our research towards high-throughput curation strategies based on natural language processing.
Expressing the same approximate meaning with different wording is a phenomenon widely present in the everyday use of natural language. It shows the richness and polymorphic power of natural language, but it also exhibits the complexity implied in understanding the conveyed meaning. Due to these characteristics, paraphrase identification is necessary for many Natural Language Processing (NLP) tasks, such as information retrieval, machine translation, and plagiarism detection, among others. Although, strictly speaking, a "paraphrasis" refers to a rewording that states the same meaning, so that its evaluation should only result in true or false, a graded notion of paraphrasing is frequently needed. This graded paraphrasing is often called Semantic Textual Similarity (STS).
Textual similarity depends on particular text features, domain relations, and the applied perspective; therefore, textual similarity has to be defined according to the context. This context specification presupposes the delineation of the kind of textual similarity desired, e.g., assigning grades of importance to the syntactic parallelism, to the ontological closeness, to the statistical representations likeness, etc.
It is not a simple endeavor to explicitly state these grades of importance. The difficulty stems from the fact that it is very complicated to envisage all possible language feature variations to express the same idea, and so to have a broad perspective and to identify which features or relations are important. It is for these steps that a paraphrase corpus is a very useful instrument, because it implicitly captures those nuances.
There are several paraphrase corpora available, both for general and specific domains. However, as stated before, these corpora are very sensitive to the aimed task and to the targeted domain. Hence, when a task or domain is very specific and the available corpora do not fit, an ad hoc corpus has to be built. This is the case for the biomedical curation of the literature about the regulation of transcription initiation in bacteria, a specific domain of knowledge within the biomedical literature.
RegulonDB [1] is a manually curated standard resource, an organized and computable database, about the regulation of gene expression in the model enterobacteria Escherichia coli K-12. It aims at integrating within a single repository all the scattered information in the literature about genetic regulation in this microorganism, including elements about transcriptional regulation, such as promoters, transcription units (TUs), transcription factors (TFs), effectors that affect TFs, active and inactive conformations of TFs, TF binding sites (TFBSs), regulatory interactions (RIs) of TFs with their target genes/TUs, terminators, riboswitches, small RNAs, and their target genes. We are capable of keeping up to date with the literature thanks to constant manual curation in an effort initiated close to 20 years ago. However, the pace of curation tends to lag behind the number of publications, motivating the implementation of automatic curation processes. Certainly, biocuration typically accelerates with the emergence of novel technologies, and furthermore, we believe that the depth and detail of the description of what is extracted from the literature could be increased significantly. As shown in the most recent publication of RegulonDB [2], the number of curated objects has increased over the years. Finally, another major motivation stems from the fact that microbial genomes have been constructed under similar evolutionary principles as E. coli; thus, the methods that can be trained with literature for E. coli should be very well applicable to the literature on gene regulation in other microbial organisms, for which the literature has not been subject to curation. RegulonDB plays an important role in scientific research: it has been cited in more than 1700 scientific publications.
As an ongoing effort to enrich the already curated information and to improve the curation process, we are developing NLP tools, some of which rely on STS. The goal with these STS assessment tools is to discover statements, in different publications, connected by their meaning. One of the direct contributions to the curation process could be to facilitate the discovery of supporting evidence for a piece of curated information. Table 1 shows a pair of sentences, from different publications, that express very similar meanings and that provide supporting evidence for each other. These pairs of sentences exemplify what is intended to be annotated within our corpus and, thus, the kind of annotations that we expect to produce through machine learning models trained with this corpus. Due to the very specific nature of our domain, we built the ad hoc graded paraphrase corpus to be used as a training and evaluation source of truth for our NLP tools.
Table 1 Examples of sentences of different publications that express very similar meanings
In the following sections, we first describe the methodology followed to build our corpus, then we analyze it quantitatively, and finally we briefly mention the immediate foreseen uses of the corpus.
Related work and motivation
STS aims to measure the degree of semantic equivalence between two fragments of text. To achieve this, it tries to unveil the meaning conveyed by a textual expression and compare it with the meaning conveyed by another one. The comparison's result is a graded similarity score that ranges from an exact semantic match to a completely independent meaning, passing through a continuous scale of graded semantic parallelism. This scale intuitively captures the notion that a pair of texts can share different aspects of meaning at different levels [3], i.e., they could differ in just some minor details, they could share a common topic and important details, or they could share only the domain and context, etc. Another characteristic of STS is that it treats similarities between two texts as bijective, setting this task apart from textual entailment, where the relation is directed and cannot be assumed true in the inverse direction.
Many NLP tasks, such as machine translation, question answering, summarization, and information extraction, potentially benefit from this quantifiable graded bidirectional notion of textual similarity. Building this kind of corpus is difficult and is labor-intensive, and that is why there are not as many corpora of this kind as might be expected, given their usefulness.
In recent years, the most notorious efforts on the STS task and their corresponding corpus constructions were tackled by the Semantic Evaluation Workshop (SEMEVAL) [3]. The SEMEVAL corpus consists of 15,000 sentence pairs from different sources, with the Microsoft Research Paraphrase (MSRP) and PASCAL VOC [4] corpora among them. The SEMEVAL corpus was annotated through crowdsourcing, using a scale from 5 (identical) to 0 (completely unrelated).
Another corpus that is useful for STS is the User Language Paraphrase corpus (ULPC) [5]. This corpus was built by asking students to rephrase target sentences. As a result, 1998 sentence pairs were annotated with ratings ranging from 1 to 6 for 10 paraphrasing dimensions; entailment and lexical, syntactic, and semantic similarities were among those dimensions.
The SIMILAR corpus [6] is the product of a qualitative assessment of 700 pairs of sentences from the MSRP corpus; in addition to providing word-to-word semantic similarity annotations, it also supplies a qualitative similarity relationship—identical, related, context, close, world knowledge, or none—between each pair of sentences.
Among corpora that do not rely on graded similarity but instead on binary paraphrases, there are important corpora, such as the MSRP corpus [7]. It is one of the first major public paraphrase corpora, comprising 5801 new sentence pairs, of which 67% were judged "semantically equivalent" by two human judges. In the Q&A field, another corpus, The Question Paraphrase corpus [8], was built by collecting from WikiAnswers 7434 sentences formed by 1000 different questions and their paraphrases.
All these corpora target general domains and were sourced mainly from the news, making it very difficult to fit them into a specific topic such as ours: bacterial transcriptional regulation. Closer to our domain is the BIOSSES corpus [9]. It is formed by 100 pairs of sentences from the biomedical domain which were rated following the guidelines of the STS SEMEVAL task. The candidate sentences were collected from the set of articles that cited at least 1 of 20 reference articles (between 12 and 20 citing articles for each reference article). Those sentence pairs that cited the same reference article were selected. Articles were taken from the Biomedical Summarization Track Training Dataset from the Text Analysis Conference.
Due to the extension of the biomedical domain and the small size of the BIOSSES corpus, most likely it does not capture the nuances of our subject of study. For this reason, we decided to build our own corpus of naturally occurring non-handcrafted sentence pairs within the subject of regulation of gene expression in E. coli K-12. The semantic similarity grade of each pair was evaluated by human experts of this field.
A corpus is "a collection of pieces of language text in electronic form, selected according to external criteria to represent, as far as possible, a language or language variety as a source of data for linguistic research" [10]. Before building a corpus, the textual source set, the evaluation rules, the corpus size, and other characteristics must be defined. This design should be, as much as possible, informed and principled so that the resulting corpus fulfills the desired goals. The decisions involved within the axes of consideration [10] for the corpus construction are the following.
The sampling policy defines where and how the candidate texts are going to be selected, following three main criteria: the orientation, in this case a contrastive corpus with the aim of showing the language varieties that express the same meaning (semantic similarity); the selection criteria that circumscribe candidates to written sentences (origin and granularity) in English (language) taken from scientific articles (type) on the topic of genetic regulation (domain), where the sentence attitude is irrelevant and a specific content is not required; finally, the sampling criteria consist of preselection of sentence pairs through a very basic STS component followed by a filtering process to keep the same number of exemplars for each similarity grade, i.e., a balanced candidate set.
The corpus representativeness and balance refer to the kind of features and to the distribution of those features in the exemplars; hence, these characteristics determined the usage possibilities of the corpus. In this sense, sentences containing any biological element or knowledge were preferred. It was more important that all similarity grades were represented within the corpus and preferably in equal proportions. Our main analysis axis was the semantic similarity between pairs of sentences and not the topic represented by each sentence, the sentences' specialization or technical level, nor the ontological specificity of the terms in the sentence.
The orientation of a corpus' topics impacts directly the variety and size of the resulting vocabulary. Whereas embracing more topics can broaden the possibilities for use of the corpus, this can also have negative consequences in the semantic similarity measures due to the increased chances of the same term having different meanings for different topics (ambiguity). Consequently, a limited set of topics was preferred. We intended for the corpus to be representative of the genetic regulation literature. It is worth noting that it was not limited to those sentences specifically about genetic regulation but all kinds of sentences present in the corresponding literature. The corpus homogeneity was tackled by stripping out those sentences considered too short (less than 10 words) [11] and those sentences that were not part of the main body of the article.
Finally, a corpus' size should be dependent on the questions that it is aimed to answer and the type of tasks where it can be applied [12, 13]. However, in practice it is largely constrained by available resources (time, money, and people). Our main goals are to train our STS system and to measure its performance. Because our STS system is based on the combination of several similarity methods, it is difficult to estimate the required number of cases that would make it a significant training source, because this varies for each type of metric. For example, neural networks are among the methods most demanding of training data; their complexity can be expressed in terms of the number of parameters (P), and it is common practice to have at least P² training cases. This would result in thousands of training cases, which is out of our reach. Thus, we focused on the second goal, to measure the STS system performance. We planned to measure the Pearson's correlation between the computed system similarity and that generated by human experts (corpus). According to [14], considering a medium-size effect (r = 0.30), a significance level of 0.05, and a power of 80%, 85 samples would be enough. However, [15] and [16] suggested a minimum sample size of 120 cases in order to allow not only a Pearson's correlation analysis but also a regression analysis. With this in mind, we decided to generate a corpus of 170 sentence pairs, i.e., a number of pairs just above those thresholds.
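As a check on these numbers, the usual Fisher z-transformation approximation for the sample size of a Pearson correlation test reproduces the 85-case threshold. The short Python sketch below is only illustrative and was not part of the original power analysis.

```python
from math import ceil, log
from statistics import NormalDist

def pearson_sample_size(r, alpha=0.05, power=0.80):
    # Approximate n for a two-sided test of rho = 0 against |rho| = r,
    # using Fisher's z transformation.
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    c = 0.5 * log((1 + r) / (1 - r))
    return ceil(((z_alpha + z_power) / c) ** 2 + 3)

print(pearson_sample_size(0.30))  # 85, matching the medium-effect estimate
```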
Lastly, as a validity exercise, we compared our design decisions versus those taken in other corpora, for example, the MSRP corpus. In the construction of the MSRP corpus [7], several constraints were applied to narrow the space of possible paraphrases. However, in our opinion and for our specific purpose, these guidelines limit the aspects of semantic similarity that the corpus could capture. For example, only those pairs of sentences with at least 3 words in common and within a range of Levenshtein edit distance were considered, but these parameters constrain similarity, at least to a certain extent, to a textual one; it was required that for a pair to be a candidate, the length in words of the shorter sentence be more than 66% of the length of the longer sentence, thus limiting the possibility for the corpus to represent cross-level semantic similarity [17], a phenomenon of sentences with different lengths. It is also noteworthy that the MSRP corpus has an agreed consensus that 67% of the proposed sentence pairs are paraphrases, meaning that the majority of sentences are semantically equivalent and, therefore, other grades of similarity and even nonsimilarity are underrepresented.
Compiling the corpus
As stated in the sampling criteria of the corpus design, the selection of candidate pairs was performed using a basic STS process that automatically assigned continuous similarity scores between 0 and 1 inclusive, where 1 represented exact semantic equivalence and 0 indicated a totally unrelated meaning. This basic STS process was performed by a tool that we developed to compare the semantic similarity of two sentences using only their word embeddings. The strategy consisted of averaging the embeddings of the sentence words to produce a sentence embedding and computing the cosine between both sentence embeddings as a measure of their similarity. This strategy is well known as a good baseline for this kind of task. It is worth noting that the embeddings were trained on RegulonDB's literature (transcriptional regulation domain); further details of this strategy and the word embedding training are presented in [18].
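A minimal sketch of this averaged-embedding baseline is shown below (Python). The variable word_vectors stands for a word-to-vector lookup trained on the RegulonDB literature and is only a placeholder here.

```python
import numpy as np

def sentence_embedding(sentence, word_vectors):
    # Average the embeddings of the sentence's in-vocabulary words.
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def baseline_sts(s1, s2, word_vectors):
    # Cosine similarity between the two averaged sentence embeddings.
    a = sentence_embedding(s1, word_vectors)
    b = sentence_embedding(s2, word_vectors)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```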
This process was applied to two different sets: the anaerobiosis FNR (Fumarate and Nitrate Reductase regulatory protein) subset formed by articles about anaerobiosis; and the general set, consisting of sentences taken by randomly sampling of all of RegulonDB's articles (5963 publications). The former subset was manually built by an expert curator who selected, from anaerobiosis articles, sentences that she considered relevant within the subject. To generate the latter subset, we first extracted the textual content (sentences) from the 5963 publications (PDFs) found in the literature of RegulonDB by using a tool that we built for this purpose. Then, as a naive approach to only focus on sentences belonging to the article's main sections (e.g., methods, results, discussion), we discarded the first 30% and the last 30% of sentences from each article. Finally, we randomly chose two sentences from each article.
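The naive main-body sampling used for the general subset can be sketched as follows (illustrative Python; the trimming fraction and sample size follow the description above):

```python
import random

def sample_main_body(sentences, n=2, trim=0.30, seed=None):
    # Discard the first and last 30% of an article's sentences as a naive
    # proxy for keeping only the main sections, then sample n sentences.
    start = int(len(sentences) * trim)
    end = int(len(sentences) * (1 - trim))
    middle = sentences[start:end]
    rng = random.Random(seed)
    return rng.sample(middle, min(n, len(middle)))
```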
The resulting corpus is formed by pairs of sentences, of which 40% come from the anaerobiosis FNR subset and 60% from the general subset. A big picture of the described pipeline is shown in Fig. 1.
Corpus compilation pipeline. This pipeline, from bottom to top, shows the steps that were taken to compile the corpus (171 sentence pairs) that was later evaluated by annotators regarding the semantic similarity between the sentence pairs. First, two subsets, the anaerobiosis-FNR and the more general one, were compiled using different strategies. Then, a basic STS process was applied to both subsets in order to have a preliminary semantic similarity evaluation. This preliminary evaluation was used to select candidate sentences, creating a corpus that ended up with 40% of sentences from the anaerobiosis subset and 60% from the general subset
Annotation design
In addition to the corpus design, it was necessary to delineate the semantic similarity rating process. We followed a similar rating scale to the one used in SEMEVAL. This is an ordinal scale ranging from 0 to 4, where a Sentence Pair Similarity Score (SPSS) of 0 represents a totally disconnected semantic relation between two sentences and 4 conveys an exact semantic match, with the three middle scores indicating similarity shades, as shown in Table 2.
Table 2 Rating scale
Seven human experts, who are coauthors of the present article, comprised the set of annotators for the task. We decided to apply a non-fully crossed study design in which different sentence pairs were rated by different subsets of 3 annotators, i.e., each sentence pair would be rated by 3 annotators selected by chance from the set of the 7 human experts. Some studies have shown that 2 evaluations per item can be enough [19], but we considered that 3 annotators per item would allow evaluation of a larger number of exemplars, and also that 3 is the smallest number to provide a median when there is no consensus and a discrete final score is desired.
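In practice, this non-fully crossed assignment amounts to drawing, for every sentence pair, a random subset of 3 annotators from the pool of 7. A minimal sketch (Python, with hypothetical identifiers) is:

```python
import random

def assign_annotators(pair_ids, annotators, k=3, seed=0):
    # Each sentence pair gets its own random subset of k annotators.
    rng = random.Random(seed)
    return {pid: rng.sample(annotators, k) for pid in pair_ids}

assignment = assign_annotators(range(170), ["A1", "A2", "A3", "A4", "A5", "A6", "A7"])
```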
Due to the fact that "what is considered semantically equivalent" is prone to be biased by personal subjective considerations, it was necessary to homogenize the annotation process among raters. This was done by a training period of 4 iterative sessions to help annotators become familiar with the annotation guidelines and the corpora to be rated, and also to refine annotation guidelines. During this training, each session consisted of evaluating a small subset of sentence pairs, and at the end of each session, disagreements were discussed and solved and annotation guidelines were more precisely defined. This training period was considered concluded when a minimum annotator interagreement was achieved or the annotators considered that they fully understood the annotation guidelines.
In order to make the annotation process less subjective, some general guidelines were initially given to raters. These were collected from other corpus-building experiments [20] and from our own observations, including:
Order. Clauses in a compound sentence can be arranged in a different order without implying a change in its meaning.
Missing clauses. In complex or compound sentences, if a clause is present in one and missing in the other, it does not automatically result in a zero similarity. It depends on the grade of importance of the shared information.
Adjectives. Missing adjectives in principle do not affect similarity.
Enumerations. Missing elements can produce a minor decrease in the similarity score unless enumeration conveys the main sentence meaning. Reordering is considered equivalent.
Abbreviations. Abbreviations are considered equivalent, e.g. "vs" and "versus."
Hypernyms and hyponyms. The two forms share a grade of similarity, e.g., "sugar substance" vs "honey" vs "bee honey."
Compound words. Some terms are semantically equivalent to multiterm expressions, e.g., "anaerobiosis" and "in the absence of oxygen," "oxidative and nitrosative stress transcriptional regulator" and "oxyR," or "hemorrhage" and "blood loss."
Generalization or abstractions. Consider that two textual expressions share some grade of semantic similarity if one is a generalization or abstraction of the other, e.g., 8 vs "one-digit number."
Consensual refinement
General guidelines were subsequently refined and enriched during the consensus sessions.
As a first approximation to clarify the rating scale in our context, it was decided we would use the class of RegulonDB objects as topic markers within the sentences. RegulonDB contains objects of the following classes: Gene, Gene Product, Protein, Motif, Promoter, Transcription Unit (TU), Regulatory Interaction (RI), Reaction, Transcription Factor (TF), and Growth Condition (GC). Next, we provide example cases for each score that help to clarify our similarity scale.
SPSS of 4.
Both sentences have in common the same objects and express the same meaning, i.e., they are paraphrases of each other. The following pair of sentences serves to illustrate this grade:
This would mean that the IS5 element is able to provide FNR regulatory sites if inserted at appropriate positions.
In any case, insertion of an IS5 element is able to increase FNR-dependent expression or to place genes under FNR control.
SPSS of 3.
Both sentences share the same objects and other elements of their meaning. However, one of the sentences lacks relevant elements, does not refer to the same objects, or arrives at different conclusions. Some cases we could envision are that both sentences refer to the same Gene and share all other information, except that in one the gene is activated and in the other it is repressed; sentences referencing the same RI but that differ in terms of the RI's conditions; both sentences almost paraphrase each other, but one has more details.
The relation between the next pair of sentences exemplifies the last case:
These results confirm that the N-terminal domain of NikR is responsible for DNA recognition.
In preliminary experiments, we have also found that a subset of mutations within the DNA region protected by the N-terminal domain reduce the affinity of NikR for the operator—data not shown.
SPSS of 2.
Both sentences share at least one specific object and some other similarities, for example, a pair of sentences that refer to the same TF (see example (a)). An interesting singularity from the expert evaluation was the observation that "aerobic" and "anaerobic" conditions are related, since they both refer to oxygen availability. Therefore, in this corpus, contrasting conditions like these have a certain degree of similarity (see examples (a) and (b)).
Example (a)
The fnr mutant was thus deficient in the anaerobic induction of fumarate reductase expression.
Aerobic regulation of the sucABCD genes of Escherichia coli, which encode K-ketoglutarate dehydrogenase and succinyl coenzyme A synthetase: roles of ArcA, Fnr, and the upstream sdhCDAB promoter.
Example (b)
Transcription of the fdnGHI and narGHJI operons is induced during anaerobic-growth in the presence of nitrate.
SPSS of 1.
Both sentences have the same object class in common, but the specific object is different. Since Gene and GC objects are highly common in RegulonDB's literature, it was decided that sharing only these classes is not a sufficient condition for sentences to be rated with an SPSS of 1. When comparing a sentence that mentions a TF with another one that mentions any other object (or GC) that refers to the same process in which the TF is involved, an SPSS of 1 has to be assigned to the sentence pair. An SPSS of 1 was also considered in cases when both sentences referred to sequences and genes, even when neither the sequences nor the mentioned genes were the same. The following pair of sentences is an example of this grade:
To test whether the formate induction of the cyx promoter could be mediated by the fhlA gene product, the expression of the cyx-lacZ fusion was examined in an fhlA deletion strain in the presence and in the absence of formate.
SPSS of 0.
Sentences do not even share an object class. A possible case is that sentences share the Gene and GC classes (the exceptions of the SPSS 1 grade) but not the same specific objects; the following pair of sentences is an example of this case:
Carbon metabolism regulates expression of the pfl (pyruvate formate-lyase) gene in Escherichia coli.
Later work showed that most mutants lacking either ACDH or ADH activities of the AdhE protein mapped in the adhE gene at 27.9 min [1,4].
It was clarified that sentences do not necessarily have to contain biological content or refer to RegulonDB's objects to be annotated and have an SPSS above 0. The annotation assesses the similarity in meaning irrespective of the topic.
Table 3 is a summary of the above-described guidelines.
Table 3 Refined rating scale
Annotation process
To facilitate the annotation process, we decided to provide annotators with a spreadsheet template (see Fig. 2). The template was designed so that all needed information would be self-contained and the rater did not have to switch to other files. It consisted of a list of all sentence pairs that the annotator had to rate; for each sentence pair, the IDs and text were displayed. The area where the user wrote the scores was organized into columns where each column represented an annotation session, with date and time at the top. A rating scale table was also included as a reference.
Annotation template. The image shows the spreadsheet template that was used by the annotators. Sentence pairs to be rated are shown in the rows, one sentence pair per row. The cells to the right of each sentence pair were reserved for the annotators' evaluation, with one annotation session per column. At the top is a rating scale table which was included as a reference
The process consisted of providing each annotator with a file, based on the annotation template, containing exclusively the sentence pairs that had to be evaluated by him/her. Annotators had a fixed period of time of 1 week to rate all pairs; during that period, each annotator could divide the rating task into as many sessions as desired, as long as he or she added the session's date and time. It was indicated that sessions should be exclusive and continuous, i.e., the task should not be interrupted by more than 5 min and annotators should not be performing other tasks in parallel.
It is worth noting that the pairs of sentences assigned to each annotator were randomly selected from the set of pairs.
Corpus evaluation
The recommended way to evaluate the quality of the resulting corpus is through the Inter-Rater Agreement, also known as Inter-Rater Reliability (IRR) [19, 21–25]. IRR is a measure of the agreement between two or more annotators who have rated an item using a nominal, ordinal, interval, or ratio scale. It is based on the idea that observed scores (O) are the result of the scores that would be obtained if there were no measurement error—true scores (T)—plus the measurement error (E), i.e., O=T+E [21]. One possible source of measurement errors is the measure-instruments instability when multiple annotators are involved. IRR focuses on analyzing how much of the observed scores' variance corresponds to variance in the true scores by removing the measurement error between annotators. Thus, the reliability coefficient represents how close the given scores (by multiple annotators) are to what would be expected if all annotators had used the same instrument: the higher the coefficient, the better the reliability of the scores.
There are multiple IRR statistics, and which one to use depends on the study design. To select the IRR statistic, some factors should be considered, such as the type of measured variable (nominal, ordinal, etc.), if it is a fully crossed study design or not, and if what it is desired is to measure the annotators' or the ratings' reliability.
Our design (see "Annotation design" section) corresponds to a non-fully crossed study design, where an ordinal variable is measured and we are interested in measuring the ratings' reliability. Having that in mind, the statistics that better accommodated our study were Fleiss' Kappa (Fleiss) [26], Krippendorff's Alpha (Kripp), Intra Class Correlation (ICC) [27], Kendall (Kendall) [28], and Gwet's AC1 (Gwet) [22].
One of the most-used IRR statistics is Cohen's Kappa analysis (k) (5) [29]. It is a relation between the proportion of units in which the annotators agreed (\(\mathfrak {p}_{o}\)) and the proportion of units for which agreement is expected by chance (\(\mathfrak {p}_{c}\)); thus \(k = (\mathfrak {p}_{o} - \mathfrak {p}_{c}) / (1 - \mathfrak {p}_{c})\). Originally, this measure was intended for just two annotators who rated all items, so variants were developed in order to fit non-fully crossed study designs with more than two raters per item. The Fleiss' Kappa (1) is a nonweighting measure that considers unordered categories; it was designed for cases when m evaluators are randomly sampled from a larger population of evaluators and each item is rated by a different sample of m evaluators. In Eq. (1), \(p_{a}\) represents the averaged extent to which raters agree for the item's rate and \(p_{\epsilon}\) is the proportion of assignments to the categories.
$$ k = \frac{p_{a} - p_{\epsilon}}{1 - p_{\epsilon}} \tag{1} $$
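Equation (1) can be computed directly from an items-by-categories matrix of rating counts; the following Python sketch (with toy data, not the corpus ratings) is one way to do it:

```python
import numpy as np

def fleiss_kappa(counts):
    # counts: items x categories matrix; each row sums to the number of raters.
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]
    p_cat = counts.sum(axis=0) / counts.sum()            # category proportions
    p_a = ((counts * (counts - 1)).sum(axis=1)
           / (n_raters * (n_raters - 1))).mean()         # mean per-item agreement
    p_e = (p_cat ** 2).sum()                             # chance agreement
    return (p_a - p_e) / (1 - p_e)

# Toy example: 5 sentence pairs, 3 raters each, tallies over the scores 0-4.
ratings = [[0, 0, 0, 1, 2],
           [0, 0, 3, 0, 0],
           [0, 2, 1, 0, 0],
           [3, 0, 0, 0, 0],
           [0, 0, 0, 2, 1]]
print(fleiss_kappa(ratings))
```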
Krippendorff's Alpha (2) is an IRR measure that is based on computing the disagreement. It provides advantages like being able to handle missing data and handling various sample sizes, and it supports categorical, ordinal, interval, or ratio measured variable metrics. In (2), \(D_{o}\) is the observed disagreement and \(D_{\epsilon}\) is the disagreement one would get if rates were by chance. Thus, it is one minus the ratio between the observed disagreement and the expected disagreement.
$$ \alpha = 1 - \frac{D_{o}}{D_{\epsilon}} \tag{2} $$
Intra-class correlation (3) is a consistency measure that can be used to evaluate the ratings' reliability by comparing the item's rating variability to the variability of all items and all ratings. It is appropriate for fully crossed as well as for non-fully crossed study designs and when there are two or more evaluators. Another feature is that the disagreement's magnitude is considered in the computation, as in a weighted Kappa. In (3), var(β) accounts for variability due to differences in the items, var(α) is from the variability due to differences in the item's reevaluations, and var(ε) is for the variability due to differences in the rating scale used by annotators. Consistent with our study design, we selected the ICC variant as: a "one-way" model, to avoid accounting for systematic deviations among evaluators, because annotators for each item were selected at random. We used the average as the unit of analysis, because all items were rated by an equal number of annotators (i.e., 3).
$$ ICC = \frac{var(\beta)}{ var(\alpha) + var(\beta) + var(\epsilon) } \tag{3} $$
Kendall's coefficient is an association measure that quantifies the degree of agreement among annotators based on the ranking of the items. As a special case of the correlation coefficient, this coefficient will be high when items' orders (ranked by the given rate) would be similar across annotators. It is based on the computation of the normalized symmetric distances between the ranks. Because it relies on the distances instead of the absolute values, it better handles consistent rater biases, i.e., the bias effect. In (4), \(n_{c}\) refers to the number of concordant and \(n_{d}\) to the number of discordant ranks within a sample of \(n\) items.
$$ W = \frac{ n_{c} - n_{d}}{\frac{1}{2} n(n-1)} $$
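Eq. (4) counts concordant and discordant pairs between two rankings. A direct (if naive, O(n²)) Python version for a single pair of annotators is sketched below; note that ties, which are frequent on a 5-point scale, are counted in neither nc nor nd here, which is why practical analyses usually switch to tie-corrected variants (tau-b) or to Kendall's W for more than two raters. The example ratings are made up.

def kendall_pairwise(x, y):
    """Concordant-minus-discordant statistic of Eq. (4) for two annotators."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1       # ties (s == 0) are ignored here
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_pairwise([0, 1, 3, 4, 2], [0, 2, 3, 4, 1]))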
[30] demonstrated that the Kappa coefficient is influenced by trait prevalence (distribution) and base rates, thus limiting comparisons across studies. For that reason, [22] proposed an IRR coefficient (6) that, like Cohen's Kappa statistic, adjusts for chance agreement—cases where raters agree merely because of random rating—so as not to inflate the agreement probability with agreement that is not truly intentional. However, Gwet's coefficient has the property of not relying on independence between observations; its weights are based on weighted dissimilarities. This coefficient presents several advantages: it is less sensitive to marginal homogeneity and to positive bias from trait prevalence (i.e., it is more stable); it can be extended to multiple raters; like Krippendorff's coefficient, it can deal with categorical, ordinal, interval, or ratio measures and it can handle missing data; and, contrary to weighted Kappa, it does not require arbitrary weights when applied to ordinal data.
$$ Kappa = \frac{p - e(\kappa) }{1 - e(\kappa)} $$
$$ AC = \frac{p - e(\gamma) }{1 - e(\gamma)} $$
The difference between Gwet and Kappa is in the way that the probability of chance agreement is estimated. In Kappa, e(κ) combines the estimated chance that both raters independently classify a subject into category 1 with the estimated chance that both independently classify it into category 2 (7), whereas in Gwet's statistic the estimate is based on the chance that any rater (A or B) classifies an item into a given category (8).
$$ {}e(\kappa) = \left(\frac{A1}{N} \right) \left(\frac{B1}{N} \right) + \left(\frac{A2}{N} \right) \left(\frac{B2}{N} \right) $$
$$ {}\begin{aligned} e(\gamma) &= 2P_{1}(1-P_{1})\\ &= 2 \left(\frac{(A1 + B1)/2}{N} \right) \left(1- \left(\frac{(A1 + B1)/2}{N} \right) \right) \end{aligned} $$
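For the two-rater, two-category case of Eqs. (5)-(8), the contrast between the two chance-agreement terms can be computed directly from a 2x2 contingency table, as in the sketch below. The table is invented, and the study itself used the multi-rater, ordinal AC2 variant, so this is only meant to show why a skewed prevalence deflates Kappa but not AC1.

import numpy as np

def kappa_vs_ac1(table):
    """Cohen's kappa and Gwet's AC1 for a 2x2 table of two raters' counts.

    table[i][j] = number of items rater A put in category i and rater B in j.
    """
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_o = np.trace(t) / n                 # observed agreement
    a = t.sum(axis=1) / n                 # rater A's marginal proportions
    b = t.sum(axis=0) / n                 # rater B's marginal proportions
    e_kappa = (a * b).sum()               # Eq. (7)
    pi1 = (a[0] + b[0]) / 2               # average chance of category 1
    e_gamma = 2 * pi1 * (1 - pi1)         # Eq. (8)
    kappa = (p_o - e_kappa) / (1 - e_kappa)
    ac1 = (p_o - e_gamma) / (1 - e_gamma)
    return kappa, ac1

# A skewed (high-prevalence) hypothetical table: kappa drops, AC1 stays higher
print(kappa_vs_ac1([[80, 5], [5, 10]]))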
It is important to note that Gwet proposes 2 variants of his statistic, AC1 and AC2. AC2 is a weighted version of AC1—some disagreements between raters are considered more serious than others—and thus a better alternative for ordinal data. AC2 is intended to be used with any number of raters and an ordered categorical rating system, as is our case. In AC2, both chance agreement and misclassification errors are adjusted; thus, it is defined as a "bias-adjusted conditional probability that two randomly chosen raters agree given that there is no agreement by chance" [22].
Training period
The training period consisted of 4 iterations, in each of which a set of sentence pairs was rated by all annotators. Afterwards, we held a consensus session where conflicts were resolved and questions about the guidelines were answered, resulting in updates to the guidelines.
We performed the IRR analysis of each iteration in order to review the effect of the consensus sessions in homogenizing the annotation process. As can be seen in Fig. 3 and Table 4, the degree of inter-annotator agreement increased in each iteration irrespective of the statistic. In the fourth session, we reached a Fleiss' Kappa of 0.546 as the lowest metric, which is considered a moderate strength of agreement [31]. However, we have to remember that this metric is an unweighted coefficient: when 2 annotators do not agree on the evaluation of a pair of sentences, the metric treats the case where one annotator grades them with 4 and the other with 0 (i.e., evaluations differing by 4 points) exactly as it treats the case where one grades them with 2 and the other with 3 (i.e., evaluations differing by only 1 point). That is why we reached an almost-perfect IRR in the statistics that better deal with ordinal scales: ICC (0.964) and Gwet's AC2 (0.910). It is noteworthy that Gwet's coefficients are often recommended over the Kappa family of coefficients for computing IRR.
The progress of IRR through the consensus sessions. The chart shows the IRR measured using five different metrics. The IRR score is represented on the y-axis, and the results for the four sessions are chronologically displayed on the x-axis. IRR scores, in all metrics, improved in each subsequent consensus session. For example, the IRR measured using Gwet's AC2 coefficients improved from 0.545 in the first session to 0.910 in the last one, that is, the annotators' evaluations were much more homogeneous at the end of the consensus sessions
Table 4 IRR through agreement sessions
We also compared the IRR between all combinations of annotators' pairs as a way of detecting consistent bias of one annotator versus the others (see Fig. 4). We determined that more guideline clarifications were needed for annotator 4, who consistently had lower IRR values than the other raters.
IRR between pairs of annotators at the end of the training sessions. This chart shows the IRR (ICC) of each annotator compared with each of the other annotators. Both x- and y-axes represent annotators; for example, the intersection of the y-value 4 and x-value 5 represents the IRR between annotator-4 and annotator-5. As shown on the IRR scale to the right, the higher the IRR, the more intense the red color, and so in this case, there is a moderate IRR between annotator-4 and annotator-5 and higher agreement between annotators 2 and 3. We noted that annotator-4 had a lower agreement with all others and thus he needed more guideline clarifications
After the training period, we built the corpus based on the proposed design (see "Annotation design" section). It resulted in 171 pairs of sentences, each rated by 3 annotators selected by chance from the group of 7 experts. It is noteworthy that the sentences evaluated during the training period were not included in these 171 pairs.
Several IRR analyses were performed to assess the degree to which annotators consistently assigned similarity ratings to sentence pairs (see Table 5). The marginal distributions of similarity ratings did not indicate a considerable bias among annotators (Fig. 5), but they did show a prevalence effect towards lower similarity rates (Fig. 6). A statistic less sensitive to this effect is Gwet's AC, which makes it an appropriate index of IRR—in particular the AC2 variant, due to the ordinal nature of our data. The resulting coefficient, AC2 = 0.8696 with a 95% confidence interval of [0.8399, 0.8993], indicated very good agreement [32].
Ratings distribution per annotator. In this chart, the seven annotators are represented on the y-axis, and on the x-axis the evaluation proportions for each similarity grade are represented. Similarity grades are ordered from lowest similarity (0) at the left to highest (4) at the right. For example, it can be seen that both annotator-4 and annotator-5 had the highest proportions of 0-similarity evaluations, but annotator-5 tended to give higher grades in the rest of the cases
Individual ratings distribution. This chart shows the distribution of annotators' ratings per similarity grade during the evaluation of the corpus (not the training period). The x-axis shows the five similarity scale values, and the percentage of evaluations within each grade are represented on the y-axis. More than 40% of the evaluations were rated as "no similarities" (score of 0); nevertheless, 50% of evaluations were in the similarity value range between 1 and 3
Table 5 Corpus' inter-rate agreement for various statistics
For completeness, we investigated whether the non-fully crossed design led to inflated coefficients. To do this, we first grouped the sentence pairs by the annotators who rated them (each of these groups can be considered a fully crossed study design); next, we computed the IRR for each group; finally, we computed the arithmetic mean over all groups. The resulting averages (Table 6) were quite similar to the coefficients computed for the whole corpus, reconfirming the corpus reliability.
Table 6 IRR by annotators group
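A sketch of the grouping check just described: pivot the long-format ratings into a pairs x annotators table, group the pairs by the annotator triple that rated them, compute an IRR coefficient inside each (now fully crossed) group, and average the results. The column names, file layout and the reuse of the icc_oneway helper sketched after Eq. (3) are all assumptions; the authors' actual scripts may differ.

import numpy as np
import pandas as pd

def mean_group_icc(df):
    """Average ICC over the fully crossed sub-designs of a long-format table.

    `df` is assumed to have columns pair_id, annotator, rating
    (three rows per sentence pair).
    """
    wide = df.pivot(index="pair_id", columns="annotator", values="rating")
    # key = the (sorted) triple of annotators who rated each pair
    key = wide.notna().apply(lambda row: tuple(sorted(row[row].index)), axis=1)
    group_iccs = []
    for _, block in wide.groupby(key):
        block = block.dropna(axis=1)        # keep only that triple's columns
        if len(block) > 1:                  # ICC needs at least two items
            # icc_oneway is the helper sketched earlier in this section
            group_iccs.append(icc_oneway(block.to_numpy())[1])
    return float(np.mean(group_iccs))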
From the individual rating distribution (Fig. 6), we can see that although the distribution is biased towards no similarity, we achieved a good amount (> 50%) of sentence pairs rated within the 1-3 score range.
We observed that the IRR increased more significantly after the third training session. We think that this increase can be explained mainly by two factors. First, annotators familiarized themselves with the guidelines and gained a better understanding of what was expected for the task. Despite task explanations and annotation guidelines, in the first sessions there was a tendency to grade the similarity of the biological objects mentioned in the compared texts and to overlook the full semantics conveyed by those texts. Second, after the first two sessions, annotators had collected a good set of examples, along with the respective explanatory notes, from the previous consensus sessions. These examples served as disambiguation sources when needed. It is interesting that both factors are related to the hypothesis that although similarity judgment is an intuitive process, there is no perfect consensus, especially about the degrees of similarity [33–36]. It depends on personal context, and we could confirm the importance of guidelines and consensus sessions for homogenizing, to a certain degree, the annotators' performance.
Another practice that we found helpful during the consensus sessions was the participation of a mediator who was familiar with the guidelines and with the task's goal but was not part of the annotator group, i.e., a third party. When needed, the mediator's role was limited to encouraging annotators to explain their position and, if pertinent and possible, to restating the discussion in equivalent terms through a general-context analogy. This helped to avoid undue influence from annotators who were more experienced or who upheld their opinions more strongly.
In general, annotators agreed that the sentences without mentions of biological objects were more difficult to assess and that in the candidate sentences there was a clear bias toward low similarity scores. This similarity dataset is just the first iteration of an ongoing process. We plan to repeat this strategy to extend the dataset; instead of using the basic STS process, we could now use a similarity model trained with the current corpus [18]. It is therefore reasonable to expect an improvement in the preselection step, more likely resulting in a more balanced rating distribution, i.e., more grades of 3 and 4.
To weigh the size and distribution of our corpus against previous work, we compared it with BIOSSES. We selected this corpus because, to the best of our knowledge, it is the only similarity corpus specialized for the biomedical domain, and setting our corpus side by side with general-domain ones (e.g., MSRP, SEMEVAL, ULPC) would be unfair. Regarding the balance of these two corpora with respect to the number of sentence pairs per grade, BIOSSES is better balanced, with 15% of sentences graded with a value of 0, 12% with 1, 27% with 2, 35% with 3, and 11% with 4. Our corpus has a distribution of 48%, 22%, 15%, 14%, and 1% corresponding to the 0, 1, 2, 3, and 4 similarity grades. However, concerning corpus size, although still small, our corpus, with 171 sentence pairs, is 70% larger than the BIOSSES corpus, which consists of only 100 pairs of sentences. Moreover, even though BIOSSES is specialized for the biomedical domain, its coverage is still too broad for our purpose. This is evidenced by the fact that, when analyzing term frequencies in BIOSSES, within the top 50 terms we found terms like cell, tumor, cancer, study, report, human, gene, lung, leukemia, etc., whereas in our corpus the prevailing terms are site, expression, activation, gene, protein, strain, regulation, DNA, region, downstream, upstream, etc.
We believe that our publicly available dataset (see "Availability of data and materials" section) can be of great benefit in several NLP applications. For example, we are already successfully using it to fine-tune and test a semantic similarity engine as part of an assisted curation pipeline. In these experiments, we used an ensemble of similarity metrics that were string, distributional, and ontology based. The individual measures were combined through different regression models, which were trained using the corpus presented in this publication. Our models obtained strong correlations (ρ = 0.700) with human evaluations, which is far from the state of the art in general domains but quite good considering our highly specialized domain—microbial transcriptional regulation. In the absence of this corpus, the only alternative would have been to weight the different metrics equally, which in our experiments results in a Pearson's correlation (ρ) of 0.342, at best. These experiments showed that this corpus is not only relevant but also useful for applied tasks [18].
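As a hedged sketch of how such an ensemble can be trained and scored against the corpus, the snippet below regresses a feature matrix of individual similarity scores onto human ratings and reports Pearson correlations for an unweighted average and for the learned combination. Everything in it — the synthetic data, the plain linear regression, the train/test split — is illustrative only; the actual models are described in [18].

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# X: one row per sentence pair, one column per individual similarity metric
# y: the human similarity rating for that pair (synthetic stand-in data)
rng = np.random.default_rng(0)
X = rng.random((171, 5))
y = X @ np.array([0.5, 0.1, 0.2, 0.1, 0.1]) + 0.1 * rng.standard_normal(171)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

baseline = X_te.mean(axis=1)                     # equally weighted metrics
model = LinearRegression().fit(X_tr, y_tr)       # learned combination
pred = model.predict(X_te)

print("baseline r:", round(pearsonr(baseline, y_te)[0], 3))
print("regression r:", round(pearsonr(pred, y_te)[0], 3))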
We did not obtain a corpus with ratings as balanced as desired; however, we now have a good representation of 4 of the 5 grades and a corpus with very good IRR. Therefore, it will serve our purposes well, and we think it can be quite a valuable starting point, in terms of both data and processes, for continuing to build a standard similarity corpus for the transcriptional regulation literature. To the best of our understanding, this is the first similarity corpus in this field, and thus it represents a stepping stone towards the evaluation and training of NLP-based high-throughput curation of literature on microbial transcriptional regulation.
http://regulondb.ccg.unam.mx/
http://regulondb.ccg.unam.mx/menu/tools/nlp/index.jsp
Declarative, interrogative, exclamatory, etc.
Based on the stylographic tag assigned by our home-made PDF processing tool.
Applying a baseline metric.
In fully crossed design studies all evaluated items (pairs of sentences) are rated by the same set of annotators, whereas in non-fully crossed design studies, different items are rated by different subsets of annotators.
FNR: Fumarate and nitrate reductase regulatory protein
Fleiss: Fleiss' Kappa coefficient
GC:
Gwet: Gwet's AC1 statistic
ICC: Intraclass correlation
IRR: Interrater reliability
Kendall: Kendall coefficient
Kripp: Krippendorff's Alpha coefficient
MSRP: Microsoft Research Paraphrase
NLP: Natural language processing
RI: Regulatory interaction
RNA: Ribonucleic acid
SEMEVAL: Semantic Evaluation workshop
SPSS: Sentence Pair Similarity Score
STS: Semantic Textual Similarity
TF: Transcription Factor(s)
TFBS: TF binding site
TU: Transcription Unit
ULPC: User Language Paraphrase Corpus
Gama-Castro S, Salgado H, Santos-Zavaleta A, Ledezma-Tejeida D, Muñiz-Rascado L, García-Sotelo JS, Alquicira-Hernández K, Martínez-Flores I, Pannier L, Castro-Mondragón JA, Medina-Rivera A, Solano-Lira H, Bonavides-Martínez C, Pérez-Rueda E, Alquicira-Hernández S, Porrón-Sotelo L, López-Fuentes A, Hernández-Koutoucheva A, Del Moral-Chavez V, Rinaldi F, Collado-Vides J. RegulonDB version 9.0: High-level integration of gene regulation, coexpression, motif clustering and beyond. Nucleic Acids Res. 2016; 44(D1):133–43. https://doi.org/10.1093/nar/gkv1156.
Santos-Zavaleta A, Salgado H, Gama-Castro S, Sánchez-Pérez M, Gómez-Romero L, Ledezma-Tejeida D, García-Sotelo JS, Alquicira-Hernández K, Muñiz-Rascado LJ, Peña-Loredo P, Ishida-Gutiérrez C, Velázquez-Ramírez DA, Del Moral-Chávez V, Bonavides-Martínez C, Méndez-Cruz C-F, Galagan J, Collado-Vides J. RegulonDB v 10.5: tackling challenges to unify classic and high throughput knowledge of gene regulation in E. coli K-12. Nucleic Acids Res. 2018:1–9. https://doi.org/10.1093/nar/gky1077.
Agirre E, Cer D, Diab M, Gonzalez-Agirre A, Guo W. SEM 2013 shared task : Semantic Textual Similarity. Second Jt Conf Lexical Comput Semant (SEM 2013). 2013; 1:32–43.
Everingham M, Van Gool L, Williams CKI, Winn J, Zisserman A. The Pascal visual object classes (VOC) challenge. Int J Comput Vis. 2010; 88(2):303–38. https://doi.org/10.1007/s11263-009-0275-4.
McCarthy PM, McNamara DS. The User-Language Paraphrase Corpus. Cross-Disciplinary Adv Appl Nat Lang Process [Internet]. Hershey: IGI Global; 2012, pp. 73–89. Available from: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/978-1-61350-447-5.ch006.
Rus V, Lintean M, Moldovan C, Baggett W. The SIMILAR Corpus: A Resource to Foster the Qualitative Understanding of Semantic Similarity of Texts. Semant Relations II Enhancing Resour Appl 8th Lang Resour Eval Conf (LREC 2012). 2012.: p. 23–5.
Dolan WB, Brockett C. Automatically Constructing a Corpus of Sentential Paraphrases. In: Proc Third Int Work Paraphrasing [Internet]. Asia Federation of Natural Language Processing: 2005. p. 9–16. Available from: https://www.microsoft.com/en-us/research/publication/automaticallyconstructing-a-corpus-of-sentential-paraphrases/.
Bernhard D, Gurevych I. Answering learners' questions by retrieving question paraphrases from social Q&A sites. Proc Third Work Innov Use NLP Build Educ Appl - EANL '08 (June). 2008:44–52. https://doi.org/10.3115/1631836.1631842.
Soğancıoğlu G, Öztürk H, Özgür A. BIOSSES: A semantic sentence similarity estimation system for the biomedical domain. Bioinformatics. 2017; 33(14):i49–58. https://doi.org/10.1093/bioinformatics/btx238.
Sinclair J. Developing linguistic corpora: a guide to good practice. 2004. https://ota.ox.ac.uk/documents/creating/dlc/chapter1.htm Accessed 16 May 2017.
Karaoglan B, Kisla T, Metin SK, Hürriyetoglu U, Soleymanzadeh K. Using Multiple Metrics in Automatically Building Turkish Paraphrase Corpus. Res Comput Sci. 2016; 117:75–83.
Paroubek P, Chaudiron S, Hirschman L. Principles of evaluation in natural language processing. Traitement Automatique des Langues. 2007; 48(1):7–31.
Juckett D. A method for determining the number of documents needed for a gold standard corpus. J Biomed Inform. 2012; 45(3):460–70. https://doi.org/10.1016/j.jbi.2011.12.010.
Cohen J. A power primer. Psychol Bull. 1992; 112:155–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19565683.
Moinester M, Gottfried R. Sample size estimation for correlations with pre-specified confidence interval. The Quantitative Methods for Psychology. 2014; 10:124–30. Available from: http://www.tqmp.org/RegularArticles/vol10-2/p124.
Chuan CL, Penyelidikan J. Sample size estimation using Krejcie and Morgan and Cohen statistical power analysis: A comparison. Jurnal Penyelidikan IPBL. 2006; 7(1):78–86.
Jurgens D, Pilehvar MT, Navigli R. Cross level semantic similarity: an evaluation framework for universal measures of similarity. Lang Resour Eval. 2016; 50(1):5–33. https://doi.org/10.1007/s10579-015-9318-3.
Lithgow-Serrano O, Collado-Vides J. In the pursuit of semantic similarity for literature on microbial transcriptional regulation. J Intell Fuzzy Syst. 2019; 36(5):4777–86. https://www.doi.org/10.3233/JIFS-179026.
Deleger L, Li Q, Lingren T, Kaiser M, Molnar K, Stoutenborough L, Kouril M, Marsolo K, Solti I. Building gold standard corpora for medical natural language processing tasks. AMIA... Ann Symp Proc / AMIA Symp. AMIA Symp. 2012; 2012:144–53.
Torres-Moreno J-M, Sierra G, Peinl P. A German Corpus for Text Similarity Detection Tasks. 2017; 5(2). http://arxiv.org/abs/1703.03923.
Hallgren KA. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial. Tutor Quant Methods Psychol. 2012; 8(1):23–34. https://doi.org/10.20982/tqmp.08.1.p023.
Gwet K. Inter-Rater Reliability : Dependency on trait prevalence and marginal homogeneity. Stat Methods Inter-Reliability Assess. 2002; 2:1–9.
Vila M, Bertran M, Martí MA, Rodríguez H. Corpus annotation with paraphrase types: new annotation scheme and inter-annotator agreement measures. Lang Resour Eval. 2014; 49(1):77–105. https://doi.org/10.1007/s10579-014-9272-5.
Bhowmick PK, Mitra P, Basu A. An agreement measure for determining inter-annotator reliability of human judgements on affective text. Proc Work Hum Judgements Comput Linguist - HumanJudge '08. 2008; August:58–65. https://doi.org/10.3115/1611628.1611637.
McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb). 2012; 22:276–82.
Fleiss JL. Measuring nominal scale agreement among many raters. Psychol Bull. 1971; 76(5):378–82. https://doi.org/10.1037/h0031619.
Bartko JJ. The Intraclass Correlation Coefficient as a Measure of Reliability. Psychol Rep. 1966; 19(1):3–11. https://doi.org/10.2466/pr0.1966.19.1.3.
Kendall MG. Rank Correlation Methods. Oxford, England: Griffin; 1948.
Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960; 20(1):37–46. https://doi.org/10.1177/001316446002000104.
Gwet K. Kappa statistic is not satisfactory for assessing the extent of agreement between raters. Stat Methods Inter-Reliability Assess. 2002; 1:1–5.
Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977; 33(1):159. https://doi.org/10.2307/2529310.
Wongpakaran N, Wongpakaran T, Wedding D, Gwet KL. A comparison of Cohen's Kappa and Gwet's AC1 when calculating inter-rater reliability coefficients: a study conducted with personality disorder samples. BMC Med Res Methodol. 2013; 13:61. Available from: https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-13-61.
Kahneman D, Tversky A. Subjective probability: A judgment of representativeness. Cogn Psychol. 1972; 3:430–54. Available from: https://linkinghub.elsevier.com/retrieve/pii/0010028572900163.
Osgood CE. The nature and measurement of meaning. Psychol Bull. 1952; 49:197–237. Available from: https://doi.org/10.1037/h0055737.
Isaac AMC. Objective Similarity and Mental Representation. Australas J Philos. 2013; 91:683–704. Available from: http://www.tandfonline.com/doi/abs/10.1080/00048402.2012.728233.
Rubenstein H, Goodenoug JB. Contextual correlates of synonymy. Commun ACM. 1965; 8(10).
We acknowledge funding from UNAM, from FOINS CONACyT Fronteras de la Ciencia [project 15], and from the National Institutes of Health (grant number 5R01GM110597). CMA is a doctoral student from Programa de Doctorado en Ciencias Biomédicas, Universidad Nacional Autónoma de México (UNAM), and is the recipient of Ph.D. fellowship 576333 from CONACYT.
The corpus is available at https://github.com/JCollado-NLP/Corpus-Transcriptional-Regulation.
Computational Genomics, Centro de Ciencias Genómicas, Universidad Nacional Autónoma de México (UNAM). A.P., 565-A Cuernavaca, Morelos, 62100, México
Oscar Lithgow-Serrano, Socorro Gama-Castro, Cecilia Ishida-Gutiérrez, Citlalli Mejía-Almonte, Víctor H. Tierrafría, Sara Martínez-Luna, Alberto Santos-Zavaleta, David Velázquez-Ramírez & Julio Collado-Vides
Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas (IIMAS), Universidad Nacional Autónoma de México (UNAM), Mexico City, México
Department of Biomedical Engineering, Boston University, Boston, Massachusetts, USA
Julio Collado-Vides
OWLS carried out the experiment's design, the data analysis, and wrote the paper. JCV participated in the project design, annotation guidelines, writing and correction of the paper. All other authors refined the experiment design and annotation guidelines, participated in the consensus sessions, and performed the corpus annotation. All authors read and approved the final manuscript.
Correspondence to Oscar Lithgow-Serrano.
Lithgow-Serrano, O., Gama-Castro, S., Ishida-Gutiérrez, C. et al. Similarity corpus on microbial transcriptional regulation. J Biomed Semant 10, 8 (2019) doi:10.1186/s13326-019-0200-x
DOI: https://doi.org/10.1186/s13326-019-0200-x
Transcriptional-regulation
Ernie and the Case of the Singing Sisters
On my way to work each morning I pass a news-agency that usually has a sandwich-board displaying the latest headlines of a somewhat disreputable tabloid newspaper (not that I would ever buy such a scandal sheet).
On Monday the headline read PLEIADES LAST CONCERT TONIGHT! On Tuesday it read PLEIADES STRUCK DOWN BY FOOD POISONING!! On Wednesday I read TRAGEDY STRIKES!!! MAIA DIES!! OTHER SISTERS RECOVERING! Thursday brought CORONER INVESTIGATES MYSTERY ILLNESS!!!! And finally on Friday (on my way to Ernie's place for dinner), MAIA POISONED!!!!! POLICE BAFFLED!! ...all very intriguing.
I was surprised to find Ernie in an odd mood when he opened the door. "It's just tragic" he exclaimed, as he ushered me into the living room "Poor Maia - poisoned - never to sing again". I told him I had no idea what Pleiades or Maia was. "Good grief!!", Ernie replied, "Do you live under a rock?". And he went on to explain that The Pleiades was his favorite geek-girl rock-band, the only one in the world where the members were seven identical septuplets and graduates of the Advanced School of Geometrical Dissection. I was trying to choose some sympathetic words when there was a knock at the door. I answered it to find a middle-aged woman in a suit who introduced herself as Detective Superintendent Harriet Bosch - she explained that she needed Ernie's expert opinion to solve "a perplexing problem involving the Pleiades affair".
"When Maia died, and her sisters all became ill after supper with her sisters and the group's manager, I was called in to investigate", began DS Bosch. When dealing with possible foul play", she continued "I use the 4M approach. Means, Motive, Method, and Malefactor!".
"The MEANS became apparent during the initial investigation. Traces of a rare toxin from the Poison Sparrow Toad were found in blood samples from all seven sisters, but not the manager's. It's an unusual poison - up to a certain dose it only cause indigestion - but the tiniest amount above that dose is fatal. Six of the band-members were just under the fatal dose, but Maia was slightly above it. It was obvious that such a rare poison couldn't have been ingested accidentally, so we immediately initiated a murder investigation."
"The MOTIVE was easy to find. We discovered that Maia owned the rights to all the Pleiades music - she had just announced that she was about to go solo. Her sisters would be left with nothing but the wages they had earned for past concerts. We surmised that one or more of the sisters poisoned her - either in revenge, or to inherit a share of the royalties, worth millions."
"The METHOD is a bit tricky. The only things any of the band members ate were home made cakes at the after-concert party. Our theory is that one of the sisters mixed a precise measure of poison into her cake and gave Maya a slightly larger (and fatal) share, but the witness statements don't support our hypothesis - it seems as though everyone except the manager had an exactly equal share of all the cakes."
The detective passed over the written witness statements.
The Manager's Statement:
Each of the sisters brought along an identical circular cake.
That evening they had one straight knife, and three different sized circular 'cookie-cutters' to cut up their cakes.
Straight cuts were edge to edge, but in circular cuts the cookie-cutter could overlap past the edge of a cake. Cake parts were not moved between cuts.
The cakes were cut and eaten in the following order:
Maya made one cut with the first cookie cutter, plus two straight cuts, to divide her cake into eight exactly equal portions - one for each of us.
Because of my diet, I asked that the rest of the cakes should be divided into only seven pieces, with me left out.
Electra made one cut with the same cookie cutter, plus two straight cuts.
Alcyone made one cut with the second cookie cutter, plus two straight cuts.
Taygete made one cut with the third cookie cutter, plus two straight cuts.
Asterope made one cut with each of two previously used cookie cutters, plus one straight cut.
Celaeno made two cuts with one of the previously used cookie cutters, plus one straight cut.
Merope made one cut with one of the used cookie cutters, plus three cuts with another of the used cookie cutters.
The Surviving Sisters Statements (all 6 made identical statements):
All the manager's statements are true.
I divided my cake into exactly seven equal portions (but not necessarily seven pieces), and shared the portions exactly equally with each of my sisters.
Ernie read the statements, paused for a moment, and then picked up a pad and drew eight cryptic diagrams on it. "As far as METHOD goes, I agree that any one of the witnesses could be lying", he said, "so any one of the sisters could have poisoned their cake and given Maya a slightly larger piece. It's even a possible suicide. But assuming there wasn't some sort of mass conspiracy", he continued, "surely the most obvious MALEFACTOR is the witness whose statement could not possibly be true?". Ernie passed the diagrams to the DS, who thanked him and then left the house without further comment.
Ernie wouldn't discuss what information was in his note, but this morning I noticed a new bill-board: SINGING SISTER ARRESTED IN DAWN RAID!!!!!! To guarantee a fair trial, the suspect's name has been suppressed by judicial order, but I am just itching to know. Who did Ernie identify as the potential MALEFACTOR? Maybe a diagram would help.
I am accepting Sleafar's argument as he/she does state that Celaeno's statement cannot be true. To show that there is a solution for Electra, consider the following.
Consider you want to divide a circle into 4 regions with proportional areas A:B:C:C. Initially ignoring the areas C and C, let's try to get the ratio A:B correct. You can make the two cuts so that they intersect on a diameter of the circle which bisects the angle theta between the two cuts (top left sketch below). If the intersection is too far from the origin of the circle then the area A will be too big relative to B (top left); if the intersection is too close to the origin then area B is too big (top right). As the areas vary continuously as the intersection is moved along the chosen diameter, there will be a solution that is 'just right' to give a correct ratio A:B for the chosen angle theta. Now (with all the solutions where the ratio A:B is correct) note that if theta is very large, the areas C will be too small, but if the angle is very small then the areas C will be too large. Once again, as the relative area C varies continuously with the angle, there will be an angle that is just right to give the chosen ratio A:B:C:C. If we choose A:B:C:C to be 2:1:2:2, then make a copy scaled down by sqrt(2), we will have a smaller circle divided up into areas 1:1/2:1:1. Put this inside the large circle so the cuts coincide; the smaller wedges are each 1/2 the area of the larger wedges, so you will have 6 equal areas 1:1:1:1:1:1 (the inside and outside parts of A, C, and C), plus two smaller areas 1/2:1/2 which can be added together to make the 7th equal share.
The approximate angle and offset from the center are ~75 degrees and 0.19*Radius. See below superimposed on Sleafar's approximation.
mathematics geometry word-problem dissection
Penguino
Can we take pi as 22/7? – Mea Culpa Nay May 6 '18 at 10:53
@u_ndefined Why wouldn't it be? Looks like a mix of geometry and logical deduction to me. Take a look at the other Ernie puzzles to get a better idea what types of things this series is looking for. – DqwertyC May 6 '18 at 15:00
@Mea Culpa Nay I think that approximating pi to that extent could easily end up with the wrong person being poisoned. – Penguino May 6 '18 at 21:24
@Nope - and fixed. But I blame the tabloid newspaper for their poor proofreading. – Penguino May 7 '18 at 5:04
@Phylyp You could, but only if the numerator was extremely close to 7*pi. And since the recent subtraction, there are only six sisters in the remainder. – Penguino May 7 '18 at 21:07
I was able to limit the number of suspects to $2$ of the $6$ sisters. For the rest of this post, I assume the radius of each cake is $1$ (in other words, the measurement unit used is the cake radius). This means the area of each cake equals $\pi$. Also, except in Maya's case, the word portion means $\frac{1}{7}$ of a cake.
The solution for this part is pretty straightforward:
We now know, that the cutter used by Maya has an area of $\frac{1}{2}\pi$ and a radius of $\sqrt{\frac{1}{2}}$.
Electra used the same cutter as Maya, and I wasn't able to find a way to make $7$ equal portions using this cutter. This means Electra is a possible suspect.
I guess something like this could be possible, but I'm not even close to proving the portions are of equal size (they obviously aren't in the picture).
Edit: After looking at the picture above, I think there aren't enough degrees of freedom for a valid solution. The red part must have the size of $1$ portion, the opposite part (orange and violet) must have the size of $2$ portions. This defines the position of the straight lines. The circle must split the orange/violet part into exactly equal parts, which defines the position of the cutter. And finally the cutter must also split the remaining $2$ parts into equal portions without being able to make further adjustments, which seems highly unlikely.
Alcyone and Taygete
I cannot prove these are the only possible cutter sizes, but these are again the most straightforward solutions:
4 portions; area $\frac{4}{7}\pi$; radius $\sqrt{\frac{4}{7}}$
3 portions; area $\frac{3}{7}\pi$; radius $\sqrt{\frac{3}{7}}$
Using these cutters, it's possible to split the cakes like this:
Asterope
Asterope used the $4$ portion cutter and the $3$ portion cutter. The intersection of both cutters has the size of $2$ portions. The distance between the cutter centers is $\approx 0.475$ (calculated using this tool).
Celaeno
Knowing Asterope's solution, this looks pretty straightforward as well:
Using the $3$ portion cutter for an intersection with the size of $1$ portion, we have a distance between the centers of $\approx 0.724$. The problem is that we cut outside of the cake, which means the pieces left and right are too small, and the pieces on the top and bottom are too big.
It could be "fixed" by moving the cutter slightly inwards in each case, making the center piece bigger than the others and giving it to Maya. This means that Celaeno is a very probable suspect.
Edit: I also checked the numbers for the other cutters with a $1$ and $2$ portion intersection, and they aren't even close to a valid solution.
Merope used the $4$ portion cutter in the center, and the $3$ portion cutter on the sides. The intersections have the size of $1$ portion. The cutter centers have a distance of $\approx 0.841$.
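The three centre distances quoted above ($\approx 0.475$, $\approx 0.724$ and $\approx 0.841$) can be reproduced with the standard circle-circle intersection ("lens") area formula plus a one-dimensional bisection, as in the sketch below. This is an independent numerical check, not the tool linked in the answer; the cake radius is taken as 1 and the cutter radii follow the $\frac{4}{7}\pi$ and $\frac{3}{7}\pi$ areas given earlier.

import math

def lens_area(r1, r2, d):
    """Area of intersection of two circles with radii r1, r2 and centre distance d."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def distance_for_area(r1, r2, target):
    """Bisect for the centre distance giving a lens of the target area."""
    lo, hi = 1e-9, r1 + r2
    for _ in range(100):
        mid = (lo + hi) / 2
        if lens_area(r1, r2, mid) > target:   # the lens shrinks as d grows
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

portion = math.pi / 7                          # one seventh of a unit-radius cake
r4, r3 = math.sqrt(4 / 7), math.sqrt(3 / 7)    # the 4- and 3-portion cutters
print(round(distance_for_area(r4, r3, 2 * portion), 3))   # Asterope: ~0.475
print(round(distance_for_area(r3, r3, 1 * portion), 3))   # Celaeno:  ~0.724
print(round(distance_for_area(r4, r3, 1 * portion), 3))   # Merope:   ~0.841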
Sleafar
You are very close. Regarding your comment re Electra's cutting, there is a nice argument that involves only two degrees of freedom plus a scaling symmetry that confirms whether she can (or can't) cut the cake fairly. I will give you a few days to think about it and then accept your answer if nobody produces anything better. – Penguino May 15 '18 at 0:40
@Penguino I have looked at the Electra image for a while now, and the only possible scaling I can see is the "circular sector like" thing in the small circle, which could scale to another "sector like" thing in the big circle, if the lines cross at the right point. The scaling is of course determined by the radius difference, and I don't have the impression this knowledge is helping me at all. – Sleafar May 18 '18 at 14:48
Institut Camille Jordan (Lyon) June 16 - 18, 2010
All talks will be held in room Fokko du Cloux, 1st floor, Institut Camille Jordan, Université Lyon 1
Wednesday, June 16
12h00: Welcoming and lunch (room 110)
14h00 - 15h00: R. Brunetti
15h15 - 16h15: V. Mastropietro
16h45 - 17h45: L. Cantini

Thursday, June 17
9h30 - 10h30: L. Accardi
11h00 - 12h00: M. Kontsevich
Lunch (room 110)
14h00 - 15h00: M. Duetsch
15h15 - 16h15: K. Costello
16h45 - 17h45: O. Gwilliam

Friday, June 18
9h00 - 10h00: J. Unterberger
10h15 - 11h15: K. Fredenhagen
11h30 - 12h30: K. Keller
14h30 - 15h30: Mathematical Physics seminar (talk by L. Accardi, room 112)
Luigi Accardi - Renormalized powers of white noise, infinitely divisible processes, the Virasoro--Zamolodchikov hierarchy and nonlinear Weyl relations
The arguments mentioned in the title emerged in different fields of physics and of mathematics, at different times and in connection with different problems. The program to develop analytical tools that allow one to deal with the higher powers of white noise has brought to light the existence of deep and unexpected relations among these structures, as well as with some famous open problems of classical probability, conformal field theory and string theory. The history of how these connections gradually emerged will be summarized in qualitative terms and with emphasis on open problems and on new conceptual features.
Romeo Brunetti - From classical to quantum field theories: perturbative and non-perturbative aspects (slides)
New developments in perturbative quantum field theories seem to shed new light on the structural aspects of classical field theories. Vice versa, one may use these new findings to build up better quantization procedures.
Luigi Cantini - Field theory approach to off critical SLE(2) and SLE(4) (slides)
In recent years, after the breakthrough of Schramm, there has been a renewed interest in the study of interfaces in statistical models in two dimensions at criticality. In this talk I will briefly review the description of these interfaces by Schramm-Loewner evolutions and their interplay with conformal field theory; then I will move on and address the problem of interfaces in massive theories. I will discuss how to use field-theoretical methods to study the off-critical perturbation of SLE(4) (level lines of the Gaussian free field) and SLE(2) (loop-erased random walk) obtained by adding a mass term to the action of the free boson and the free symplectic fermion. I'll show how to compute the off-critical statistics of the source in the Loewner equation describing the two-dimensional interfaces, which amounts to adding a drift term given by the logarithmic derivative of the ratio of the massive to the massless partition function.
Kevin Costello - Renormalization and effective field theory
I'll describe an approach to renormalization of quantum field theories based on the Batalin-Vilkovisky formalism and low-energy effective field theories.
Michael Dütsch - Connection between the renormalization groups of Stückelberg-Petermann and Wilson (slides)
The Stückelberg-Petermann renormalization group (RG) relies on the non-uniqueness of the S-matrix in causal perturbation theory (i.e. Epstein-Glaser renormalization); it is the family of all finite renormalizations. The RG in the sense of Wilson refers to the dependence of the theory on a cutoff. A new formalism for perturbative algebraic quantum field theory allows to clarify the relation between these different notions of RG. In particular we derive Polchinski's Flow Equation in the Epstein-Glaser framework.
Klaus Fredenhagen - Epstein-Glaser renormalization and dimensional regularization (slides)
(Stückelberg-Bogoliubov-)Epstein-Glaser renormalization is a conceptually clear and mathematically rigorous solution of the ultraviolet problem of perturbative quantum field theory. Its practical applicability, however, is restricted by the fact that, in its general form, it partially relies on nonconstructive arguments. Dimensional regularization, on the other hand, is very effective in practice, but much less transparent. It will be shown that a position space version of dimensional regularization, in the spirit of Bollini and Giambiagi, can be introduced within the recursion scheme of Epstein and Glaser, and, combined with the Main Theorem of Renormalization, delivers an explicit formula for the calculation of time ordered products, in close analogy to the Forest Formula of Zimmermann within BPHZ renormalization.
Owen Gwilliam - Factorization algebras in perturbative QFT
Using the approach to QFT described by Kevin Costello, we describe a deformation quantization-type theorem for QFTs and illustrate it with low-dimensional examples.
Kai J. Keller - Hopf algebraic aspects of perturbative algebraic quantum field theory
The formulation of perturbation theory in the algebraic approach to quantum field theory, developed in a series of articles by Brunetti, Dütsch, Fredenhagen, Hollands, and Wald, brought to light the more profound structures of renormalization theory and was shown to give a common basis to many different approaches to perturbative renormalization. I will show in my talk how the Hopf algebraic approach to renormalization can be embedded into the functional framework of perturbative algebraic quantum field theory. The appearance of a Hopf algebra will be understood as a direct consequence of the renormalization freedom. Furthermore, the implementation of dimensional regularization and minimal subtraction will lead to a generalization of the Connes-Kreimer theory of renormalization.
Maxim Kontsevich - Renormalization via OPE
I'll describe an approach to renormalization of a QFT given as a bundle of local fields on the space-time, together with an operator product expansion.
Vieri Mastropietro - Developments in the theory of universality (slides)
The universality hypothesis in statistical physics says that a number of macroscopic critical properties are largely independent of the microscopic structure, at least inside a universality class of systems. In the case of planar interacting Ising models, like Vertex or Ashkin-Teller models, this hypothesis means that the critical exponents, though model dependent, verify a set of universal extended scaling relations. The proof of several such relations has recently been achieved; it is valid for generic non-solvable models and is based on the Renormalization Group methods developed in the context of constructive Quantum Field Theory. Extensions to quantum systems and several challenging open problems will also be presented.
Jérémie Unterberger - Fractional stochastic calculus by renormalization (slides)
Stochastic calculus is a fully developed theory for Brownian motion, but yet in its infancy for processes with more irregular paths. Rough path theory, initiated by T. Lyons in the 90es, shows how to solve pathwise differential equations driven by an irregular signal X out of a finite set of substitutes of iterated integrals of X (called: rough path over X) satisfying algebraic properties of geometric origin. We shall introduce a new, general method of construction of formal rough paths called Fourier normal algorithm, which comes naturally from the interplay between the Connes-Kreimer algebra of decorated rooted trees and the shuffle algebra. Rough paths with the correct regularity properties may be constructed for instance by the BPHZ renormalization algorithm for Feynman diagrams encoding the iterated integrals in the Gaussian case. Finally, time permitting, we shall show how to tackle directly this problem by using the tools of constructive quantum field theory, introducing an interaction term with a small coupling constant $\lambda$. An open conjecture is to show that one retrieves the above algebraic construction by letting $\lambda\to\infty$. | CommonCrawl |
The Annals of Probability
Ann. Probab.
Volume 38, Number 2 (2010), 570-604.
Coverage processes on spheres and condition numbers for linear programming
Peter Bürgisser, Felipe Cucker, and Martin Lotz
This paper has two agendas. Firstly, we exhibit new results for coverage processes. Let $p(n, m, \alpha)$ be the probability that $n$ spherical caps of angular radius $\alpha$ in $S^m$ do not cover the whole sphere $S^m$. We give an exact formula for $p(n, m, \alpha)$ in the case $\alpha\in[\pi/2, \pi]$ and an upper bound for $p(n, m, \alpha)$ in the case $\alpha\in[0, \pi/2]$ which tends to $p(n, m, \pi/2)$ when $\alpha\to\pi/2$. In the case $\alpha\in[0, \pi/2]$ this yields upper bounds for the expected number of spherical caps of radius $\alpha$ that are needed to cover $S^m$.
Secondly, we study the condition number ${\mathscr{C}}(A)$ of the linear programming feasibility problem $\exists x\in\mathbb{R}^{m+1}\colon Ax\le 0,\ x\ne 0$, where $A\in\mathbb{R}^{n\times(m+1)}$ is randomly chosen according to the standard normal distribution. We exactly determine the distribution of ${\mathscr{C}}(A)$ conditioned on $A$ being feasible and provide an upper bound on the distribution function in the infeasible case. Using these results, we show that $\mathbf{E}(\ln{\mathscr{C}}(A))\le2\ln(m+1)+3.31$ for all $n>m$, the sharpest bound for this expectancy as of today. Both agendas are related through a result which translates between coverage and condition.
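A crude way to build intuition for $p(n, m, \alpha)$ is Monte Carlo simulation: draw the $n$ cap centres uniformly on $S^m$ and test coverage on a large random set of probe points. This is not part of the paper's exact analysis and only approximates non-coverage (a finite probe set can miss small uncovered regions); the sketch below is included purely to illustrate the quantity being bounded.

import numpy as np

def estimate_p(n_caps, m, alpha, trials=2000, probes=4000, seed=0):
    """Monte Carlo estimate of p(n, m, alpha): the probability that n random
    caps of angular radius alpha fail to cover the sphere S^m.

    Coverage is tested on a finite set of random probe points, so uncovered
    regions smaller than the probe resolution can be missed.
    """
    rng = np.random.default_rng(seed)
    cos_alpha = np.cos(alpha)
    misses = 0
    for _ in range(trials):
        centers = rng.standard_normal((n_caps, m + 1))
        centers /= np.linalg.norm(centers, axis=1, keepdims=True)
        points = rng.standard_normal((probes, m + 1))
        points /= np.linalg.norm(points, axis=1, keepdims=True)
        # a point is covered if it lies within angle alpha of some cap centre
        covered = (points @ centers.T >= cos_alpha).any(axis=1)
        if not covered.all():
            misses += 1
    return misses / trials

# Three random hemispheres on the circle S^1; the exact value here is 3/4
# (a classical Wendel-type computation)
print(estimate_p(3, 1, np.pi / 2))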
Ann. Probab., Volume 38, Number 2 (2010), 570-604.
First available in Project Euclid: 9 March 2010
Permanent link to this document
https://projecteuclid.org/euclid.aop/1268143527
doi:10.1214/09-AOP489
Mathematical Reviews number (MathSciNet)
MR2642886
Zentralblatt MATH identifier
Primary: 60D05: Geometric probability and stochastic geometry [See also 52A22, 53C65] 52A22: Random convex sets and integral geometry [See also 53C65, 60D05] 90C05: Linear programming
Condition numbers covering processes geometric probability integral geometry linear programming
Bürgisser, Peter; Cucker, Felipe; Lotz, Martin. Coverage processes on spheres and condition numbers for linear programming. Ann. Probab. 38 (2010), no. 2, 570--604. doi:10.1214/09-AOP489. https://projecteuclid.org/euclid.aop/1268143527
", "upvoteCount": 0, "url": "https://studymaterialcenter.in/question/a-solid-sphere-of-radius-r-made-of-a-soft-material-of-bulk-modulus-k-is-surrounded-by-a-liquid-in-a-cylindrical-container-a-massless-piston-of-area-a-floats-on-the-surface-of-the-liquid-covering-entir/#acceptedAnswer" } } }
A solid sphere of radius $r$ made of a soft material of bulk modulus $\mathrm{K}$ is surrounded by a liquid in a cylindrical container. A massless piston of area a floats on the surface of the liquid, covering entire cross-section of cylindrical container. When a mass $m$ is placed on the surface of the piston to compress the liquid, the fractional decrement in the radius of the sphere $\left(\frac{d r}{r}\right)$, is :
$\frac{\mathrm{Ka}}{\mathrm{mg}}$
$\frac{\mathrm{Ka}}{3 \mathrm{mg}}$
$\frac{m g}{3 K a}$
$\frac{\mathrm{mg}}{\mathrm{Ka}}$
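A hedged worked check (my own derivation, not part of the original question page): placing the mass on the massless piston raises the pressure throughout the liquid by $mg/a$, and since $V \propto r^{3}$ the volume strain is three times the radial strain, so the definition of the bulk modulus gives $\frac{dr}{r} = \frac{mg}{3Ka}$, matching the option $\frac{\mathrm{mg}}{3\mathrm{Ka}}$.
$$ \Delta p=\frac{mg}{a},\qquad \frac{\Delta V}{V}=\frac{\Delta p}{K}=\frac{mg}{Ka},\qquad V=\frac{4}{3}\pi r^{3}\ \Rightarrow\ \frac{\Delta V}{V}=3\,\frac{\Delta r}{r}\ \Rightarrow\ \frac{dr}{r}=\frac{mg}{3Ka} $$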
JEE Main Previous Year Single Correct Question from the Physics chapter on Mechanical Properties of Solids.
JEE Main Previous Year 2018
If the potential energy between two molecules is given by $U=-\frac{A}{r^{6}}+\frac{B}{r^{12}}$, then, at equilibrium, the separation between the molecules and the potential energy are:
A uniform cylindrical rod of length $\mathrm{L}$ and radius $r$, is made from a material whose Young's modulus of Elasticity equals $Y$. When this rod is heated by temperature $T$ and simultaneously subjected to a net longitudinal compressional force $\mathrm{F}$, its length remains unchanged. The coefficient of volume expansion, of the material of the rod, is (nearly) equal to:
In an environment, brass and steel wires of length $1 \mathrm{~m}$ each with areas of cross section $1 \mathrm{~mm}^{2}$ are used. The wires are connected in series and one end of the combined wire is connected to a rigid support and other end is subjected to elongation. The stress required to produce a net elongation of $0.2 \mathrm{~mm}$ is,
[Given, the Young's modulus for steel and brass are, respectively, $120 \times 10^{9} \mathrm{~N} / \mathrm{m}^{2}$ and $60 \times 10^{9} \mathrm{~N} / \mathrm{m}^{2}$ ]
The elastic limit of brass is $379 \mathrm{MPa}$. What should be the minimum diameter of a brass rod if it is to support a 400 $\mathrm{N}$ load without exceeding its elastic limit?
A steel wire having a radius of $2.0 \mathrm{~mm}$, carrying a load of $4 \mathrm{~kg}$, is hanging from a ceiling. Given that $\mathrm{g}=3.1\pi \mathrm{~ms}^{-2}$, what will be the tensile stress that would be developed in the wire?
Young's moduli of two wires $A$ and $B$ are in the ratio $7: 4$. Wire $\mathrm{A}$ is $2 \mathrm{~m}$ long and has radius $\mathrm{R}$. Wire $\mathrm{B}$ is $1.5 \mathrm{~m}$ long and has radius $2 \mathrm{~mm}$. If the two wires stretch by the same length for a given load, then the value of $R$ is close to :
As shown in the figure, forces of $10^{5} \mathrm{~N}$ each are applied in opposite directions, on the upper and lower faces of a cube of side $10 \mathrm{~cm}$, shifting the upper face parallel to itself by $0.5 \mathrm{~cm}$. If the side of another cube of the same material is, $20 \mathrm{~cm}$, then under similar conditions as above, the displacement will be:
A thin $1 \mathrm{~m}$ long rod has a radius of $5 \mathrm{~mm}$. A force of $50 \pi \mathrm{kN}$ is applied at one end to determine its Young's modulus. Assume that the force is exactly known. If the least count in the measurement of all lengths is $0.01 \mathrm{~mm}$, which of the following statements is false ?
A uniformly tapering conical wire is made from a material of Young's modulus Y and has a normal, unextended length L. The radii at the upper and lower ends of this conical wire have values $R$ and $3R$, respectively. The upper end of the wire is fixed to a rigid support and a mass $M$ is suspended from its lower end. The equilibrium extended length of this wire would equal:
The pressure that has to be applied to the ends of a steel wire of length $10 \mathrm{~cm}$ to keep its length constant when its temperature is raised by $100^{\circ} \mathrm{C}$ is:
(For steel Young's modulus is $2 \times 10^{11} \mathrm{Nm}^{-2}$ and coefficient of thermal expansion is $1.1 \times 10^{-5} \mathrm{~K}^{-1}$ )
Comparison of the Biological Characteristics of Mesenchymal Stem Cells Derived from the Human Placenta and Umbilical Cord
Mingjun Wu, Ruifan Zhang, Qing Zou, Yaoyao Chen, Min Zhou, Xingjie Li, Ran Ran & Qiang Chen
Multipotent stem cells
Mesenchymal stem/stromal cells (MSCs) derived from placental tissue show great therapeutic potential and have been used in medical treatment, but the similarities and differences between MSCs derived from various parts of the placenta remain unclear. In this study, we compared MSCs derived from different perinatal tissues, including the umbilical cord (UC), amniotic membrane (AM), chorionic plate (CP) and decidua parietalis (DP). Using human leukocyte antigen (HLA) typing and karyotype analysis, we found that the first three cell types were derived from the foetus, while the MSCs from the decidua parietalis were derived from the maternal portion of the placental tissue. Our results indicate that both foetal and maternal MSCs share a similar phenotype and multi-lineage differentiation potential, but foetal MSCs show a significantly higher expansion capacity than do maternal MSCs. Furthermore, MSCs from all sources showed significant differences in the levels of several paracrine factors.
Human placenta is well known to not only play a fundamental and essential role in foetal development, nutrition, and tolerance, but also to function as a bank of MSCs. Placental tissue can be easily obtained as medical waste. Placenta-derived MSCs can be procured from this medical waste, free of invasive procedures such as adipose tissue collection, and there are no ethical controversies surrounding their use, unlike embryonic stem cells. Considering the complexity of the placenta, this tissue can be conceptually divided into the foetal side, consisting of the amnion, chorion and umbilical cord, and the maternal side, consisting of the decidua. Numerous reports have been published on the MSCs that originate from different parts of the placenta1,2,3,4,5,6,7,8,9,10,11. Many of the perinatal sources, including the amniotic membrane (AM), chorionic plate (CP), decidua parietalis (DP) and umbilical cord (UC), have advantages over adult sources such as bone marrow (BM) in terms of their ease of availability, lack of donor site morbidity, naivety of cells, abundance of stem cells in tissues, and high capacity for proliferation7,12,13.
The placenta has been widely used to study MSCs, and several studies have already compared the features (phenotype and function) of MSCs isolated from different placental tissues14,15,16,17,18,19,20,21,22,23,24. However, the origin of MSCs derived from all sources (AM, CP, DP and UC) of the placenta has not been determined, and comprehensive comparisons between these MSCs are lacking. Moreover, optimal sources for specific clinical applications remain to be identified25. The hypothesis that all MSCs, regardless of their origins, are identical in their quality and function ignores their differences in biology and potential therapeutic use, which cannot be defined and characterized by current in vitro methods26. MSCs are routinely defined in vitro by cell surface antigen expression and differentiation potential. These features are also known as the minimal MSC criteria proposed by the International Society for Cellular Therapies (ISCT)27. However, these minimal criteria are not specific for MSCs and cannot distinguish them from connective tissue cells that share the same properties28. Cell-cell adhesion mediated by vascular cell adhesion protein 1 (VCAM-1) is known to be critical for T cell activation and leukocyte recruitment to the site of inflammation. Therefore, VCAM-1 plays an important role in evoking effective immune responses. VCAM-1 is also reported to be a biomarker for a subpopulation of chorionic villi-derived MSCs with unique immunosuppressive activity12. This finding suggests that a better understanding of the functional properties, and hence of the potential impact on future clinical applications, may be achieved by identifying the molecular pathways and cytokine profiles of MSCs19,29.
In our study, we compared MSCs derived from the UC, AM and CP, of foetal origin, and the DP, of maternal origin, to understand their similarities and differences. The morphology and immunophenotype (assessed by flow cytometry) were analysed. HLA typing and karyotype analysis were carried out to determine the origin of the MSCs. Growth kinetics were evaluated using the population doubling time (PDT) and the CCK-8 assay. Cytokine secretion was quantitatively analysed using enzyme-linked immunosorbent assay (ELISA) kits. Our data suggest that VCAM-1 could be used as a biomarker to identify CP-derived MSCs.
Identification of placenta-derived MSCs
According to the ISCT criteria, the MSCs derived from AM, CP, DP and UC (Supplementary Fig. S1a,b) exhibited typical fibroblastoid, spindle-shaped morphology and displayed a high capacity to adhere to plastic when maintained in standard culture conditions using tissue culture flasks (Fig. 1a, top panel). There were significant differences in the cell isolation rates from different sources, ranging from 0.34 to 1.52 million single cells per gram tissue (Fig. 1b). According to our data, MSCs cultured from all sources could be established with a comparable positive rate.
Characterization and isolation yield of different types of MSCs derived from perinatal tissues. (a) All MSCs exhibited a similar morphology and became positive for oil red O (adipocytic differentiation), alcian blue (chondrocytic differentiation), and alizarin red (osteocytic differentiation). (b) Original raw material, MSC isolation yield. Data are presented as the mean ± SEM (*p < 0.05, **p < 0.005). (c) Flow cytometric analysis of CD106 expression in different MSCs. (d) Statistical result of CD106 expression in different MSCs. Data are presented as the mean ± SEM (***P < 0.0001).
After 21 days of induction with the respective induction media, AM-MSCs underwent low-level trilineage differentiation. In contrast, the three other types of MSCs showed relatively higher differentiation potential (Fig. 1a). CP-, DP-, and UC-MSCs from all three donors differentiated into all three induced lineages (adipocytes, osteoblasts and chondroblasts). AM-MSCs from donors 1 and 2 showed only adipogenic and osteogenic differentiation potential, and only donor 3 showed trilineage differentiation potential (Supplementary Fig. S2).
To determine the most significant differences among these MSCs, we compared the phenotypes of MSCs isolated from the human placenta using identical methods. Each type of MSC was tested in 10 donors. A series of cell markers was examined at passage 3 of in vitro cultivation, including the classical MSC phenotypes as defined by the ISCT criteria (CD14, CD34, CD45, CD73, CD90, CD105 and HLA-DR), embryonic stem cell markers (SOX2 and SSEA4) and VCAM-1, also known as CD106. AM-, CP-, DP- and UC-MSCs showed similar expression levels of MSC-specific surface markers (CD73, CD90 and CD105) and an absence of leucocyte, haematopoietic cell, or monocyte/macrophage markers (CD45, HLA-DR, CD34 and CD14) (Supplementary Fig. S3). All of these MSCs highly expressed the SOX2 and SSEA4 embryonic stem cell markers, as well as mesenchymal markers, including CD73, CD90 and CD105 (Supplementary Fig. S4). The most significant difference in their phenotype was the expression of CD106, which was expressed highly in CP-MSCs (81.10 ± 12.28%), moderately in UC-MSCs (12.07 ± 11.43%), and slightly in AM-MSCs (4.27 ± 4.39%). DP-MSCs did not express CD106 (Fig. 1c,d).
Origin determination
HLA analysis of the culture-expanded cells from the same placental sample (n = 3) showed that AM-, CP- and UC-derived MSCs were of foetal origin, and DP-derived MSCs were of maternal origin (Table 1). However, some of the culture-expanded DP-derived cell populations expressed both foetal- and maternal-specific alleles (data not shown).
Table 1 HLA typing of culture-expanded MSCs from the same placental sample.
To confirm that these MSCs in culture were derived from the foetal or maternal placenta, the cytogenetic karyotypes of the cells from the same placenta (n = 4) of male babies were analysed. XX sex chromosomes were detected in DP-MSCs, and XY chromosomes were detected in AM-, CP- and UC-MSCs (Fig. 2).
Karyotype analysis of different MSCs derived from different sources of the placenta of male babies (n = 3). G-band staining revealed that AM-, CP- and UC-MSCs were foetal cells exhibiting a normal 46, XY karyotype, and DP-MSCs were maternal cells exhibiting a normal 46, XX karyotype.
Growth characteristics
The growth curves of all MSCs show that the DP-MSCs grew the slowest (Fig. 3a). During cell proliferation, the MSCs were cultured up to passage 11. Based on our calculations of the cell population doubling time, the cell PDT of the UC-MSCs was 28.34 ± 2.89 h, and that of the AM-, CP- and DP-MSCs was 35.19 ± 9.28 h, 38.71 ± 9.27 h and 48.01 ± 8.26 h, respectively (Fig. 3b,c). Thus, the order of the growth rate of the cells was as follows (from the fastest to the slowest): UC-, AM-, CP- and DP-MSCs.
Proliferative potential of different sources of MSCs. The number of MSCs was counted each time following subculture from passages 3 to 11 (n = 3 donors). (a) Growth curves of different types of MSCs. (b) The population doubling time was also calculated based on cell counts. (c) Comparison of average population doubling time of different sources of MSCs following subculture from passages 3 to 11. Data are presented as the mean ± SEM (*P < 0.05. **P < 0.005. ***P < 0.0001).
Secretion patterns of selected growth factors and cytokines
Secretion of paracrine factors, including human angiopoietin-1 (Ang-1), hepatocyte growth factor (HGF), insulin-like growth factor I (IGF-I), prostaglandin E2 (PGE2), transforming growth factor beta 1 (TGF-β1), VCAM-1 and vascular endothelial growth factor (VEGF), in all MSCs was assessed using ELISA kits according to the manufacturer's instructions. MSCs from all sources showed significant differences in the levels of selected factors. AM-MSCs showed the highest secretion of PGE2 and TGF-β1. CP-MSCs showed the highest secretion of HGF and VCAM-1. DP-MSCs showed the highest secretion of Ang-1 and VEGF and the lowest secretion of TGF-β1, while UC-MSCs showed the highest secretion of IGF-I (Fig. 4).
Comparison of the secretion patterns of selected growth factors and cytokines. Differences in the four sources were determined to be significant and were labelled with a star if the P-value determined using ANOVA followed by Tukey's test was <0.05. Data are expressed as the mean ± SEM (*P < 0.05. **P < 0.005. ***P < 0.0001). Ang-1, angiopoietin-1; HGF, hepatocyte growth factor; IGF-I, insulin-like growth factor I; PGE2, prostaglandin E2; TGF-β1, transforming growth factor beta 1; VCAM-1, vascular cell adhesion molecule-1; VEGF, vascular endothelial growth factor.
In this study, we performed a side-by-side comparison of 4 populations of MSCs derived from perinatal tissues, including AM, CP, DP and UC. In summary, this study resulted in the following major conclusions:
First, we analysed the origin of different perinatal tissue-derived MSCs. HLA typing and karyotype analysis confirmed that AM-, CP- and UC-derived MSCs were of foetal origin, and DP-derived MSCs were of maternal origin. Moreover, we observed significant differences in the proliferative potential among the 4 populations of MSCs, and the proliferation rate from the fastest to the slowest was as follows: UC-, AM-, CP- and DP-MSCs. The growth curve showed that the proliferative capacity of the MSCs of foetal origin was significantly greater than that of the MSCs of maternal origin.
Second, we found that MSCs derived from different perinatal tissues are not identical in terms of their biological properties. Although MSCs from all sources expressed similar surface markers according to the ISCT criteria, as well as some pluripotency-related markers such as SOX2 and SSEA4, CP-MSCs showed the highest CD106 expression of the four MSC types, which correlates positively with their immunosuppressive effect. CD106 is known to play an important role in embryonic development in the formation of the umbilical cord and placenta30. Moreover, surface molecules such as CD106 and CD54 are considered to be important for the immunomodulation of MSCs31.
Third, MSCs derived from different tissues have been demonstrated in numerous studies to differentiate into cells of the mesodermal lineage, such as adipocytes, osteoblasts and chondroblasts32,33,34,35. Our results demonstrated that there are quantitative differences between the various populations of MSCs derived from different perinatal tissues with respect to their differentiation potential. Our data indicated that AM-MSCs underwent trilineage differentiation at a low level. Furthermore, the comparison of foetal (AM origin) vs. adult (DP origin) MSCs in our work showed that the proliferative capacity of the adult (maternal) cells was significantly lower than that of the foetal cells, which is inconsistent with their differentiation potential (Fig. 3, Supplementary Fig. S2).
Fourth, the secretion patterns of selected growth factors and cytokines revealed that MSCs from all sources showed distinct differences in the levels of the selected factors. These factors were selected because multiple studies have shown that they are secreted by MSCs during inhibition of apoptosis, immunomodulation, anti-fibrotic processes, angiogenesis, chemotaxis and haematopoiesis induction/support in vitro or in vivo36,37,38,39,40,41. Recent studies have demonstrated that the high expression of HGF and VCAM-1 in MSCs was associated with a favourable angiogenic potency and displayed therapeutic efficacy in hindlimb ischaemia42,43.
In conclusion, our study compared MSCs derived from different perinatal tissues to better understand the similarities and differences among these cell types. The origin and purity of each cell type were confirmed by HLA typing and karyotype analysis, which showed that the first three cell types were of foetal origin and the last was of maternal origin within the placental tissue. Although both foetal and maternal MSCs have similar phenotypes and multi-lineage differentiation potential, foetal MSCs showed a significantly higher expansion capacity than did maternal MSCs; furthermore, MSCs from all sources showed significant differences in the levels of selected paracrine factors. These findings may offer clues for the clinical application of different types of MSCs. For instance, AM-MSCs may be used in the treatment of premature ovarian ageing due to their higher secretion of PGE2 and TGF-β1 (ref. 44); CP-MSCs display potential pro-angiogenic activity due to their higher secretion of HGF and VCAM-1 (ref. 43) and could be used in angiogenic therapy; and DP-MSCs show advantages in the treatment of critical limb ischaemia because of their higher secretion of VEGF and Ang-1 (ref. 45). Compared to AM-, CP- and DP-MSCs, UC-MSCs secreted higher levels of a wide range of the selected paracrine factors, so UC-MSCs may serve as a cell-therapy source for other diseases. Furthermore, it will be necessary to assess the ability of MSCs derived from different sources to differentiate into various cell types of the three germ layers, i.e. ectoderm (epithelial and neuronal cells), mesoderm (endothelial cells and cardiomyocytes) and endoderm (hepatocytes and insulin-producing β-cells). More functional studies are required to confirm these findings and to further understand the biological differences among MSCs from various sources, so that the most suitable MSCs for the treatment of specific diseases can be identified and obtained.
Isolation and culture of MSCs from the human placenta and umbilical cord
The experiments involving human tissue were approved by the Research Center for Stem Cell and Regenerative Medicine, Sichuan Neo-life Stem Cell Biotech INC./ Center for Stem Cell Research & Application, Institute of Blood Transfusion, Chinese Academy of Medical Sciences and Peking Union Medical College (CAMS & PUMC). All the experiments were carried out in accordance with the approved guidelines. Human placentae (n = 60) and umbilical cords (n = 13) were collected from healthy, full-term, uncomplicated pregnancies. Written informed consent was obtained from the mothers and the donors.
First, UCs were dissected longitudinally, and the arteries and veins were removed. The remaining pieces were chopped mechanically. Second, the decidua parietalis attached to the maternal side of the human placenta was manually separated from the chorion. Third, the placental amnion attached to the foetal side of the human placenta was separated from the chorionic plate. Finally, the chorionic plate without the amnion and decidua basalis was separated from the human placenta. All of the above three tissues were washed thoroughly with phosphate-buffered saline (PBS; pH 7.4) to remove excess blood. The tissues were rinsed in PBS and were extensively minced. All of the explants, including the UCs, were transferred into 100 mm plates (Corning, USA). Complete culture medium (Dulbecco's modified Eagle's medium/nutrient mixture F-12, DMEM-F12 containing 10% foetal bovine serum, 100 mg/mL streptomycin and 100 U/mL penicillin) was added to the plates, and the explants were cultured at 37 °C in a 5% CO2 incubator and left undisturbed to allow the cells to migrate from the explants. After 10–15 days, MSC-like cells were found around the fragments. MSCs were identified on the basis of their fibroblastic morphology and phenotypic characterization, which was performed after passage 3, and were used in subsequent experiments. The cell cultures at different time intervals were observed under an inverted phase contrast microscope (Leica DMI3000 B, Leica Microsystems Inc., Germany) and the images were captured using Leica Application Suite Version 3.8.0 software.
Determination of the maternal and foetal origin of MSCs
To analyse the origin of culture-expanded MSCs derived from the amnion, chorionic plate, decidua parietalis and umbilical cord, molecular HLA typing was performed on DNA obtained from expanded MSCs using PCR-SSP with an AllSet+ Gold SSP HLA-A\B\DRB1 kit (ONE LAMBDA, Canoga Park, CA).
Flow cytometry analysis
For phenotypic identification of the MSCs derived from all sources, a total of 1 × 106 cells were divided into aliquots in 1.5 mL microcentrifuge tubes, and the samples were centrifuged at 500 × g for 5 minutes. Pelleted cells were washed twice in phosphate-buffered saline (PBS) supplemented with 0.2% foetal bovine serum (FBS) (Gibco, Life Technologies, USA). The cells were then suspended in 50 μL of PBS with 1% bovine serum albumin (BSA), and the following cell surface epitopes were detected: anti-human CD73-PE, CD90-FITC, CD105-PE, VCAM-1-PE, CD166-PE, CD14-PE, CD34-PE, CD45-Pc7, HLA-DR-FITC (BD Biosciences, USA), SOX2-PE and SSEA4-PE (eBioscience, USA). Appropriate isotype controls were used for each antibody to assess for nonspecific antibody binding. The cells were then analysed using a flow cytometry instrument (FC500; Beckman Coulter, USA) and data processing software (FlowJo 10.0.7; TreeStar, USA).
Growth kinetics analysis
The proliferation of MSCs from P3 to P11 was assessed (n = 3). MSCs from all sources were inoculated on a six-well culture plate at a density of 7–10 × 105 cells/well, and the cells were counted until they reached 100% confluency. The PDT was calculated using the following formula:
$$\mathrm{PDT}=\frac{\mathrm{CT}\times\ln 2}{\ln\left(N_{\mathrm{f}}/N_{\mathrm{i}}\right)},$$
where $\mathrm{CT}$ is the cell culture time, $N_{\mathrm{i}}$ is the initial number of cells, and $N_{\mathrm{f}}$ is the final number of cells46.
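As a minimal illustration of this calculation, the sketch below computes the PDT from a culture interval and cell counts; the numbers are hypothetical and are not data from this study.

```python
import math

def population_doubling_time(culture_time_h, n_initial, n_final):
    """PDT = CT * ln(2) / ln(Nf / Ni), with CT in hours."""
    return culture_time_h * math.log(2) / math.log(n_final / n_initial)

# Hypothetical example: 7.0e5 cells seeded, 2.8e6 cells counted after 72 h of culture.
print(round(population_doubling_time(72, 7.0e5, 2.8e6), 1))  # -> 36.0 (hours)
```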
Proliferation assay
The proliferation of MSCs from all sources was determined using the Cell Counting Kit-8 (CCK-8, Dojindo Molecular Technology, Japan). MSCs were plated at a density of 2,000 cells per well in 96-well plates in standard culture medium. After 4 hours of incubation, 10 µL of CCK-8 was added to each well, and the plates were incubated at 37 °C. Optical density (OD) was measured every 24 hours with a spectrophotometer (Multiskan GO, Thermo Scientific) at 450 nm. Cell viability was calculated relative to the control.
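One common way to express viability relative to the control from the OD readings is shown below as a minimal sketch; the exact formula used with the kit is not stated above, and the OD values are made up.

```python
# Hypothetical OD readings at 450 nm; not data from this study.
od_blank = 0.08      # medium + CCK-8, no cells
od_control = 1.25    # untreated control wells
od_sample = 0.96     # wells of interest

relative_viability = (od_sample - od_blank) / (od_control - od_blank)
print(f"relative viability: {relative_viability:.1%}")  # -> 75.2%
```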
In vitro differentiation assay for MSCs
AM-, CP-, DP- and UC-derived MSCs were differentiated into adipocytes, osteoblasts and chondrocytes after three passages as follows. In brief, for adipogenic, osteogenic or chondrogenic differentiation, MSCs from all sources were seeded into 12-well plates at 200,000 cells per well and were maintained in standard culture medium until confluency. Cells were exposed to adipogenic, osteogenic or chondrogenic induction medium (All from Gibco, Life Technologies, Grand Island, USA) for 21 days. Cells were fixed in 4% paraformaldehyde. To assess adipogenic differentiation, lipid droplets of differentiated cells were stained using oil red O. To assess osteogenic differentiation, cells were stained with alizarin red S. To assess chondrogenic differentiation, cells were stained with alcian blue. Control cells were maintained in standard culture medium over the same time period (All stains were procured from Sigma Aldrich, St Louis, USA). The stained plates were observed under an inverted phase contrast microscope (Leica DMI3000 B, Leica Microsystems Inc., Germany) and the images were captured using Leica Application Suite Version 3.8.0 software.
Karyotype analysis
To analyse the karyotype of the AM-, CP-, DP- and UC-derived MSCs from the same placenta (male new-born), cell division was blocked at metaphase with 0.1 μg/mL colcemid (Calbiochem, Germany) for 2 hours at 37 °C. The cells were washed and trypsinized, resuspended in 0.075 M KCl, incubated for 20 minutes at 37 °C, and fixed with methanol and acetic acid (3:1). Standard G-band staining was used to visualize the chromosomes. At least 20 metaphase nuclei were detected in each sample. The cells in metaphase were analysed and reported by a certified cytogenetic laboratory according to the International System for Human Cytogenetic Nomenclature.
Quantification of secreted factors
Culture supernatants were generated as follows. Cells were seeded in standard culture medium at a density of 10,000 cells/cm2. After 72 hours, cell-free supernatants were collected and were stored at −80 °C. The levels of hepatocyte growth factor (HGF), angiopoietin-1 (Ang-1), vascular endothelial growth factor (VEGF), vascular cell adhesion molecule-1 (VCAM-1), insulin-like growth factor I (IGF-I), prostaglandin E2 (PGE2) and transforming growth factor beta 1 (TGF-β1) were measured using the respective ELISA kit (Bio-Rad) according to the manufacturer's protocol.
Statistical analyses were performed using GraphPad Prism version 5.0 (California, USA). Comparisons of parameters for more than three groups were made by one-way analysis of variance (ANOVA) followed by Tukey's test. Parametric data are expressed as the means ± standard deviation (SD). A value of P < 0.05 was considered statistically significant.
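The analysis above was run in GraphPad Prism; purely as a rough equivalent sketch in Python (with made-up placeholder values, not the study's data), a one-way ANOVA followed by Tukey's post hoc test could be performed as follows.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical PDT measurements (hours) for the four MSC sources.
pdt = {
    "UC": [27.1, 29.4, 28.5],
    "AM": [33.0, 36.8, 35.7],
    "CP": [37.9, 40.2, 38.0],
    "DP": [47.3, 49.8, 46.9],
}

groups = list(pdt.values())
f_stat, p_value = f_oneway(*groups)            # one-way ANOVA across the four groups
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

values = np.concatenate(groups)
labels = np.repeat(list(pdt.keys()), [len(v) for v in groups])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # Tukey's pairwise comparisons
```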
Abumaree, M. H. et al. Phenotypic and functional characterization of mesenchymal stem cells from chorionic villi of human term placenta. Stem cell reviews 9, 16–31, https://doi.org/10.1007/s12015-012-9385-4 (2013).
Castrechini, N. M. et al. Decidua parietalis-derived mesenchymal stromal cells reside in a vascular niche within the choriodecidua. Reproductive sciences 19, 1302–1314, https://doi.org/10.1177/1933719112450334 (2012).
In 't Anker, P. S. et al. Isolation of mesenchymal stem cells of fetal or maternal origin from human placenta. Stem cells 22, 1338–1345, https://doi.org/10.1634/stemcells.2004-0058 (2004).
Yen, B. L. et al. Isolation of multipotent cells from human term placenta. Stem cells 23, 3–9, https://doi.org/10.1634/stemcells.2004-0098 (2005).
Kusuma, G. D. et al. Mesenchymal stem cells reside in a vascular niche in the decidua basalis and are absent in remodelled spiral arterioles. Placenta 36, 312–321, https://doi.org/10.1016/j.placenta.2014.12.014 (2015).
Miao, Z. et al. Isolation of mesenchymal stem cells from human placenta: comparison with human bone marrow mesenchymal stem cells. Cell biology international 30, 681–687, https://doi.org/10.1016/j.cellbi.2006.03.009 (2006).
Ilancheran, S., Moodley, Y. & Manuelpillai, U. Human fetal membranes: a source of stem cells for tissue regeneration and repair? Placenta 30, 2–10, https://doi.org/10.1016/j.placenta.2008.09.009 (2009).
Manuelpillai, U., Moodley, Y., Borlongan, C. V. & Parolini, O. Amniotic membrane and amniotic cells: potential therapeutic tools to combat tissue inflammation and fibrosis? Placenta 32(Suppl 4), S320–325, https://doi.org/10.1016/j.placenta.2011.04.010 (2011).
Nekanti, U. et al. Optimization and scale-up of Wharton's jelly-derived mesenchymal stem cells for clinical applications. Stem cell research 5, 244–254, https://doi.org/10.1016/j.scr.2010.08.005 (2010).
Pappa, K. I. & Anagnou, N. P. Novel sources of fetal stem cells: where do they fit on the developmental continuum? Regenerative medicine 4, 423–433, https://doi.org/10.2217/rme.09.12 (2009).
Abomaray, F. M. et al. Phenotypic and Functional Characterization of Mesenchymal Stem/Multipotent Stromal Cells from Decidua Basalis of Human Term Placenta. Stem cells international 2016, 5184601, https://doi.org/10.1155/2016/5184601 (2016).
Yang, Z. X. et al. CD106 identifies a subpopulation of mesenchymal stem cells with unique immunomodulatory properties. PloS one 8, e59354, https://doi.org/10.1371/journal.pone.0059354 (2013).
Abumaree, M. H. et al. Phenotypic and Functional Characterization of Mesenchymal Stem/Multipotent Stromal Cells From Decidua Parietalis of Human Term Placenta. Reproductive sciences 23, 1193–1207, https://doi.org/10.1177/1933719116632924 (2016).
Wang, L. et al. Characterization of placenta-derived mesenchymal stem cells cultured in autologous human cord blood serum. Molecular medicine reports 6, 760–766, https://doi.org/10.3892/mmr.2012.1000 (2012).
Yamahara, K. et al. Comparison of angiogenic, cytoprotective, and immunosuppressive properties of human amnion- and chorion-derived mesenchymal stem cells. PloS one 9, e88319, https://doi.org/10.1371/journal.pone.0088319 (2014).
Chen, G. et al. Comparison of biological characteristics of mesenchymal stem cells derived from maternal-origin placenta and Wharton's jelly. Stem cell research & therapy 6, 228, https://doi.org/10.1186/s13287-015-0219-6 (2015).
Hwang, J. H. et al. Comparison of cytokine expression in mesenchymal stem cells from human placenta, cord blood, and bone marrow. Journal of Korean medical science 24, 547–554, https://doi.org/10.3346/jkms.2009.24.4.547 (2009).
Asgari, H. R. et al. Comparison of Human Amniotic, Chorionic, and Umbilical Cord Multipotent Mesenchymal Stem Cells Regarding Their Capacity for Differentiation Toward Female Germ Cells. Cellular reprogramming 19, 44–53, https://doi.org/10.1089/cell.2016.0035 (2017).
Heo, J. S., Choi, Y., Kim, H. S. & Kim, H. O. Comparison of molecular profiles of human mesenchymal stem cells derived from bone marrow, umbilical cord blood, placenta and adipose tissue. International journal of molecular medicine 37, 115–125, https://doi.org/10.3892/ijmm.2015.2413 (2016).
Dabrowski, F. A. et al. Comparison of the paracrine activity of mesenchymal stem cells derived from human umbilical cord, amniotic membrane and adipose tissue. The journal of obstetrics and gynaecology research, https://doi.org/10.1111/jog.13432 (2017).
Deihim, T., Yazdanpanah, G. & Niknejad, H. Different Light Transmittance of Placental and Reflected Regions of Human Amniotic Membrane That Could Be Crucial for Corneal Tissue Engineering. Cornea 35, 997–1003, https://doi.org/10.1097/ICO.0000000000000867 (2016).
Qin, S. Q. et al. Establishment and characterization of fetal and maternal mesenchymal stem/stromal cell lines from the human term placenta. Placenta 39, 134–146, https://doi.org/10.1016/j.placenta.2016.01.018 (2016).
Talwadekar, M. D., Kale, V. P. & Limaye, L. S. Placenta-derived mesenchymal stem cells possess better immunoregulatory properties compared to their cord-derived counterparts-a paired sample study. Scientific reports 5, 15784, https://doi.org/10.1038/srep15784 (2015).
Zhu, Y. et al. Placental mesenchymal stem cells of fetal and maternal origins demonstrate different therapeutic potentials. Stem cell research & therapy 5, 48, https://doi.org/10.1186/scrt436 (2014).
Prockop, D. J. & Oh, J. Y. Medical therapies with adult stem/progenitor cells (MSCs): a backward journey from dramatic results in vivo to the cellular and molecular explanations. Journal of cellular biochemistry 113, 1460–1469, https://doi.org/10.1002/jcb.24046 (2012).
Prockop, D. J. Repair of tissues by adult stem/progenitor cells (MSCs): controversies, myths, and changing paradigms. Molecular therapy: the journal of the American Society of Gene Therapy 17, 939–946, https://doi.org/10.1038/mt.2009.62 (2009).
Dominici, M. et al. Minimal criteria for defining multipotent mesenchymal stromal cells. The International Society for Cellular Therapy position statement. Cytotherapy 8, 315–317, https://doi.org/10.1080/14653240600855905 (2006).
Bianco, P. et al. The meaning, the sense and the significance: translating the science of mesenchymal stem cells into medicine. Nature medicine 19, 35–42, https://doi.org/10.1038/nm.3028 (2013).
Wegmeyer, H. et al. Mesenchymal stromal cell characteristics vary depending on their origin. Stem cells and development 22, 2606–2618, https://doi.org/10.1089/scd.2013.0016 (2013).
Gurtner, G. C. et al. Targeted disruption of the murine VCAM1 gene: essential role of VCAM-1 in chorioallantoic fusion and placentation. Genes & development 9, 1–14 (1995).
Ren, G. et al. Inflammatory cytokine-induced intercellular adhesion molecule-1 and vascular cell adhesion molecule-1 in mesenchymal stem cells are critical for immunosuppression. Journal of immunology 184, 2321–2328, https://doi.org/10.4049/jimmunol.0902023 (2010).
De Ugarte, D. A. et al. Comparison of multi-lineage cells from human adipose tissue and bone marrow. Cells, tissues, organs 174, 101–109 (2003).
Lorenz, K. et al. Multilineage differentiation potential of human dermal skin-derived fibroblasts. Experimental dermatology 17, 925–932, https://doi.org/10.1111/j.1600-0625.2008.00724.x (2008).
Kern, S., Eichler, H., Stoeve, J., Kluter, H. & Bieback, K. Comparative analysis of mesenchymal stem cells from bone marrow, umbilical cord blood, or adipose tissue. Stem cells 24, 1294–1301, https://doi.org/10.1634/stemcells.2005-0342 (2006).
Zhang, X. et al. Isolation and characterization of mesenchymal stem cells from human umbilical cord blood: reevaluation of critical factors for successful isolation and high ability to proliferate and differentiate to chondrocytes as compared to mesenchymal stem cells from bone marrow and adipose tissue. Journal of cellular biochemistry 112, 1206–1218, https://doi.org/10.1002/jcb.23042 (2011).
Chen, K. et al. Human umbilical cord mesenchymal stem cells hUC-MSCs exert immunosuppressive activities through a PGE2-dependent mechanism. Clinical immunology 135, 448–458, https://doi.org/10.1016/j.clim.2010.01.015 (2010).
Nemeth, K. et al. Bone marrow stromal cells use TGF-beta to suppress allergic responses in a mouse model of ragweed-induced asthma. Proceedings of the National Academy of Sciences of the United States of America 107, 5652–5657, https://doi.org/10.1073/pnas.0910720107 (2010).
Di Nicola, M. et al. Human bone marrow stromal cells suppress T-lymphocyte proliferation induced by cellular or nonspecific mitogenic stimuli. Blood 99, 3838–3843 (2002).
Shizukuda, Y., Tang, S., Yokota, R. & Ware, J. A. Vascular endothelial growth factor-induced endothelial cell migration and proliferation depend on a nitric oxide-mediated decrease in protein kinase Cdelta activity. Circulation research 85, 247–256 (1999).
Livingstone, C. Insulin-like growth factor-I (IGF-I) and clinical nutrition. Clinical science 125, 265–280, https://doi.org/10.1042/CS20120663 (2013).
Klopper, J. et al. High efficient adenoviral-mediated VEGF and Ang-1 gene delivery into osteogenically differentiated human mesenchymal stem cells. Microvascular research 75, 83–90, https://doi.org/10.1016/j.mvr.2007.04.010 (2008).
Gonzalez, P. L. et al. Chorion Mesenchymal Stem Cells Show SuperiorDifferentiation, Immunosuppressive, and Angiogenic Potentials in Comparison With Haploidentical Maternal Placental Cells. Stem cells translational medicine 4, 1109–1121, https://doi.org/10.5966/sctm.2015-0022 (2015).
Du, W. et al. VCAM-1+placenta chorionic villi-derived mesenchymal stem cells display potent pro-angiogenic activity. Stem cell research & therapy 7, 49, https://doi.org/10.1186/s13287-016-0297-0 (2016).
Ding, C. et al. Different therapeutic effects of cells derived from human amniotic membrane on premature ovarian aging depend on distinct cellular biological characteristics. Stem cell research & therapy 8, 173, https://doi.org/10.1186/s13287-017-0613-3 (2017).
Beegle, J. R. et al. Preclinical evaluation of mesenchymal stem cells overexpressing VEGF to treat critical limb ischemia. Molecular therapy. Methods & clinical development 3, 16053, https://doi.org/10.1038/mtm.2016.53 (2016).
Redaelli, S. et al. From cytogenomic to epigenomic profiles: monitoring the biologic behavior of in vitro cultured human bone marrow mesenchymal stem cells. Stem cell research & therapy 3, 47, https://doi.org/10.1186/scrt138 (2012).
We thank He Lin (Sichuan Academy of Medical & Sichuan Provincial People's Hospital, Chengdu, China) for his help in the karyotype analysis. This work was supported by a grant from Sichuan Neo-life Stem Cell Biotech INC., Chengdu, Sichuan, China.
Mingjun Wu and Ruifan Zhang contributed equally to this work.
Research Center for Stem Cell and Regenerative Medicine, Sichuan Neo-life Stem Cell Biotech INC, Chengdu, Sichuan, China
Mingjun Wu, Qing Zou, Yaoyao Chen, Min Zhou, Xingjie Li, Ran Ran & Qiang Chen
Department of Ophthalmology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, Sichuan, China
Ruifan Zhang
Center for Stem Cell Research & Application, Institute of Blood Transfusion, Chinese Academy of Medical Sciences and Peking Union Medical College, Chengdu, Sichuan, China
Qiang Chen
Q.C. and M.W. conceived the idea. M.W. and R.Z. designed the experiments and analysed the data. M.W., R.Z., Q.Z., Y.C., M.Z. and X.L. performed the experiments. M.W., R.Z. and R.R. participated in discussing the results and in writing the manuscript. All authors reviewed the manuscript.
Correspondence to Qiang Chen.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Wu, M., Zhang, R., Zou, Q. et al. Comparison of the Biological Characteristics of Mesenchymal Stem Cells Derived from the Human Placenta and Umbilical Cord. Sci Rep 8, 5014 (2018). https://doi.org/10.1038/s41598-018-23396-1
Acoustic resonance mechanism for axisymmetric screech modes of underexpanded jets impinging on an inclined plate
Xiangru Li, Xuecheng Wu, Luhan Liu, Xiwen Zhang, Pengfei Hao, Feng He
Journal: Journal of Fluid Mechanics / Volume 956 / 10 February 2023
Published online by Cambridge University Press: 26 January 2023, A2
In this paper, the acoustic resonance mechanism for different axisymmetric screech modes of the underexpanded jets that impinge on an inclined plate is investigated experimentally. The ideally expanded Mach number of jets ( $M_j$) ranges from 1.05 to 1.56. The nozzle-to-plate distance at the jet axis and the impingement angle are respectively set as 5.0 $D$ and $30^{\circ }$, where $D$ is the nozzle exit diameter. The acoustic results show that the $M_j$ range for the A2 screech mode of impinging jets is broader than that of underexpanded free jets, and a new axisymmetric screech mode A3 appears. With the increase of $M_j$, the effect of the impinging plate on the shock cell structures of jets becomes obvious gradually, and the second suboptimal peaks are evident in the axial wavenumber spectra of mean shock structures. The coherent flow structures at screech frequencies are extracted from time-resolved schlieren images via the spectral proper orthogonal decomposition (SPOD). The axial wavenumber spectra of the selected SPOD modes suggest that the A1, A2 and A3 screech modes are respectively closed by the guided jet modes that are energized by the interactions between the Kelvin–Helmholtz wavepacket and the first three shock wavenumber peaks. The upstream- and downstream-propagating waves that constitute the screech feedback loop are analysed by applying wavenumber filters to the wavenumber spectra of SPOD modes. The frequencies of these three screech modes can be predicted by the phase constraints between the nozzle exit and the rear edge of the third shock cell. For the A3 mode, the inclined plate invades the third shock cell with the increase of $M_j$, and the phase constraint cannot be satisfied at the lower side of the jets, which leads the A3 mode to fade away. The present results suggest that external boundaries can modulate the frequency and mode of jet screech by changing the axial spacings of shock cells.
Evolution of tornado-like vortices in three-dimensional compressible rectangular cavity flows
Yong Luo, Hao Tian, Conghai Wu, Hu Li, Yimin Wang, Shuhai Zhang
Journal: Journal of Fluid Mechanics / Volume 955 / 25 January 2023
The spatial structure and time evolution of tornado-like vortices in a three-dimensional cavity are studied by topological analysis and numerical simulation. The topology theory of the unsteady vortex in the rectangular coordinate system (Zhang, Zhang & Shu, J. Fluid Mech., vol. 639, 2009, pp. 343–372) is generalized to the curvilinear coordinate system. Two functions $\lambda (q_1,t)$ and $q(q_1,t)$ are obtained to determine the topology structure of the sectional streamline pattern in the cross-section perpendicular to the vortex axis and the meridional plane, respectively. The spiral direction of the sectional streamlines in the cross-section perpendicular to the vortex axis depends on the sign of $\lambda (q_1,t)$. The types of critical points in the meridional plane depend on the sign of $q(q_1,t)$. The relation between the critical points of the streamline pattern in the meridional plane and that in the cross-section perpendicular to the vortex axis is set up. The flow in a three-dimensional rectangular cavity is numerically simulated by solving the three-dimensional Navier–Stokes equations using high-order numerical methods. The spatial structures and the time evolutions of the tornado-like vortices in the cavity are analysed with our topology theory. Both the bubble type and spiral type of vortex breakdown are observed. They have a close relationship with the vortex structure in the cross-section perpendicular to the vortex axis. The bubble-type breakdown has a conical core and the core is non-axisymmetric in the sense of topology. A criterion for the bubble type and the spiral type based on the spatial structure characteristic of the two breakdown types is provided.
BERRY–ESSEEN BOUND AND LOCAL LIMIT THEOREM FOR THE COEFFICIENTS OF PRODUCTS OF RANDOM MATRICES
Probability theory on algebraic and topological structures
Limit theorems
Tien-Cuong Dinh, Lucas Kaufmann, Hao Wu
Journal: Journal of the Institute of Mathematics of Jussieu , First View
Let $\mu$ be a probability measure on $\mathrm{GL}_d(\mathbb{R})$, and denote by $S_n := g_n \cdots g_1$ the associated random matrix product, where the $g_j$ are i.i.d. with law $\mu$. Under the assumptions that $\mu$ has a finite exponential moment and generates a proximal and strongly irreducible semigroup, we prove a Berry–Esseen bound with the optimal rate $O(1/\sqrt{n})$ for the coefficients of $S_n$, settling a long-standing question considered since the fundamental work of Guivarc'h and Raugi. The local limit theorem for the coefficients is also obtained, complementing a recent partial result of Grama, Quint and Xiao.
A graph method of description of driving behaviour characteristics under the guidance of navigation prompt message
Liping Yang, Yang Bian, Xiaohua Zhao, Yiping Wu, Hao Liu, Xiaoming Liu
Journal: The Journal of Navigation / Volume 75 / Issue 5 / September 2022
Published online by Cambridge University Press: 15 August 2022, pp. 1167-1189
Print publication: September 2022
To verify whether a graph is suitable for describing driver behaviour performance under the effects of navigation information, this study applies two types of prompt messages: simple and detailed. The simple messages contain only direction instructions, while the detailed messages contain distance, direction, road and lane instructions. A driving simulation experiment was designed to collect the empirical data. Two vehicle operating indicators (velocity and lateral offset), and two driver manoeuvre indicators (accelerator power and steering wheel angle) were selected, and T-test was used to compare the differences of behavioural performance. Driving behaviour graphs were constructed for the two message conditions; their characteristics and similarities were further analysed. Finally, the results of T-test of behavioural performance and similarity results of the driving behaviour graphs were compared. Results indicated that the two different types of prompt messages were associated with significant differences in driving behaviours, which implies that it is feasible to describe the characteristics of driving behaviours guided by navigation information using such graphs. This study provides a new method for systematically exploring the mechanisms affecting drivers' response to navigation information, and presents a new perspective for the optimisation of navigation information.
Reflections on China's primary care response to COVID-19: roles, limitations and implications
Xiao Tan, Chaojie Liu, Hao Wu
Journal: Primary Health Care Research & Development / Volume 23 / 2022
Published online by Cambridge University Press: 05 August 2022, e46
This study focuses on the role of primary care in China's response to COVID-19. A retrospective, reflective approach was taken using data available to one of the authors who led the national community response to COVID-19, first in Wuhan and then multiple cities in ten provinces/municipalities across the country. At the peak of the pandemic, primary care providers shoulder various public health responsibilities and work in close partnerships with other key stakeholders in the local communities. Primary care providers keep playing a 'sentinel'/surveillance role in identifying re-emerging cases after the elimination of community transmissions of COVID-19. Critically, however, the pandemic once again highlights some key limitations of the primary care sector, including the lack of gatekeeping, limited capacity and weak integration between medical care and public health.
Timing of gestational weight gain in association with birth weight outcomes: a prospective cohort study
Lixia Lin, Xi Chen, Chunrong Zhong, Li Huang, Qian Li, Xu Zhang, Meng Wu, Huanzhuo Wang, Sen Yang, Xiyu Cao, Guoping Xiong, Guoqiang Sun, Xuefeng Yang, Liping Hao, Nianhong Yang
Journal: British Journal of Nutrition , First View
Published online by Cambridge University Press: 18 July 2022, pp. 1-8
Maternal gestational weight gain (GWG) is an important determinant of infant birth weight, and having adequate total GWG has been widely recommended. However, the association of timing of GWG with birth weight remains controversial. We aimed to evaluate this association, especially among women with adequate total GWG. In a prospective cohort study, pregnant women's weight was routinely measured during pregnancy, and their GWG was calculated for the ten intervals: the first 13, 14–18, 19–23, 24–28, 29–30, 31–32, 33–34, 35–36, 37–38 and 39–40 weeks. Birth weight was measured, and small-for-gestational-age (SGA) and large-for-gestational-age were assessed. Generalized linear and Poisson models were used to evaluate the associations of GWG with birth weight and its outcomes after multivariate adjustment, respectively. Of the 5049 women, increased GWG in the first 30 weeks was associated with increased birth weight for male infants, and increased GWG in the first 28 weeks was associated with increased birth weight for females. Among 1713 women with adequate total GWG, increased GWG percent between 14 and 23 weeks was associated with increased birth weight. Moreover, inadequate GWG between 14 and 23 weeks, compared with the adequate GWG, was associated with an increased risk of SGA (43 (13·7 %) v. 42 (7·2 %); relative risk 1·83, 95 % CI 1·21, 2·76). Timing of GWG may influence infant birth weight differentially, and women with inadequate GWG between 14 and 23 weeks may be at higher risk of delivering SGA infants, despite having adequate total GWG.
Global linear instability analysis of thermal convective flow using the linearized lattice Boltzmann method
Hao-Kui Jiang, Kang Luo, Zi-Yao Zhang, Jian Wu, Hong-Liang Yi
Journal: Journal of Fluid Mechanics / Volume 944 / 10 August 2022
Published online by Cambridge University Press: 30 June 2022, A31
Modal global linear stability analysis of thermal convection is performed with the linearized lattice Boltzmann method (LLBM). The onset of Rayleigh–Bénard convection in rectangular cavities with conducting and adiabatic sidewalls and the instability of two-dimensional (2-D) and three-dimensional (3-D) natural convection in cavities are studied. The method of linearizing the local equilibrium probability distribution function that was first proposed by Pérez et al. (Theor. Comp. Fluid Dyn., vol. 31, 2017, pp. 643–664) is extended to solve the coupled linear Navier–Stokes equations together with the linear energy equation in this work. A multiscale analysis is also performed to recover the macroscopic linear Navier–Stokes equations from the discrete lattice Boltzmann equations for both the single and multiple relaxation time models. The present LLBM is implemented in the framework of the Palabos library. It is validated by calculating the linear critical value of 2-D natural convection that the LLBM with the multiple relaxation time model has an error less than 1 % compared with the spectral method. The instability mechanism of the flow is explained by kinetic energy transfer analysis. It is shown that the buoyancy mechanism and inertial mechanism tend to stabilize the Hopf bifurcation of the 2-D natural convection at Pr < 0.08 and Pr > 1, respectively. For 3-D natural convection, subcritical bifurcation of the Hopf type is found for low-Prandtl-number fluids (Pr < 0.1).
Plasma glucose levels and diabetes are independent predictors for mortality in patients with COVID-19
Hui Long, Jiachen Li, Rui Li, Haiyang Zhang, Honghan Ge, Hui Zeng, Xi Chen, Qingbin Lu, Wanli Jiang, Haolong Zeng, Tianle Che, Xiaolei Ye, Liqun Fang, Ying Qin, Qiang Wang, Qingming Wu, Hao Li, Wei Liu
Journal: Epidemiology & Infection / Volume 150 / 2022
Published online by Cambridge University Press: 16 May 2022, e106
This study is performed to figure out how the presence of diabetes affects the infection, progression and prognosis of 2019 novel coronavirus disease (COVID-19), and the effective therapy that can treat the diabetes-complicated patients with COVID-19. A multicentre study was performed in four hospitals. COVID-19 patients with diabetes mellitus (DM) or hyperglycaemia were compared with those without these conditions and matched by propensity score matching for their clinical progress and outcome. Totally, 2444 confirmed COVID-19 patients were recruited, from whom 336 had DM. Compared to 1344 non-DM patients with age and sex matched, DM-COVID-19 patients had significantly higher rates of intensive care unit entrance (12.43% vs. 6.58%, P = 0.014), kidney failure (9.20% vs. 4.05%, P = 0.027) and mortality (25.00% vs. 18.15%, P < 0.001). Age and sex-stratified comparison revealed increased susceptibility to COVID-19 only from females with DM. For either non-DM or DM group, hyperglycaemia was associated with adverse outcomes, featured by higher rates of severe pneumonia and mortality, in comparison with non-hyperglycaemia. This was accompanied by significantly altered laboratory indicators including lymphocyte and neutrophil percentage, C-reactive protein and urea nitrogen level, all with correlation coefficients >0.35. Both diabetes and hyperglycaemia were independently associated with adverse prognosis of COVID-19, with hazard ratios of 10.41 and 3.58, respectively.
Characterization of novel low-molecular-weight glutenin subunit genes from the diploid wild wheat relative Aegilops umbellulata
Wenyang Wang, Wenjun Ji, Lihua Feng, Shunzong Ning, Zhongwei Yuan, Ming Hao, Lianquan Zhang, Zehong Yan, Bihua Wu, Dengcai Liu, Lin Huang
Journal: Plant Genetic Resources / Volume 20 / Issue 1 / February 2022
Published online by Cambridge University Press: 12 May 2022, pp. 1-6
Low-molecular-weight glutenin subunits (LMW-GSs) play a crucial role in determining wheat flour processing quality. In this work, 35 novel LMW-GS genes (32 active genes and three pseudogenes) from three Aegilops umbellulata (2n = 2x = 14, UU) accessions were amplified by allele-specific PCR. We found that all LMW-GS genes had the same primary structure shared by other known LMW-GSs. The 32 active genes encode 31 typical LMW-m-type subunits. MZ424050 possessed nine cysteine residues, with an extra cysteine residue located at the last amino acid residue of conserved C-terminal region III, which could favour the formation of larger glutenin polymers and therefore may have positive effects on dough properties. We found extensive variation, mainly resulting from single-nucleotide polymorphisms (SNPs) and insertions and deletions (InDels), among the LMW-GS genes in Ae. umbellulata. Our results demonstrate that Ae. umbellulata is an important source of LMW-GS variants and highlight the potential value of the novel LMW-GS alleles for wheat quality improvement.
Machine-learning guided optimization of laser pulses for direct-drive implosions - CORRIGENDUM
Fuyuan Wu, Xiaohu Yang, Yanyun Ma, Qi Zhang, Zhe Zhang, Xiaohui Yuan, Hao Liu, Zhengdong Liu, Jiayong Zhong, Jian Zheng, Yutong Li, Jie Zhang
Journal: High Power Laser Science and Engineering / Volume 10 / 2022
Published online by Cambridge University Press: 04 May 2022, e17
Two-dimensional hydrodynamic schooling of two flapping swimmers initially in tandem formation
Xingjian Lin, Jie Wu, Liming Yang, Hao Dong
Journal: Journal of Fluid Mechanics / Volume 941 / 25 June 2022
Published online by Cambridge University Press: 27 April 2022, A29
The effect of hydrodynamic interactions on the collective locomotion of fish schools is still poorly understood. In this paper, the flow-mediated organization of two tandem flapping foils, which are free in both the longitudinal and lateral directions, is numerically studied. It is found that the tandem formation is unstable for two foils when they can self-propel in both the longitudinal and lateral directions. Three types of resultant regular formations are observed, i.e. semi-tandem formation, staggered formation and transitional formation. Which type of regular formation occurs depends on the flapping parameters and the initial longitudinal distance between the two foils. Moreover, there is a threshold value of the cycle-averaged longitudinal distance (which is approximately 0.55) below which both velocity enhancement and efficiency augmentation can be achieved by two foils in regular formations. The results obtained here may shed some light on understanding the emergence of regular formations of fish schools.
Geochronological and geochemical constraints on the origin of highly 13Ccarb-depleted calcite in basal Ediacaran cap carbonate
Zhongwu Lan, Shitou Wu, Nick M. W. Roberts, Shujing Zhang, Rong Cao, Hao Wang, Yueheng Yang
Journal: Geological Magazine / Volume 159 / Issue 8 / August 2022
Published online by Cambridge University Press: 04 April 2022, pp. 1323-1334
Ediacaran cap dolostone atop Marinoan glacial deposits contains complex sedimentary structures with extremely negative δ13Ccarb values in close association with oscillations in palaeoclimatic and oceanographic proxy records. However, the precise geological, geochronological and geochemical context of the cap dolostone has not been clarified, which hampers correct interpretation of the extremely negative δ13Ccarb values and their causal relationship with the Snowball Earth hypothesis. In this study, we conducted detailed in situ geochronological and geochemical analyses of the calcite within the cap dolostone from the Ediacaran Doushantuo Formation in South China in order to constrain its formation and relationship to the Snowball Earth hypothesis. Petrographic observations show that formation of dolomite pre-dates precipitation of calcite and pyrite, which pre-dates quartz cementation in the basal cap carbonate. Calcite cement within the cap dolostone yielded a U–Pb age of 636.5 ± 7.4/17.8 Ma (2σ, MSWD = 1.6, n = 36/40), which is within uncertainty of a published dolomite U–Pb age of 632 ± 17 Ma (recalculated as 629.3 ± 16.7/22.9 Ma). These age constraints rule out the possibility that the calcite cement was formed by late Ediacaran or Cambrian hydrothermal activity. The rare earth element distribution patterns suggest a dominant seawater origin overprinted by subsequent early Ediacaran hydrothermal activity. The combined age, petrographic and geochemical data suggest oxidization of methane clathrates in response to a complicated interplay among eustasy, isostatic rebound and hydrothermal fluids.
Machine-learning guided optimization of laser pulses for direct-drive implosions
Published online by Cambridge University Press: 22 February 2022, e12
The optimization of laser pulse shapes is of great importance and a major challenge for laser direct-drive implosions. In this paper, we propose an efficient intelligent method to perform laser pulse optimization via hydrodynamic simulations guided by the genetic algorithm and the random forest algorithm. Compared with manual optimization, the machine-learning guided method efficiently improves the areal density by 63% while reducing the in-flight aspect ratio by 30%. A relationship between the maximum areal density and ion temperature is also obtained from the analysis of the large simulation dataset. This design method has been successfully demonstrated by the 2021 summer double-cone ignition experiments conducted at the SG-II upgrade laser facility and has great prospects for the design of other inertial fusion experiments.
Consistent brain structural abnormalities and multisite individualised classification of schizophrenia using deep neural networks
Yue Cui, Chao Li, Bing Liu, Jing Sui, Ming Song, Jun Chen, Yunchun Chen, Hua Guo, Peng Li, Lin Lu, Luxian Lv, Yuping Ning, Ping Wan, Huaning Wang, Huiling Wang, Huawang Wu, Hao Yan, Jun Yan, Yongfeng Yang, Hongxing Zhang, Dai Zhang, Tianzi Jiang
Journal: The British Journal of Psychiatry / Volume 221 / Issue 6 / December 2022
Print publication: December 2022
Previous analyses of grey and white matter volumes have reported that schizophrenia is associated with structural changes. Deep learning is a data-driven approach that can capture highly compact hierarchical non-linear relationships among high-dimensional features, and therefore can facilitate the development of clinical tools for making a more accurate and earlier diagnosis of schizophrenia.
To identify consistent grey matter abnormalities in patients with schizophrenia, 662 people with schizophrenia and 613 healthy controls were recruited from eight centres across China, and the data from these independent sites were used to validate deep-learning classifiers.
We used a prospective image-based meta-analysis of whole-brain voxel-based morphometry. We also automatically differentiated patients with schizophrenia from healthy controls using combined grey matter, white matter and cerebrospinal fluid volumetric features, incorporated a deep neural network approach on an individual basis, and tested the generalisability of the classification models using independent validation sites.
We found that statistically reliable schizophrenia-related grey matter abnormalities primarily occurred in regions including the superior temporal gyrus extending to the temporal pole, the insular cortex, the orbital and middle frontal cortices, the middle cingulum and the thalamus. Evaluated using leave-one-site-out cross-validation, the classification performance across the eight independent research sites was: accuracy, 77.19–85.74%; sensitivity, 75.31–89.29%; and area under the receiver operating characteristic curve, 0.797–0.909.
These results suggest that deep-learning techniques applied to multidimensional neuroanatomical changes can robustly discriminate patients with schizophrenia from healthy controls, findings that could facilitate clinical diagnosis and treatment in schizophrenia.
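As a rough illustration of the leave-one-site-out validation scheme described above (not the authors' pipeline), the following Python sketch runs such a cross-validation with scikit-learn; the synthetic features, labels and site assignments, and the small MLP used as a stand-in for the deep neural network, are all assumptions made for the example.

```python
# Illustrative leave-one-site-out cross-validation (not the authors' pipeline).
# Synthetic features stand in for grey/white matter and CSF volumes.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(42)
n, n_features, n_sites = 1275, 50, 8
X = rng.normal(size=(n, n_features))
y = rng.integers(0, 2, n)              # 1 = patient, 0 = control (synthetic labels)
sites = rng.integers(0, n_sites, n)    # acquisition site of each participant

for held_out, (train_idx, test_idx) in enumerate(LeaveOneGroupOut().split(X, y, groups=sites)):
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    proba = clf.predict_proba(X[test_idx])[:, 1]
    acc = accuracy_score(y[test_idx], (proba > 0.5).astype(int))
    auc = roc_auc_score(y[test_idx], proba)
    print(f"held-out site {held_out}: accuracy {acc:.2%}, AUC {auc:.3f}")
```

With real data, the per-site accuracies and AUCs printed by such a loop correspond to the ranges reported in the abstract.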
An improved adjoint-based ocean wave reconstruction and prediction method
Jie Wu, Xuanting Hao, Lian Shen
Journal: Flow: Applications of Fluid Mechanics / Volume 2 / 2022
Published online by Cambridge University Press: 24 January 2022, E2
We propose an improved adjoint-based method for the reconstruction and prediction of the nonlinear wave field from coarse-resolution measurement data. We adopt the data assimilation framework using an adjoint equation to search for the optimal initial wave field to match the wave field simulation result at later times with the given measurement data. Compared with the conventional approach where the optimised initial surface elevation and velocity potential are independent of each other, our method features an additional constraint to dynamically connect these two control variables based on the dispersion relation of waves. The performance of our new method and the conventional method is assessed with the nonlinear wave data generated from phase-resolved nonlinear wave simulations using the high-order spectral method. We consider a variety of wave steepness and noise levels for the nonlinear irregular waves. It is found that the conventional method tends to overestimate the surface elevation in the high-frequency region and underestimate the velocity potential. In comparison, our new method shows significantly improved performance in the reconstruction and prediction of instantaneous surface elevation, surface velocity potential and high-order wave statistics, including the skewness and kurtosis.
Esterification of naphthenic acids with various structures over tungstophosphoric acid-intercalated layer double hydroxide catalysts with various interlayer spacings
Yan Wu, Shiang He, Dongmei Li, Yang Li, Hao Wang
Journal: Clay Minerals / Volume 56 / Issue 3 / September 2021
Tungstophosphoric acid-intercalated MgAl layer double hydroxides (LDHs) are active catalysts for removing naphthenic acids (NAs) from petroleum via esterification. Because their active sites are located in the interlayer, the interlayer spacing of LDHs might affect their activity, particularly for NAs with various structures. Herein, two tungstophosphoric acid-intercalated MgAl LDHs with different interlayer spacings (d003 = 1.46 and 1.07 nm), synthesized by varying the ion-exchange time, were used as catalysts for esterification between NAs and ethylene glycol. Six NAs with various side chains and rings were used as model compounds to investigate the effects of NA structure and d003 value on the activity of the LDHs. In general, NAs with large molecular sizes and steric hindrance are less reactive over the same catalyst. The LDH with a larger d003 value favours the esterification of NAs regardless of their structure, particularly NAs with large molecular sizes and steric hindrance. However, a large d003 is less effective for the esterification of NAs with conjugated carboxyl groups. An enlarged interlayer space might make it easier for NA molecules to access the interlayer of the LDHs and come into contact with the catalytic sites, which would explain the enhanced reactivity. The esterification kinetics of cyclohexanecarboxylic acid over these LDHs follow a first-order reaction. The activation energies for the LDHs with large and small d003 values are 26.25 and 32.18 kJ mol–1, respectively.
Comparison of ARIMA, ES, GRNN and ARIMA–GRNN hybrid models to forecast the second wave of COVID-19 in India and the United States
Gang Wang, Tiantian Wu, Wudi Wei, Junjun Jiang, Sanqi An, Bingyu Liang, Li Ye, Hao Liang
Published online by Cambridge University Press: 02 November 2021, e240
As an acute infectious pneumonia, coronavirus disease 2019 (COVID-19) has created unique challenges for each nation and region. Both India and the United States (US) have experienced a second outbreak, resulting in a severe disease burden. This study aimed to develop optimal models to predict daily new cases in order to help develop public health strategies. Autoregressive integrated moving average (ARIMA) models, generalised regression neural network (GRNN) models, an ARIMA–GRNN hybrid model and an exponential smoothing (ES) model were used to fit the daily new cases. Performance was evaluated by the minimum mean absolute percentage error (MAPE). The predictions of the ARIMA (3, 1, 3) (1, 1, 1)14 model were closest to the actual values in India, while the ARIMA–GRNN hybrid model performed better in the US. According to the models, the number of daily new COVID-19 cases in India continued to decrease after 27 May 2021. In conclusion, the ARIMA model proved to be the best-fitting model for forecasting daily new COVID-19 cases in India, and the ARIMA–GRNN hybrid model had the best prediction performance in the US. The appropriate model should be selected for each region when predicting daily new cases. The results can shed light on the trends of the outbreak and the epidemiological stage of these regions.
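To make the modelling step above concrete, the following Python sketch (not the authors' code) fits a seasonal ARIMA of the order quoted in the abstract to a synthetic daily-case series with statsmodels and scores a two-week hold-out by MAPE; the data, the split point and every value other than the model order are assumptions made for the example.

```python
# Hypothetical sketch: fit ARIMA(3,1,3)(1,1,1)14 to daily case counts and score by MAPE.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
dates = pd.date_range("2021-02-01", periods=120, freq="D")
# Synthetic "daily new cases": a smooth wave plus noise, standing in for surveillance data.
cases = pd.Series(
    5000 + 3000 * np.sin(np.linspace(0, 3 * np.pi, 120)) + rng.normal(0, 300, 120),
    index=dates,
).clip(lower=0)

train, test = cases[:-14], cases[-14:]          # hold out the last two weeks

model = ARIMA(train, order=(3, 1, 3), seasonal_order=(1, 1, 1, 14))
fitted = model.fit()
forecast = fitted.forecast(steps=len(test))

mape = np.mean(np.abs((test.values - forecast.values) / test.values)) * 100
print(f"MAPE on the 14-day hold-out: {mape:.2f}%")
```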
Association of plasma lead, cadmium and selenium levels with hearing loss in adults: National Health and Nutrition Examination Survey (NHANES) 2011–2012
Yaqin Tu, Guorun Fan, Nan Wu, Hao Wu, Hongjun Xiao
Journal: British Journal of Nutrition / Volume 128 / Issue 6 / 28 September 2022
Published online by Cambridge University Press: 29 October 2021, pp. 1100-1107
Print publication: 28 September 2022
To determine the association between hearing loss and environmental Pb, Cd and Se exposure, a total of 1503 American adults from the National Health and Nutrition Examination Survey (NHANES) 2011–2012 were assessed. The average of four audiometric frequencies (0·5, 1, 2 and 4 kHz) was used to identify speech-frequency hearing loss (SFHL), while the average of three audiometric frequencies (3, 4 and 6 kHz) was used to identify high-frequency hearing loss (HFHL). The adjusted ORs for HFHL, determined by comparing the highest and lowest blood Pb and Cd quartiles, were 1·98 (95 % CI: 1·27, 3·10) and 1·81 (95 % CI: 1·13, 2·90), respectively. SFHL was significantly associated with blood Cd, with an OR of 2·42 for the highest quartile. When further stratified by age, this association appeared to be limited to adults aged 35–52 years. After stratification by gender, in addition to Pb and Cd, blood Se showed a dose-dependent association with SFHL in men. In women, only Cd showed a dose-dependent association with speech- and high-frequency hearing loss. Hearing loss was positively associated with blood levels of Pb and Cd. Additionally, our study provides novel evidence suggesting that excessive Se supplementation may increase SFHL risk in men.
Exponential mixing property for Hénon–Sibony maps of $\mathbb {C}^k$
HAO WU
Journal: Ergodic Theory and Dynamical Systems / Volume 42 / Issue 12 / December 2022
Published online by Cambridge University Press: 17 September 2021, pp. 3818-3830
Let f be a Hénon–Sibony map, also known as a regular polynomial automorphism of $\mathbb{C}^k$, and let $\mu$ be the equilibrium measure of f. In this paper we prove that $\mu$ is exponentially mixing for plurisubharmonic observables.
Depressive symptoms and cognitive impairment: A 10-year follow-up study from the Survey of Health, Ageing and Retirement in Europe
Fei-Fei Han, Hui-Xin Wang, Jia-Jia Wu, Wu Yao, Chang-Fu Hao, Jin-Jing Pei
Journal: European Psychiatry / Volume 64 / Issue 1 / 2021
Depressive symptoms and cognitive impairment often coexist in the elderly. This study investigates the effect of late-life depressive symptoms on the risk of mild cognitive impairment (MCI).
A total of 14,231 dementia- and MCI-free participants aged 60+ from the Survey of Health, Ageing and Retirement in Europe were followed up for 10 years to detect incident MCI. MCI was defined as a score 1.5 standard deviations (SD) below the mean of the standardized global cognition score. Depressive symptoms were assessed with the 12-item Europe depression scale (EURO-D). Severity of depressive symptoms was grouped as no/minimal (score 0–3), moderate (score 4–5) and severe (score 6–12). Significant depressive symptoms (SDSs) were defined as a EURO-D score ≥ 4.
During an average follow-up of 8.2 (SD = 2.4) years, 1,352 (9.50%) incident MCI cases were identified. In Cox proportional hazards models adjusted for confounders, SDSs were related to a higher MCI risk in the total population (hazard ratio [HR] = 1.26, 95% confidence interval [CI]: 1.10–1.44), in individuals aged 70+ (HR = 1.35, 95% CI: 1.14–1.61) and in women (HR = 1.28, 95% CI: 1.08–1.51). In addition, there was a dose–response association between the severity of depressive symptoms and MCI incidence in the total population, in people aged ≥70 years and in women (p-trend <0.001).
Significant depressive symptoms were associated with a higher incidence of MCI in a dose–response fashion, especially among people aged 70+ years and women. Treating depressive symptoms, particularly in the older population and in women, may be effective in preventing MCI.
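For readers unfamiliar with the modelling described above, a minimal Python sketch of a Cox proportional hazards fit is given below (not the authors' analysis); it uses the lifelines package on synthetic data, and every variable name and value is a placeholder rather than an actual SHARE variable.

```python
# Hypothetical sketch of a Cox proportional hazards analysis; synthetic data only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "sds": rng.integers(0, 2, n),             # 1 = significant depressive symptoms (EURO-D >= 4)
    "age": rng.uniform(60, 90, n),
    "female": rng.integers(0, 2, n),
    "followup_years": rng.uniform(1, 10, n),  # time to incident MCI or censoring
    "mci": rng.integers(0, 2, n),             # 1 = incident MCI observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="mci")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% confidence intervals
```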
Heterologous expression of a Streptomyces cyaneus laccase for biomass modification applications
Selin Ece, Camilla Lambertz, Rainer Fischer & Ulrich Commandeur
Laccases are used for the conversion of biomass into fermentable sugars but it is difficult to produce high yields of active laccases in heterologous expression systems. We overcame this challenge by expressing Streptomyces cyaneus CECT 3335 laccase in Escherichia coli (ScLac) and we achieved a yield of up to 104 mg L−1 following purification by one-step affinity chromatography. Stability and activity assays using simple lignin model substrates showed that the purified enzyme preparation was active over a broad pH range and at high temperatures, suggesting it would be suitable for biomass degradation. The reusability of ScLac was also demonstrated by immobilizing the enzyme on agarose beads with a binding yield of 33%, and by the synthesis of cross-linked enzyme aggregates with an initial activity recovery of 72%.
Laccases are the largest subgroup of the multi-copper oxidase protein superfamily (Ihssen et al. 2015). They can oxidize a broad range of substrates including phenolic compounds, azo dyes, aromatic amines, non-phenolic substrates (mostly with the help of mediators), anilines and aromatic thiols, and recalcitrant environmental pollutants (Canas and Camarero 2010; Majumdar et al. 2014; Margot et al. 2013; Widsten and Kandelbauer 2008). Each monomeric laccase contains four copper atoms located at three different positions, namely the type 1 (T1), type 2 (T2) and binuclear type 3 (T3) copper sites, all of which are involved in the oxidation of substrate molecules accompanied by the reduction of molecular oxygen to two molecules of water (Thurston 1994). The copper atoms bind histidine residues that are conserved among the laccases of different organisms (Claus 2003; Luis et al. 2004). The T1 copper gives the laccase its blue color and is also responsible for the final oxidation of the substrate. Electrons are transferred from the T1 copper site to the T2/T3 sites, where molecular oxygen is reduced to water. The T1 copper is characterized by its absorbance at 610 nm whereas the T3 copper shows weak absorbance at 330 nm. The T2 copper is colorless but it can be detected by electro-paramagnetic resonance spectroscopy (EPR) (Gunne et al. 2013; Thurston 1994).
Laccases are used in many biotechnological processes in the paper and pulp, textile, pharmaceutical and petrochemical industries, and also for the bioremediation of industrial wastes (Chandra and Chowdhary 2015; Morozova et al. 2007; Munk et al. 2015; Rodriguez Couto and Toca Herrera 2006; Roth and Spiess 2015). Laccases can also be combined with laccase mediator systems (LMS) such as 1-hydroxybenzotriazole (HBT) for the pretreatment and depolymerization of lignocellulosic biomass (Call and Mücke 1997). The digestibility of cellulose can be increased following lignin decomposition by laccases (Chen et al. 2012). In the presence of HBT, lignin can also be removed from whole woody and non-woody feedstocks to increase sugar and ethanol yields (Gutierrez et al. 2012), whereas the alternative mediator 2,2′-azino-bis (3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) can allow an alkaline-stable laccase to selectively degrade lignin within lignocellulose and improve the enzymatic hydrolysis of wheat straw when combined with a steam explosion pretreatment (Qiu and Chen 2012).
Although laccases are ubiquitous, research has focused mainly on fungal laccases because many different isozymes have been identified, particularly among the white-rot fungi. Following the identification of the first bacterial laccase (Givaudan et al. 1993) many further examples were discovered (Chandra and Chowdhary 2015). The properties of bacterial laccases, such as their enantioselectivity and stability at high pH and high temperatures, are not yet understood in detail, but they have many advantages for applications such as the pretreatment of recalcitrant biomass. The large-scale production of fungal laccases is challenging because of the slow growth rates of fungi. They also have a narrower optimal pH range. These factors have made bacterial laccases a valuable alternative (Ausec et al. 2011; Bugg et al. 2011; Chandra and Chowdhary 2015).
Here we describe the heterologous expression of a laccase from Streptomyces cyaneus CECT 3335 in Escherichia coli (ScLac). After purification, the recombinant enzyme preparation was characterized and compared in terms of its activity against common substrates. Two immobilization methods were used to assess the reusability of the recombinant laccase, thus offering a way to reduce the costs of enzyme production.
The laccase coding sequence (GenBank HQ857207) was codon optimized for expression in E. coli (European Nucleotide Archive LT795002), synthesized by Genscript (Piscataway, USA) and transferred to the expression vector pET-22b(+) (Novagen, Darmstadt, Germany). The gene was inserted at the NdeI and XhoI sites using forward primer 5′-GGA ATT CCA TAT GGA AAC CGA TAT TAT TGA ACG CC-3′ and reverse primer 5′-AAG CTC GAG GCC GGT ATG GCC CGC GCC ATG-3′. A His6 tag was added to the C-terminus to enable protein purification by immobilized metal affinity chromatography (IMAC). E. coli BL21 (DE3) Star (Novagen, Merck KGaA, Darmstadt, Germany) was used as the expression host.
Expression and protein extraction
Transformed E. coli BL21 (DE3) Star cells were incubated overnight at 37 °C with shaking at 180 rpm in 50 mL lysogeny broth (LB) containing 100 µg mL−1 ampicillin. The overnight culture was then used to inoculate 500 mL terrific broth (TB) supplemented with the same antibiotic, and the culture was incubated as described above until the optical density (OD600) reached 0.6. Laccase expression was then induced by adding 0.04 mM isopropyl β-d-1-thiogalactopyranoside (IPTG). In addition, 10 mM benzyl alcohol was added to the culture 20 min before the IPTG to induce the expression of native chaperones (de Marco et al. 2005). The laccase culture was incubated at 20 °C for 20 h at 180 rpm and the cells were harvested by centrifugation (5000×g, 15 min, 4 °C). The cell pellet was resuspended in 20 mM potassium phosphate buffer (pH 7.4) containing 20 mM imidazole and 300 mM NaCl. The cells were disrupted by sonication in the presence of 0.5 mM phenylmethylsulfonyl fluoride, 1 mg mL−1 lysozyme and 10 µg mL−1 DNase I. The cell suspension was then centrifuged (30,000×g, 30 min, 4 °C) and the supernatant was separated from the cell debris by passing through a 0.45-µm filter.
ScLac was purified by IMAC using 5 mL HiTrap Chelating Sepharose FF (GE Healthcare, Freiburg, Germany). The cell extract was applied to the column at a flow rate of 3 mL min−1 in potassium phosphate running buffer (pH 7.4) containing 20 mM imidazole and 300 mM NaCl (also used as the washing buffer). Bound proteins were eluted by using a gradient of imidazole (0–500 mM) with a total volume of 50 mL. Elution fractions (2 mL) were checked by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) using 5–12% Tris–glycine gels (Laemmli 1970). The gels were stained with Coomassie Brilliant Blue or blotted onto Hybond-C nitrocellulose membranes (GE Healthcare, Freiburg, Germany). The membranes were blocked with 5% (w/v) skimmed milk in phosphate buffered saline (PBS) for 1 h and incubated with a polyclonal antibody against the His6 tag (Rockland Immunochemicals, Limerick, USA) for at least 2 h at room temperature (RT) with constant shaking. After washing, the membranes were incubated with a polyclonal alkaline phosphatase-conjugated goat anti-rabbit (GARAP) secondary antibody (Dianova, Hamburg, Germany) and the signal was visualized with nitroblue tetrazolium chloride/5-bromo-4-chloro-3-indolyl phosphate (NBT/BCIP) p-toluidine salt (Carl Roth, Karlsruhe, Germany). The protein bands were compared with P7712S molecular weight markers (New England Biolabs, Ipswich, USA). All fractions containing the target protein were pooled and dialyzed against 100 mM HEPES (pH 7.5) overnight at 4 °C with one buffer change. The purified and quantified proteins were stored at 4 °C.
Following dialysis, 0.5 mM CuSO4 was added to the laccase solution (Pozdnyakova and Wittung-Stafshede 2001) and the sample was mixed slowly on ice for 2 h before centrifugation (30,000×g, 30 min, 4 °C) to remove any aggregates that may have formed during the incubation with CuSO4. The protein concentration was determined again by measuring the absorbance at 280 nm and using the molar extinction coefficient calculated from the S. cyaneus laccase sequence, including the C-terminal His6 tag. The purified and quantified protein was stored briefly at 4 °C or was frozen in liquid nitrogen for long-term storage at −80 °C.
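A minimal sketch of the A280-based concentration estimate described above (Beer–Lambert law) is given below; the absorbance reading and the extinction coefficient are placeholders, not the actual values used for ScLac, and only the 69.5 kDa molecular mass is taken from the text.

```python
# Minimal sketch of the A280-based protein concentration estimate (Beer-Lambert law).
EPSILON_280 = 85000.0   # molar extinction coefficient, M^-1 cm^-1 (hypothetical placeholder)
PATH_LENGTH = 1.0       # cuvette path length, cm
MW = 69500.0            # predicted molecular mass of ScLac-His6, g mol^-1 (from the text)

def protein_concentration(a280: float) -> tuple[float, float]:
    """Return (molar concentration in M, mass concentration in mg/mL) from an A280 reading."""
    molar = a280 / (EPSILON_280 * PATH_LENGTH)   # c = A / (epsilon * l)
    mg_per_ml = molar * MW                        # g/L is numerically equal to mg/mL
    return molar, mg_per_ml

molar, mg_ml = protein_concentration(1.25)        # example absorbance reading
print(f"{molar * 1e6:.1f} uM  ({mg_ml:.2f} mg/mL)")
```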
SDS-PAGE was used to analyze 13 µg of the purified ScLac preparation. The enzyme solution was separated on 5–12% Tris–glycine gels before staining with Coomassie Brilliant Blue, and the single protein band was compared to the P7712S molecular weight marker.
Activity assays
Zymography assays were used for the initial verification of enzyme activity. All other activity assays for ScLac were based on spectrophotometry, using an Infinite® 200 microplate reader (Tecan, Maennedorf, Switzerland) at 30 °C in 100 mM MES buffer (pH 5.5). Activity was tested by measuring the oxidation of the following substrates: 2,6-dimethoxyphenol (DMP; Sigma-Aldrich, Darmstadt, Germany) at 468 nm (ε = 49,600 M−1 cm−1), ABTS at 420 nm (ε = 36,000 M−1 cm−1) and guaiacol (Sigma-Aldrich, Darmstadt, Germany) at 465 nm (ε = 26,600 M−1 cm−1). Control reactions were prepared under the same conditions with combinations of purified laccase and buffer, substrate and buffer, or buffer only. All activity assays were performed in duplicate or triplicate.
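The conversion from the measured absorbance rates to enzyme activity is not spelled out above; the sketch below shows the standard calculation (1 U = 1 µmol of substrate oxidized per minute) using the extinction coefficients quoted in the text, while the reaction volumes and optical path length are plausible assumptions rather than values taken from the protocol.

```python
# Sketch of the usual conversion from a spectrophotometric rate to volumetric activity.
EXTINCTION = {            # M^-1 cm^-1, as given in the Methods
    "DMP": 49600.0,
    "ABTS": 36000.0,
    "guaiacol": 26600.0,
}

def volumetric_activity(dA_per_min: float, substrate: str,
                        reaction_volume_ml: float = 0.2,   # assumed microplate well volume
                        enzyme_volume_ml: float = 0.02,    # assumed enzyme aliquot
                        path_cm: float = 0.6) -> float:    # assumed well path length
    """Return activity in U per mL of enzyme solution (1 U = 1 umol min^-1)."""
    rate_molar = dA_per_min / (EXTINCTION[substrate] * path_cm)      # mol L^-1 min^-1
    umol_per_min = rate_molar * (reaction_volume_ml / 1000.0) * 1e6  # umol min^-1 = U
    return umol_per_min / enzyme_volume_ml

print(f"{volumetric_activity(0.15, 'DMP'):.3f} U/mL")   # example rate of 0.15 A468 units/min
```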
Characterization of the purified enzyme
The ultraviolet/visible (UV–Vis) spectra (230–700 nm) of 1 µM purified ScLac were recorded before and after incubation with CuSO4 using an Infinite 200 plate reader. Activity assays before and after the incubation with CuSO4 were carried out using DMP as the substrate and 0.4 µM ScLac.
The pH optimum of the laccase was determined by measuring the activity of the purified recombinant enzyme against DMP as described above, in a set of buffers with pH values ranging from 3.5 to 10.0. The buffers were 100 mM sodium acetate (pH 3.5–5.0), 100 mM 3-(N-morpholino)propanesulfonic acid (MOPS) (pH 5.5–7.5), 100 mM 2-amino-2-hydroxymethyl-propane-1,3-diol hydrochloric acid (Tris–HCl) (pH 8.0–8.5) and 100 mM glycine sodium hydroxide (pH 9.0–10.0). Each reaction was prepared with 0.4 µM ScLac and the reactions were followed at 468 nm for 15 min. Enzyme stability in the optimal buffer system was determined by incubating 0.4 µM ScLac for 1, 6 and 24 h at 30 °C before measuring residual activities against DMP as described above. Control reactions were set up with 0.4 µM laccase without incubation at 30 °C.
The temperature optimum of ScLac was determined by incubating 0.4 µM of the enzyme with 20 µM DMP in 100 mM MES buffer (pH 5.5) at various temperatures ranging from 25 to 90 °C for 5 min, and then measuring the absorbance values as described for DMP above. The influence of temperature on laccase stability was determined by incubating 0.4 µM ScLac in 100 mM MES buffer (pH 5.5) at 25, 30, 60 and 90 °C for 1 h, then chilling the protein samples on ice for 5 min and measuring the residual enzyme activity using DMP as described above. The diagrams for the characterization of ScLac were based on relative activities calculated by assigning the highest value in the dataset representing each enzyme as 100%.
Kinetic parameters were analyzed by measuring enzyme activities against guaiacol, DMP and ABTS (as described above) over the concentration range 5–95 µM under the optimal assay conditions. Kinetic constants were analyzed and calculated using GraphPad Prism v6 software (Statcon, Germany).
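The kinetic constants were fitted in GraphPad Prism; an equivalent Michaelis–Menten fit can be sketched in Python with SciPy as shown below, using made-up rate data rather than the measured ScLac values.

```python
# Equivalent Michaelis-Menten fit to the one performed in GraphPad Prism (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

substrate_uM = np.array([5, 10, 20, 35, 50, 70, 95], dtype=float)
rates = np.array([0.8, 1.4, 2.1, 2.7, 3.0, 3.3, 3.5])   # hypothetical initial rates (U mg^-1)

(vmax, km), cov = curve_fit(michaelis_menten, substrate_uM, rates, p0=[4.0, 20.0])
perr = np.sqrt(np.diag(cov))
print(f"Vmax = {vmax:.2f} +/- {perr[0]:.2f} U mg^-1, Km = {km:.1f} +/- {perr[1]:.1f} uM")
```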
Immobilization of ScLac
A sample of purified ScLac was immobilized using two different methods: cross-linked enzyme aggregates (CLEAs) and AminoLink™ Plus Coupling Resin (Thermo Fisher Scientific, Darmstadt, Germany). The latter was used as a carrier material, exploiting the Schiff base reaction between the primary amines of the protein and the aldehyde groups of the resin.
CLEAs were prepared as follows: 2.5 mL of 1 g mL−1 polyethylene glycol (PEG) 4000 was added dropwise to 5 mg of a purified ScLac sample on ice and incubated at 20 °C for 2 h with shaking at 200 rpm. The sample was mixed with 5 mM glutaraldehyde and incubated overnight under the same conditions. The CLEAs were then collected by centrifugation (5000×g, 10 min, 4 °C). The supernatant was removed and an activity assay with DMP was carried out as described above to check for free laccase in the washing fraction. The washing steps were repeated until no activity was detected in the washing buffer and the CLEAs were then resuspended in 1 mL 0.1 M MES buffer (pH 5.5). The activity recovery after the CLEA protocol was calculated by dividing the CLEA activity (U) by the activity of the free enzyme (U mL−1) multiplied by the volume of the free enzyme used for immobilization, and then multiplying the resulting value by 100 (Eq. 1) (Lopez-Gallego et al. 2005).
The activity recovery (%) of the prepared CLEAs was calculated according to Eq. 1:

$$\text{Activity recovery}\,(\%) = \frac{A_{\text{CLEA}}}{A_{\text{free}} \times V_{\text{free}}} \times 100$$

where $A_{\text{CLEA}}$ is the activity (U) of the prepared CLEAs, $A_{\text{free}}$ is the activity (U mL−1) of the free enzyme and $V_{\text{free}}$ is the volume (mL) of free enzyme used to prepare the CLEAs.
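Eq. 1 can be transcribed directly; the short sketch below applies it to made-up numbers purely as an illustration.

```python
# Direct transcription of Eq. 1; the numbers are invented for illustration only.
def activity_recovery(a_clea_U: float, a_free_U_per_mL: float, v_free_mL: float) -> float:
    """Activity recovery (%) = A_CLEA / (A_free * V_free) * 100."""
    return a_clea_U / (a_free_U_per_mL * v_free_mL) * 100.0

print(f"{activity_recovery(3.6, 1.0, 5.0):.1f}%")   # e.g. 3.6 U recovered from 5 mL at 1 U/mL
```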
Immobilization on AminoLink™ Plus Coupling Resin was achieved by mixing approximately 5 mg of purified ScLac with 10–50 µm aldehyde-functionalized agarose beads provided as a 50% slurry in 0.02% sodium azide buffer. The immobilization protocol was performed in 10 mL gravity-flow columns (Bio-Rad, Munich, Germany) that were never allowed to run dry. The columns were loaded with 500 μL of the bead slurry before equilibration with 6 mL coupling buffer (0.1 M sodium phosphate, 0.15 M NaCl, pH 7.2). The columns were then loaded with 4.5 mL of the protein sample (1.11 mg mL−1) and the slurry was mixed for 3–4 h at 4 °C. The samples were then drained and the beads were equilibrated in 3 mL coupling buffer before adding 1 mL 0.1 M sodium cyanoborohydride in coupling buffer and mixing overnight at 4 °C. The reaction buffer was drained and the beads were equilibrated with 2 mL quenching buffer (1 M Tris–HCl, pH 7.4). Following the equilibration, 1 mL 0.1 M sodium cyanoborohydride in quenching buffer was added to the beads and the slurry was mixed for a further 90 min. To complete the immobilization, the beads were washed with 6 mL washing buffer and stored in 500 μL immobilization storage buffer.
The protein concentration on the beads was calculated by subtracting the mass of unbound protein remaining after the coupling step and the mass of protein lost during the washing steps from the initial mass of protein, and dividing the result by the volume of the bead slurry after immobilization. Protein concentrations for the immobilization experiments were determined using the bicinchoninic acid assay (Pierce™ BCA Protein Assay Kit, Thermo Fisher Scientific, Darmstadt, Germany) according to the manufacturer's protocol. All samples and standards were measured in duplicate or triplicate.
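The mass-balance calculation described above amounts to the short sketch below; the individual masses are hypothetical, chosen only so that the resulting binding yield matches the ~33% reported in the Results.

```python
# Sketch of the bead-coupling mass balance; the masses are hypothetical.
def bead_coupling(initial_mg: float, unbound_mg: float, washed_mg: float,
                  slurry_volume_ml: float) -> tuple[float, float]:
    """Return (bound protein concentration in mg/mL of slurry, binding yield in %)."""
    bound_mg = initial_mg - unbound_mg - washed_mg
    return bound_mg / slurry_volume_ml, bound_mg / initial_mg * 100.0

conc, yield_pct = bead_coupling(initial_mg=5.0, unbound_mg=2.9, washed_mg=0.45,
                                slurry_volume_ml=0.5)
print(f"{conc:.2f} mg/mL on beads, binding yield {yield_pct:.0f}%")
```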
The reusability of the immobilized laccase preparations was assessed in five sequential activity assays against DMP. Immediately prior to the reusability assay, the enzyme samples were mixed vigorously. After each activity assay, beads and CLEAs were collected by either gravity or centrifugation (15,000×g, 5 min, 4 °C) and washed three times. Temperature stability and pH optima were determined as described above. The diagrams for the characterization of immobilized ScLac preparations were based on relative activities calculated by assigning the highest value in each dataset as 100% (except for the reusability assay). For the reusability assay, the first measurement in each dataset was set to 100%.
Heterologous expression of S. cyaneus CECT 3335 laccase in E. coli and purification by IMAC
Streptomyces cyaneus CECT 3335 laccase with a C-terminal His6 tag was expressed successfully in E. coli. After purification by single-step IMAC, SDS-PAGE analysis revealed a strong band for the enzyme preparation near the 70 kDa marker, in agreement with the predicted molecular mass of 69.5 kDa (Fig. 1). High yields (up to 104 mg L−1) of purified recombinant ScLac were achieved, suggesting that despite its simplicity E. coli remains a promising host organism for the production of substantial amounts of this enzyme for further biotechnological applications.
SDS-PAGE analysis of purified laccase expressed in E. coli. Lane 2 was loaded with 13 µg of protein and the gel was stained with Coomassie Brilliant Blue G250. 1 Molecular mass marker. 2 IMAC-purified ScLac sample
Analysis of the enzymatic activity and functional properties of recombinant ScLac
The visible spectra of the purified ScLac correlated with the typical spectra of blue laccases. A peak at ~600 nm, which was detected only after incubation with CuSO4, indicated that the T1 copper atom was incorporated into the protein structure (Thurston 1994) (Fig. 2). Activity assays using DMP before and after incubation with CuSO4 confirmed that the laccase was expressed as an apoprotein in E. coli and the addition of copper was necessary for the maturation and activation of the enzyme (Additional file 1: Fig. S1). Zymography assays using ABTS, l-DOPA and caffeic acid as substrates (Additional file 1: Fig. S2) also confirmed laccase activity following incubation with CuSO4.
Visible spectra of ScLac before (dashed line) and after (solid line) incubation with 0.5 mM CuSO4. We used 1 µM of recombinant enzyme to record the visible spectra between 500 and 700 nm
The pH activity profile of the purified laccase was determined using DMP as the substrate with a set of buffers covering a broad pH range (pH 3.5–10.0). The recombinant ScLac reached its maximum activity at pH 5.5 (Fig. 3a), and was active across a broad range of pH values (pH 3.5–8.5). The pH stability of ScLac was investigated by incubating an aliquot of the purified enzyme preparation in the same buffer set used to determine the pH optima for 0, 1, 6 and 24 h (Fig. 4a). ScLac generally lost activity over time, but the decline was more rapid at pH 3.5–5.5 than at pH values >6. Although ScLac had a pH optimum of 5.5, the stability profile suggested that the enzyme is also active at neutral and basic pH values.
Optimum pH (a) and temperature (b) profiles of ScLac. a The activity of the laccase was measured against 20 µM DMP at 30 °C in buffers with various pH values ranging from pH 3.5–10.0. b The activity of the laccase was measured against 20 µM DMP after incubation for 5 min in the optimal buffer system in a temperature range from 25 to 90 °C
The pH (a) and thermal (b) stability profiles of ScLac. a We incubated 0.4 µM ScLac in the optimal pH buffer for 1, 6 and 24 h at 30 °C and the residual activity was measured using 20 µM DMP. b We incubated 0.4 µM ScLac at 30, 60 and 90 °C for 1 h in the optimal pH buffer and the residual activities were tested against 20 µM DMP. Control assays were set up without pH or heat treatment
ScLac showed high activity at elevated temperatures, as previously described for other bacterial laccases and laccase-like multi-copper oxidases (LMCOs) (Ihssen et al. 2015; Koschorreck et al. 2008; Martins et al. 2002; Reiss et al. 2011; Sherif et al. 2013). It also showed a broad optimum temperature range of 30–90 °C (Fig. 3b). The heat stability of the recombinant enzyme was also determined during longer incubation periods. ScLac was incubated at 30, 60 and 90 °C for 1 h and subsequently tested for the remaining activity against DMP (Fig. 4b). The activity of the recombinant laccase increased during the incubation at 30 °C and it retained 50% of its initial activity following incubation at 60 °C but lost most of its activity at 90 °C.
Determination of substrate specificity and steady-state kinetics
The specific activities and kinetic constants were determined using three common laccase substrates (DMP, ABTS and guaiacol) that are also used to test other lignin-degrading enzymes such as peroxidases (Table 1). The kinetic parameters for this particular laccase are presented here for the first time and fit within the range of values reported for other recombinant bacterial laccases (Dwivedi et al. 2011; Ihssen et al. 2015).
Table 1 Specific activities and kinetic constants of ScLac against DMP, ABTS and guaiacol
ScLac was immobilized on agarose beads and by the preparation of CLEAs, the latter mediated by PEG precipitation and glutaraldehyde cross-linking. At the end of the CLEA procedure, 71.5% of the initial activity was recovered. The immobilization of ScLac on agarose beads using a Schiff base reaction achieved 33% immobilization efficiency (the proportion of bound protein). The reusability of the immobilized enzyme preparations was tested by performing five sequential activity assays using DMP as the substrate. The laccase immobilized on agarose beads maintained its activity in all five steps. The activity of the CLEAs declined to ~60% of the initial activity over the five steps, but the heat activation detected in the free enzyme sample was also observed in the CLEA preparation (Fig. 5).
The reusability of immobilized ScLac. The remaining activity of immobilized ScLac on agarose beads (a) or as CLEAs (b) was analyzed in sequential activity assays for five cycles using DMP as the substrate
The stability of the immobilized ScLac was tested after 1 h incubation at 30 °C and compared to the free laccase. The laccase immobilized on agarose beads retained 70% of its initial activity whereas the CLEAs retained 88% of their initial activity (Fig. 6a). The pH optima of the immobilized enzymes were also determined and were found to differ from the free enzyme preparation most likely due to changes in the structural conformation of the enzyme or the microenvironment induced by the immobilization method or the matrix (Bussamara et al. 2012; Guzik et al. 2014; Kumar et al. 2014). The laccase immobilized on agarose beads reached maximum activity at pH 6.5 and the CLEAs reached maximum activity at pH 4.5 (Fig. 6b). Although the immobilized ScLac can be reused at least five times, neither the stability nor the activity of the immobilized enzyme improved compared to the free enzyme.
Characterization of immobilized and free ScLac. a Thermal stability was determined by incubating the free and immobilized enzymes at 30 °C for 1 h and measuring the residual activities. Control assays were set up without heat treatment. b Optimum pH profiles of free ScLac, ScLac immobilized on agarose beads and ScLac CLEAs
Laccases are found in all three domains of life and catalyze biotechnologically significant reactions. The characterization of bacterial laccases has shown that they have unique properties, such as stability at high pH and high temperatures, that give them advantages over laccases from fungi and plants for certain applications. However, the yield of purified bacterial laccases is a limiting factor for biotechnological applications. Here we expressed a laccase from the lignocellulose-mineralizing bacterium S. cyaneus in E. coli for the first time, achieving high yields of the soluble recombinant enzyme (104 mg L−1). The extracellular expression of ScLac in its native host was previously carried out for 10 days in a submerged culture, yielding 8.19 mg of protein from 100 mL of culture supernatant after five purification steps (Arias et al. 2003; Margot et al. 2013). In contrast, the heterologous expression of ScLac in E. coli achieved higher yields in a shorter cultivation time and required only a single affinity chromatography purification step.
Several recombinant bacterial laccases have been expressed in E. coli, but in most cases the yields were poor (Table 2). Whereas low yields in homologous systems may be caused by the need for submerged cultures, heterologous systems face challenges such as codon usage bias, the need for signal peptides, protein accumulation as inclusion bodies, the absence of necessary post-translational modifications, and enzyme inactivation during purification (Brijwani et al. 2010; Piscitelli et al. 2010). The heterologous expression of CotA laccase from Bacillus subtilis in E. coli has been achieved by several groups and has led to detailed characterization studies, structural and functional analysis, and the use of protein evolution to improve the yields of recombinant enzyme (Bento et al. 2005; Brissos et al. 2009; Martins et al. 2002; Osipov et al. 2015). A blue multi-copper oxidase from Marinomonas mediterranea (PpoA) was expressed in E. coli BL21 (DE3) and laccase activity was observed against the substrates l-3,4-dihydroxyphenylalanine (l-DOPA), DMP and syringaldazine (SGZ) in the soluble fractions of the cell extracts (Sanchez-Amat et al. 2001). However, the yield of the recombinant enzyme was not reported and the purification strategy was not described. A laccase-like phenol oxidase from Streptomyces griseus (EpoA) was expressed in E. coli with a C-terminal His6 tag for purification by affinity chromatography and ion exchange chromatography (Endo et al. 2003). A thermostable laccase from Streptomyces lavenduale was expressed in E. coli with a yield of up to 30 mg L−1 and 10 mg of pure protein was isolated after five purification steps (Suzuki et al. 2003).
Table 2 Bacterial laccases, laccase-like phenol oxidases and multi-copper oxidases produced by heterologous expression in E. coli
Other laccases and LMCOs have been expressed in E. coli (Table 2) including the Lbh1 multi-copper oxidase from Bacillus halodurans, which has alkaline laccase activity (Ruijssenaars and Hartmans 2004), a small laccase from Streptomyces coelicolor (SLAC) (Machczynski et al. 2004), the hyperthermophilic Tth laccase from Thermus thermophilus (Miyazaki 2005), a robust metallo-oxidase (McoA) from the hyperthermophilic bacterium Aquifex aeolicus (Fernandes et al. 2007), CotA laccases from Bacillus spp. (Brander et al. 2014; Guan et al. 2014; Koschorreck et al. 2008), and a pH-versatile, salt-resistant laccase from Streptomyces ipomoea (SilA) (Molina-Guijarro et al. 2009). Although the heterologous expression of these bacterial laccases in E. coli made it possible to characterize the recombinant laccases and perform detailed structural analysis, the yields were often low or the proteins formed inclusion bodies, which made purification more laborious. In contrast, we achieved the production of substantial amounts of a recombinant laccase for further applications.
UV/Vis spectra of the recombinant ScLac preparation and subsequent activity assays showed that the T1 copper was not incorporated into the recombinant protein structure during its expression in E. coli. A similar phenomenon was reported for recombinant EpoA, which was only active when expressed in the presence of 10 µM CuSO4. Otherwise, the enzyme accumulated in an inactive monomeric form (Endo et al. 2003). However, the addition of copper to the expression medium might not be advisable because excess copper is toxic and may inhibit cell growth (Bird et al. 2013; Grey and Steck 2001). The incorporation of the T1 copper into ScLac was achieved by incubating the purified enzyme preparation in a buffer system containing 0.5 mM copper, and confirmed by UV–Vis spectra and subsequent activity assays. Unlike T1 copper, the verification of T2 and T3 copper incorporation requires other techniques such as EPR rather than UV–Vis spectra. After incubation with CuSO4, enzymatic activity tests by zymography revealed laccase activity against ABTS, l-DOPA and caffeic acid. Further analysis of specific activities and kinetics showed that the recombinant ScLac was most active against ABTS followed by DMP and guaiacol, similar to other bacterial laccases.
The recombinant ScLac showed broad optimum pH and temperature ranges. ScLac showed its maximum activity at pH 5.5 and remained stable at neutral and basic pH values, in agreement with reports describing other bacterial laccases. This suggests that ScLac might also be well suited to biotechnological applications in which neutral or basic pH values are required over longer periods (Brander et al. 2014; Gunne and Urlacher 2012; Ruijssenaars and Hartmans 2004). Activity at neutral or basic pH values is often observed for bacterial laccases and laccase-like enzymes but less frequently for fungal laccases (Christopher et al. 2014). For example, B. halodurans Lbh1 showed maximum activity against SGZ at pH 7.5–8.0, Streptomyces sviceus Ssl1 showed maximum activity against phenolic substrates such as DMP and guaiacol at pH 9.0 and against SGZ at pH 8.0, and a halotolerant alkaline laccase from Streptomyces psammoticus showed maximum activity at pH 8.5 and retained 97% of its initial activity after 90 min at pH 9.0 (Gunne and Urlacher 2012; Niladevi et al. 2008; Ruijssenaars and Hartmans 2004). High activities in alkaline solutions are ideal for industrial applications such as the bio-bleaching of Kraft pulp during paper production, lignin modification and total biomass degradation (Pometto and Crawford 1986; Ruijssenaars and Hartmans 2004; Si et al. 2015). Laccases often remain active for only a short time at high temperatures (Reiss et al. 2011; Zhang et al. 2013) but industrial applications usually require prolonged reactions. Although ScLac lost most of its activity following incubation at 90 °C, the stability profiles at 30 and 60 °C were promising. A phenomenon known as heat activation, which has already been reported for a Bacillus clausii LMCO expressed in E. coli (Brander et al. 2014), was observed for ScLac after incubation at 30 °C.
The cost of enzyme production is a major factor in the economics of enzyme-based biomass degradation processes and depends on the host cells and the purification strategy (Klein-Marcuschamer et al. 2012). Such costs can be minimized if the enzyme is reused in multiple process cycles by means of immobilization. In this study, ScLac was immobilized using two distinct methods: agarose beads and cross-linked enzyme aggregates (CLEAs). In contrast to reports for several other laccases (Cabana et al. 2007; Sinirlioğlu et al. 2013), the stability of immobilized ScLac was not improved in comparison with the free enzyme. However, the recovery and reusability of the enzyme were demonstrated successfully, suggesting that ScLac could be used for the development of cost-efficient biotechnological processes (Robinson 2015).
ABTS: 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonate)
CLEA: cross-linked enzyme aggregate
DMP: 2,6-dimethoxyphenol
IMAC: immobilized metal affinity chromatography
l-DOPA: l-3,4-dihydroxyphenylalanine
LMCO: laccase-like multi-copper oxidase
LMS: laccase mediator system
MOPS: 3-(N-morpholino)propanesulfonic acid
PBS: phosphate-buffered saline
PEG: polyethylene glycol
PMSF: phenylmethylsulfonyl fluoride
RT: room temperature
ScLac: laccase from Streptomyces cyaneus expressed in E. coli
SDS-PAGE: sodium dodecyl sulfate polyacrylamide gel electrophoresis
SGZ: syringaldazine (4-hydroxy-3,5-dimethoxybenzaldehyde azine)
veratryl alcohol
Arias ME, Arenas M, Rodriguez J, Soliveri J, Ball AS, Hernandez M (2003) Kraft pulp biobleaching and mediated oxidation of a nonphenolic substrate by laccase from Streptomyces cyaneus CECT 3335. Appl Environ Microbiol 69(4):1953–1958
Ausec L, van Elsas JD, Mandic-Mulec I (2011) Two- and three-domain bacterial laccase-like genes are present in drained peat soils. Soil Biol Biochem 43(5):975–983
Bento I, Martins LO, Gato Lopes G, Armenia Carrondo M, Lindley PF (2005) Dioxygen reduction by multi-copper oxidases: a structural perspective. Dalton Trans (21):3507–3513. doi:10.1039/b504806k
Bird LJ, Coleman ML, Newman DK (2013) Iron and copper act synergistically to delay anaerobic growth of bacteria. Appl Environ Microbiol 79(12):3619–3627. doi:10.1128/Aem.03944-12
Brander S, Mikkelsen JD, Kepp KP (2014) Characterization of an alkali- and halide-resistant laccase expressed in E. coli: cotA from Bacillus clausii. PLoS ONE 9(6):e99402. doi:10.1371/journal.pone.0099402
Brijwani K, Rigdon A, Vadlani PV (2010) Fungal laccases: production, function, and applications in food processing. Enzyme Res 2010:149748. doi:10.4061/2010/149748
Brissos V, Pereira L, Munteanu FD, Cavaco-Paulo A, Martins LO (2009) Expression system of CotA-laccase for directed evolution and high-throughput screenings for the oxidation of high-redox potential dyes. Biotechnol J 4(4):558–563. doi:10.1002/biot.200800248
Bugg TD, Ahmad M, Hardiman EM, Singh R (2011) The emerging role for bacteria in lignin degradation and bio-product formation. Curr Opin Biotechnol 22(3):394–400. doi:10.1016/j.copbio.2010.10.009
Bussamara R, Dall'agnol L, Schrank A, Fernandes KF, Vainstein MH (2012) Optimal conditions for continuous immobilization of Pseudozyma hubeiensis (strain HB85A) lipase by adsorption in a packed-bed reactor by response surface methodology. Enzyme Res 2012:329178. doi:10.1155/2012/329178
Cabana H, Jones JP, Agathos SN (2007) Preparation and characterization of cross-linked laccase aggregates and their application to the elimination of endocrine disrupting chemicals. J Biotechnol 132(1):23–31. doi:10.1016/j.jbiotec.2007.07.948
Call HP, Mücke I (1997) History, overview and applications of mediated lignolytic systems, especially laccase-mediator-systems (Lignozym(R)-process). J Biotechnol 53(2–3):163–202. doi:10.1016/S0168-1656(97)01683-0
Canas AI, Camarero S (2010) Laccases and their natural mediators: biotechnological tools for sustainable eco-friendly processes. Biotechnol Adv 28(6):694–705. doi:10.1016/j.biotechadv.2010.05.002
Chandra R, Chowdhary P (2015) Properties of bacterial laccases and their application in bioremediation of industrial wastes. Environ Sci Process Impacts 17(2):326–342. doi:10.1039/c4em00627e
Chen Q, Marshall MN, Geib SM, Tien M, Richard TL (2012) Effects of laccase on lignin depolymerization and enzymatic hydrolysis of ensiled corn stover. Bioresour Technol 117:186–192. doi:10.1016/j.biortech.2012.04.085
Christopher LP, Yao B, Ji Y (2014) Lignin biodegradation with laccase-mediator systems. Front Energy Res 2:12
Claus H (2003) Laccases and their occurrence in prokaryotes. Arch Microbiol 179(3):145–150. doi:10.1007/s00203-002-0510-7
de Marco A, Vigh L, Diamant S, Goloubinoff P (2005) Native folding of aggregation-prone recombinant proteins in Escherichia coli by osmolytes, plasmid- or benzyl alcohol-overexpressed molecular chaperones. Cell Stress Chaperon 10(4):329–339. doi:10.1379/Csc-139r.1
Durao P, Bento I, Fernandes AT, Melo EP, Lindley PF, Martins LO (2006) Perturbations of the T1 copper site in the CotA laccase from Bacillus subtilis: structural, biochemical, enzymatic and stability studies. J Biol Inorg Chem 11(4):514–526. doi:10.1007/s00775-006-0102-0
Durao P, Chen Z, Fernandes AT, Hildebrandt P, Murgida DH, Todorovic S, Pereira MM, Melo EP, Martins LO (2008a) Copper incorporation into recombinant CotA laccase from Bacillus subtilis: characterization of fully copper loaded enzymes. J Biol Inorg Chem 13(2):183–193. doi:10.1007/s00775-007-0312-0
Durao P, Chen ZJ, Silva CS, Soares CM, Pereira MM, Todorovic S, Hildebrandt P, Bento I, Lindley PF, Martins LO (2008b) Proximal mutations at the type 1 copper site of CotA laccase: spectroscopic, redox, kinetic and structural characterization of I494A and L386A mutants. Biochem J 412:339–346. doi:10.1042/Bj20080166
Dwivedi UN, Singh P, Pandey VP, Kumar A (2011) Structure-function relationship among bacterial, fungal and plant laccases. J Mol Catal B Enzym 68(2):117–128. doi:10.1016/j.molcatb.2010.11.002
Endo K, Hayashi Y, Hibi T, Hosono K, Beppu T, Ueda K (2003) Enzymological characterization of EpoA, a laccase-like phenol oxidase produced by Streptomyces griseus. J Biochem 133(5):671–677
Fernandes AT, Soares CM, Pereira MM, Huber R, Grass G, Martins LO (2007) A robust metallo-oxidase from the hyperthermophilic bacterium Aquifex aeolicus. FEBS J 274(11):2683–2694. doi:10.1111/j.1742-4658.2007.05803.x
Givaudan A, Effosse A, Faure D, Potier P, Bouillant ML, Bally R (1993) Polyphenol oxidase in Azospirillum lipoferum isolated from rice rhizosphere—evidence for laccase activity in nonmotile strains of Azospirillum lipoferum. FEMS Microbiol Lett 108(2):205–210. doi:10.1016/0378-1097(93)90586-Q
Grey BN, Steck TR (2001) Concentrations of copper thought to be toxic to Escherichia coli can induce the viable but nonculturable condition. Appl Environ Microbiol 67(11):5325–5327. doi:10.1128/Aem.67.11.5325-5327.2001
Guan ZB, Zhang N, Song CM, Zhou W, Zhou LX, Zhao H, Xu CW, Cai YJ, Liao XR (2014) Molecular cloning, characterization, and dye-decolorizing ability of a temperature- and pH-stable laccase from Bacillus subtilis X1. Appl Biochem Biotechnol 172(3):1147–1157. doi:10.1007/s12010-013-0614-3
Gunne M, Urlacher VB (2012) Characterization of the alkaline laccase Ssl1 from Streptomyces sviceus with unusual properties discovered by genome mining. PLoS ONE 7(12):e52360. doi:10.1371/journal.pone.0052360
Gunne M, Al-Sultani D, Urlacher VB (2013) Enhancement of copper content and specific activity of CotA laccase from Bacillus licheniformis by coexpression with CopZ copper chaperone in E. coli. J Biotechnol 168(3):252–255. doi:10.1016/j.jbiotec.2013.06.011
Gutierrez A, Rencoret J, Cadena EM, Rico A, Barth D, del Rio JC, Martinez AT (2012) Demonstration of laccase-based removal of lignin from wood and non-wood plant feedstocks. Bioresour Technol 119:114–122. doi:10.1016/j.biortech.2012.05.112
Guzik U, Hupert-Kocurek K, Wojcieszynska D (2014) Immobilization as a strategy for improving enzyme properties-application to oxidoreductases. Molecules 19(7):8995–9018. doi:10.3390/molecules19078995
Ihssen J, Reiss R, Luchsinger R, Thöny-Meyer L, Richter M (2015) Biochemical properties and yields of diverse bacterial laccase-like multicopper oxidases expressed in Escherichia coli. Sci Rep 5:10465. doi:10.1038/srep10465
Klein-Marcuschamer D, Oleskowicz-Popiel P, Simmons BA, Blanch HW (2012) The challenge of enzyme cost in the production of lignocellulosic biofuels. Biotechnol Bioeng 109(4):1083–1087. doi:10.1002/bit.24370
Koschorreck K, Richter SM, Ene AB, Roduner E, Schmid RD, Urlacher VB (2008) Cloning and characterization of a new laccase from Bacillus licheniformis catalyzing dimerization of phenolic acids. Appl Microbiol Biotechnol 79(2):217–224. doi:10.1007/s00253-008-1417-2
Kumar VV, Sivanesan S, Cabana H (2014) Magnetic cross-linked laccase aggregates-bioremediation tool for decolorization of distinct classes of recalcitrant dyes. Sci Total Environ 487:830–839. doi:10.1016/j.scitotenv.2014.04.009
SE designed and carried out the experiments, analyzed the results and wrote the manuscript. CL assisted in the experimental design and the interpretation of data, and reviewed the manuscript. RF and UC coordinated the study and reviewed the manuscript. All authors read and approved the final manuscript.
The authors are grateful to Dr. Richard Twyman for his assistance with editing the manuscript.
The data supporting the findings of this study are included in the main manuscript file and in the additional files.
We acknowledge financial support from the European Commission via the SuBiCat Initial Training Network, Call FP7-PEOPLE-2013-ITN (PITN-GA-2013-607044), and the Cluster of Excellence 'Tailor-made Fuels from Biomass' (EXC 236), which is funded through the Excellence Initiative by the German federal and state governments to promote science and research at German universities.
Institute for Molecular Biotechnology (Biology VII), RWTH Aachen University, Worringerweg 1, 52074, Aachen, Germany
Selin Ece, Camilla Lambertz, Rainer Fischer & Ulrich Commandeur
Fraunhofer Institute for Molecular Biology and Applied Ecology (IME), Forckenbeckstrasse 6, 52074, Aachen, Germany
Rainer Fischer
Correspondence to Ulrich Commandeur.
Ece, S., Lambertz, C., Fischer, R. et al. Heterologous expression of a Streptomyces cyaneus laccase for biomass modification applications. AMB Expr 7, 86 (2017). https://doi.org/10.1186/s13568-017-0387-0
Keywords: Laccase, Streptomyces cyaneus, Heterologous expression, Enzyme immobilization, Lignocellulose modification
For any non-negative super-martingale, the probability that its maximum \max_t X_t ever exceeds a given value c is at most \textrm{E}[X_0]/c.
The Markov bound plays a fundamental role in the following sense: many probabilistic proofs, including, for example, the proof of the Chernoff bound, rely ultimately on the Markov bound. This note discusses a bound that plays a role similar to the Markov bound in a particularly important scenario: when analyzing the maximum value achieved by a given non-negative super-martingale.
Here's a simple example. Alice goes to the casino with $1. At the casino, she plays the following game repeatedly: she bets half her current balance on a fair coin flip. (For example, on the first flip, she bets 50 cents, so she wins 50 cents with probability 1/2 and loses 50 cents with probability 1/2.) Will Alice's winnings ever reach $10 or more? The bound here says this happens with probability at most 1/10.
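To see the bound in action, here is a quick Monte Carlo check of the example (a sketch of my own, not part of the original note; the cap of 200 flips is an assumption that truncates "ever", which can only make the empirical estimate smaller):

```python
import random

def alice_ever_reaches(c, max_flips=200):
    """One play of Alice's game: start with $1, repeatedly bet half the
    current balance on a fair coin.  Return True if the balance ever
    reaches c within max_flips flips (truncation is an approximation)."""
    balance = 1.0
    for _ in range(max_flips):
        if balance >= c:
            return True
        bet = balance / 2
        balance += bet if random.random() < 0.5 else -bet
    return balance >= c

def estimate(c, trials=100_000):
    return sum(alice_ever_reaches(c) for _ in range(trials)) / trials

for c in (2, 5, 10):
    print(f"c = {c:2d}: empirical Pr[max >= c] = {estimate(c):.4f}, bound E[X_0]/c = {1/c:.4f}")
```

The balance is a non-negative martingale (each flip leaves the expected balance unchanged), so the lemma below applies with E[X_0] = 1.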
Markov bound
Markov bound for super-martingale maxima
Let X_0,X_1,X_2,\ldots be a non-negative super-martingale — a sequence of non-negative random variables that is non-increasing in expectation: \textrm{E}[X_t \,|\, X_{t-1}] \le X_{t-1}, and X_t \ge 0 for each t. The sequence may be finite or infinite.
Consider the event that \max_t X_t \ge c, for some given c. To bound the probability of this event, if we have a bound on the expectation of \max_t X_t we can use the Markov bound. For example, in the ideal case, if it happens that \textrm{E}[\max_t X_t] is at most \textrm{E}[X_0], then the Markov bound implies that the event in question happens with probability at most \textrm{E}[X_0]/c. Although \textrm{E}[\max_t X_t] can be much larger than \textrm{E}[X_0], the desired bound holds in any case:
Lemma (Markov for super-martingale maxima).
Fix any c\ge 0.
(a) \Pr[\max_t X_t \ge c] \le \textrm{E}[X_0]/c.
(b) \Pr[\max_t X_t > c] < \textrm{E}[X_0]/c.
In short, this bound substitutes for the Markov bound to give us a natural bound on the probability of the event \max_t X_t \ge c. Note that in most applications X_0 will be a fixed value independent of the outcome.
Proof idea
For our purposes, knowing how to use this bound is more important than knowing how to prove it. Here is the proof just for the sake of completeness.
To get the intuition, consider the following seemingly weaker bound. If T is any stopping time with finite expectation, then by Wald's equation \textrm{E}[X_T] is at most \textrm{E}[X_0], so by Markov \Pr[X_T \ge c] is at most \textrm{E}[X_0]/c. That is, the desired bound holds for the single value X_T.
The proof of the lemma uses this argument, with T specifically defined to be the first time such that X_T \ge c (if any, else \infty). (In the example, this is analogous to Alice quitting as soon as her winnings reach $10.) This T is indeed a stopping time, and, crucially, the event \max_t X_t \ge c occurs only if X_T \ge c. So the bound on \Pr[X_T \ge c] from the previous paragraph implies the result. A technical obstacle is that T might not have finite expectation, but this is easily overcome via a limit argument.
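As an illustration of this proof idea (again a sketch of my own, with c = 10 and a horizon of n = 100 flips chosen arbitrarily), one can estimate E[X_{T_n}] for the stopped version of Alice's game and check that it is at most E[X_0] = 1, so that Markov applied to X_{T_n} gives the claimed bound:

```python
import random

def stopped_value(c=10.0, n=100):
    """Return X_{T_n} for Alice's game, where T is the first time the
    balance reaches c and T_n = min(n, T)."""
    x = 1.0
    for _ in range(n):
        if x >= c:          # stopped: T has already occurred
            break
        bet = x / 2
        x += bet if random.random() < 0.5 else -bet
    return x

trials = 200_000
samples = [stopped_value() for _ in range(trials)]
mean = sum(samples) / trials
hit = sum(s >= 10.0 for s in samples) / trials
print(f"E[X_(T_n)] ~ {mean:.3f}  (at most E[X_0] = 1)")
print(f"Pr[X_(T_n) >= 10] ~ {hit:.4f}  (Markov: at most {mean/10:.4f})")
```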
Proof.
Assume without loss of generality that X_0 is a fixed value (non-random). (If not, then condition first on any particular value of X_0.)
Part (a). Let T = \min\{t \,|\, X_t \ge c\}. The probability in question is \Pr[T < \infty], which equals \lim_{n\rightarrow\infty} \Pr[T \le n]. To prove (a), we prove that \forall n.\, \Pr[T \le n] \le X_0/c.
Define T_n = \min(n, T), so that T \le n \,\Leftrightarrow\, X_{T_n} \ge c. Then T_n is a stopping time with finite expectation, and, for t \le T_n, each difference X_t - X_{t-1} is non-positive in expectation and uniformly bounded below (by 0 - c = -c), so, by Wald's, \textrm{E}[X_{T_n}] \le X_0.
Finally, \Pr[T \le n] = \Pr[X_{T_n} \ge c], which by Markov is at most \textrm{E}[X_{T_n}]/c \le X_0/c.
Part (b). Let T = \min\{t \,|\, X_t > c\} and T_n = \min(n, T). The probability in question is \Pr[T < \infty], which equals \lim_{n\rightarrow\infty} \Pr[T \le n], which equals
\begin{align*} \lim_{n\rightarrow\infty} \Pr[X_{T_n} > c] & ~\le~ \lim_{n\rightarrow\infty} \frac{\textrm{E}[X_{T_n}]}{\textrm{E}[X_{T_n} \,|\, X_{T_n} > c]} \\ & ~\le~ \frac{X_0}{\lim_{n\rightarrow\infty} \textrm{E}[X_{T_n} \,|\, X_{T_n} > c]} \\ & ~=~ \frac{X_0}{\textrm{E}[X_T \,|\, T < \infty]}. \end{align*}
(The first inequality follows from \textrm{E}[X_{T_n}] \ge \Pr[X_{T_n} > c]\, \textrm{E}[X_{T_n} \,|\, X_{T_n} > c], using the non-negativity of X_{T_n}. The second follows, using Wald's as in the proof of part (a), from \textrm{E}[X_{T_n}] \le X_0. The third follows by calculation using that \lim_{n\rightarrow\infty} \Pr[T \ge n \,|\, T < \infty] = 0.)
To conclude, note (using \Pr[T < \infty] > 0, for otherwise we are done) that by definition T < \infty \Rightarrow X_T > c, so the expectation in the denominator on the right-hand side is a weighted average of values each of which exceeds c, and so must itself exceed c (that is, \textrm{E}[X_T \,|\, T < \infty] = \textrm{E}[X_T \,|\, X_T > c] > c).
Pessimistic estimator
(a) Given the value X_t at the current time t, as long as the bad event has not yet happened (that is, as long as \forall s < t.~ X_s < c), the value \phi_t = X_t/c is a pessimistic estimator for the conditional probability of the event \exists t.~ X_t \ge c. The value is initially \textrm{E}[X_0]/c, it is non-increasing in expectation with each step, and, as long as it remains less than 1, the event in question doesn't happen.
(b) The same \phi_t is a pessimistic estimator for the event in part (b): the value is initially \textrm{E}[X_0]/c, it is non-increasing in expectation with each step, and, as long as it remains less than or equal to 1, the event in question doesn't happen.
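Pessimistic estimators of this kind are typically used to derandomize a random process: at each step, make the choice that does not increase \phi_t. Below is a minimal sketch of that pattern on a made-up toy process (the 0.6/1.3 step factors and the function names are illustrative assumptions, not anything from the note):

```python
def derandomize(x0, c, options, steps):
    """Greedy method-of-conditional-probabilities loop with phi_t = X_t / c.

    options(x) lists the possible next values from state x; because X_t is a
    super-martingale under a uniformly random choice, their average is <= x,
    so the smallest option is <= x and picking it never increases phi.
    If phi_0 < 1, the bad event X_t >= c therefore never happens."""
    x = x0
    assert x / c < 1, "need phi_0 = X_0/c < 1 for the guarantee"
    for _ in range(steps):
        x = min(options(x))   # greedy choice keeps phi non-increasing
        assert x / c < 1      # invariant: the bad event has not happened
    return x

# Toy process: each step multiplies the state by 0.6 or 1.3 (average 0.95 <= 1).
final = derandomize(x0=1.0, c=10.0, options=lambda x: (0.6 * x, 1.3 * x), steps=50)
print(f"final state {final:.3g}; X_t stayed below c throughout")
```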
Results for 'automaticity'
1000+ found
Automatically Minded.Ellen Fridland - 2017 - Synthese 194 (11).details
It is not rare in philosophy and psychology to see theorists fall into dichotomous thinking about mental phenomena. On one side of the dichotomy there are processes that I will label "unintelligent." These processes are thought to be unconscious, implicit, automatic, unintentional, involuntary, procedural, and non-cognitive. On the other side, there are "intelligent" processes that are conscious, explicit, controlled, intentional, voluntary, declarative, and cognitive. Often, if a process or behavior is characterized by one of the features from either of the (...) above lists, the process or behavior is classified as falling under the category to which the feature belongs. For example, if a process is implicit this is usually considered sufficient for classifying it as "unintelligent" and for assuming that the remaining features that fall under the "unintelligent" grouping will apply to it as well. Accordingly, if a process or behavior is automatic, philosophers often consider it to be unintelligent. It is my goal in this paper to challenge the conceptual slip from "automatic" to "unintelligent". I will argue that there are a whole range of properties highlighted by the existing psychological literature that make automaticity a much more complex phenomenon than is usually appreciated. I will then go on to discuss two further important relationships between automatic processes and controlled processes that arise when we think about automatic processes in the context of skilled behavior. These interactions should add to our resistance to classifying automaticity as unintelligent or mindless. In Sect. 1, I present a few representative cases of philosophers classifying automatic processes and behaviors as mindless or unintelligent. In Sect. 2, I review trends in the psychology of automaticity in order highlight a complex set of features that are characteristic, though not definitive, of automatic processes and behaviors. In Sect. 3, I argue that at least some automatic processes are likely cognitively penetrable. In Sect. 4, I argue that the structure of skilled automatic processes is shaped diachronically by practice, training and learning. Taken together, these considerations should dislodge the temptation to equate "automatic" with "unintelligent". (shrink)
Mental States and Processes in Philosophy of Mind
Skills in Philosophy of Action
Automatic Actions: Challenging Causalism.Ezio Di Nucci - 2011 - Rationality Markets and Morals 2 (1):179-200.details
I argue that so-called automatic actions – routine performances that we successfully and effortlessly complete without thinking such as turning a door handle, downshifting to 4th gear, or lighting up a cigarette – pose a challenge to causalism, because they do not appear to be preceded by the psychological states which, according to the causal theory of action, are necessary for intentional action. I argue that causalism cannot prove that agents are simply unaware of the relevant psychological states when they (...) act automatically, because these content-specific psychological states aren't always necessary to make coherent rational sense of the agent's behaviour. I then dispute other possible grounds for the attribution of these psychological states, such as agents' own self-attributions. In the final section I introduce an alternative to causalism, building on Frankfurt's concept of guidance. (shrink)
Causal Theory of Action in Philosophy of Action
Habits in Philosophy of Action
Intentional Action in Philosophy of Action
Philosophy of Psychology, Misc in Philosophy of Cognitive Science
Reasons and Causes in Philosophy of Action
Beyond Automaticity: The Psychological Complexity of Skill.Elisabeth Pacherie & Myrto Mylopoulos - 2020 - Topoi 40 (3):649-662.details
The objective of this paper is to characterize the rich interplay between automatic and cognitive control processes that we propose is the hallmark of skill, in contrast to habit, and what accounts for its flexibility. We argue that this interplay isn't entirely hierarchical and static, but rather heterarchical and dynamic. We further argue that it crucially depends on the acquisition of detailed and well-structured action representations and internal models, as well as the concomitant development of metacontrol processes that can be (...) used to shape and balance it. (shrink)
Are Automatic Conceptual Cores the Gold Standard of Semantic Processing? The Context‐Dependence of Spatial Meaning in Grounded Congruency Effects.Lauren A. M. Lebois, Christine D. Wilson-Mendenhall & Lawrence W. Barsalou - 2015 - Cognitive Science 39 (8):1764-1801.details
According to grounded cognition, words whose semantics contain sensory-motor features activate sensory-motor simulations, which, in turn, interact with spatial responses to produce grounded congruency effects. Growing evidence shows these congruency effects do not always occur, suggesting instead that the grounded features in a word's meaning do not become active automatically across contexts. Researchers sometimes use this as evidence that concepts are not grounded, further concluding that grounded information is peripheral to the amodal cores of concepts. We first review broad evidence (...) that words do not have conceptual cores, and that even the most salient features in a word's meaning are not activated automatically. Then, in three experiments, we provide further evidence that grounded congruency effects rely dynamically on context, with the central grounded features in a concept becoming active only when the current context makes them salient. Even when grounded features are central to a word's meaning, their activation depends on task conditions. (shrink)
Aspects of Consciousness in Philosophy of Mind
Philosophy of Consciousness in Philosophy of Mind
Ethical Automaticity.Michael Brownstein & Alex Madva - 2012 - Philosophy of the Social Sciences 42 (1):68-98.details
Social psychologists tell us that much of human behavior is automatic. It is natural to think that automatic behavioral dispositions are ethically desirable if and only if they are suitably governed by an agent's reflective judgments. However, we identify a class of automatic dispositions that make normatively self-standing contributions to praiseworthy action and a well-lived life, independently of, or even in spite of, an agent's reflective judgments about what to do. We argue that the fundamental questions for the "ethics of (...) automaticity" are what automatic dispositions are (and are not) good for and when they can (and cannot) be trusted. (shrink)
Philosophy of Social Science, Miscellaneous in Philosophy of Social Science
Racism and Psychology in Philosophy of Gender, Race, and Sexuality
Automaticity in Virtuous Action.Clea F. Rees & Jonathan Webber - 2014 - In Nancy E. Snow & Franco V. Trivigno (eds.), The Philosophy and Psychology of Character and Happiness. Routledge. pp. 75-90.details
Automaticity is rapid and effortless cognition that operates without conscious awareness or deliberative control. An action is virtuous to the degree that it meets the requirements of the ethical virtues in the circumstances. What contribution does automaticity make to the ethical virtue of an action? How far is the automaticity discussed by virtue ethicists consonant with, or even supported by, the findings of empirical psychology? We argue that the automaticity of virtuous action is automaticity not (...) of skill, but of motivation. Automatic motivations that contribute to the virtuousness of an action include not only those that initiate action, but also those that modify action and those that initiate and shape deliberation. We then argue that both goal psychology and attitude psychology can provide the cognitive architecture of this automatic motivation. Since goals are essentially directed towards the agent's own action whereas attitudes are not, we argue that goals might underpin some virtues while attitudes underpin others. We conclude that consideration of the cognitive architecture of ethical virtue ought to engage with both areas of empirical psychology and should be careful to distinguish among ethical virtues. (shrink)
Action Theory, Miscellaneous in Philosophy of Action
Moral Motivation in Meta-Ethics
Personality in Normative Ethics
Skepticism about Character in Normative Ethics
Virtue Ethics and Practical Wisdom in Normative Ethics
Virtues and Vices in Normative Ethics
Automatic Constructive Appraisal as a Candidate Cause of Emotion.Agnes Moors - 2010 - Emotion Review 2 (2):139-156.details
Critics of appraisal theory have difficulty accepting appraisal (with its constructive flavor) as an automatic process, and hence as a potential cause of most emotions. In response, some appraisal theorists have argued that appraisal was never meant as a causal process but as a constituent of emotional experience. Others have argued that appraisal is a causal process, but that it can be either rule-based or associative, and that the associative variant can be automatic. This article first proposes empirically investigating whether (...) rule-based appraisal can also be automatic and then proposes investigating the automatic nature of constructive (instead of rule-based) appraisal because the distinction between rule-based and associative is problematic. Finally, it discusses experiments that support the view that constructive appraisal can be automatic. (shrink)
Emotions in Philosophy of Mind
Automatic and Effortful Processes in Memory.Lynn Hasher & Rose T. Zacks - 1979 - Journal of Experimental Psychology: General 108 (3):356-388.details
Controlled and Automatic Human Information Processing: Perceptual Learning, Automatic Attending, and a General Theory.Richard M. Shiffrin & Walter E. Schneider - 1977 - Psychological Review 84 (2):128-90.details
Tested the 2-process theory of detection, search, and attention presented by the current authors in a series of experiments. The studies demonstrate the qualitative difference between 2 modes of information processing: automatic detection and controlled search; trace the course of the learning of automatic detection, of categories, and of automatic-attention responses; and show the dependence of automatic detection on attending responses and demonstrate how such responses interrupt controlled processing and interfere with the focusing of attention. The learning of categories is (...) shown to improve controlled search performance. A general framework for human information processing is proposed. The framework emphasizes the roles of automatic and controlled processing. The theory is compared to and contrasted with extant models of search and attention. (shrink)
Control and Consciousness in Philosophy of Cognitive Science
Controlled & Automatic Processing: Behavior, Theory, and Biological Mechanisms.Walter Schneider & Jason M. Chein - 2003 - Cognitive Science 27 (3):525-559.details
The Automatic and the Ballistic: Modularity Beyond Perceptual Processes.Eric Mandelbaum - 2015 - Philosophical Psychology 28 (8):1147-1156.details
Perceptual processes, in particular modular processes, have long been understood as being mandatory. But exactly what mandatoriness amounts to is left to intuition. This paper identifies a crucial ambiguity in the notion of mandatoriness. Discussions of mandatory processes have run together notions of automaticity and ballisticity. Teasing apart these notions creates an important tool for the modularist's toolbox. Different putatively modular processes appear to differ in their kinds of mandatoriness. Separating out the automatic from the ballistic can help the (...) modularist diagnose and explain away some putative counterexamples to multimodal and central modules, thereby helping us to better evaluate the evidentiary status of modularity theory. (shrink)
Inference in Epistemology
Knowledge of Language in Philosophy of Language
Modularity and Cognitive Penetrability in Philosophy of Mind
Modularity in Cognitive Science in Philosophy of Cognitive Science
Visual Pathways in Philosophy of Cognitive Science
Are Automatic Imitation and Spatial Compatibility Mediated by Different Processes?Richard P. Cooper, Caroline Catmur & Cecilia Heyes - 2013 - Cognitive Science 37 (4):605-630.details
Automatic imitation or "imitative compatibility" is thought to be mediated by the mirror neuron system and to be a laboratory model of the motor mimicry that occurs spontaneously in naturalistic social interaction. Imitative compatibility and spatial compatibility effects are known to depend on different stimulus dimensions—body movement topography and relative spatial position. However, it is not yet clear whether these two types of stimulus–response compatibility effect are mediated by the same or different cognitive processes. We present an interactive activation model (...) of imitative and spatial compatibility, based on a dual-route architecture, which substantiates the view they are mediated by processes of the same kind. The model, which is in many ways a standard application of the interactive activation approach, simulates all key results of a recent study by Catmur and Heyes (2011). Specifically, it captures the difference in the relative size of imitative and spatial compatibility effects; the lack of interaction when the imperative and irrelevant stimuli are presented simultaneously; the relative speed of responses in a quintile analysis when the imperative and irrelevant stimuli are presented simultaneously; and the different time courses of the compatibility effects when the imperative and irrelevant stimuli are presented asynchronously. (shrink)
Automatically Classifying Case Texts and Predicting Outcomes.Kevin D. Ashley & Stefanie Brüninghaus - 2009 - Artificial Intelligence and Law 17 (2):125-165.details
Work on a computer program called SMILE + IBP (SMart Index Learner Plus Issue-Based Prediction) bridges case-based reasoning and extracting information from texts. The program addresses a technologically challenging task that is also very relevant from a legal viewpoint: to extract information from textual descriptions of the facts of decided cases and apply that information to predict the outcomes of new cases. The program attempts to automatically classify textual descriptions of the facts of legal problems in terms of Factors, a (...) set of classification concepts that capture stereotypical fact patterns that effect the strength of a legal claim, here trade secret misappropriation. Using these classifications, the program can evaluate and explain predictions about a problem's outcome given a database of previously classified cases. This paper provides an extended example illustrating both functions, prediction by IBP and text classification by SMILE, and reports empirical evaluations of each. While IBP's results are quite strong, and SMILE's much weaker, SMILE + IBP still has some success predicting and explaining the outcomes of case scenarios input as texts. It marks the first time to our knowledge that a program can reason automatically about legal case texts. (shrink)
Formal Legal Reasoning in Philosophy of Law
Automatic Mechanisms for Social Attention Are Culturally Penetrable.Adam S. Cohen, Joni Y. Sasaki, Tamsin C. German & Heejung S. Kim - 2017 - Cognitive Science 41 (1):242-258.details
Are mechanisms for social attention influenced by culture? Evidence that social attention is triggered automatically by bottom-up gaze cues and is uninfluenced by top-down verbal instructions may suggest it operates in the same way everywhere. Yet considerations from evolutionary and cultural psychology suggest that specific aspects of one's cultural background may have consequence for the way mechanisms for social attention develop and operate. In more interdependent cultures, the scope of social attention may be broader, focusing on more individuals and relations (...) between those individuals. We administered a multi-gaze cueing task requiring participants to fixate a foreground face flanked by background faces and measured shifts in attention using eye tracking. For European Americans, gaze cueing did not depend on the direction of background gaze cues, suggesting foreground gaze alone drives automatic attention shifting; for East Asians, cueing patterns differed depending on whether the foreground cue matched or mismatched background cues, suggesting foreground and background gaze information were integrated. These results demonstrate that cultural background influences the social attention system by shifting it into a narrow or broad mode of operation and, importantly, provides evidence challenging the assumption that mechanisms underlying automatic social attention are necessarily rigid and impenetrable to culture. (shrink)
Automaticity in Social-Cognitive Processes.John A. Bargh, Kay L. Schwader, Sarah E. Hailey, Rebecca L. Dyer & Erica J. Boothby - 2012 - Trends in Cognitive Sciences 16 (12):593-605.details
Philosophy of Psychology in Philosophy of Cognitive Science
Automatic and Polynomial-Time Algebraic Structures.Nikolay Bazhenov, Matthew Harrison-Trainor, Iskander Kalimullin, Alexander Melnikov & Keng Meng Ng - 2019 - Journal of Symbolic Logic 84 (4):1630-1669.details
A structure is automatic if its domain, functions, and relations are all regular languages. Using the fact that every automatic structure is decidable, in the literature many decision problems have been solved by giving an automatic presentation of a particular structure. Khoussainov and Nerode asked whether there is some way to tell whether a structure has, or does not have, an automatic presentation. We answer this question by showing that the set of Turing machines that represent automata-presentable structures is ${\rm{\Sigma (...) }}_1^1 $-complete. We also use similar methods to show that there is no reasonable characterisation of the structures with a polynomial-time presentation in the sense of Nerode and Remmel. (shrink)
Automatically Interpreting All Faults, Unconformities, and Horizons From 3D Seismic Images.Xinming Wu & Dave Hale - 2016 - Interpretation: SEG 4 (2):T227-T237.details
Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although others automate the extraction of each type of these surfaces to some extent, it is difficult to automatically interpret a seismic image with all three types of surfaces because they could intersect with each other. For example, horizons can be especially difficult to extract from a seismic image complicated by faults and unconformities because a horizon surface can be dislocated (...) at faults and terminated at unconformities. We have proposed a processing procedure to automatically extract all the faults, unconformities, and horizon surfaces from a 3D seismic image. In our processing, we first extracted fault surfaces, estimated fault slips, and undid the faulting in the seismic image. Then, we extracted unconformities from the unfaulted image with continuous reflectors across faults. Finally, we used the unconformities as constraints for image flattening and horizon extraction. Most of the processing was image processing or array processing and was achieved by efficiently solving partial differential equations. We used a 3D real example with faults and unconformities to demonstrate the entire image processing. (shrink)
How Automatic Are Crossmodal Correspondences?Charles Spence & Ophelia Deroy - 2013 - Consciousness and Cognition 22 (1):245-260.details
The last couple of years have seen a rapid growth of interest in the study of crossmodal correspondences – the tendency for our brains to preferentially associate certain features or dimensions of stimuli across the senses. By now, robust empirical evidence supports the existence of numerous crossmodal correspondences, affecting people's performance across a wide range of psychological tasks – in everything from the redundant target effect paradigm through to studies of the Implicit Association Test, and from speeded discrimination/classification tasks through (...) to unspeeded spatial localisation and temporal order judgment tasks. However, one question that has yet to receive a satisfactory answer is whether crossmodal correspondences automatically affect people's performance , as opposed to reflecting more of a strategic, or top-down, phenomenon. Here, we review the latest research on the topic of crossmodal correspondences to have addressed this issue. We argue that answering the question will require researchers to be more precise in terms of defining what exactly automaticity entails. Furthermore, one's answer to the automaticity question may also hinge on the answer to a second question: Namely, whether crossmodal correspondences are all 'of a kind', or whether instead there may be several different kinds of crossmodal mapping . Different answers to the automaticity question may then be revealed depending on the type of correspondence under consideration. We make a number of suggestions for future research that might help to determine just how automatic crossmodal correspondences really are. (shrink)
Science of Consciousness in Philosophy of Cognitive Science
Automatically Elicited Fear: Conditioned Skin Conductance Responses to Masked Facial Expressions.Francisco Esteves, Ulf Dimberg & Arne öhman - 1994 - Cognition and Emotion 8 (5):393-413.details
Emotion and Consciousness in Psychology in Philosophy of Cognitive Science
The Automaticity of Visual Statistical Learning.Nicholas B. Turk-Browne, Justin A. Jungé & Brian J. Scholl - 2005 - Journal of Experimental Psychology: General 134 (4):552-564.details
Automatic Actions: Agency, Intentionality, and Responsibility.Christoph Lumer - 2017 - Philosophical Psychology 30 (5):616-644.details
This article discusses a challenge to the traditional intentional-causalist conceptions of action and intentionality as well as to our everyday and legal conceptions of responsibility, namely the psychological discovery that the greatest part of our alleged actions are performed automatically, that is unconsciously and without a proximal intention causing and sustaining them. The main part of the article scrutinizes several mechanisms of automatic behavior, how they work, and whether the resulting behavior is an action. These mechanisms include actions caused by (...) distal implementation intentions, four types of habit and habitualization, mimicry, and semantically induced automatic behavior. According to the intentional-causalist criterion, the automatic behaviors resulting from all but one of these mechanisms turn out to be actions and to be intentional; and even the behavior resulting from the remaining mechanism is something we can be responsible for. Hence, the challenge, seen from close up, does not really call the traditional conception of action and intentionality into question. (shrink)
Automatic Preference for White Americans: Eliminating the Familiarity Explanation.Anthony Greenwald - manuscriptdetails
Using the Implicit Association Test (IAT), recent experiments have demonstrated a strong and automatic positive evaluation of White Americans and a relatively negative evaluation of African Americans. Interpretations of this finding as revealing pro-White attitudes rest critically on tests of alternative interpretations, the most obvious one being perceivers' greater familiarity with stimuli representing White Americans. The reported experiment demonstrated that positive attributes were more strongly associated with White than Black Americans even when (a) pictures of equally unfamiliar Black and White (...) individuals were used as stimuli and (b) differences in stimulus familiarity were statistically controlled. This experiment indicates that automatic race associations captured by the IAT are not compromised by stimulus familiarity, which in turn strengthens the conclusion that the IAT measures automatic evaluative associations. © 2000 Academic Press.. (shrink)
American Philosophy in Philosophy of the Americas
Minorities in Social and Political Philosophy
Whiteness in Philosophy of Gender, Race, and Sexuality
Automatic Implementation of Fuzzy Reasoning Spiking Neural P Systems for Diagnosing Faults in Complex Power Systems.Haina Rong, Kang Yi, Gexiang Zhang, Jianping Dong, Prithwineel Paul & Zhiwei Huang - 2019 - Complexity 2019:1-16.details
As an important variant of membrane computing models, fuzzy reasoning spiking neural P systems were introduced to build a link between P systems and fault diagnosis applications. An FRSN P system offers an intuitive illustration based on a strictly mathematical expression, a good fault-tolerant capacity, a good description for the relationships between protective devices and faults, and an understandable diagnosis model-building process. However, the implementation of FRSN P systems is still at a manual process, which is a time-consuming and hard (...) labor work, especially impossible to perform on large-scale complex power systems. This manual process seriously limits the use of FRSN P systems to diagnose faults in large-scale complex power systems and has always been a challenging and ongoing task for many years. In this work we develop an automatic implementation method for automatically fulfilling the hard task, named membrane computing fault diagnosis method. This is a very significant attempt in the development of FRSN P systems and even of the membrane computing applications. MCFD is realized by automating input and output, and diagnosis processes consists of network topology analysis, suspicious fault component analysis, construction of FRSN P systems for suspicious fault components, and fuzzy inference. Also, the feasibility of the FRSN P system is verified on the IEEE14, IEEE 39, and IEEE 118 node systems. (shrink)
Automatic Evaluation of Design Alternatives with Quantitative Argumentation.Pietro Baroni, Marco Romano, Francesca Toni, Marco Aurisicchio & Giorgio Bertanza - 2015 - Argument and Computation 6 (1):24-49.details
This paper presents a novel argumentation framework to support Issue-Based Information System style debates on design alternatives, by providing an automatic quantitative evaluation of the positions put forward. It also identifies several formal properties of the proposed quantitative argumentation framework and compares it with existing non-numerical abstract argumentation formalisms. Finally, the paper describes the integration of the proposed approach within the design Visual Understanding Environment software tool along with three case studies in engineering design. The case studies show the potential (...) for a competitive advantage of the proposed approach with respect to state-of-the-art engineering design methods. (shrink)
Automatic Continuity of Group Homomorphisms.Christian Rosendal - 2009 - Bulletin of Symbolic Logic 15 (2):184-214.details
We survey various aspects of the problem of automatic continuity of homomorphisms between Polish groups.
Model Theory in Logic and Philosophy of Logic
Can Automatic Calculating Machines Be Said to Think?M. H. A. Newman, Alan M. Turing, Geoffrey Jefferson, R. B. Braithwaite & S. Shieber - 2004 - In Stuart M. Shieber (ed.), The Turing Test: Verbal Behavior as the Hallmark of Intelligence. MIT Press.details
Automatic Preference for White Americans: Eliminating the Familiarity Explanation.Debbie E. McGhee - unknowndetails
Using the Implicit Association Test, recent experiments have demonstrated a strong and automatic positive evaluation of White Americans and a relatively negative evaluation of African Americans. Interpretations of this finding as revealing pro-White attitudes rest critically on tests of alternative interpretations, the most obvious one being perceivers' greater familiarity with stimuli representing White Americans. The reported experiment demonstrated that positive attributes were more strongly associated with White than Black Americans even when pictures of equally unfamiliar Black and White individuals were (...) used as stimuli and differences in stimulus familiarity were statistically controlled. This experiment indicates that automatic race associations captured by the IAT are not compromised by stimulus familiarity, which in turn strengthens the conclusion that the IAT measures automatic evaluative associations. © 2000 Academic Press. (shrink)
Doing Without Deliberation: Automatism, Automaticity, and Moral Accountability,.Neil Levy & Tim Bayne - 2004 - International Review of Psychiatry 16 (4):209-15.details
Actions performed in a state of automatism are not subject to moral evaluation, while automatic actions often are. Is the asymmetry between automatistic and automatic agency justified? In order to answer this question we need a model or moral accountability that does justice to our intuitions about a range of modes of agency, both pathological and non-pathological. Our aim in this paper is to lay the foundations for such an account.
Control and Responsibility in Meta-Ethics
Moral Responsibility, Misc in Meta-Ethics
Philosophy of Action, Misc in Philosophy of Action
Psychopathology and Responsibility in Meta-Ethics
Responsibility and Reactive Attitudes in Meta-Ethics
Educated Intuitions. Automaticity and Rationality in Moral Judgement.Hanno Sauer - 2012 - Philosophical Explorations 15 (3):255-275.details
Moral judgements are based on automatic processes. Moral judgements are based on reason. In this paper, I argue that both of these claims are true, and show how they can be reconciled. Neither the automaticity of moral judgement nor the post hoc nature of conscious moral reasoning pose a threat to rationalist models of moral cognition. The relation moral reasoning bears to our moral judgements is not primarily mediated by episodes of conscious reasoning, but by the acquisition, formation and (...) maintenance ? in short: education ? of our moral intuitions. (shrink)
Moral Judgment, Misc in Meta-Ethics
Automatic Guidance of Attention From Working Memory.David Soto, John Hodsoll, Pia Rotshtein & Glyn W. Humphreys - 2008 - Trends in Cognitive Sciences 12 (9):342-348.details
Salomon: Automatic Abstracting of Legal Cases for Effective Access to Court Decisions. [REVIEW]Caroline Uyttendaele, Marie-Francine Moens & Jos Dumortier - 1998 - Artificial Intelligence and Law 6 (1):59-79.details
The SALOMON project is a contribution to the automatic processing of legal texts. Its aim is to automatically summarise Belgian criminal cases in order to improve access to the large number of existing and future cases. Therefore, techniques are developed for identifying and extracting relevant information from the cases. A broader application of these techniques could considerably simplify the work of the legal profession.A double methodology was used when developing SALOMON: the cases are processed by employing additional knowledge to interpret (...) structural patterns and features on the one hand and by way of occurrence statistics of index terms on the other. As a result, SALOMON performs an initial categorisation and structuring of the cases and subsequently extracts the most relevant text units of the alleged offences and of the opinion of the court. The SALOMON techniques do not themselves solve any legal questions, but they do guide the user effectively towards relevant texts. (shrink)
Controlled and Automatic Human Information Processing: I. Detection, Search, and Attention.Walter Schneider & Richard M. Shiffrin - 1977 - Psychological Review 84 (1):1-66.details
Cognitive Psychology in Philosophy of Cognitive Science
How Automatic is "Automatic Vigilance"? The Role of Working Memory in Attentional Interference of Negative Information.Lotte F. Van Dillen & Sander L. Koole - 2009 - Cognition and Emotion 23 (6):1106-1117.details
(2009). How automatic is "automatic vigilance"? The role of working memory in attentional interference of negative information. Cognition & Emotion: Vol. 23, No. 6, pp. 1106-1117.
Automaticity, Consciousness and Moral Responsibility.Simon Wigley - 2007 - Philosophical Psychology 20 (2):209-225.details
Cognitive scientists have long noted that automated behavior is the rule, while consciousness acts of self-regulation are the exception to the rule. On the face of it automated actions appear to be immune to moral appraisal because they are not subject to conscious control. Conventional wisdom suggests that sleepwalking exculpates, while the mere fact that a person is performing a well-versed task unthinkingly does not. However, our apparent lack of conscious control while we are undergoing automaticity challenges the idea (...) that there is a relevant moral difference between these two forms of unconscious behavior. In both cases the agent lacks access to information that might help them guide their actions so as to avoid harms. In response it is argued that the crucial distinction between the automatic agent and the agent undergoing an automatism, such as somnambulism or petit mal epilepsy, lies in the fact that the former can preprogram the activation and interruption of automatic behavior. Given that, it is argued that there is elbowroom for attributing responsibility to automated agents based on the quality of their will. (shrink)
SaltSeg: Automatic 3D Salt Segmentation Using a Deep Convolutional Neural Network.Yunzhi Shi, Xinming Wu & Sergey Fomel - 2019 - Interpretation 7 (3):SE113-SE122.details
Salt boundary interpretation is important for the understanding of salt tectonics and velocity model building for seismic migration. Conventional methods consist of computing salt attributes and extracting salt boundaries. We have formulated the problem as 3D image segmentation and evaluated an efficient approach based on deep convolutional neural networks with an encoder-decoder architecture. To train the model, we design a data generator that extracts randomly positioned subvolumes from large-scale 3D training data set followed by data augmentation, then feed a large (...) number of subvolumes into the network while using salt/nonsalt binary labels generated by thresholding the velocity model as ground truth labels. We test the model on validation data sets and compare the blind test predictions with the ground truth. Our results indicate that our method is capable of automatically capturing subtle salt features from the 3D seismic image with less or no need for manual input. We further test the model on a field example to indicate the generalization of this deep CNN method across different data sets. (shrink)
Automatic and Controlled Response Inhibition: Associative Learning in the Go/No-Go and Stop-Signal Paradigms.Frederick Verbruggen & Gordon D. Logan - 2008 - Journal of Experimental Psychology: General 137 (4):649-672.details
Automatically Running Experiments on Checking Multi-Party Contracts.Adilson Luiz Bonifacio & Wellington Aparecido Della Mura - 2020 - Artificial Intelligence and Law 29 (3):287-310.details
Contracts play an important role in business management where relationships among different parties are dictated by legal rules. Electronic contracts have emerged mostly due to technological advances and electronic trading between companies and customers. New challenges have then arisen to guarantee reliability among the stakeholders in electronic negotiations. In this scenario, automatic verification of electronic contracts appeared as an imperative support, specially the conflict detection task of multi-party contracts. The problem of checking contracts has been largely addressed in the literature, (...) but there are few, if any, methods and practical tools that can deal with multi-party contracts using a contract language with deontic and dynamic aspects as well as relativizations, over the same formalism. In this work we present an automatic checker for finding conflicts on multi-party contracts modeled by an extended contract language with deontic operators and relativizations. Moreover a well-known case study of sales contract is modeled and automatically verified by our tool. Further, we performed practical experiments in order to evaluate the efficiency of our method and the practical tool. (shrink)
Automatic Argumentative Analysis for Interaction Mining.Vincenzo Pallotta & Rodolfo Delmonte - 2011 - Argument and Computation 2 (2-3):77 - 106.details
Interaction mining is about discovering and extracting insightful information from digital conversations, namely those human?human information exchanges mediated by digital network technology. We present in this article a computational model of natural arguments and its implementation for the automatic argumentative analysis of digital conversations, which allows us to produce relevant information to build interaction business analytics applications overcoming the limitations of standard text mining and information retrieval technology. Applications include advanced visualisations and abstractive summaries.
Philosophy of Artificial Intelligence in Philosophy of Cognitive Science
Chronic Automaticity in Addiction: Why Extreme Addiction is a Disorder.Steve Matthews - 2017 - Neuroethics 10 (1):199-209.details
Marc Lewis argues that addiction is not a disease, it is instead a dysfunctional outcome of what plastic brains ordinarily do, given the adaptive processes of learning and development within environments where people are seeking happiness, or relief, or escape. They come to obsessively desire substances or activities that they believe will deliver happiness and so on, but this comes to corrupt the normal process of development when it escalates beyond a point of functionality. Such 'deep learning' emerges from consumptive (...) habits, or 'motivated repetition', and although addiction is bad, it ferments out of the ordinary stuff underpinning any neural habit. Lewis gives us a convincing story about the process that leads from ordinary controlled consumption through to quite heavy addictive consumption, but I claim that in some extreme cases the eventual state of deep learning tips over into clinically significant impairment and disorder. Addiction is an elastic concept, and although it develops through mild and moderate forms, the impairment we see in severe cases needs to be acknowledged. This impairment, I argue, consists in the chronic automatic consumption present in late stage addiction. In this condition, the desiring self largely drops out the picture, as the addicted individual begins to mindlessly consume. This impairment is clinically significant because the machinery of motivated rationality has become corrupted. To bolster this claim I compare what is going on in these extreme cases with what goes on in people who dissociate in cases of depersonalization disorder. (shrink)
Compulsion and Addiction in Philosophy of Action
Neuroethics in Applied Ethics
The Concept of Disease in Philosophy of Science, Misc
Automatic Integration of Social Information in Emotion Recognition.Christian Mumenthaler & David Sander - 2015 - Journal of Experimental Psychology: General 144 (2):392-399.details
Automaticity in Action: The Unconscious as Repository of Chronic Goals and Motives.John A. Bargh - 1996 - In P. Gollwitzer & John A. Bargh (eds.), The Psychology of Action: Linking Cognition and Motivation to Behavior. Guilford. pp. 457.details
Action and Consciousness in Psychology in Philosophy of Cognitive Science
The Normativity of Automaticity.Michael Brownstein & Alex Madva - 2012 - Mind and Language 27 (4):410-434.details
While the causal contributions of so-called 'automatic' processes to behavior are now widely acknowledged, less attention has been given to their normative role in the guidance of action. We develop an account of the normativity of automaticity that responds to and builds upon Tamar Szabó Gendler's account of 'alief', an associative and arational mental state more primitive than belief. Alief represents a promising tool for integrating psychological research on automaticity with philosophical work on mind and action, but Gendler (...) errs in overstating the degree to which aliefs are norm-insensitive. (shrink)
Belief, Misc in Philosophy of Mind
Ethical Theories, Misc in Normative Ethics
Practical and Theoretical Reasoning in Philosophy of Action
Automatic Evaluation Isn't That Crude! Moderation of Masked Affective Priming by Type of Valence.Dirk Wentura & Juliane Degner - 2010 - Cognition and Emotion 24 (4):609-628.details
Automatic Diagnosis of Microgrid Networks' Power Device Faults Based on Stacked Denoising Autoencoders and Adaptive Affinity Propagation Clustering.Fan Xu, Xin Shu, Xiaodi Zhang & Bo Fan - 2020 - Complexity 2020:1-24.details
This paper presents a model based on stacked denoising autoencoders in deep learning and adaptive affinity propagation for bearing fault diagnosis automatically. First, SDAEs are used to extract potential fault features and directly reduce their high dimension to 3. To prove that the feature extraction capability of SDAEs is better than stacked autoencoders, principal component analysis is employed to compare and reduce their dimension to 3, except for the final hidden layer. Hence, the extracted 3-dimensional features are chosen as the (...) input for adAP cluster models. Compared with other traditional cluster methods, such as the Fuzzy C-mean, Gustafson–Kessel, Gath–Geva, and affinity propagation, clustering algorithms can identify fault samples without cluster center number selection. However, AP needs to set two key parameters depending on manual experience—the damping factor and the bias parameter—before its calculation. To overcome this drawback, adAP is introduced in this paper. The adAP clustering algorithm can find the available parameters according to the fitness function automatic. Finally, the experimental results prove that SDAEs with adAP are better than other models, including SDAE-FCM/GK/GG according to the cluster assess index and the classification error rate. (shrink)
Automatic Processing of Psychological Distance: Evidence From a Stroop Task.Yoav Bar-Anan, Nira Liberman, Yaacov Trope & Daniel Algom - 2007 - Journal of Experimental Psychology: General 136 (4):610-622.details
Linking Automatic Evaluation to Mood and Information Processing Style: Consequences for Experienced Affect, Impression Formation, and Stereotyping.Tanya L. Chartrand, Rick B. van Baaren & John A. Bargh - 2006 - Journal of Experimental Psychology: General 135 (1):70-77.details
On the Automaticity and Ethics of Belief.Uwe Peters - 2017 - Teoria:99–115..details
Recently, philosophers have appealed to empirical studies to argue that whenever we think that p, we automatically believe that p (Millikan 2004; Mandelbaum 2014; Levy and Mandelbaum 2014). Levy and Mandelbaum (2014) have gone further and claimed that the automaticity of believing has implications for the ethics of belief in that it creates epistemic obligations for those who know about their automatic belief acquisition. I use theoretical considerations and psychological findings to raise doubts about the empirical case for the (...) view that we automatically believe what we think. Furthermore, I contend that even if we set these doubts aside, Levy and Mandelbaum's argument to the effect that the automaticity of believing creates epistemic obligations is not fully convincing. (shrink)
Ethics of Belief in Epistemology
Social Psychology in Philosophy of Cognitive Science
The Nature of Belief in Philosophy of Mind
Automatic Proof Generation in an Axiomatic System for $\mathsf{CPL}$ by Means of the Method of Socratic Proofs.Aleksandra Grzelak & Dorota Leszczyńska-Jasion - 2018 - Logic Journal of the IGPL 26 (1):109-148.details
The Automaticity of Everyday Life.R. Wyer (ed.) - 1988 - Lawrence Erlbaum.details
This 10th book in the series addresses automaticity and how it relates to social behavior.
Automatic Generation of Cognitive Theories Using Genetic Programming.Enrique Frias-Martinez & Fernand Gobet - 2007 - Minds and Machines 17 (3):287-309.details
Cognitive neuroscience is the branch of neuroscience that studies the neural mechanisms underpinning cognition and develops theories explaining them. Within cognitive neuroscience, computational neuroscience focuses on modeling behavior, using theories expressed as computer programs. Up to now, computational theories have been formulated by neuroscientists. In this paper, we present a new approach to theory development in neuroscience: the automatic generation and testing of cognitive theories using genetic programming (GP). Our approach evolves from experimental data cognitive theories that explain "the mental (...) program" that subjects use to solve a specific task. As an example, we have focused on a typical neuroscience experiment, the delayed-match-to-sample (DMTS) task. The main goal of our approach is to develop a tool that neuroscientists can use to develop better cognitive theories. (shrink)
Supersymmetric Electroweak Renormalization of the Z-Width in the MSSM (I)
D. Garcia, R. A. Jimenez, J. Sola
Physics, 1994, DOI: 10.1016/0370-2693(95)00031-F
Abstract: Within the framework of the MSSM, we compute the complete set of electroweak one-loop supersymmetric quantum effects on the width $\Gamma_Z$ of the $Z$-boson in the on-shell renormalization scheme. Numerical analyses of the corrections to the various partial widths into leptons and quarks are presented. On general grounds, the average size of the electroweak SUSY corrections to $\Gamma_Z$ may well saturate the level of the present theoretical uncertainties, even if considering the full supersymmetric spectrum lying in the neighbourhood of the unaccessible LEP 200 range. Remarkably enough, for the present values of the top quark mass, the electroweak SUSY effects could be, globally, very close or even bigger than the electroweak SM corrections, but opposite in sign. Therefore, in the absence of theoretical errors, there are large regions of parameter space where one could find that, effectively, the electroweak SM corrections are ``missing'', or even having the ``wrong'' sign. This should be helpful in discriminating between the SM and the MSSM. However, an accurate prediction of the electroweak quantum effects on $\Gamma_Z$ will only be possible, if $\Delta r$ and $\alpha_s$ are pinned down in the future with enough precision.
Electroweak Supersymmetric Quantum Corrections to the Top Quark Width [PDF]
David Garcia, Ricardo A. Jimenez, Joan Sola, Wolfgang Hollik
Physics , 1994, DOI: 10.1016/0550-3213(94)90269-0
Abstract: Within the framework of the MSSM, we compute the electroweak one-loop supersymmetric quantum corrections to the width $\Gamma (t\rightarrow W^{+}\, b)$ of the canonical main decay of the top quark. The results are presented in two on-shell renormalization schemes parametrized either by $\alpha$ or $G_F$. While in the standard model, and in the Higgs sector of the MSSM, the electroweak radiative corrections in the $G_F$-scheme are rather insensitive to the top quark mass and are of order of $1\%$ at most, the rest (``genuine'' part) of the supersymmetric quantum effects in the MSSM amount to a non-negligible correction that could be about one order of magnitude larger, depending on the top quark mass and of the region of the supersymmetric parameter space. These new electroweak effects, therefore, could be of the same order (and go in the same direction) as the conventional leading QCD corrections.
Supersymmetric Quantum Effects on the hadronic width of a heavy charged Higgs boson in the MSSM [PDF]
Joan Sola
Physics , 1997,
Abstract: We discuss the QCD and leading electroweak corrections to the hadronic width of the charged Higgs boson of the MSSM. In our renormalization framework, tan(beta) is defined through Gamma(H^+ -> tau^+ nu_{tau}). We show that a measurement of the hadronic width of H^\pm and/or of the branching ratio of its tau-decay mode with a modest precision of ~20% could be sufficient to unravel the supersymmetric nature of H^\pm in full consistency with the low-energy data from radiative B-meson decays.
STRONG SUPERSYMMETRIC QUANTUM EFFECTS ON THE TOP QUARK WIDTH [PDF]
Andreas Dabelstein, Wolfgang Hollik, Christoph Juenger, Ricardo A. Jimenez, Joan Sola
Abstract: We compute the one-loop supersymmetric QCD quantum effects on the width $\Gamma (t\rightarrow W^{+}\, b)$ of the canonical main decay of the top quark within the framework of the MSSM. The corrections can be of either sign depending on whether the stop squark mass is above or below the top quark decay threshold into stop and gluino $\Gamma (t\rightarrow\tilde{t} \,\tilde{g})$. For $m_{\tilde{t}}$ above that threshold, the corrections are negative and can be of the same order (and go in the same direction) as the ordinary QCD corrections, even for stop and gluino masses of ${\cal O}(100)\,GeV$. Since the electroweak supersymmetric quantum effects turn out to be also of the same sign and could be of the same order of magnitude, the total MSSM correction to the top quark width could potentially result in a rather large ${\cal O}(10-25)\%$ reduction of $\Gamma (t\rightarrow W^{+}\, b)$ far beyond the conventional QCD expectations.
Supersymmetric three-body decays of the Top Quark in the MSSM [PDF]
Jaume Guasch, Joan Sola
Abstract: We survey all possible supersymmetric three-body decays of the top quark in the framework of the MSSM and present detailed numerical analyses of the most relevant cases. Although the two-body channels are generally dominant, it is not inconceivable that some or all of our most favourite two-body SUSY candidates could be suppressed. In this event there is still the possibility that some of the available three-body SUSY modes might exhibit a substantial branching fraction and/or carry exotic signatures that would facilitate their identification. Furthermore, in view of the projected inclusive measurement of the top-quark width $\Gamma_t$ in future colliders, one should have at one's disposal the full second order correction (electroweak and strong) to the value of that parameter in the MSSM. Our analysis confirms that some supersymmetric three-body decays could be relevant and thus contribute to $\Gamma_t$ at a level comparable to the largest one loop supersymmetric effects on two-body modes recently computed in the literature.
FCNC top decays into Higgs bosons in the MSSM [PDF]
Jaume Guasch
Abstract: We compute the partial width of the FCNC top quark decay t->c h in the framework of the Minimal Supersymmetric Standard Model, where h = h0, H0, A0 is any of the neutral Higgs of the MSSM. We include the SUSY electroweak, Higgs, and SUSY-QCD contributions. Our results substantially improve previous estimations on the subject, and we find that there is a possibility that they can be measured at LHC.
Heavy charged Higgs boson decaying into top quark in the MSSM [PDF]
J. A. Coarasa, David Garcia, Jaume Guasch, Ricardo A. Jimenez, Joan Sola
Physics , 1997, DOI: 10.1016/S0370-2693(98)00255-X
Abstract: Observing a heavy charged Higgs boson produced in the near future at the Tevatron or at the LHC would be instant evidence of physics beyond the Standard Model. Whether such a Higgs boson would be supersymmetric or not it could only be decided after accurate prediction of its properties. Here we compute the decay width of the dominant decay of such a boson, namely H^+ -> t \bar{b}, including the leading electroweak corrections originating from large Yukawa couplings within the MSSM. These electroweak effects turn out to be of comparable size to the O(alpha_s) QCD corrections in relevant portions of the MSSM parameter space. Our analysis incorporates the stringent low-energy constraints imposed by radiative B-meson decays.
Global Fits of the SM and MSSM to Electroweak Precision Data [PDF]
W. de Boer, A. Dabelstein, W. Hollik, W. Moesle, U. Schwickerath
Physics , 1996, DOI: 10.1007/s002880050508
Abstract: The Minimal supersymmetric extension of the Standard Model (MSSM) with light stops, charginos or pseudoscalar Higgs bosons has been suggested as an explanation of the too high value of the branching ratio of the Z0 boson into b quarks (Rb anomaly). A program including all radiative corrections to the MSSM at the same level as the radiative corrections to the SM has been developed and used to perform global fits to all electroweak data from LEP, SLC and the Tevatron. The probability of the global fit improves from 8% in the SM to 18% in the MSSM. Including the b->s gamma rate, as measured by CLEO, reduces the probability from 18% to 15%. In the constrained MSSM requiring unification and electroweak symmetry breaking no improvement of Rb is possible.
Electroweak Baryogenesis in Supersymmetric Variants [PDF]
Michael G. Schmidt
Physics , 2001, DOI: 10.1016/S0920-5632(01)01519-5
Abstract: We argue that the creation of a baryon asymmetry in the early universe is an intriguing case where several aspects of ``Beyond'' physics are needed. We then concentrate on baryogenesis in a strong first-order phase transition and discuss that supersymmetric variants of the electroweak theory (MSSM and some version of NMSSM) rather naturally provide the necessary ingredients. The charginos and the stops play a prominent role. We present CP-violating dispersion relations in the chargino sector and show results of a concrete model calculation for the asymmetry production based on quasi-classical Boltzmann transport equations and sphaleron transitions in the hot electroweak phase.
Two New Supersymmetric Options for Two Higgs Doublets at the Electroweak Energy Scale [PDF]
Ernest Ma
Abstract: Contrary to common belief, the requirement that supersymmetry exists and that there are two Higgs doublets and no singlet at the electroweak energy scale does not necessarily result in the minimal supersymmetric standard model (MSSM). Two interesting alternatives are presented.
Applied Water Science
March 2017 , Volume 7, Issue 1, pp 165–173 | Cite as
Effect of carbon source on acclimatization of nitrifying bacteria to achieve high-rate partial nitrification of wastewater with high ammonium concentration
Seyyed Alireza Mousavi
Shaliza Ibrahim
Mohamed Kheireddine Aroua
Experiments in two laboratory-scale sequential batch reactors were carried out to investigate the effect of heterotrophic bacteria on nitrifying bacteria using external carbon sources. Partial nitrification of ammonium-rich wastewater during short-term acclimatization enriched the activity of ammonia-oxidizing bacteria in both reactors. Heterotrophic bacteria exhibited a minor effect on nitrifying bacteria, and complete removal of ammonium occurred at a rate of 41 mg L−1 h−1 in both reactors. The main strategy of this research was to carry out partial nitrification using high-activity ammonia-oxidizing bacteria with a high concentration of free ammonia (70 mg L−1). The NO2 −/(NO3 − + NO2 −) ratio was greater than 0.9 in both reactors most of the time.
Nitrification Nitrifying bacteria Ammonium-rich wastewater Partial nitrification
The uncontrolled discharge of nitrogen-rich, ammonia-containing wastewaters into water bodies is considered a worldwide threat to human health and a source of toxicity to aquaculture (Chen et al. 2006; Mousavi et al. 2012). Among the different sources of nitrogen, the effluent of anaerobic sludge digesters (sludge reject water) generally carries 15–25 % of the total nitrogen load in a treatment plant and is recycled to the head of the sewage treatment works (Macé and Mata-Alvarez 2002; Dosta et al. 2007). In addition, the COD remaining in this effluent is weakly biodegradable. Different processes have been tested to find a suitable method for treating this kind of wastewater (Van Kempen et al. 2001).
In the past decades, both physicochemical and biological methods have been used to remove ammonium from wastewater and meet discharge standards (Komorowska-Kaufman et al. 2006). Some drawbacks of physicochemical technologies for ammonia removal have shifted research interest toward biological nitrogen removal (BNR) as a promising method for eliminating ammonia from wastewater (Dosta et al. 2007). Biological ammonia removal normally takes place in two steps, namely, nitrification and denitrification. Nitrification is itself a two-step process, in which ammonium is first oxidised to nitrite by ammonium-oxidising biomass (AOB). This process is called nitritation and its stoichiometry is:
$$\mathrm{NH}_{4}^{+} + \tfrac{3}{2}\,\mathrm{O}_{2} \to \mathrm{NO}_{2}^{-} + 2\mathrm{H}^{+} + \mathrm{H}_{2}\mathrm{O}$$
Secondly, nitrite is oxidised to nitrate by nitrite-oxidising biomass (NOB). This process is called nitratation and its stoichiometry is:
$$\mathrm{NO}_{2}^{-} + \tfrac{1}{2}\,\mathrm{O}_{2} \to \mathrm{NO}_{3}^{-}$$
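From these two half-reactions the theoretical oxygen demand of each step can be worked out directly from the molar masses; the short sketch below (not part of the original study) illustrates why stopping at nitrite saves roughly a quarter of the oxygen demand, a figure of the same order as the 25 % aeration saving cited later in the text.

```python
# Theoretical oxygen demand of nitrification (Eqs. 1-2), per gram of N oxidised.
M_O2, M_N = 32.0, 14.0  # molar masses, g/mol

o2_nitritation = 1.5 * M_O2 / M_N              # NH4+ -> NO2-  : ~3.43 g O2 / g N
o2_nitratation = 0.5 * M_O2 / M_N              # NO2- -> NO3-  : ~1.14 g O2 / g N
o2_full = o2_nitritation + o2_nitratation      # full nitrification: ~4.57 g O2 / g N

aeration_saving = 1 - o2_nitritation / o2_full # ~0.25, i.e. about 25 % less oxygen
print(f"Nitritation only: {o2_nitritation:.2f} g O2/g N "
      f"({aeration_saving:.0%} less oxygen than full nitrification)")
```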
The rate of the nitrification process depends on the activities of nitrifying bacteria and is affected by environmental and operational parameters (e.g., temperature, pH, microorganism population, organic carbon and nitrogen concentration). Optimizing the factors affecting the nitrogen removal process is thus necessary to build up nitrifying bacteria and increase the effectiveness of wastewater treatment investigations (Komorowska-Kaufman et al. 2006). The heterotroph/autotroph population ratio depends on the organic carbon/nitrogen ratio (C/N) in wastewater. According to the results of previous studies, at a high C/N ratio, heterotrophic bacteria dominate the nitrifying bacteria, resulting in a decrease of ammonium removal (Okabe et al. 1996; Campos et al. 1999; Carrera et al. 2004; Wu et al. 2008). Rostron et al. (2001) investigated the effect of COD (glucose) addition on nitrification at an operational HRT of 12 h. Under these conditions, heterotrophic bacteria grew rapidly and reduced influent COD by 90 % within 10 days of adding 500 mg L−1 COD to the feed. The results indicated that all three reactors lost nitrate production because of limited oxygen for nitrifying bacteria, which can be attributed to the dominance of heterotrophs (Rostron et al. 2001).
Research has shown the complexity of AOB enrichment in a single reactor and the considerably low efficiency of such systems. Moreover, a solids retention time (as a controlling parameter) of less than 4 days has been reported to cause washout of nitrifying microorganisms, thus reducing the nitrification rate (Campos et al. 1999). In addition, ammonia oxidation in a single reactor is usually limited to 0.2 kg N-NH4+ m−3 per day. Therefore, the use of two biological units, with the nitrification process taking place separately in the subsequent unit, was recommended to overcome the above-described limitations (Campos et al. 1999; Wu et al. 2008). On the other hand, using an additional reactor for nitrification increases the initial capital and maintenance costs, which encourages researchers to develop energy-saving nitrogen elimination systems and to increase the nitrification rate by applying cost-effective methods, such as SHARON (Van Kempen et al. 2001) and Anammox (Volcke et al. 2006), for the treatment of sludge reject water (Chen et al. 2010).
This preliminary study evaluated the role of the C/N ratio, among several factors affecting microbial growth, as an inhibitor in the nitrification process. In addition, the feasibility of partial nitrification (PN) as a cost-effective process was investigated during enrichment of nitrifying bacteria with a high concentration of ammonium. This work lays the groundwork for further research on bio-electrochemical nitrogen elimination.
BNR via nitrite
This section discusses the importance and mechanisms of PN to elucidate the process. PN occurs via AOB according to Eq. 1, but activated sludge is a mixed culture containing both AOB and NOB. As such, researchers have carried out BNR via nitrite by adjusting the environmental and operational parameters (pH, dissolved oxygen, temperature, and substrate concentrations) to limit the growth of NOB and enrich AOB, which causes nitrite accumulation (Dosta et al. 2007; Blackburne et al. 2008; Chen et al. 2010). Savings of 25 % in aeration costs, achieved by using low concentrations of dissolved oxygen to enrich only AOB, and a 40 % reduction in the external carbon source needed during denitrification, achieved by limiting NOB in PN, have been reported (Ruiz et al. 2003). Free ammonia (FA) and free nitrous acid are inhibition parameters that play key roles in PN. The values of both substrates depend on the total ammonia concentration, pH, and temperature (Grunditz and Dalhammar 2001; Whang et al. 2009). Previous studies have shown the inhibitory effect of high concentrations of FA and HNO2 on AOB and NOB in PN (Chen et al. 2010). Blackburne et al. (2008) investigated the role of high-concentration FA in inhibiting Nitrobacter and Nitrospira activities and found that Nitrospira species are much more sensitive to low concentrations of FA than Nitrobacter species. In addition, NOB were found to be inhibited at concentrations higher than 0.1–1 mg NH3 L−1 and/or 0.2–2.8 mg HNO2 L−1, whereas AOB were inhibited by unionized ammonia concentrations higher than 10–150 mg NH3 L−1. These findings suggest that enrichment of nitrifying bacteria can enhance the nitrification rate. For example, Zheng et al. (2004) reported a high activity of AOB in a pure culture (6–8 g NH4+-N g−1 VSS per day). Moreover, Chen et al. (2010) investigated the enrichment of high nitrifier activity with the aim of enhancing the performance of the PN process and reported a specific ammonium oxidation rate (2.78 g NH4+-N g−1 VSS per day) higher than previously reported values (0.6 g NH4+-N g−1 VSS per day (Ciudad et al. 2007), 1.54 g NH4+-N g−1 VSS per day (Kim et al. 2009), and 2.76 g NH4+-N g−1 VSS per day (Jianlong and Ning 2004)).
Seed sludge and synthetic wastewater
A biomass containing a mixed culture of nitrifying bacteria was obtained from the activated sludge of an urban wastewater treatment plant (WWTP) in Pantai Dalam, Kuala Lumpur, Malaysia. The activated sludge was filtered to remove wastes and washed repeatedly to remove internal nitrogen components (NH4+, NO2−, and NO3−). The sludge was then dewatered and kept in a growth medium in a cold room (4 °C) for future use. Sludge with an initial mixed-liquor suspended solids (MLSS) concentration of 2 g L−1 was inoculated into two 5 L sequencing batch reactors to acclimatize it to high-strength ammonium. A headspace of 1 L was provided to prevent any solids loss generally caused by foaming. The reactors were fed with synthetic wastewater according to Table 1. The synthetic wastewater was stored in a cold room at a temperature below 4 °C, and the feed was warmed to 25 °C in a water bath before being fed to the SBRs. The SBRs were fed with synthetic wastewater containing different C/N ratios, with 1,000 mg L−1 of (NH4)2SO4 as the nitrogen source. The source of phosphorus was 200 mg L−1 KH2PO4, and carbon was provided as 3,000 mg L−1 NaHCO3 to achieve a suitable C/N ratio. Trace elements were adjusted by adding 1 mL/L of stock solution according to Table 1.
Table 1 Synthetic wastewater compositions. Concentration (mg L−1): NaHCO3; KH2PO4, 400 (as P); MgSO4 (g/L), 0 and 1,000. Composition of trace element solution (1 mL per litre of reactor): ZnSO4·7H2O, CoCl2·6H2O, MnCl2·4H2O, CuSO4·5H2O, (NH4)6Mo7O24·4H2O, CaCl2·2H2O, FeSO4·7H2O, H3BO3, NiSO4·6H2O.
Experimental setup
The enrichment of high-activity AOB was conducted in a laboratory-scale sequencing batch reactor (SBR) with a working volume of 5 L (Fig. 1). Two reactors (R1, R2) were run in multiple cycles with sequencing stages of 23 h reaction, 50 min settling, 5 min decanting, and 5 min filling, with each cycle lasting no less than 24 h. The reactors were provided with a thermostatic jacket, and the temperature was maintained at 30 ± 0.5 °C using a thermostatic bath. The suspension medium was mechanically agitated throughout the reaction time. The stirring rate was controlled at a level (200 rpm) adequate to create a uniform biomass suspension. Two air pumps (HAILEA, ACO-9820, China) supplied air that was fed from the bottom of the reactors, and dissolved oxygen was measured with an electrode (METTLER TOLEDO, O2-sensor, Switzerland) and maintained above 3 mg L−1 by adjusting the air flow rate manually. pH was measured with an electrode (METTLER TOLEDO, pH-sensor, Switzerland) and kept between 7.3 and 7.9 by automatic injection of acid (H2SO4, 1 N) or alkaline (NaOH, 1 N) solution. The decanting ratio of the feed was 0.5: at the end of each settling phase, 50 % of the reactor contents was decanted and replaced with new feed. Furthermore, no sludge was removed, and the MLSS recovered by centrifugation (3,600 rpm, 10 min) of all decanted samples was returned to the reactors.
Schematic diagram of the experimental apparatus (SBR)
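As a compact illustration (not part of the original methods), the operating cycle described above can be written down as a small configuration object; the timing values below are taken from the text, and the helper simply checks that one cycle adds up to 24 h.

```python
# Sketch of the SBR operating cycle described in the text (illustrative only).
from dataclasses import dataclass

@dataclass
class SBRCycle:
    reaction_min: int = 23 * 60   # 23 h aerated, stirred reaction phase
    settling_min: int = 50        # settling
    decanting_min: int = 5        # decanting
    filling_min: int = 5          # filling with fresh synthetic feed
    exchange_ratio: float = 0.5   # 50 % of the working volume replaced per cycle

    def total_hours(self) -> float:
        return (self.reaction_min + self.settling_min
                + self.decanting_min + self.filling_min) / 60.0

cycle = SBRCycle()
print(f"Cycle length: {cycle.total_hours():.0f} h")  # 24 h, i.e. one cycle per day
```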
The samples were either analyzed immediately or stored at a temperature below 4 °C until subjected to analysis. The ammonium, nitrate, and nitrite concentrations were determined using an Advanced Compact IC 861 (Metrohm® Ltd., Herisau, Switzerland) ion chromatograph (IC) with a guard column. The eluents for cation and anion determination were prepared using ultrapure water (18.2 µs) containing pyridine-2,6-dicarboxylic acid (0.117 g/L) and HNO3 (0.11 mL/L) for cations, and Na2CO3 (0.3392 g/L), NaHCO3 (0.084 mg/L), and H2SO4 (0.1 mol) for anions, respectively. Before analysis, the samples were centrifuged and filtered with a 0.2 µm filter. The process temperature, pH, DO, and ORP were continually monitored by a digital controller. In addition, the MLSS and MLVSS were determined following standard methods (APHA et al. 2012). Experiments were repeated if an error higher than 5 % occurred in the sample analysis. From the pH, temperature, and remaining ammonium concentration, the FA and FNA concentrations were calculated according to Eqs. 3 and 4 (Chen et al. 2010). The operating conditions used in this research are summarized in Table 2.
Table 2 Operational conditions of the SBRs: temperature (30 ± 0.5 °C), DO (mg L−1), HRT (h), cycle length (h), MLSS (mg L−1), MLVSS (mg L−1), NH4-N (mg L−1), COD/N, and volumetric exchange rate (%).
$${\text{FA}}\;({\text{mg/L}}) = \frac{17}{14} \times \frac{[{\text{NH}}_{4}^{+}\text{-N}] \times 10^{\text{pH}}}{\exp\left[6344/(273 + T)\right] + 10^{\text{pH}}}$$
$${\text{FNA}}\;({\text{mg/L}}) = \frac{46}{14} \times \frac{[{\text{NO}}_{2}^{-}\text{-N}]}{\exp\left[-2300/(273 + T)\right] \times 10^{\text{pH}}}$$
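Eqs. 3 and 4 translate directly into two small helper functions; the sketch below (not from the original paper) evaluates them at conditions close to those of the SBRs, namely 1,000 mg NH4+-N L−1, the upper end of the pH control band (7.9), and 30 °C, which gives a free-ammonia level of the same order as the roughly 70 mg L−1 cited in the text. The nitrite value used for FNA is the peak accumulation reported later (936 mg L−1); the exact combination of inputs is illustrative.

```python
import math

def free_ammonia(nh4_n_mg_l: float, ph: float, temp_c: float) -> float:
    """Free ammonia (NH3) concentration in mg/L, Eq. 3 above."""
    return (17.0 / 14.0) * nh4_n_mg_l * 10**ph / (math.exp(6344.0 / (273 + temp_c)) + 10**ph)

def free_nitrous_acid(no2_n_mg_l: float, ph: float, temp_c: float) -> float:
    """Free nitrous acid (HNO2) concentration in mg/L, Eq. 4 above."""
    return (46.0 / 14.0) * no2_n_mg_l / (math.exp(-2300.0 / (273 + temp_c)) * 10**ph)

# Conditions drawn from the text; the specific combination is illustrative.
fa = free_ammonia(1000.0, 7.9, 30.0)        # ~73 mg NH3/L
fna = free_nitrous_acid(936.0, 7.9, 30.0)   # ~0.08 mg HNO2/L at pH 7.9
print(f"FA = {fa:.1f} mg/L, FNA = {fna:.2f} mg/L")
```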
Acclimatization performance of the HA-AOB
The experiments were carried out by feeding the reactors in parallel at 30 ± 0.5 °C with an initial NH4+-N concentration of 1,000 mg L−1, which is considered highly contaminated ammonium wastewater. The reaction time (RT) was maintained at 23 h for both reactors. Ammonium reduction and generation of nitrite and nitrate were observed soon after the reactors were started. Because of the high ammonia concentration, the slow growth of autotrophic bacteria, and the need to limit biomass loss during decanting, the SBR cycle length was maintained until complete nitrification of 1,000 mg L−1 NH4+-N was achieved within 24 h.
The startup of the reactors was subject to the growth and adaptation of HA-AOB and the inhibition of NOB in the presence and absence of heterotrophic bacteria. Synthetic wastewater containing glucose as a carbon source was added to R2 to investigate the effect of heterotrophic bacteria on the nitrification process; the addition had only a marginal effect on the nitrification rate. pH and DO were automatically kept within the range of 7.6 ± 0.3 and at ≥3 mg L−1, conditions suitable for both groups of bacteria (heterotrophic and autotrophic) (Gerardi 2002; Wu et al. 2008). An MLSS concentration of 2 g L−1 was maintained throughout the experiment: mixed liquor was purged from the reactors, and the centrifuged biomass of the effluent was returned to the reactors. Therefore, stable operation was assumed throughout the experiment.
Effect of C/N
Figure 2 shows that complete removal of ammonium in R1 (C/N ratio 0) was achieved within a shorter period (144 h) than in R2 (C/N ratio 0.5), in which complete ammonium removal occurred after 156 h. However, at the beginning of the process, R2 showed higher efficiency in reducing ammonium, which may be due to quick activation of the process by heterotrophic bacteria. Figure 2 also demonstrates that in R1 the acclimatization of nitrifying bacteria at C/N = 0 was observed after 11.5 days, over four cycles. The ammonium reduction was lower in R2 than in R1, which can be attributed to the glucose addition. The presence of organic carbon in R2 can support the growth of heterotrophs and inhibit the activity of nitrifying bacteria (Van Benthum et al. 1997). Researchers have confirmed an improvement of nitrification at low temperatures in the presence of heterotrophic bacteria, because they act as a protective layer for the nitrifying bacteria. Nevertheless, in this situation, the specific activity of AOB is lower than when only nitrifying bacteria are present (Germain et al. 2007; Wu et al. 2008).
Ammonium removal profile during growth and adaptation of nitrifying bacteria (R1 = without COD and R2 = with COD)
Partial nitrification
Substrate concentration is an important parameter in the nitrification process, which can control the rate of ammonium/nitrite oxidation because of its inhibitory effects (Gerardi 2002). The prevailing strategies for incomplete nitrification and for depressing the activity of NOB have been mentioned previously. Among them, the accumulation of FA and FNA has strong and complicated effects on the inhibition of NOB activity (Qiao et al. 2010). The concentrations of both FA and FNA, according to Eqs. 3 and 4, are influenced by operational pH, substrate concentration, and temperature (Chen et al. 2010). The inhibition thresholds reported by Anthonisen et al. (1976) are that AOB were inhibited from 10 mg FA/L to 150 mg FA/L, while the inhibition of NOB began at a concentration of 0.1–1 mg FA/L. Inhibition by FNA takes place with Nitrobacter at FNA concentrations between 0.22 and 2.8 mg L−1 (Anthonisen et al. 1976; Yang et al. 2010), and researchers have reported the inhibition of AOB at FNA concentrations higher than 0.2 mg L−1 (Mosquera-Corral et al. 2005; Qiao et al. 2010).
The effect of the initial organic carbon concentration as a controlling factor in the nitrification process was investigated, with the FA and FNA concentrations used as the main strategy to inhibit NOB activity and achieve a high degree of PN. Figure 3 shows the FA and FNA concentrations during the acclimatization of nitrifying bacteria at different C/N ratios. In the first cycle, the FA and FNA concentrations decreased as ammonium elimination improved (R1 and R2). According to the results, owing to the high concentrations of FA (about 70 mg L−1) and FNA (0.2 mg L−1), ammonium-oxidizing bacteria were dominant and the accumulation of NO3−-N was negligible in the first cycle of the process. The results depicted in Fig. 4 show that the NO2-N/(NO2-N + NO3-N) ratio was greater than the ratios achieved in previous studies by Chen et al. (2010), Ruiz et al. (2003), and Ciudad et al. (2007). The advantages of the current strategy are less washout and suitable enrichment of AOB, which is intended for use in a bio-electrochemical reactor (BER).
The a FA and b FNA concentration in R1 (without COD) and R2 (with COD) (first stage; acclimatization of nitrifying bacteria)
Concentration profiles of ammonium nitrogen, nitrate nitrogen and nitrite nitrogen during PN and reaction rates described
Kinetics study
Nitrification comprises the oxidation of ammonium to nitrate via nitrite, and possibly N2O, by autotrophic bacteria, as shown by Eq. 5 (Kremen et al. 2005). Researchers have used different kinds of models to describe the rate of nitrification: zero-order kinetics, first-order kinetics (Kremen et al. 2005), and the Monod and Haldane models (Chen et al. 2010). According to the zero-order kinetic equations, the rates of NH4+ oxidation and NO3− production (where NO2− serves as the substrate) are given by Eqs. 6 and 7. The rate of NO2− formation is then estimated as in Eq. 8, where KNH4+ is the specific ammonium oxidation rate (mg NH4+-N L−1 h−1), KNO3− is the specific nitrate production rate (mg NO3−-N L−1 h−1), and KNO2− is the specific nitrite rate (mg NO2−-N L−1 h−1).
The rate of ammonium removal according to Eq. (6) in R1 was 41.66 mg NH4+-N L−1 h−1. Although the rate at the beginning of the process (the first cycle) was lower in R2 than in R1, and complete removal of ammonium in R2 was achieved after that in R1, the results for both reactors showed comparable ammonium removal rates. This means that the rate of ammonium removal was not strongly affected by the presence of heterotrophic bacteria, which was also confirmed by the production rates of NO2− and NO3− (Fig. 4). The gradual oxidation of ammonium resulted in an increase in the nitrate and nitrite concentrations, with the highest nitrite accumulation, 936 mg L−1, observed in the fourth cycle of the experiments. The NO2−/(NO3− + NO2−) ratio was above 0.9 for both reactors most of the time. Figure 4 confirms these results during 24 h of monitoring of the PN system.
$$\mathrm{NH_4^+} \xrightarrow[\;K_{\mathrm{NH_4^+}}\;]{\text{Nitrosomonas}} \mathrm{NO_2^-} \xrightarrow[\;K_{\mathrm{NO_3^-}}\;]{\text{Nitrobacter}} \mathrm{NO_3^-}$$
$$\frac{\mathrm{d}C_{\mathrm{NH_4^+}}}{\mathrm{d}t} = -K_{\mathrm{NH_4^+}}$$
$$\frac{\mathrm{d}C_{\mathrm{NO_2^-}}}{\mathrm{d}t} = K_{\mathrm{NH_4^+}} - K_{\mathrm{NO_2^-}} \quad (\text{when ammonium is present})$$
$$\frac{\mathrm{d}C_{\mathrm{NO_3^-}}}{\mathrm{d}t} = -K_{\mathrm{NO_3^-}} \quad (\text{when nitrite is absent})$$
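Under zero-order kinetics, Eq. 6 implies that the ammonium concentration falls linearly with time, so KNH4+ can be estimated as minus the slope of a straight-line fit to the concentration profile within a cycle. The sketch below uses made-up sampling points that are merely consistent with the reported rate; it is not data from the study.

```python
import numpy as np

# Zero-order fit of an ammonium profile within one 24 h cycle (Eq. 6):
# C_NH4(t) = C0 - K_NH4 * t, so the negative slope of a linear fit estimates K_NH4.
t_h = np.array([0, 4, 8, 12, 16, 20, 24])              # hypothetical sampling times (h)
c_nh4 = np.array([1000, 833, 667, 500, 333, 167, 0])   # hypothetical mg NH4+-N / L

slope, intercept = np.polyfit(t_h, c_nh4, 1)
k_nh4 = -slope
print(f"K_NH4+ ≈ {k_nh4:.1f} mg NH4+-N L^-1 h^-1")     # ≈ 41.7, cf. 41.66 reported in the text
```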
Sludge characteristics
The sludge volume index (SVI) was used to monitor the settling characteristics of the activated sludge in both reactors. The initial SVI of the sludge was 123 mg L−1, which decreased to 97 mg L−1 after nine cycles of nitrification process operation. The measured SVI values during the process showed an increase in biomass density, which improved the sludge-settling properties. During the first cycle, the high SVI resulted in poor biomass settling; therefore, the decanted portion of this cycle contained high MLSS. Nevertheless, during subsequent cycles, the settling ability of the biomass gradually improved because of the high assimilation of substrate by the mixed culture of bacteria in both reactors.
Field emission scanning electron microscopy (FESEM) of the microorganism in SBRs was conducted to observe the morphology of the seed sludge (day 0 and day 21) from the reactor with and without COD. The image of the seed sludge according to Fig. 5a on day 0 shows that the biomass from the urban wastewater treatment plant consists of straight rod, curved rod, and vibroid-shaped bacterial cells, with different sizes of about 0.38–0.6 µm and 0.5–1.26 µm. FESEM observation of sludge revealed an abundance of nitrifying bacteria, which are considered as AOBs and NOBs in biomass (Fig. 5b) (Gerardi 2002; Qiao et al. 2010; Yusof et al. 2010).
FESEM images of activated sludge in SBR. a Bacteria at raw sludge; b bacteria in reactor 1 on day 21, c seeded bacteria in reactor 2 on day 21
Oxygen consumption rate
The oxygen uptake rate (OUR) test is a simple, readily available, and familiar way to monitor the nitrification process, and the short-time-span OUR test was applied in all cycles to evaluate the nitrification activity in both reactors. At the beginning of the process, the OUR in the first cycle was low and the DO concentration was very high, but the OUR rose as bacterial activity increased. For the last cycle of the partial nitrification process, the test was conducted at the beginning, middle, and end of the cycle. The dissolved oxygen uptake rate was determined by linear regression from the slope of the oxygen utilization curve, as shown in Fig. 6. The results show a higher oxygen uptake rate in reactor R2 (2.86 mg O2 g VSS−1 min−1) because of the presence of heterotrophic bacteria.
Oxygen uptake rate (OUR) by nitrifying bacteria and heterotrophic bacteria in R1 and R2 (during the beginning (I), middle (II), and end (III) of the cycle)
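The regression step mentioned above can be reproduced with a few lines of code; the DO readings and the biomass concentration below are hypothetical placeholders, since the raw respirograms are not given in the text.

```python
import numpy as np

# OUR from the slope of a DO-depletion curve recorded with aeration switched off.
t_min = np.array([0, 1, 2, 3, 4, 5])                 # hypothetical time points (min)
do_mg_l = np.array([6.0, 5.4, 4.9, 4.3, 3.8, 3.2])   # hypothetical DO readings (mg O2/L)

slope = np.polyfit(t_min, do_mg_l, 1)[0]             # mg O2 L^-1 min^-1 (negative)
mlvss_g_l = 1.5                                      # assumed biomass concentration (g VSS/L)
our = -slope / mlvss_g_l                             # specific OUR, mg O2 g VSS^-1 min^-1
print(f"OUR ≈ {our:.2f} mg O2 g VSS^-1 min^-1")
```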
The results of applying various C/N ratios showed that a C/N ratio of 0 was the most suitable, resulting in faster ammonium removal; at a C/N ratio of 0.5 the nitrification process was slightly inhibited and complete removal of ammonium took longer. FA and FNA were the main means of NOB inhibition, with the most significant inhibition exerted by FA on NOB in both reactors. The rate of ammonium removal was 41.66 mg NH4+-N L−1 h−1, and the acclimatization of the biomass was achieved at a high NO2−-N/(NO2−-N + NO3−-N) ratio. The results confirmed that the enriched biomass could serve as an inoculum in a PN system to produce a high concentration of NO2−, with an improvement of the physical characteristics of the biomass during the acclimatization process.
The authors are thankful for the financial support from Universiti Malaya (UM) through Grant No. RG 077/09SUS and the Department of Civil Engineering, UM, for the use of facilities.
Anthonisen AC, Loehr RC, Prakasam TBS, Srinath EG (1976) Inhibition of nitrification by ammonia and nitrous acid. J Water Pollut Control Fed 48:835–852
APHA et al (2012) Standard methods for the examination of water and wastewater. American Public Health Association
Kremen A, Bear J, Shavit U, Shaviv A (2005) Model demonstrating the potential for coupled nitrification denitrification in soil aggregates. Environ Sci Technol 39:4180–4188
Blackburne R, Yuan Z, Keller J (2008) Demonstration of nitrogen removal via nitrite in a sequencing batch reactor treating domestic wastewater. Water Res 42:2166–2176
Campos JL, Garrido-Fernández JM, Méndez R, Lema JM (1999) Nitrification at high ammonia loading rates in an activated sludge unit. Bioresour Technol 68:141–148
Carrera J, Vicent T, Lafuente J (2004) Effect of influent COD/N ratio on biological nitrogen removal (BNR) from high-strength ammonium industrial wastewater. Process Biochem 39:2035–2041
Chen S, Ling J, Blancheton JP (2006) Nitrification kinetics of biofilm as affected by water quality factors. Aquacult Eng 34:179–197
Chen J, Zheng P, Yu Y, Mahmood Q, Tang C (2010) Enrichment of high activity nitrifiers to enhance partial nitrification process. Bioresour Technol 101:7293–7298
Ciudad G, González R, Bornhardt C, Antileo C (2007) Modes of operation and pH control as enhancement factors for partial nitrification with oxygen transport limitation. Water Res 41:4621–4629
Dosta J, Galí A, Benabdallah El-Hadj T, Macé S, Mata-Álvarez J (2007) Operation and model description of a sequencing batch reactor treating reject water for biological nitrogen removal via nitrite. Bioresour Technol 98:2065–2075
Gerardi MH (2002) Nitrification in the activated sludge process. Wiley, New York
Germain E, Bancroft L, Dawson A, Hinrichs C, Fricker L, Pearce P (2007) Evaluation of hybrid processes for nitrification by comparing MBBR/AS and IFAS configurations. Water Sci Technol 55:43
Grunditz C, Dalhammar G (2001) Development of nitrification inhibition assays using pure cultures of Nitrosomonas and Nitrobacter. Water Res 35:433–440
Jianlong W, Ning Y (2004) Partial nitrification under limited dissolved oxygen conditions. Process Biochem 39:1223–1229
Kim J-H, Guo X, Behera SK, Park H-S (2009) A unified model of ammonium oxidation rate at various initial ammonium strength and active ammonium oxidizer concentrations. Bioresour Technol 100:2118–2123
Komorowska-Kaufman M, Majcherek H, Klaczynski E (2006) Factors affecting the biological nitrogen removal from wastewater. Process Biochem 41:1015–1021
Macé S, Mata-Alvarez J (2002) Utilization of SBR technology for wastewater treatment: an overview. Ind Eng Chem Res 41:5539–5553
Mosquera-Corral A, Gonzalez F, Campos J, Mendez R (2005) Partial nitrification in a SHARON reactor in the presence of salts and organic carbon compounds. Process Biochem 40:3109–3118
Mousavi S, Ibrahim S, Aroua MK (2012) Sequential nitrification and denitrification in a novel palm shell granular activated carbon twin-chamber upflow bio-electrochemical reactor for treating ammonium-rich wastewater. Bioresour Technol 125:256–266
Okabe S, Oozawa Y, Hirata K, Watanabe Y (1996) Relationship between population dynamics of nitrifiers in biofilms and reactor performance at various C:N ratios. Water Res 30:1563–1572
Qiao S, Matsumoto N, Shinohara T, Nishiyama T, Fujii T, Bhatti Z, Furukawa K (2010) High-rate partial nitrification performance of high ammonium containing wastewater under low temperatures. Bioresour Technol 101:111–117
Rostron WM, Stuckey DC, Young AA (2001) Nitrification of high strength ammonia wastewaters: comparative study of immobilisation media. Water Res 35:1169–1178
Ruiz G, Jeison D, Chamy R (2003) Nitrification with high nitrite accumulation for the treatment of wastewater with high ammonia concentration. Water Res 37:1371–1377
Van Benthum W, Van Loosdrecht M, Heijnen J (1997) Control of heterotrophic layer formation on nitrifying biofilms in a biofilm airlift suspension reactor. Biotechnol Bioeng 53:397–405
Van Kempen R, Mulder J, Uijterlinde C, Loosdrecht M (2001) Overview: full scale experience of the SHARON® process for treatment of rejection water of digested sludge dewatering. Water Sci Technol 44:145–152
Volcke EIP, Gernaey KV, Vrecko D, Jeppsson U, van Loosdrecht MCM, Vanrolleghem PA, Kroiss H (2006) Plant-wide (BSM 2) evaluation of reject water treatment with a SHARON–Anammox process. Citeseer
Whang L-M, Chien IC, Yuan S-L, Wu Y-J (2009) Nitrifying community structures and nitrification performance of full-scale municipal and swine wastewater treatment plants. Chemosphere 75:234–242
Wu G, Rodgers M, Zhan X (2008) Nitrification in sequencing batch reactors with and without glucose addition at 11 °C. Biochem Eng J 40:373–378
Yang J, Zhang L, Daisuke H, Takahiro S, Ma Y, Li Z, Furukawa K (2010) High rate partial nitrification treatment of reject wastewater. J Biosci Bioeng 110:436–440
Yusof N, Hassan MA, Phang LY, Tabatabaei M, Othman MR, Mori M, Wakisaka M, Sakai K, Shirai Y (2010) Nitrification of ammonium-rich sanitary landfill leachate. Waste Manag 30:100–109
Zheng P, Xu XY, Hu BL (2004) Novel theories and technologies for biological nitrogen removal. Science Press, Beijing
Open AccessThis article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
1.Department of Environmental HealthKermanshah University of Medical SciencesKermanshahIran
2.Department of Civil Engineering, Faculty of EngineeringUniversity of MalayaKuala LumpurMalaysia
3.Department of Chemical Engineering, Faculty of EngineeringUniversity of MalayaKuala LumpurMalaysia
Mousavi, S.A., Ibrahim, S. & Aroua, M.K. Appl Water Sci (2017) 7: 165. https://doi.org/10.1007/s13201-014-0229-z
Accepted 22 July 2014
DOI https://doi.org/10.1007/s13201-014-0229-z
Higher-order accurate Runge-Kutta discontinuous Galerkin methods for a nonlinear Dirac model
DCDS-B Home
Dynamic bifurcation theory of Rayleigh-Bénard convection with infinite Prandtl number
May 2006, 6(3): 605-622. doi: 10.3934/dcdsb.2006.6.605
Analysis of a nonlinear system for community intervention in mosquito control
M. Predescu 1, , R. Levins 2, and T. Awerbuch-Friedlander 2,
Department of Mathematics, Bentley College, 175 Forest Street, Waltham, MA 02452, United States
Department of Population and International Health, Harvard School of Public Health, 665 Huntington Avenue, Boston, MA 02115, United States, United States
Received March 2005 Revised December 2005 Published February 2006
Non-linear difference equation models are employed in biology to describe the dynamics of certain populations and their interaction with the environment. In this paper we analyze a non-linear system describing community intervention in mosquito control through management of their habitats. The system takes the general form:
$x_{n+1}= a x_{n}h(p y_{n})+b h(q y_{n}), \qquad n=0,1,\ldots$
$y_{n+1}= c x_{n}+d y_{n}$
where the function $h\in C^{1}([0,\infty) \to [0,1])$, satisfying certain properties, will denote either $h(t)=h_{1}(t)=e^{-t}$ and/or $h(t)=h_{2}(t)=1/(1+t).$ We give conditions in terms of parameters for boundedness and stability. This enables us to explore the dynamics of prevalence/community-activity systems as affected by the range of parameters.
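For readers who want to experiment with the system, a minimal iteration of the map is sketched below; the parameter values are illustrative only and are not taken from the paper. Since $h\le 1$ on $[0,\infty)$ for both $h_1$ and $h_2$, choosing $a<1$ and $d<1$ keeps this particular orbit bounded.

```python
import math

# Iterate the prevalence/community-activity system from the abstract with
# h(t) = exp(-t) (h1) or h(t) = 1/(1+t) (h2). Parameters here are illustrative only.
def simulate(a, b, c, d, p, q, h, x0=1.0, y0=0.0, n_steps=50):
    x, y = x0, y0
    orbit = [(x, y)]
    for _ in range(n_steps):
        # simultaneous update of (x_n, y_n) -> (x_{n+1}, y_{n+1})
        x, y = a * x * h(p * y) + b * h(q * y), c * x + d * y
        orbit.append((x, y))
    return orbit

h1 = lambda t: math.exp(-t)
h2 = lambda t: 1.0 / (1.0 + t)

orbit = simulate(a=0.5, b=1.0, c=0.3, d=0.6, p=1.0, q=1.0, h=h1)
print(orbit[-1])  # last iterate; for these parameters (a < 1, d < 1) the orbit stays bounded
```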
Keywords: global stability., Dynamics, boundedness and persistence, attracting intervals.
Mathematics Subject Classification: 39A11; Secondary: 92D4.
Citation: M. Predescu, R. Levins, T. Awerbuch-Friedlander. Analysis of a nonlinear system for community intervention in mosquito control. Discrete & Continuous Dynamical Systems - B, 2006, 6 (3) : 605-622. doi: 10.3934/dcdsb.2006.6.605
Kai Liu, Zhi Li. Global attracting set, exponential decay and stability in distribution of neutral SPDEs driven by additive $\alpha$-stable processes. Discrete & Continuous Dynamical Systems - B, 2016, 21 (10) : 3551-3573. doi: 10.3934/dcdsb.2016110
Hal L. Smith, Horst R. Thieme. Persistence and global stability for a class of discrete time structured population models. Discrete & Continuous Dynamical Systems - A, 2013, 33 (10) : 4627-4646. doi: 10.3934/dcds.2013.33.4627
Nguyen Thieu Huy, Vu Thi Ngoc Ha, Pham Truong Xuan. Boundedness and stability of solutions to semi-linear equations and applications to fluid dynamics. Communications on Pure & Applied Analysis, 2016, 15 (6) : 2103-2116. doi: 10.3934/cpaa.2016029
Ö. Uğur, G. W. Weber. Optimization and dynamics of gene-environment networks with intervals. Journal of Industrial & Management Optimization, 2007, 3 (2) : 357-379. doi: 10.3934/jimo.2007.3.357
Antoine Perasso. Global stability and uniform persistence for an infection load-structured SI model with exponential growth velocity. Communications on Pure & Applied Analysis, 2019, 18 (1) : 15-32. doi: 10.3934/cpaa.2019002
Kazuo Yamazaki, Xueying Wang. Global stability and uniform persistence of the reaction-convection-diffusion cholera epidemic model. Mathematical Biosciences & Engineering, 2017, 14 (2) : 559-579. doi: 10.3934/mbe.2017033
Fuchen Zhang, Xiaofeng Liao, Chunlai Mu, Guangyun Zhang, Yi-An Chen. On global boundedness of the Chen system. Discrete & Continuous Dynamical Systems - B, 2017, 22 (4) : 1673-1681. doi: 10.3934/dcdsb.2017080
Qi Wang, Yang Song, Lingjie Shao. Boundedness and persistence of populations in advective Lotka-Volterra competition system. Discrete & Continuous Dynamical Systems - B, 2018, 23 (6) : 2245-2263. doi: 10.3934/dcdsb.2018195
Cemil Tunç. Stability, boundedness and uniform boundedness of solutions of nonlinear delay differential equations. Conference Publications, 2011, 2011 (Special) : 1395-1403. doi: 10.3934/proc.2011.2011.1395
Guihong Fan, Yijun Lou, Horst R. Thieme, Jianhong Wu. Stability and persistence in ODE models for populations with many stages. Mathematical Biosciences & Engineering, 2015, 12 (4) : 661-686. doi: 10.3934/mbe.2015.12.661
Pierre Magal. Global stability for differential equations with homogeneous nonlinearity and application to population dynamics. Discrete & Continuous Dynamical Systems - B, 2002, 2 (4) : 541-560. doi: 10.3934/dcdsb.2002.2.541
Marcel Freitag. Global existence and boundedness in a chemorepulsion system with superlinear diffusion. Discrete & Continuous Dynamical Systems - A, 2018, 38 (11) : 5943-5961. doi: 10.3934/dcds.2018258
Vincent Calvez, Lucilla Corrias. Blow-up dynamics of self-attracting diffusive particles driven by competing convexities. Discrete & Continuous Dynamical Systems - B, 2013, 18 (8) : 2029-2050. doi: 10.3934/dcdsb.2013.18.2029
Evariste Sanchez-Palencia, Jean-Pierre Françoise. Topological remarks and new examples of persistence of diversity in biological dynamics. Discrete & Continuous Dynamical Systems - S, 2019, 12 (6) : 1775-1789. doi: 10.3934/dcdss.2019117
Yu Yang, Shigui Ruan, Dongmei Xiao. Global stability of an age-structured virus dynamics model with Beddington-DeAngelis infection function. Mathematical Biosciences & Engineering, 2015, 12 (4) : 859-877. doi: 10.3934/mbe.2015.12.859
Cyrine Fitouri, Alain Haraux. Boundedness and stability for the damped and forced single well Duffing equation. Discrete & Continuous Dynamical Systems - A, 2013, 33 (1) : 211-223. doi: 10.3934/dcds.2013.33.211
Wei Wang, Yan Li, Hao Yu. Global boundedness in higher dimensions for a fully parabolic chemotaxis system with singular sensitivity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (10) : 3663-3669. doi: 10.3934/dcdsb.2017147
Hao Yu, Wei Wang, Sining Zheng. Global boundedness of solutions to a Keller-Segel system with nonlinear sensitivity. Discrete & Continuous Dynamical Systems - B, 2016, 21 (4) : 1317-1327. doi: 10.3934/dcdsb.2016.21.1317
Johannes Lankeit, Yulan Wang. Global existence, boundedness and stabilization in a high-dimensional chemotaxis system with consumption. Discrete & Continuous Dynamical Systems - A, 2017, 37 (12) : 6099-6121. doi: 10.3934/dcds.2017262
Hua Zhong, Chunlai Mu, Ke Lin. Global weak solution and boundedness in a three-dimensional competing chemotaxis. Discrete & Continuous Dynamical Systems - A, 2018, 38 (8) : 3875-3898. doi: 10.3934/dcds.2018168
M. Predescu R. Levins T. Awerbuch-Friedlander
Exploring wealth-related inequalities in maternal and child health coverage in Latin America and the Caribbean
Manuel Colomé-Hidalgo ORCID: orcid.org/0000-0002-4562-64911,
Juan Donado Campos2 &
Ángel Gil de Miguel1
BMC Public Health volume 21, Article number: 115 (2021)
Maternal and child health has shown important advances around the world in recent years. However, national average indicators hide large inequalities in access to and quality of care across population subgroups. We explore wealth-related inequalities affecting health coverage and interventions in reproductive, maternal, newborn, and child health in Latin America and the Caribbean.
We analyzed representative national surveys from 15 countries conducted between 2001 and 2016. We estimated maternal-child health coverage gaps using the Composite Coverage Index – a weighted average of interventions that include family planning, maternal and newborn care, immunizations, and treatment of sick children. We measured absolute and relative inequality to assess gaps by wealth quintile. Pearson's correlation coefficient was used to test the association between the coverage gap and population attributable risk.
The Composite Coverage Index showed patterns of inequality favoring the wealthiest subgroups. In eight countries the national coverage was higher than the global median (78.4%; 95% CI: 73.1–83.6) and increased significantly as inequality decreased (Pearson r = 0.9; p < 0.01).
There are substantial inequalities between socioeconomic groups. Reducing inequalities will improve coverage indicators for women and children. Additional health policies, programs, and practices are required to promote equity.
Reproductive, Maternal, Newborn, and Child Health (RMNCH) has been a global health policy priority for the past decade [1]. The Millennium Development Goals (MDGs) contributed enormously to the health of women and children, reducing maternal and under-five mortality and improving other indicators such as access to contraceptives, skilled attendance at childbirth, and measles vaccination [2]. Despite the progress, most regions did not reach the proposed goals, showing uneven progress that has left gaps between countries, especially in Latin America and the Caribbean (LAC) [3, 4].
The 2030 agenda for Sustainable Development Goals (SDGs) broadens the scope of the MDGs, assuming the commitment to leave no one behind. The SDG-3.8 promotes universal health coverage in terms of access to quality healthcare services, medicines, and vaccines for all [5]. More granular analysis of indicators can show whether all subgroups of the population will benefit from national progress or not [6]. Monitoring inequalities allow identifying vulnerable groups and prioritizing interventions in those who need it the most, thus promoting health coverage through equity [7]. We analyzed the Composite Coverage Index (CCI) as an indicator of universal healthcare coverage gaps in women and children. The index combines preventive and curative interventions throughout the continuum of care, family planning, maternal and newborn care, immunization, and treatment of sick children and has been used to monitor SDGs progress [8, 9].
Previous studies have emphasized the wealth-related inequalities between countries implementing the CCI, but only a few have focused on the LAC situation [10,11,12]. Therefore, the scope of health interventions and the level of improvement needed to narrow the gap needs to be adequately defined. This study explores wealth-related inequalities in RMNCH care coverage and its impact on reducing the gap in the LAC countries between 2001 and 2016.
This was a descriptive study based on secondary RMNCH coverage data obtained from the World Health Organization (WHO) Health Equity Assessment Toolkit (HEAT) software version 3.1 [13]. HEAT performs calculations of health inequality measures from the WHO Health Equity Monitor Database [14]. The database includes data from Demographic Health Surveys (DHS), Multiple Indicator Cluster Surveys (MICS) and Reproductive Health Surveys (RHS). The surveys carried out nationally representative and standardized interviews with women 15–49 years old. We included 15 of 22 countries with surveys conducted between 2001 and 2016, based on the availability of recent data on the Composite Coverage Index and wealth quintile.
The CCI is a weighted score based on aggregate estimates of eight essential interventions for the continuum of care for women and children, from before pregnancy to delivery, the immediate postnatal period, and childhood [7, 15]. The index is calculated using the formula:
$$ CCI=\frac{1}{4}\left(\mathrm{DFPS}+\frac{\mathrm{ANC4}+\mathrm{SBA}}{2}+\frac{\mathrm{BCG}+2\,\mathrm{DPT3}+\mathrm{MCV}}{4}+\frac{\mathrm{ORS}+\mathrm{CPNM}}{2}\right) $$
where DFPS = satisfied demand for modern family planning methods; ANC4 = prenatal care (at least four visits); SBA = deliveries attended by qualified personnel; BCG = one dose of Bacillus Calmette-Guérin vaccine; DPT3 = three or more doses of diphtheria-tetanus-pertussis vaccine; MCV = at least one dose of measles vaccine; ORS = children with diarrhea receiving oral rehydration therapy and continuous feeding; CPNM = children with pneumonia symptoms taken to a health center [16].
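The weighted average defined above is straightforward to compute; the sketch below (not part of the original methods) applies the formula to illustrative coverage values that are not country-specific.

```python
def composite_coverage_index(dfps, anc4, sba, bcg, dpt3, mcv, ors, cpnm):
    """Composite Coverage Index: weighted average of eight RMNCH interventions (all in %)."""
    return 0.25 * (dfps
                   + (anc4 + sba) / 2
                   + (bcg + 2 * dpt3 + mcv) / 4
                   + (ors + cpnm) / 2)

# Illustrative (not country-specific) coverage values, in percent:
cci = composite_coverage_index(dfps=70, anc4=85, sba=90, bcg=95, dpt3=88, mcv=92, ors=60, cpnm=55)
print(round(cci, 1))  # 76.4
```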
We calculated the mean, median, interquartile range, and standard deviation of the CCI for the region. We analyzed socioeconomic inequality using the wealth index, which is an estimate based on the ownership of selected assets, housing construction materials, and access to basic services. The details of wealth index estimation have been previously described [17]. Households are classified from the poorest (Q1) to the richest (Q5) [18].
To compare patterns of inequality between and within countries, first, we calculated the coverage difference to show the magnitude of absolute inequality (Q5-Q1); second, the coverage ratio to show proportional differences between groups (Q5/Q1); and third, the ratio of the differences between coverages in the lower (Q1-Q2) and higher (Q4-Q5) quintiles. We calculated the relative concentration index and slope index to describe inequalities across all subgroups. Finally, we used the population attributable risk (PAR) to show the possible improvement if the general population hypothetically had the same coverage level as the wealthiest quintile (CCI-Q5). We estimated the PAR percentage (PAR%) to show the proportion of improvement in national coverage if socioeconomic inequality were eliminated (PAR/CCI * 100) [19]. We used Pearson correlation to measure the degree of relationship between the CCI and the PAR%. The analyses were performed using Microsoft Excel and HEAT Plus software.
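The simple summary measures described in this paragraph can be reproduced as below; the Q1, Q5 and national values are the regional figures quoted in the Results, while the intermediate quintile values are made up for illustration.

```python
def wealth_inequality_summary(quintiles, national_cci):
    """quintiles = CCI by wealth quintile [Q1 poorest, ..., Q5 richest], in %."""
    q1, q2, q3, q4, q5 = quintiles
    par = q5 - national_cci                     # population attributable risk (Q5 minus national CCI)
    return {
        "difference_Q5_minus_Q1": q5 - q1,      # absolute inequality
        "ratio_Q5_over_Q1": round(q5 / q1, 2),  # relative inequality
        "PAR": round(par, 1),
        "PAR_percent": round(100 * par / national_cci, 1),  # potential gain if everyone matched Q5
    }

# Q1 ≈ 71 %, Q5 ≈ 82 % and national CCI = 78.4 % are quoted in the Results;
# the intermediate quintiles are illustrative placeholders.
print(wealth_inequality_summary([71, 74, 77, 80, 82], 78.4))
```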
Supplementary Table 1 shows the average coverage by wealth quintile for each of the maternal and child health interventions. The coverage gap tended to be smaller as the income level improved. National coverage was greater than 78% in all interventions except family planning and treatment of sick children. The greatest inequality occurred in skilled attendance at birth and prenatal care, where the difference between the wealthiest and the poorest was 26.4 and 17.3%, respectively. The difference was relatively smaller in the immunization indicators, where the absolute inequality was more pronounced in the coverage of DTP3 than in BCG and measles. The difference ratio was well over 1.0 for most of the interventions, showing a wide gap to the detriment of the poorest quintile, except in the vaccination against measles.
Table 1 shows the coverage gaps and inequalities by wealth quintiles for each country. The national median was 78.4% (Range: 49.8% [Haiti] – 86.6% [El Salvador]) and from 71% for the poorest quintiles and 82% for the wealthiest. In three countries - Haiti, Bolivia, and Guatemala - wide differences (> 21 percentage points) were observed between the wealthiest and poorest quintiles. Guyana, Costa Rica, and Paraguay were the only countries with the lowest coverage in the wealthiest quintile. Belize, Costa Rica, the Dominican Republic, El Salvador, Guyana, Honduras, Mexico, and Paraguay showed low levels of inequality, where the difference between the wealthiest and poorest quintiles was 10 percentage points or less. Haiti was the country with the highest level of relative inequality, with coverage in the wealthiest quintile that exceeds that of the poorest by a factor of 1.7. The ratio of differences between the lowest and highest quintiles was greater than 1.0 in nine countries, showing a predominant pattern of higher inequality where the wealthiest quintile had disproportionately less coverage than all the other quintiles, led by Colombia. Reducing wealth-related inequality had the potential to narrow the national gap between 1% (Costa Rica) and 27.9% (Haiti). If all countries could reach the median overall coverage for the wealthiest quintile, the gap would decrease by 3.6 percentage points (95% CI: 2.7–7.1).
Table 1 Inequality gaps in CCI by wealth quintile, LAC 2001–2016
LAC countries showed a pattern of marginal exclusion in maternal-child health coverage, highlighting the need for interventions oriented toward the most disadvantaged population, as well as a pattern of higher wealth-related inequality in CCI coverage to the detriment of the poorest quintile (Figs. 1-2). Figure 3 shows the relationship between the CCI gap and PAR% in the study countries. Healthcare coverage increased significantly as inequality decreased (Pearson r = 0.9; p < 0.01). To achieve equality in the distribution of RMNCH interventions, Haiti (27.9%), Guatemala (14.8%) and Bolivia (17.8%) would need to make a greater effort to reduce the CCI gap at their respective levels.
Latest situation of CCI coverage by economic status, LAC 2001–2016. Own elaboration based on study data. a Dashed lines indicate the median
Difference in CCI by country according to wealth quintile, LAC 2001–2016. a. Source: Own elaboration based on study data. a Dashed lines indicate the median
Coverage gap at the national level versus population attributable risk in LAC countries, 2001–2016.a. Source: Own elaboration based on study data. a Dashed lines indicate the median
The LAC region has experienced a considerable improvement in maternal and child health post-2015 sustainable development agenda [7]. Despite the progress, it is currently considered the most unequal region in the world, which represents a major challenge for the SDGs [20].
We explore current wealth-related inequalities in RMNCH coverage in 15 LAC countries. Our findings reveal important inequalities in maternal and child health interventions, pointing out that in some groups of the population women and children are lagging.
As shown in this study, essential preventive and curative interventions showed a monotonic pattern with lower levels in the poorest quintile. The inequality gap was greater in interventions that require a functional health system and recurrent interaction with healthcare personnel, except in immunizations. Although approximately 80% of the population benefited from the eight essential interventions, coverage of RMNCH interventions among the poorest quintile remained below this level in more than half of the countries. Only Costa Rica and El Salvador reached this level in the poorest quintile. The difference between the wealthiest and the poorest was at least 9.8 percentage points in more than half of the countries. Haiti, Bolivia, Guatemala, Peru, and Nicaragua showed lower national coverage and absolute inequality above the regional median. Colombia showed greater inequality of coverage in the top quintiles despite not having a gap as wide as other countries. These findings imply the need for health systems that prioritize adequate care to reduce the gaps for women and children from the poorest households [7, 10]. Although the countries of the region have indeed implemented reforms to provide health services without the risk of impoverishment, an approach based on social determinants and human rights that considers the dimensions of inequality is still required: income, gender, place of residence and education, among others [21, 22].
Achieving equity represents a much greater challenge for Colombia, Costa Rica, Haiti, Honduras, Mexico, and Panama than for other countries in the region, since they are part of the ten most unequal countries in the world [23]. If wealth-related inequalities were eliminated, most countries could achieve coverage of RMNCH interventions of more than 82%. The relationship between CCI and PAR% suggests that to reduce the gap in coverage of health services, the implementation of policies and programs can be effective in addressing inequalities within each country [11]. Policies should be focused on five areas: (i) development of health infrastructure; (ii) health promotion; (iii) health human resources; (iv) healthcare financing, and (v) quality of care [24,25,26].
There is a political commitment to understanding inequalities, encompassing efforts to support the monitoring and evaluation of inequities, health policies, and systems. However, the possibility of achieving the SDG goals will depend on the ability of countries to accelerate and maximize their achievements in well-being [27]. The study, publication and discussion of the determinants of equity in intervention coverage and their impact on health contribute to increasing the effectiveness of public policies [28].
This study has several limitations. Coverage estimates are based on reanalyzed data from demographic surveys with a cross-sectional design. The analysis is limited by the availability of recent surveys in each country for the latest-situation analysis. Because the CCI is a group indicator, HEAT does not provide sufficient data to estimate the standard error using resampling methods [7]. The household ranking of the wealth index may vary by year and country. These limitations could lead to underestimation of the CCI in the study countries, particularly because the index is based on a selection of RMNCH interventions. Despite these limitations, our findings are based on the best available method to explore gaps in care coverage between rich and poor [8].
Overall, our results suggest that women and children from the poorest households in LAC are far from achieving universal health coverage because of inequalities. Our findings show how much RMNCH coverage could improve if inequalities were eliminated. Overcoming inequalities would substantially reduce the extreme poverty gap and maternal and child mortality, and promote sustainable development. Future research is needed to monitor inequalities as a critical component of tracking progress toward the SDGs so that no one is left behind. We hope that our findings contribute to the design of public policies and strategies to reduce inequalities for women and children in the LAC region.
The datasets used in this article are available in the WHO Health Equity Monitor Database repository at http://apps.who.int/gho/data/node.main.HE-1540?lang=en. Individual datasets are available upon request and can be accessed via the UNICEF (http://mics.unicef.org/) and DHS (http://dhsprogram.com/) websites.
ALC: América Latina y el Caribe (Latin America and the Caribbean)
ANC4: Prenatal care (at least four visits)
BCG: One dose of Bacillus Calmette-Guérin vaccine
CCI: Composite Coverage Index
CI: Confidence interval
DFPS: Satisfied demand for modern family planning methods
DHS: Demographic and Health Survey
DPT: Three or more doses of diphtheria-tetanus-pertussis vaccine
HEAT: Health Equity Assessment Toolkit
MCV: At least one dose of measles vaccine
MDGs: Millennium Development Goals
MICS: Multiple Indicator Cluster Survey
NSCLC: Children with pneumonia symptoms taken to a health center
ORS: Children with diarrhea receiving oral rehydration therapy and continued feeding
PAR: Population attributable risk
PAR%: Percentage of population attributable risk
RD: Ratio for differences
RCI: Relative concentration index
RMNCH: Reproductive, Maternal, Newborn, and Child Health
SBA: Deliveries attended by qualified personnel
SII: Slope index of inequality
Akseer N, Bhatti Z, Rizvi A, Salehi AS, Mashal T, Bhutta ZA. Coverage and inequalities in maternal and child health interventions in Afghanistan. BMC Public Health. 2016;16(Suppl 2):797. https://doi.org/10.1186/s12889-016-3406-1.
Naciones Unidas. Objetivos de Desarrollo del Milenio. New York; 2015. http://www.un.org/millenniumgoals/2015_MDG_Report/pdf/MDG.2015.rev.(July1).pdf. Accessed 15 May 2020.
Bryce J, Black RE, Victora CG. Millennium development goals 4 and 5: progress and challenges. BMC Med. 2013;11:11–4.
Comisión Económica para América Latina y el Caribe. América Latina y el Caribe: una mirada al futuro desde los Objetivos de Desarrollo del Milenio: informe regional de monitoreo de los Objetivos de Desarrollo del Milenio (ODM) en América Latina y el Caribe 2015. Santiago; 2015. https://repositorio.cepal.org/bitstream/handle/11362/38923/S1500709_es.pdf.
United Nations. Resolution a/RES/70/1. Transforming our world: the 2030 agenda for sustainable development. New York: Seventieth United Nations General Assembly; 2016. 15 September 2015–13. https://undocs.org/sp/A/RES/70/1. [Accessed 25 Apr 2020].
Barros AJD, Wehrmeister FC, Ferreira LZ, Vidaletti LP, Hosseinpoor AR, Victora CG. Are the poorest poor being left behind? Estimating global inequalities in reproductive, maternal, newborn and child health. BMJ Glob Health. 2020;5:1–9. https://doi.org/10.1136/bmjgh-2019-002229.
Wehrmeister FC, Restrepo-Mendez MC, Franca GVA, Victora CG, Barros AJD. Summary indices for monitoring universal coverage in maternal and child health care. Bull World Health Organ. 2016;94:903–12. https://doi.org/10.2471/BLT.16.173138.
Countdown to 2030 Collaboration. Tracking progress towards universal coverage for reproductive, maternal, newborn, and child health. ResearchOnline. 2018;61:27–37.
Mujica OJ, Moreno CM. From words to action: measuring health inequalities to "leave no one behind". Pan Am J Public Heal. 2019;43:e12. https://doi.org/10.26633/RPSP.2019.12.
Restrepo-Méndez MC, Barros AJD, Requejo J, Durán P, Serpa LAF, França GVA, et al. Progress in reducing inequalities in reproductive, maternal, newborn, and child health in Latin America and the Caribbean: an unfinished agenda. Rev Panam Salud Publica. 2015;38:9–16 https://iris.paho.org/handle/10665.2/10003.
Hosseinpoor AR, Victora CG, Bergen N, Barros AJD, Boerma T. Towards universal health coverage: the role of within-country wealth-related inequality in 28 countries in sub-Saharan Africa. Bull World Health Organ. 2011;89:881–90. https://doi.org/10.2471/BLT.11.087536.
Marbach M. Mind the gap: equity and trends in coverage of maternal, newborn, and child health services in 54 countdown countries countdown. Res Polit. 2008;5:1259–67. https://doi.org/10.1177/2053168018803239.
Health Equity Assessment Toolkit (HEAT). Software for exploring and comparing health inequalities in countries. Built-in database edition. Version 3.1. Geneva: World Health Organization; 2019.
World Health Organization. Global Health Observatory data repository. Health equity monitor database; 2019. https://apps.who.int/gho/data/node.main.nHE-1540?lang=en. Accessed 21 Dec 2019.
Wehrmeister FC, Barros AJD, Hosseinpoor AR, Boerma T, Victora CG. Measuring universal health coverage in reproductive, maternal, newborn and child health: an update of the composite coverage index; 2020. p. 1–10. https://doi.org/10.1371/journal.pone.0232350.
World Health Organization. Composite coverage index (%). The Global Health Observatory. https://www.who.int/data/gho/indicator-metadata-registry/imr-details/4489. [Accessed 24 Apr 2020].
Rutstein SOJK. The DHS wealth index. DHS comparative reports no. 6. 1st edition. Calverton: ORC Macro; 2004. https://dhsprogram.com/pubs/pdf/CR6/CR6.pdf.
World Health Organization. Technical notes. Reproductive, maternal, newborn and child health (RMNCH) interventions, combined. New York. www.who.int/gho/indicator_registry/en/. Accessed 25 Apr 2020.
Schneider M, Castillo-Salgado C, Bacallao J, Loyola E, Mujica O, Vidaurre MRA. Métodos de medición de las desigualdades. Rev Panam Salud Pública. 2002;12:398–415. https://doi.org/10.1186/s12913-018-3766-6.
Programa de las Naciones Unidas para el Desarrollo. Más allá del ingreso, más allá de los promedios, más allá del presente: Desigualdades del desarrollo en el siglo XXI. New York; 2019. http://hdr.undp.org/sites/default/files/hdr_2019_overview_-_spanish.pdf.
Etienne CF. Equidad en los sistemas de salud. Rev Panam Salud Publica/Pan Am J Public Heal. 2013;33:79–82. https://doi.org/10.1590/S1020-49892013000200001.
Adegbosin AE, Zhou H, Wang S, Stantic B, Sun J. Systematic review and meta-analysis of the association between dimensions of inequality and a selection of indicators of reproductive, maternal, newborn and child health (RMNCH). J Glob Health. 2019;9:1–13. https://doi.org/10.7189/jogh.09.010429.
World Bank. Poverty and shared prosperity: taking on inequality 2016. Washington, D.C: World Bank; 2016.
Lassi ZS, Salam RA, Das JK, Bhutta ZA. Essential interventions for maternal, newborn and child health: background and methodology. Reprod Health. 2014;11(Suppl 1):1–7. https://doi.org/10.1186/1742-4755-11-S1-S1.
Brizuela V, Tunçalp Ö. Global initiatives in maternal and newborn health. Obstet Med. 2017;10:21–5. https://doi.org/10.1177/1753495X16684987.
Bright T, Felix L, Kuper H, Polack S. A systematic review of strategies to increase access to health services among children in low and middle income countries. BMC Health Serv Res. 2017;17:1–19. https://doi.org/10.1186/s12913-017-2180-9.
Bhutta ZA, Chopra M. Devolving countdown to countries: using global resources to support regional and national action. BMC Public Health. 2016;16(Suppl 2):1–2. https://doi.org/10.1186/s12889-016-3400-7.
Moucheraud C, Owen H, Singh NS, Ng CK, Requejo J, Lawn JE, et al. Countdown to 2015 country case studies: what have we learned about processes and progress towards MDGs 4 and 5? BMC Public Health. 2016;16 Suppl 2. https://doi.org/10.1186/s12889-016-3401-6.
We thank Cesar Matos and Antonio Peramo for comments, suggestions, and language improvement. Special thanks to Carlos Sosa for the statistic notes.
The authors declare that there was no funding associated with this study.
Instituto Tecnológico de Santo Domingo, Universidad Rey Juan Carlos, Madrid, Spain
Manuel Colomé-Hidalgo & Ángel Gil de Miguel
Universidad Autónoma de Madrid, Madrid, Spain
Juan Donado Campos
Manuel Colomé-Hidalgo
Ángel Gil de Miguel
MC conceived and designed the study, carried out the statistical analysis, and drafted the paper; JD and AG analyzed the data, interpreted the results, and contributed to drafting the manuscript. The authors read and approved the final manuscript.
Correspondence to Manuel Colomé-Hidalgo.
All analyses are based on publicly available data from demographic surveys.
Additional file 1: Table S1. Mean coverage of inequality gaps in interventions by wealth quintile, LAC 2001–2016.
Colomé-Hidalgo, M., Campos, J.D. & de Miguel, Á.G. Exploring wealth-related inequalities in maternal and child health coverage in Latin America and the Caribbean. BMC Public Health 21, 115 (2021). https://doi.org/10.1186/s12889-020-10127-3
Socioeconomic factors
Caribbean region | CommonCrawl |
Way number eight of looking at the correlation coefficient
This post has an accompanying Jupyter Notebook!
Back in August, I wrote about how, while taking the Data 8X series of online courses1, I had learned about standard units and about how the correlation coefficient of two (one-dimensional) data sets can be thought of as either
the slope of the linear regression line through a two-dimensional scatter plot of the two data sets when in standard units, or
the average cross product of the two data sets when in standard units.
In fact, there are lots more ways to interpret the correlation coefficient, as Rodgers and Nicewander observed in their 1988 paper "Thirteen Ways to Look at the Correlation Coefficient". The above two ways of interpreting it are number three ("Correlation as Standardized Slope of the Regression Line") and number six ("Correlation as the Mean Cross-Product of Standardized Variables2") on Rodgers and Nicewander's list, respectively.
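As a quick sanity check on those first two interpretations, here is a tiny sketch with made-up height/weight numbers (not the notebook's actual data) that computes r as the mean cross product in standard units and compares it with NumPy's built-in correlation:

```python
import numpy as np

heights = np.array([50.0, 54.0, 61.0, 66.0, 70.0, 72.0, 74.0, 75.5])  # made-up data
weights = np.array([3.2, 4.1, 5.6, 6.9, 7.8, 8.4, 9.0, 9.6])

def standard_units(x):
    return (x - x.mean()) / x.std()  # population standard deviation, as in Data 8

r = np.mean(standard_units(heights) * standard_units(weights))
assert np.isclose(r, np.corrcoef(heights, weights)[0, 1])
print(r)
```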
But that still leaves eleven whole other ways of looking at the correlation coefficient! What about them?
I started looking through Rodgers and Nicewander's paper, trying to figure out if I would be able to understand any of the other ways to look at the correlation coefficient. Way number eight ("Correlation as a Function of the Angle Between the Two Variable Vectors") piqued my interest. I know what angles, functions, and vectors are! But what are "variable vectors"?
Turning our data inside out
Rodgers and Nicewander write:
The standard geometric model to portray the relationship between variables is the scatterplot. In this space, observations are plotted as points in a space defined by variable axes.
That's the kind of thing I wrote about back in August. For instance, here's a scatter plot showing the relationship between my daughter's height and weight, according to measurements taken during the first year of her life. There are eight data points, each corresponding to one observation — that is, one pair of height and weight measured at a particular doctor visit.
These measurements are in standard units, ranging from less than -1 (meaning less than one standard deviation below average for the data set) to near zero (meaning near average for the data set) to more than 1 (meaning more than one standard deviation above average for the data set). (If you're not familiar with standard units, my previous post goes into detail about them.) I also have another scatter plot in centimeters and kilograms, if you're curious.
Rodgers and Nicewander continue:
An "inside out" version of this space — usually called "person space" — can be defined by letting each axis represent an observation. This space contains two points — one for each variable — that define the endpoints of vectors in this (potentially) huge dimensional space.
…Whoooooa.
So, instead of having height and weight as axes, they want us to take each of the eight rows of our table — each observation — and make those be our axes. And the two axes we have now, height and weight, would then become points in that eight-dimensional space.
In other words, we want to take our table of data — which looks like this, where rows correspond to points and columns correspond to axes on our scatter plot —
Date          Height (standard units)   Weight (standard units)
2017-07-28    -1.26135                  -1.3158
2017-08-07    -1.08691                  -1.13054
…             …                         …
— and turn it sideways, like this:
                          2017-07-28   2017-08-07   …
Height (standard units)   -1.26135     -1.08691     …
Weight (standard units)   -1.3158      -1.13054     …
Now we have two points, one for each of height and weight, and eight axes, one for each of our eight observations.
Paring down to three dimensions
Eight dimensions are hard to visualize, so for simplicity's sake, let's pare it down to just three dimensions by picking out three observations to think about. I'll pick the first, the last, and one in the middle. Specifically, I'll pick the observations from when my daughter was four days old, about six months old, and about a year old:
                          4 days old   ~6 months old   ~1 year old
Height (standard units)   -1.26135     0.617255        1.63707
Weight (standard units)   -1.3158      0.728253        1.41777
What do we get when we visualize this sideways data set as a three-dimensional scatter plot? Something like this:
What's going on here? We're looking at points in "person space", where, as Rodgers and Nicewander explain, each axis represents an observation. In this case, there are three observations, so we have three axes. And there are two points, as promised — one for each of height and weight.
If we look at the difference between the two points on the z-axis — that is, the axis for the 07/30/2018 observation — we can see that the darker-colored blue dot is higher up. It must represent the "height" variable, then, with coordinates (-1.26135, 0.617255, 1.63707). That means that the other, lighter-colored blue dot, with coordinates (-1.3158, 0.728253, 1.41777), must represent the "weight" variable.
I've also plotted vectors going from the origin to each of the two points, and these, finally, are what Rodgers and Nicewander mean by "variable vectors"!
The angle between variable vectors
Continuing with the paper:
If the variable vectors are based on centered variables, then the correlation has a relationship to the angle $\alpha$ between the variable vectors (Rodgers 1982): $r = \textrm{cos}(\alpha)$.
Oooh. Okay, so first of all, are our variable vectors "based on centered variables"? From what Google tells me, you center a variable by subtracting the mean from each value of the variable, resulting in a variable with zero mean. The variables we're dealing with here are in standard units, and so the mean is already zero. So, they're already centered! Hooray.
Finding the angle between [-1.26135, 0.617255, 1.63707] and [-1.3158, 0.728253, 1.41777] and taking its cosine, we can compute $r$ to be 0.9938006245545371. Almost 1! That means that, just like last time, we have an almost perfect linear correlation.
It's a bit different from what we got for $r$ last time, which was 0.9910523777994954. But that's because, for the sake of visualization, we decided to only look at three of the observations. To get more accuracy, we can go back to all eight dimensions. We may not be able to visualize them, but we can still measure the angle between them! Doing that, we get 0.9910523777994951, which is the same as we had last time, modulo 0.0000000000000003 worth of numerical imprecision. I'll take it.
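Here's a small sketch of that computation for the three-observation case, using the numbers from the tables above (the eight-dimensional version works the same way, just with longer vectors):

```python
import numpy as np

height = np.array([-1.26135, 0.617255, 1.63707])  # height in standard units
weight = np.array([-1.3158, 0.728253, 1.41777])   # weight in standard units

cos_angle = height @ weight / (np.linalg.norm(height) * np.linalg.norm(weight))
print(cos_angle)  # ≈ 0.9938, matching the value of r above
```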
So, that's way number eight of looking at the correlation coefficient — as the angle between two variable vectors in "person space"!
Why do Rodgers and Nicewander call it "person space"? I wonder if it's because it's common in statistics for an observation — a row in our original table — to correspond to a single person. It seems to also sometimes be called "subject space", "observation space", or "vector space". For instance, here's a stats.SE answer that shows an example contrasting "variable space" — that is, the usual kind of scatter plot, with an axis for each variable — with "subject space".
I had never heard any of these terms before I saw Rodgers and Nicewander's paper, but apparently it's not just me! A 2002 paper by Chong et al. in the Journal of Statistics Education laments that the concept of subject space (as opposed to variable space) often isn't taught:
There are many common misconceptions regarding factor analysis. For example, students do not know that vectors representing latent factors rotate in subject space, rather than in variable space. Consequently, eigenvectors are misunderstood as regression lines, and data points representing variables are misperceived as data points depicting observations. The topic of subject space is omitted by many statistics textbooks, and indeed it is a very difficult concept to illustrate.
And the lack of uniform terminology seems to be part of the problem. Chong et al. get delightfully snarky in their discussion of this:
In addition, the only text reviewed explaining factor analysis in terms of variable space and vector space is Applied Factor Analysis in the Natural Sciences by Reyment and Joreskog (1993). No other textbook reviewed uses the terms "subject space" or "person space." Instead vectors are presented in "Euclidean space" (Joreskog and Sorbom 1979), "Cartesian coordinate space" (Gorsuch 1983), "factor space" (Comrey and Lee 1992; Reese and Lochmüller 1998), and "n-dimensional space" (Krus 1998). The first two phrases do not adequately distinguish vector space from variable space. A scatterplot representing variable space is also a Euclidean space or a Cartesian coordinate space. The third is tautological. Stating that factors are in factor space may be compared to stating that Americans are in America.
For their part, Rodgers and Nicewander want to encourage more people to use this angle-between-variable-vectors interpretation of $r$. They write:
Visually, it is much easier to view the correlation by observing an angle than by looking at how points cluster about the regression line. In our opinion, this interpretation is by far the easiest way to "see" the size of the correlation, since one can directly observe the size of an angle between two vectors. This inside-out space that allows $r$ to be represented as the cosine of an angle is relatively neglected as an interpretational tool, however.
I have mixed feelings about this. On the one hand, yeah, it's easier to just look at one angle between two vectors in observation space (or person space, or vector space, or subject space, or whatever you want to call it) than to have to squint at a whole bunch of points in variable space. On the other hand, for most of us it probably feels pretty strange to have, say, a "July 28, 2017" axis instead of a "height" axis. Moreover, the observation space is really hard to visualize once you get past three dimensions, so it's hard to blame people for not wanting to think about it. I can visualize lots of points, but only a few axes, so using axes to represent observations (which we may have quite a lot of) and points to represent variables (which, when dealing with bivariate correlation, we have two of) seems like a rather backwards use of my cognitive resources! Nevertheless, I'm sure there are times when this approach is handy.
Since August, I finished the final course in the Data 8X sequence and am now a proud haver of the <airquotes>Foundations of Data Science Professional Certificate<airquotes> from <airquotes>BerkeleyX<airquotes>. ↩
When Rodgers and Nicewander speak of a "variable", they mean it in the statistician's sense, meaning something like "feature" (like "height" or "width"), not in the computer scientist's sense. When I say "one-dimensional data set", that's a synonym for "variable". ↩
Posted by Lindsey Kuper Jan 31st, 2019 statistics
| CommonCrawl
Ch 04: Quadratic Equations
Reduce $x^{-2}-10=3x^{-1}$ to quadratic form — BISE Gujranwala(2015)
Show that $x^3-y^3=(x-y)(x-wy)(x-w^2y)$, where $w$ is a complex cube root of unity — BISE Gujranwala(2015)
If $n$ is an odd integer, is $(x+a)$ a factor of $x^n+a^n$? — BISE Gujranwala(2015)
If the roots of $px^2+qx+q=0$ are $\alpha$, $\beta$, then prove that $$\sqrt {\frac{\alpha}{\beta}}+\sqrt {\frac{\beta}{\alpha}}+\sqrt{\frac{q}{p}}=0$$ (a worked sketch appears after the question list below) — BISE Gujranwala(2017), BISE Sargodha(2017)
Solve the following system of equations — BISE Gujranwala(2017) $$\begin{array}{c} x^2-5xy+6y^2=0\\ x^2+y^2=45\end{array}$$
Find the roots of the equation $4x^2+7x-1=0$ — BISE Gujranwala(2017)
When the polynomial $x^3+2x^2+kx+4$ is divided by $x-2$, the remainder is $14$; find $k$. — BISE Gujranwala(2017)
Show that the roots of the equation $x^2-2(m+\frac{1}{m})x+3=0$, $m \neq 0$, will be real — BISE Gujranwala(2017)
Solve the equation $x(x+7)=(2x-1)(x+4)$ by factorization — BISE Sargodha(2015)
If $\alpha$, $\beta$ are the roots of $3x^2-2x+4=0$, find the value of $\frac{\alpha}{\beta}+\frac{\beta}{\alpha}$ — BISE Sargodha(2015)
For what value of $m$ will the roots of the equation $(m+1)x^2+2(m+3)x+m+8=0$ be equal? — BISE Sargodha(2015)
Evaluate $(1+w-w^2)(1-w+w^2)$, where $w$ is a complex cube root of unity — BISE Sargodha(2016)
Discuss the nature of the roots of the equation $x^2-5x+6=0$ — BISE Sargodha(2016)
Show that the roots of $x^2+(mx+c)^2=a^2$ will be equal if $c^2=a^2(1+m^2)$ — BISE Sargodha(2016)
When $x^3+kx^2-7x+6$ is divided by $x+2$, the remainder is $-4$. Find the value of $k$. — BISE Lahore(2017)
Prove that $1+w+w^2=0$, where $w$ is a complex cube root of unity — BISE Lahore(2017)
Solve $3^{2x-1}-12\cdot 3^x+81=0$ — BISE Lahore(2017)
When $x^4+2x^3+kx^2+3$ is divided by $x-2$, the remainder is $1$. Find the value of $k$ — FBISE (2016)
If $\alpha$, $\beta$ are the roots of the equation $5x^2-x-2=0$ then form the equation whose roots are $\frac{3}{\alpha}$ and $\frac{3}{\beta}$. — FBISE (2016)
Show that the roots of the equation $(x-a)(x-b)+(x-b)(x-c)+(x-c)(x-a)=0$ are real, also show that the roots will be equal only if $a=b=c$ — FBISE (2017)
Find the values of $a$ and $b$ if $-2$ and $2$ are the roots of the polynomial $x^3-4x^2+ax+b$ — FBISE (2017)
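The following worked sketch (added here for illustration; it is not part of the board questions) shows where the identity in the $px^2+qx+q=0$ question above comes from. Since $\alpha+\beta=-\frac{q}{p}$ and $\alpha\beta=\frac{q}{p}$, the usual formal manipulation of the radicals gives $$\sqrt{\frac{\alpha}{\beta}}+\sqrt{\frac{\beta}{\alpha}}=\frac{\alpha+\beta}{\sqrt{\alpha\beta}}=\frac{-q/p}{\sqrt{q/p}}=-\sqrt{\frac{q}{p}},$$ and therefore $$\sqrt{\frac{\alpha}{\beta}}+\sqrt{\frac{\beta}{\alpha}}+\sqrt{\frac{q}{p}}=0.$$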
FSc, FSc Part1, Important Questions FSc 1
by M. Izhar | CommonCrawl |
Learn Latex – Mathematics for Machine Learning
June 28, 2018 Edzai Conilias Zvobwo
Category: Core Maths Skills,Mathematics
1. Delimiters
See how the delimiters are of reasonable size in these examples
\left(a+b\right)\left[1-\frac{b}{a+b}\right]=a\,,
\sqrt{|xy|}\leq\left|\frac{x+y}{2}\right|,
even when there is no matching delimiter
\int_a^bu\frac{d^2v}{dx^2}\,dx
=\left.u\frac{dv}{dx}\right|_a^b
-\int_a^b\frac{du}{dx}\frac{dv}{dx}\,dx.
2. Spacing
Differentials often need a bit of help with their spacing as in
\iint xy^2\,dx\,dy
=\frac{1}{6}x^2y^3,
whereas vector problems often lead to statements such as
u=\frac{-y}{x^2+y^2}\,,\quad
v=\frac{x}{x^2+y^2}\,,\quad\text{and}\quad
w=0\,.
Occasionally one gets horrible line breaks when using a list in mathematics such as listing the first twelve primes \(2,3,5,7,11,13,17,19,23,29,31,37\)\,.
In such cases, perhaps include \mathcode`\,="213B inside the inline maths environment so that the list breaks: \(\mathcode`\,="213B 2,3,5,7,11,13,17,19,23,29,31,37\)\,.
Be discerning about when to do this as the spacing is different.
3. Arrays
Arrays of mathematics are typeset using one of the matrix environments as
\begin{bmatrix}
1 & x & 0 \\
0 & 1 & -1
\end{bmatrix}\begin{bmatrix}
1 \\
y \\
1
\end{bmatrix}
=\begin{bmatrix}
1+xy \\
y-1
\end{bmatrix}.
Case statements use cases:
|x|=\begin{cases}
x, & \text{if }x\geq 0\,, \\
-x, & \text{if }x< 0\,.
\end{cases}
Many arrays have lots of dots all over the place as in
\begin{matrix}
-2 & 1 & 0 & 0 & \cdots & 0 \\
1 & -2 & 1 & 0 & \cdots & 0 \\
0 & 1 & -2 & 1 & \cdots & 0 \\
0 & 0 & 1 & -2 & \ddots & \vdots \\
\vdots & \vdots & \vdots & \ddots & \ddots & 1 \\
0 & 0 & 0 & \cdots & 1 & -2
\end{matrix}
4. Equation arrays
In the flow of a fluid film we may report
\begin{eqnarray}
u_\alpha & = & \epsilon^2 \kappa_{xxx}
\left( y-\frac{1}{2}y^2 \right),
\label{equ} \\
v & = & \epsilon^3 \kappa_{xxx} y\,,
\label{eqv} \\
p & = & \epsilon \kappa_{xx}\,.
\label{eqp}
\end{eqnarray}
Alternatively, the curl of a vector field $(u,v,w)$ may be written
with only one equation number:
\omega_1 & = &
\frac{\partial w}{\partial y}-\frac{\partial v}{\partial z}\,,
\nonumber \\
\frac{\partial u}{\partial z}-\frac{\partial w}{\partial x}\,,
\label{eqcurl} \\
\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}\,.
\nonumber
Whereas a derivation may look like
\begin{eqnarray*}
(p\wedge q)\vee(p\wedge\neg q) & = & p\wedge(q\vee\neg q)
\quad\text{by distributive law} \\
& = & p\wedge T \quad\text{by excluded middle} \\
& = & p \quad\text{by identity}
\end{eqnarray*}
5. Functions
Observe that trigonometric and other elementary functions are typeset
properly, even to the extent of providing a thin space if followed by
a single letter argument:
\exp(i\theta)=\cos\theta +i\sin\theta\,,\quad
\sinh(\log x)=\frac{1}{2}\left( x-\frac{1}{x} \right).
With sub- and super-scripts placed properly on more complicated
functions,
\lim_{q\to\infty}\|f(x)\|_q
=\max_{x}|f(x)|,
and large operators, such as integrals and sums:
e^x & = & \sum_{n=0}^\infty \frac{x^n}{n!}
\quad\text{where }n!=\prod_{i=1}^n i\,, \\
\overline{U_\alpha} & = & \bigcap_\alpha U_\alpha\,.
In inline mathematics the scripts are correctly placed to the side in
order to conserve vertical space, as in
\(
1/(1-x)=\sum_{n=0}^\infty x^n.
\)
6. Accents
Mathematical accents are performed by a short command with one
argument, such as
\tilde f(\omega)=\frac{1}{2\pi}
\int_{-\infty}^\infty f(x)e^{-i\omega x}\,dx\,,
\dot{\vec \omega}=\vec r\times\vec I\,.
7. Command definition
\newcommand{\Ai}{\operatorname{Ai}}
The Airy function, $\Ai(x)$, may be incorrectly defined as this integral
\Ai(x)=\int\exp(s^3+isx)\,ds\,.
\newcommand{\D}[2]{\frac{\partial #2}{\partial #1}}
\newcommand{\DD}[2]{\frac{\partial^2 #2}{\partial #1^2}}
\renewcommand{\vec}[1]{\boldsymbol{#1}}
This vector identity serves nicely to illustrate two of the new commands:
\vec\nabla\times\vec q
=\vec i\left(\D yw-\D zv\right)
+\vec j\left(\D zu-\D xw\right)
+\vec k\left(\D xv-\D yu\right).
8. Theorems et al.
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{definition}[theorem]{Definition}
\begin{definition}[right-angled triangles] \label{def:tri}
A right-angled triangle is a triangle whose sides of length~\(a\), \(b\) and~\(c\), in some permutation of order, satisfy \(a^2+b^2=c^2\).
\end{definition}
\begin{lemma}
The triangle with sides of length~\(3\), \(4\) and~\(5\) is right-angled.
\end{lemma}
This lemma follows from Definition~\ref{def:tri} as \(3^2+4^2=9+16=25=5^2\).
\begin{theorem}[Pythagorean triplets] \label{thm:py}
Triangles with sides of length \(a=p^2-q^2\), \(b=2pq\) and \(c=p^2+q^2\) are right-angled triangles.
\end{theorem}
Prove this Theorem~\ref{thm:py} by the algebra
\(a^2+b^2 =(p^2-q^2)^2+(2pq)^2
=p^4-2p^2q^2+q^4+4p^2q^2
=p^4+2p^2q^2+q^4
=(p^2+q^2)^2 =c^2\).
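These fragments assume the amsmath and amsthm packages are loaded. As a rough sketch of a document into which they can be pasted (this skeleton is an assumption, not part of the original examples):
\documentclass{article}
\usepackage{amsmath,amssymb} % provides bmatrix, cases, \text, \operatorname, \boldsymbol
\usepackage{amsthm}          % used together with the \newtheorem declarations of section 8
\begin{document}
\begin{equation}
\sqrt{|xy|}\leq\left|\frac{x+y}{2}\right|.
\end{equation}
\end{document}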
Edzai Conilias Zvobwo
https://mathsgee.com
Edzai Conilias Zvobwo is passionate about empowering Africans through mathematics, problem-solving techniques and media. As such, he founded MathsGee. Through this organisation, he has helped create an ecosystem for disseminating information, training, and supporting STEM education to all African people.
A maths evangelist who teaches mathematical thinking as a life skill, Edzai's quest has seen him being named the SABC Ambassador for STEM; he has been invited to address Fortune 500 C-suite executives at the Mobile 360 North America; was nominated to represent Southern Africa at the inaugural United Nations Youth Skills Day in New York; was invited to be a contributor to the World Bank Group Youth Summit in 2016; has won the 2014 SADC Protocol on Gender and Development award for his contribution to women's empowerment in education; and has partnered with local and global firms in STEM interventions.
| CommonCrawl
June 2018, 23(4): 1601-1621. doi: 10.3934/dcdsb.2018063
Palindromic control and mirror symmetries in finite difference discretizations of 1-D Schrödinger equations
Katherine A. Kime ,
Department of Mathematics and Statistics, University of Nebraska Kearney, Kearney, Nebraska 68849, USA
* Corresponding author: Katherine A. Kime
Received May 2017 Published June 2018 Early access February 2018
We consider discrete potentials as controls in systems of finite difference equations which are discretizations of a 1-D Schrödinger equation. We give examples of palindromic potentials which have corresponding steerable initial-terminal pairs which are not mirror-symmetric. For a set of palindromic potentials, we show that the corresponding steerable pairs that satisfy a localization property are mirror-symmetric. We express the initial and terminal states in these pairs explicitly as scalar multiples of vector-valued functions of a parameter in the control.
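The paper's actual discretization and control construction are not reproduced here; purely as a generic illustration of a finite-difference 1-D Schrödinger system with a discrete, palindromic potential, a Crank-Nicolson time-stepping sketch might look as follows (grid sizes, boundary conditions, and the potential are assumptions, not the author's setup):

```python
import numpy as np

# i u_t = -u_xx + V(x) u on (0,1), Dirichlet boundaries, units chosen so constants are 1
J, dx, dt = 101, 1.0 / 102, 1e-4
x = np.linspace(dx, 1.0 - dx, J)

V = 50.0 * np.minimum(x, 1.0 - x)        # palindromic: V[j] == V[J-1-j]
assert np.allclose(V, V[::-1])

# Discrete Hamiltonian H = -D_xx + diag(V)
H = (np.diag(2.0 / dx**2 + V)
     + np.diag(-np.ones(J - 1) / dx**2, 1)
     + np.diag(-np.ones(J - 1) / dx**2, -1))

# Crank-Nicolson (Cayley) step: (I + i dt/2 H) u^{n+1} = (I - i dt/2 H) u^n
A = np.eye(J) + 0.5j * dt * H
B = np.eye(J) - 0.5j * dt * H

u = np.exp(-((x - 0.3) ** 2) / 0.005).astype(complex)  # some initial state
u /= np.linalg.norm(u)
for _ in range(500):
    u = np.linalg.solve(A, B @ u)

print(np.linalg.norm(u))  # ≈ 1.0: the scheme preserves the discrete L2 norm
```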
Keywords: Mirror symmetry, palindromic control, potential, Schrödinger, complex-valued matrix.
Mathematics Subject Classification: Primary: 93B03, 93B40, 93C20; Secondary: 81Q05, 81Q93.
Citation: Katherine A. Kime. Palindromic control and mirror symmetries in finite difference discretizations of 1-D Schrödinger equations. Discrete & Continuous Dynamical Systems - B, 2018, 23 (4) : 1601-1621. doi: 10.3934/dcdsb.2018063
Figure 1. Example 1. $\alpha$-Localized, Mirror-Symmetric
Figure 2. Example 2. Not Localized, Not Mirror-Symmetric
Figure 3. Example 3. Localized with Equal Degree of Restriction Equal to 1, Not $\alpha$-Localized, Not Mirror-Symmetric
Katherine A. Kime | CommonCrawl |
https://doi.org/10.1364/BOE.440975
Suppression of motion artifacts in intravascular photoacoustic image sequences
Zheng Sun1,2,* and Jiejie Du1,2
1Department of Electronic and Communication Engineering, North China Electric Power University, Baoding 071003, Hebei, China
2Hebei Key Laboratory of Power Internet of Things Technology, North China Electric Power University, Baoding 071003, Hebei, China
*Corresponding author: [email protected]
Zheng Sun https://orcid.org/0000-0002-7066-2320
Zheng Sun and Jiejie Du, "Suppression of motion artifacts in intravascular photoacoustic image sequences," Biomed. Opt. Express 12, 6909-6927 (2021)
Intravascular photoacoustic (IVPA) imaging is a catheter-based imaging modality for the assessment of atherosclerotic plaques. Successful application of IVPA for in vivo coronary arterial imaging requires overcoming the challenge of motion artifacts associated with the cardiac cycle. We propose a method for correcting artifacts owing to cardiac motion, which are observed in sequential IVPA images acquired during continuous pullback of the imaging catheter. The method groups raw photoacoustic signals into subsets corresponding to similar phases of the cardiac cycle. Sequential images representing the initial pressure distribution on the vascular cross-sections are then reconstructed from the clustered frames of signals by time reversal. Results on simulation data demonstrate the efficacy of the method in suppressing motion artifacts. Qualitative and quantitative evaluations indicate an enhancement of image quality. Comparison results reveal that the method is computationally more efficient in motion correction than image-based gating.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Photoacoustic tomography (PAT) is a rapidly developing imaging modality that allows structural and functional imaging of biological tissues with high penetration depth and high optical contrast [1]. Intravascular photoacoustic (IVPA) imaging is a typical catheter-based application of PAT. As a complementary tool to intravascular ultrasound (IVUS) for the assessment of atherosclerosis, it is capable of providing multi-scale anatomical, functional, and molecular information of vessels by combining noninvasive PAT with endoscopic detection. It enables potential applications in the diagnosis and interventional treatment of cardiovascular diseases [2,3].
Successful application of an IVPA system for in vivo intracoronary imaging is challenging because motion artifacts owing to voluntary and involuntary motion are inevitable. Voluntary motion, such as body movement, can be avoided by decreasing the image acquisition time and keeping the imaging object still during acquisition [4]. In whole-body PAT for small animals, animal movement is significantly reduced through mechanical clamping or fixation [5–7]. This scheme is only applicable to specific scenarios because clamping is not suitable for all body parts. Involuntary motion is associated with the heartbeat, breathing, pulsating blood flow within the vascular lumen, and arterial vasomotion. It leads to undesired motion between the ultrasonic transducer and the vessel wall, with transversal and longitudinal components. The transversal motion shifts (translates and rotates) the vessel structures from one slice to the next, while the longitudinal motion induces a proximal or distal displacement additional to the pullback. As a consequence, the vessel cross-sections are not equally spaced in the longitudinal direction along the pullback trajectory and the geometrical description of the vessel wall is distorted. The motion artifacts are visible as saw-tooth shaped vessel wall boundaries in the volumetric images reconstructed from pullback recordings. The artifacts degrade image quality and subsequently make identification and interpretation of structures such as vessels and atherosclerotic plaques difficult. Furthermore, the precision of quantitative assessment of tissue properties and of 3-D vessel rendering is reduced.
As an imaging modality with a high frame rate, PAT enables rendering of 2-D or 3-D images via excitation of an entire volume with a single nanosecond laser pulse, which avoids motion artifacts in a single frame [8]. However, for studies of vessel morphology, plaque characterization, and other purposes requiring 3-D imagery, acquisition and analysis of multiple frames rendering multiple vascular cross-sections are required. Accordingly, motion correction is relevant in IVPA imaging of coronary arterial vessels.
Many efforts have been made to mitigate motion artifacts in PAT applications involving multi-frame data analysis. A simple and generally effective way to account for motion caused by the heartbeat or breathing is to gate the sequences according to an electrocardiogram (ECG) or respiratory triggering signal, either prospectively or retrospectively. On-line prospective gating activates data acquisition during the same phase of each cycle by employing a triggering or synchronization scheme. Off-line retrospective gating captures images or raw signals continuously over the cycles and records ECG or respiratory waveforms simultaneously. After acquisition, images or signals collected at the same temporal location (cardiac/respiratory phase) are selected according to the ECG or respiratory signals. This technique is essential in many established imaging systems that need to locate objects accurately [9–12]. For instance, image-based gating has been routinely applied to suppress motion artifacts present in continuous pullback IVUS and intravascular optical coherence tomography (IV-OCT) image sequences [13–17]. One frame per cycle is extracted to form a subsequence by tracking the cyclic change of the image intensity or the vascular lumen contour across the entire pullback. In experiments of photoacoustic imaging (PAI), retrospective gating has been adopted to suppress respiratory motion artifacts in a whole-body PAT system for small animals [18,19]. It is implemented in two ways, in hardware or in software. Hardware respiratory gating collects raw PA signals while the imaging object is breathing freely and simultaneously monitors the respiratory waveform with external equipment. The collected PA signals are aligned according to the respiratory phases, and the images in a complete respiratory cycle are finally reconstructed from the PA measurements in the same respiratory phases [18]. This method requires breathing training for the imaging object because quick or uneven breathing reduces the accuracy of triggering. Software respiratory gating extracts a signal that encodes the respiratory phases from the PA measurements themselves, and the motion is subsequently corrected based on this signal prior to image reconstruction [19]. When the pause between two breaths exceeds the length of a single breath, it is necessary to establish a criterion to distinguish between a dynamic and a static frame according to prior knowledge of respiratory characteristics.
Frame motion compensation (FMC) and inter-frame motion compensation (IFMC) have been used to reduce motion artifacts in frame-averaged PA images [20]. Both methods determine motion vectors between frames by block matching and three-step search. FMC corrects the error from motion artifacts by comparing past images with the most recent image as the reference image. However, large motion might not be detected because the motion vector, which accumulates over time, can easily exceed the detectable range. IFMC overcomes this limitation by comparing two consecutive frames. The accuracy of both methods depends strongly on the search step, which is difficult to select properly in an automatic manner.
In spectroscopic PAT of the heart, images are acquired under the multiple wavelengths excitation at separate time points. The cardiac motion during this process leads to blurring in the images. Motion clustering has been demonstrated effective in reducing such motion blurring [21]. A clustering algorithm such as k-means is utilized to separate a sequence of single-pulse images at multiple excitation wavelengths into clusters corresponding to different stages of the cardiac cycle. The number of clusters should be selected properly based on the severity of blurring, signal-to-noise ratio (SNR), and the performance of the clustering algorithm.
In addition, besides analytical inversion formulas such as back-projection (BP), model-based schemes have been demonstrated to be efficient in reconstructing PAT images with high quality [22]. A model physically describing the forward acoustic problem is established, which outputs the theoretical acoustic pressure induced by optical absorbers in tissues. The initial pressure or optical energy deposition is recovered by iteratively minimizing the error between the measured acoustic pressure and the theoretical acoustic pressure calculated by the forward model. This scheme is capable of including all linear effects in the forward model. By incorporating the motion into the forward model, estimation of the motion parameters and reconstruction of the desired images can be carried out simultaneously [23].
In this paper, we propose a novel method for suppressing motion artifacts associated with the cardiac cycle in IVPA pullback volumes. To the best of our knowledge, this is the first paper focusing on motion artifact correction for volumetric IVPA images. The raw PA signals collected by the ultrasonic detector in successive vascular cross-sections are grouped by clustering into subsets that correspond to similar phases of the cardiac cycle. The sequential images representing the initial pressure distribution in vascular cross-sections are reconstructed from the selected frames of signals by time reversal (TR). We tested our method on simulation data. Based on the results, we analyzed the influence of the objective function threshold in the clustering algorithm on image reconstruction. In addition, we conducted experiments comparing our method with image-based gating to demonstrate its superiority in suppressing IVPA motion artifacts.
The remainder of this paper is organized as follows. Sect. 2 depicts the presented method in detail. Sect. 3 provides the demonstration and comparison results. Sect. 4 gives related discussions and Sect. 5 concludes the paper with a summary.
2. Method
2.1 Principle of IVPA imaging
As illustrated in Fig. 1, the procedure of IVPA imaging is analogous to IVUS: a catheter is inserted into the vascular lumen and advanced to the distal end under the guidance of X-ray angiography. During pullback of the catheter, the probe mounted on its tip emits short laser pulses (∼ns) that are absorbed by the surrounding tissues, leading to a temperature rise. The thermo-elastic effect then induces a pressure rise that is proportional to the optical energy deposition. The pressure rise propagates as wide-band (∼MHz) ultrasonic waves, that is, PA waves, to the tissue surface. The ultrasonic transducers on the probe scan the surrounding tissues circumferentially, collecting the photoacoustically generated pressure along a circular trajectory parallel to the imaging plane that is perpendicular to the catheter. The ultrasonic detector is idealized as a point-like detector by ignoring its aperture effect. A full-view (360°) scan provides measurements at M angles, {θ1,θ2,…,θM}, with N points sampled along each scanning radius. Finally, images representing the spatially varying optical energy deposition or initial pressure in vascular cross-sections are reconstructed from the measured PA signals by acoustic inversion [24,25]. Moreover, the solution of the optical inversion enables quantitative imaging by recovering optical properties (absorption coefficient μa and scattering coefficient μs), the thermoelastic (Grüneisen) coefficient, and functional properties including blood oxygenation (sO2) and the concentration of chromophores from the optical energy deposition or PA measurements [26,27].
Fig. 1. Schematic diagram of IVPA imaging. (a) Longitudinal view (L-view) of a vessel segment to be imaged; (b) Transversal view of an imaging plane; (c) Stacked transversal images in a temporal order; (d) Generation of cardiac cycle-dependent motion artifacts in a time-axis view of pullback volumes.
During acquisition of PA signals in successive cross-sections, the catheter remains in the center of the transversal imaging plane, as sketched in Fig. 1(d). The cardiac cycle-dependent motion causes misalignment of successive slices along the sequence of cross-sectional images, that is, B-scan images in Cartesian coordinates, as well as saw-tooth shaped vessel wall boundaries in time-axis views of IVPA pullback volumes. Moreover, the variations of the luminal shape are more prominent in the vertical direction than in the horizontal direction.
2.2 Motion suppression by signal clustering
Suppose that a single pullback produces W slices to be assessed. Each slice is discretized into M×N sampling locations. The PA signals collected in the kth slice are recorded in an M×N matrix,
(1)$${{\boldsymbol P}_k} = \left[ {\begin{array}{cccc} {{{\boldsymbol p}_{11}}}&{{{\boldsymbol p}_{12}}}& \cdots &{{{\boldsymbol p}_{1N}}}\\ {{{\boldsymbol p}_{21}}}&{{{\boldsymbol p}_{22}}}& \cdots &{{{\boldsymbol p}_{2N}}}\\ \vdots & \vdots & \ddots & \vdots \\ {{{\boldsymbol p}_{M1}}}&{{{\boldsymbol p}_{M2}}}& \cdots &{{{\boldsymbol p}_{MN}}} \end{array}} \right], $$
where k = 1,2,…,W, pij denotes a pressure vector collected at the jth location in the ith measuring angle, i = 1,2,…,M, and j = 1,2,…,N. A matrix P constitutes a signal frame and a single pullback produces W frames in total. The correlation matrix of these signal frames is obtained by
(2)$${\boldsymbol C} = \left[ {\begin{array}{cccc} {{\rho_{11}}}&{{\rho_{12}}}& \cdots &{{\rho_{1W}}}\\ {{\rho_{21}}}&{{\rho_{22}}}& \cdots &{{\rho_{2W}}}\\ \vdots & \vdots & \ddots & \vdots \\ {{\rho_{W1}}}&{{\rho_{W2}}}& \cdots &{{\rho_{WW}}} \end{array}} \right], $$
(3)$${\rho _{ij}} = \frac{{\textrm{|Cov}({{\boldsymbol P}_i},{{\boldsymbol P}_j})|}}{{\sqrt {D({{\boldsymbol P}_i})} \cdot \sqrt {D({{\boldsymbol P}_j})} }}. $$
Here, i and j range from 1 to W, Pi and Pj denote, respectively, the ith and jth frame, ρij is the correlation coefficient between Pi and Pj, D(Pi) and D(Pj) are the variances of Pi and Pj, and Cov(Pi, Pj) is the covariance of Pi and Pj. The correlation coefficients in C are rearranged into a 1-D array row by row, that is, {ρ11, ρ12,…, ρ1W, ρ21, ρ22, …, ρ2W, …, ρW1, ρW2, …, ρWW}. For simplicity, it is denoted as a data set, F = {f1, f2, f3, …, fQ}, where Q = W×W.
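As a minimal illustration of Eqs. (2) and (3) (the array shapes and random data below are assumptions made for the sketch, not the authors' implementation), each frame can be flattened, the absolute correlation between every pair of frames computed, and the W×W matrix unrolled row by row into the data set F:

```python
import numpy as np

W, M, N = 6, 256, 512                    # assumed numbers of frames, angles, and samples
rng = np.random.default_rng(0)
frames = rng.standard_normal((W, M, N))  # stand-in for the measured signal frames P_k

flat = frames.reshape(W, -1)             # one row per frame
C = np.abs(np.corrcoef(flat))            # |rho_ij| between frames i and j, as in Eq. (3)
F = C.ravel()                            # row-by-row rearrangement, Q = W * W entries

print(C.shape, F.shape)                  # (6, 6) (36,)
```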
The entries in F are grouped by affinity propagation (AP) clustering [28,29] into subsets corresponding to similar phases in a cardiac cycle. The detailed steps are as follows.
Step 1. Initialization.
The iteration counter is initialized as t = 0. The responsibility matrix Rt = [rt(i, k)]Q×Q and the availability matrix At = [at(i, k)]Q×Q are initialized as zero matrices, that is, r0(i, k) = a0(i, k) = 0. rt(i, k) denotes the responsibility sent from fi to candidate cluster center fk at the tth iteration, reflecting the accumulated evidence for how well suited fk is to serve as the cluster center for fi. at(i, k) denotes the availability sent from candidate cluster center fk to fi at the tth iteration, reflecting the accumulated evidence for how appropriate it would be for fi to choose fk as its cluster center [28].
Step 2. Calculation of the similarity matrix.
The similarity st(i, k) is determined by
(4)$${s_t}(i,k) ={-} {|{{f_i} - {f_k}} |^2}, $$
which indicates how well fk is suited to be the cluster center for fi. The similarity matrix St at the tth iteration is a collection of real-valued similarities among all data points in F.
Step 3. Updating of responsibility matrix and availability matrix.
The responsibility matrix and availability matrix are updated as
(5)$${r_{t + 1}}(i,k) = \left\{ {\begin{array}{ll} {{s_t}(i,k) - \mathop {\max }\limits_{_{k^{\prime}\textrm{s}\textrm{.t}\textrm{.}k^{\prime} \ne k}} \{{{a_t}(i,k^{\prime}) + {r_t}(i,k^{\prime})} \}},&i \ne k\\ {{s_t}(i,k) - \mathop {\max }\limits_{_{k^{\prime}\textrm{s}\textrm{.t}\textrm{.}k^{\prime} \ne k}} \{{{s_t}(i,k^{\prime})} \}},&i = k \end{array}} \right.$$
(6)$${a_{t + 1}}(i,k) = \left\{ {\begin{array}{ll} {\min \left\{ {0,{r_{t + 1}}(k,k) + \sum\limits_{i^{\prime}\textrm{s}\textrm{.t}\textrm{.}i^{\prime} \notin \{ i,k\} } {\max ({0,{r_{t + 1}}(i^{\prime},k)} )} } \right\}},&i \ne k\\ {\sum\limits_{i^{\prime}\textrm{s}\textrm{.t}\textrm{.}i^{\prime} \ne k} {\max } \{{0,{r_{t + 1}}(i^{\prime},k)} \}},&i = k \end{array}} \right., $$
where i', k', i, and k range from 1 to Q, and rt+1(i, k) and at+1(i, k) are, respectively, the responsibility and availability between fi and fk at the (t+1)th iteration.
Step 4. Attenuation of the responsibilities and availabilities.
To avoid numerical instability, the responsibilities and availabilities are attenuated by a damping factor,
(7)$$\left\{ {\begin{array}{l} {{{\hat{r}}_{t + 1}}(i,k) = \lambda {r_t}(i,k) + (1 - \lambda ){r_{t + 1}}(i,k)}\\ {{{\hat{a}}_{t + 1}}(i,k) = \lambda {a_t}(i,k) + (1 - \lambda ){a_{t + 1}}(i,k)} \end{array}} \right., $$
where λ denotes the damping factor ranging in [0.5,1]. In this study, we set it as 0.5.
Step 5. Determination of clustering results.
The sum of the updated responsibility and availability is calculated,
(8)$$e = {\hat{a}_{t + 1}}(i,k) + {\hat{r}_{t + 1}}(i,k). $$
For each fi, the candidate k that maximizes e identifies its exemplar: if the maximizing k equals i, then fi is itself a cluster center; otherwise, fk is the cluster center of fi.
Step 6. Determination of termination of the iteration.
Whether the iteration is terminated or not is determined by the following objective function,
(9)$$J\textrm{ = }\sum\limits_{f \in {{\boldsymbol h}_i},i = 1}^H {|f - {g_i}{|^2}}, $$
where f denotes a sample in a cluster, hi is the cluster centered on gi, and H is the number of clusters. If J falls below the threshold, the iteration is terminated and the clustering results are output; otherwise, let t←t+1 and return to Step 2.
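As a hedged illustration of the clustering stage (Steps 1-6 above), the sketch below uses scikit-learn's AffinityPropagation, which applies the negative squared Euclidean similarity of Eq. (4) and damped responsibility/availability updates analogous to Eqs. (5)-(7) with damping λ = 0.5. Its stopping rule is based on label stability rather than the objective-function threshold of Eq. (9), it additionally uses a diagonal "preference" value not present in Eq. (4), and the mapping from clustered correlation values back to slices shown at the end is an assumption made for illustration, not the authors' exact procedure. The toy correlation matrix is also synthetic.

```python
# Hedged sketch of the AP clustering stage; all data and the slice-grouping rule
# below are illustrative assumptions, not the authors' implementation.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
W = 10
phase = (np.arange(W) % 5) / 5.0                        # toy cardiac phase of each slice
C = 1.0 - np.abs(phase[:, None] - phase[None, :])       # toy correlation matrix
C += 0.01 * rng.standard_normal((W, W))                 # small jitter
F = C.ravel()                                           # data set F, Q = W*W values

ap = AffinityPropagation(damping=0.5, affinity="euclidean", random_state=0)
labels = ap.fit_predict(F.reshape(-1, 1))               # one cluster label per entry of F

# One illustrative way (not spelled out in the text) to carry the clustering back
# to slices: group slices by the cluster of their correlation with the first
# (end-diastolic) frame, i.e. the first row of C.
slice_labels = labels.reshape(W, W)[0]
gated = np.flatnonzero(slice_labels == slice_labels[0])
print(gated)   # indices of slices assigned to the same phase as slice 0
```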
After clustering, the frames of signals collected in the same cardiac phases are selected. Finally, the images representing the initial pressure distribution in the vascular cross-sections are reconstructed from these frames of signals with the TR reconstruction approach [30]. The complete process of the proposed method is illustrated in Fig. 2.
Fig. 2. Flowchart of complete process of our method.
2.3 Performance evaluation
We validated our method on simulation data, given the lack of continuous pullback volumetric data acquired on in vivo dynamic vessels owing to the limitations of our experimental conditions and set-up. We implemented the method in MATLAB (R2018a, The MathWorks, Inc., Natick, Massachusetts) on a laptop configured with a 2.5 GHz Intel Core i5-10300H CPU, 8 GB RAM, and 64-bit Windows 10 as the operating system.
2.3.1 Simulated image preparation
We constructed computer-generated phantoms that mimic coronary arterial vessels containing different tissue types. Figure 3 shows representative examples of vascular cross-sections. Coronary arterial vessels follow the cyclic dynamics of the heart, resulting in periodic variations in the cross-sectional area of the vascular lumen. For each phantom, we generated successive cross-sections along the long axis of the lumen corresponding to different time points in the cycles, based on the periodic change of the luminal area. We assumed that the first frame in a pullback sequence is acquired at end-diastole, at which moment the luminal area reaches its minimum. Thus, we determined the luminal area at a time-point n in the sequence by [31]
(10)$$S(n) = \left\{ {\begin{array}{lr} {A\sin (\mathrm{\pi }Rn)\exp ( - \mathrm{\pi }\alpha n) + {S_0},}&{\textrm{0 < }n < 1/R} \\ {S(n - 1/R)},&{n \ge 1/R} \end{array}} \right., $$
where A is a constant controlling the luminal area, R is the heart rate in Hz (beats per second), α is a constant determining the time at which the luminal area reaches its maximum, and S0 is the minimal luminal area at end-diastole. We obtained the vessel wall contour at time-point n+1 by expanding outward or contracting inward the contour at time-point n, which is represented by discrete points ${{\boldsymbol V}_{i,n}} = ({l_{i,n}},{\theta _{i,n}})$. The polar coordinates of a point in the contour at time-point n+1, ${{\boldsymbol V}_{i,n + 1}} = ({l_{i,n + 1}},{\theta _{i,n + 1}})$, are determined by
(11)$$\left\{ {\begin{array}{l} {{l_{i,n + 1}} = {l_{i,n}}\lambda \sqrt {S(n + 1)/S(n)} }\\ {{\theta_{i,n + 1}} = {\theta_{i,n}}} \end{array}} \right., $$
where l denotes the polar radius, θ denotes the polar angle, and λ is a proportional factor controlling the extent to which the contour expands or contracts. The value of λ depends on the tissue type. We set λ = 1 for the lumen, λ < 1 for the calcified plaques owing to their poorer elasticity than the lumen, and λ > 1 for the fibrosis-lipid plaques owing to their better elasticity than the lumen. Accordingly, the cross-sectional model at each time-point in the cycle is automatically generated from the one at end-diastole.
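A minimal sketch of the dynamic geometry in Eqs. (10)-(11) follows; the constants (A, R, α, S0), the initial contour radius, and the treatment of n as time in seconds are assumptions made for illustration and are not the authors' exact values.

```python
# Hedged sketch of Eqs. (10)-(11); all constants below are illustrative only.
import numpy as np

def luminal_area(n, A=2.0, R=1.2, alpha=0.5, S0=4.0):
    """Periodic luminal area S(n); n is treated here as time in seconds (assumption)."""
    n = np.asarray(n, dtype=float) % (1.0 / R)            # periodic extension, Eq. (10)
    return A * np.sin(np.pi * R * n) * np.exp(-np.pi * alpha * n) + S0

def next_contour(l, theta, S_now, S_next, lam=1.0):
    """Radial scaling of a contour between consecutive time points, Eq. (11)."""
    return l * lam * np.sqrt(S_next / S_now), theta

theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)    # contour sampling angles
l = np.full_like(theta, 1.5)                              # initial radius in mm (illustrative)
dt = 1.0 / 24.0                                           # frame interval at 24 fps (Section 3.1)
l_next, _ = next_contour(l, theta, luminal_area(0.0), luminal_area(dt), lam=1.0)
print(float(l_next[0]))                                   # slightly dilated radius near end-diastole
```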
We simulated the sequential frames of spatially varying PA signals in vascular cross-sections by inputting the generated cross-sectional geometrical models into our previously developed endoscopic PAT simulation platform [31]. Table 1 provides the optical and acoustic property parameters of the vessel phantoms defined by referring to the histological findings [32,33]. The speed of sound and density of each tissue type follow Gaussian distributions based on the values shown in the table.
Table 1. Parameters of optical and acoustic properties of vessel phantoms for forward IVPA simulation
Considering that motion artifacts associated with the cardiac cycle are not visually prominent in successive transversal images, we utilized time-axis views, that is, L-views, of the pullback volumes to facilitate analyzing motion artifacts. We obtained vertical and horizontal L-views as illustrated in Fig. 4.
2.3.2 Figures of Merit
We utilized the dissimilarity matrix (DM), average dissimilarity (AD), and average inter-frame dissimilarity (AIFD) as quantitative metrics to evaluate the quality of reconstructed image sequences. For a W-frame sequence {I1,I2,…,IW}, a W×W-dimensional DM, that is, DW×W = [di,j], is constructed by pairwise comparison of the images [17],
(12)$${d_{i,j}} = 1 - \frac{{\sum\limits_{k = 1}^{Width} {\sum\limits_{l = 1}^{Height} {|{{I_i}({k,l} )- {\mu_i}} |\cdot |{{I_j}({k,l} )- {\mu_j}} |} } }}{{\sqrt {\sum\limits_{k = 1}^{Width} {\sum\limits_{l = 1}^{Height} {{{[{{I_i}({k,l} )- {\mu_i}} ]}^2}} \sum\limits_{k = 1}^{Width} {\sum\limits_{l = 1}^{Height} {{{[{{I_j}({k,l} )- {\mu_j}} ]}^2}} } } } }}, $$
where i and j range from 1 to W, Ii and Ij are, respectively, the ith and jth image frame of dimensions Width×Height, di,j is the dissimilarity between Ii and Ij, Ii(k,l) and Ij(k,l) are the gray-levels of pixel (k, l) in Ii and Ij, respectively, and μi and μj are the average gray-levels of Ii and Ij. di,j lies in the interval [0,1] and di,j = dj,i. A smaller element in the DM indicates a frame pair whose appearances differ less.
AD and AIFD are, respectively, defined as [17]
(13)$$D(k )= \frac{1}{{W - k}}\sum\limits_{m = 1}^{W - k} {{d_{m,m + k}}}$$
(14)$$D = \frac{1}{W}\sum\limits_{i = 1}^W {\sum\limits_{j = 1}^W {{d_{i,j}}} }, $$
where k = 0,1,…,W‒1, D(k) denotes the AD between two frames with the interval of k frames, D(0) = 0, and dm,m+k is the dissimilarity between frames Im and Im+k. Lower AD and AIFD indicate higher similarity between frames.
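A short Python sketch of Eqs. (12)-(14) is given below; the list of reconstructed frames (images) is a hypothetical input used only for illustration, and the AIFD normalization follows the form of Eq. (14) as written in the text.

```python
# Hedged sketch of the evaluation metrics in Eqs. (12)-(14).
import numpy as np

def dissimilarity_matrix(images):
    """W x W dissimilarity matrix d_ij of Eq. (12) for equally sized 2-D frames."""
    W = len(images)
    D = np.zeros((W, W))
    for i in range(W):
        for j in range(W):
            a = images[i] - images[i].mean()
            b = images[j] - images[j].mean()
            num = np.sum(np.abs(a) * np.abs(b))
            den = np.sqrt(np.sum(a**2) * np.sum(b**2))
            D[i, j] = 1.0 - num / den
    return D

def average_dissimilarity(D, k):
    """AD between frames separated by k frames, Eq. (13)."""
    W = D.shape[0]
    return np.mean([D[m, m + k] for m in range(W - k)])

def aifd(D):
    """Average inter-frame dissimilarity, Eq. (14)."""
    return D.sum() / D.shape[0]

images = [np.random.rand(64, 64) for _ in range(10)]      # toy 10-frame sequence
D = dissimilarity_matrix(images)
print(round(aifd(D), 3), round(average_dissimilarity(D, 1), 3))
```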
3.1 Results of image reconstruction
In the demonstration experiments, we generated 300 successive cross-sections corresponding to different time-points in the cardiac cycles for each phantom. We set α = 0.5 and R = 1.2 Hz, so that the length of a cardiac cycle was ${1 / R} \approx 0.83\textrm{ }\textrm{s}$. The frame rate was set to 24 fps by referring to [34], that is, 20 frames per cycle; therefore, 300 frames covered about 15 cycles. Figure 5 shows the transversal images obtained by forward simulation for the four cross-sections shown in Fig. 3. Figure 6 shows examples of transversal images representing the initial pressure distribution reconstructed from the simulated PA signals by the conventional TR algorithm without motion correction. In the figure, the shifts between successive cross-sections, that is, translation and rotation of the structures from one image to the other, can be observed. Figure 7 shows the L-views of four 300-frame sequences reconstructed by the conventional TR (non-gated sequences) and by our method (gated sequences), respectively. In our method, the objective function threshold of the clustering was set to 0.3. In the figure, motion artifacts associated with the cardiac cycle appear as a saw-tooth shaped vessel wall. Moreover, the artifacts are more prominent in the vertical L-views than in the horizontal ones. Note that the L-views of the gated sequences exhibit significantly enhanced visualization, with smoothed vessel wall boundaries, compared with the non-gated sequences. However, there is an apparent loss of resolution due to gating, which is a known trade-off of the gating process [11].
Fig. 3. Geometry of vascular cross-sections which are numbered as I, II, III, and IV from left to right.
Fig. 4. Schematic diagram of L-views of IVPA pullback volumes.
Fig. 5. Simulated transversal images of four vessel cross-sections shown in Fig. 3. (a) Images representing the normalized optical energy deposition; (b) Images representing the normalized PA signals reaching the detector.
Fig. 6. Images randomly selected from the pullback sequences of transversal images representing the initial pressure distribution, which are reconstructed directly from the simulated PA signals by TR without motion suppression. (a) Phantom I; (b) Phantom II; (c) Phantom III; (d) Phantom IV.
Fig. 7. The gating results of the proposed method for four IVPA sequences. (a) Vertical L-views; (b) Horizontal L-views. The gated sequences are artificially stretched to the same physical lengths as the non-gated ones to facilitate comparison, because the non-gated sequences contain many more images per millimetre than the gated sequences, which retain far fewer images after gating.
Figure 8, Fig. 9 and Table 2 provide the results of the evaluation metrics obtained from the non-gated and gated image sequences. In Fig. 8, the DMs are displayed as grayscale images, which exhibit periodic structures. The overall brightness of the DM visualizations is reduced significantly, and both AD and AIFD decrease after motion correction, indicating reduced dissimilarity between frames within the gated subsets.
Fig. 8. Visualization of DMs obtained from the non-gated (left column) and gated (right column) image sequences. (a) Phantom I; (b) Phantom II; (c) Phantom III; (d) Phantom IV
Fig. 9. AD functions with respect to frame intervals obtained from the non-gated and gated image sequences for (a) phantom I, (b) II, (c) III, and (d) IV. There are 60, 65, 80, and 72 frames in four gated sequences, respectively.
Table 2. AIFDs of the image sequences before and after motion correction.
3.2 Influence of objective function threshold
The objective function threshold in Eq. (9) determines whether the iteration of clustering is terminated. We set it to 0.3, 0.6, 0.9, 1.2, 1.5, and 1.8, respectively, while other conditions remained unchanged, to investigate its influence on the performance of the proposed method. Table 3 provides the results for phantom I, revealing that a lower threshold leads to better motion suppression but also to more iterations. Therefore, a trade-off between run time and reconstruction accuracy must be made.
Table 3. AIFDs of the image sequences for phantom I in the case of different thresholds of the objective function.
3.3 Results of comparison with the image-based gating
We compared the proposed method with an image-based gating approach that was developed for IVUS [17]. The results shown in Fig. 10 reveal that the motion artifacts in the image sequences processed by image-based gating are still prominent compared with those obtained by our method. This conclusion can also be drawn from the evaluation metrics provided in Fig. 11. In addition, we recorded the run-time of both methods, as provided in Table 4. The run-time of the image-based gating includes the time spent reconstructing the entire image sequence by TR in addition to the time spent on gating. The results indicate that our method is more computationally efficient than the image-based gating. This is because our method groups raw PA signals prior to image reconstruction, avoiding reconstruction and post-processing of the non-gated frames. In contrast, the image-based gating corrects motion artifacts via a post-processing procedure over the complete pullback recording: it requires reconstructing the successive images from the raw signals collected in each slice along the pullback trajectory, a procedure that is computationally expensive compared with the off-line gating itself.
Fig. 10. L-views of the IVPA sequence of phantom I processed with our method and the image-based gating method. (a) Non-gated sequence; (b) Gated sequence with our method; (c) Gated sequence with the image-based method.
Fig. 11. DMs and ADs of the reconstructed image sequences for phantom I. There are 60 and 55 frames in the gated sequences obtained by our method and image-based gating, respectively. (a) Visualization of the DMs obtained from non-gated sequence (left), gated sequence by our method (middle), and gated sequence by image-based gating (right); (b) AD with respect to frame interval.
Table 4. Quantitative metrics of the image sequences of phantom I processed by two methods.
4.1 Superiority of our method to other motion correction methods
As summarized in the Introduction, the existing strategies for motion correction in PAT involving multi-frame analysis include FMC, motion clustering, model-based reconstruction, and gating. FMC and IFMC aim to reduce motion artifacts in frame-averaged PA images and enhance the SNR of deep-tissue imaging, which utilizes the regulative energy of a commonly-used Nd:YAG laser. Motion clustering focuses on alleviating the motion blurring observed in spectroscopic PA images acquired under multi-wavelength excitation at different time-points. Both strategies are applicable to scenarios where multiple images are acquired of the same moving target; it is infeasible to apply them to correct motion artifacts associated with the cardiac cycle in IVPA pullback volumetric images, which are acquired at different locations along the catheter pullback path. Model-based motion correction solves the forward problem iteratively, so the forward operator must be calculated repeatedly. This procedure is computationally burdensome, which hinders its application in real-time, high-resolution, and large-volume imaging such as IVPA.
Gating techniques have been commonly utilized in cardiac imaging to reduce motion artifacts caused by heartbeat or breathing. However, prospective gating requires a dedicated ECG or respiratory triggering device, adding complexity, long setup times, and a prolonged acquisition procedure compared with continuous non-gated acquisition. The accuracy of retrospective gating depends on the gating signals extracted from the raw measurements or images. Image-based gating relies on post-processing of images and therefore requires reconstructing all images in a dynamic sequence prior to gating. A single IVPA pullback sequence acquired on a 10 mm-long vessel segment contains about 400 frames in the case of a constant pullback speed (0.5 mm/s) and a frame rate of 20 fps. This overload of image data reduces the efficiency of the image-based algorithm. Moreover, the inevitable loss of information regarding the vascular structures and properties during image reconstruction is an important issue that should be taken into account. Our method is based on a retrospective gating scheme that selects the frames of PA signals acquired in the same cardiac phase. It differs from state-of-the-art image-based gating schemes in that the latter subtract cardiac dynamics based on the grayscale images themselves, whereas our method suppresses motion prior to image reconstruction, avoiding reconstruction of the non-gated frames.
4.2 Limitations of our method
Our method enables suppression of motion artifacts associated with the cardiac cycle at a relatively low computational cost compared with image-based gating. However, the loss of information and resolution is a known trade-off of the gating process. Only static sequences at certain cardiac phases are retained in the procedure of resampling signals or images, leading to a loss of useful information regarding vascular structural and functional features. Moreover, all frames between systole and diastole are generally required for a continuous assessment of tissue elastic properties. Further study on data integrity is desired in the future, so as to ensure that the rendered image sequences contain complete information while enhancing image quality.
The motion artifacts in in vivo sequential intravascular images are a combined result of various factors, such as heartbeat motion, pulsating blood flow in the vascular lumen, arterial vasomotion, and catheter-based motion. The first three factors are related to the cardiac cycle. As a result, the trajectory of the catheter tip does not remain parallel to the lumen axis owing to the lateral movement of the tip with respect to the lumen, leading to shifts between successive slices. In addition, irregular deformation and pulsation of the vessel wall are related to heartbeat motion. Our method aims to suppress motion artifacts associated with the cardiac cycle. However, there remains a need to eliminate catheter-based motion artifacts, a common issue in catheter-based imaging systems, including catheter bending, longitudinal oscillation of the catheter, and nonuniform rotation distortion (NURD). Longitudinal oscillation of the catheter within the vascular lumen results in repeated sampling at the same acquisition position. NURD is inherently present in intravascular imaging systems with rotary-pullback catheters owing to mechanical friction between the catheter torque cable and the sheath [35–37]. Recently, distal scanning endoscopes with miniature micromotors have been designed to deal with artifacts associated with bending and NURD of proximally rotated catheters [38]. However, miniaturizing high-speed motors is a challenging and expensive task, since the relatively large size of the motor limits the ability of the catheter to access vascular stenoses [36,39]. Consequently, catheter-based motion artifacts may still exist in clinical scenarios, degrading image quality and hindering the identification of various tissue types as well as the quantification of tissue properties.
4.3 Future direction related to deep learning
In recent years, deep learning (DL) has come to dominate the field of medical imaging, significantly improving the performance of multiple tasks [40]. DL methodologies have gained interest as potential solutions for efficient processing and analysis of large datasets, since they have outstanding advantages in image processing, identification, and interpretation over non-learning methods.
Chen et al. [41] reported a method for correcting motion artifacts in optical-resolution photoacoustic microscopy (OR-PAM) images with a convolutional neural network (CNN). To the best of our knowledge, it is the first study on motion correction by DL in the field of PAI. They constructed a CNN with which they post-processed the maximum amplitude projection (MAP) image of OR-PAM to alleviate motion artifacts. However, motion suppression for multi-frame PAT by DL had not been explored as of the writing of this paper. One of the core bottlenecks is the lack of reliable experimental training data. The size and quality of the data sets determine the efficacy of DL methodologies. Three sources of training data are currently adopted: clinical in vivo data, phantom data, and simulation data. Previous studies on the reconstruction of a single PA image by DL mostly use phantom data and simulation data for network training and verification. This is attributed to the lack of ground-truth information on the underlying tissue optical properties or the initial pressure distribution when acquiring experimental measurements [42]. Motion artifacts in IVPA imaging are unobservable in a single cross-sectional image, so multi-frame analysis is essential, which further increases the required amount of training data. In addition, the underlying dynamics of the arterial vessels are generally unknown in in vivo experimental settings. Another challenging issue is that manufacturing a dynamic phantom mimicking an arterial vessel in a complex clinical scenario is a laborious task. Computer simulation enables the flexible generation of a large amount of simulation data; however, it is still difficult to meet the requirements on the amount and variety of training data needed for motion-artifact correction by DL. Specifically, algorithms trained on simulation data might fail in a clinical scenario because simulation data suffer from several shortcomings compared with real data distributions, such as a domain gap, sparsity, or selection bias [42]. Considering the similarity of IVUS and IVPA in their imaging principles and image content, transfer learning may be a possible solution in the future: the motion correction algorithm is trained on a large data set of in vivo IVUS pullback studies, and a smaller experimental PA data set is then used to fine-tune the neural network to the experimental data distribution.
In summary, motion artifact removal is essential for stabilizing IVPA image sequences that suffer from cardiac motion. Our method is driven solely by the raw PA signals acquired in successive slices by the ultrasonic transducers during continuous pullback of the catheter. We grouped the PA signals by clustering in order to select the signal frames collected in the same cardiac phase. The results demonstrate an improvement in the visualization of the L-views, where the saw-tooth shaped vessel wall boundaries are considerably smoothed. The quantitative evaluation metrics, including AD and AIFD, are improved by up to 50% after motion correction, indicating the reduction in misalignment between vascular cross-sections. In addition, our method outperforms the image-based gating in correcting motion artifacts with a low computational burden.
National Natural Science Foundation of China (62071181).
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.
The authors declare no conflicts of interest.
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
1. W. Choi, D. Oh, and C. Kim, "Practical photoacoustic tomography: realistic limitations and technical solutions," J. Appl. Phys. 127(23), 230903 (2020). [CrossRef]
2. S.S.S. Choi and A. Mandelis, "Review of the state of the art in cardiovascular endoscopy imaging of atherosclerosis using photoacoustic techniques with pulsed and continuous-wave optical excitations," J. Biomed. Opt. 24(8), 080902 (2019). [CrossRef]
3. Y. Li, J. Chen, and Z. Chen, "Multimodal intravascular imaging technology for characterization of atherosclerosis," J. Innov. Opt. Health Sci. 13(01), 2030001 (2020). [CrossRef]
4. H. Zhao, N. Chen, T. Li, J. Zhang, R. Lin, X. Gong, L. Song, Z. Liu, and C. Liu, "Motion correction in optical resolution photoacoustic microscopy," IEEE Trans. Med. Imaging 38(9), 2139–2150 (2019). [CrossRef]
5. H.P. Brecht, R. Su, M. Fronheiser, S.A. Ermilov, A. Conjusteau, and A.A. Oraevsky, "Whole-body three-dimensional optoacoustic tomography system for small animals," J. Biomed. Opt. 14(6), 064007 (2009). [CrossRef]
6. R.B. Lam, R.A. Kruger, D.R. Reinecke, S.P. Delrio, M.M. Thornton, P. Picot, and T.G. Morgan, "Dynamic optical angiography of mouse anatomy using radial projections," Proceedings of SPIE International Conference on Photons Plus Ultrasound: Imaging and Sensing, 23 Feb., vol. 7564, pp. 756405 (2010).
7. R. Lin, J. Chen, H. Wang, M. Yan, W. Zheng, and L. Song, "Longitudinal label-free optical-resolution photoacoustic microscopy of tumor angiogenesis in vivo," Quant. Imag. Med. Surg. 5(1), 23–29 (2015). [CrossRef]
8. X. Deán-Ben, S. Gottschalk, B. Mc Larney, S. Shoham, and D. Razansky, "Advanced optoacoustic methods for multiscale imaging of in vivo dynamics," Chem. Soc. Rev. 46(8), 2158–2198 (2017). [CrossRef]
9. M.H. Moghari, A. Barthur, M.E. Amaral, T. Geva, and A.J. Powell, "Free-breathing whole-heart 3D cine magnetic resonance imaging with prospective respiratory motion compensation," Magn. Reson. Med. 80(1), 181–189 (2018). [CrossRef]
10. Y. Lu, K. Fontaine, T. Mulnix, J.A. Onofrey, S. Ren, V. Panin, J. Jones, M.E. Casey, R. Barnett, P. Kench, and R. Fulton, "Respiratory motion compensation for PET/CT with motion information derived from matched attenuation-corrected gated PET data," J. Nucl. Med. 59(9), 1480–1486 (2018). [CrossRef]
11. N. Torbati, A. Ayatollahi, and P. Sadeghipour, "Image-based gating of intravascular ultrasound sequences using the phase information of dual-tree complex wavelet transform coefficients," IEEE Trans. Med. Imaging 38(12), 2785–2795 (2019). [CrossRef]
12. R. Bajaj, X. Huang, Y. Kilic, A. Jain, A. Ramasamy, R. Torii, J. Moon, T. Koh, T. Crake, M.K. Parker, V. Tufaro, P.W. Serruys, F. Pugliese, A. Mathur, A. Baumbach, J. Dijkstra, Q. Zhang, and C.V. Bourantas, "A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images," Int J Cardiovasc Imaging 37(6), 1825–1837 (2021). [CrossRef]
13. Z. Sun and Q. Yan, "An off-line gating method for suppressing motion artifacts in ICUS sequence," Comput. Biol. Med. 40(11-12), 860–868 (2010). [CrossRef]
14. Z. Sun and M. Li, "Suppression of cardiac motion artifacts in sequential intracoronary optical coherence images," J. Med. Imag. Health In. 6(7), 1787–1793 (2016). [CrossRef]
15. A. Hernàndez-Sabaté, D. Gil, J. Garcia-Barnés, and E. Martí, "Image-based cardiac phase retrieval in intravascular ultrasound sequences," IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 58(1), 60–72 (2011). [CrossRef]
16. S.K. Nadkarni, D.R. Boughner, and A. Fenster, "Image-based cardiac gating for three-dimensional intravascular ultrasound," Ultrasound in Medicine & Biology 31(1), 53–63 (2005). [CrossRef]
17. S.M. O'Malley, J.F. Granada, S. Carlier, M. Naghavi, and I.A. Kakadiaris, "Image-based gating of intravascular ultrasound pullback sequences," IEEE Trans. Inform. Technol. Biomed. 12(3), 299–306 (2008). [CrossRef]
18. J. Xia, W.Y. Chen, K. Maslov, M.A. Anastasio, and L.V. Wang, "Retrospective respiration-gated whole-body photoacoustic computed tomography of mice," J. Biomed. Opt. 19(1), 16003 (2014). [CrossRef]
19. A. Ron, N. Davoudi, X.L. Deán-Ben, and D. Razansky, "Self-gated respiratory motion rejection for optoacoustic tomography," Appl. Sci. 9(13), 2737 (2019). [CrossRef]
20. M. Kim, J. Kang, J.H. Chang, T.K. Song, and Y. Yoo, "Image quality improvement based on inter-frame motion compensation for photoacoustic imaging: a preliminary study," Proceedings of 2013 IEEE International Ultrasonics Symposium (IUS), Prague, Czech Republic, 21-25 July, pp. 1527-1531 (2013).
21. A. Taruttis, J. Claussen, D. Razansky, and V. Ntziachristos, "Motion clustering for deblurring multispectral optoacoustic tomography images of the mouse heart," J. Biomed. Opt. 17(1), 016009 (2012). [CrossRef]
22. M. Xu and L.V. Wang, "Time-domain reconstruction for thermoacoustic tomography in a spherical geometry," IEEE Trans. Med. Imaging 21(7), 814–822 (2002). [CrossRef]
23. J. Chung and L. Nguyen, "Motion estimation and correction in photoacoustic tomographic reconstruction," SIAM J. Imaging Sci. 10(1), 216–242 (2017). [CrossRef]
24. J. Poudel, L. Yang, and M.A. Anastasio, "A survey of computational frameworks for solving the acoustic inverse problem in three-dimensional photoacoustic computed tomography," Phys. Med. Biol. 64(14), 14TR01 (2019). [CrossRef]
25. R. Manwar, M. Zafar, and Q. Xu, "Signal and image processing in biomedical photoacoustic imaging: a review," Optics 2(1), 1–24 (2021). [CrossRef]
26. Y. Liu, M. Sun, T. Liu, Y. Ma, D. Hu, C. Li, and N. Feng, "Quantitative reconstruction of absorption coefficients for photoacoustic tomography," Appl. Sci. 9(6), 1187 (2019). [CrossRef]
27. T. Chen, T. Lu, S. Song, S. Miao, F. Gao, and J. Li, "A deep learning method based on U-Net for quantitative photoacoustic imaging," Proceedings of SPIE International Conference on Photons Plus Ultrasound: Imaging and Sensing, vol.11240, pp.112403 V (2020).
28. S. Zhou and Z. Xu, "Automatic grayscale image segmentation based on affinity propagation clustering," Pattern Anal Applic 23(1), 331–348 (2020). [CrossRef]
29. C.X. Sun, Y. Yang, H. Wang, and W. Wang, "A clustering approach for motif discovery in ChIP-Seq dataset," Entropy 21(8), 802 (2019). [CrossRef]
30. Z. Sun, D. Han, and Y. Yuan, "2-D image reconstruction of photoacoustic endoscopic imaging based on time-reversal," Comput. Biol. Med. 76, 60–68 (2016). [CrossRef]
31. Z. Sun, Y. Yuan, and D. Han, "A computer-based simulator for intravascular photoacoustic images," Comput. Biol. Med. 81, 176–187 (2017). [CrossRef]
32. S.L. Jacques, "Optical properties of biological tissues: a review," Phys. Med. Biol. 58(11), R37–R61 (2013). [CrossRef]
33. Y. Xu, H. Xue, and G. Hu, "Diameter measurements of coronary artery segments based on image analysis of X-ray angiograms," Chinese J. Biomed. Eng. 26(6), 874–878 (2007).
34. D. Vanderlaan, A.B. Karpiouk, D. Yeager, and S. Emelianov, "Real-time intravascular ultrasound and photoacoustic imaging," IEEE Trans. Ultrason., Ferroelect., Freq. Contr 64(1), 141–149 (2017). [CrossRef]
35. J. Mavadia-Shukla, J. Zhang, K. Li, and X. Li, "Stick-slip non-uniform rotation distortion correction in distal scanning optical coherence tomography catheters," J. Innov. Opt. Heal. Sci. 13(06), 2050030 (2020). [CrossRef]
36. E. Abouei, A.M.D. Lee, H. Pahlevaninezhad, G. Hohert, M. Cua, P. Lane, S. Lam, and C. MacAulay, "Correction of motion artifacts in endoscopic optical coherence tomography and autofluorescence images based on azimuthal en face image registration," J. Biomed. Opt. 23(01), 1 (2018). [CrossRef]
37. O.O. Ahsen, H.C. Lee, M.G. Giacomelli, Z. Wang, K. Liang, T.H. Tsai, B. Potsaid, H. Mashimo, and J.G. Fujimoto, "Correction of rotational distortion for catheter-based en face OCT and OCT angiography," Opt. Lett. 39(20), 5973–5976 (2014). [CrossRef]
38. J. Peng, L. Ma, X. Li, H. Tang, Y. Li, and S. Chen, "A novel synchronous micro motor for intravascular ultrasound imaging," IEEE Trans. Biomed. Eng. 66(3), 802–809 (2019). [CrossRef]
39. F. Griese, S. Latus, M. Schluter, M. Graeser, M. Lutz, A. Schlaefer, and T. Knopp, "In-vitro MPI-guided IVOCT catheter tracking in real time for motion artifact compensation," PLoS One 15(3), e0230821 (2020). [CrossRef]
40. H. Deng, H. Qiao, Q. Dai, and C. Ma, "Deep learning in photoacoustic imaging: a review," J. Biomed. Opt. 26(04), 040901 (2021). [CrossRef]
41. X. Chen, W. Qi, and L. Xi, "Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy," Vis. Comput. Ind. Biomed. Art 2(1), 12 (2019). [CrossRef]
42. J. Grohl, M. Schellenberg, K. Dreher, and L. Maier-Hein, "Deep learning for biomedical photoacoustic imaging: A review," Photoacoustics 22, 100241 (2021). [CrossRef]
Table 1. Parameters of optical and acoustic properties of vessel phantoms for forward IVPA simulation
Tissue name
Tissue component
μa (cm-1)
μs (cm-1)
Anisotropy factor
Sound speed (m/s)
Density (kg/L)
Average radial thickness (mm)
Adventitia Connective tissue 1.41 0.08 0.22 18 0.82 1610 1.14 0.25-0.45
Intima/ Media Muscular tissue 1.42 0.05 0.18 18 0.82 1600 1.12 0.24 (Phantom I and II)
0.4 (Phantom III and IV)
Fibrous cap Fibrous tissue 1.46 0.24 0.5 100 0.8 1630 1.2 0.16
Calcified plaque Calcium 1.46 0.34 0.6 560 0.82 1650 1.24 0.2 (Phantom I and II)
0.45 (Phantom IV)
Lipid pool Lipid 1.46 0.42 0.65 520 0.82 1650 1.22 0.33 (Phantom II)
0.15 (Phantom III)
Lipid-fibro plaque Lipid- fibrosis 1.46 0.34 0.45 560 0.83 1560 1.17 0.3
Lumen Blood 1.34 0.12 0.99 750 0.999 1540 1 3 (Phantom I) 2.5 (Phantom II and III)
2 (Phantom IV)
Table 2. AIFDs of the image sequences before and after motion correction.
Vessel phantom    AIFD (non-gated sequence)    AIFD (gated sequence)
I                 0.101                        0.010
II                0.028                        0.009
III               0.020                        0.011
IV                0.027                        0.019
Table 3. AIFDs of the image sequences for phantom I in the case of different thresholds of the objective function.
Threshold of objective function
Approximation of objective function
1.8 1.681 0.101 0.101
1.5 1.437 0.085
Table 4. Quantitative metrics of the image sequences of phantom I processed by two methods.
Length of gated sequence in frames
Run-time in seconds
Our method 60 227.1 0.101 0.010
Image-based gating 55 539.5 0.036 | CommonCrawl |
Microspectrofluorimetry and chemometrics for the identification of medieval lake pigments
Paula Nabais (ORCID: orcid.org/0000-0002-7646-7470)1,
Maria J. Melo (ORCID: orcid.org/0000-0001-7393-6801)1,
João A. Lopes2,
Tatiana Vitorino1,3,
Artur Neves1 &
Rita Castro1
Microspectrofluorimetry offers high sensitivity, selectivity, fast data acquisition, good spatial resolution (down to 2 μm), and the possibility of in-depth profiling. It has proved to be a powerful analytical tool in identifying dyes and lake pigments in works of art. To maximize the extraction of the information present in fluorescence emission and excitation spectra, we propose a chemometric approach to discriminate dark red to pink colours based on brazilwood, cochineal, kermes and lac dye. This range of hues was obtained using a diverse range of medieval recipes for the brazilwood, kermes and lac colourants and the Winsor and Newton archive for the cochineal lake pigments; the lake pigments were analyzed as colour paints (gum arabic and glair were the medieval binders selected). Unsupervised (HCA and PCA) and supervised (SIMCA) modelling were tested, allowing us to explore similarities between colourants and to classify the spectral data into the different lake pigment classes. It was possible to separate the four different chromophores based on their excitation spectra or by bringing together the emission and excitation spectra. The first method could also differentiate between the cochineal lake pigments, in particular between crimson lakes with different aluminates and an extender (gypsum), and between carmines with different complexing ions (aluminum and calcium).
In the past few years we have been particularly interested in the development of methodologies that promote a complete characterization of the organic colourants used in the past as well as their degradation products [1,2,3,4,5,6,7,8,9,10,11,12]. Changes in pigments, whether used pure or admixed, can alter the appearance of a painting significantly; consequently, the identification and state of degradation of colourants is of fundamental interest, since it provides critical information about the artists' aesthetic perspective, conceptions and choices, and how the work has changed over time. Therefore, it is desirable to develop methods that can characterize these materials directly on the artwork, in situ, or from the small samples that may be available from works of art. Microspectrofluorimetry offers high sensitivity and selectivity combined with good spatial resolution and the possibility of in-depth profiling. It can also be used in situ, without any contact with the sample or work of art to be analyzed, for movable objects that can be transported to the laboratory [13, 14]. The importance of sensitivity is clear when the following facts are considered: some of the dyes used in the past to create bright colours may have faded or may have been applied as very thin coats over, or mixed with, an inorganic pigment or extender, and therefore they may be present in very low concentrations. The possibility of in situ analysis of ancient colourants is a considerable advantage, particularly when considering that the techniques currently employed for dye analysis (HPLC–DAD-MS, microFTIR and SERS) require micro-sampling [15,16,17]. Microspectrofluorimetry also presents some drawbacks, namely the absence of a molecular fingerprint such as that disclosed in infrared spectra. This limitation may be overcome by combining surface-enhanced Raman spectroscopy (SERS) and fiber-optics reflectance spectroscopy in the visible (FORS) and by using a consistent database built up with historically accurate reproductions of reference colourants, binders and colour paints, which are the result of research into written sources of medieval techniques [13, 14, 17]. These reproductions involve following the processes described in the source materials as well as molecular identification and comparison with the original colours. This leads to a virtuous feedback loop, in which reference compounds are validated against originals and are used to improve the analytical methods applied when identifying materials [11, 18,19,20,21,22]. This is a hypothesis that we will test in this work using a chemometric approach.
We will focus on four natural red dyes, and their lake pigments, used during the Middle Ages (found in medieval manuscripts and described in technical treatises): lac dye, kermes, cochineal and brazilwood, Table 1. The latter is a flavonoid, but the other three are anthraquinone reds extracted from animal sources, which makes their identification by an analytical technique such as microspectrofluorimetry very challenging.
Table 1 The four red colourants studied in this work, with the respective chromophores, provenience and chronology of occurrences in the Mediterranean world (in artworks)
Brazilwood has been extensively found in books of hours from the 15th–16th c. and in the Galician-Portuguese Ajuda songbook, possibly dated from the 13th c. [5, 11, 12]. It is extracted from a tree, Caesalpinia sappan, or from other brazilwood species brought to Europe from Brazil from the 16th c. onwards (Caesalpinia echinata, Caesalpinia brasiliensis, Caesalpinia violacea, Caesalpinia crista, and Haematoxylum brasiletto) [18]. Kermes was obtained from a small insect, Kermes vermilio, found on the kermes oak, Quercus coccifera L. Other important historical sources of red derived from the resin secreted by the female lac insect, Kerria lacca, from which both the lac dye and the shellac resin are obtained. It was applied as a dark red or pink colour in Portuguese manuscripts and is characteristic of the Romanesque monastic production (12th–13th c.) [1, 10]. In the 16th c. most of these sources were replaced by the red and scarlet colours of the American cochineal, Dactylopius coccus, commercialized by the Spanish empire [23]. Similar species were already known in Eastern Europe, Porphyrophora polonica and Porphyrophora hamelli, the Polish cochineal and the Armenian cochineal, respectively [23, 24].
In previous publications, we showed that confocal microfluorescence is a powerful tool for in situ analysis of colourants based on natural dyes [13, 14, 25]. Natural dyes may be described as weak to medium emitters. Following light absorption, an excited molecule is formed, and this fluorophore may lose its excess energy by emitting light. In a spectrofluorimeter, exciting at a single excitation wavelength and recording the fluorescence over the fluorophore's emission wavelength range yields an emission spectrum. It is also possible to excite at different wavelengths, following the colourant's absorption spectrum, while collecting at a single wavelength, thus obtaining an excitation spectrum that may reproduce the absorption spectrum [26].
The simultaneous acquisition of emission and excitation spectra facilitates a more accurate identification of dyes and lake pigments [14]. To maximize the extraction of the information present in these signals, this work proposes a chemometric approach to the study of the database built up with historically accurate reproductions of brazilwood, cochineal, lac dye and, more recently, kermes. These lake pigments were used to produce a similar range of colours, and the three anthraquinone-based chromophores display similar excited-state properties, Fig. 1. Chemometric models that simplify the interpretation of each system, i.e. each colourant, will allow us to explore similarities between colourants and to classify the spectral data into different classes. For this reason, hierarchical cluster analysis (HCA) and principal component analysis (PCA), as well as soft independent modelling of class analogy (SIMCA), were explored with the spectral data acquired, to test the possibility of discriminating between these four main colourants.
Excitation and emission spectra of selected reconstructions of the red lake pigments. From left to right: brazilwood, from the Livro de como se fazem as cores: recipe 8 and recipe 44; kermes, both from the Roosen-Runge adaptation of the Jean le Begue manuscript; cochineal, Winsor and Newton's Finest Orient Carmine and Crimson with gypsum; lac dye, Ms. Bolognese, recipe 129 and recipe B.140
PCA is the chemometrics workhorse. Its application is often intended to help the interpretation of multivariate datasets. PCA projects multivariate data onto a lower-dimensional orthogonal space. These projections (loadings) yield the scores, an alternative representation of the samples that encompasses most of the original data variance [27]. PCA is an unsupervised method in the sense that no considerations are made regarding the samples when building the model. HCA is the general designation of methods for grouping samples characterized by data vectors or matrices, eventually forming clusters. The distance between samples (e.g. Euclidean or Mahalanobis distance) is evaluated recursively, aiming at defining a clustering tree. With this grouping process, performed hierarchically, and depending on the selected algorithm, multiple clustering options are possible. Results are typically represented graphically in the form of a dendrogram, where samples are visualized according to their similarity [27]. The SIMCA model is a supervised classification method. It is based on the development of multiple PCA models, each built considering samples of a known class or group [28, 29]. The goal is to allow for classification by presenting unknown samples to the different PCA models composing the SIMCA model. When projecting samples onto this model, they are classified according to their similarity with the different PCA class models (typically Hotelling's T2 and squared residual statistics are used to evaluate the distance to each model). Indeed, when projecting one sample, different outcomes are possible: (1) the sample might be classified into one class; (2) the sample is classified as belonging to two or more classes; (3) the sample is not classified into any of the model's classes. This allows for the coverage of high class variability by the principal components calculated individually, making SIMCA one of the most commonly used techniques for the classification of spectral data [28, 29].
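As a minimal sketch of this pipeline (not the authors' implementation), the Python example below applies PCA and Ward-linkage HCA to a spectral matrix and builds a toy SIMCA-style classifier from one PCA model per colourant class. The synthetic data, the number of components, and the residual-only classification criterion (omitting the Hotelling's T2 statistic mentioned above) are all assumptions made for illustration.

```python
# Hedged sketch of PCA, HCA and a simplified SIMCA-style classifier on spectra.
# X rows are spectra, columns are spectral points; all data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 300))                  # 40 spectra, 300 spectral points
y = np.repeat(["brazilwood", "cochineal", "kermes", "lac"], 10)

# Unsupervised exploration: PCA scores and hierarchical clustering (Ward linkage).
scores = PCA(n_components=3).fit_transform(X)
clusters = fcluster(linkage(X, method="ward"), t=4, criterion="maxclust")

# SIMCA-style classification: one PCA model per class; assign a spectrum to the
# class with the smallest reconstruction residual (simplified criterion).
models = {c: PCA(n_components=2).fit(X[y == c]) for c in np.unique(y)}

def simca_predict(x):
    resid = {c: np.sum((x - m.inverse_transform(m.transform(x[None, :]))) ** 2)
             for c, m in models.items()}
    return min(resid, key=resid.get)

print(simca_predict(X[0]))
```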
Historically accurate reconstructions
Kermes lake reconstructions were prepared, with as much historical accuracy as possible, according to the Roosen-Runge adaptation (1967) of a recipe from Jean le Begue's manuscript (Experimenta de coloribus) [30, 31]. Kermes vermilio female insects were ground in a mortar with additions of lye until a concentrated dark red solution was obtained. The mixture was heated for 30 min at 50 °C and then centrifuged for 10 min (pH circa 8). Afterwards, the dark red supernatant was heated at 50 °C and alum (Al3+) was added (pH = 6.8). This procedure was repeated to verify the reproducibility of the data.
For lac dye, twelve recipes were selected from six treatises/recipe books: Mappae clavicula (9th–12th c.), Livro de como se fazem as cores (The Book on How to Make Colours, 15th c.), the Bolognese manuscript (15th c.), the Strasbourg manuscript (15th c.), the Montpellier manuscript (15th c.) and the Paduan manuscript (late 16th to 17th c.); these reproductions have been described elsewhere [20, 31, 32, 33, 34, 35, 36].
The production of cochineal lake pigments is poorly documented in written sources from the medieval period. Reconstructions of these lake pigments were therefore adapted from the 19th-c. Winsor & Newton archive in different varieties: carmine (Finest Orient Carmine, Half Orient Carmine and Ruby Carmine) and crimson (with an aluminate composed of alum and an alkaline compound, designated as Crimson 1 and 2, and with gypsum, designated as Crimson with gypsum). The preparation of the cochineal reconstructions is described in [22], and the brazilwood reconstructions have been reported elsewhere [18].
Paint references were prepared using gum arabic and glair. Glair was prepared as described in the 11th-century De clarea treatise [37], and gum arabic, from Kremer Pigmente, was prepared as a 10% solution according to De arte illuminandi [38]. For the glair, the egg white was beaten and the liquid that formed at the bottom was used; for the gum arabic, the pieces were ground and then added to pure water. The lake pigments were first ground in an agate mortar with pure water and then ground with the binder. The paints were applied on filter paper and parchment with a paintbrush and allowed to dry. Spectroscopic or equivalent grade solvents and Millipore-filtered water were used for all the spectroscopic studies as well as for the extraction of the dyes and the preparation of the lake pigments.
Microspectrofluorimetry measurements
Fluorescence excitation and emission spectra were recorded with a Jobin–Yvon/Horiba SPEX Fluorog 3-2.2 spectrofluorometer hyphenated to an Olympus BX51M confocal microscope, with spatial resolution controlled by a multiple-pinhole turret, corresponding to a minimum 2 μm and maximum 60 μm spot, with a 50× objective. Beam splitting is obtained with standard dichroic filters mounted at 45°; they are located in a two-place filter holder. For a dichroic filter of 570 nm, excitation may be carried out up to about 560 nm and emission collected after about 580 nm ("excite below, collect above"). The optimization of the signal was performed daily for all pinhole apertures through mirror alignment, following the manufacturer's instructions, using a rhodamine standard (or other adequate reference). For the study of red dyes, two filter holders with two sets of dichroic filters are employed, 500 and 570 nm in one set and 525 and 600 nm in the other, which enables both the emission and excitation spectra to be collected with the same filter holder. A continuous 450 W xenon lamp, providing an intense broad spectrum from the UV to the near-IR, is directed into a double-grating monochromator, and spectra are collected after focusing on the sample (eye view) followed by signal intensity optimization (detector reading). The pinhole aperture that controls the area of analysis is selected based on the signal-to-noise ratio. For weak to medium emitters it is set to 8 μm; in this work, a 30 μm spot was also used for very weak signals (pinholes 5 and 8, respectively), with the following slits: emission slits = 3/3/3 mm and excitation slits = 5/3/0.8 mm. Emission and excitation spectra were acquired on the same spot whenever possible. For more details on the experimental set-up please see [13, 14].
The paint reconstructions were analysed in situ. For each of the prepared paints, 6–9 emission and excitation spectra were acquired, on different days and at different points, and the data were shown to be reproducible. Forty excitation spectra, with the corresponding emission spectra, were obtained for brazilwood; 34 spectra for cochineal; 22 spectra for kermes; and 22 spectra for lac dye. Therefore, 118 excitation spectra in total, together with the corresponding emission spectra, were acquired.
Theory and calculation
Spectral pre-treatment
Both the excitation and emission spectra were used. For each sample, the excitation and emission intensities were independently normalized to unit area and the two data blocks were then merged (horizontal concatenation of the matrices). For the excitation spectral dataset used in this work, it was considered that some filtering was necessary, combined with the removal of baseline drifts; the Haar transform and a 1st derivative (2nd order) were selected for this task. Normalization by area (to 1), which is also typically used for the analysis of fluorescence data, was considered as well and applied after the first two methods [39]. See Additional file 1 for the spectral pre-treatment data.
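A rough Python analogue of these pre-treatments is sketched below. The original processing was done with the PLS_Toolbox in MATLAB; the Savitzky–Golay window width, the single-level Haar decomposition and the function names are assumptions made only to illustrate the order of operations.

```python
# Hedged sketch (Python); window width, single-level Haar step and function
# names are assumptions -- the original work used the PLS_Toolbox in MATLAB.
import numpy as np
import pywt
from scipy.signal import savgol_filter

def area_normalize(spectrum):
    """Scale a spectrum so that its integrated (absolute) area equals 1."""
    return spectrum / np.trapz(np.abs(spectrum))

def pretreat_excitation(spectrum, window=15, polyorder=2):
    approx, _detail = pywt.dwt(spectrum, "haar")               # Haar transform (level 1)
    deriv = savgol_filter(approx, window, polyorder, deriv=1)  # 1st derivative, 2nd-order fit
    return area_normalize(deriv)                               # normalization by area
```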
Chemometric methods
The HCA was performed on the scores of PCA models. The number of principal components to use for the HCA was defined as the most appropriate for achieving colourant class separation, considering only the calibration dataset; dendrograms built with 1 to 10 components were tested. Colourant classification with the SIMCA method relied on models developed from a calibration dataset. Sample assignment to classes was defined according to a distance-to-model metric as described in [40]. The distance to model (d) defined in Eq. 1 was used, with a threshold of 1.5 as the criterion for assigning samples to colourant classes.
$$ d = \sqrt{\left(\frac{T^{2}}{T^{2}_{\mathrm{Lim},95\%}}\right)^{2} + \left(\frac{Q}{Q_{\mathrm{Lim},95\%}}\right)^{2}} < 1.5 $$
In Eq. 1, \(T^{2}\) and \(Q\) are the Hotelling's T2 and squared residuals statistics, respectively, and \(T^{2}_{\mathrm{Lim},95\%}\) and \(Q_{\mathrm{Lim},95\%}\) are the corresponding confidence limits for a significance level of 0.05. A sample is considered to belong to a class when d < 1.5. Prior to the application of all chemometric methods, the data were mean centred. All chemometric analyses and data manipulations were performed in Matlab Version 8.6 (R2015b) (The Mathworks, Natick, MA) and the PLS Toolbox Version 8.2.1 (Eigenvector Research, Manson, WA).
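For clarity, the class-assignment rule of Eq. 1 can be written in a few lines of code. The sketch below (Python/NumPy) assumes that the per-sample T2 and Q statistics and their 95% limits have already been obtained from the per-class PCA models; the function names are hypothetical.

```python
# Minimal sketch of Eq. 1 (Python/NumPy); T2, Q and their 95% limits are assumed
# to come from the per-class PCA models built in the SIMCA approach.
import numpy as np

def distance_to_model(T2, Q, T2_lim95, Q_lim95):
    """Combined distance of each sample to one SIMCA class model (Eq. 1)."""
    return np.sqrt((np.asarray(T2) / T2_lim95) ** 2 + (np.asarray(Q) / Q_lim95) ** 2)

def assign_to_class(T2, Q, T2_lim95, Q_lim95, threshold=1.5):
    d = distance_to_model(T2, Q, T2_lim95, Q_lim95)
    return d < threshold   # True where the sample is accepted by this class model
```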
The data: historically accurate reconstructions
Brazilwood lake and lac dye reconstructions encompass chronological arcs from the 15th c. to the 19th c. and from the 12th c. to the 16th c., respectively, also showing that the main steps in the manufacture of these lake pigments were kept through time [18, 32, 33]. Due to the lack of cochineal recipes in medieval records, 19th-c. W&N carmine and crimson cochineal pigments were used [22]. The main W&N process for carmine manufacture (Finest Orient Carmine) involved an acid extraction of the dye and the addition of aluminum (from alum) to calcium (from milk). Two other carmine processes involved the same procedure without the milk (Half Orient Carmine) and extraction of the dye with potassium carbonate followed by precipitation with alum and cream of tartar (Ruby Carmine). The crimson colours were produced by adding a lake pigment dispersion to an aluminate composed of alum and an alkaline compound (ammonium or sodium carbonate), or to an extender (gypsum) [22]. The kermes database consists of several reconstructions of a Jean le Begue recipe [30, 31]. Further studies on other recipes are currently in progress.
The lake pigments have been fully characterized and rationalized by multi-analytical techniques [18, 20, 22].
Analyzing the relative fluorescence intensities of the colourants, we may conclude that brazilwood and cochineal present relatively similar excitation and emission intensities, while the intensity of lac dye is, in some cases, up to tenfold higher (see Additional file 1: Fig. S1–S4). Kermes, on the other hand, is the chromophore that presents the lowest intensities of the four colourants. The chromophores of both laccaic acid and carminic acid have been characterized in solution [41]. The fluorescence quantum yields registered for these chromophores in a 1:100 ratio aluminum complex (lake) were 1.5 × 10−2 and 4 × 10−2, respectively, which enables their characterization as moderate and weak emitters. No values for brazilein-Al3+ complexes are available. However, knowing that brazilein at pH = 1.5 shows a fluorescence quantum yield of 6.8 × 10−3 [42], we may predict a tenfold increase of the quantum yield for an aluminum complex, i.e., about 7 × 10−2 [41]. A photophysical characterization of kermesic acid is yet to be done, and it could shed some light on why its intensities are much lower than those of the other chromophores.
Unsupervised modelling
The pre-processed spectral data were analysed by PCA. The first principal components were examined with regard to their ability to separate samples of different colourants. The first and second components mainly separate brazilwood and cochineal from the other two colourants (lac dye and kermes), Fig. 2. The third component differentiates the crimson recipes based on cochineal, in which an aluminate or extender was added to a lake pigment dispersion, from the Finest Orient Carmine colours, which stand out for the addition of milk as a source of calcium, Fig. 3. Even more interesting is the fact that it distinguishes between Crimson 1, prepared with an aluminate composed of ammonium carbonate and alum, and Crimson 2, prepared with sodium carbonate and alum. This demonstrates the potential of this methodology for identifying not only the colourant but also the specific recipe. The fourth component separates the kermes and lac dye classes, relying mostly on the 400–440 nm region, Fig. 4.
Principal component analysis scores, for normalized and filtered (by Haar transform and 1st derivative (2nd order)) excitation spectra for red lake pigments, showing the separation of cochineal (green) and brazilwood (red) from the other two colorants
Principal component analysis scores illustrating the separation of cochineal manufacturing processes, Crimson lakes and Finest Orient Carmine (green)
Principal component analysis scores, for normalized and filtered (by Haar transform and 1st derivative (2nd order)) excitation spectra for red lake pigments, illustrating the separation of the red lake pigments: kermes (light-blue), lac dye (dark-blue), cochineal (green) and brazilwood (red)
The PCA model is not a classification method and, for a better visualization of the similarity between the different samples, the HCA method was used with Ward's algorithm and the Mahalanobis distance. The HCA method was fed with the principal components generated by the PCA models. The number of principal components (PCs) to use was selected by developing HCA models with different numbers of PCs, using only approximately 2/3 of the total samples (the dataset division was based on the Kennard-Stone algorithm). This was performed for the emission/excitation spectra models. It was found that five components yielded the best colourant discrimination. After this selection, the HCA method was applied to all samples, always considering five components. These results are presented subsequently. Considering the excitation spectra dataset alone, the HCA method revealed a successful separation of the four dyes, Fig. 5. Four distinct clusters are visible in the dendrogram, each encompassing the samples of a different colourant. When both the excitation and emission spectra sets were used, the distinction between lac dye and kermes was not possible due to the similarities in the emission spectra of these colourants, which is also seen in Fig. 2.
Dendrogram generated by HCA applied to excitation spectra, showing a clear discrimination between the four red lake pigments: kermes (light-blue), lac dye (dark-blue), cochineal (green) and brazilwood (red)
SIMCA model
As previously mentioned, the SIMCA model is a supervised classification method based on the development of multiple PCA models, each built with samples of a known class or group. To build the SIMCA model, samples were divided into a training and a validation set according to the Kennard-Stone algorithm, with 2/3 of the 118 samples set aside for calibration and the remainder used for validating the model. Four PCA models were calibrated considering the excitation and emission spectra of the four colourants. The criterion for selecting the number of components was the percentage of captured variance (at least 95%). Brazilwood, cochineal and lac dye samples were modelled with four components, while kermes required five. Optimized PCA models were then built using the entire calibration dataset, and each model was tested by projecting the validation samples. In SIMCA, normalization by area proved to be a sufficient pre-treatment.
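The calibration/validation split mentioned above could be reproduced with a simple implementation of the Kennard-Stone algorithm; the PLS_Toolbox provides this directly, so the NumPy version below is only an illustrative sketch with hypothetical variable names.

```python
# Illustrative Kennard-Stone selection (Python/NumPy); the original split was
# done with the PLS_Toolbox. Variable names are hypothetical.
import numpy as np

def kennard_stone(X, n_select):
    X = np.asarray(X, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    selected = list(np.unravel_index(np.argmax(dist), dist.shape))  # two most distant samples
    while len(selected) < n_select:
        remaining = [i for i in range(len(X)) if i not in selected]
        # add the sample whose nearest selected neighbour is farthest away
        nxt = max(remaining, key=lambda i: dist[i, selected].min())
        selected.append(nxt)
    return np.array(selected)

# e.g. 2/3 of the 118 spectra for calibration, the rest for validation:
# cal_idx = kennard_stone(spectra, int(round(2 / 3 * len(spectra))))
```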
Both the training and the validation samples were predicted as the correct class, as shown in Fig. 6. The distance to model of each sample (calibration and test sets) for the four PCA models forming the SIMCA approach is presented in Fig. 6, together with the threshold level for class assignment (1.5). For a better visualization of the data, each colourant is represented with a specific colour, and distance-to-model values were truncated at five. Clearly, brazilwood samples (red markers in Fig. 6, top-left) lie below the 1.5 threshold, meaning that these samples are close to the brazilwood model, as expected, while the other samples lie significantly above the 1.5 threshold. This result is observed for all colourant models, and there are no samples that could belong to more than one class. The strict application of the colourant assignment criterion results in 100% correct classifications for all validation samples. This demonstrates the ability of the SIMCA modelling approach to correctly assign the colourant type to all validation samples using the excitation and emission spectra.
Distance to model metric used to assign samples (calibration and test sets) to colorant classes resulting from the SIMCA modelling approach: kermes (light-blue), lac dye (dark-blue), cochineal (green) and brazilwood (red). The SIMCA modelling approach results in 100% correct predictions for both calibration and validation sets
Microspectrofluorimetry is a powerful technique for the analysis of dyes and lake pigments, with the advantage of being used in situ without any contact with the sample or work of art to be analyzed. In this work, this technique was explored within a robust methodology for the identification of red lake pigments, using a chemometric approach that allowed us to explore similarities between colourants and to classify the spectral data into the four colourant classes, Fig. 1. Unsupervised (HCA and PCA) and supervised (SIMCA) modelling were tested for the discrimination between the four dye families. The first was applied to the excitation spectral dataset alone, with several pre-processing treatments, allowing for the separation of the colourants into different clusters. It was also possible to pinpoint the main W&N manufacturing processes for cochineal lake pigments: among the crimson lakes it was possible to distinguish between the different additives, aluminates (Crimson 1 and 2) and gypsum, and among the carmine colours between the Finest Orient Carmine, which had the addition of milk as a source of calcium, and the Half Orient Carmine and Ruby Carmine, both without calcium.
The SIMCA modelling allowed for the discrimination between chromophores with both spectral sets, i.e. excitation and emission, while requiring fewer pre-processing treatments.
Based on these results, this methodology will next be applied to data acquired from artworks, from medieval manuscripts to textiles, to select which type of modelling (unsupervised or supervised) best suits the data. Finally, a search algorithm will be developed, making this new advanced analytical tool accessible to the conservation community and not only to photophysics experts.
Melo MJ, Castro R, Miranda A. Colour in medieval portuguese manuscripts: Between beauty and meaning. In: Sgamellotti A, Brunetti BG, Miliani C, editors. Science and art: the painted surface. Cambridge: MIT press; 2014. p. 170–92.
Miguel C, Claro A, Gonçalves AP, Muralha VS, Melo MJ. A study on red lead degradation in a medieval manuscript Lorvão Apocalypse (1189). J Raman Spectrosc. 2009;40(12):1966–73.
Muralha VS, Miguel C, Melo MJ. Micro-Raman study of medieval cistercian 12–13th century manuscripts: Santa Maria de Alcobaça, Portugal. J Raman Spectrosc. 2012;43(11):1737–46.
Melo MJ, Araújo R, Castro R, Casanova C. Colour degradation in medieval manuscripts. Microchem J. 2016;124:837–44.
Melo MJ, Otero V, Vitorino T, Araújo R, Muralha VSF, Lemos A, Picollo M. A spectroscopic study of brazilwood paints in medieval books of hours. Appl Spectrosc. 2014;68(4):434–44.
Melo MJ, Vilarigues M, Muralha VSF, Castro R. Fernão Vaz Dourado's colours. In: Miró M, editor. Universal Atlas of Fernão Vaz Dourado, 1571. Barcelona: M. Moleiro Editor; 2013. p. 168–186.
Moura L, Melo MJ, Casanova C, Claro A. A study on Portuguese manuscript illumination: The Charter of Vila Flor (Flower town), 1512. J Cult Herit. 2007;8(3):299–306.
Miguel C. Le Vert et le rouge: A study on the materials, techniques and meaning of the green and red colours in medieval Portuguese illuminations. Doctoral dissertation, Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia; 2012.
Castro R. The book of birds in Portuguese scriptoria: preservation and access. Doctoral dissertation, Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia; 2016.
Melo MJ, Miranda A, Miguel C, Castro R, Lemos A, Muralha S, Lopes J, Gonçalves AP. The colour of medieval Portuguese illumination: an interdisciplinary approach. Rev Hist Arte. 2011;1:152–73.
Melo MJ, Nabais P, Guimarães M, Araújo R, Castro R, Oliveira MC, Whitworth I. Organic dyes in illuminated manuscripts: an unique cultural and historic record. Phil Trans R Soc A. 2016;374(2082):20160050.
Nabais P, Castro R, Lopes GV, de Sousa LC, Melo MJ. Singing with light: an interdisciplinary study on the medieval Ajuda Songbook. J Mediev Iber Stud. 2016;8(2):283–312.
Melo MJ, Claro A. Bright light: microspectrofluorimetry for the characterization of lake pigments and dyes in works of art. Acc Chem Res. 2010;43(6):857–66.
Claro A, Melo MJ, Schäfer S, Seixas de Melo JS, Pina F, van den Berg KJ, Burnstock A. The use of microspectrofluorimetry for the characterization of lake pigments. Talanta. 2008;74(4):922–9.
Mas S, Miguel C, Melo MJ, Lopes JA, de Juan A. Screening and quantification of proteinaceous binders in medieval paints based on μ-Fourier transform infrared spectroscopy and multivariate curve resolution alternating least squares. Chemometr Intell Lab. 2014;134:148–57.
Miguel C, Lopes JA, Clarke M, Melo MJ. Combining infrared spectroscopy with chemometric analysis for the characterization of proteinaceous binders in medieval paints. Chemometr Intell Lab. 2012;119:32–8.
Castro R, Pozzi F, Leona M, Melo MJ. Combining SERS and microspectrofluorimetry with historically accurate reconstructions for the characterization of lac dye paints in medieval manuscript illuminations. J Raman Spectrosc. 2014;45:1172–9.
Vitorino T, Melo MJ, Carlyle L, Otero V. New insights into brazilwood lake pigments manufacture through the use of historically accurate reconstructions. Stud Conserv. 2016;61(5):255–73.
Miguel C, Pinto JV, Clarke M, Melo MJ. The alchemy of red mercury sulphide: the production of vermilion for medieval art. Dyes Pigments. 2014;102:210–7.
Castro R, Miranda A, Melo MJ. Interpreting lac dye in medieval written sources: new knowledge from the reconstruction of recipes relating to illuminations in Portuguese manuscripts. In: Eyb-Green S, Townsend JH, Atkinson JK, Kroustallis S, Pilz K, van Leeuwen I, editors. Sources on art technology: back to basics. London: Archetype publications; 2016. p. 88–99.
Miguel C, Claro A, Melo MJ, Lopes JA. Green, blue, greenish blue or bluish green? Copper pigments in medieval Portuguese illuminations. In: Hermens E, Townsend JH, editors. Sources and serendipity—testimonies of artists' practice. Proceedings of the third symposium of the art technological source research working group. London: Archetype publications; 2009. p. 33–8.
Vitorino T, Otero V, Carlyle L, Melo MJ, Parola AJ, Picollo M. Nineteenth-century cochineal lake pigments from Winsor & Newton: Insight into their methodology through reconstructions. In: Bridgland J, editor. ICOM-CC 18th Triennial Conference Preprints, Copenhagen, 4–8 September 2017. Paris: International Council of Museums; 2017.
Cardon D. Natural dyes: sources, tradition, technology and science. London: Archetype Publications; 2007.
Phipps E. Cochineal red: the art history of a color. Metrop Mus Art Bull. 2013;67(3):4–48.
Claro A, Melo MJ, Seixas de Melo JS, van den Berg KJ, Burnstock A, Montague M, Newman R. Identification of red colourants in cultural heritage by microspectrofluorimetry. J Cult Herit. 2010;11:27–34.
Valeur B, Berberan-Santos MN. Molecular Fluorescence: Principles and Applications. 2nd ed. Weinheim: Wiley-VCH Verlag GmbH; 2012.
Miller JN, Miller JC. Statistics and chemometrics for analytical chemistry. 6th ed. England: Pearson Education Limited; 2010. p. 221–31.
Stumpe B, Engel T, Steinweg B, Marschner B. Application of PCA and SIMCA statistical analysis of FT-IR spectra for the classification and identification of different slag types with environmental origin. Environ Sci Technol. 2012;46:3964–72.
Duca D, Mancini M, Rossini G, Mengarelli C, Foppa Pedretti E, Toscano G, Pizzi A. Soft Independent Modelling of Class Analogy applied to infrared spectroscopy for rapid discrimination between hardwood and softwood. Energy. 2016;117:251–8.
Schweppe H, Roosen-Runge H. Carmine-cochineal carmine and kermes carmine, artists' pigments. In: Feller RL, editor. A handbook of their history and characteristics, vol. 1. Oxford: Oxford University Press; 1986. p. 155–298.
Merrifield MM. Medieval and renaissance treatises on the arts of painting: original texts with English translations. USA: Dover Publications; 1999.
Strolovitch DL. O libro de komo se fazen as kores das tintas todas (Transliteration). As Materias da Imagem. Lisboa: Campo da Comunicação; 2010. p. 213–36.
Melo MJ, Castro R. « O livro de como se fazem as cores » : medieval colours for practitioners. Online edition. 2016. https://www.dcr.fct.unl.pt/LivComoFazemCores. Accessed 30 Jan 2018.
Smith CS, Hawthorne JG. Mappae clavicula: a little key to the world of medieval techniques. Trans Am Phil Soc. 1974;64(4):1–128.
Neven S. Les recettes artistiques du Manuscrit de Strasbourg et leur tradition dans les réceptaires allemands des XVe et XVIe siècles (Étude historique, édition, traduction et commentaires technologiques). Doctoral dissertation, Université de Liège; 2011.
Clarke M. Mediaeval painters' materials and techniques: the montpellier liber diversarum arcium. Londres: Archetype; 2011.
Thompson DV. 'The De clarea of the so-called Anonymous Bernensis'. Technical Studies in the Field of the Fine Arts. 1932; p. 14–19.
Brunello F. De arte illuminandi e altri trattati sulla miniatura medieval. 2nd ed. Vicenza: Neri Pozza Editore; 1992.
Rinnan Å, van den Berg F, Engelsen SB. Review of the most common pre-processing techniques for near-infrared spectra. Trends Anal Chem. 2009;28(10):1201–22.
Bylesjö M, Rantalainen M, Cloarec O, Nicholson JK, Holmes E, Trygg J. OPLS discriminant analysis: combining the strengths of PLS-DA and SIMCA classification. J Chemom. 2006;20(8–10):341–51.
Claro A. An interdisciplinary approach to the study of colour in Portuguese manuscript illuminations. Doctoral dissertation, Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia; 2009.
Rondão R, Seixas de Melo JS, Pina J, Melo MJ, Vitorino T, Parola AJ. Brazilwood reds: the (photo)chemistry of brazilin and brazilein. J Phys Chem A. 2013;117:10650–60.
MJM contributed with the conception and design of the research work; acquisition, analysis and interpretation of data; final approval of the version of the article to be published. JAL contributed with the conception of the data treatment and calculations, as well as the revision of the version to be published. PN contributed with the acquisition of the spectral data related to kermes reconstructions; conception of the models and treatments applied to the spectral data, and with the writing and revision of the version to be published. TV contributed with the reconstructions of brazilwood and cochineal recipes as well as the acquisition of the spectral data, and the revision of the version to be published. AN contributed with the kermes reconstructions and the acquisition of the spectral data. RC contributed with the reconstructions of lac dye and their spectra. All authors read and approved the final manuscript.
Most of the data on which the conclusions of the manuscript rely is published in this paper, and the full data is available for consultation on request.
These studies were supported by the Portuguese Science Foundation through three research projects and three Ph.D. Grants, including the three awarded to Rita Castro, Paula Nabais and Tatiana Vitorino (Ph. D. Grant Nos. SFRH/BD/76789/2011, CORES Ph. D. programme PD/BD/105895/2014 and PD/BD/105902/2014), and through the scientific infrastructures funded through RECI/QEQ-MED/0330/2012, REM2013 and the Associated Laboratory for Sustainable Chemistry—Clean Processes and Technologies—LAQV, which is financed by national funds from FCT/MEC (UID/QUI/50006/2015) and co-financed by the ERDF under the PT2020 Partnership Agreement (POCI-01-0145-FEDER-007265). Support was also given by the Calouste Gulbenkian Foundation award 'Estímulo à Investigação 2016' (146301).
DCR and LAQV-REQUIMTE, Faculty of Sciences and Technology, NOVA University of Lisbon, 2829-516, Caparica, Portugal
Paula Nabais, Maria J. Melo, Tatiana Vitorino, Artur Neves & Rita Castro
iMed.ULisboa-Research Institute for Medicines, Faculty of Pharmacy, University of Lisbon, Av. Prof. Gama Pinto, 1649-003, Lisbon, Portugal
João A. Lopes
Nello Carrara Institute of Applied Physics, National Research Council, 50019, Sesto Fiorentino, Italy
Tatiana Vitorino
Correspondence to Maria J. Melo or João A. Lopes.
Additional file 1: Figure S1.
Excitation and emission spectra of brazilwood recipes from the Book on how to make colours, from left to right: top, recipes 8 and 9; bottom, recipes 27 and 44. Figure S2. Excitation and emission spectra of kermes recipes from the Roosen-Runge adaptation of a Jean le Begue manuscript: left, applied with glair; right, applied with gum arabic. Figure S3. Excitation and emission spectra of cochineal recipes from the Winsor & Newton 19th c. archive, from left to right: top, Finest Orient Carmine and Half Orient Carmine; middle, Ruby Carmine and Crimson with gypsum (lake pigment dispersion with gypsum); bottom, Crimson 1 (lake pigment dispersion with ammonium carbonate and alum) and 2 (lake pigment dispersion with sodium carbonate and alum). Figure S4. Excitation and emission spectra of lac dye recipes, from left to right: top, Ms. Bolognese recipe 129 (both spectra showing different intensities); middle, Ms. Bolognese recipes 130 and 140; bottom, Ms. Mappae Clavicula recipe 253 and Ms. Montpellier, Liber Diversarum Arcium, chapter VIII, recipe 1.9. Figure S5. Pre-treatment of the excitation spectral set prior to the application of HCA: Haar transform (left), and Haar transform + normalization (by area = 1) (right). Figure S6. Normalization (by area = 1) of the excitation (left) and emission (right) spectral set prior to the application of the SIMCA modelling.
Nabais, P., Melo, M.J., Lopes, J.A. et al. Microspectrofluorimetry and chemometrics for the identification of medieval lake pigments. Herit Sci 6, 13 (2018). https://doi.org/10.1186/s40494-018-0178-1
Lake pigments
Spectrofluorimetry
Manuscripts in the Making | CommonCrawl |
Specific motor cortex hypoexcitability and hypoactivation in COPD patients with peripheral muscle weakness (BMC Pulmonary Medicine)
Forty COPD patients and 22 healthy controls, aged between 40 and 80 years, were recruited for the study (Fig. 1). The COPD patients were recruited and tested on admission to two French pulmonary rehabilitation centers (Cliniques du Souffle La Vallonie, Lodève, and Les Clarines, Riom-ès-Montagne, France) between 2012 and 2014. The healthy controls were recruited through an advertisement in a local newspaper within the same period. The inclusion criteria for the COPD patients were a diagnosis of COPD with a forced expiratory volume in the 1st second (FEV1) between 30 and 80% of the theoretical values (GOLD 2 and 3), with no exacerbation or weight loss in the month preceding the study. The non-inclusion criteria were the same for patients and controls: inability to give written consent, inability to perform the experimental maneuvers, impaired visual function, use of drugs known to impact brain function (GABA agonists, Z-drugs, tricyclic antidepressants, melatoninergic antidepressants, selective serotonin/noradrenalin reuptake inhibitors and opioid receptor agonists), chronic current or past alcohol abuse (> 14 units of alcohol per week), mental disorder, and neurologic or neuromuscular disease. For the diagnosis of peripheral muscle weakness, the isometric maximal quadriceps torque (QMVC) of each participant was expressed as a percentage of the predicted values obtained from the national isometric muscle strength database consortium [33]. Patients with QMVC below 80% of predicted values were assigned to the muscle weakness group (COPDMW) and the others to the non-muscle weakness group (COPDNoMW) [34]. Healthy controls with peripheral muscle weakness were excluded from the analyses (n = 2). All participants gave written consent. Procedures were approved by the local ethics committee (CPP Sud-Est VI, Clermont-Ferrand, number AU980) and complied with the principles of the Declaration of Helsinki for human experimentation.
Flow diagram of the study
Both patients and controls underwent plethysmography (V6200 Autobox, Sensormedics Corp., Yorba Linda, CA, USA). Measurements included forced vital capacity (FVC) and FEV1. The presence of persistent airflow obstruction, and thus COPD, was defined by a post-bronchodilator FEV1/FVC ratio < 0.7 [35]. The FEV1 values were expressed as a percentage of the predicted value [36].
Blood gas analyses
Measurement of blood gases (PaO2 and PaCO2) collected from the radial artery was performed in resting patients while they breathed room air, using a blood gas analyzer (ABL 825, Radiometer Medical, Bronshoj, Denmark).
Neuromuscular tests
After determination of the dominant leg [37], the participants were comfortably seated on a dedicated ergometer for knee extensor testing (Quadriergoforme, Aleo Industrie, Salome, France) equipped with a strain gauge torque sensor (Captels, Saint Mathieu de Treviers, France). The hip and knee angles were set at 90°. The pelvis and the proximal extremity of the patella were securely attached to the chair in order to minimize movements of adjacent muscles. All the experimental maneuvers of the protocol were performed on the ergometer and in the same body position (including stimulations at rest). The participants were systematically familiarized with the experimental procedures the day before the protocol through a physical training session. This session included transcranial magnetic and femoral nerve stimulation recruitment curves, followed by 3 maximal voluntary contractions and several submaximal voluntary contractions at 30 and 50% of MVC lasting 5 s or until the targets were correctly reached, with superimposed transcranial magnetic and femoral nerve stimulations.
Evaluation of isometric maximal quadriceps torque
Isometric maximal quadriceps torque of the dominant leg was assessed as the highest torque value recorded during the protocol. Participants were verbally encouraged during each contraction to ensure maximal effort. QMVC was expressed in Nm and as a percentage of the predicted values obtained from the national isometric muscle strength database consortium [33]. The maximal electrically evoked torque (quadriceps peak twitch, QPt) was assessed at rest as the highest twitch response induced by IMmax femoral nerve stimulation (IMmax determination is described in the following paragraph).
Evaluation of peripheral and spinal excitability by femoral nerve stimulation
The femoral nerve stimulation was applied to assess peripheral and spinal excitability. A constant-current, high-voltage stimulator (DS7AH, Digitimer, Hertfordshire, UK) was used. Rectangular monophasic pulses of 500 μs were used to ensure optimal activation of deeper muscle fibers [38] and to enable the appearance of H-waves [39]. The anode, a self-adhesive electrode (10 × 5 cm), was placed over the greater trochanter. The cathode, a ball electrode covered with damp foam, was placed over the participant's femoral triangle (Scarpa), 3 to 5 cm below the inguinal ligament. To determine the optimal location, the cathode was moved by small amounts while delivering pulses at 50 mA until the highest M-wave response was obtained over the vastus medialis with the smallest possible response over the antagonist biceps femoris. Markers were then set over the participant to maintain the cathode position. A recruitment curve was performed at rest to determine the intensities at which the highest M-wave (Mmax) and H-reflex (Hmax) were obtained. One pulse was delivered over the femoral nerve every 10 s, with the intensity beginning at 50 mA and increasing by 10 mA until no further increase in twitch mechanical response and M-wave amplitude occurred. The intensity used during the protocol was set 10% above the intensity at which Mmax was elicited (supramaximal intensity, noted IMmax). IMmax was used to evoke M-waves at rest (Mmax) and to deliver double twitch pulses (doublets) at 100 Hz during maximal voluntary contraction. After the IMmax determination, the intensity eliciting the highest H-reflex was carefully sought and used to evoke the H-reflex at rest (Hmax). Peripheral and spinal excitability were defined as the highest Mmax and Hmax recorded during the protocol, respectively. Hmax was normalized with respect to Mmax (Hmax/Mmax) to avoid potential bias due to differences in peripheral excitability. Mmax and Hmax latencies were defined as the time between the stimulation onset and the evoked potential onset.
Evaluation of corticospinal excitability by transcranial magnetic stimulation
Single transcranial magnetic stimulation (TMS) pulses of 1-ms duration were delivered over the motor cortex using a Magstim 200 (Magstim Co., Whitland, UK). During the set-up, TMS pulses were delivered during isometric submaximal voluntary contraction at 10% of the maximal quadriceps torque (facilitation). The figure-of-eight coil was held over the contralateral motor cortex at the optimum scalp position to elicit motor evoked potential (MEP) responses in the contralateral vastus medialis muscle. The contralateral motor cortex was first localized using the 10–10 EEG system (C3 point for right limb stimulation, C4 point for left limb stimulation). The coil was then moved by small amounts until the highest MEP response over the vastus medialis was obtained with suprathreshold stimuli, with the smallest possible response over the antagonist biceps femoris, in order to determine the optimal coil location. If significant activation of the antagonist biceps femoris muscle was noted, the coil was slightly moved until its activation was minimized. Markers were then positioned over the participant and over the coil to maintain the coil location. After that, a recruitment curve was performed during voluntary contraction at 10% of the maximal quadriceps torque in order to determine the maximal intensity (noted IMep) [40]. One pulse was delivered every 10 s with the intensity increasing in steps of 2% until the highest response was obtained. At least three pulses were delivered at each intensity level to check for reproducibility. The maximal intensity was defined as the intensity at which the highest MEP amplitude was obtained over the vastus medialis. This intensity was then used during the protocol to elicit MEP responses during maximal voluntary contractions in order to assess corticospinal excitability and primary motor cortex activation. If a participant reached the maximum stimulator output without evidence of a maximal MEP response (i.e., no evidence of a plateau in MEP amplitude before reaching the maximal output), the data were excluded from the analyses.
Corticospinal excitability was assessed during maximal voluntary contractions as the highest amplitude of the MEP induced by IMep, normalized with respect to peripheral excitability (MEP/Mmax). The silent period duration was measured as the time between the MEP onset and the return of voluntary EMG activity. The central motor conduction time was calculated from the delay between the stimulus artifact and the MEP onset.
Evaluation of voluntary activation with femoral nerve and transcranial magnetic stimulation
The voluntary activation was assessed by peripheral nerve stimulation (VAperipheral) and transcranial magnetic stimulation (VAcortical).
VAperipheral was calculated according to the twitch interpolation technique (4). A supramaximal doublet was delivered during the force plateau of the maximal voluntary contraction (superimposed doublet) and 2 s after relaxation (control doublet). VAperipheral was calculated as the ratio between the twitch-like increment in torque induced by the supramaximal doublet during maximal voluntary contraction and after relaxation:
$$ {\mathrm{VA}}_{\mathrm{peripheral}}\ \left(\%\right)=\left[1-\left(\mathrm{superimposed}\ \mathrm{doublet}/\mathrm{control}\ \mathrm{doublet}\right)\right]\times 100 $$
VAcortical was calculated by stimulating the motor cortex during the quadriceps contractions according to the method described by Sidhu et al. [19]. The estimated resting twitch was calculated from the linear relationship obtained by plotting, against voluntary torque, the twitch-like increments in torque induced by the transcranial magnetic pulses delivered during the last two maximal voluntary contractions and during the submaximal voluntary contractions at 30 and 50% of QMVC. When no linear relationship could be obtained between the voluntary force and the twitch-like increment in torque (r < 0.9), the data were excluded from the analyses [41]. VAcortical was calculated as the ratio between the highest twitch-like increment in torque induced by the TMS pulses during maximal voluntary contractions and the estimated resting twitch:
$$ {\mathrm{VA}}_{\mathrm{cortical}}\ \left(\%\right)=\left[1-\left(\mathrm{superimposed}\ \mathrm{twitch}/\mathrm{estimated}\ \mathrm{resting}\ \mathrm{twitch}\right)\right]\times 100 $$
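To make the two activation indices concrete, the sketch below (Python/NumPy) computes VAperipheral from the superimposed and control doublets and VAcortical via the linear extrapolation described above; all variable names are hypothetical, and the r < 0.9 exclusion rule follows the criterion stated in the text.

```python
# Hedged sketch of the voluntary activation calculations (Python/NumPy);
# variable names are hypothetical illustrations of the equations above.
import numpy as np

def va_peripheral(superimposed_doublet, control_doublet):
    """Twitch interpolation: VA (%) from doublet torque increments."""
    return (1.0 - superimposed_doublet / control_doublet) * 100.0

def va_cortical(voluntary_torques, superimposed_twitches):
    """Sidhu-type estimate using twitches at 30, 50 and 100% of QMVC."""
    x = np.asarray(voluntary_torques)
    y = np.asarray(superimposed_twitches)
    if abs(np.corrcoef(x, y)[0, 1]) < 0.9:      # exclusion criterion used in the study
        return np.nan
    slope, intercept = np.polyfit(x, y, 1)
    estimated_resting_twitch = intercept        # extrapolation to zero voluntary torque
    superimposed_at_mvc = y[np.argmax(x)]
    return (1.0 - superimposed_at_mvc / estimated_resting_twitch) * 100.0
```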
EMG activity
The surface EMG activity of the vastus medialis, rectus femoris and biceps femoris was recorded throughout the protocol with Biopac technology (Biopac MP100, Biopac Systems, Santa Barbara, CA, USA). Bipolar, silver chloride, square surface electrodes with a 9-mm diameter were used (Contrôle Graphique Médical, Brie-Compte-Robert, France). In order to minimize impedance (< 5 kΩ), the skin was shaved, abraded, and cleaned with alcohol. Two electrodes were set at the middle belly of the vastus medialis, rectus femoris and long head of the biceps femoris muscles of the dominant leg with an interelectrode distance of 2 cm. The reference electrode was placed on the opposite patella. The EMG signal was band-pass-filtered (10–500 Hz), amplified (× 1000) and recorded at a sample frequency of 4096 Hz.
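The EMG conditioning described above (10–500 Hz band-pass at a 4096 Hz sampling rate) could be reproduced offline with a zero-phase Butterworth filter, as sketched below; the filter order is an assumption, since the hardware filter characteristics are not specified.

```python
# Sketch of the 10-500 Hz band-pass applied to EMG sampled at 4096 Hz (Python/
# SciPy); the 4th-order Butterworth design is an assumption.
from scipy.signal import butter, filtfilt

FS = 4096.0  # sampling frequency (Hz)

def bandpass_emg(signal, low=10.0, high=500.0, order=4):
    nyq = FS / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)  # zero-phase filtering preserves EMG onset timing
```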
The participants performed four maximal voluntary contractions of the knee extensors, each separated by 2 min of recovery (Fig. 2). They were asked to maintain maximal effort for at least 4 s. During the first two maximal voluntary contraction maneuvers, a double pulse at 100 Hz was delivered over the femoral nerve during the force plateau (superimposed doublet) and 2 s after relaxation (control doublet). During the last two maximal voluntary contraction maneuvers, a single TMS pulse at IMep was delivered over the motor cortex to elicit MEPs during the force plateau. Three single pulses at IMmax or Hmax intensity, separated by 10 s, were delivered twice between maximal voluntary contractions to elicit Mmax and Hmax at rest, respectively. The time interval between Mmax and Hmax stimuli was between 30 and 40 s. If any pre-stimulus voluntary activity was observed, the corresponding stimuli were discarded. After the maximal voluntary contractions, three submaximal voluntary contractions (SVC) with visual feedback were performed at 50 and 30% of QMVC. A single TMS pulse at IMep was delivered during the force plateau of each SVC to elicit superimposed twitch responses at 30 and 50% of QMVC.
Experimental design. QMVC: Quadriceps voluntary contractions at maximal (100% of QMVC) or submaximal (50 and 30% of QMVC) intensity. Superimposed and control doublets, maximal M-waves (Mmax), and maximal H-waves (Hmax) were delivered via electrical stimulation over the femoral nerve. Motor evoked potentials (MEP) were delivered over the motor cortex via transcranial magnetic stimulation
All statistical analyses except the slope comparisons were performed using Statistica software (StatSoft, Inc., version 6.0, Tulsa, OK, USA). All data were examined for normality using a Shapiro-Wilk test. Differences between the pooled COPD patients and the healthy controls were studied using unpaired t-tests for parametric data and the non-parametric Mann-Whitney U test otherwise. Differences between the COPDMW and COPDNoMW groups and healthy controls were tested using a one-way between-subject analysis of variance (ANOVA), except when no data were available for healthy controls (i.e. blood gas analyses and comorbidities), in which case the two groups of patients were compared using an unpaired t-test instead. The underlying assumptions of ANOVA were checked using a Levene test (homogeneity of variance). When the ANOVA F ratio was significant (p < 0.05), the means were compared by a Studentized Newman-Keuls (SNK) post-hoc test. Analysis of covariance (ANCOVA) was used with (1) QMVC as the criterion variable and QPt as the covariate (adjusted maximal voluntary strength), and (2) VAcortical as the criterion variable and PaO2 as the covariate. Bivariate regression analyses were performed using the Pearson coefficient. The slopes and Y-intercepts of the relationships between QMVC and QPt were compared between the three groups using a specific ANOVA procedure of the Statgraphics Centurion XVII statistical package. Data are presented as mean ± standard error (SE).
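A minimal sketch of the two-group comparison logic is given below in Python/SciPy; the original analyses were run in Statistica and Statgraphics, so this is only an illustrative analogue of the normality-based choice of test.

```python
# Illustrative analogue (Python/SciPy) of the two-group comparisons; the study
# itself used Statistica and Statgraphics.
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Unpaired t-test if both samples pass Shapiro-Wilk, otherwise Mann-Whitney U."""
    normal = stats.shapiro(a)[1] > alpha and stats.shapiro(b)[1] > alpha
    if normal:
        return stats.ttest_ind(a, b, equal_var=True)
    return stats.mannwhitneyu(a, b, alternative="two-sided")
```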
September 2014, 13(5): 1907-1933. doi: 10.3934/cpaa.2014.13.1907
Reaction-diffusion equations with a switched--off reaction zone
Peter E. Kloeden 1, Thomas Lorenz 2, and Meihua Yang 3,
Institut für Mathematik, Goethe Universität, D-60054 Frankfurt am Main
Institute of Mathematics, Johann Wolfgang Goethe University, 60054 Frankfurt (Main)
School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan, 430074
Received March 2013 Revised May 2013 Published June 2014
Reaction-diffusion equations are considered on a bounded domain $\Omega$ in $\mathbb{R}^d$ with a reaction term that is switched off at a point in space when the solution first exceeds a specified threshold and thereafter remains switched off at that point, which leads to a discontinuous reaction term with delay. This problem is formulated as a parabolic partial differential inclusion with delay. The reaction-free region forms what could be called a dead core in a biological sense, rather than in the sense used elsewhere in the literature for parabolic PDEs. The existence of solutions in $L^2(\Omega)$ is established first for initial data in $L^{\infty}(\Omega)$ and then for initial data in $W_0^{1,2}(\Omega)$, by different methods, with $d = 2$ or $3$ in the first case and $d \geq 2$ in the second. Solutions here are interpreted in the sense of integral or strong solutions of nonhomogeneous linear parabolic equations in $L^2(\Omega)$ that are generalised to selectors of the corresponding nonhomogeneous linear parabolic differential inclusions, and they are shown to be equivalent under the assumptions used in the paper.
Keywords: Reaction-diffusion equation, dead core, existence of solutions, memory, inclusion equations, discontinuous right-hand sides.
Mathematics Subject Classification: Primary: 35R70; Secondary: 35K15, 35K5.
Citation: Peter E. Kloeden, Thomas Lorenz, Meihua Yang. Reaction-diffusion equations with a switched--off reaction zone. Communications on Pure & Applied Analysis, 2014, 13 (5) : 1907-1933. doi: 10.3934/cpaa.2014.13.1907
Shin-Yi Lee, Shin-Hwa Wang, Chiou-Ping Ye. Explicit necessary and sufficient conditions for the existence of a dead core solution of a p-laplacian steady-state reaction-diffusion problem. Conference Publications, 2005, 2005 (Special) : 587-596. doi: 10.3934/proc.2005.2005.587
Angela Alberico, Teresa Alberico, Carlo Sbordone. Planar quasilinear elliptic equations with right-hand side in $L(\log L)^{\delta}$. Discrete & Continuous Dynamical Systems - A, 2011, 31 (4) : 1053-1067. doi: 10.3934/dcds.2011.31.1053
M. Grasselli, V. Pata. A reaction-diffusion equation with memory. Discrete & Continuous Dynamical Systems - A, 2006, 15 (4) : 1079-1088. doi: 10.3934/dcds.2006.15.1079
Chunlai Mu, Jun Zhou, Yuhuan Li. Fast rate of dead core for fast diffusion equation with strong absorption. Communications on Pure & Applied Analysis, 2010, 9 (2) : 397-411. doi: 10.3934/cpaa.2010.9.397
Angelo Favini, Atsushi Yagi. Global existence for Laplace reaction-diffusion equations. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 1-21. doi: 10.3934/dcdss.2020083
Aníbal Rodríguez-Bernal, Alejandro Vidal-López. A note on the existence of global solutions for reaction-diffusion equations with almost-monotonic nonlinearities. Communications on Pure & Applied Analysis, 2014, 13 (2) : 635-644. doi: 10.3934/cpaa.2014.13.635
Lili Du, Chunlai Mu, Zhaoyin Xiang. Global existence and blow-up to a reaction-diffusion system with nonlinear memory. Communications on Pure & Applied Analysis, 2005, 4 (4) : 721-733. doi: 10.3934/cpaa.2005.4.721
Wei Feng, Weihua Ruan, Xin Lu. On existence of wavefront solutions in mixed monotone reaction-diffusion systems. Discrete & Continuous Dynamical Systems - B, 2016, 21 (3) : 815-836. doi: 10.3934/dcdsb.2016.21.815
Yuriy Golovaty, Anna Marciniak-Czochra, Mariya Ptashnyk. Stability of nonconstant stationary solutions in a reaction-diffusion equation coupled to the system of ordinary differential equations. Communications on Pure & Applied Analysis, 2012, 11 (1) : 229-241. doi: 10.3934/cpaa.2012.11.229
Jong-Shenq Guo, Yoshihisa Morita. Entire solutions of reaction-diffusion equations and an application to discrete diffusive equations. Discrete & Continuous Dynamical Systems - A, 2005, 12 (2) : 193-212. doi: 10.3934/dcds.2005.12.193
Samira Boussaïd, Danielle Hilhorst, Thanh Nam Nguyen. Convergence to steady state for the solutions of a nonlocal reaction-diffusion equation. Evolution Equations & Control Theory, 2015, 4 (1) : 39-59. doi: 10.3934/eect.2015.4.39
Michele V. Bartuccelli, K. B. Blyuss, Y. N. Kyrychko. Length scales and positivity of solutions of a class of reaction-diffusion equations. Communications on Pure & Applied Analysis, 2004, 3 (1) : 25-40. doi: 10.3934/cpaa.2004.3.25
Peter Poláčik, Eiji Yanagida. Stable subharmonic solutions of reaction-diffusion equations on an arbitrary domain. Discrete & Continuous Dynamical Systems - A, 2002, 8 (1) : 209-218. doi: 10.3934/dcds.2002.8.209
Chin-Chin Wu, Zhengce Zhang. Dead-core rates for the heat equation with a spatially dependent strong absorption. Discrete & Continuous Dynamical Systems - B, 2013, 18 (8) : 2203-2210. doi: 10.3934/dcdsb.2013.18.2203
Xinfu Chen, Jong-Shenq Guo, Bei Hu. Dead-core rates for the porous medium equation with a strong absorption. Discrete & Continuous Dynamical Systems - B, 2012, 17 (6) : 1761-1774. doi: 10.3934/dcdsb.2012.17.1761
Zhaosheng Feng. Traveling waves to a reaction-diffusion equation. Conference Publications, 2007, 2007 (Special) : 382-390. doi: 10.3934/proc.2007.2007.382
Nick Bessonov, Gennady Bocharov, Tarik Mohammed Touaoula, Sergei Trofimchuk, Vitaly Volpert. Delay reaction-diffusion equation for infection dynamics. Discrete & Continuous Dynamical Systems - B, 2019, 24 (5) : 2073-2091. doi: 10.3934/dcdsb.2019085
Shu-Xiang Huang, Fu-Cai Li, Chun-Hong Xie. Global existence and blow-up of solutions to a nonlocal reaction-diffusion system. Discrete & Continuous Dynamical Systems - A, 2003, 9 (6) : 1519-1532. doi: 10.3934/dcds.2003.9.1519
Hideo Deguchi. A reaction-diffusion system arising in game theory: existence of solutions and spatial dominance. Discrete & Continuous Dynamical Systems - B, 2017, 22 (10) : 3891-3901. doi: 10.3934/dcdsb.2017200
Joaquin Riviera, Yi Li. Existence of traveling wave solutions for a nonlocal reaction-diffusion model of influenza a drift. Discrete & Continuous Dynamical Systems - B, 2010, 13 (1) : 157-174. doi: 10.3934/dcdsb.2010.13.157
Graph dynamical networks for unsupervised learning of atomic scale dynamics in materials
Tian Xie1,
Arthur France-Lanord1,
Yanming Wang (ORCID: orcid.org/0000-0002-0912-681X)1,
Yang Shao-Horn2 &
Jeffrey C. Grossman (ORCID: orcid.org/0000-0003-1281-2359)1
Nature Communications volume 10, Article number: 2667 (2019) Cite this article
Understanding the dynamical processes that govern the performance of functional materials is essential for the design of next generation materials to tackle global energy and environmental challenges. Many of these processes involve the dynamics of individual atoms or small molecules in condensed phases, e.g. lithium ions in electrolytes, water molecules in membranes, molten atoms at interfaces, etc., which are difficult to understand due to the complexity of local environments. In this work, we develop graph dynamical networks, an unsupervised learning approach for understanding atomic scale dynamics in arbitrary phases and environments from molecular dynamics simulations. We show that important dynamical information, which would be difficult to obtain otherwise, can be learned for various multi-component amorphous material systems. With the large amounts of molecular dynamics data generated every day in nearly every aspect of materials design, this approach provides a broadly applicable, automated tool to understand atomic scale dynamics in material systems.
Understanding the atomic scale dynamics in condensed phases is essential for the design of functional materials to tackle global energy and environmental challenges1,2,3. The performance of many materials depends on the dynamics of individual atoms or small molecules in complex local environments. Despite the rapid advances in experimental techniques4,5,6, molecular dynamics (MD) simulations remain one of the few tools for probing these dynamical processes with both atomic scale time and spatial resolutions. However, due to the large amounts of data generated in each MD simulation, it is often challenging to extract statistically relevant dynamics for each atom especially in multi-component, amorphous material systems. At present, atomic scale dynamics are usually learned by designing system-specific descriptions of coordination environments or computing the average behavior of atoms7,8,9,10. A general approach for understanding the dynamics in different types of condensed phases, including solid, liquid, and amorphous, is still lacking.
The advances in applying deep learning to scientific research open new opportunities for utilizing the full trajectory data from MD simulations in an automated fashion. Ideally, one would trace every atom or small molecule of interest in the MD trajectories, and summarize their dynamics into a linear, low dimensional model that describes how their local environments evolve over time. Recent studies show that combining Koopman analysis and deep neural networks provides a powerful tool to understand complex biological processes and fluid dynamics from data11,12,13. In particular, VAMPnets13 develop a variational approach for Markov processes to learn an optimal latent space representation that encodes the long-time dynamics, which enables the end-to-end learning of a linear dynamical model directly from MD data. However, in order to learn the atomic dynamics in complex, multi-component material systems, sharing knowledge learned for similar local chemical environments is essential to reduce the amount of data needed. The recent development of graph convolutional neural networks (GCN) has led to a series of new representations of molecules14,15,16,17 and materials18,19 that are invariant to permutation and rotation operations. These representations provide a general approach to encode the chemical structures in neural networks which shares parameters between different local environments, and they have been used for predicting properties of molecules and materials14,15,16,17,18,19, generating force fields19,20, and visualizing structural similarities21,22.
In this work, we develop a deep learning architecture, Graph Dynamical Networks (GDyNets), that combines Koopman analysis and graph convolutional neural networks to learn the dynamics of individual atoms in material systems. The graph convolutional neural networks allow for the sharing of knowledge learned for similar local environments across the system, and the variational loss developed in VAMPnets13,23 is employed to learn a linear model for atomic dynamics. Thus, our method focuses on the modeling of local atomic dynamics instead of global dynamics. This significantly improves the sampling of the atomic dynamical processes, because a typical material system includes a large number of atoms or small molecules moving in structurally similar but distinct local environments. We demonstrate this distinction using a toy system that shows global dynamics can be exponentially more complex than local dynamics. Then, we apply this method to two realistic material systems—silicon dynamics at solid–liquid interfaces and lithium ion transport in amorphous polymer electrolytes—to demonstrate the new dynamical information one can extract for such complex materials and environments. Given the enormous amount of MD data generated in nearly every aspect of materials research, we believe the broad applicability of this method could help uncover important new physical insights from atomic scale dynamics that may have otherwise been overlooked.
Koopman analysis of atomic scale dynamics
In materials design, the dynamics of target atoms, like the lithium ion in electrolytes and the water molecule in membranes, provide key information to material performance. We describe the dynamics of the target atoms and their surrounding atoms as a discrete process in MD simulations,
$${\boldsymbol{x}}_{t + \tau } = {\boldsymbol{F}}({\boldsymbol{x}}_t),$$
where \({\boldsymbol{x}}_t\) and \({\boldsymbol{x}}_{t+\tau}\) denote the local configuration of the target atoms and their surrounding atoms at time steps t and t + τ, respectively. Note that Eq. (1) implies that the dynamics of \({\boldsymbol{x}}\) is Markovian, i.e. \({\boldsymbol{x}}_{t+\tau}\) only depends on \({\boldsymbol{x}}_t\), not on the configurations before it. This is exact when \({\boldsymbol{x}}\) includes all atoms in the system, but an approximation if only neighbor atoms are included. We also assume that each set of target atoms follows the same dynamics \({\boldsymbol{F}}\). These are valid assumptions since (1) most interactions in materials are short-range, and (2) most materials are either periodic or have similar local structures; we can test them by validating the dynamical models using new MD data, which we will discuss later.
The Koopman theory24 states that there exists a function χ(x) that maps the local configuration of target atoms x into a lower dimensional feature space, such that the non-linear dynamics F can be approximated by a linear transition matrix K,
$${\boldsymbol{\chi }}({\boldsymbol{x}}_{t + \tau }) \approx {\boldsymbol{K}}^T{\boldsymbol{\chi }}({\boldsymbol{x}}_t).$$
The approximation becomes exact when the feature space has infinite dimensions. However, for most dynamics in material systems, it is possible to approximate it with a low dimensional feature space if τ is sufficiently large due to the existence of characteristic slow processes. The goal is to identify such slow processes by finding the feature map function χ(x).
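As an illustration of Eq. (2) only (not the training procedure used in this work, where χ and K are learned jointly from the VAMP loss), a finite-dimensional Koopman matrix can be estimated from a given feature map by regularized least squares over time-lagged samples. The function name and the small ridge term `eps` in this sketch are illustrative assumptions.

```python
import numpy as np

def estimate_koopman(chi_t, chi_tau, eps=1e-10):
    """Least-squares estimate of a finite-dimensional Koopman matrix K such that
    chi(x_{t+tau}) ~= K^T chi(x_t), i.e. chi_tau ~= chi_t @ K for row-vector features.

    chi_t, chi_tau: arrays of shape (n_samples, n_features) holding feature
    vectors evaluated at times t and t + tau for many sampled configurations.
    """
    n = len(chi_t)
    c00 = chi_t.T @ chi_t / n      # instantaneous covariance C(0)
    c0t = chi_t.T @ chi_tau / n    # time-lagged covariance C(tau)
    # A small ridge term keeps the solve well-posed for nearly singular C(0).
    return np.linalg.solve(c00 + eps * np.eye(c00.shape[0]), c0t)
```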
Learning feature map function with graph dynamical networks
In this work, we use GCN to learn the feature map function χ(x). GCN provides a general framework to encode the structure of materials that is invariant to permutation, rotation, and reflection18,19. As shown in Fig. 1, for each time step in the MD trajectory, a graph \({\cal{G}}\) is constructed based on its current configuration with each node vi representing an atom and each edge ui,j representing a bond connecting nearby atoms. We connect M nearest neighbors considering periodic boundary conditions while constructing the graph, and a gated architecture18 is used in GCN to reweigh the strength of each connection (see Supplementary Note 1 for details). Note that the graphs are constructed separately for each step, so the topology of each graph may be different. Also, the 3-dimensional information is preserved in the graphs since the bond length is encoded in ui,j. Then, each graph is input to the same GCN to learn an embedding for each atom through graph convolution (or neural message passing16) that incorporates the information of its surrounding environments.
$${\boldsymbol{v}}_i^\prime = {\mathrm{Conv}}({\boldsymbol{v}}_i,{\boldsymbol{v}}_j,{\boldsymbol{u}}_{(i,j)}),\quad (i,j) \in {\cal{G}}.$$
After K convolution operations, information from the Kth neighbors will be propagated to each atom, resulting in an embedding \({\boldsymbol{v}}_i^{(K)}\) that encodes its local environment.
Illustration of the graph dynamical networks architecture. The MD trajectories are represented by a series of graphs dynamically constructed at each time step. The red nodes denote the target atoms whose dynamics we are interested in, and the blue nodes denote the rest of the atoms. The graphs are input to the same graph convolutional neural network to learn an embedding \({\boldsymbol{v}}_i^{(K)}\) for each atom that represents its local configuration. The embeddings of the target atoms at t and t + τ are merged to compute a VAMP loss that minimizes the errors in Eq. (2)
To learn a feature map function for the target atoms whose dynamics we want to model, we focus on the embeddings learned for these atoms. Assume that there are n sets of target atoms in the material system, each made up of k atoms. For instance, in a system of 10 water molecules, n = 10 and k = 3. We use the label \({\boldsymbol{v}}_{[l,m]}\) to denote the mth atom in the lth set of target atoms. With a pooling function18, we can get an overall embedding \({\boldsymbol{v}}_{[l]}\) for each set of target atoms to represent its local configuration,
$${\boldsymbol{v}}_{[l]} = {\mathrm{Pool}}({\boldsymbol{v}}_{[l,0]},{\boldsymbol{v}}_{[l,1]}, \ldots ,{\boldsymbol{v}}_{[l,k]}).$$
Finally, we build a shared two-layer fully connected neural network with an output layer using a Softmax activation function to map the embeddings \({\boldsymbol{v}}_{[l]}\) to a feature space \(\widetilde {\boldsymbol{v}}_{[l]}\) with a pre-determined dimension. This is the feature space described in Eq. (2), and we can select an appropriate dimension to capture the important dynamics in the material system. The Softmax function used here allows us to interpret the feature space as a probability over several states13. Below, we will use the terms "number of states" and "dimension of feature space" interchangeably.
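A minimal sketch of such a shared head, written with the Keras API; the hidden width of 64 is an illustrative choice, not a value taken from the paper.

```python
import tensorflow as tf

def build_state_head(embedding_dim, n_states, hidden_units=64):
    """Shared fully connected head mapping a pooled embedding v_[l] to a
    probability vector over n_states dynamical states (the feature space in Eq. (2))."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(hidden_units, activation="relu",
                              input_shape=(embedding_dim,)),
        tf.keras.layers.Dense(n_states, activation="softmax"),
    ])
```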
To minimize the errors of the approximation in Eq. (2), we compute the loss of the system using a VAMP-2 score13,24 that measures the consistency between the feature vectors learned at timesteps t and t + τ,
$${\mathrm{Loss}} = - {\mathrm{VAMP}}(\widetilde {\boldsymbol{v}}_{[l],t},\widetilde {\boldsymbol{v}}_{[l],t + \tau }),\quad t \in [0,T - \tau ],l \in [0,n].$$
This means that a single VAMP-2 score is computed over the whole trajectory and all sets of target atoms. The entire network is trained by minimizing the VAMP loss, i.e. maximizing the VAMP-2 score, with the trajectories from the MD simulations.
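For concreteness, the VAMP-2 score of a learned feature map can be estimated from paired, time-lagged feature matrices as in the NumPy sketch below. It follows the standard covariance-based formula, ignores constant offsets, and is not the TensorFlow loss used for training.

```python
import numpy as np

def vamp2_score(chi_t, chi_tau, eps=1e-10):
    """VAMP-2 score estimated from feature matrices of shape (n_samples, n_states)
    taken at times t and t + tau. The training loss is the negative of this score."""
    x = chi_t - chi_t.mean(axis=0)       # mean-free features at t
    y = chi_tau - chi_tau.mean(axis=0)   # mean-free features at t + tau
    n = len(x)
    c00 = x.T @ x / n + eps * np.eye(x.shape[1])
    ctt = y.T @ y / n + eps * np.eye(y.shape[1])
    c0t = x.T @ y / n

    def inv_sqrt(c):
        # Symmetric inverse square root via eigendecomposition.
        w, v = np.linalg.eigh(c)
        return v @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ v.T

    k_bar = inv_sqrt(c00) @ c0t @ inv_sqrt(ctt)
    return float(np.sum(k_bar ** 2))     # squared Frobenius norm
```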
Hyperparameter optimization and model validation
There are several hyperparameters in the GDyNets that need to be optimized, including the architecture of GCN, the dimension of the feature space, and lag time τ. We divide the MD trajectory into training, validation, and testing sets. The models are trained with trajectories from the training set, and a VAMP-2 score is computed with trajectories from the validation set. The GCN architecture is optimized according to the VAMP-2 score similar to ref. 18.
The accuracy of Eq. (2) can be evaluated with a Chapman-Kolmogorov (CK) equation,
$${\boldsymbol{K}}(n\tau ) = {\boldsymbol{K}}^n(\tau ),\quad n = 1,2, \ldots .$$
This equation holds if the dynamic model learned is Markovian, and it can predict the long-time dynamics of the system. In general, increasing the dimension of feature space makes the dynamic model more accurate, but it may result in overfitting when the dimension is very large. Since a higher feature space dimension and a larger τ make the model harder to understand and contain fewer dynamical details, we select the smallest feature space dimension and τ that fulfill the CK equation within statistical uncertainty. Therefore, the resulting model is interpretable and contains more dynamical details. Further details regarding the effects of feature space dimension and τ can be found in refs. 13,24.
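A minimal sketch of the CK comparison, assuming the two transition matrices K(τ) and K(nτ) have already been estimated:

```python
import numpy as np

def ck_deviation(k_tau, k_ntau, n):
    """Chapman-Kolmogorov check: propagate the model estimated at lag tau for n
    steps, K(tau)^n, and compare with the model estimated directly at lag n*tau.
    For a Markovian model the deviation should stay within statistical error."""
    return np.max(np.abs(np.linalg.matrix_power(k_tau, n) - k_ntau))
```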
Local and global dynamics in the toy system
To demonstrate the advantage of learning local dynamics in material systems, we compare the dynamics learned by the GDyNet with VAMP loss and a standard VAMPnet with fully connected neural networks that learns global dynamics for a simple model system using the same input data. As shown in Fig. 2a, we generated a 200 ns MD trajectory of a lithium atom moving in a face-centered cubic (FCC) lattice of sulfur atoms at a constant temperature, which describes an important lithium ion transport mechanism in solid-state electrolytes7. There are two different sites for the lithium atom to occupy in a FCC lattice, tetrahedral sites and octahedral sites, and the hopping between the two sites should be the only dynamics in this system. As shown in Fig. 2b–d, after training and validation with the first 100 ns trajectory, the GDyNet correctly identified the transition between the two sites with a relaxation timescale of 42.3 ps while testing on the second 100 ns trajectory, and it performs well in the CK test. In contrast, the standard VAMPnet, which inputs the same data as the GDyNet, learns a global transition with a much longer relaxation timescale at 236 ps, and it performs much worse in the CK test. This is because the model views the four octahedral sites as different sites due to their different spatial locations. As a result, the transitions between these identical sites are learned as the slowest global dynamics.
A two-state dynamic model learned for a lithium ion in the face-centered cubic lattice. a Structure of the FCC lattice and the relative energies of the tetrahedral and octahedral sites. b–d Comparison between the local dynamics (left) learned with GDyNet and the global dynamics (right) learned with a standard VAMPnet. b Relaxation timescales computed from the Koopman models as a function of the lag time. The black lines are reference lines where the relaxation timescale equals to the lag time. c Assignment of the two states in the FCC lattice. The color denotes the probability of being in state 0, which corresponds to one of the two states that has a larger population. d CK test comparing the long-time dynamics predicted by Koopman models at τ = 10 ps (blue) and actual dynamics (red). The shaded areas and error bars in b, d report the 95% confidence interval from five independent trajectories by dividing the test data equally into chunks
It is theoretically possible to identify the faster local dynamics from a global dynamical model when we increase the dimension of feature space (Supplementary Fig. 1). However, when the size of the system increases, the number of slower global transitions will increase exponentially, making it practically impossible to discover important atomic scale dynamics within a reasonable simulation time. In addition, it is possible in this simple system to design a symmetrically invariant coordinate to include the equivalence of the octahedral and tetrahedral sites. But in a more complicated multi-component or amorphous material system, it is difficult to design such coordinates that take into account the complex atomic local environments. Finally, it is also possible to reconstruct global dynamics from the local dynamics. Since we know how the four octahedral and eight tetrahedral sites are connected in a FCC lattice, we can construct the 12 dimensional global transition matrix from the 2 dimensional local transition matrix (see Supplementary Note 2 for details). We obtain the slowest global relaxation timescale to be 531 ps, which is close to the observed slowest timescale of 528 ps from the global dynamical model in Supplementary Fig. 1. Note that the timescale from the two-state global model in Fig. 2 is less accurate since it fails to learn the correct transition. In sum, the built-in invariances in GCN provide a general approach to reduce the complexity of learning atomic dynamics in material systems.
Silicon dynamics at a solid–liquid interface
To evaluate the performance of the GDyNets with VAMP loss for a more complicated system, we study the dynamics of silicon atoms at a binary solid–liquid interface. Understanding the dynamics at interfaces is notoriously difficult due to the complex local structures formed during phase transitions25,26. As shown in Fig. 3a, an equilibrium system made of two crystalline Si {110} surfaces and a liquid Si–Au solution is constructed at the eutectic point (629 K, 23.4% Si27) and simulated for 25 ns using MD. We train and validate a four-state model using the first 12.5 ns trajectory, and use it to identify the dynamics of Si atoms in the last 12.5 ns trajectory. Note that we only use the Si atoms in the liquid phase and the first two layers of the solid {110} surfaces as the target atoms (Fig. 3b). This is because the Koopman models are optimized for finding the slowest transition in the system, and including additional solid Si atoms will result in a model that learns the slower Si hopping in the solid phase which is not our focus.
A four-state dynamical model learned for silicon atoms at a solid–liquid interface. a Structure of the silicon-gold two-phase system. b Cross section of the system, where only silicon atoms are shown and color-coded with the probability of being in each state. c The distribution of silicon atoms in each state as a function of z-axis coordinate. d Relaxation timescales computed from the Koopman models as a function of the lag time. The black lines are reference lines where the relaxation timescale equals to the lag time. e Eigenvectors projected to each state for the three relaxations of Koopman models at τ = 3 ns. f CK test comparing the long-time dynamics predicted by Koopman models at τ = 3 ns (blue) and actual dynamics (red). The shaded areas and error bars in d, f report the 95% confidence interval from five sets of Si atoms by randomly dividing the target atoms in the test data
In Fig. 3b, c, the model identified four states that are crucial for the Si dynamics at the solid–liquid interface – liquid Si at the interface (state 0), solid Si (state 1), solid Si at the interface (state 2), and liquid Si (state 3). These states provide a more detailed description of the solid–liquid interface structure than conventional methods. In Supplementary Fig. 2, we compare our results with the distribution of the q3 order parameter of the Si atoms in the system, which measures how much a site deviates from a diamond-like structure and is often used for studying Si interfaces28. We learn from the comparison that (1) our method successfully identifies the bulk liquid and solid states, and learns additional interface states that cannot be obtained from q3; (2) the states learned by our method are more robust due to access to dynamical information, while q3 can be affected by the accidental ordered structures in the liquid phase; (3) q3 is system specific and only works for diamond-like structures, but the GDyNets can potentially be applied to any material given the MD data.
In addition, important dynamical processes at the solid–liquid interface can be learned with the model. Remarkably, the model identified the relaxation process of the solid–liquid transition with a timescale of 538 ns (Fig. 3d, e), which is one order of magnitude longer than the simulation time of 12.5 ns. This is because the large number of Si atoms in the material system provide an ensemble of independent trajectories that enable the identification of rare events29,30,31. The other two relaxation processes correspond to the transitions of solid Si atoms into/out of the interface (73.2 ns) and liquid Si atoms into/out of the interface (2.26 ns), respectively. These processes are difficult to obtain with conventional methods due to the complex structures at solid–liquid interfaces, and the results are consistent with our understanding that the former solid relaxation is significantly slower than the latter liquid relaxation. Finally, the model performs excellently in the CK test on predicting the long-time dynamics.
Lithium ion dynamics in polymer electrolytes
Finally, we apply GDyNets with VAMP loss to study the dynamics of lithium ions (Li-ions) in solid polymer electrolytes (SPEs), an amorphous material system composed of multiple chemical species. SPEs are candidates for next-generation battery technology due to their safety, stability, and low manufacturing cost, but they suffer from low Li-ion conductivity compared with liquid electrolytes32,33. Understanding the key dynamics that affect the transport of Li-ions is important to the improvement of Li-ion conductivity in SPEs.
We focus on the state-of-the-art33 SPE system—a mixture of poly(ethylene oxide) (PEO) and lithium bis-trifluoromethyl sulfonimide (LiTFSI) with Li/EO = 0.05 and a degree of polymerization of 50, as shown in Fig. 4a. Five independent 80 ns trajectories are generated to model the Li-ion transport at 363 K, following the same approach as described in ref. 67. We train a four-state GDyNet with one of the trajectories, and use the model to identify the dynamics of Li-ions in the remaining four trajectories. The model identified four different solvation environments, i.e. states, for the Li-ions in the SPE. In Fig. 4b, the state 0 Li-ion has a population of 50.6 ± 0.8%, and it is coordinated by a PEO chain on one side and a TFSI anion on the other side. State 1 has a similar structure to state 0 with a population of 27.3 ± 0.4%, but the Li-ion is coordinated by a hydroxyl group on the PEO side rather than an oxygen. In state 2, the Li-ion is completely coordinated by TFSI anions; this state has a population of 15.1 ± 0.4%. The state 3 Li-ion is coordinated by PEO chains, with a population of 7.0 ± 0.9%. Note that the structures in Fig. 4b only show a representative configuration for each state. We compute the element-wise radial distribution function (RDF) for each state in Supplementary Fig. 3 to demonstrate the average configurations, which is consistent with the above description. We also analyze the total charge carried by the Li-ions in each state considering their solvation environments in Fig. 4c (see Supplementary Note 3 and Supplementary Table 1 for details). Interestingly, both state 0 and state 1 carry almost zero total charge in their first solvation shell due to the one TFSI anion in their solvation environments.
A four-state dynamical model learned for lithium ion in a PEO/LiTFSI polymer electrolyte. a Structure of the PEO/LiTFSI polymer electrolyte. b Representative configurations of the four Li-ion states learned by the dynamical model. c Charge integral of each state around a Li-ion as a function of radius. d Relaxation timescales computed from the Koopman models as a function of the lag time. The black lines are reference lines where the relaxation timescale equals to the lag time. e Eigenvectors projected to each state for the three relaxations of Koopman models at τ = 0.8 ns. f CK test comparing the long-time dynamics predicted by Koopman models at τ = 0.8 ns (blue) and actual dynamics (red). The shaded areas and error bars in d, f report the 95% confidence interval from four independent trajectories in the test data
We further study the transitions between the four Li-ion states. Three relaxation processes are identified in the dynamical model, as shown in Fig. 4d, e. By analyzing the eigenvectors, we learn that the slowest relaxation is a process involving the transport of a Li-ion into and out of a PEO-coordinated environment. The second slowest relaxation happens mainly between state 0 and state 1, corresponding to a movement of the hydroxyl end group. The transitions from state 0 to states 2 and 3 constitute the last relaxation process, as state 0 can be thought of as an intermediate state between state 2 and state 3. The model performs well in CK tests (Fig. 4f). Relaxation processes in the PEO/LiTFSI systems have been extensively studied experimentally34,35, but it is difficult to pinpoint the exact atomic scale dynamics related to these relaxations. The dynamical model learned by GDyNet provides additional insights into the understanding of Li-ion transport in polymer electrolytes.
Implications to lithium ion conduction
The state configurations and dynamical model allow us to further quantify the transitions that are responsible for the Li-ion conduction. In Fig. 5, we compute the contribution from each state transition to the Li-ion conduction using the Koopman model at τ = 0.8 ns. First, we learn that the majority of conduction results from transitions within the same states (i → i). This is because the transport of Li-ions in PEO is strongly coupled with segmental motion of the polymer chains8,36, in contrast to the hopping mechanism in inorganic solid electrolytes37. In addition, due to the low charge carried by state 0 and state 1, the majority of charge conduction results from the diffusion of states 2 and 3, despite their relatively low populations. Interestingly, the diffusion of state 2, a negatively charged species, accounts for ~40% of the Li-ion conduction. This provides an atomic scale explanation to the recently observed negative transference number at high salt concentration PEO/LiTFSI systems38.
Contribution from each transition to lithium ion conduction. Each bar denotes the percentage that the transition from state i to state j contributes to the overall lithium ion conduction. The error bars report the 95% confidence interval from four independent trajectories in test data
We have developed a general approach, GDyNets, to understand the atomic scale dynamics in material systems. Despite being widely used in biophysics31, fluid dynamics39, and kinetic modeling of chemical reactions40,41,42, Koopman models, (or Markov state models31, master equation methods43,44) have not been used in learning atomic scale dynamics in materials from MD simulations except for a few examples in understanding solvent dynamics45,46,47. Our approach also differs from several other unsupervised learning methods48,49,50 by directly learning a linear Koopman model from MD data. Many crucial processes that affect the performance of materials involve the local dynamics of atoms or small molecules, like the dynamics of lithium ions in battery electrolytes51,52, the transport of water and salt ions in water desalination membranes53,54, the adsorption of gas molecules in metal organic frameworks55,56, among many other examples. With the improvement of computational power and continued increase in the use of molecular dynamics to study materials, this work could have broad applicability as a general framework for understanding the atomic scale dynamics from MD trajectory data.
Compared with the Koopman models previously used in biophysics and fluid dynamics, the introduction of graph convolutional neural networks enables parameter sharing between the atoms and an encoding of local environments that is invariant to permutation, rotation, and reflection. This symmetry facilitates the identification of similar local environments throughout the materials, which allows the learning of local dynamics instead of exponentially more complicated global dynamics. In addition, it is easy to extend this method to learn global dynamics with a global pooling function18. However, a hierarchical pooling function is potentially needed to directly learn the global dynamics of large biological systems including thousands of atoms. It is also possible to represent the local environments using other symmetry functions like smooth overlap of atomic positions (SOAP)57, social permutation invariant (SPRINT) coordinates58, etc. By adding a few layers of neural networks, a similar architecture can be designed to learn the local dynamics of atoms. However, these built-in invariances may also cause the Koopman model to ignore dynamics between symmetrically equivalent structures which might be important to the material performance. One simple example is the flip of an ammonia molecule—the two states are mirror symmetric to each other so the GCN will not be able to differentiate them by design. This can potentially be resolved by partially breaking the symmetry of GCN based on the symmetry of the material systems.
The graph dynamical networks can be further improved by incorporating ideas from both the fields of Koopman models and graph neural networks. For instance, the auto-encoder architecture12,59,60 and deep generative models61 start to enable the direct generation of future structures in the configuration space. Our method currently lacks a generative component, but this can potentially be achieved with a proper graph decoder62,63. Furthermore, transfer learning on graph embeddings may reduce the number of MD trajectories needed for learning the dynamics64,65.
In summary, graph dynamical networks present a general approach for understanding the atomic scale dynamics in materials. With a toy system of lithium ion transporting in a face-centered cubic lattice, we demonstrate that learning local dynamics of atoms can be exponentially easier than global dynamics in material systems with representative local structures. The dynamics learned from two more complicated systems, solid–liquid interfaces and solid polymer electrolytes, indicate the potential of applying the method to a wide range of material systems and understanding atomic dynamics that are crucial to their performances.
Construction of the graphs from trajectory
A separate graph is constructed using the configuration in each time step. Each atom in the simulation box is represented by a node i whose embedding vi is initialized randomly according to the element type. The edges are determined by connecting M nearest neighbors whose embedding u(i,j) is calculated by,
$${\boldsymbol{u}}_{(i,j)}[t] = \exp ( - (d_{(i,j)} - \mu _t)^2/\sigma ^2),$$
where μt = t · 0.2 Å for t = 0, 1, …, K, σ = 0.2 Å, and d(i,j) denotes the distance between i and j considering the periodic boundary conditions. The number of nearest neighbors M is 12, 20, and 20 for the toy system, Si–Au binary system, and PEO/LiTFSI system, respectively.
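A NumPy sketch of this construction, assuming an orthorhombic box so that the minimum-image convention can be applied per axis; the function and argument names are illustrative, not taken from the released code.

```python
import numpy as np

def build_graph(positions, box, m_neighbors=12, k_gauss=20, step=0.2, sigma=0.2):
    """Connect every atom to its M nearest neighbors under the minimum-image
    convention (orthorhombic box assumed) and expand each neighbor distance in
    the Gaussian basis u_(i,j)[t] = exp(-(d_(i,j) - t*step)^2 / sigma^2).

    positions: (N, 3) Cartesian coordinates; box: (3,) box edge lengths.
    Returns neighbor indices (N, M) and edge features (N, M, k_gauss + 1).
    """
    delta = positions[:, None, :] - positions[None, :, :]
    delta -= box * np.round(delta / box)            # minimum-image displacement
    dist = np.linalg.norm(delta, axis=-1)
    np.fill_diagonal(dist, np.inf)                  # exclude self-connections
    nbrs = np.argsort(dist, axis=1)[:, :m_neighbors]
    d = np.take_along_axis(dist, nbrs, axis=1)      # (N, M) neighbor distances
    centers = step * np.arange(k_gauss + 1)         # mu_t = t * 0.2 Angstrom
    edge_feat = np.exp(-((d[..., None] - centers) ** 2) / sigma ** 2)
    return nbrs, edge_feat
```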
Graph convolutional neural network architecture details
The convolution function we employed in this work is similar to those in refs. 18,22 but features an attention layer66. For each node i, we first concatenate neighbor vectors from the last iteration \({\boldsymbol{z}}_{(i,j)}^{(t - 1)} = {\boldsymbol{v}}_i^{(t - 1)} \oplus {\boldsymbol{v}}_j^{(t - 1)} \oplus {\boldsymbol{u}}_{(i,j)}\), then we compute the attention coefficient of each neighbor,
$$\alpha _{ij} = \frac{{\exp ({\boldsymbol{z}}_{(i,j)}^{(t - 1)}{\boldsymbol{W}}_{\mathrm{a}}^{(t - 1)} + b_{\mathrm{a}}^{(t - 1)})}}{{\mathop {\sum}\limits_j {\exp } ({\boldsymbol{z}}_{(i,j)}^{(t - 1)}{\boldsymbol{W}}_{\mathrm{a}}^{(t - 1)} + b_{\mathrm{a}}^{(t - 1)})}},$$
where \({\boldsymbol{W}}_{\mathrm{a}}^{(t - 1)}\) and \(b_{\mathrm{a}}^{(t - 1)}\) denote the weights and biases of the attention layers, and the output αij is a scalar number between 0 and 1.
$${\boldsymbol{v}}_i^{(t)} = {\boldsymbol{v}}_i^{(t - 1)} + \mathop {\sum}\limits_j {\alpha _{ij}} \cdot g({\boldsymbol{z}}_{(i,j)}^{(t - 1)}{\boldsymbol{W}}_{\mathrm{n}}^{(t - 1)} + {\boldsymbol{b}}_{\mathrm{n}}^{(t - 1)}),$$
where g denotes a non-linear ReLU activation function, and \({\boldsymbol{W}}_{\mathrm{n}}^{(t - 1)}\) and \({\boldsymbol{b}}_{\mathrm{n}}^{(t - 1)}\) denote the weights and biases in the network.
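A NumPy sketch of one such attention-gated convolution step; the weight shapes are illustrative assumptions (the actual model is implemented in TensorFlow).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def conv_step(v, nbrs, u, w_a, b_a, w_n, b_n):
    """One attention-gated graph convolution (the alpha_ij and v_i updates above).
    v: (N, d) node embeddings; nbrs: (N, M) neighbor indices; u: (N, M, d_e)
    edge features. Assumed weight shapes: w_a (2d + d_e, 1), w_n (2d + d_e, d)."""
    m = nbrs.shape[1]
    z = np.concatenate(                                     # z_(i,j) = v_i (+) v_j (+) u_(i,j)
        [np.repeat(v[:, None, :], m, axis=1), v[nbrs], u], axis=-1)
    alpha = softmax((z @ w_a + b_a).squeeze(-1), axis=1)    # attention over neighbors
    msg = np.maximum(z @ w_n + b_n, 0.0)                    # g = ReLU
    return v + np.einsum("nm,nmd->nd", alpha, msg)          # residual update of v_i
```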
The pooling function computes the average of the embeddings of each atom for the set of target atoms,
$${\boldsymbol{v}}_{[l]} = \frac{1}{k}\mathop {\sum}\limits_m {{\boldsymbol{v}}_{[l,m]}} .$$
Determination of the relaxation timescales
The relaxation timescales represent the characteristic timescales implied by the transition matrix K(τ), where τ denotes the lag time of the transition matrix. By conducting an eigenvalue decomposition for K(τ), we could compute the relaxation timescales as a function of lag time by,
$$t_i(\tau ) = - \frac{\tau }{{\ln |\lambda _i(\tau )|}},$$
where λi(τ) denotes the ith eigenvalue of the transition matrix K. Note that the largest eigenvalue is always 1, corresponding to an infinite relaxation timescale and the equilibrium distribution. The finite ti(τ) are plotted in Figs. 2b, 3d, and 4d for each material system as a function of τ by performing this computation using the corresponding K(τ). If the dynamics of the system is Markovian, i.e. Eq. (6) holds, one can prove that the relaxation timescales ti(τ) will be constant for any τ13,24. Therefore, we select the smallest τ* from Figs. 2b, 3d, and 4d to obtain a dynamical model that is Markovian and contains the most dynamical details. We then compute the relaxation timescales using this τ* for each material system, and these timescales remain constant for any τ > τ*.
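A short sketch of this computation from an estimated K(τ); τ and the returned timescales share whatever time unit the caller uses.

```python
import numpy as np

def implied_timescales(k, tau):
    """Implied relaxation timescales t_i = -tau / ln|lambda_i| from the
    eigenvalues of K(tau); the stationary eigenvalue (~1) is dropped."""
    lam = np.sort(np.abs(np.linalg.eigvals(k)))[::-1]
    return -tau / np.log(lam[1:])
```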
State-weighted radial distribution function
The RDF describes how particle density varies as a function of distance from a reference particle. The RDF is usually determined by counting the neighbor atoms at different distances over MD trajectories. We calculate the RDF of each state by weighting the counting process according to the probability of the reference particle being in state i,
$$g_i(r_{\mathrm{A}}) = \frac{1}{{\rho _i}}\frac{{{\mathrm{d}}[n(r_{\mathrm{A}})\cdot p_i]}}{{4\pi r_{\mathrm{A}}^2{\mathrm{d}}r_{\mathrm{A}}}},$$
where rA denotes the distance between atom A and the reference particle, pi denotes the probability of the reference particle being in state i, and ρi denotes the average density of state i.
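A sketch of this weighted histogram; the normalization by the neighbor-species density rho and by the effective number of state-i references is an assumption about conventions, not taken from the paper's code.

```python
import numpy as np

def state_weighted_rdf(dists, p_state, rho, r_max=8.0, n_bins=160):
    """State-weighted RDF g_i(r): neighbor distances are histogrammed with each
    count weighted by p_i, the probability that the reference atom is in state i,
    then normalized by the spherical-shell volume, the density rho of the
    neighbor species, and the effective number of state-i references.

    dists: (n_references, n_neighbors) distances; p_state: (n_references,) weights.
    """
    edges = np.linspace(0.0, r_max, n_bins + 1)
    weights = np.broadcast_to(p_state[:, None], dists.shape).ravel()
    counts, _ = np.histogram(dists.ravel(), bins=edges, weights=weights)
    shells = 4.0 * np.pi / 3.0 * (edges[1:] ** 3 - edges[:-1] ** 3)
    n_eff = max(p_state.sum(), 1e-12)     # effective number of state-i references
    g = counts / (shells * rho * n_eff)
    return 0.5 * (edges[:-1] + edges[1:]), g
```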
Analysis of Li-ion conduction
We first compute the expected mean-squared-displacement of each transition at different t using the Bayesian rule,
$${\Bbb E}[d^2(t)|i \to j] = \frac{{\mathop {\sum}\limits_{t{\prime}} {d^2} (t\prime ,t\prime + t)p_i(t\prime )p_j(t\prime + t)}}{{\mathop {\sum}\limits_{t{\prime}} {p_i} (t\prime )p_j(t\prime + t)}},$$
where pi (t) is the probability of state i at time t, and d2(t′, t′ + t) is the mean-squared-displacement between t′ and t′ + t. Then, the diffusion coefficient of each transition Di→j(τ) at the lag time τ can be calculated by,
$$D_{ij}(\tau ) = \frac{1}{6}\left. {\frac{{{\mathrm{d}}{\Bbb E}[d^2(t)|i \to j]}}{{{\mathrm{d}}t}}} \right|_{t = \tau },$$
which is shown in Supplementary Table 2.
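A per-ion sketch of these two quantities, assuming unwrapped positions and approximating the time derivative by a forward finite difference; averaging over ions and more careful derivative estimates are omitted.

```python
import numpy as np

def transition_msd(pos, p, i, j, lag):
    """E[d^2(t) | i -> j] at integer frame lag `lag` for one tagged ion: squared
    displacements between frames t' and t'+lag, weighted by p_i(t') * p_j(t'+lag).
    pos: (n_frames, 3) unwrapped positions; p: (n_frames, n_states) probabilities."""
    d2 = np.sum((pos[lag:] - pos[:-lag]) ** 2, axis=1)
    w = p[:-lag, i] * p[lag:, j]
    return np.sum(w * d2) / np.sum(w)

def transition_diffusion(pos, p, i, j, lag, dt):
    """D_ij(tau) = (1/6) dE[d^2 | i -> j]/dt at t = lag*dt, approximated here by
    a forward finite difference between neighboring lags."""
    m1 = transition_msd(pos, p, i, j, lag)
    m2 = transition_msd(pos, p, i, j, lag + 1)
    return (m2 - m1) / (6.0 * dt)
```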
Finally, we compute the contribution of each transition to Li-ion conduction with Koopman matrix K(τ) using the cluster Nernst-Einstein equation67,
$$\sigma _{ij} = \frac{{e^2N_{{\mathrm{Li}}}}}{{Vk_{\mathrm{B}}T}}\pi _iz_{ij}K_{ij}(\tau )D_{ij}(\tau ),$$
where e is the elementary charge, kB is the Boltzmann constant, V, T are the volume and temperature of the system, NLi is the number of Li-ions, πi is the stationary distribution population of state i, and zij is the averaged charge of state i and state j. The percentage contribution is computed by,
$$\frac{{\sigma _{ij}}}{{\mathop {\sum}\limits_{i,j} {\sigma _{ij}} }}.$$
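A sketch of this bookkeeping, assuming SI units and that K(τ), D_ij, z_ij, and π_i have already been computed.

```python
import numpy as np

def conduction_contributions(k_tau, d_ij, pi, z_ij, n_li, volume, temperature):
    """Percentage contribution of each i -> j transition to the Li-ion conductivity
    via the cluster Nernst-Einstein form
    sigma_ij = e^2 N_Li / (V k_B T) * pi_i * z_ij * K_ij * D_ij.
    k_tau, d_ij, z_ij: (n_states, n_states) arrays; pi: (n_states,) stationary weights.
    SI units are assumed (volume in m^3, D in m^2/s)."""
    e = 1.602176634e-19    # elementary charge, C
    k_b = 1.380649e-23     # Boltzmann constant, J/K
    sigma = (e ** 2 * n_li / (volume * k_b * temperature)) \
        * pi[:, None] * z_ij * k_tau * d_ij
    return 100.0 * sigma / sigma.sum()
```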
Lithium diffusion in the FCC lattice toy system
The molecular dynamics simulations are performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)68, as implemented in the MedeA®69 simulation environment. A purely repulsive interatomic potential in the form of a Born–Mayer term was used to describe the interactions between Li particles and the S sublattice, while all other interactions (Li–Li and S–S) are ignored. The cubic unit cell includes one Li atom and four S atoms, with a lattice parameter of 6.5 Å, a large value allowing for a low energy barrier. 200 ns MD simulations are run in the canonical ensemble (nVT) at a temperature of 64 K, using a timestep of 1 fs, with the S particles frozen. The atomic positions, which constituted the only data provided to the GDyNet and VAMPnet models, are sampled every 0.1 ps. In addition, the energy following the Tet-Oct-Tet migration path was obtained from static simulations by inserting Li particles on a grid.
Silicon dynamics at solid–liquid interface
The molecular dynamics simulation for the Si–Au binary system was carried out in LAMMPS68, using the modified embedded-atom method interatomic potential27,28. A sandwich like initial configuration was created, where Si–Au liquid alloy was placed in the middle, contacting with two {110} orientated crystalline Si thin films. 25 ns MD simulations are run in the canonical ensemble (nVT) at the eutectic point (629 K, 23.4% Si27), using a time step of 1 fs. The atomic positions, which constituted the only data provided to the GDyNet model, are sampled every 20 ps.
Scaling of the algorithm
The scaling of the GDyNet algorithm is \({\cal{O}}(NMK)\), where N is the number of atoms in the simulation box, M is the number of neighbors used in graph construction, and K is the depth of the neural network.
The MD simulation trajectories of the toy system, the Si–Au binary system, and the PEO/LiTFSI system are available at https://archive.materialscloud.org/2019.0017.
Code availability
GDyNets is implemented using TensorFlow70 and the code for the VAMP loss function is modified on top of ref. 13. The code is available from https://github.com/txie-93/gdynet.
Etacheri, V., Marom, R., Elazari, R., Salitra, G. & Aurbach, D. Challenges in the development of advanced li-ion batteries: a review. Energy Environ. Sci. 4, 3243–3262 (2011).
Imbrogno, J. & Belfort, G. Membrane desalination: where are we, and what can we learn from fundamentals? Annu. Rev. Chem. Biomol. Eng. 7, 29–64 (2016).
Peighambardoust, S. J., Rowshanzamir, S. & Amjadi, M. Review of the proton exchange membranes for fuel cell applications. Int. J. Hydrog. energy 35, 9349–9384 (2010).
Zheng, A., Li, S., Liu, S.-B. & Deng, F. Acidic properties and structure–activity correlations of solid acid catalysts revealed by solid-state nmr spectroscopy. Acc. Chem. Res. 49, 655–663 (2016).
Yu, C. et al. Unravelling li-ion transport from picoseconds to seconds: bulk versus interfaces in an argyrodite li6ps5cl–li2s all-solid-state li-ion battery. J. Am. Chem. Soc. 138, 11192–11201 (2016).
Perakis, F. et al. Vibrational spectroscopy and dynamics of water. Chem. Rev. 116, 7590–7607 (2016).
Wang, Y. et al. Design principles for solid-state lithium superionic conductors. Nat. Mater. 14, 1026 (2015).
Borodin, O. & Smith, G. D. Mechanism of ion transport in amorphous poly (ethylene oxide)/litfsi from molecular dynamics simulations. Macromolecules 39, 1620–1629 (2006).
Miller, T. F. III, Wang, Z.-G., Coates, G. W. & Balsara, N. P. Designing polymer electrolytes for safe and high capacity rechargeable lithium batteries. Acc. Chem. Res. 50, 590–593 (2017).
Getman, R. B., Bae, Y.-S., Wilmer, C. E. & Snurr, R. Q. Review and analysis of molecular simulations of methane, hydrogen, and acetylene storage in metal–organic frameworks. Chem. Rev. 112, 703–723 (2011).
Li, Q., Dietrich, F., Bollt, E. M. & Kevrekidis, I. G. Extended dynamic mode decomposition with dictionary learning: a data-driven adaptive spectral decomposition of the koopman operator. Chaos: Interdiscip. J. Nonlinear Sci. 27, 103111 (2017).
Lusch, B., Kutz, J. N. & Brunton, S. L. Deep learning for universal linear embeddings of nonlinear dynamics. Nat. Commun. 9, 4950 (2018).
Mardt, A., Pasquali, L., Wu, H. & Noé, F. Vampnets for deep learning of molecular kinetics. Nat. Commun. 9, 5 (2018).
Duvenaud, D. K. et al. Convolutional networks on graphs for learning molecular fingerprints. In Advances in neural information processing systems 2224–2232 (2015).
Kearnes, S., McCloskey, K., Berndl, M., Pande, V. & Riley, P. Molecular graph convolutions: moving beyond fingerprints. J. Comput.-aided Mol. Des. 30, 595–608 (2016).
Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O. & Dahl, G. E. Neural message passing for quantum chemistry, arXiv preprint arXiv:1704.01212 (2017).
Schütt, K. T., Arbabzadah, F., Chmiela, S., Müller, K. R. & Tkatchenko, A. Quantum-chemical insights from deep tensor neural networks. Nat. Commun. 8, 13890 (2017).
Xie, T. & Grossman, J. C. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Phys. Rev. Lett. 120, 145301 (2018).
Schütt, K. T., Sauceda, H. E., Kindermans, P.-J., Tkatchenko, A. & Müller, K.-R. Schnet—a deep learning architecture for molecules and materials. J. Chem. Phys. 148, 241722 (2018).
Zhang, L., Han, J., Wang, H., Car, R. & Weinan, E. Deep potential molecular dynamics: a scalable model with the accuracy of quantum mechanics. Phys. Rev. Lett. 120, 143001 (2018).
Zhou, Q. et al. Learning atoms for materials discovery. Proc. Natl Acad. Sci. USA 115, E6411–E6417 (2018).
Xie, T. & Grossman, J. C. Hierarchical visualization of materials space with graph convolutional neural networks. J. Chem. Phys. 149, 174111 (2018).
Wu, H. & Noé, F. Variational approach for learning markov processes from time series data, arXiv preprint arXiv:1707.04659 (2017).
Koopman, B. O. Hamiltonian systems and transformation in hilbert space. Proc. Natl Acad. Sci. USA 17, 315–318 (1931).
Sastry, S. & Angell, C. A. Liquid–liquid phase transition in supercooled silicon. Nat. Mater. 2, 739 (2003).
Angell, C. A. Insights into phases of liquid water from study of its unusual glass-forming properties. Science 319, 582–587 (2008).
Ryu, S. & Cai, W. A gold–silicon potential fitted to the binary phase diagram. J. Phys.: Condens. Matter 22, 055401 (2010).
Wang, Y., Santana, A. & Cai, W. Atomistic mechanisms of orientation and temperature dependence in gold-catalyzed silicon growth. J. Appl. Phys. 122, 085106 (2017).
Pande, V. S., Beauchamp, K. & Bowman, G. R. Everything you wanted to know about markov state models but were afraid to ask. Methods 52, 99–105 (2010).
Chodera, J. D. & Noé, F. Markov state models of biomolecular conformational dynamics. Curr. Opin. Struct. Biol. 25, 135–144 (2014).
Husic, B. E. & Pande, V. S. Markov state models: From an art to a science. J. Am. Chem. Soc. 140, 2386–2396 (2018).
Meyer, W. H. Polymer electrolytes for lithium-ion batteries. Adv. Mater. 10, 439–448 (1998).
Hallinan, D. T. Jr. & Balsara, N. P. Polymer electrolytes. Annu. Rev. Mater. Res. 43, 503–525 (2013).
Mao, G., Perea, R. F., Howells, W. S., Price, D. L. & Saboungi, M.-L. Relaxation in polymer electrolytes on the nanosecond timescale. Nature 405, 163 (2000).
Do, C. et al. Li+ transport in poly (ethylene oxide) based electrolytes: neutron scattering, dielectric spectroscopy, and molecular dynamics simulations. Phys. Rev. Lett. 111, 018301 (2013).
Diddens, D., Heuer, A. & Borodin, O. Understanding the lithium transport within a rouse-based model for a peo/litfsi polymer electrolyte. Macromolecules 43, 2028–2036 (2010).
Bachman, J. C. et al. Inorganic solid-state electrolytes for lithium batteries: mechanisms and properties governing ion conduction. Chem. Rev. 116, 140–162 (2015).
Pesko, D. M. et al. Negative transference numbers in poly (ethylene oxide)-based electrolytes. J. Electrochem. Soc. 164, E3569–E3575 (2017).
Mezić, I. Analysis of fluid flows via spectral properties of the koopman operator. Annu. Rev. Fluid Mech. 45, 357–378 (2013).
Georgiev, G. S., Georgieva, V. T. & Plieth, W. Markov chain model of electrochemical alloy deposition. Electrochim. acta 51, 870–876 (2005).
Valor, A., Caleyo, F., Alfonso, L., Velázquez, J. C. & Hallen, J. M. Markov chain models for the stochastic modeling of pitting corrosion. Math. Prob. Eng. 2013 (2013).
Miller, J. A. & Klippenstein, S. J. Master equation methods in gas phase chemical kinetics. J. Phys. Chem. A 110, 10528–10544 (2006).
Buchete, N.-V. & Hummer, G. Coarse master equations for peptide folding dynamics. J. Phys. Chem. B 112, 6057–6069 (2008).
Sriraman, S., Kevrekidis, I. G. & Hummer, G. Coarse master equation from bayesian analysis of replica molecular dynamics simulations. J. Phys. Chem. B 109, 6479–6484 (2005).
Gu, C. et al. Building markov state models with solvent dynamics. In BMC bioinformatics, Vol. 14, S8 (BioMed Central, 2013). https://doi.org/10.1186/1471-2105-14-S2-S8
Hamm, P. Markov state model of the two-state behaviour of water. J. Chem. Phys. 145, 134501 (2016).
Schulz, R. et al. Collective hydrogen-bond rearrangement dynamics in liquid water. J. Chem. Phys. 149, 244504 (2018).
Cubuk, E. D., Schoenholz, S. S., Kaxiras, E. & Liu, A. J. Structural properties of defects in glassy liquids. J. Phys. Chem. B 120, 6139–6146 (2016).
Nussinov, Z. et al. Inference of hidden structures in complex physical systems by multi-scale clustering. In Information Science for Materials Discovery and Design, 115–138 (Springer International Publishing, Springer, 2016). https://doi.org/10.1007/978-3-319-23871-5_6
Kahle, L., Musaelian, A., Marzari, N. & Kozinsky, B. Unsupervised landmark analysis for jump detection in molecular dynamics simulations, Phys. Rev. Materials 3, 055404 (2019).
Funke, K. Jump relaxation in solid electrolytes. Prog. Solid State Chem. 22, 111–195 (1993).
Xu, K. Nonaqueous liquid electrolytes for lithium-based rechargeable batteries. Chem. Rev. 104, 4303–4418 (2004).
Corry, B. Designing carbon nanotube membranes for efficient water desalination. J. Phys. Chem. B 112, 1427–1434 (2008).
Cohen-Tanugi, D. & Grossman, J. C. Water desalination across nanoporous graphene. Nano Lett. 12, 3602–3608 (2012).
Rowsell, J. L. C., Spencer, E. C., Eckert, J., Howard, J. A. K. & Yaghi, O. M. Gas adsorption sites in a large-pore metal-organic framework. Science 309, 1350–1354 (2005).
Li, J.-R., Kuppler, R. J. & Zhou, H.-C. Selective gas adsorption and separation in metal–organic frameworks. Chem. Soc. Rev. 38, 1477–1504 (2009).
Bartók, A. P., Kondor, R. & Csányi, G. On representing chemical environments. Phys. Rev. B 87, 184115 (2013).
Pietrucci, F. & Andreoni, W. Graph theory meets ab initio molecular dynamics: atomic structures and transformations at the nanoscale. Phys. Rev. Lett. 107, 085504 (2011).
Wehmeyer, C. & Noé, F. Time-lagged autoencoders: deep learning of slow collective variables for molecular kinetics. J. Chem. Phys. 148, 241703 (2018).
Ribeiro, J. M. L., Bravo, P., Wang, Y. & Tiwary, P. Reweighted autoencoded variational bayes for enhanced sampling (rave). J. Chem. Phys. 149, 072301 (2018).
Wu, H., Mardt, A., Pasquali, L. & Noe, F. Deep generative markov state models. In Proceedings of the 32Nd International Conference on Neural Information Processing Systems, 3979–3988 (Curran Associates Inc., USA 2018). http://dl.acm.org/citation.cfm?id=3327144.3327312
Jin, W., Barzilay, R. & Jaakkola, T. Junction tree variational autoencoder for molecular graph generation, arXiv preprint arXiv:1802.04364 (2018).
Simonovsky, M. & Komodakis, N. Graphvae: Towards generation of small graphs using variational autoencoders, arXiv preprint arXiv:1802.03480 (2018).
Sultan, M. M. & Pande, V. S. Transfer learning from markov models leads to efficient sampling of related systems. J. Phys. Chem. B (2017). https://doi.org/10.1021/acs.jpcb.7b06896
Altae-Tran, H., Ramsundar, B., Pappu, A. S. & Pande, V. Low data drug discovery with one-shot learning. ACS Cent. Sci. 3, 283–293 (2017).
Velickovic, P. et al. Graph attention networks, arXiv preprint arXiv:1710.10903 1 (2017).
France-Lanord, A. & Grossman, J. C. Correlations from ion-pairing and the nernst-einstein equation, Phys. Rev. Lett. 122, 136001 (2019).
Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 117, 1–19 (1995).
MedeA-2.22. Materials Design, Inc, San Diego, (2018).
Abadi, M. et al. Tensorflow: A system for large-scale machine learning. In 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 16), 265–283 (Savannah, GA, USA 2016). http://dl.acm.org/citation.cfm?id=3026877.3026899
This work was supported by Toyota Research Institute. Computational support was provided by Google Cloud, the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and the Extreme Science and Engineering Discovery Environment, supported by National Science Foundation grant number ACI-1053575.
Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA: Tian Xie, Arthur France-Lanord, Yanming Wang & Jeffrey C. Grossman
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA: Yang Shao-Horn
T.X. developed the software and performed the analysis. A.F.-L. and Y.W. performed the molecular dynamics simulations. T.X., A.F.-L., Y.W., Y.S.H., and J.C.G. contributed to the interpretation of the results. T.X. and J.C.G. conceived the idea and approach presented in this work. All authors contributed to the writing of the paper.
Correspondence to Jeffrey C. Grossman.
The authors declare no competing interests.
Peer review information: Nature Communications thanks Stefan Chmiela and other anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Xie, T., France-Lanord, A., Wang, Y. et al. Graph dynamical networks for unsupervised learning of atomic scale dynamics in materials. Nat Commun 10, 2667 (2019) doi:10.1038/s41467-019-10663-6
0.999...=1
Dylan Hendrickson, 3 Aug 2016 18:40 UTC
Informal proofs
Formal proof
Arguments against \(0.999\dotsc=1\)
Although some people find it counterintuitive, the decimal expansions \(0.999\dotsc\) and \(1\) represent the same real number.
These "proofs" can help give insight, but be careful; a similar technique can "prove" that \(1+2+4+8+\dotsc=-1\). They work in this case because the series corresponding to \(0.999\dotsc\) is absolutely convergent.
\begin{align} x &= 0.999\dotsc \newline 10x &= 9.999\dotsc \newline 10x-x &= 9.999\dotsc-0.999\dotsc \newline 9x &= 9 \newline x &= 1 \newline \end{align}
\begin{align} \frac 1 9 &= 0.111\dotsc \newline 1 &= \frac 9 9 \newline &= 9 \times \frac 1 9 \newline &= 9 \times 0.111\dotsc \newline &= 0.999\dotsc \end{align}
The real numbers are dense, which means that if \(0.999\dots\neq1\), there must be some number in between. But there's no decimal expansion that could represent a number in between \(0.999\dots\) and \(1\).
This is a more formal version of the first informal proof, using the definition of decimal notation.
\(0.999\dots\) is the decimal expansion where every digit after the decimal point is a \(9\). By definition, it is the value of the series \(\sum_{k=1}^\infty 9 \cdot 10^{-k}\). This value is in turn defined as the limit of the sequence \((\sum_{k=1}^n 9 \cdot 10^{-k})_{n\in\mathbb N}\). Let \(a_n\) denote the \(n\)th term of this sequence. I claim the limit is \(1\). To prove this, we have to show that for any \(\varepsilon>0\), there is some \(N\in\mathbb N\) such that for every \(n>N\), \(|1-a_n|<\varepsilon\).
Let's prove by induction that \(1-a_n=10^{-n}\). Since \(a_0\) is the sum of \(0\) terms, \(a_0=0\), so \(1-a_0=1=10^0\). If \(1-a_i=10^{-i}\), then
\begin{align} 1 - a_{i+1} &= 1 - (a_i + 9 \cdot 10^{-(i+1)}) \newline &= 1 - a_i - 9 \cdot 10^{-(i+1)} \newline &= 10^{-i} - 9 \cdot 10^{-(i+1)} \newline &= 10 \cdot 10^{-(i+1)} - 9 \cdot 10^{-(i+1)} \newline &= 10^{-(i+1)} \end{align}
So \(1-a_n=10^{-n}\) for all \(n\). What remains to be shown is that \(10^{-n}\) eventually gets (and stays) arbitrarily small; this is true by the Archimedean property and because \(10^{-n}\) is monotonically decreasing.
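For readers who prefer a closed form, the same identity \(1-a_n=10^{-n}\) also follows directly from the finite geometric sum (a standard computation, not part of the original argument):

\begin{align} a_n &= \sum_{k=1}^n 9 \cdot 10^{-k} = 9 \cdot \frac{10^{-1}(1 - 10^{-n})}{1 - 10^{-1}} = 1 - 10^{-n} \newline \lim_{n\to\infty} a_n &= 1 - \lim_{n\to\infty} 10^{-n} = 1 \end{align}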
These arguments are used to try to refute the claim that \(0.999\dotsc=1\). They're flawed, since they claim to prove a false conclusion.
\(0.999\dotsc\) and \(1\) have different digits, so they can't be the same. In particular, \(0.999\dotsc\) starts with "\(0.\)", so it must be less than 1.
Decimal expansions and real numbers are different objects. Decimal expansions are a nice way to represent real numbers, but there's no reason different decimal expansions have to represent different real numbers.
If two numbers are the same, their difference must be \(0\). But \(1-0.999\dotsc=0.000\dotsc001\neq0\).
Decimal expansions go on infinitely, but no farther. \(0.000\dotsc001\) doesn't represent a real number because the \(1\) is supposed to come after infinitely many \(0\)s, but each digit has to be a finite distance from the decimal point. If you have to pick a real number for \(0.000\dotsc001\) to represent, it would be \(0\).
\(0.999\dotsc\) is the limit of the sequence \(0.9, 0.99, 0.999, \dotsc\). Since each term in this sequence is less than \(1\), the limit must also be less than \(1\). (Or "the sequence can never reach \(1\).")
The sequence gets arbitrarily close to \(1\), so its limit is \(1\). It doesn't matter that all of the terms are less than \(1\).
In the first proof, when you subtract \(0.999\dotsc\) from \(9.999\dotsc\), you don't get \(9\). There's an extra digit left over; just as \(9.99-0.999=8.991\), \(9.999\dotsc-0.999\dotsc=8.999\dotsc991\).
There are infinitely many \(9\)s in \(0.999\dotsc\), so when you shift it over a digit there are still the same amount. And the "decimal expansion" \(8.999\dotsc991\) doesn't make sense, because it has infinitely many digits and then a \(1\).
Eric Rogstad, 3 Aug 2016 20:29 UTC
If these are included I think it would be good to also include explanations of why each one is wrong. | CommonCrawl |
Is there really any difference between Resonance and Mesomeric effect?
I have been in the classes of two different teachers of organic chemistry, and they both agree that there is a difference between the mesomeric effect and the resonance effect (they didn't agree with each other; I just compared their answers), but each of them gives differences that are unique to him or simply don't match the other's. So I decided to find the difference through my own research. I searched online and in books, but the answers there also vary. Here are some findings of my search:
An answer by @NotEvans (here) mentions that in the IUPAC Gold Book, they are treated as synonymous terms.
An answer by @Raju (here) mentions the difference as "Resonance refers to delocalization of electrons in a given system. The mesomeric effect is the electron-donating or withdrawing nature of a substituent due to resonance." This matches what @KshitizSharma answered on the link where @NotEvans answered.
A page on the website Pediaa describes the difference as: "Resonance is the effect that describes the polarity of a molecule that is induced by the interaction between lone electron pairs and bond electron pairs. The mesomeric effect is the effect of substituents or functional groups on chemical compounds."
There are several other places, but as expected, all their answers differ.
One of my teachers, let's say teacher A (I don't want to disclose the name), says that the resonance effect is a broader umbrella term that includes the mesomeric effect. In resonance, we have delocalization of electrons, and the delocalization can be of several types, such as pi-electron delocalization, sigma-electron delocalization, dancing resonance, etc. (these phenomena are shown in the images below), but in the mesomeric effect, we can only have pi-electron delocalization.
Dancing Resonance
Sigma Bond Resonance
Pi Electron Resonance
Now my other teacher, let's say teacher B, sort of agrees with the definition and difference between resonance and the mesomeric effect given on the Pediaa website, which is mentioned above in point 3.
My opinion (not a conclusion) on all of this is that technically teacher A seems correct to me. But I can't decide, and the definition in the IUPAC book confuses me even more, so I came here to ask for help. Thanks.
Notice: The Pediaa website seems to be down, possibly for maintenance. So please take my word for the quoted text; it has been copied exactly.
Ritanshu
Please edit your post for style, grammar, and punctuation. Remove the all-caps.
– Todd Minehardt ♦
@ToddMinehardt thanks for pointing it out. I have corrected grammar using a Chrome extension because I am not very knowledgeable in grammar. I have made the images searchable and there are minor style improvements.
– Ritanshu
Overthinking it. But the answer by Farooq should do a great job.
Have you read the poem "Five Blind Men and the Elephant", where each person tries to explore an elephant and comes up with their own version? Someone touches the elephant's trunk and calls it a snake, someone touches the legs and calls them tree trunks, and so on. The world of molecules and the sub-atomic world is indeed that elephant, and chemists are unfortunately unable to see it, to see what is happening. Hence each one comes up with an intelligent story, only to be improved or discarded later. So the lesson is: don't take everything literally (including the statements made by high school or college teachers). Moreover, do not rely on web answers, including this one, as ultimate facts.
In the same way the concept of resonance was developed really a long time ago. Keep in mind that none of the structure you draw with resonance reflects a reality. It is a subtle and an esoteric way of human expression that we do not know how it looks like in reality. X-rays allow you see the arrangement of solids in space but it does not let you see electrons, there is no physical arrow pushing going on and there no electron dance going on. It is all human imagination.
Think of the classic benzene example, (image from Wikipedia). None of the structures reflect a reality, hence the term resonance hybrid, because benzene does not chemically behave as if its double bonds were "static". The C-C bond distances in the ring have to be different, otherwise.
There is a very nice article by Robert C. Kerber titled If It's Resonance, What Is Resonating? in the Journal of Chemical Education (It is freely available on Google Scholar). Have you wondered why the name "Resonance"?
History The use of multiple structures to represent compounds with (what we now call) delocalized bonding (4) was pioneered in Germany by Arndt (5) and in Britain by Ingold (6). Their ideas were based essentially on chemical intuition. In the United States, Pauling $(7,8)$ in his pioneering applications of the principles of quantum mechanics to chemistry, came to a similar description from a different starting point. Whereas the concepts of describing a single substance in terms of multiple structures may not have differed significantly, the terminology did. Arndt coined the term $Zwischenstufe$ for the hybrid structure, while Ingold opted for the equivalent term (derived from Greek) mesomer. He proposed for the concept the terms mesomeric effect or mesomerism. The cognate term mesomerie came into use in French and German. Pauling preferred the term resonance, derived from valence bond theory (see below). Pauling and Ingold were well aware of each other's terminology, and each offered reasons for rejecting the alternative proposal: Because the resonating system does not have a structure intermediate between those involved in the resonance, but instead a structure which is further changed by the resonance stabilization, I prefer not to use the word 'mesomerism,' suggested by Ingold, for the resonance phenomenon.
L. Pauling (9) Pauling describes the phenomenon under the name 'resonance,' which, as is well known, is based on the mathematical analogy between mechanical resonance and the behavior of wave functions in quantum mechanical exchange phenomena. There appears, however, to be some possibility that this method of description may suggest an analogy which has never been intended.
C. K. Ingold (6) Pauling's argument conflates the concepts of structure and energy. Moreover, his chosen term resonance is derived from an analogy to coupled oscillators, which also lacks energetic implications: "There is no close classical analogue of resonance energy" (10). So neither term includes explicit reference to energy considerations, weakening Pauling's argument. For use in discussing the structure (i.e., geometry) of the hybrid, Ingold's argument seems the more persuasive. His recognition of the gratuitous oscillatory aspect of the analogy is prophetic of the subsequent experience of generations of students.*
On your next query, about mesomerism and IUPAC: in terms of terminology, most chemists follow the IUPAC recommendations. You can call mesomerism a type of resonance. See the reference "Introductory Organic Chemistry and Hydrocarbons: A Physical Chemistry Approach" by Caio Lima Firme, pp. 111-112.
As Ingold himself defined: "Mesomerism is an extension of valency theory (that is the classical valence bond theory) and, like all valency theory, is founded in the quantum theory (...). The fundamental wave-property in the theory of valency is resonance $-$ the resonance of connected standing waves; their mutual perturbation replaces these waves by new standing waves" (Ingold 1938). Ingold exemplified the resonance theory by using the two resonance structures $\mathrm{R_2N{-}CH{=}\overset{+}{N}R_2}$ and $\mathrm{R_2\overset{+}{N}{=}C(H){-}NR_2}$, with their own standing waves which resonate to form the real molecule. Ingold stated that the term mesomerism was given to account for the special importance of resonance in organic chemistry. He also said: "When mesomerism was first recognized as a general phenomenon in organic chemistry, it was appreciated rather as an electron displacement than as an energy disappearance" (Ingold 1938). From this point on, we can say that mesomerism is equivalent to resonance type 3 (resonance involving only charged and/or neutral covalent structures), although IUPAC recognizes mesomerism as essentially synonymous with resonance (which could include all types of resonance). Nonetheless, IUPAC adds that mesomerism is "particularly associated with the picture of $\pi$-electrons as less localized in an actual molecule than in a Lewis formula" (IUPAC Gold Book). Then, the mesomeric state is the same as the resonance hybrid, and the energy of mesomerism is synonymous with resonance energy (Ingold 1938).
M. Farooq
$\begingroup$ Thank you, seems like it's all the result of conflict between Ingold and Pauling. $\endgroup$
$\begingroup$ Again, just recall the blind men and the elephant (molecule)! Both were highly respected chemists. $\endgroup$
– M. Farooq
$\begingroup$ Yes, I understand $\endgroup$
Search results for: A. Moeller
Search for a heavy pseudoscalar boson decaying to a Z and a Higgs boson at $\sqrt{s}=13$ TeV
A. M. Sirunyan, A. Tumasyan, W. Adam, F. Ambrogi, et al.
A search is presented for a heavy pseudoscalar boson A decaying to a Z boson and a Higgs boson with mass of 125 GeV. In the final state considered, the Higgs boson decays to a bottom quark and antiquark, and the Z boson decays either into a pair of electrons, muons, or neutrinos. The analysis is performed using a data sample corresponding to an integrated luminosity...
Search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at 13 TeV
The CMS collaboration, A. M. Sirunyan, A. Tumasyan, W. Adam, more
Abstract Results are reported of a search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb−1 collected at a center-of-mass energy of 13 TeV using the CMS detector. The results are interpreted in the context of models of gauge-mediated supersymmetry breaking. Production...
Search for the associated production of the Higgs boson and a vector boson in proton-proton collisions at $\sqrt{s}=13$ TeV via Higgs boson decays to τ leptons
Abstract A search for the standard model Higgs boson produced in association with a W or a Z boson and decaying to a pair of τ leptons is performed. A data sample of proton-proton collisions collected at $\sqrt{s}=13$ TeV by the CMS experiment at the CERN LHC is used, corresponding to an integrated luminosity of 35.9 fb−1. The signal strength is measured relative to the expectation...
Search for a low-mass τ−τ+ resonance in association with a bottom quark in proton-proton collisions at $\sqrt{s}=13$ TeV
Abstract A general search is presented for a low-mass τ−τ+ resonance produced in association with a bottom quark. The search is based on proton-proton collision data at a center-of-mass energy of 13 TeV collected by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb−1. The data are consistent with the standard model expectation. Upper limits at 95% confidence level...
Search for supersymmetry in events with a photon, jets, b-jets, and missing transverse momentum in proton–proton collisions at 13 TeV
A search for supersymmetry is presented based on events with at least one photon, jets, and large missing transverse momentum produced in proton–proton collisions at a center-of-mass energy of 13 TeV. The data correspond to an integrated luminosity of 35.9 fb−1 and were recorded at the LHC with the CMS detector in 2016. The analysis characterizes signal-like...
Combined measurements of Higgs boson couplings in proton–proton collisions at $\sqrt{s}=13$ TeV
Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented. The analysis uses the LHC proton–proton collision data set recorded with the CMS detector in 2016 at $\sqrt{s}=13$ TeV, corresponding to an integrated luminosity of 35.9 fb−1. The combination is based...
Measurement of inclusive very forward jet cross sections in proton-lead collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV
Abstract Measurements of differential cross sections for inclusive very forward jet production in proton-lead collisions as a function of jet energy are presented. The data were collected with the CMS experiment at the LHC in the laboratory pseudorapidity range −6.6 < η < −5.2. Asymmetric beam energies of 4 TeV for protons and 1.58 TeV per nucleon for Pb nuclei were used, corresponding to a...
Measurement of the energy density as a function of pseudorapidity in proton–proton collisions at $\sqrt{s}=13$ TeV
A measurement of the energy density in proton–proton collisions at a centre-of-mass energy of $\sqrt{s}=13$ TeV is presented. The data have been recorded with the CMS experiment at the LHC during low luminosity operations in 2015. The energy density is studied as a function of pseudorapidity in the ranges $-6.6<\eta<-5.2$ and $3.15<|\eta...
Measurement of the $\mathrm{t\overline{t}}$ production cross section, the top quark mass, and the strong coupling constant using dilepton events in pp collisions at $\sqrt{s}=13$ TeV
A measurement of the top quark–antiquark pair production cross section $\sigma_{\mathrm{t\overline{t}}}$ in proton–proton collisions at a centre-of-mass energy of 13 TeV is presented. The data correspond to an integrated luminosity of 35.9 fb−1, recorded by the CMS experiment at the CERN LHC in 2016. Dilepton events...
Search for vector-like quarks in events with two oppositely charged leptons and jets in proton–proton collisions at $\sqrt{s}=13$ TeV
A search for the pair production of heavy vector-like partners T and B of the top and bottom quarks has been performed by the CMS experiment at the CERN LHC using proton–proton collisions at $\sqrt{s}=13$ TeV. The data sample was collected in 2016 and corresponds to an integrated luminosity of 35.9 fb−1. Final states...
Measurements of the pp → WZ inclusive and differential production cross sections and constraints on charged anomalous triple gauge couplings at $\sqrt{s}=13$ TeV
Abstract The WZ production cross section is measured in proton-proton collisions at a centre-of-mass energy $\sqrt{s}=13$ TeV using data collected with the CMS detector, corresponding to an integrated luminosity of 35.9 fb−1. The inclusive cross section is measured to be $\sigma_{\mathrm{tot}}(\mathrm{pp}\to\mathrm{WZ})=48.09\,^{+1.00}_{-0.96}\,(\mathrm{stat})\,^{+0.44}_{-0.37}\,(\mathrm{theo})\,^{+2.39}_{-2.17}\,(\mathrm{syst})\pm 1.39\,(\mathrm{lum})$ pb, resulting in...
Search for nonresonant Higgs boson pair production in the $\mathrm{b\overline{b}b\overline{b}}$ final state at $\sqrt{s}=13$ TeV
Abstract Results of a search for nonresonant production of Higgs boson pairs, with each Higgs boson decaying to a $\mathrm{b\overline{b}}$ pair, are presented. This search uses data from proton-proton collisions at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb−1, collected by the CMS detector at the LHC. No signal is observed, and...
Search for contact interactions and large extra dimensions in the dilepton mass spectra from proton-proton collisions at $\sqrt{s}=13$ TeV
Abstract A search for nonresonant excesses in the invariant mass spectra of electron and muon pairs is presented. The analysis is based on data from proton-proton collisions at a center-of-mass energy of 13 TeV recorded by the CMS experiment in 2016, corresponding to a total integrated luminosity of 36 fb−1. No significant deviation from the standard model is observed. Limits are set at 95% confidence...
Measurement of the top quark mass in the all-jets final state at $\sqrt{s}=13$ TeV and combination with the lepton+jets channel
A top quark mass measurement is performed using 35.9 fb−1 of LHC proton–proton collision data collected with the CMS detector at $\sqrt{s}=13$ TeV. The measurement uses the $\mathrm{t\overline{t}}$ all-jets final state. A kinematic fit is performed to reconstruct the decay of the $\mathrm{t\overline{t}}$ system...
Search for resonant production of second-generation sleptons with same-sign dimuon events in proton–proton collisions at $\sqrt{s}=13$ TeV
A search is presented for resonant production of second-generation sleptons ($\widetilde{\mu}_{\mathrm{L}}$, $\widetilde{\nu}_{\mu}$) via the R-parity-violating coupling $\lambda^{\prime}_{211}$ to quarks, in events with two same-sign muons and at least two jets in the final state. The smuon (muon sneutrino) is expected to decay into a muon and a neutralino (chargino),...
Search for resonant $\mathrm{t\overline{t}}$ production in proton-proton collisions at $\sqrt{s}=13$ TeV
Abstract A search for a heavy resonance decaying into a top quark and antiquark ($\mathrm{t\overline{t}}$) pair is performed using proton-proton collisions at $\sqrt{s}=13$ TeV. The search uses the data set collected with the CMS detector in 2016, which corresponds to an integrated luminosity of 35.9 fb−1. The analysis considers three exclusive...
Search for excited leptons in ℓℓγ final states in proton-proton collisions at $\sqrt{s}=13$ TeV
Abstract A search is presented for excited electrons and muons in ℓℓγ final states at the LHC. The search is based on a data sample corresponding to an integrated luminosity of 35.9 fb−1 of proton-proton collisions at a center-of-mass energy of 13 TeV, collected with the CMS detector in 2016. This is the first search for excited leptons at $\sqrt{s}=13$ TeV. The observation is consistent...
Search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks in proton–proton collisions at $\sqrt{s}=13$ TeV
A search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks is performed in proton–proton collisions at a center-of-mass energy of 13 TeV collected with the CMS detector at the LHC. The analyzed data sample corresponds to an integrated luminosity of 35.9 fb−1. The signal is characterized by a large missing transverse...
Measurement of exclusive $\Upsilon$ photoproduction from protons in pPb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV
The exclusive photoproduction of $\Upsilon\mathrm{(nS)}$ meson states from protons, $\gamma\mathrm{p}\rightarrow\Upsilon\mathrm{(nS)}\,\mathrm{p}$ (with $\mathrm{n}=1,2,3$), is studied in ultraperipheral pPb collisions at a centre-of-mass energy per nucleon pair of $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV...
communications earth & environment
Isotopic variability in tropical cyclone precipitation is controlled by Rayleigh distillation and cloud microphysics
Chijun Sun1,2 (ORCID: orcid.org/0000-0002-3668-346X), Lijun Tian3,4 (ORCID: orcid.org/0000-0003-3772-2835), Timothy M. Shanahan1, Judson W. Partin5 (ORCID: orcid.org/0000-0003-0315-5545), Yongli Gao3, Natasha Piatrunia1 & Jay Banner1
Communications Earth & Environment, volume 3, Article number: 50 (2022)
Atmospheric chemistry
Atmospheric dynamics
Palaeoclimate
Tropical cyclones produce rainfall with extremely negative isotope values (δ18O and δ2H), but the controls on isotopic fractionation during tropical cyclones are poorly understood. Here we studied the isotopic composition of rainfall at sites across central Texas during Hurricane Harvey (2017) to better understand these processes. Rainfall δ18O trend towards more negative values as a result of Rayleigh distillation of precipitation-generating airmasses as they travel towards the center of the storm. Superimposed on these gradual changes are abrupt isotopic shifts with exceptionally low deuterium excess values. These appear to be controlled by microphysical processes associated with the passage of spiral rainbands over the sampling locations. Isotope-enabled climate modeling suggests that it may be possible to identify the signature of tropical cyclones from annually resolved isotopic proxy records, but will depend on the size of the storm and the proximity of the site to the core of the storm system.
Tropical cyclones (TCs) are known to produce precipitation with extremely negative stable isotopic delta values (i.e., δ18O and δ2H) compared with other tropical rainfall sources1,2,3,4. The exceptionally depleted isotopic signatures of TCs provide a potentially valuable indicator of past storm activity that may be recorded in high-resolution archives of climate (e.g., speleothems, tree rings, corals, etc.)5,6. However, recent studies of modern rainfall from Central America have demonstrated that anomalously positive stable isotope values in tropical precipitation can occur under a variety of conditions, complicating the interpretation of isotope-based TC reconstructions4,7. And while a number of studies over the past few decades have tried to characterize how rainwater stable isotopes change during TCs, and to identify hurricane events in proxy records, there remain questions about the mechanisms driving these changes and whether they are large enough to be recorded in even annually-resolved proxy records.
For example, previous studies have observed that rainwater isotopic values from TCs are spatiotemporally heterogeneous. A consistent observation in many studies of TCs is that there is a systematic depletion of heavy isotopes in rainfall radially inward towards the center of the storm, with exceptionally negative delta values only occurring in areas close to the eyewall2,3,8,9. What causes this spatial isotopic pattern remains a subject of debate. One process suggested to account for the isotopic change is the effect of higher vertical rainout efficiency associated with higher and thicker clouds near the eyewall2,3. Other studies have attributed these changes to post-condensation isotopic exchange between falling rain and ambient vapor3,10 or re-evaporation of falling precipitation9 at the edge of TCs or the uptake of isotopically enriched moisture from nearby warm surface ocean waters4.
A few studies have examined high-frequency variability in TC rainfall during individual storm events and have observed large swings in the isotopic composition of precipitation on hourly timescales4,11. For example, a study of Tropical Cyclone Ita displayed rapid swings of δ18O in both rainwater and vapor by up to ~10‰ associated with the passage of spiral rainbands11. An organized spiral rainband exhibits a somewhat similar cross-band structure to squall line mesoscale convective systems (MCS), with updrafts on the inner side of the band and downdrafts on the outer side12,13. However, unlike in an MCS, where updrafts (downdrafts) lead to convective (stratiform) precipitation with relatively enriched (depleted) isotopic values4,14,15,16,17, spiral rainbands associated with TCs consist of predominantly stratiform precipitation12. The mechanisms controlling stable isotope variability during the passage of these rainbands have not been investigated.
Here, we examine the evolution of δ18O, δ2H, and d-excess values of precipitation (hereafter δ18Op, δ2Hp, d-excessp) during Hurricane Harvey (2017) using 10 min to sub-daily rainwater collection from Austin (30.3°N, 97.7°W), San Antonio (29.6°N, 98.6°W), and Houston (30.1°N, 95.4°W). Hurricane Harvey was a record-setting TC in US history in terms of both total rainfall and the affected area, with an economic impact second only to Hurricane Katrina (2005)18. Although the eye of Hurricane Harvey did not pass directly over our sampling locations, its spiral rainbands produced 2–3 days of rainfall over all three sites, allowing us to capture both long-term trends and high-frequency changes in precipitation isotope values and to evaluate the causes of these variations.
The evolution of Harvey and weather conditions in southeastern Texas
Hurricane Harvey was spawned from a large convective mass off the west coast of Africa on August 12, 2017, which developed into a tropical depression on August 17. It made landfall on August 25 at 2200 Central Daylight Time (CDT) over southeastern Texas as a Category 4 hurricane (Fig. 1a). Harvey rapidly weakened as it migrated northwestward over land between August 26 and August 27. In the early morning of August 27, synoptic conditions steered Harvey back southeastward, and it moved back offshore in the early morning of August 28. Harvey eventually made its way back onto land in Cameron County, Louisiana, as it migrated northeastward while dissipating18.
Fig. 1: Evolution of Hurricane Harvey and δ18Op.
a Total precipitation between August 25 and September 2, 2017 (shadings). The glowing curve indicates the track of Hurricane Harvey. The storm symbols on the curve indicate the hurricane intensity, with the adjacent time stamps indicating the time of each stage. The storm symbols are explained in the legend; T.D. Tropical Depression; T.S. Tropical Storm. The stars indicate the rainwater sampling sites. b Violin plots showing the distribution of δ18Op values at the three sites. c Timeseries of δ18Op at Austin, San Antonio, and Houston. Time is reported in CDT.
The Austin Camp Mabry weather station recorded a total of 8 inches of precipitation during Hurricane Harvey18. Rainfall started around 0000 CDT August 26th and lasted until 0900 CDT August 28th, with the maximum precipitation rate occurring between 0700 and 0800 CDT August 26th (Supplementary Fig. 1). Ground-level relative humidity increased rapidly from 75 to 100% as rainfall started, which then briefly dropped to 85% after the first major rainband moved away from Austin at 1200 CDT August 26th before it quickly returned to 97%. Relative humidity remained over 90% until the morning of August 28th (Supplementary Fig. 1). The recorded temperature was relatively stable during the storm, ranging from 21 to 24 °C (Supplementary Fig. 1).
The San Antonio International Airport weather station recorded a total of 1.9 inches of precipitation during Hurricane Harvey18. Intermittent small patches of rainfall started in the afternoon of August 25th. The main storm event occurred from 0500 CDT August 26th to 1500 CDT August 27th. During the main storm, the ground-level relative humidity was constantly at around 90%, and temperature ranged from 23 to 24 °C (Supplementary Fig. 2).
The Houston George Bush Intercontinental Airport weather station recorded a total of 31.3 inches of precipitation during Hurricane Harvey18. The historically high rainfall amount in the Houston metro area was partially due to a weak stationary front during August 26–28th, which enhanced surface convergence and uplift, with moisture inflow in the eastern side of Harvey continuously delivering warm humid air from the Gulf of Mexico. The recorded ground-level relative humidity was fluctuating between 90 and 100%, and the surface temperature gradually decreased from 26 °C at the onset to 22 °C near the end (Supplementary Fig. 3).
Isotopic composition of Hurricane Harvey rainwater
Over the period of rainwater collection, the mean hourly δ18Op values in Austin (−9.6 ± 3.35‰, 1σ, n = 50) and San Antonio (−9.86 ± 3.47‰, 1σ, n = 29) are similar, and the mean δ18Op value in Houston is slightly higher (−8.37 ± 1.74‰, 1σ, n = 14) (Fig. 1b, Supplementary Data 1). These values are substantially more negative than the long-term average August δ18Op (−3.4 ± 3.4‰) and mean annual δ18Op (−4.3 ± 1.1‰) of rainfall collected in central Texas16,19,20. The timeseries of rainfall δ18Op at all three sites display a trend towards more negative values as the storm progressed (Fig. 1c). In Austin and San Antonio, δ18Op decreased by over 12‰ between August 26, 0000 CDT and August 28, 0000 CDT, whereas the trend is more subtle in Houston, with a change of only ca. 6‰ between the onset and the end of the storm. At the Austin sampling location, δ18Op values also display a return to relatively enriched values towards the end of the storm, which is not evident at the other sites.
The hourly Austin and San Antonio datasets also display high-frequency shifts in δ18Op that are superimposed on the long-term trends (Fig. 1c). This high-frequency variability is further supported by our parallel 10-min resolution sampling at Austin, which shows changes that are consistent with our hourly data set (Supplementary Fig. 1). There is an apparent lead-lag phase relationship between the records in Austin and San Antonio, with the San Antonio δ18Op lagging the Austin δ18Op by approximately 2 h (Supplementary Fig. 4), suggesting that the high-frequency variability at the two sites is related.
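A minimal sketch of how such a lead-lag relationship can be quantified, assuming the hourly δ18Op series for the two sites are available as time-indexed pandas Series (the file and column names below are hypothetical placeholders, not the actual data files):

```python
import pandas as pd

# Hypothetical hourly d18O series (per mil) indexed by collection time;
# in the actual study these data are given in Supplementary Data 1.
austin = pd.read_csv("austin_hourly_d18O.csv", index_col=0, parse_dates=True)["d18O"]
san_antonio = pd.read_csv("san_antonio_hourly_d18O.csv", index_col=0, parse_dates=True)["d18O"]

def lagged_correlation(leader, follower, max_lag=6):
    """Correlate leader(t) with follower(t + lag) for hourly lags up to max_lag.

    A maximum near lag = +2 would indicate that the follower series
    lags the leader by about two hours.
    """
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        pair = pd.concat([leader, follower.shift(-lag)], axis=1).dropna()
        out[lag] = pair.iloc[:, 0].corr(pair.iloc[:, 1])
    return pd.Series(out, name="pearson_r")

print(lagged_correlation(austin, san_antonio).idxmax())  # expected near +2 h
```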
In Austin, our data show that these high-frequency shifts in δ18Op are accompanied by large negative shifts in d-excess of up to −15‰ during the first half of the storm (August 26, Supplementary Fig. 1). After midnight on August 27, d-excess values remained largely positive for the remainder of the storm. The evolution of d-excessp in our parallel 10 min data agrees with the hourly data, showing large negative shifts during this interval of 10 min-resolution collection (Supplementary Fig. 1). However, the d-excessp data from San Antonio do not show exceptionally negative values (Supplementary Fig. 2), except for the first sample, which was collected during a brief period of precipitation under the influence of the periphery of the storm, where strong evaporation likely occurred. While we do not observe high-frequency changes in d-excessp from San Antonio, there was a period of relatively low d-excessp values between the morning of August 26 and midnight on August 27. The San Antonio d-excessp rose back to near 10‰ on August 27 before it dropped again during the final stage of the storm. At Houston, d-excessp values remained stable near 10‰, but given the low-resolution (several hours) sampling at that site, we cannot evaluate whether high-frequency shifts in isotopic values may have occurred there (Supplementary Fig. 3).
3-day trend in δ18Op controlled by upstream rainout
The long-term trend of Hurricane Harvey rainfall δ18Op in time and space shows a strong dependence on the relative distance to the hurricane eye, with delta values across all three sites decreasing as the eye approached (Figs. 2a, 3a). This observation is consistent with previous TC isotope studies, and has been attributed to several factors including higher cloud tops near the eyewall3, diffusive isotopic exchange between falling droplets and the ambient vapor3, and post-condensation re-evaporation of falling droplets9. While the isotopic ratios were overall more depleted in the proximity of the eyewall at all three sites, the most depleted values observed in Austin and San Antonio occurred several hours after the closest point to the eyewall was reached (Fig. 2a). At San Antonio, δ18Op values did not increase as Harvey migrated away, and at Austin δ18Op only increased in the latest portion of the storm as the precipitation rate fell off dramatically. This asymmetry of the isotope-eyewall distance relationship evident in the Harvey dataset suggests that a more complex mechanism is needed to explain the long-term trends in the isotopic value of precipitation. Furthermore, similar to what has previously been observed in other TCs4, there is no significant correlation between the stable isotope data from Hurricane Harvey and environmental parameters, such as temperature, precipitation amount, wind speed, and relative humidity at ground level (Supplementary Fig. 1), indicating that the long-term spatiotemporal pattern of TC δ18Op was not driven by instantaneous changes in local environmental conditions.
Fig. 2: Spatiotemporal characteristics of δ18Op of Hurricane Harvey.
a Evolution of δ18O in space relative to the center of Hurricane Harvey (open circles color-coded by δ18Op values). The black arrows indicate the directions of relative displacement of the sampling sites, assuming the hurricane eye was stationary. b–e Snapshots of MRMS radar reflectivity at different times. The purple line indicates the track position for Hurricane Harvey (Black and Zelinsky, 2018). The red storm symbols indicate the instant positions of the hurricane eye. The black stars indicate our sites of rainwater collection.
Fig. 3: Spatiotemporal evolution of δ18Op controlled by the upstream rainout amount.
Scatter plots of a δ18Op vs. Distance to the eye, b distance to the eye vs. Upstream rainout, and c δ18O vs. Upstream rainout. Blue, green, and brown dots indicate data collected from Austin, Houston, and San Antonio, respectively. The r and p values reported in the figures include data from all three sites. d Comparison of 5 km-height backward trajectories reaching Austin at the onset of the storm (turquoise, 8/26 0000 CDT), at the time when the location was at the closest point to the eye (dark blue, 8/27 0100 CDT), and at the end of the storm (light blue, 8/28 0900 CDT), showing more swirly travel history of moisture when the eye was closer.
Backward trajectories of airmasses are often used to assess the impact of moisture origin and transport history on the isotopic composition of rainfall. Although previous studies show that changes in moisture source can partially account for seasonal variability of rainwater isotopic composition in central Texas16,19,21, it is unlikely to be the case here on such short timescales. Our analysis shows that during Hurricane Harvey, moisture delivered to our sites was primarily derived from a relatively small area (25–29°N, 90–94°W) in the Gulf of Mexico, with little change throughout the sampling period (Supplementary Fig. 5). Thus, we can simplify our interpretation by assuming that the initial isotopic value of the moisture supplied to Hurricane Harvey remained largely unchanged. Tracking the meteorological conditions along Lagrangian back trajectories of airmasses, we find that δ18Op is most strongly correlated with total precipitation occurring along a 72 h backward trajectory (i.e., upstream rainout) (Fig. 3c). This indicates that the long-term trend of the isotopic value of hurricane rainfall reflects integrated upstream processes associated with Rayleigh distillation of water vapor via rainfall that previously fell rather than local rainfall. There is also a strong correlation between the upstream rainout and distance to the hurricane eye (Fig. 3b), with higher upstream rainout of water vapor for airmasses near the eyewall than airmasses near the edge of the storm. We hypothesize that this relationship could account for the observed spatiotemporal patterns in δ18Op values of hurricane rainfall (Fig. 3a). Modeled back trajectories show that near the eyewall, airmasses had a longer transport history, orbiting around the hurricane eye, during which continuous rainout could have enhanced the isotopic depletion of the residual water vapor (Fig. 3d). In contrast, water vapor near the edge of the storm was transported more directly from the adjacent regions or from the ocean, which had not experienced extensive rainout, and therefore, produced precipitation with less negative δ18Op values (Fig. 3d). The upstream rainfall control could also account for the delayed return of δ18Op to more positive values observed in the Austin and San Antonio records. At these sites, upstream rainout remained elevated even after the closest point relative to the hurricane eye was reached because of the development of a weak stationary front in the eastern sector of the hurricane, which led to intense rainfall upwind of Austin and San Antonio and isotopic depletion over these sites (Supplementary Figs. 1 and 2).
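To illustrate the Rayleigh-distillation mechanism invoked here, a minimal sketch is given below; the initial vapor δ18O and the liquid-vapor fractionation factor are round illustrative assumptions rather than values fitted to the Harvey data:

```python
import numpy as np

# Rayleigh distillation of an air parcel: as the fraction of vapor remaining (f)
# decreases through rainout, both the residual vapor and the rain condensing
# from it become progressively depleted in 18O.
alpha = 1.0094       # assumed equilibrium liquid-vapor fractionation factor for 18O (~25 C)
delta_v0 = -12.0     # assumed initial vapor d18O of Gulf of Mexico moisture (per mil)

f = np.linspace(1.0, 0.2, 9)                                   # fraction of vapor remaining
delta_vapor = (delta_v0 + 1000.0) * f ** (alpha - 1.0) - 1000.0
delta_rain = alpha * (delta_vapor + 1000.0) - 1000.0           # rain in equilibrium with vapor

for fi, dr in zip(f, delta_rain):
    print(f"vapor remaining {fi:4.2f} -> rain d18O {dr:6.2f} per mil")
```

With these assumed numbers, reducing the remaining vapor fraction from 1.0 to 0.2 depletes the equilibrium rain by roughly 15‰, which is comparable in magnitude to the event-long trends observed at Austin and San Antonio.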
In order to determine whether this upstream rainout effect is a fundamental process controlling isotopic fractionation in TCs, we also examined this effect in published high-resolution TC isotope studies. We find that the Chinese records of Typhoons Haitang, Meigi, and Soudelor9, and the Central American records of Hurricane Irma and Hurricane Otto4,7 also exhibit significant negative correlations between upstream rainout and δ18Op (Supplementary Table 1), similar to our findings for Hurricane Harvey. This indicates that the upstream rainout could be a widely applicable control on the spatiotemporal variability in TCs. However, we also find that some typhoons from southeastern China exhibit strong correlations between δ18Op and relative humidity, suggesting that post-condensation evaporation may play a more important role in some storms9,11. We find that the correlations between δ18Op and upstream rainout generally tend to be stronger when relative humidity is higher (Supplementary Fig. 6). We hypothesize that variations in relative humidity were larger during drier TCs, and as a result, local effects associated with evaporative fractionation dominated the isotopic evolution of precipitation from these storms. Hurricane Irma appears to be an exception in that it shows relatively low relative humidity during the event and yet a significant correlation between δ18Op and upstream rainout. We recognize that the backward trajectory analysis depends on the meteorological data input, as well as the models that generated these meteorological data. Thus, uncertainties might arise from the relatively low resolution of the global meteorological datasets used in the trajectory analysis (40 km resolution EDAS Contiguous US dataset for Hurricane Harvey; 1° × 1° resolution GDAS global dataset for other TCs; see "Methods" section for details) and any uncertainties associated with the parameterization of the climate models used to simulate them. Future studies that have access to improved meteorological input or with coupled vapor-rainwater isotope analyses may help to improve our understanding of the relative importance of humidity, upstream rainout, or other processes in driving isotopic changes in TCs.
High-frequency isotopic changes associated with spiral rainbands
The rapid (~1–3 h) shifts in the δ18Op data from Austin and San Antonio appear to be associated with the passage of individual spiral rainbands (Fig. 2b–e, and 4a). In Austin, our record shows an initial positive shift in δ18Op upon the arrival of each rainband, followed by a transition to more negative δ18Op values as the rainband passes over the sampling location (e.g., 0700−0800 and 1400−1700 on August 26, 0000−0300 on August 27). Previous studies have attributed such high-frequency isotopic variability in TC to varying precipitation types, where stratiform precipitation produces more isotopically depleted rainfall than does convective precipitation4,11. However, this is not likely the cause here. Although spiral rainbands of TC bear some structural similarity to a squall line MCS, with large anvil stratiform clouds expanding towards the outer side and weak convective precipitation forming from updrafts on the inner side, the dynamics controlling the two systems are different12. If the changes in precipitation type were the dominant isotopic control, we would expect to see a negative shift in δ18Op as the downdraft-dominated outer side of rainbands reached the site first, followed by a positive shift in δ18Op associated with the subsequent updraft22,23, opposite to what we observe (Fig. 4a). Instead, we suggest that the rapid decrease in δ18Op values were driven by the local rainout effect during the passage of rainbands, which is supported by a weak but significant correlation between the time-derivative of δ18Op change and hourly precipitation amount (r = −0.39, p < 0.01) in Austin. However, it is less clear what drives the initial positive shift in δ18Op preceding the decrease within a rainband.
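A sketch of the local-rainout test described above, assuming the hourly δ18Op values and box-averaged MRMS rain rates for Austin have been aligned in a single table (hypothetical file and column names):

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical file holding aligned hourly series for Austin:
# "d18O" in per mil and "rain_rate" in mm/h (box-averaged MRMS).
df = pd.read_csv("austin_hourly_d18O_rainrate.csv").dropna()

d_d18O_dt = np.gradient(df["d18O"].to_numpy())      # hourly time-derivative of d18O
r, p = pearsonr(d_d18O_dt, df["rain_rate"].to_numpy())
print(f"r = {r:.2f}, p = {p:.3g}")                  # the paper reports r = -0.39, p < 0.01
```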
Fig. 4: High-frequency shifts in δ18Op and d-excessp strongly associated with rainbands and vertical air motions.
a Evolution of δ18Op (blue curve), d-excessp (red curve) in comparison with rainfall intensity (black curve) in Austin. Peaks in rain rate portray the passage of several individual spiral rainbands over Austin. Rain rate data is from the 2 min MRMS dataset40. b Vertical velocity over Austin. Negative (positive) values indicate ascending (descending) motions. c Relative humidity over Austin. Vertical velocity and relative humidity data are from the ERA5 global reanalysis28.
The observation that positive spikes in δ18Op upon the arrival of each rainband were accompanied by large negative shifts in d-excess (Fig. 4a) suggests that these changes could be driven by kinetic isotopic fractionation rather than equilibrium condensation, which would yield d-excess values near +10‰. Negative d-excess values in rainwater are commonly interpreted as a sign of post-condensation re-evaporation because of the faster diffusion rate of HDO relative to H218O; re-evaporation causes the preferential escape of lighter isotopologues from liquid water, increasing δ18Op values and decreasing d-excess values24,25. However, using previously reported effective fractionation factors26,27 we estimate that at least 50% of the falling droplets would need to be re-evaporated in order for re-evaporation to account for such negative d-excessp values. Because ground-level relative humidity remained near saturation throughout our sampling campaign and these shifts occurred as precipitation was intensifying (Supplementary Fig. 1), extensive re-evaporation is unlikely. Similarly, although we observe a dip in relative humidity associated with a pause in rainfall at around 1200 CDT on August 26th, negative d-excessp shifts were not present during this interval (Supplementary Fig. 1). Together, these observations suggest that increased evaporation associated with transient decreases in relative humidity cannot account for the spikes of exceptionally negative d-excessp in the Austin data.
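The scale of this estimate can be illustrated with a simple Rayleigh-type evaporation sketch; the effective fractionation factors below are illustrative assumptions (they are not the published factors from refs. 26, 27), chosen only to show how much evaporation is needed before d-excess becomes strongly negative:

```python
import numpy as np

# Rayleigh-type enrichment of a partially evaporating raindrop.
# The alpha_eff values are illustrative assumptions combining equilibrium and
# kinetic (diffusive) effects for evaporation into sub-saturated air.
alpha18_eff = 1.014     # assumed effective liquid-vapor fractionation, 18O
alpha2_eff = 1.095      # assumed effective liquid-vapor fractionation, 2H

d18_0, d2_0 = -10.0, -70.0           # assumed initial rain (d-excess = +10 per mil)

f = np.linspace(1.0, 0.3, 71)         # fraction of the drop remaining after evaporation
d18 = (d18_0 + 1000.0) * f ** (1.0 / alpha18_eff - 1.0) - 1000.0
d2 = (d2_0 + 1000.0) * f ** (1.0 / alpha2_eff - 1.0) - 1000.0
d_excess = d2 - 8.0 * d18

# Evaporated fraction needed before d-excess falls to about -15 per mil:
idx = np.argmax(d_excess <= -15.0)
print(f"evaporated fraction ~ {1.0 - f[idx]:.2f}, d-excess = {d_excess[idx]:.1f} per mil")
```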
Using ERA5 hourly reanalysis data (0.25° × 0.25°)28, we investigate the relationship between rainband mesoscale circulation and the rapid changes in d-excessp and propose that it is related to the microphysics of condensation. Our results show that as rainbands arrived, these shifts in d-excessp were accompanied by moderate ascending air motion, which could be responsible for the delivery of supercooled liquid water to the mid-troposphere (Fig. 4b). Support for this comes from the greater-than-100% relative humidity at 500 mbar (Fig. 4c). These environmental conditions are favorable for the Wegener–Bergeron–Findeisen (WBF) condensation process, in which vapor, ice, and supercooled liquid water coexist. In the WBF scenario, vapor is sub-saturated relative to liquid water but is supersaturated relative to ice, causing water molecules to evaporate from liquid water and deposit on ice condensate29. Both of these processes are associated with kinetic isotopic fractionation. At such low temperatures, the effective fractionation between vapor and liquid is larger than between vapor and ice30. The net result is that the WBF process would concentrate heavy isotopologues in the vapor phase, causing increased δ18Ovapor and decreased d-excessvapor31,32,33. The subsequent condensation of ice from such vapor would result in progressively more positive δ18Op and negative d-excessp values. Our parallel 10 min d-excessp data shows a similar range of variation during this interval while exhibiting an incoherent evolution over time, which may indicate spatial heterogeneities associated with these microphysical processes (Supplementary Fig. 1). A single column convection model suggests that such changes in δ18Ovapor and d-excessvapor can only occur when the WBF process is considered, though the magnitude of the signal is strongly dependent on parameterization31. Furthermore, in an idealized storm, isotopic fractionation associated with the WBF process would occur at the anvil detrainment level of a convective system where the convective clouds decay to stratiform clouds31, which is consistent with the observation that the strongly negative d-excess occurred when rainfall was transitioning from the stratiform side of the rainband to the weak/dying convective (Fig. 5). The subsequent decrease in δ18Op and the return of d-excessp values to near 10‰ as each rainband passes over indicates that the WBF process no longer plays a role once this narrow transition zone of the rainband passes, and that the hourly-scale changes in δ18Op are dominated by local rainout effects.
Fig. 5: Schematics of a possible mechanism for the exceptionally large and coupled shifts in δ18Op and d-excessp.
a Spatial anatomy of two spiral rainbands at August 26th 0700 CDT and August 26th 1400 CDT, respectively, overlying MRMS radar images in Fig. 2b, c. Pink (purple) shading indicates stratiform (convective) rainfall. Purple shading with dashed outlines indicates decaying convective rainfall. The red storm symbols indicate the hurricane eye. The black stars indicate Austin, TX. b A schematic illustrating the mesoscale dynamics responsible for the WBF process and high-frequency isotopic shifts. The upper diagram highlights positive shifts in δ18Op (blue curve) and negative shifts in d-excessp (red curve) upon the arrival of two rainbands (black curve). The lower schematic generalizes the vertical structure of these two rainbands, showing the coexisting three phases of water in the region where stratiform rainfall transitions into convective rainfall, which is favorable for the WBF condensation. Light gray shading: the cross-section of a spiral rainband. Dark gray shading: regions of high reflectivity. Broad arrows: mesoscale circulations.
Although WBF condensation can produce condensate with very negative d-excess values, one remaining question is whether the isotopic signals of WBF condensation are preserved as raindrops fall to the ground-level. Post-condensation processes, including re-evaporation and diffusive exchanges during the fall of droplets, could alter the original isotopic composition34,35, thus masking the original signal of condensation. Re-evaporation would further lower d-excessp values, whereas diffusive exchanges would combine the d-excess of ambient vapor and droplets14, in this case making d-excessp more positive. Figure 4c shows that there are a few pockets of relatively dry air with relative humidity of 70–90% in the lower troposphere at around 700 mbar, associated with the lingering drying effect of the downdrafts (Fig. 4b). We do find that the periods with negative d-excessp occur when the lower troposphere is relatively dry. However, the converse is not true: there are multiple periods within the storm when the lower troposphere is dry but precipitation does not show anomalously negative d-excessp values (e.g., August 26th 3:00, August 28th 0:00 and after). We thus hypothesize that a dry lower troposphere may limit diffusive exchange such that d-excessp values remain negative, but these high frequency shifts are accounted for mainly by the microphysics of condensation.
We explore two options to explain the observation that δ18Op values from San Antonio show hourly variations that are similar to those observed in Austin but with a smaller magnitude and a lag of approximately two hours (Fig. 1c; Supplementary Fig. 4). We hypothesize that this lead-lag relationship can be explained by either the counterclockwise moisture transport from Austin to San Antonio (with a distance of 70 miles and a wind speed of 35 miles per hour) or the slow east-to-west migration of rainbands that drives hourly isotopic variations through local rainout processes. The smaller magnitude of the shifts in δ18Op at San Antonio compared with Austin might be due to either a dilution effect during the moisture transport or a substantially weaker local rainout effect associated with lighter rainfall when rainbands passed by San Antonio. However, the large negative spikes in d-excessp are absent in the San Antonio dataset; instead, there is only a slight decrease of ~3‰ in d-excessp values between 0900−2200 CDT on August 26 (Supplementary Fig. 2). One possible explanation could be that, because the processes driving these exceptionally negative d-excess values are microphysical, vapor d-excess values in the atmosphere are heterogeneous, and these spurious signals are homogenized or dispersed during the moisture transport from Austin to San Antonio. Alternatively, as the rainbands migrated to San Antonio, the shift to a predominance of light stratiform rainfall and less well-defined rainbands was unable to sustain extensive WBF condensation, and as a result, did not produce large negative spikes in d-excess. If there were isotope measurements from localities to the east of San Antonio, which would not have been impacted by downstream moisture from Austin, it would allow us to separate the impact of these two mechanisms. Therefore, future studies should consider high-resolution rainwater sampling at multiple sites to better understand the local vs. upstream processes controlling high-frequency isotopic variations.
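As a rough consistency check on the first (moisture-transport) option, the advective travel time implied by the distance and wind speed quoted above matches the observed lag:

$$t \approx \frac{d}{v} = \frac{70\ \mathrm{miles}}{35\ \mathrm{miles\ h^{-1}}} = 2\ \mathrm{h}$$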
Implications for paleo TC reconstructions
Our data show that the rainwater δ18Op values of Hurricane Harvey were substantially more negative than the mean annual and mean August δ18Op values in this region. However, whether it could produce a significant anomaly in the annually integrated rainwater isotopic values, which might be recorded in high-resolution paleoclimate records, is less clear. To assess the relative isotopic impact of Hurricane Harvey on annually-averaged precipitation isotope values (i.e., Δδ18O), we compare the mean annual δ18Op with and without Hurricane Harvey using the SWING2 isotope-model data36 and the gridded Hurricane Harvey δ18Op (see the "Methods" section).
As expected, our calculations demonstrate that hurricane Harvey generated isotopically depleted rainfall over a large region of the southern US (Fig. 6a). However, the total impact of Harvey on the annual isotope signal depends on the δ18O value of the storm, the amount of precipitation that fell during the storm, and the mean annual rainfall occurring in a given location. Based on the ensemble mean of SWING2 models, the mean annual precipitation at Austin, San Antonio, and Houston are approximately 900, 850, and 1100 mm per year, respectively; the mean annual δ18Op are −5.3‰, −5.3‰, and −4.7‰, respectively. We calculate that Hurricane Harvey dumped 171, 53, and 758 mm of precipitation at these three sites, with amount-weighted mean δ18Op values of −9.7‰, −9.7‰ and −8.0‰, respectively. Taken together, Hurricane Harvey would shift the annual mean isotopic values by −0.7‰, −0.3‰, and −1.3‰ in Austin, San Antonio, and Houston, respectively. In the core region of Hurricane Harvey precipitation over southeastern Texas, we calculate that the isotopic value of mean annual rainfall is shifted by up to −2‰ due to the incorporation of highly depleted rainfall during the hurricane event (Fig. 6b). This shift exceeds 2σ of annual δ18Op in this region and should be evident in high-resolution proxy datasets37. However, over San Antonio and the Mississippi Valley, the total precipitation that fell during Harvey is small relative to the mean annual precipitation, such that it is unlikely to produce an anomaly that is likely to be visible in an annually integrated proxy record (|Δδ18Op|< 0.3‰; Fig. 6b). Thus, while the isotopic signature of large TC events can be significant, particularly in semi-arid regions, the anomaly will be relatively restricted spatially to the region associated with the core of the cyclone-induced rainfall anomaly. Furthermore, these calculations highlight the need for proxy records with annual or higher resolution when attempting to reconstruct hurricanes; in records with longer temporal averaging, the hurricane signature will be even more muted. In addition, the preservation potential of such a signal is proxy-specific and is subject to factors such as variability in soil moisture and surface- and groundwater hydrology associated with a given proxy38,39. To further constrain the robustness and limitation of the isotope approach as a paleotempestology proxy, proxy-specific studies in the future are needed which will focus on how well TC signals are preserved in different proxy systems.
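The quoted shifts follow from a simple amount-weighted mixing calculation; a sketch using the numbers in the text, treating the SWING2 climatology as the Harvey-free year, is shown below:

```python
# Amount-weighted shift of annual-mean d18O caused by adding Hurricane Harvey
# rainfall to the SWING2 climatological year (values taken from the text).
sites = {
    #        P_annual (mm), d18O_annual (per mil), P_harvey (mm), d18O_harvey (per mil)
    "Austin":      (900.0, -5.3, 171.0, -9.7),
    "San Antonio": (850.0, -5.3,  53.0, -9.7),
    "Houston":     (1100.0, -4.7, 758.0, -8.0),
}

for name, (p_ann, d_ann, p_tc, d_tc) in sites.items():
    d_with = (p_ann * d_ann + p_tc * d_tc) / (p_ann + p_tc)   # annual mean including Harvey
    print(f"{name}: delta-shift = {d_with - d_ann:+.1f} per mil")
# Expected output: about -0.7 (Austin), -0.3 (San Antonio), and -1.3 (Houston),
# matching the shifts quoted above.
```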
Fig. 6: Isotopic impact of Hurricane Harvey on mean annual rainfall.
a Estimated δ18Op values in Hurricane Harvey rainfall (shadings; see the "Methods" section). The contours indicate mean annual precipitation (unit: mm) across this region estimated using the multi-model ensemble mean of SWING2 models. b The difference between the mean annual δ18Op precipitation with and without Hurricane Harvey (Δδ18Oannual). The stars indicate the rainwater sampling sites in this study.
In this study, we investigate the mechanisms of isotopic variability during Hurricane Harvey (2017) using rainwater collected in Austin, San Antonio, and Houston. We demonstrate that event-long trends in rainfall δ18O values are predominantly controlled by upstream rainout and Rayleigh distillation along the moisture transport pathway. Since moisture near the hurricane eye generally had experienced more rainout, the upstream rainout control explains the observation of previous studies that the δ18Op values are consistently more negative near the eyewall. In our compilation of existing TC isotope data, upstream rainout shows a strong correlation with δ18Op in the majority of these TCs, suggesting that upstream rainout is the control on the low-frequency isotopic variability during most TCs. However, our analysis using a compilation of existing isotope data of tropical cyclone precipitation also suggests that when the relative humidity is low, isotopic changes due to re-evaporation could potentially dominate the isotopic signals in collected rainwater. Our data also exhibit relatively large high-frequency shifts in the hourly δ18Op and d-excessp values during the storm. While local rainout in rainbands explains the negative shifts in δ18Op as rainbands passed by, the positive shifts in δ18Op and negative shifts in d-excessp that preceded them could be best accounted for by microphysical condensation processes associated with the WBF condensation. Lastly, we estimated the impact of Hurricane Harvey on annually integrated isotopic values of precipitation in order to assess whether water isotope-based approaches can be used to study paleotempestology. While a TC like Hurricane Harvey can produce a large amount of isotopically depleted rainwater over a relatively large region, whether the signal can be detected in annually resolved biogeological archives depends on the isotopic values of the storm, the total precipitation amount from the storm, and the mean annual precipitation amount. Thus, we suggest that caution should be exercised when employing the isotopic method to study paleotempestology.
Sample collection and isotope measurement
Rainwater samples were collected hourly in Austin (30.30°N, 97.73°W, n = 50) and San Antonio (29.58°N, 98.62°W, n = 29), and sub-daily in Houston (30.08°N, 95.35°W, n = 14). To collect, a bucket was placed ~1 m above ground in an uncovered area. The collected rainwater was transferred to a 2 ml glass vial using a clean glass pipette, and the vial was capped immediately after collection. Minimal headspace was allowed in the glass vials in order to avoid evaporation. The glass vials were stored in a refrigerator at 4 °C before isotope analysis. The buckets were emptied and wiped dry before being used again, and two buckets were alternated between hourly collections. In addition, we also collected a total of 119 rainwater samples in Austin at 10 min resolution on August 26 using another set of exactly the same apparatus, which yielded results consistent with our parallel hourly sampling (Supplementary Fig. 1). In Houston, the first two samples were collected from a nearby creek because the hazardous weather conditions prevented us from setting up the collection system. We use these two samples to approximate the integrated rainwater isotopic composition.
The isotope (δ18O and δ2H) measurements of rainwater samples from Austin and Houston were performed on a Picarro L2130‐i Liquid Water Analyzer at the University of Texas at Austin16. Samples from San Antonio were measured on a Picarro L2130-i Water Isotope Analyzer at the University of Texas at San Antonio. Each sample was injected ten times with autosamplers and was analyzed using High Precision mode. The first two injections were discarded in order to remove the isotopic memory effects of the instrument, and the average of the remaining eight measurements is reported here. A standard (δ18O = −0.23‰, δ2H = −5.01‰) calibrated against IAEA Water Standards GISP, SLAP, and VSMOW2 was inserted at the beginning, the middle, and the end of each run to correct for the intercept of the calibration equation of the instrument. Three additional in‐stock water samples of known isotopic composition (δ18O = −6.58‰, −8.61‰, and −9.31‰; δ2H = −43.34‰, −59.34‰, and −63.03‰, respectively) were analyzed as check standards along with each run to verify the adjusted calibration. The analytical precision is better than 0.1‰ for δ18O and 1‰ for δ2H. δ18O and δ2H are reported in per mil (‰) relative to Vienna Standard Mean Ocean Water (VSMOW). The delta notation is defined as:
$$\delta^{18}\mathrm{O}=\left[\frac{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{Sample}}}{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{VSMOW}}}-1\right]\times 10^{3}\ \text{‰}$$
$$\delta^{2}\mathrm{H}=\left[\frac{\left({}^{2}\mathrm{H}/{}^{1}\mathrm{H}\right)_{\mathrm{Sample}}}{\left({}^{2}\mathrm{H}/{}^{1}\mathrm{H}\right)_{\mathrm{VSMOW}}}-1\right]\times 10^{3}\ \text{‰}$$
and the deuterium excess (d-excess), which measures the enrichment of deuterium relative to 18O, is defined as:
$$\mathrm{d\text{-}excess}=\delta^{2}\mathrm{H}-8\times\delta^{18}\mathrm{O}$$
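As a simple illustration of these definitions (not part of the original analysis), the sketch below converts measured isotope ratios to delta values and computes d-excess; the VSMOW reference ratios and the example sample ratios are assumptions used for demonstration only.

```python
# Minimal sketch of the delta-notation and d-excess definitions above.
# The VSMOW reference ratios and the sample ratios are illustrative assumptions.
R18O_VSMOW = 2005.20e-6   # commonly quoted 18O/16O ratio of VSMOW
R2H_VSMOW = 155.76e-6     # commonly quoted 2H/1H ratio of VSMOW

def delta(r_sample, r_standard):
    """Delta value in per mil relative to the standard."""
    return (r_sample / r_standard - 1.0) * 1e3

def d_excess(delta2h, delta18o):
    """Deuterium excess in per mil."""
    return delta2h - 8.0 * delta18o

# Hypothetical rainwater sample
d18o = delta(1.9932e-3, R18O_VSMOW)    # about -6.0 permil
d2h = delta(148.5e-6, R2H_VSMOW)       # about -46.6 permil
print(round(d18o, 2), round(d2h, 2), round(d_excess(d2h, d18o), 2))
```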
The Multi-Radar/Multi-Sensor System (MRMS) was used to estimate the precipitation rate and precipitation type during Hurricane Harvey. MRMS ingests data from the US National Weather Service WSR-88D Doppler radar network and the Canadian radar network, along with commercial and US Terminal Doppler Weather Radars, providing 0.01° × 0.01° fields of precipitation rate and precipitation type at 2 min resolution40. The MRMS precipitation-type product separates convective and stratiform precipitation following the algorithm described in ref. 41. To calculate the precipitation rates at our sites, we averaged the precipitation rates within 0.1° × 0.1° boxes centered on our Austin, San Antonio, and Houston sampling locations, respectively.
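A minimal sketch of this box averaging is given below (not the authors' code; the grid, variable names, and synthetic field are assumptions), for a regular 0.01° precipitation-rate grid.

```python
import numpy as np

def box_mean(rate, lats, lons, site_lat, site_lon, half_width=0.05):
    """Mean precipitation rate in a +/- half_width degree box around a site.

    rate : 2-D array (nlat, nlon) of precipitation rate for one time step
    lats, lons : 1-D coordinate arrays matching the grid
    """
    in_lat = np.abs(lats - site_lat) <= half_width
    in_lon = np.abs(lons - site_lon) <= half_width
    return np.nanmean(rate[np.ix_(in_lat, in_lon)])

# Synthetic 0.01-degree field covering south-central Texas
lats = np.arange(29.0, 31.0, 0.01)
lons = np.arange(-99.0, -94.0, 0.01)
rate = np.random.default_rng(0).gamma(2.0, 2.0, size=(lats.size, lons.size))  # mm/h

print(box_mean(rate, lats, lons, 30.30, -97.73))   # Austin sampling site
```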
We also used the hourly observational weather data from the Austin Camp Mabry weather station, San Antonio International Airport weather station, and Houston George Bush Intercontinental Airport weather station. The hourly precipitation measured at these stations is consistent with the radar-derived rainfall. Here, we used the weather station data to assess the impacts of temperature, relative humidity, surface pressure, and wind speed on rainwater isotopic ratios.
HYSPLIT backward trajectory modeling
In order to evaluate the influence of moisture transport history, we performed backward trajectory analyses using the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model42. The Python-based package PySPLIT43 was used to run HYSPLIT and generate trajectories in batches. We used the 40 km Eta Data Assimilation System (EDAS) data covering the contiguous US as the meteorological input. For the other TCs considered in this study that occurred outside of the contiguous US, we used the 1° resolution Global Data Assimilation System (GDAS) data, which have global coverage. For Hurricane Harvey, the results from these two datasets are significantly correlated with each other (r = 0.47, p = 0.0005), suggesting a relatively small impact of the choice of dataset for the purpose of our analysis. 72 h backward trajectories were generated hourly throughout each event, with end elevations of 500, 1000, 2000, …, 8000, and 9000 m above ground level.
HYSPLIT reports meteorological parameters alongside the spatial location (latitude, longitude, elevation) of airmass at a certain point. We calculated the rainout along trajectory by summing the total precipitation amount along each trajectory. To account for moisture in the entire air column, we calculated the weighted mean values of upstream rainout based on the specific humidity at each level:
$$\mathrm{UR}_{(t)}=\frac{\sum_{l=1}^{10} q_{(t,l)}\sum_{n=1}^{72} P_{(t,l,n)}}{\sum_{l=1}^{10} q_{(t,l)}}$$
where UR is the specific-humidity-weighted 72 h total upstream rainout for the whole air column, q is the specific humidity at a given arrival altitude, and P is the hourly precipitation amount at a given upstream location. t indicates the time at the sampling site, l indicates the trajectory arrival altitude above the sampling site (10 levels: 500, 1000, 2000, …, 9000 m above ground level), and n indicates the number of hours prior to the arrival of the vapor at our sampling location.
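The sketch below (not the published MATLAB code; array shapes and names are assumptions) shows how this humidity-weighted upstream rainout could be computed for a single sampling hour.

```python
import numpy as np

def upstream_rainout(precip, q):
    """Humidity-weighted 72 h upstream rainout (equation above) for one sampling hour.

    precip : (10, 72) hourly precipitation (mm) along the trajectory arriving
             at each of the 10 end altitudes
    q      : (10,) specific humidity at the sampling site at each end altitude
    """
    rainout_per_level = precip.sum(axis=1)             # 72 h total rainout per level
    return np.sum(q * rainout_per_level) / np.sum(q)   # specific-humidity weighting

# Synthetic example: 10 trajectory end levels x 72 hourly steps
rng = np.random.default_rng(0)
precip = rng.exponential(scale=0.5, size=(10, 72))     # mm per hour
q = np.linspace(0.015, 0.002, 10)                      # kg/kg, decreasing with height
print(upstream_rainout(precip, q))
```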
To compare the isotopic data from Hurricane Harvey to mean annual precipitation isotopic values, we used the ensemble mean of the Stable Water Isotope Intercomparison Group Phase 2 (SWING2) model data, which reports precipitation and isotopic composition of precipitation modeled by a set of different isotope-enabled general circulation models36. The ensemble of these model simulations accurately reproduces the precipitation amount across this region, and the modeled isotopic ratios are comparable to the previously reported rainwater isotope data in central Texas16,19.
Computing the annual isotopic anomaly associated with Harvey
To assess the potential isotopic signature of Hurricane Harvey that might be seen in an annually integrated paleoclimate stable isotope record, we first calculated the temporal evolution of rainfall δ18O values at each grid cell based on the migration of the hurricane eye and a transfer function relating δ18O to distance from the eye (Fig. 3a). The event-integrated mean isotopic values at each grid cell were calculated by amount-weighting the hourly δ18O values with the gridded hourly precipitation data from the North American Land Data Assimilation System (NLDAS)44. We recognize that this reconstruction of the spatial rainwater δ18O values of Hurricane Harvey (Fig. 6a) may not be fully accurate, since the relative distance to the eye is not a mechanistic control on isotopic values, as discussed above. This δ18Op map for Hurricane Harvey likely underestimates the true isotopic depletion at the later stage of the storm, when the system shifted northeastward into the Mississippi Valley and the oceanic moisture source was cut off. We therefore consider this approach to provide a conservative estimate of Harvey δ18Op, which is valuable for assessing the isotopic impact of the hurricane.
To assess the relative isotopic impact of Hurricane Harvey on annually averaged precipitation isotope values, we combined the SWING2 climatological isotope values and our gridded Harvey δ18Op values. We first calculated a gridded map of amount-weighted mean annual δ18Op (δ18Oannual). We then compared the mean annual δ18Op with and without Hurricane Harvey, which allows us to quantify the annually integrated stable isotope anomaly (Δδ18Oannual) value associated with the Hurricane Harvey storm event (Fig. 6b).
$$\Delta\delta^{18}\mathrm{O}_{\mathrm{annual}}=\frac{\delta^{18}\mathrm{O}_{\mathrm{annual}}\times P_{\mathrm{annual}}+\delta^{18}\mathrm{O}_{\mathrm{Harvey}}\times P_{\mathrm{Harvey}}}{P_{\mathrm{annual}}+P_{\mathrm{Harvey}}}-\delta^{18}\mathrm{O}_{\mathrm{annual}}$$
where δ18Oannual is the amount-weighted mean annual δ18Op at each grid, Pannual is the mean annual precipitation amount at each grid, δ18OHarvey is the amount-weighted mean δ18Op from Hurricane Harvey at each grid, and PHarvey is the total precipitation amount from Hurricane Harvey at each grid.
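For illustration only (not the authors' code; the gridded inputs are assumed to be co-registered arrays), the anomaly defined above can be computed as follows.

```python
import numpy as np

def delta18o_annual_anomaly(d18o_annual, p_annual, d18o_harvey, p_harvey):
    """Annual isotopic anomaly (equation above); all inputs are arrays on the same grid.

    d18o_annual : amount-weighted mean annual precipitation d18O (permil)
    p_annual    : mean annual precipitation (mm)
    d18o_harvey : amount-weighted Harvey rainfall d18O (permil)
    p_harvey    : total Harvey precipitation (mm)
    """
    with_harvey = (d18o_annual * p_annual + d18o_harvey * p_harvey) / (p_annual + p_harvey)
    return with_harvey - d18o_annual

# Single-cell example: 1000 mm/yr at -4 permil plus 500 mm of Harvey rain at -10 permil
print(delta18o_annual_anomaly(np.array(-4.0), np.array(1000.0),
                              np.array(-10.0), np.array(500.0)))   # -2.0 permil
```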
All rainwater isotope data and calculated upstream rainout data are available at: https://doi.org/10.6084/m9.figshare.17169032.v145. The Multi-Radar/Multi-Sensor (MRMS) product is archived at: https://mesonet.agron.iastate.edu/archive/. The input data files for HYSPLIT are available at: https://www.ready.noaa.gov/archives.php. The Stable Water Isotope Intercomparison Group, Phase 2 (SWING2) model data are available at: https://data.giss.nasa.gov/swing2/. The hourly station-based weather data are archived at: https://mesowest.utah.edu/. The North American Land Data Assimilation System (NLDAS) data are available at: https://ldas.gsfc.nasa.gov/nldas/v2/forcing.
The MATLAB codes for calculating upstream rainfall and gridded Hurricane Harvey rainwater δ18O are available at https://doi.org/10.6084/m9.figshare.17311481.v1.
Gedzelman, S. et al. Probing hurricanes with stable isotopes of rain and water vapor. Monthly Weather Rev. 131, 1112–1127 (2003).
Lawrence, J. R., Gedzelman, S. D., Zhang, X. & Arnold, R. Stable isotope ratios of rain and vapor in 1995 hurricanes. J. Geophys. Res.: Atmos. 103, 11381–11400 (1998).
Lawrence, R. J. & Gedzelman, D. S. Low stable isotope ratios of tropical cyclone rains. Geophys. Res. Lett. 23, 527–530 (1996).
Sánchez-Murillo, R. et al. Deciphering key processes controlling rainfall isotopic variability during extreme tropical cyclones. Nat. Commun. 10, 1–10 (2019).
Baldini, L. M. et al. Persistent northward North Atlantic tropical cyclone track migration over the past five centuries. Sci. Rep. 6, 37522 (2016).
Frappier, A. B., Sahagian, D., Carpenter, S. J., González, L. A. & Frappier, B. R. Stalagmite stable isotope record of recent tropical cyclone events. Geology 35, 111–114 (2007).
Welsh, K. & Sánchez-Murillo, R. Rainfall, groundwater, and surface water isotope data from extreme tropical cyclones (2016–2019) within the Caribbean Sea and Atlantic Ocean basins. Data in Brief 30, 105633 (2020).
Lawrence, J. R., Gedzelman, S. D., Gamache, J. & Black, M. Stable isotope ratios: Hurricane Olivia. J. Atmos. Chem. 41, 67–82 (2002).
Xu, T. et al. Stable isotope ratios of typhoon rains in Fuzhou, Southeast China, during 2013–2017. J. Hydrol. 570, 445–453 (2019).
Fudeyasu, H. et al. Isotope ratios of precipitation and water vapor observed in Typhoon Shanshan. J. Geophys. Res.: Atmos. 113, D12113 (2008).
Munksgaard, N. C. et al. Stable isotope anatomy of tropical cyclone Ita, north-eastern Australia, April 2014. PLoS One 10, e0119728 (2015).
Didlake, A. C. Jr & Houze, R. A. Jr Dynamics of the stratiform sector of a tropical cyclone rainband. J. Atmos. Sci. 70, 1891–1911 (2013).
Houze Jr, R. A. Mesoscale convective systems. Rev. Geophys. 42, RG4003 (2004).
Kurita, N. Water isotopic variability in response to mesoscale convective system over the tropical ocean. J. Geophys. Res.: Atmos. 118, 10,376–10,390 (2013).
Aggarwal, P. K. et al. Proportions of convective and stratiform precipitation revealed in water isotope ratios. Nat. Geosci. 9, 624–629 (2016).
Sun, C., Shanahan, T. M. & Partin, J. Controls on the isotopic composition of precipitation in the South‐Central United States. J. Geophys. Res.: Atmos. 124, 8320–8335 (2019).
Munksgaard, N. C. et al. Data descriptor: Daily observations of stable isotope ratios of rainfall in the tropics. Sci. Rep. 9, 1–7 (2019).
Blake, E. S. & Zelinsky, D. A. National Hurricane Center Tropical Cyclone Report: Hurricane Harvey (National Hurricane Center, National Oceanographic and Atmospheric Association, 2018).
Pape, J. R., Banner, J. L., Mack, L. E., Musgrove, M. & Guilfoyle, A. Controls on oxygen isotope variability in precipitation and cave drip waters, central Texas, USA. J. Hydrol. 385, 203–215 (2010).
Feng, W., Casteel, R. C., Banner, J. L. & Heinze-Fry, A. Oxygen isotope variations in rainfall, drip-water and speleothem calcite from a well-ventilated cave in Texas, USA: Assessing a new speleothem temperature proxy. Geochim. Cosmochim. Acta 127, 233–250 (2014).
Feng, W. et al. Changing amounts and sources of moisture in the U.S. southwest since the Last Glacial Maximum in response to global climate change. Earth Planetary Sci. Lett. 401, 47–56 (2014).
He, S., Goodkin, N. F., Kurita, N., Wang, X. & Rubin, C. M. Stable isotopes of precipitation during tropical Sumatra Squalls in Singapore. J. Geophys. Res.: Atmos. 123, 3812–3829 (2018).
Risi, C., Bony, S., Vimeux, F., Chong, M. & Descroix, L. Evolution of the stable water isotopic composition of the rain sampled along Sahelian squall lines. Quart. J. R. Meteor. Soc. 136, 227–242 (2010).
Dansgaard, W. Stable isotopes in precipitation. Tellus 16, 436–468 (1964).
Bowen, G. J., Cai, Z., Fiorella, R. P. & Putman, A. L. Isotopes in the water cycle: regional-to global-scale patterns and applications. Annu. Rev. Earth Planetary Sci. 47, 453–479 (2019).
Cappa, C. D., Hendricks, M. B., DePaolo, D. J. & Cohen, R. C. Isotopic fractionation of water during evaporation. J. Geophys. Res.: Atmos. 108, 4525 (2003).
Horita, J., Rozanski, K. & Cohen, S. Isotope effects in the evaporation of water: a status report of the Craig–Gordon model. Isotopes Environ. Health Stud. 44, 23–49 (2008).
Hersbach, H. et al. The ERA5 global reanalysis. Quart. J. R. Meteor. Soc. 146, 1999–2049 (2020).
Korolev, A. Limitations of the Wegener–Bergeron–Findeisen mechanism in the evolution of mixed-phase clouds. J. Atmos. Sci. 64, 3372–3375 (2007).
Ciais, P. & Jouzel, J. Deuterium and oxygen 18 in precipitation: Isotopic model, including mixed cloud processes. J. Geophys. Res.: Atmos. 99, 16793–16803 (1994).
Bolot, M., Legras, B. & Moyer, E. Modelling and interpreting the isotopic composition of water vapour in convective updrafts. Atmos Chem Phys, 13, 7903–7935 (2013).
Dütsch, M., Pfahl, S. & Sodemann, H. The impact of nonequilibrium and equilibrium fractionation on two different deuterium excess definitions. J. Geophys. Res.: Atmos. 122, 12,732–12,746 (2017).
Lu, G. & DePaolo, D. J. Lattice Boltzmann simulation of water isotope fractionation during ice crystal growth in clouds. Geochim. Cosmochim. Acta 180, 271–283 (2016).
Risi, C., Bony, S. & Vimeux, F. Influence of convective processes on the isotopic composition (δ18O and δD) of precipitation and water vapor in the tropics: 2. Physical interpretation of the amount effect. J. Geophys. Res.: Atmos. 113, D19305 (2008).
Risi, C., Muller, C. & Blossey, P. Rain evaporation, snow melt, and entrainment at the heart of water vapor isotopic variations in the tropical troposphere, according to large‐eddy simulations and a two‐column model. J. Adv. Model. Earth Syst. 13, e2020MS002381 (2021).
Risi, C. et al. Process‐evaluation of tropospheric humidity simulated by general circulation models using water vapor isotopologues: 1. Comparison between models and observations. J. Geophys. Res.: Atmos. 117, D05303 (2012).
Feng, W. et al. Changing amounts and sources of moisture in the US southwest since the Last Glacial Maximum in response to global climate change. Earth Planetary Sci. Lett. 401, 47–56 (2014).
Mora, C. I., Miller, D. L. & Grissino-Mayer, H. D. Tempest in a tree ring: Paleotempestology and the record of past hurricanes. Sediment. Rec. 4, 4–8 (2006).
Frappier, A. B. Masking of interannual climate proxy signals by residual tropical cyclone rainwater: Evidence and challenges for low‐latitude speleothem paleoclimatology. Geochem., Geophys., Geosyst. 14, 3632–3647 (2013).
Zhang, J. et al. Multi-radar multi-sensor (MRMS) quantitative precipitation estimation: Initial operating capabilities. Bull. Am. Meteorol. Soc. 97, 621–638 (2016).
Qi, Y., Zhang, J. & Zhang, P. A real‐time automated convective and stratiform precipitation segregation algorithm in native radar coordinates. Quart. J. R. Meteorol. Soc. 139, 2233–2240 (2013).
Draxler, R. R. & Hess, G. Description of the HYSPLIT_4 modelling system. NOAA Tech. Mem. ERL ARL-224 (1997).
Warner, M. S. Introduction to PySPLIT: A Python toolkit for NOAA ARL's HYSPLIT model. Comput. Sci. Eng. 20, 47–62 (2018).
Xia, Y. et al. NLDAS Primary Forcing Data L4 hourly 0.125 × 0.125 degree V002. Goddard Earth Sciences Data and Information Services Center (GES DISC), Greenbelt, MD, USA, Rep. NASA/GSFC/HSL (2009).
Sun, C. et al. Data for Sun et al. Commun. Earth Environ. figshare Dataset. https://doi.org/10.6084/m9.figshare.17169032.v1.
C.S. acknowledges the Advanced Study Program Postdoctoral Fellowship of the National Center for Atmospheric Research (NCAR) for support. NCAR is sponsored by the National Science Foundation. This research was supported by the National Science Foundation (AGS 1702271) and UT system-CONACYT collaborative research grant (ConTex 2017-33) to T.M.S., the endowment of Amy Shelton and V.H. McNutt Distinguished Professorship in Geology at The University of Texas at San Antonio to Y.G., and the National Natural Science Foundation of China (42106228) to L.T.
Department of Geological Sciences, Jackson School of Geosciences, University of Texas at Austin, Austin, TX, USA
Chijun Sun, Timothy M. Shanahan, Natasha Piatrunia & Jay Banner
Climate and Global Dynamics Laboratory, National Center for Atmospheric Research, Boulder, CO, USA
Chijun Sun
Department of Earth and Planetary Sciences, University of Texas at San Antonio, San Antonio, TX, USA
Lijun Tian & Yongli Gao
Key Laboratory of Cenozoic Geology and Environment, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, China
Lijun Tian
Institute for Geophysics, Jackson School of Geosciences, University of Texas at Austin, Austin, TX, USA
Judson W. Partin
Timothy M. Shanahan
Yongli Gao
Natasha Piatrunia
Jay Banner
C.S., T.M.S., and J.W.P. designed the study. C.S., L.T., and J.W.P. collected rainwater samples. C.S., L.T., J.W.P., and Y.G. conducted the laboratory work and analyzed the isotope data. T.M.S., J.W.P., Y.G., and J.B. facilitated the multi-lab collaboration. C.S. and N.P. conducted the PySPLIT analysis. C.S. analyzed the meteorological data and SWING2 isotope modeling data. C.S. and T.M.S wrote the first draft of the paper. C.S. created the figures. All authors contributed to editing the final version of the manuscript.
Correspondence to Chijun Sun or Lijun Tian.
Communications Earth & Environment thanks Ricardo Sánchez-Murillo, Ana María Durán-Quesada, and the other, anonymous, reviewer for their contribution to the peer review of this work. Primary Handling Editors: Regina Rodrigues, Joe Aslin
Sun, C., Tian, L., Shanahan, T.M. et al. Isotopic variability in tropical cyclone precipitation is controlled by Rayleigh distillation and cloud microphysics. Commun Earth Environ 3, 50 (2022). https://doi.org/10.1038/s43247-022-00381-1
How to determine the direction of instantaneous acceleration in a 2D motion? [duplicate]
Possible duplicate of: Derivation of formula of normal acceleration (1 answer)
How do we determine the direction of instantaneous acceleration when the body is moving in a plane (or a 3D space)? This question has been truly bothering me for nearly two weeks. I looked it up, found a similar post, but that didn't really clear up my doubts, so I decided to put it up.
Let's get to the point. I do understand that the direction of acceleration (average or instantaneous) is along the direction of the change in velocity over a time interval $t$. And it's relatively easy to find the direction of that change in velocity (by vector addition/subtraction) if the time interval over which the change takes place is fairly large, say 5 or 10 seconds. But it gets much more challenging to determine this direction when the time interval becomes infinitesimally small, i.e., when it approaches zero. For example, say a body is moving along a curve whose trajectory equation is $y = x^2$, so the body moves along a parabolic path. What I know is that the body's instantaneous velocity at any point is along the tangent to the curve at that point. But,
How do we find the direction of its instantaneous acceleration at that point, if all we're given is its trajectory equation?
If we differentiate its trajectory equation partially with respect to time and plot its $v_y$ vs $v_x$ relation, what does the tangent at any point to this $v_y$ vs $v_x$ curve give? Does the slope of a $v_y$ vs $v_x$ curve have any physical meaning?
kinematics acceleration vectors coordinate-systems differentiation
π times e
marked as duplicate by ja72, user191954, Jon Custer, Kyle Kanos, Aaron Stevens Oct 11 '18 at 0:45
$\begingroup$ I think the equation should be written in the parametric form with time as the parameter. Then differentiate it. $\endgroup$ – Mohan Oct 8 '18 at 13:28
$\begingroup$ Can you please elaborate it? $\endgroup$ – π times e Oct 8 '18 at 13:38
$\begingroup$ There isn't such a unique point or line where accelerations are zero in general like instantaneous velocities have. $\endgroup$ – ja72 Oct 8 '18 at 16:59
The easiest way is to get your position as a function of time - instead of defining your trajectory as a curve $y(x)$, use the separate equations $x(t)$ and $y(t)$. Then, the acceleration as a function of time will just be the vector $\langle \frac{d^2 x}{dt^2}, \frac{d^2 y}{dt^2}\rangle$.
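For instance, here is a small SymPy sketch of that recipe. The parametrization $x(t)=t$ is an assumption made purely for illustration — the question only gives the path $y=x^2$, which, as discussed below, is not enough by itself.

```python
import sympy as sp

t = sp.symbols('t', real=True)

# Assumed parametrization of the parabola y = x^2 (x(t) = t is a choice, not given)
x = t
y = t**2

vx, vy = sp.diff(x, t), sp.diff(y, t)     # velocity components
ax, ay = sp.diff(vx, t), sp.diff(vy, t)   # acceleration components

theta_v = sp.atan2(vy, vx)   # direction of the velocity (tangent to the path)
theta_a = sp.atan2(ay, ax)   # direction of the acceleration

print(theta_v)   # atan2(2*t, 1): depends on where you are on the curve
print(theta_a)   # pi/2: for this particular x(t) the acceleration points straight up
```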
If for some reason that's not an option, given the curve $y(x)$, we can differentiate it to get
$$\frac{dy}{dx}=\frac{\frac{dy}{dt}}{\frac{dx}{dt}}$$
Since the velocity vector is $\langle \frac{dx}{dt},\frac{dy}{dt}\rangle$, the angle $\theta_v$ that the velocity vector makes with the horizontal axis is given by
$$\theta_v = \tan^{-1}\left(\frac{\frac{dy}{dt}}{\frac{dx}{dt}}\right)=\tan^{-1}\left(\frac{dy}{dx}\right)$$
But finding the direction of the velocity vector is easy, because the velocity vector always points along the tangent to your curve $y(x)$, so you can get that information directly from the curve. However, the acceleration vector is not subject to the same restrictions. It can have both a tangential component and a normal component. The normal component:
$$a_n = \frac{|v|^2}{R}$$
changes only the direction of the velocity vector. As you can see, it does have some dependence on the instantaneous radius of curvature $R$ of the curve $y(x)$, but it also depends on the magnitude of the velocity vector $v$, which we do not have because we aren't given the functions $x(t)$ or $y(t)$. So we can't calculate the normal component. The tangential component changes only the magnitude of the velocity vector (and whether it's moving "forward" or "backward" along the curve), which entirely depends on how quickly the particle moves along the curve and not on the shape of the curve itself. So we can't calculate either component separately. What we want is the angle between the velocity and acceleration vectors
$$\theta_{va} = \tan^{-1}\left(\frac{a_n}{a_t}\right) = \tan^{-1}\left(\frac{\frac{v^2}{R}}{\frac{dv}{dt}}\right)$$
from which we cannot, in general, eliminate the dependence on the magnitude of the velocity. So, in general, you cannot find the direction of the acceleration vector from the shape of the path $y(x)$ alone. You must have some information about the motion as a function of time.
That said, there are two special cases in which you can get the direction of the acceleration vector based only on the shape of the curve:
If the motion is at constant speed, then the magnitude of the velocity vector $v$ is fixed, and so $a_t=0$. In that case, the acceleration, if it is nonzero, will always point in the direction perpendicular to the curve (and by extension, perpendicular to the velocity vector).
If the motion is in a straight line, then the direction of the velocity vector is fixed, and so $a_n=0$. In that case, the acceleration will either be parallel or antiparallel to the velocity vector (depending on if the speed is increasing or decreasing at a particular moment in time).
And just as a final note, taking the second derivative of the curve $y(x)$ gives you
$$\frac{d^2 y}{dx^2} = \frac{\frac{d^2 y}{dt^2}\frac{dx}{dt}-\frac{d^2x}{dt^2}\frac{dy}{dt}}{\left(\frac{dx}{dt}\right)^3}$$
which doesn't give you any way of separating the two second time-derivatives to get the ratio $\frac{\frac{d^2y}{dt^2}}{\frac{d^2x}{dt^2}}$ that you would need to calculate the direction of the acceleration vector. So you can't use the same trick as we used to determine the direction of the velocity vector, either.
If you think of $$ (a_x,a_y)=\frac{d}{dt}(v_x,v_y) $$ then you can find $$ \frac{a_y}{a_x}=\frac{dv_y/dt}{dv_x/dt}=\frac{dv_y}{dv_x} =\frac{\Delta v_y}{\Delta v_x} $$ In other words, the ratio of accelerations at time $t$, from which you can get the direction of $\vec a$, is just the slope of the $(v_x,v_y)$ graph at that point.
You are not going to get information about the magnitude of the acceleration from the $v_y$ vs $v_x$ graph as this parametric curve eliminates $t$. The simplest example of why this is so would be two curves $(v_x,v_y)=(t,2t)$ and $(2t,4t)$ which completely overlap but for which the accelerations are different. You would need locate $(v_x,v_y)$ for a sequence of different $t$ on the same plot, and indicate values of $t$ on your curve so you can recover the time interval between two values of $\vec v$.
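A quick numerical check of both points, using the two example motions above (an illustrative sketch, not part of the original answer):

```python
import numpy as np

t = np.linspace(0.0, 5.0, 501)
dt = t[1] - t[0]

# The two example motions: same curve in the (v_x, v_y) plane, different time dependence
vx1, vy1 = t, 2 * t
vx2, vy2 = 2 * t, 4 * t

# Slope of the (v_x, v_y) curve -> direction of the acceleration (a_y / a_x)
slope1 = np.gradient(vy1, vx1)   # 2 everywhere
slope2 = np.gradient(vy2, vx2)   # 2 everywhere: same direction of a

# Magnitudes differ, so the (v_x, v_y) plot alone cannot give |a|
a1 = np.hypot(np.gradient(vx1, dt), np.gradient(vy1, dt))   # sqrt(1^2 + 2^2)
a2 = np.hypot(np.gradient(vx2, dt), np.gradient(vy2, dt))   # sqrt(2^2 + 4^2)
print(slope1[10], slope2[10])   # same slope, same direction
print(a1[10], a2[10])           # different magnitudes
```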
ZeroTheHero
I think what you are asking is how to decompose an acceleration vector $\vec{a}$ into tangential and normal components and find the center of rotation of the normal (centrifugal) component.
A velocity vector $\vec{v}$ is always tangent to the path, with a tangent vector $\hat{e}$ and magnitude (speed) $v$ $$ \vec{v} = v \,\hat{e} $$
An acceleration vector $\vec{a}$ has both a tangent component with magnitude $\dot{v}$ and a normal component along $\hat{n}$ with magnitude $v^2/r$ $$ \vec{a} = \dot{v} \hat{e} + \frac{v^2}{r} \hat{n} $$ where $r$ is the radius of curvature of the path.
See the linked Wikipedia article on the calculation of the radius of curvature from the path definition:
$$ r = \frac{ \left(1+ \left( \frac{{\rm d}y}{{\rm d}x} \right)^2 \right)^{3/2} }{ \frac{{\rm d}^2 y}{{\rm d}x^2} } $$
$$ r = \frac{ \left( \dot{x}^2 + \dot{y}^2 \right)^{3/2} }{ \dot{y} \ddot{x} - \ddot{y} \dot{x} } $$
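As an illustration (assuming the parametrization $x(t)=t$ of the question's parabola $y=x^2$, and using the common sign convention $\dot{x}\ddot{y}-\dot{y}\ddot{x}$ in the denominator, which may differ in sign from the formula above), one can check that the tangential and normal components recombine to the full acceleration:

```python
import sympy as sp

t = sp.symbols('t', real=True)

# Assumed parametrization x(t) = t of the parabola y = x^2 from the question
x, y = t, t**2
xd, yd = sp.diff(x, t), sp.diff(y, t)
xdd, ydd = sp.diff(xd, t), sp.diff(yd, t)

speed = sp.sqrt(xd**2 + yd**2)          # v
r = speed**3 / (xd*ydd - yd*xdd)        # radius of curvature (sign convention may differ)
a_t = sp.diff(speed, t)                 # tangential component, dv/dt
a_n = speed**2 / r                      # normal component, v^2 / r

# Check: the components recombine to the full acceleration magnitude squared
print(sp.simplify(a_t**2 + a_n**2))     # xdd^2 + ydd^2 = 4 for this parametrization
```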
ja72
Are primary care and continuity of care associated with asthma-related acute outcomes amongst children? A retrospective population-based study
Sarah Cooper1,2,
Elham Rahme2,3,4,
Sze Man Tse5,
Roland Grad1,
Marc Dorais6 &
Patricia Li1,2,4,7
BMC Primary Care volume 23, Article number: 5 (2022)
Having a primary care provider and a continuous relationship may be important for asthma outcomes. In this study, we sought to determine the association between 1) having a usual provider of primary care (UPC) and asthma-related emergency department (ED) visits and hospitalization in Québec children with asthma and 2) UPC continuity of care and asthma outcomes.
Population-based retrospective cohort study using Québec provincial health administrative data, including children 2-16 years old with asthma (N = 39,341). Exposures and outcomes were measured from 2010-2011 and 2012-2013, respectively. Primary exposure was UPC stratified by the main primary care models in Quebec (team-based Family Medicine Groups, family physicians not in Family Medicine Groups, pediatricians, or no assigned UPC). For those with an assigned UPC, the secondary exposure was continuity of care, measured by the UPC Index (high, medium, low). Four multivariate logistic regression models examined associations between exposures and outcomes (ED visits and hospitalizations).
Overall, 17.4% of children had no assigned UPC. Compared to no assigned UPC, having a UPC was associated with decreased asthma-related ED visits (pediatrician Odds Ratio (OR): 0.80, 95% Confidence Interval (CI) [0.73, 0.88]; Family Medicine Groups OR: 0.84, 95% CI [0.75,0.93]; non-Family Medicine Groups OR: 0.92, 95% CI [0.83, 1.02]) and hospital admissions (pediatrician OR: 0.66, 95% CI [0.58, 0.75]; Family Medicine Groups OR: 0.82, 95% CI [0.72, 0.93]; non-Family Medicine Groups OR: 0.76, 95% CI [0.67, 0.87]). Children followed by a pediatrician were more likely to have high continuity of care. Continuity of care was not significantly associated with asthma-related ED visits. Compared to low continuity, medium and high continuity of care decreased asthma-related hospital admissions, but none of these associations were significant.
Having a UPC was associated with reduced asthma-related ED visits and hospital admissions. However, continuity of care was not significantly associated with outcomes. The current study provides ongoing evidence for the importance of primary care in children with asthma.
For children with asthma, primary care physicians may play an essential role in delivering evidence-based management, including assessing asthma control, ensuring appropriate use of medications, providing asthma education and action plans, and referring to asthma specialists when needed [1]. Population-based studies in Canada and the United Kingdom have demonstrated that areas with high compared to low supply of, or access to, primary care physicians reduced the risk of emergency department (ED) visits and hospitalizations for children with asthma [2, 3]. However, few large-scale studies have demonstrated the impact of having a usual provider of primary care (UPC) and continuity with this provider. Continuity of care, a core attribute of primary care [4], is defined as a health care service that extends over some time, where there is a timely and effective exchange of health information between a patient and their individual medical professional or within a medical team [5]. Nearly two decades ago, Christakis et al. demonstrated that for a group of children in the United States enrolled in a large health maintenance organization and another group enrolled in Medicaid, increased continuity was associated with decreased acute health services utilization (ED visits, hospitalizations); the risk was further decreased for children with asthma [6, 7].
In Québec, Canada, children who are residents of the province have access to primary care providers through public health insurance in the form of pediatricians, family physicians who belong to team-based Family Medicine Groups (FMG), and family physicians not part of an FMG [8, 9]. FMGs were implemented as part of primary care reforms since 2002 to improve the delivery of primary care services [9]. To date, there is little evidence to support such alternative primary care models that may improve continuity of care through informational and team-based continuity [9, 10]. To provide evidence to support policies for ongoing efforts to improve access to primary care and interventions for continuity of care, we aimed to determine the association between having a UPC and continuity of care with asthma-related acute outcomes care in a population-based cohort of children with asthma living in Québec, Canada. We hypothesized that having a UPC, and high continuity of care amongst those with an assigned UPC, would be associated with fewer asthma-related ED visits and hospitalizations.
We conducted a population-based retrospective cohort study with linked administrative data across outpatient and inpatient health settings from the province of Quebec, Canada, for children aged 2-16 years old, with a diagnosis of asthma from January 1, 2010, to December 31, 2011.
Data sources and characteristics of participants
Québec is Canada's second-largest province in terms of population, with approximately 8.2 million inhabitants [11]. All Québec permanent residents have access to public health insurance, administered by the Régie de l'Assurance Maladie du Québec (RAMQ), covering all essential medical services provided in hospitals or outpatient settings. We used three databases, linked together using an encrypted health number [12]: 1) the Registered Persons Database (encrypted health insurance number, sex, age, and postal code); 2) the Physician Claims Database (records of remunerated services through all clinical settings, i.e., RAMQ billings); and 3) the Hospital Discharge Database (MED-ECHO, all admissions data from the hospitals). Rurality and socioeconomic status were assigned by linking postal codes from the registered person's databases to 2011 Statistics Canada census data.
We used a validated algorithm to identify those children with administratively defined asthma as of December 31, 2011. This definition required at least two physician visits or one hospitalization for asthma in the RAMQ billings during the exposure period of January 1, 2010, and December 31, 2011 [13, 14]. We excluded patients with invalid health insurance numbers.
Primary exposure: usual provider of primary care
We assigned each child to one of the four types of UPC: family physicians within the team-based FMG, family physicians not part of an FMG, pediatrician, or no assigned UPC using the RAMQ physician claims during two-year exposure of January 1, 2010, to December 31, 2011. To assign the UPC, we adapted an algorithm created to identify patient attachment to a family physician amongst adults with RAMQ data, which we have previously used in the pediatric population (see Appendix I) [15, 16]. The algorithm used a hierarchy, in which we first searched for billing codes identifying that a patient was enrolled with a team-based FMG or family physician not part of an FMG, or followed for routine growth and development monitoring by a pediatrician. If these codes were not available, the patient was assigned to the usual provider of care who billed the most primary care visits (with a minimum of 2 visits). The remainder of patients who did not satisfy the aforementioned criteria had no UPC.
Secondary exposure: usual provider of care (UPC) index score
Continuity of care between patients and providers has been previously formulated and categories into the following: interpersonal continuity (the ongoing personal relationship between patient and physician), longitudinal continuity (the accumulation of interactions over a period of time), informational continuity (the availability and exchange of medical and social information over time and between professionals), and management continuity (the effective execution of a care plan through collaboration and coordination of health care teams) [17, 18].
We examined longitudinal continuity in the current study through the use of the UPC Index. The UPC Index was defined as the proportion of a child's medical visits with their assigned UPC [5]. This measure takes on a value of 0 to 1, with values close to 1 suggesting a high continuity of care. The UPC Index was divided into 3 categories a priori (>0-0.4= low, 0.41-0.70 = medium, >0.70 = high) as in previous studies [17,18,19]. The score was assigned to each child by dividing the total number of visits with the child's determined UPC (ni) by the total number of primary care visits with any primary care provider (n) between January 1, 2010, and December 31, 2011, (Eq. 1) [20].
$$\mathrm{UPC\ Index}=\max_{i}\frac{n_i}{n}$$
Equation 1 usual provider of care (UPC) index [20]
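As an illustration of Eq. 1 only (not the study's SAS code, and leaving aside the billing-based UPC assignment algorithm described above), the index can be computed from a long table of primary care visits as follows; the column names are assumptions.

```python
import pandas as pd

def upc_index(visits: pd.DataFrame) -> pd.Series:
    """Proportion of each child's primary care visits made to the most-seen provider."""
    counts = visits.groupby(["child_id", "provider_id"]).size()
    return counts.groupby(level="child_id").max() / counts.groupby(level="child_id").sum()

# Hypothetical example: child A sees provider 1 three times and provider 2 once
visits = pd.DataFrame({"child_id": ["A", "A", "A", "A", "B", "B"],
                       "provider_id": [1, 1, 1, 2, 3, 4]})
print(upc_index(visits))   # A: 0.75, B: 0.50
```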
The primary and secondary outcomes were asthma-related ED visits and hospitalizations, respectively, measured in the two-year outcome follow-up period of January 1, 2012, to December 31, 2013, as binary outcomes. ED visits were determined through the identification of physician claims where the establishment code was the ED. Hospital admissions were determined using the MED-ECHO database. Outcomes were determined to be "asthma-related" by using ICD-9 (for ED visits) and ICD-10 (for hospitalization) codes agreed upon by Québec asthma specialists (Appendix 1), identified in the Physician Claims Database and the MED-ECHO databases, respectively [21].
Covariates
The covariates were age, sex, socioeconomic status (SES), rurality, other co-morbidities, and previous health care utilization. Children were categorized into the following age groups: 2-5 years old, 6-9 years old, 10-12 years old, and 13-16 years old. SES was determined using the Material and Social Deprivation Index, which is based on census data [22]. The study population was divided into five quintiles (Q1 to Q5, least deprived to most deprived). Rurality was defined using the Census Metropolitan and Census Agglomeration Influenced Zone developed by Statistics Canada and divided into 3 categories: urban (population>100,000), small cities (population 10,000- 100,000), and rural (population <10,000) [23]. To account for other co-morbidities, specifically prevalent chronic diseases associated with higher healthcare utilization (i.e., diabetes and children with medical complexity), children were classified as having asthma only or asthma and other chronic diseases [24]. Previous health care utilization was measured by previous all-cause ED visits, all-cause hospital admissions, and asthma specialist (either a pediatrician who billed for an asthma visit in a hospital outpatient clinic and/or a pulmonologist) visits between 2010-2011.
Medians and interquartile ranges (IQR) and the counts and percentages were reported to summarize the distribution of continuous and categorical variables, respectively.
To test the association between the exposures and the outcomes, multivariable logistic regression models were used, and results were reported as odds ratios (OR) with 95% confidence intervals (CI). The models were adjusted with all the covariates described in the preceding section. Given that we anticipated <5% of missing data based on previous work with similar Quebec health administrative data, we planned to exclude missing values from the analyses [16]. All statistical analyses were completed in SAS software, Version 9.4 (SAS Institute, Inc., Cary, NC, USA).
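The analyses were run in SAS; purely for illustration, a roughly equivalent adjusted logistic model could be fit in Python with statsmodels as sketched below, using synthetic data and a simplified covariate set (all column names and categories are assumptions, not the study's variables).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data with a simplified covariate set (illustrative only)
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "ed_visit": rng.integers(0, 2, n),   # asthma-related ED visit (0/1)
    "upc_group": rng.choice(["none", "fmg", "non_fmg", "pediatrician"], n),
    "age_group": rng.choice(["2-5", "6-9", "10-12", "13-16"], n),
    "sex": rng.choice(["F", "M"], n),
    "prior_ed": rng.integers(0, 2, n),
})

model = smf.logit(
    "ed_visit ~ C(upc_group, Treatment(reference='none')) + C(age_group) + sex + prior_ed",
    data=df,
).fit(disp=0)

# Odds ratios with 95% confidence intervals
ors = np.exp(model.conf_int())
ors["OR"] = np.exp(model.params)
print(ors.round(2))   # all close to 1 here because the data are random
```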
Sensitivity analyses
To assess the robustness of our findings with the secondary exposure (UPC Index), we conducted a sensitivity analysis using a different measure of continuity of care, the Bice-Boxerman (COC) Index. The COC index measures the dispersion of care (numerator in Eq. 2) over one or several primary care providers (denominator in Eq. 2) [25]. This measure takes on a value of 0 to 1, with values close to 1, suggesting a high continuity of care. We constructed the COC Index using only primary care visits and the following Eq. 2 [25].
$$COC\ Index=\frac{\left(\sum_{i=1}^p{n}_i^2\right)-n}{n\left(n-1\right)}$$
(where n is the total number of primary care visits, ni is the number of visits with primary care physician i, and p is the total number of primary care physicians visited [25])
Equation 2 bice-boxerman continuity of care (COC) Index [25]
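A minimal sketch of Eq. 2 from per-provider visit counts (illustrative only, not the study's code):

```python
import numpy as np

def coc_index(visit_counts):
    """Bice-Boxerman COC Index from visit counts n_i, one per primary care provider seen."""
    n_i = np.asarray(visit_counts, dtype=float)
    n = n_i.sum()
    if n <= 1:
        return np.nan          # undefined for a single visit
    return (np.sum(n_i**2) - n) / (n * (n - 1))

print(coc_index([6]))          # 1.0 -> all visits with one provider
print(coc_index([3, 3]))       # 0.4 -> care split evenly between two providers
print(coc_index([2, 2, 2]))    # 0.2 -> more dispersed care, lower continuity
```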
In Quebec, the use of health administrative data for research projects is highly regulated and monitored, and must be approved by the Commission d'accès à l'information and a research ethics board. The health administrative data is anonymized, and extensive measures are in place to ensure confidentiality and ethical conduct of research. Thus, informed consent was not required. All methods were carried out in accordance with relevant guidelines and regulations. In the current study, we obtained the required approval by the Commission d'accès à l'information and the REB at the McGill University Health Centre.
We identified 39,341 children with administratively defined asthma (Table 1). As of January 1, 2012, 17.4% of children diagnosed with asthma had no assigned UPC. The largest share of the patient population was followed by a pediatrician (34.9%), followed by team-based FMGs (24.1%) and non-FMG family physicians (23.6%). The median [IQR] number of visits made to the UPC was 4 [3, 7], 3 [2, 4], and 3 [1, 5] for pediatricians, team-based FMGs, and non-FMGs, respectively. Compared with children in the other primary care models, children with no assigned UPC were more likely to be in the older age categories, to come from the most deprived socioeconomic quintile, and to live in non-urban settings.
Table 1 Baseline characteristics of cohort by primary care model
Table 2 shows the crude proportions and adjusted odds ratios of asthma-related ED visits and hospital admissions for the main exposure, UPC, and the covariates. A total of 10.3% and 6.1% of the cohort had asthma-related ED visits and hospital admissions, respectively. Children who had no UPC had the highest percentages of asthma-related ED visits (12.1%) and hospital admissions (8.2%). Overall, children who had any type of primary care physician, compared to those without, had decreased odds of an asthma-related ED visit (pediatrician OR: 0.80, 95% CI [0.73, 0.88]; team-based FMG OR: 0.84, 95% CI [0.75, 0.93]; non-FMG OR: 0.92, 95% CI [0.83, 1.02]) or hospital admission (pediatrician OR: 0.66, 95% CI [0.58, 0.75]; team-based FMG OR: 0.82, 95% CI [0.72, 0.93]; non-FMG OR: 0.76, 95% CI [0.67, 0.87]).
Table 2 Crude proportions and adjusted odds ratios of asthma-related acute outcomes for the main exposure, UPC, and covariates
Among children who had a UPC (82.6%), 37.4% had a low UPC Index score (Table 3). Children who had low continuity of care had a median of 2 (IQR: [2, 3]) visits with their UPC and 10 (IQR: [6, 14]) primary care visits in total over the two-year exposure period. In contrast, children who had high continuity of care had a median of 5 (IQR: [3, 8]) visits with their UPC and a median of 6 (IQR: [4, 10]) primary care visits in total. Children with high continuity of care were most likely to be followed by a pediatrician (59.4%), whereas those with low continuity were most likely to be followed by a family physician in a team-based FMG (40.5%). Children who had high continuity of care with their UPC, in comparison to low, were also more likely to come from the most affluent neighborhoods, to live in an urban setting, and to have no prior ED visits or hospital admissions.
Table 3 Baseline characteristics of cohort by UPC index
Table 4 shows the crude proportions and adjusted ORs of asthma-related ED visits and hospital admissions for the secondary exposure (the UPC Index) among children who had a UPC. The low UPC Index group had the highest percentage of children who experienced an asthma-related ED visit (12.9%) and hospital admission (7.8%). There were no significant differences in the adjusted analyses for ED visits. Compared to low continuity, both medium and high continuity of care were associated with decreased odds of hospitalization, but the associations were not statistically significant.
Table 4 Crude proportions and adjusted odds ratios of asthma-related acute outcomes for the secondary exposure, UPC Index, and covariates
In the sensitivity analyses using the COC Index, the results were similar when using the UPC Index, but some associations were significant (see Additional files: Appendix 2). Compared to those who had a low COC Index score, children who had a high COC Index score had an increased odds of having an asthma-related ED visit (high OR: 1.10, 95% CI [1.01, 1.21]). Children who had a medium COC Index score had a decreased odds of having an asthma-related hospital admission compared to those who had a low COC Index score (medium OR: 0.84, 95% CI [0.72, 0.98]).
Using a population-based cohort of children with asthma in Quebec (N = 39,341), we demonstrated that 17.4% did not have an assigned UPC, and for those who had an assigned UPC, 38.1% had low continuity of care. Having a UPC compared to having no assigned UPC was associated with reduced asthma-related ED visits and hospital admissions. Children with the lowest continuity of care (UPC Index) compared to medium or high continuity of care experienced higher rates of asthma-related ED visit (12.9% vs. 9.4% or 7.0%, respectively) and hospital admission (7.8% vs. 5.2 or 3.6%, respectively). However, in the adjusted analyses, the associations between continuity of care and outcomes were not significant.
Our findings are in line with several studies conducted in the general adult population. These studies have shown that having a regular source of care compared to none was associated with decreased odds of an ED visit [7, 26,27,28,29,30,31]. In a telephone survey of 8 502 Ontario residents 16 years and older, among those with a chronic disease, having a regular family physician was associated with a decreased likelihood of ED use (OR=0.47, p = 0.01) [32]. Glazier et al. [33] also found that patients from the general population with at least one chronic condition and without a family physician were 1.22 times more likely to have an ED visit than those who had a regular physician. A study of Medicaid-insured children in the United States also demonstrated that increased preventive asthma visits and acute asthma care by primary care pediatricians was associated with decreased ED visits and hospitalizations, supporting the role of regular assessment and monitoring by primary care [34, 35]. In the current study, the increased ED visits by children without a UPC may have been explained by the use of the ED by these children for drug renewals or treatment of minor asthma exacerbations that could otherwise have been managed in primary care [36, 37].
In the Canadian setting where access to health care is universal, we observed socioeconomic inequalities. Compared to other primary care models, children with no UPC were more likely to come from the most deprived socioeconomic quintile. Further, children from the most compared to the least deprived quintile were more likely to have ED visits (OR: 1.34, 95% CI [1.21, 1.49]) and hospitalizations (OR: 1.12, 95% CI [0.98, 1.29]). A recent scoping review mapped out the multiple structural and social determinants of health related to asthma that are associated with poor outcomes, such as access to healthcare, medications, education, and housing [38]. Reducing disparities in asthma outcomes requires interventions that can, at least in part, effectively address these interconnected determinants. For example, previous studies have evaluated community health workers who provided psychosocial and educational support, care coordination, home environment assessment, and remediation. These interventions were reported to be cost-effective, as well as reduce ED visits, hospitalizations, patient missed school days, and parent missed workdays [39,40,41].
Children whose assigned UPC was a pediatrician, compared to other models, had decreased odds of having asthma-related ED visits and hospital admissions. Possible explanations for these findings include increased availability of walk-in clinics to prevent ED visits or better adherence to evidence-based treatments to prevent exacerbations amongst pediatricians. However, for the latter hypothesis, a survey conducted in Quebec around the same time as the current study (2013-2014) found that pediatricians and family physicians did not differ in their approach to prescribing long-term controller medication for patients with persistent asthma [42]. In the current study, children assigned to a pediatrician were also more likely to have high continuity of care. Clinic-related factors have been shown to predict higher continuity of care (as reported by patients) in primary care practices in Ontario, Canada [43], including having more than 24 hours on call per week for physicians, having a smaller practice, having fewer nurses, and being closed on weekends (so patients could not see whichever family physician was covering the clinic on the weekend, thus decreasing continuity with their primary provider) [43].
Although the associations were not significant, high continuity of care with a UPC was associated with increased odds of having an ED visit and decreased odds of having a hospital admission. Prior studies examining these associations have produced mixed results and had some limitations, which the current study attempted to address [6, 7, 19, 27, 44]. These limitations included a focus on a specific population (such as Medicaid recipients or US-based private medical insurance cooperative) [6, 7, 27, 44], a cross-sectional design [27], or a lack of pediatric focus [19, 27]. Cree et al. [19], which was the only study conducted in Canada using administrative data from 2774 children and adults with asthma limited to one health region in Alberta, found that high continuity of care was associated with decreased risk of an ED visit (OR= 0.24, 95% CI [0.19-0.29]) and a decreased risk of the number of hospitalizations (RR=0.69, 95% CI [0.54-0.89]). In the current study, the increased odds of ED visits among those with high continuity of care may signal issues around timely access to the UPC during an asthma exacerbation. Hospitalizations generally represent a more severe asthma exacerbation. Higher continuity of care with a UPC may have played a role in better controlling the disease to prevent a more severe asthma presentation.
Our study had some limitations. Firstly, although we adjusted for multiple variables, there may have been residual confounders not captured in our population-based health administrative database, such as adherence to prescribed medication, asthma phenotype (i.e. specific clinical, biological, physiological characteristics), and physician characteristics. Secondly, we adjusted for previous ED visits as a proxy for clinical factors that we cannot measure in the health administrative data, such as children who have more severe asthma phenotypes that required ED visits. However, we may have overadjusted our regression models by including previous ED visits as a potential confounder. The latter would have occurred if a given UPC group, being a model with less accessible primary care, resulted in ED visits, both prior to and during the outcome assessment periods. In this instance, adjusting for previous ED visits could have absorbed some of the effect of the UPC exposure; instead of the effect being attributed to the UPC exposure some of it would be attributed to prior ED visits. Therefore, differences between the UPC groups may be more pronounced than reported by our findings. Thirdly, although the UPC Index and the COC Index are among the most used administrative measures of continuity in primary care research, the UPC index, only captures one aspect of continuity of care, longitudinal continuity. It does not consider other domains of continuity of care, such as management or interpersonal continuity [5]. We attempted to address the former through our sensitivity analysis using the COC Index, which attempts to measure management continuity, i.e., the effective collaboration and coordination of health care teams [20]. Some studies have demonstrated an association between interpersonal continuity and improved preventive care and reduced hospitalizations [45]. The UPC Index used in our analyses measures longitudinal continuity and is not a direct measure of interpersonal continuity, although concepts may overlap; repeated interactions (longitudinal continuity) may lead to a therapeutic relationship (interpersonal continuity), but it is not guaranteed that seeing the same doctor equates to a good patient-doctor relationship [46], or to better outcomes. Lastly, no matter the quality of primary care services received, especially in young populations, some acute care utilization is unavoidable, and administrative data did not allow us to differentiate these visits from those that could be avoided by timely and effective primary care.
In a universal health care system, the current study revealed the importance of access, and potentially continuity of care, with a usual provider of care for reducing asthma-related ED visits and hospital admissions.
The datasets generated during the current study are not publicly available due privacy laws by the Commission d'accès à l'information of Quebec but analyzed data may be available from the corresponding author on reasonable request.
FMG:
Family Medicine Groups
Usual Provider of Care
RAMQ:
Régie de l'Assurance Maladie du Québec
COC:
Bice-Boxerman Continuity of Care Index
CMC:
Children with Medical Complexity
Socioeconomic Quintile
Cloutier MM, Hall CB, Wakefield DB, Bailit H. Use of asthma guidelines by primary care providers to reduce hospitalizations and emergency department visits in poor, minority, urban children. J Pediatr. 2005;146(5):591–7.
Guttmann A, Shipman SA, Lam K, Goodman DC, Stukel TA. Primary care physician supply and children's health care use, access, and outcomes: findings from Canada. Pediatrics. 2010:peds.2009-821.
Cecil E, Bottle A, Cowling TE, Majeed A, Wolfe I, Saxena S. Primary care access, emergency department visits, and unplanned short hospitalizations in the UK. Pediatrics. 2016;137(2):e20151492.
Gulliford M, Naithani S, Morgan M. What is' continuity of care'? J Health Serv Res Policy. 2006;11(4):248–50.
Haggerty JL, Reid RJ, Freeman GK, Starfield BH, Adair CE, McKendry R. Continuity of care: a multidisciplinary review. BMJ. 2003;327(7425):1219–21.
Christakis DA, Mell L, Koepsell TD, Zimmerman FJ, Connell FA. Association of lower continuity of care with greater risk of emergency department use and hospitalization in children. Pediatrics. 2001;107(3):524–9.
Christakis DA, Wright JA, Koepsell TD, Emerson S, Connell FA. Is greater continuity of care associated with less emergency department utilization? Pediatrics. 1999;103(4):738–42.
Guttmann A, Gandhi, S., Hanvey, Li, P., Barwick, M., Cohen, E., Glazer, S., Reisman, J. & Brownell, M. Primary health care services for children and youth in Canada: access, quality and structure 2017 [Available from: https://cichprofile.ca/module/3/].
Strumpf E, Ammi M, Diop M, Fiset-Laniel J, Tousignant P. The impact of team-based primary care on health care services utilization and costs: Quebec's family medicine groups. J Health Econ. 2017;55:76–94.
Starfield B, Horder J. Interpersonal continuity: old and new perspectives. Br J Gen Pract. 2007;57(540):527–9.
Canada S. Quebec [Province] and Canada [Country] (table). Census Profile. In: Census, editor. Ottawa: Statistics Canada Catalogue; 2017.
Rochette L, Émond V. Chronic-disease surveillance in Quebec using administrative file linkage. 2014 International Methodology Symposium Beyond traditional survey taking: adapting to a changing world; 2014.
To T, Dell S, Dick PT, Cicutto L, Harris JK, MacLusky IB, et al. Case verification of children with asthma in Ontario. Pediatr Allergy Immunol. 2006;17(1):69–76.
Ouimet M-J, Pineault R, Prud'homme A, Provost S, Fournier M, Levesque J-F. The impact of primary healthcare reform on equity of utilization of services in the province of Quebec: a 2003–2010 follow-up. Int J Equity Health. 2015;14(1):139.
Provost S, Perez J, Pineault R, Borges Da Silva R, Tousignant P. An algorithm using administrative data to identify patient attachment to a family physician. Int J Family Med. 2015;2015:967230.
Nakhla M, Rahme E, Simard M, Larocque I, Legault L, Li P. Risk of ketoacidosis in children at the time of diabetes mellitus diagnosis by primary caregiver status: a population-based retrospective cohort study. CMAJ. 2018;190(14):E416–e21.
Barker I, Steventon A, Deeny SR. Association between continuity of care in general practice and hospital admissions for ambulatory care sensitive conditions: cross sectional study of routinely collected, person level data. BMJ. 2017;356:j84.
Saultz JW, Albedaiwi W. Interpersonal continuity of care and patient satisfaction: a critical review. Ann Fam Med. 2004;2(5):445–51.
Cree M, Bell N, Johnson D, Carriere K. Increased continuity of care associated with decreased hospital care and emergency department visits for patients with asthma. Dis Manag. 2006;9(1):63–71.
Pollack CE, Hussey PS, Rudin RS, Fox DS, Lai J, Schneider EC. Measuring care continuity: a comparison of claims-based methods. Med Care. 2016;54(5):e30.
Despres F, Ducharme F, Forget A, Tse SM, Kettani F-Z, Blais L. Development and validation of a pharmacoepidemiologic pediatric asthma control index using information from administrative database. A66 THE MANY FACES OF ASTHMA IN CHILDHOOD: American Thoracic Society; 2017. p. A2229-A.
Pampalon R, Raymond G. A deprivation index for health and welfare planning in Quebec. Chronic Dis Can. 2000;21(3):104–13.
Pampalon R, Martinez J, Hamel D. Does living in rural areas make a difference for health in Quebec? Health Place. 2006;12(4):421–35.
Cohen E, Berry JG, Camacho X, Anderson G, Wodchis W, Guttmann A. Patterns and costs of health care use of children with medical complexity. Pediatrics. 2012:peds. 2012-0175.
Bice TW, Boxerman SB. A quantitative measure of continuity of care. Med Care. 1977;15(4):347–9.
Christakis DA, Feudtner C, Pihoker C, Connell FA. Continuity and quality of care for children with diabetes who are covered by Medicaid. Acad Pediatr. 2001;1(2):99–103.
Gill JM, Mainous AG 3rd, Nsereko M. The effect of continuity of care on emergency department use. Arch Fam Med. 2000;9(4):333–8.
Grumbach K, Keane D, Bindman A. Primary care and public emergency department overcrowding. Am J Public Health. 1993;83(3):372–8.
Haddy RI, Schmaler M, Epting R. Nonemergency emergency room use in patients with and without primary care physicians. J Fam Pract. 1987;24(4):389–92.
Sox CM, Swartz K, Burstin HR, Brennan TA. Insurance or a regular physician: which is the most powerful predictor of health care? Am J Public Health. 1998;88(3):364–70.
Petersen LA, Burstin HR, O'neil AC, Orav EJ, Brennan TA. Nonurgent emergency department visits: the effect of having a regular doctor. Med Care. 1998:1249–55.
Mian O, Pong R. Does better access to FPs decrease the likelihood of emergency department use?: Results from the Primary Care Access Survey. Can Fam Physician. 2012;58(11):e658–e66.
Glazier RH, Moineddin R, Agha MM, Zagorski B, Hall R, Manuel DG, et al. The impact of not having a primary care physician among people with chronic conditions. Toronto: Institute for Clinical Evaluative Sciences; 2008.
Lougheed MD, Lemiere C, Ducharme FM, Licskai C, Dell SD, Rowe BH, et al. Canadian Thoracic Society 2012 guideline update: diagnosis and management of asthma in preschoolers, children and adults. Can Respir J. 2012;19(2):127–64.
Garbutt JM, Yan Y, Strunk RC. Practice variation in management of childhood asthma is associated with outcome differences. J Allergy Clin Immunol Pract. 2016;4(3):474–80.
Lara M, Duan N, Sherbourne C, Halfon N, Leibowitz A, Brook RH. Children's use of emergency departments for asthma: persistent barriers or acute need? J Asthma. 2003;40(3):289–99.
Lawson CC, Carroll K, Gonzalez R, Priolo C, Apter AJ, Rhodes KV. "No other choice": reasons for emergency department utilization among urban adults with acute asthma. Acad Emerg Med. 2014;21(1):1–8.
Sullivan K, Thakur N. Structural and social determinants of health in asthma in developed economies: a scoping review of literature published between 2014 and 2019. Curr Allergy Asthma Rep. 2020;20(2):5.
Bhaumik U, Sommer SJ, Lockridge R, Penzias R, Nethersole S, Woods ER. Community asthma initiative: cost analyses using claims data from a medicaid managed care organization. J Asthma. 2020;57(3):286–94.
Shreeve K, Woods ER, Sommer SJ, Lorenzi M, Monteiro K, Nethersole S, et al. Community health workers in home visits and asthma outcomes. Pediatrics. 2021;147(4).
Woods ER, Bhaumik U, Sommer SJ, Chan E, Tsopelas L, Fleegler EW, et al. Community asthma initiative to improve health outcomes and reduce disparities among children with asthma. MMWR Suppl. 2016;65(1):11–20.
Ducharme FM, Lamontagne AJ, Blais L, Grad R, Lavoie KL, Bacon SL, et al. Enablers of physician prescription of a long-term asthma controller in patients with persistent asthma. Can Respir J. 2016;2016.
Kristjansson E, Hogg W, Dahrouge S, Tuna M, Mayo-Bruinsma L, Gebremichael G. Predictors of relational continuity in primary care: patient, provider and practice factors. BMC Fam Pract. 2013;14:72.
Utidjian LH, Fiks AG, Localio AR, Song L, Ramos MJ, Keren R, et al. Pediatric asthma hospitalizations among urban minority children and the continuity of primary care. J Asthma. 2017;54(10):1051–8.
Saultz JW, Lochner J. Interpersonal continuity of care and care outcomes: a critical review. Ann Fam Med. 2005;3(2):159–66.
Jung HP, Wensing M, Grol R. What makes a good general practitioner: do patients and doctors have different views? Br J Gen Pract. 1997;47(425):805–9.
We thank Hyejee Ohm for her contributions on the development of the UPC algorithm.
All phases of this study were supported by the Canadian Institutes of Health Research (grant ID: 129904) and the Fonds de la Recherche du Québec- Santé.
Department of Family Medicine, McGill University, Montréal, Québec, Canada
Sarah Cooper, Roland Grad & Patricia Li
Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, 5252 Boulevard de Maisonneuve O, Montréal, Québec, H4A 3S5, Canada
Sarah Cooper, Elham Rahme & Patricia Li
Department of Medicine, McGill University, Montréal, Québec, Canada
Elham Rahme
Department of Epidemiology, Biostatistics and Occupational Health, McGill University, Montréal, Québec, Canada
Elham Rahme & Patricia Li
Department of Pediatrics, Université de Montréal, Montréal, Québec, Canada
Sze Man Tse
StatSciences Inc., Notre-Dame-de-l'Île-Perrot, Québec, Canada
Marc Dorais
Department of Pediatrics, McGill University, Montréal, Québec, Canada
Patricia Li
Ms. Cooper conceptualized and designed the study, conducted the analyses, interpreted the data, drafted the initial manuscript, and reviewed and revised the manuscript. Dr. Li obtained the data, conceptualized and designed the study, interpreted the data, and reviewed and revised the manuscript. Dr. Grad, Dr. Tse, and Dr. Rahme interpreted the data and reviewed and revised the manuscript. Marc Dorais, biostatistician, helped in preparing and cleaning the dataset and in guiding the programming in SAS. All authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.
Correspondence to Patricia Li.
The data was obtained with the approval from the Commission d'accès à l'information. In Quebec, the Commission d'accès à l'information (CAI) holds the authority to grant a researcher health administrative data, such as data from the Régie de l'assurance maladie du Québec, for research or statistical purposes without individual consent (article 125, "Loi sur l'accès aux documents des organismes publics et sur la protection des renseignement personnels" [Access to documents of public organizations and on the protection of personal information Act]; article 67 of the "Loi sur l'assurance maladie" [Health Insurance Act]). In the approval process of the CAI, the research study submitted must also be approved by a research ethics board. Therefore, for the current study, approval was obtained from the CAI and the Research Ethics Board of the McGill University Health Centre. The health administrative data is anonymized, and extensive measures are in place to ensure confidentiality and ethical conduct of research. All methods were carried out in accordance with relevant guidelines and regulations.
All authors have no conflicts of interest to disclose.
Cooper, S., Rahme, E., Tse, S.M. et al. Are primary care and continuity of care associated with asthma-related acute outcomes amongst children? A retrospective population-based study. BMC Prim. Care 23, 5 (2022). https://doi.org/10.1186/s12875-021-01605-7
Primary care access | CommonCrawl |
What is the relation between a policy which is the solution to an MDP and a policy like $\epsilon$-greedy?
In the context of reinforcement learning, a policy, $\pi$, is often defined as a function from the space of states, $\mathcal{S}$, to the space of actions, $\mathcal{A}$, that is, $\pi : \mathcal{S} \rightarrow \mathcal{A}$. This function is the "solution" to a problem, which is represented as a Markov decision process (MDP), so we often say that $\pi$ is a solution to the MDP. In general, we want to find the optimal policy $\pi^*$ for each MDP $\mathcal{M}$, that is, for each MDP $\mathcal{M}$, we want to find the policy which would make the agent behave optimally (that is, obtain the highest "cumulative future discounted reward", or, in short, the highest "return").
In RL algorithms, e.g. Q-learning, people often mention "policies" like $\epsilon$-greedy, greedy, soft-max, etc., without ever mentioning whether these policies are solutions to some MDP or not. It seems to me that these are two different types of policies: for example, the "greedy policy" always chooses the action with the highest expected return, no matter which state we are in; similarly for the "$\epsilon$-greedy policy"; on the other hand, a policy which is a solution to an MDP is a map between states and actions.
What is then the relation between a policy which is the solution to a MDP and a policy like $\epsilon$-greedy? Is a policy like $\epsilon$-greedy a solution to any MDP? How can we formalise a policy like $\epsilon$-greedy in a similar way that I formalised a policy which is the solution to a MDP?
I understand that "$\epsilon$-greedy" can be called a policy, because, in fact, in algorithms like Q-learning, they are used to select actions (i.e. they allow the agent to behave), and this is the fundamental definition of a policy.
reinforcement-learning terminology definitions markov-decision-process policy
nbro
for example, the "greedy policy" always chooses the action with the highest expected return, no matter which state we are in
The "no matter which state we are in" there is generally not true; in general, the expected return depends on the state we are in and the action we choose, not just the action.
In general, I wouldn't say that a policy is a mapping from states to actions, but a mapping from states to probability distributions over actions. That would only be equivalent to a mapping from states to actions for deterministic policies, not for stochastic policies.
Assuming that our agent has access to (estimates of) value functions $Q(s, a)$ for state-action pairs, the greedy and $\epsilon$-greedy policies can be described in precisely the same way.
Let $\pi_g (s, a)$ denote the probability assigned to an action $a$ in a state $s$ by the greedy policy. For simplicity, I'll assume there are no ties (otherwise it would in practice be best to randomize uniformly across the actions leading to the highest values). This probability is given by:
$$ \pi_g (s, a) = \begin{cases} 1, & \text{if } a = \arg\max_{a'} Q(s, a') \\ 0, & \text{otherwise} \end{cases} $$
Similarly, $\pi_{\epsilon} (s, a)$ could denote the probability assigned by an $\epsilon$-greedy strategy, with probabilities given by:
$$ \pi_{\epsilon} (s, a) = \begin{cases} (1 - \epsilon) + \frac{\epsilon}{\vert \mathcal{A}(s) \vert}, & \text{if } a = \arg\max_{a'} Q(s, a') \\ \frac{\epsilon}{\vert \mathcal{A}(s) \vert}, & \text{otherwise} \end{cases} $$ where $\vert \mathcal{A}(s) \vert$ denotes the size of the set of legal actions in state $s$.
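To make the two definitions above concrete, here is a minimal Python/NumPy sketch of both policies as action-selection rules over an estimated action-value table. The array name Q (indexed as Q[state, action]), the first-index tie-breaking of np.argmax, and the function names are my own illustration, not something taken from the question or from a specific textbook.
import numpy as np

def greedy_action(Q, s):
    # pi_g(s): put all probability mass on the action with the highest estimated value Q[s, a]
    return int(np.argmax(Q[s]))

def epsilon_greedy_action(Q, s, epsilon=0.1):
    # pi_epsilon(s): with probability epsilon pick uniformly among all actions,
    # otherwise act greedily; this reproduces the probabilities in the formula above
    if np.random.rand() < epsilon:
        return int(np.random.randint(Q.shape[1]))
    return greedy_action(Q, s)

Q = np.zeros((5, 3))  # e.g. 5 states, 3 actions
a = epsilon_greedy_action(Q, s=0, epsilon=0.1)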
Dennis Soemers
By "the 'greedy policy' always chooses the action with the highest expected return, no matter which state we are in", I meant that, given a state where the agent is currently in, then the action that is chosen is always the one with the highest expected return from that state. No? I am more confused now, because the expected return is something associated with a state. – nbro Feb 10 '19 at 18:03
@nbro Sure. Which can still be interpreted as a mapping from states to probability distributions (where every probability distribution happens to have all the probability mass assigned to a single action) – Dennis Soemers Feb 10 '19 at 18:04
So, you're defining policies like the greedy or $\epsilon$-greedy with respect to value functions. Is this always the case? Also, you say that a policy which solves a MDP is a mapping between states and probability distribution over actions. However, AFAIK, the optimal policies are (often) deterministic. No? Can they be considered greedy? For example, the policy learned by Q-learning, which, at the end, will be a mapping from states to actions (or probability over actions?). Is the policy learned using Q-learning greedy? – nbro Feb 10 '19 at 18:16
@nbro Greedy and $\epsilon$-greedy policies will always have to be with respect to some sort of value function yes... but there are different ways to learn policies directly without first learning a value function as an "intermediate" step (see policy gradients). If you start to include adversarial elements in your environment, there may be cases where optimal policies are non-deterministic (think Rock-Paper-Scissors). – Dennis Soemers Feb 10 '19 at 18:46
Technically... I'd say $Q$-learning doesn't learn a policy at all. It learns the value function $Q^{\pi_g}$ corresponding to the greedy policy $\pi_g$ (the value function is the output of the learning algorithm, and given that value function it is trivial to derive the greedy policy with respect to those value). – Dennis Soemers Feb 10 '19 at 18:47
Not the answer you're looking for? Browse other questions tagged reinforcement-learning terminology definitions markov-decision-process policy or ask your own question.
Does eligibility traces and epsilon-greedy do the same task in different ways?
Some RL algorithms (especially policy gradients) initialize with random policies, which often manifests as random jitter on spot for a long time?
Questions about n-step tree backup algorithm
What is the relation between Monte Carlo and model-free algorithms?
What is the difference between return and expected return?
Can someone please help me validate my MDP?
Is the agent aware of a possible different set of actions for each state?
Why does having a fixed policy change a Markov Decision Process (MDP) to a Markov Reward Process (MRP)? | CommonCrawl |
How to explain what code is to my parents?
I am an engineering student in computer science and recently my parents asked me to explain a bit what I do, which is their way of asking "What is coding?".
They have no idea what coding is, what languages are, what lines of code are, etc., and I wanted to explain briefly how it all works. I wanted to explain what a programming language is and how it is used to write algorithms, make computations... To me it seems very logical that the interpreter reads lines of code from top to bottom, understanding the statements of a particular language.
I can't find the right words and nice examples to help them understand globally how it works.
How can I explain this idea to them?
layperson
Shashimee
Thanks for the question. Could you provide your parents' background? Not all parents are ignorant of computer science. dilbert.com/strip/1998-07-15 – Ellen Spertus Jun 20 '17 at 17:33
I feel this is too broad. HTML is code, assembler is also code. Code is just that, a formal language that expresses something. If I were an average C++ programmer, I would describe my job to my grandmother as such: "I write computer programs, that is, I use a specialized formal language - akin, in this respect, to the language of mathematics - to express the behavior I want the machine to exhibit. My understanding of what the correct behaviour should be must be sound in order for this to work." It's all kinds of wrong, technically, but I feel it conveys the idea. – Tobia Tesan Jun 21 '17 at 10:18
Unfortunately, no one can be told what the Matrix is, @Aurora0001, is this you? – Ghanima Jun 21 '17 at 20:50
I wonder... The oldest computers I know of might be the player pianos... – Malady Jun 21 '17 at 22:03
It's not that hard to explain. public static void main(), *ahem*. Lorem ipsum dolor sit amet, consectetur adipiscing elit... – user541686 Jun 22 '17 at 3:53
Coding is like writing a recipe for the computer to follow so that it solves your problem. The computer "reads" each step, and follows it, eventually reaching a solution. Some programs are better than others, just like some recipes are better than others - they are faster, they produce a better result, etc. Programmers aren't really the cooks, though - the computer itself produces the result. The programmers are more like the cookbook authors, producing the recipes (programs) for the computer to follow.
Now, I don't know about you, but I've followed a recipe before, and there have been times where I've been left wondering, "Have I put in 2 or 3 cups of flour?" or having mixed in an extra couple of chocolate chips (quite by accident, of course). A computer doesn't have these problems. It can make your "recipe" much more swiftly and accurately than you can. However, this gives you no excuse for providing the wrong recipe. No amount of accuracy will make brussels sprouts into ice cream, unfortunately.
Lastly, @dckuehn brings up a great point. No analogy is perfect, and this one is no exception (heheh). Whereas, in a recipe, you really try hard not to vary the input so as not to vary the output, most programs take different kinds of input and produce different kinds of output, according to the same rule. Sort of like you can put a bunch of different cookies in the oven to bake them - not the same input, not the same output, but a fairly similar process in between.
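If a tiny bit of code helps make that last point concrete, here is a made-up Python sketch (the function name and the "cookie" strings are my own illustration, not part of the original analogy):
def bake(dough):
    # the same "recipe" (rule) applied to different inputs gives different outputs
    return dough + " cookie"

print(bake("chocolate chip"))  # chocolate chip cookie
print(bake("oatmeal"))         # oatmeal cookie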
Stevoisiak
heather
Very nice analogy. "The programmers are more like the cookbook" :) – Ben I.♦ Jun 20 '17 at 23:06
Nice indeed ! Very simple, yet accurate. Maybe adding a simple real example of code (like do the sum of the first 100 integers as @Cort Ammon suggested) would make them understand better. – Shashimee Jun 21 '17 at 7:09
I think it also helps to explain that computers do very simple things, but do them extremely quickly, and very reliably. Kind of like how a production line can build a car in lots of steps. – Sean Houlihane Jun 21 '17 at 12:45
I think it would be worth explaining that programs differ from recipes in that the result is not always the same. You expect (hope) a recipe to yield the same thing each time. The program will react different to different inputs, similar to a math equation. – dckuehn Jun 21 '17 at 16:52
I like and use the recipe analogy a lot, but I usually add the caveat that a program is a recipe written for an entity that cannot think for itself. A standard recipe will just say something like "add 2 cups of flour". However, there's a lot of assumed knowledge there that a computer program has to include explicitly. Where does the flour come from? What should be used to bring it over to the bowl? What if there's not enough? What if it spills? – chazlarson Jun 22 '17 at 16:48
The best way to explain coding to someone is very dependent on their background. You really need to tailor the story to fit what they understand. That being said, they're parents so...
Coding is a way to give instructions to the computer, telling it what to do. Think about the instructions you leave for a babysitter. They tell the babysitter exactly what needs to happen while they are away. Now, despite some appearances, the computer is not very smart. It doesn't have good common sense. Think of it like the first babysitter they ever hired. Surely they wrote up a massive document with every last little tiny detail about how to take care of you, emergency contacts, emergency emergency contacts (in case the emergency contacts can't be reached), food allergies, medicine allergies, allergic reactions to allergy medications. You name it. They made sure every last detail was covered, so the babysitter had instructions to handle whatever may be encountered while they are away.
Now, computers are fast. Really really fast. They did a billion things while you read this sentence. As far as this computerized babysitter is concerned, it's not like your parents are gone for a night - it's like they're gone for a month.
Now have them imagine the manifest they would have typed up for this brand new babysitter, babysitting you for the first time, for a month. Imagine all of the precise instructions laid out in the best order they can manage so that the babysitter can just check off each instruction, line by line.
That's coding. I have a feeling they appreciate for loops right about now.
I tend to describe code as a contract. Most people know that if you read a contract, it's not in English - it's in "legalese". Legalese is a language that looks mostly like English, but it's full of odd phrases and very specific wording, and the punctuation is different. Contracts are written this way because each phrase has been interpreted by a court to mean something very specific, and so contracts will use the same precise wording so that they will be interpreted by a court in the way the lawyer wants.
Programming languages often look kind of like English too; they use English words, but have different structures and punctuation. This is because a compiler is like a court: it will interpret these specific phrases in a specific way, and each phrase will add a certain piece of behaviour to the program that the compiler produces.
A contract can have loopholes or grey areas. These are places where the wording of the contract is unclear, isn't an accurate representation of what the lawyer or their client wanted, or doesn't cover a certain scenario, which means that what happens is either up to the court, or not governed by the contract. For the lawyer, either of these scenarios is bad. A good lawyer can write you a contract with no loopholes or grey areas, but it can be very difficult, because legalese can be hard to read, the law is very complex, and some scenarios are very detailed.
A program can have loopholes and grey areas too: we call them "bugs". These are places where your code doesn't tell the computer what to do in a particular scenario, or it describes something that actually isn't quite what you wanted. A good programmer can write you a program with no bugs, but it's often very hard to do, because code can be hard to read, computers are very complicated, and the program might be doing a lot of very intricate things.
A great lawyer can write a contract that has no loopholes or grey areas, but is still relatively easy to read, even by someone who isn't a lawyer. This is good for the client, because they can understand their contract, but also for the law firm, because it means that any lawyer can understand the contract and make amendments without missing something subtle and introducing a loophole.
A great software engineer can write a program that has no bugs, but is still relatively uncomplex and easy to read. This is good for the client, because it means their program is easier to review and verify correct, but also for the software company, because it means that any programmer can read the program and make alterations without missing something subtle and introducing a bug.
anaximander
Welcome to CSE! This is an excellent analogy. I sure hope we will be hearing from you again soon. – ItamarG3 Jun 21 '17 at 9:54
And "language lawyer" is a real term! wiki.c2.com/?LanguageLawyer – Baldrickk Jun 21 '17 at 10:32
"A good programmer can write you a program with no bugs" - Maybe, but depending on the complexity it may be impossible to determine that. I find that telling the layperson that it is possible to produce a program with no bugs is giving a flawed impression. Perhaps this would be better: "Good programmers can write you a program with few bugs." Gives a more nuanced impression about bug-checking and implies teamwork. – called2voyage Jun 21 '17 at 14:29
@called2voyage As a professional software engineer, I'm painfully aware of that. All I'm saying is that, for the purposes of an analogy to answer the question "what is coding?", I don't think it's necessary to go into that much detail. Having used this analogy to explain my job to non-technical friends and family on several occasions, I do tend to go on to explain that the average computer program is so complex that it's nigh impossible to have zero bugs in anything beyond the truly trivial, but I'd consider that tangential to the actual answer to the question being asked here. – anaximander Jun 21 '17 at 15:08
@called2voyage A good programmer can write a (complex) program with no bugs. Whether there exist any good programmers remains an open question. – Ray Jun 21 '17 at 23:25
My explanation would be:
The computer isn't some machine that can do a lot of intelligent things. Rather it is very dumb, but can execute instructions carefully, very fast and without getting bored.
For instance: given the task of adding up all numbers from 1 to 1 million, a human probably wouldn't make it past 100 without making some mistake and getting bored with it. A computer happily performs this task in a couple of milliseconds.
The job of the coder is to supply the computer with a detailed set of instructions to perform some task - this is the program or application.
It is not "the computer" that makes an error, usually it is the programmer that didn't forsee some combination of events and thus didn't provide instructions for that.
The instructions are in some computer language, of which there are many, each with it's own strengths and weaknesses.
Of course there is a formula to calculate the sum of consecutive numbers. This strengthens the point: a human might think "there has got to be a simpler way" and figure it out, a computer would never think that. It blindly continues counting (and still finishes sooner than the human with the formula).
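As a small Python illustration of that point (my own sketch, not part of the original answer): the computer happily follows the "dumb" instructions a million times, while the human shortcut is the one-line formula.
# the "dumb but tireless" way: do exactly what you are told, a million times
total = 0
for n in range(1, 1_000_001):
    total += n

# the "clever human" way: the closed-form formula n * (n + 1) / 2
shortcut = 1_000_000 * 1_000_001 // 2

print(total, shortcut)  # both are 500000500000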
Hans Kesting
Welcome to CSE! I like the way you've described bugs here. I hope we hear more from you in the future! – Ben I.♦ Jun 21 '17 at 10:51
Just as an aside. I wouldn't get bored of calculating the sum of 1 to 1 million. It's 1,000,001*500,000 = 500,000,500,000 I would get bored if I actually added them up. – Phil M Jones Jun 21 '17 at 15:14
I recently asked my pupils to sum the numbers 1…100, I gave them 30 seconds. We then looked at how we could do it. We ended up with the formula $(n+1)\times{}n/2$ same as what @PhilMJones did. – ctrl-alt-delor Jul 14 '17 at 16:26
Simple: you let one of the best science teachers of all time explain it to them for you.
Here's a video of Richard Feynman introducing computers to a non-technical audience at some new-age retreat back in the 80s. Starts by explaining how computers work from the inside out, and goes on into heuristics and AI, all in his signature style of great analogies (army of filing clerks, dumber but faster).
As someone who had never come in touch with coding throughout my education, this lecture gave me quite a few Aha! moments and single-handedly encouraged me to start dabbling with programming.
typo
Welcome to CSE! RF is, indeed, amazing. I hope we hear more from you in the future! – Ben I.♦ Jun 22 '17 at 1:56
"What is coding ?"
A set of instructions written in human readable language (at least to developers) that is executed to perform a task or goal.
"what a programming language is"
High level: Just like any language that exists in the world today, it has its own alphabet, syntax and grammar that is for communication.
Technical: First off, I just want to note that some languages are interpreted while some are compiled; as for their differences, I believe that's off-topic here. The idea is that your "code" is tokenized based on the language's alphabet and syntax and then organized into a parse tree. The parse tree is then translated into some intermediate code. Lastly, the compiler translates the intermediate code into machine code that can be executed by the CPU (or, for some compilers, into source code in another language).
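As a deliberately tiny illustration of the first step (tokenizing), here is a Python sketch for a toy language that only understands integers and "+"; it does not correspond to any real compiler:
def tokenize(source):
    # split the raw text of the toy language into tokens a parser could work with
    tokens = []
    for piece in source.split():
        if piece.isdigit():
            tokens.append(("NUMBER", int(piece)))
        elif piece == "+":
            tokens.append(("PLUS", piece))
        else:
            raise SyntaxError("unexpected token: " + piece)
    return tokens

print(tokenize("1 + 2 + 40"))
# [('NUMBER', 1), ('PLUS', '+'), ('NUMBER', 2), ('PLUS', '+'), ('NUMBER', 40)]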
"how it is used"
Since programming languages are made for humans to easily learn and write in, it is like writing a book. We choose alphabets of the language and write meaningful segments that perform tasks whether it is to loop over an array or read some files.
Kaneki
Well, I would personally suggest you show them the first lecture of Harvard's CS50 class, and believe me, they will not leave it without completing all of the lectures. It is one of the best structured courses for any student (well, in this case you can call your parents students :p) irrespective of the person's background.
They have an interactive environment called Scratch in which you can design new, interesting projects. Have a look at some featured projects; they are very good!
ItamarG3
Skand Vishwanath Peri
I don't think his parents want to learn how to code, but rather how coding and programming languages work. – ItamarG3 Jun 20 '17 at 14:40
If they're willing to sit through that first lesson (and if they understand English well enough), they will certainly come out with a pretty good idea of what coding is. David Malan is absolutely electrifying! – Ben I.♦ Jun 20 '17 at 15:01
This is close to a link-only answer, which is frowned upon on Stack Exchange sites. Could you summarize what is discussed in the CS50 lecture, and what makes it compelling? – 200_success Jun 21 '17 at 0:51
I've been a professional Software Developer for about 30 years, and a hobby programmer before that going back to the 70's. So I've been asked this a lot, and have had time to try lots of approaches.
The main issue is that you are talking about a different universe than most everyone else's experience. In a social situation, you have one or two sentences before the eyes start to glaze over, so you can't really go into detail. So after decades of experimentation, the explanation I now give for what I do is:
I tell the computer what to do. It flips me off (or insert offensive invective here to taste) and merrily does something else. Then I spend the rest of the day trying to figure out why.
I once gave this explanation to a room full of public school teachers, and got the rather amusing response:
That's the same as teaching!
I think they are actually onto something there.
Note that Brooks' Mythical Man Month* devotes the last half of its first chapter to roughly this question (the sections titled The Joys of the Craft and The Woes of the Craft). I think what I wrote above is actually about the best possible tl;dr of Brooks' text. But if someone is really interested in reading 18 paragraphs about it, send them the link above.
* - Generally considered The Bible of Software Engineering. As Brooks himself quipped, this is because "everybody quotes it, some people read it, and a few people go by it".
T.E.D.
That's a cute joke. Welcome to CSE! – Ben I.♦ Jun 21 '17 at 12:45
@BenI. - Thank you. Expanded it to make it more of a proper answer though (perhaps removing a bit of the punch. Sorry.) – T.E.D. Jun 21 '17 at 13:26
Thank you for a hard laugh! – Bennett Brown Nov 29 '19 at 3:31
In addition to the excellent definitions already provided, consider taking a slightly different approach or at least augmenting it a bit. This comes back to good pedagogy: I wouldn't give students definitions without context. Moreover, I would support whatever concept I want to get across, no matter how broad or narrow, with specific examples and implementations.
To explain programming to someone who has no idea what it means, it is necessary to have short blocks of code at the ready. Explaining it in the abstract probably won't be too successful. Relatively straightforward ideas like printing "hello, world" or the even numbers from 1-100 or the sum of the integers 1-50 would show the bare minimum of logical and computational ability. Maybe even something relatively intuitive like bubble sort or linear search. From there, compiling the program - assuming a language like C or Java - would at least allow you to explain the notion of translation.
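For instance, minimal Python versions of the examples mentioned above might look like this (my own sketches, not taken from any particular curriculum):
print("hello, world")

# the even numbers from 1-100
for n in range(2, 101, 2):
    print(n)

# the sum of the integers 1-50
total = 0
for n in range(1, 51):
    total += n
print(total)  # prints 1275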
I like to use the metaphor that computers "speak" binary, but I speak C (or Python or Java or...). I need to translate what I want to accomplish into the language that the computer can understand. You can explain compilers/interpreters as a kind of translator that understands both computer-speak and human-speak and knows how to take your thoughts and turn them into "words" that the computer can understand. (You can optionally include the idea of assembly and instruction sets, but again, that's probably beyond the scope of this conversation.)
Peter♦
I think I would explain it like this:
What is a programming language?
I'd say, it is a set of instructions1 (a "language", the words are the operators, sentences are expressions and so on) to tell a computer what it should do. There are instructions to do basic arithmetic or calculations, instructions you can use to interact with the user, and instructions that can control the program's flow, for example by repeating other instructions or branching, so it does different things when confronted with different inputs.
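A very small, made-up Python example of those three kinds of instructions (arithmetic, interaction with the user, and control of the program's flow); the "dog years" scenario is purely illustrative:
age = int(input("How old are you? "))        # interact with the user
dog_years = age * 7                          # basic arithmetic
if dog_years > 100:                          # branching
    print("That is a lot of dog years!")
for i in range(3):                           # repeating other instructions
    print("You are about", dog_years, "in dog years.")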
How is it used to make computations?
As with the human languages, there are multiple programming languages. To develop in a certain language you usually write (text) files containing the instructions (you could now mention how this looks in a programming language you like). These text files are read and executed by the computer. Depending on whether the language is low- or high-level, the computer might need additional software (the interpreter) to understand the language.
Often, developers use so-called IDEs which make development easier as they offer helpful features (for example, automatically creating instruction(s) you normally use a lot or immediately pointing out errors).
This is just the process of making a program. The combined instructions that eventually make up your program are often called "code" by developers, so they use the verb "(to) code" to describe the process of writing down instructions to solve a particular task.
1 As Brian H. pointed out in his comment, a more precise definition of a programming language would be "a set of syntactical and grammatical rules bundled with a standard library that provides instructions to tell the computer what to do". However, as you asked for an explanation in layman's terms, I wouldn't recommend this definition to explain what a programming language is.
TuringTux
i don't agree with: What is a programming language?: it is a set of instructions. The set of instructions is called a program, and the language is a set of syntactical and grammatical rules to make sure the computer understands these instructions. – Brian H. Jun 21 '17 at 14:26
@BrianH I'd agree that a program is a set of instructions. However, I do think the language is also a set of instructions as it defines them - I could only think of moving the instructions to the construct of the "standard library": "A language is a set of syntactical and grammatical rules combined with a standard library that provides several instructions that can be used following in a program". Would this be better in your eyes? – TuringTux Jun 21 '17 at 14:53
Sounds about right i think. – Brian H. Jun 21 '17 at 14:57
@BrianH. I've edited this into my answer as a footnote, I hope you're okay with it :) – TuringTux Jun 21 '17 at 17:51
A programmer writes the instructions for a computer to do its job. A program is a set of instructions for it to follow to accomplish or aid in performing a task. Sometimes the instructions can be rather abstract things like skipping certain instructions or doing others over and over. The programmer assembles instructions into smaller groups that accomplish a part of the task, called a subroutine (or function, method, etc). These are used as building blocks from which they construct the larger program.
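A small illustrative sketch of such a building block in Python (the function name and task are invented for the example):
def greet(name):
    # a subroutine: a small group of instructions that accomplishes one part of the task
    print("Hello, " + name + "!")

# the larger program is assembled from such blocks
for person in ["Mom", "Dad"]:
    greet(person)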
Or, in one sentence: If computers are magic then coding is writing the spells.
bükWyrm
Welcome to CSE! I hope we hear more from you in the future. – Ben I.♦ Jun 21 '17 at 21:37
No no, computers are magic. I should know, I'm an actual professional friggin' wizard. I got into illusion magic recently, too. (In all seriousness, this comment is being serious). – Draco18s no longer trusts SE Jun 23 '17 at 16:58
I'm a programmer, and when people ask me to explain what I do, I usually say that computers talk in bits, right? Zeros and ones. And it's hard for a human to understand that, so there are these tools called programming languages that are able to translate a more human-like text into zeros and ones.
The problem is that programming languages have lots of rules, and each programming language has its own structure and rules (just like normal languages!). Our job as programmers is to know these rules and play with them in order to get what we want.
Example time!
Let's say that we choose as our tool the Python programming language (not to be confused with the snake)
And our goal is to print "Hi" on the screen 10 times. We as smart programmers know that in python if we say:
print("Hi")
It will show "Hi" on the screen, but it doesn't solve our problem, right? We want to see it 10 times in the screen, so we as VERY smart programmers tell python:
And our goal is accomplished!
But it's quite tiring to copy and paste that much code, so we as super programmers who know python's syntax and rules do:
for x in range(10):  # repeat the instruction 10 times
    print("Hi")
And super saiyan programmers will do:
print("Hi\n" * 10)
TLDR: Programmers are human beings trying to talk to computers through very strict and sometimes mean translators. But it's fun though!
Safirah
Btw, I know in my answer I'm mixing a lot of concepts, but it was on the hope of not overcomplicating my answer – Safirah Jun 21 '17 at 11:47
Code is like the rules of a game.
For instance Hangman. Explaining how the game works step by step is almost writing a computer program in pseudocode.
1 - Player 1 randomly picks a word out of a list.
2 - He writes down a number of dashes corresponding to the number of letters in the word.
3 - He askes player 2 to guess a letter.
4 - Does the letter occur in the word or not?
5 - Etc.
The purpose is not to really write the program, although 'we programmers' can probably picture it by now, certainly the part where the computer is the one thinking of a word for the other player to guess.
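For anyone who does want to see how the first few steps of that description might look once written down for the computer, here is a rough Python sketch (the word list and the details are invented for illustration):
import random

word_list = ["apple", "banana", "cherry"]   # step 1: a list to pick from
word = random.choice(word_list)             # step 1: randomly pick a word
print("-" * len(word))                      # step 2: one dash per letter in the word
guess = input("Guess a letter: ")           # step 3: ask player 2 to guess a letter
print(guess in word)                        # step 4: does the letter occur in the word or not?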
Draakhond
P.S. I am not great at computer science, but i am an educator! My answer is a 'lesson one' intended for a layperson. – Draakhond Jun 21 '17 at 14:44
Here is a fun, interactive, and impressively comprehensive-yet-easily-digestible article that covers all the basics of programming/CS for laypersons. It even explains many of the professional aspects of what a being employed as a programmer entails beyond just the theoretical and technical aspects. It's where I usually send people with a similarly un-technical background who display a curiosity about programming.
https://www.bloomberg.com/graphics/2015-paul-ford-what-is-code/
Mitochondrion
Nice article. Welcome to CSE! I hope we hear more from you in the future. – Ben I.♦ Jun 21 '17 at 15:05
Coding is like writing an instruction or assembly manual for a product. Tell your parents it's like them writing a manual for assembling something like a bicycle.
Ben I.♦
Steveo250k
Welcome to CSE! This answer is a little short on details, would you mind fleshing it out a little? Otherwise, it's a pretty nice analogy. I hope we hear more from you in the future. – Ben I.♦ Jun 22 '17 at 17:30
Tell them that you are making a computer work as per instructions, and the instructions we give are sets of rules (syntax) with which one is able to produce many logical structures. Then explain that an engineer like you tries to think about the best possible logic to instruct the computer to do a specific task (such as "printing multiplication tables from 1 to 100 in a few seconds").
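For example, the "multiplication tables" task could be expressed to the computer roughly like this (an illustrative sketch; the exact logic chosen is up to the engineer):
for n in range(1, 101):        # tables from 1 to 100
    for m in range(1, 11):     # each table up to x 10
        print(n, "x", m, "=", n * m)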
Shekhar Reddy
Welcome to CSE!! – Ben I.♦ Jun 21 '17 at 12:43
Coding/Programming is interacting with a computer or device with a language the computer can translate and actually use, because computers and other devices don't understand human languages.
Sometimes those interactions are things like asking questions, or taking notes to keep track of things, or looking up old information wherever you put it.
Programs are big collections of those designed interactions to achieve a specific goal, like, make a spreadsheet, send an email, or draw an object in three dimensions on the screen -- stuff like that.
Programming languages are just languages that humans code with so computers know how to translate and use their designed interactions. Those languages are usually a mix of human words and symbols to help with the creation of those instructions for those interactions.
The computer/device only cares about the instructions, not the language used to make those instructions, so it takes those translated, coded interactions as instructions in a language that the computer knows how to use but that we humans can't really read without the help of a translator.
Coders/Programmers are basically the middle-men to help with communication and action between machines and humans.
kayleeFrye_onDeck
When people asked me what I did, back when I was a programmer/software engineer, I would tell them one of these.
I am an author.
I write instruction manuals for computers. No not manuals on how to use computers, manuals for computers: The computers read the manuals, so that they know what to do.
I write poetry for computers.
I teach computers, how to be medical pumps. (My last job was programming medical pumps).
I am a wizard, I recite magic spells at computers, and they do what I want.
ctrl-alt-delor
Not the answer you're looking for? Browse other questions tagged layperson or ask your own question.
How can I explain the difference between CS and coding to a layperson?
How do I explain blockchain using an analogy?
What are some non-CS concepts that can be defined using BNF notation? | CommonCrawl |
Multi-bump solutions for a class of quasilinear equations on $R$
CPAA Home
Spectral analysis and stabilization of a chain of serially connected Euler-Bernoulli beams and strings
March 2012, 11(2): 809-828. doi: 10.3934/cpaa.2012.11.809
On the structure of the global attractor for non-autonomous dynamical systems with weak convergence
Tomás Caraballo 1, and David Cheban 2,
Dpto. Ecuaciones Diferenciales y Análisis Numérico, Facultad de Matemáticas, Universidad de Sevilla, Campus Reina Mercedes, Apdo. de Correos 1160, 41080 Sevilla
State University of Moldova, Department of Mathematics and Informatics, A. Mateevich Street 60, MD–2009 Chişinău
Received January 2011 Revised January 2011 Published October 2011
The aim of this paper is to describe the structure of global attractors for non-autonomous dynamical systems with recurrent coefficients (with both continuous and discrete time). We consider a special class of this type of systems (the so--called weak convergent systems). It is shown that, for weak convergent systems, the answer to Seifert's question (Does an almost periodic dissipative equation possess an almost periodic solution?) is affirmative, although, in general, even for scalar equations, the response is negative. We study this problem in the framework of general non-autonomous dynamical systems (cocycles). We apply the general results obtained in our paper to the study of almost periodic (almost automorphic, recurrent, pseudo recurrent) and asymptotically almost periodic (asymptotically almost automorphic, asymptotically recurrent, asymptotically pseudo recurrent) solutions of different classes of differential equations.
Keywords: dissipative systems, convergent systems, skew-product systems, almost periodic, global attractor, non-autonomous dynamical systems, cocycles, almost automorphic, quasi-periodic, asymptotically almost periodic solutions, recurrent solutions.
Mathematics Subject Classification: Primary: 34C11, 34C27, 34D05, 34D23, 34D45, 34K14, 37B20, 37B55, 37C55, 7C60, 37C65, 37C70, 37C7.
Citation: Tomás Caraballo, David Cheban. On the structure of the global attractor for non-autonomous dynamical systems with weak convergence. Communications on Pure & Applied Analysis, 2012, 11 (2) : 809-828. doi: 10.3934/cpaa.2012.11.809
Bixiang Wang. Stochastic bifurcation of pathwise random almost periodic and almost automorphic solutions for random dynamical systems. Discrete & Continuous Dynamical Systems - A, 2015, 35 (8) : 3745-3769. doi: 10.3934/dcds.2015.35.3745
Ernest Fontich, Rafael de la Llave, Yannick Sire. A method for the study of whiskered quasi-periodic and almost-periodic solutions in finite and infinite dimensional Hamiltonian systems. Electronic Research Announcements, 2009, 16: 9-22. doi: 10.3934/era.2009.16.9
Mikhail B. Sevryuk. Invariant tori in quasi-periodic non-autonomous dynamical systems via Herman's method. Discrete & Continuous Dynamical Systems - A, 2007, 18 (2&3) : 569-595. doi: 10.3934/dcds.2007.18.569
Francesca Alessio, Carlo Carminati, Piero Montecchiari. Heteroclinic motions joining almost periodic solutions for a class of Lagrangian systems. Discrete & Continuous Dynamical Systems - A, 1999, 5 (3) : 569-584. doi: 10.3934/dcds.1999.5.569
Claudia Valls. On the quasi-periodic solutions of generalized Kaup systems. Discrete & Continuous Dynamical Systems - A, 2015, 35 (1) : 467-482. doi: 10.3934/dcds.2015.35.467
Tomás Caraballo, David Cheban. Almost periodic and asymptotically almost periodic solutions of Liénard equations. Discrete & Continuous Dynamical Systems - B, 2011, 16 (3) : 703-717. doi: 10.3934/dcdsb.2011.16.703
Tomás Caraballo, David Cheban. Almost periodic and almost automorphic solutions of linear differential equations. Discrete & Continuous Dynamical Systems - A, 2013, 33 (5) : 1857-1882. doi: 10.3934/dcds.2013.33.1857
Felipe García-Ramos, Brian Marcus. Mean sensitive, mean equicontinuous and almost periodic functions for dynamical systems. Discrete & Continuous Dynamical Systems - A, 2019, 39 (2) : 729-746. doi: 10.3934/dcds.2019030
Xiang Li, Zhixiang Li. Kernel sections and (almost) periodic solutions of a non-autonomous parabolic PDE with a discrete state-dependent delay. Communications on Pure & Applied Analysis, 2011, 10 (2) : 687-700. doi: 10.3934/cpaa.2011.10.687
Qihuai Liu, Dingbian Qian, Zhiguo Wang. Quasi-periodic solutions of the Lotka-Volterra competition systems with quasi-periodic perturbations. Discrete & Continuous Dynamical Systems - B, 2012, 17 (5) : 1537-1550. doi: 10.3934/dcdsb.2012.17.1537
Xianhua Huang. Almost periodic and periodic solutions of certain dissipative delay differential equations. Conference Publications, 1998, 1998 (Special) : 301-313. doi: 10.3934/proc.1998.1998.301
Nguyen Minh Man, Nguyen Van Minh. On the existence of quasi periodic and almost periodic solutions of neutral functional differential equations. Communications on Pure & Applied Analysis, 2004, 3 (2) : 291-300. doi: 10.3934/cpaa.2004.3.291
Weigu Li, Jaume Llibre, Hao Wu. Polynomial and linearized normal forms for almost periodic differential systems. Discrete & Continuous Dynamical Systems - A, 2016, 36 (1) : 345-360. doi: 10.3934/dcds.2016.36.345
P.E. Kloeden. Pitchfork and transcritical bifurcations in systems with homogeneous nonlinearities and an almost periodic time coefficient. Communications on Pure & Applied Analysis, 2004, 3 (2) : 161-173. doi: 10.3934/cpaa.2004.3.161
Massimo Tarallo. Fredholm's alternative for a class of almost periodic linear systems. Discrete & Continuous Dynamical Systems - A, 2012, 32 (6) : 2301-2313. doi: 10.3934/dcds.2012.32.2301
Ahmed Y. Abdallah. Attractors for first order lattice systems with almost periodic nonlinear part. Discrete & Continuous Dynamical Systems - B, 2020, 25 (4) : 1241-1255. doi: 10.3934/dcdsb.2019218
P.E. Kloeden, Desheng Li, Chengkui Zhong. Uniform attractors of periodic and asymptotically periodic dynamical systems. Discrete & Continuous Dynamical Systems - A, 2005, 12 (2) : 213-232. doi: 10.3934/dcds.2005.12.213
Michael Zgurovsky, Mark Gluzman, Nataliia Gorban, Pavlo Kasyanov, Liliia Paliichuk, Olha Khomenko. Uniform global attractors for non-autonomous dissipative dynamical systems. Discrete & Continuous Dynamical Systems - B, 2017, 22 (5) : 2053-2065. doi: 10.3934/dcdsb.2017120
Xiaojun Chang, Yong Li. Rotating periodic solutions of second order dissipative dynamical systems. Discrete & Continuous Dynamical Systems - A, 2016, 36 (2) : 643-652. doi: 10.3934/dcds.2016.36.643
Pedro J. Torres. Non-collision periodic solutions of forced dynamical systems with weak singularities. Discrete & Continuous Dynamical Systems - A, 2004, 11 (2&3) : 693-698. doi: 10.3934/dcds.2004.11.693
Tomás Caraballo David Cheban | CommonCrawl |
Analysis of the effect of meteorological factors on hemorrhagic fever with renal syndrome in Taizhou City, China, 2008–2020
Rong Zhang1 na1,
Ning Zhang2 na1,
Wanwan Sun1 na1,
Haijiang Lin3 na1,
Ying Liu1,
Tao Zhang1,
Mingyong Tao4,
Jimin Sun1,
Feng Ling1 &
Zhen Wang1
BMC Public Health volume 22, Article number: 1097 (2022)
Hemorrhagic fever with renal syndrome (HFRS) is endemic in Zhejiang Province, China, while few studies have concentrated on the influence of meteorological factors on HFRS incidence in the area.
Data on HFRS and meteorological factors from January 1, 2008 to December 31, 2020 in Taizhou City, Zhejiang Province were collected. Multivariate analysis was conducted to examine the relationship of meteorological factors, including minimum temperature, relative humidity, and cumulative rainfall, with HFRS.
HFRS incidence peaked in November and December and was negatively correlated with average and highest temperatures. Compared with the medians of the meteorological factors, the relative risks (RRs) were most significant for a weekly average temperature of 12 ℃, a weekly highest temperature of 18 ℃, relative humidity of 40%, and cumulative rainfall of 240 mm, with RRs of 1.41 (95% CI: 1.09–1.82), 1.32 (95% CI: 1.05–1.66), 2.18 (95% CI: 1.16–4.07), and 1.91 (95% CI: 1.16–2.73), respectively. Average temperature, precipitation, and relative humidity had interactive effects on HFRS, and the risk of HFRS occurrence increased with decreasing average temperature and increasing precipitation.
Our results indicate an association between environmental factors and HFRS incidence; a probable recommendation is to use environmental factors as early warning signals for initiating control measures and response.
Climate change, especially extreme weather, not only affects the incidence of acute infectious diseases of the respiratory tract [1,2,3], but also increases the risk of death in patients with chronic diseases [4]. Hemorrhagic fever with renal syndrome (HFRS) is a natural focal disease, and a large number of studies have shown that its incidence is influenced by climate change [5]. In the context of global warming, temperature, rainfall, and relative humidity are the main meteorological factors that pose a serious threat to human health [6]. Previous studies on the impact of meteorological factors on diseases have identified certain hysteresis effects, which vary in form and by region [7, 8].
Meteorological factors, such as temperature, precipitation, and humidity, might affect human travel, thereby directly affecting the likelihood of rodent-human contact [9]. They can also affect the spread of disease by influencing crop yields, rodent reproduction, and vector density [10]. For example, temperature and rainfall are associated with the host ecosystem, affecting the speed of HFRS transmission and the potential risk of outbreaks [11, 12]. These factors have a lagged effect on HFRS incidence, but the lag time ranged from 3 to 5 months across different areas [13,14,15,16]. Moreover, El Niño extreme weather events were also associated with the occurrence of HFRS [17].
The first documented case of HFRS in Zhejiang Province was reported in Jiaxing City in 1963. Since then, the size of the endemic area has gradually increased. In recent years, the number of cases has decreased with vaccination, rodent control strategies, and environmental sanitation improvements [18]. However, the affected area of Zhejiang Province is still gradually expanding, and incidence rates in some areas remain high [19]. To date, cases have been reported in all 11 prefecture-level cities of the province [19]. Nevertheless, few studies have concentrated on the influence of meteorological factors on HFRS incidence in this area. In this study, distributed lag non-linear models (DLNM) and generalized additive models (GAM) were used to evaluate the impact of meteorological factors on HFRS incidence in Taizhou City, Zhejiang Province, and to identify the key influencing factors.
Study area
Taizhou City, a coastal city in the central part of Zhejiang Province, belongs to the mid-subtropical monsoon area and experiences four distinct seasons (Supplementary Figure S1). The territory experiences mild summers, cold winters, abundant rain, and a mild, humid climate due to the meteorological effects of nearby ocean waters and mountains in the northwest.
According to the Law on Prevention and Treatment of Infectious Diseases, HFRS is classified as a Class B infectious disease in China, and cases must be reported within 24 h of diagnosis [19]. Data on HFRS from 2008 to 2020 in Taizhou City were collected from the Chinese Notifiable Disease Reporting System.
We collected daily meteorological data from the China Meteorological Data Sharing Service System (http://data.cma.cn/). These data, including daily average temperature (Avetemp), minimum temperature (Mintemp), maximum temperature (Maxtemp), relative humidity, and total precipitation, were used to calculate the weekly value of each variable.
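As a minimal illustration of this aggregation step (a hedged sketch: the data frame and column names `daily`, `date`, `avetemp`, `mintemp`, `maxtemp`, `rh`, and `precip` are assumptions for illustration, not the authors' actual variables), the weekly summaries could be computed in base R as follows:

```r
# Hypothetical aggregation of daily records into weekly values.
# 'daily' is assumed to hold one row per day with a Date column 'date'.
daily$week <- format(daily$date, "%Y-%U")   # year-week grouping label

# Weekly means of the temperature and humidity variables.
weekly <- aggregate(cbind(avetemp, mintemp, maxtemp, rh) ~ week,
                    data = daily, FUN = mean)

# Weekly total precipitation (WTP) is a sum rather than a mean.
weekly$wtp <- aggregate(precip ~ week, data = daily, FUN = sum)$precip
```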
Statistical methods
Normality tests and descriptive analyses were conducted to summarize the characteristics of all variables. Spearman correlation was used to assess the relationship between HFRS incidence and meteorological factors. This study developed a time-series model based on the GAM and used a cross-basis function to describe variation in the exposure dimension and the lag dimension simultaneously [8]. A DLNM was then used to fit the non-linear and lagged effects of weekly Avetemp, Maxtemp, Mintemp, average relative humidity, and cumulative rainfall on the risk of HFRS [20]. The incubation period for HFRS is affected by the host animal, vector density, and meteorological factors, and lasts for several weeks; in our study, the maximum lag period was set to 16 weeks [13, 20, 21]. Since HFRS cases in Taizhou City were relatively rare, quasi-Poisson regression was used to control for overdispersion. We used a two-stage analysis. First, we used the DLNM to estimate the associations of weekly Avetemp, Maxtemp, Mintemp, relative humidity, and weekly total precipitation (WTP) with the number of HFRS cases [22]. The general algebraic definition of the model is as follows:
$$\log[E(Y_t)] = \beta + cb(K_t, 16, \beta_1) + s_1(x) + s_2(z) + s_3(m) + s_4(n) + s_5(\mathrm{week})$$
Here, $t$ is the observation week; $E(Y_t)$ is the expected number of HFRS cases in week $t$; $\beta$ is the intercept of the model; $cb(K_t, 16, \beta_1)$ is the cross-basis function of $K$, where $K$ is one of the meteorological variables, $\beta_1$ is the estimated effect of $K$ at a specific lag week, and the maximum lag is set to 16 weeks; $x$, $z$, $m$, and $n$ denote the remaining meteorological covariates (those of Avetemp, Maxtemp, Mintemp, RH, and WTP other than $K$); week is the ordinal week of the year; and $s()$ denotes a penalized spline function. Cubic spline functions $s_1$–$s_4$ adjust for the confounding meteorological factors, and $s_5(\mathrm{week})$ adjusts for seasonal (weekly) confounding.
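A sketch of how such a first-stage model can be specified with the R packages named at the end of this section (dlnm and mgcv) is shown below; the data frame `dat` and its column names are hypothetical (for instance, the weekly table sketched earlier with an added `cases` column), and the spline settings simply mirror the description in this section rather than reproducing the authors' code:

```r
library(dlnm)
library(mgcv)

# Cross-basis for weekly average temperature: natural cubic splines in the
# exposure dimension (df = 6) and in the lag dimension, maximum lag 16 weeks.
cb_temp <- crossbasis(dat$avetemp, lag = 16,
                      argvar = list(fun = "ns", df = 6),
                      arglag = list(fun = "ns", df = 4))

# Quasi-Poisson model to allow for overdispersion: the cross-basis enters as a
# matrix term, and the other meteorological variables plus the ordinal week of
# the year enter as penalized splines, as in the formula above.
fit <- gam(cases ~ cb_temp + s(rh) + s(wtp) + s(maxtemp) + s(mintemp) + s(week),
           family = quasipoisson(), data = dat)
```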
Second, we analyzed the interactions between weekly Avetemp, Maxtemp, Mintemp, relative humidity, and accumulated rainfall with GAMs, and then analyzed the differing effects of high and low values of the meteorological factors on case counts. The basic model is as follows:
$$\log[E(Y_t)] = \beta_2 + s_1(k, x) + s_2(z) + s_3(m) + s_4(n) + s_5(\mathrm{week})$$
Here, $\beta_2$ is the intercept; $k$ is one of the meteorological factors (Avetemp, Maxtemp, Mintemp, RH, or WTP), and $x$, $z$, $m$, and $n$ denote the other factors; $s()$ denotes a penalized spline function; and $s_1(k, x)$ is the smooth interaction term between variables $k$ and $x$.
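As an illustration of this second-stage model (again a hedged sketch with hypothetical variable names rather than the authors' code), a tensor-product smooth in mgcv can play the role of the interaction term $s_1(k, x)$, here for average temperature and weekly total precipitation:

```r
library(mgcv)

# Interaction surface between weekly average temperature and weekly total
# precipitation, with relative humidity and week of year as univariate smooths.
fit_int <- gam(cases ~ te(avetemp, wtp) + s(rh) + s(week),
               family = quasipoisson(), data = dat)

# Contour view of the fitted interaction surface, comparable in spirit to Fig. 5.
vis.gam(fit_int, view = c("avetemp", "wtp"), plot.type = "contour")
```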
In the model, the number of cases was the dependent variable, and a cross-basis function linked the case counts to temperature over the exposure and lag dimensions. Spline functions were used to control for the influence of confounding factors such as relative humidity, rainfall, and long-term trends. The best degrees of freedom (df) were selected from the spline-function results through sensitivity testing and generalized cross-validation criteria [23].
The DLNM can describe the complex non-linear and lagged temperature–HFRS association through the cross-basis function, so the lag structure of the model must be defined carefully [21]. We chose a natural cubic B-spline (ns, df = 6) for the exposure–response relationship, with two knots located at the 2.5th and 97.5th percentiles of each meteorological factor's distribution, and a second spline for the lag–response relationship [13, 15]. For Maxtemp and Mintemp, which may also affect HFRS incidence, the df was fixed at 6 and the maximum lag time was set at 16 weeks to capture the delayed effects of extreme temperatures. In this study, the degrees of freedom and maximum lag times for mean temperature, relative humidity, accumulated rainfall, mean maximum temperature, and mean minimum temperature were set, in order, to df = 6, lag = 16; df = 4, lag = 16; df = 3, lag = 10; df = 6, lag = 21; and df = 4, lag = 20.
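Continuing the sketch above (still hypothetical code rather than the authors'), lag-specific and overall relative risks centred at the median of the exposure, the reference used in this study, can be extracted from the fitted cross-basis model with `crosspred`:

```r
# Predictions centred at the median weekly average temperature.
pred_temp <- crosspred(cb_temp, fit,
                       cen = median(dat$avetemp, na.rm = TRUE), by = 1)

pred_temp$matRRfit   # lag-specific RRs (rows: exposure values, columns: lag weeks)
pred_temp$allRRfit   # overall cumulative RRs across all lags

# Overall exposure-response curve on the RR scale.
plot(pred_temp, "overall", xlab = "Weekly average temperature (°C)", ylab = "RR")
```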
We performed sensitivity analysis by changing the df of the weather variables and time points. All analyses were performed using ArcGIS 10.2 (ESRI, Redlands, CA, USA) and R software (packages "dlnm" and "mgcv") (R Foundation for Statistical Computing, Vienna, Austria).
During the study period, a total of 1196 HFRS cases were reported in Taizhou City. Descriptive statistics over the 13 years indicated that the highest weekly case count in Taizhou reached 12 cases (Table 1). Avetemp, Maxtemp, and Mintemp in Taizhou City from 2008 to 2020 were all normally distributed and showed obvious periodicity and seasonality (Fig. 1). The average weekly temperature was 18.04 °C, the minimum 2.33 °C, and the maximum 30.33 °C; the weekly average Maxtemp and weekly average Mintemp were 19.3 °C and 11.13 °C, respectively. The average weekly humidity was 77.88% and the average weekly rainfall was 38.54 mm (Fig. 2). HFRS incidence was negatively correlated with Avetemp and the highest Avetemp, whereas it was not significantly related to weekly average relative humidity, weekly total precipitation, or the lowest Avetemp (Table 2).
Table 1 Descriptive statistics of weekly HFRS cases and meteorological factors in Taizhou City, China from 2008 to 2020
Time series of weekly Avetemp, Maxtemp, Mintemp, RH and WTP, and the number of HFRS cases from 2008 to 2020 in Taizhou City, China
Boxplots of Avetemp, Mintemp, Maxtemp, RH and WTP
Table 2 Correlation analysis of meteorological factors and HFRS in Taizhou city,China from 2008 to 2020
In the DLNM, we used the median of each meteorological factor as the reference and calculated the relative risk of each variable. The impact of Avetemp on HFRS decreased rapidly and then increased slowly. At lag 3, the effect of weekly Avetemp was most significant at 13 °C (RR = 1.28, 95% CI = 1.04–1.57), and at lag 4 it was most significant at 12 °C (RR = 1.41, 95% CI = 1.09–1.82) (Fig. 3A, B). For Maxtemp, the effect was most significant at 18 °C at lag 3 (RR = 1.32, 95% CI = 1.05–1.66) (Fig. 3C) and at 20 °C at lag 4 (RR = 1.12, 95% CI = 1.02–1.24) (Fig. 3D). There was no statistical difference between the high and low values of Mintemp; a Mintemp of 1 °C had the same RR at lag 1 and lag 2 (RR = 1.59, 95% CI = 1.02–2.47) (Fig. 3E, F).
The lag-specific effect of meteorological factors on HFRS in Taizhou City
The RRs of the highest relative humidity values (97.5th percentile) at lag 2 and lag 16 were 0.97 and 0.61, respectively (Supplementary Fig. S2G, H). A relative humidity of 31% was most significant at lag 2, and a relative humidity of 40% was most significant at lag 15 (Fig. 3G, H).
The highest WTP value (97.5th percentile) had the largest RR at lag 13 (RR = 1.35, 95% CI = 1.01–1.79) (Supplementary Fig. S2I, J); a cumulative rainfall of 240 mm was most significant at lag 3 (RR = 1.91, 95% CI = 1.16–2.73) (Fig. 3I), and a cumulative rainfall of 120 mm was most significant at lag 10 (RR = 1.27, 95% CI = 1.08–1.49) (Fig. 3J).
We calculated the corresponding RRs from the minimum to the maximum lag of each meteorological factor (Fig. 4); the effect of Avetemp was largest at lags 3–5 (RR = 1.4, 95% CI = 1.09–1.81). The lag effect of Maxtemp was significant, with the risk of infection highest at lags 1–2 weeks. For cumulative rainfall, the lag effect was most significant at lags 3–4 weeks. The effect of relative humidity was most significant at lags 3–4 and 12–15 weeks.
RRs of meteorological factors on HFRS at different lags in Taizhou city from 2008 to 2020
GAMs were used to explore the interactions between Avetemp, WTP, and RH, and the results are shown in Fig. 5. The left panel of Fig. 5 shows the interaction between Avetemp and WTP: the infection risk of HFRS was inversely related to Avetemp and directly related to WTP. As shown in the middle panel, the infection risk of HFRS was also inversely related to Avetemp and directly related to RH. The right panel shows that as WTP increased and RH decreased, the risk of infection increased. Overall, the risk of HFRS infection increased as Avetemp decreased and WTP increased, indicating that HFRS in Taizhou City increased when Avetemp was low and WTP was high.
The coefficients of meteorological factors on HFRS in Taizhou City
In this study, we investigated the relationship between Avetemp, Maxtemp, Mintemp, WTP, relative humidity and HFRS in Taizhou City from 2008 to 2020 using DLNM and GAMs. Our study found that weekly Avetemp and weekly maximum temperature were negatively associated with HFRS incidence, which is consistent with results from Shandong Province [24].
The lagged effects of WTP and relative humidity were also most pronounced in Taizhou City, with a lag of 3–4 weeks. This suggests that, rather than concentrating rodent control efforts only twice a year, in winter and spring [7, 19], control could also target the high-incidence period identified in this study. Several studies have confirmed that extreme weather has a significant impact on many diseases [18, 19]. We found that the effects of Maxtemp and Mintemp on HFRS were most pronounced at a lag of 1 week. Several models have been used to study the lagged effects of climatic factors on dengue fever, and these similarly confirmed the existence of a lag period between climatic factors and local dengue incidence [25].
We found that the risk of infection increased with increasing precipitation, which is similar to previous findings [26]. The effect of WTP on disease risk in Taizhou City was most pronounced at a lag of about 1 month, and this effect persisted up to a lag of 12 weeks. A previous study confirmed that infectious diseases in coastal areas such as Zhejiang Province are more affected by tropical cyclones [27]; for example, rainfall and relative humidity had a significant effect on severe fever with thrombocytopenia syndrome [28].
Meteorological factors had non-linear relationships with HFRS, and lag effects existed. HFRS mostly occurred when temperature and relative humidity were low and WTP was high. Our results are indicative of the association of meteorological factors with HFRS incidence; a probable recommendation is to use these factors as early warning signals for initiating control measures and response.
HFRS incidence is directly associated with the density and infection rate of rodents, but these data were not available for this study. Moreover, other factors, including social and environmental factors, might also influence HFRS. Further research should explore the contribution of different factors to HFRS.
All data analyzed during this research period are included in the body of this article and supplementary materials.
Ma YX, Yang SX, Yu ZA, Jiao HR, Zhang YF, Ma BJ, et al. Effect of diurnal temperature range on outpatient visits for common cold in Shanghai, China. Environ Sci Pollut Res. 2020;27(2):1436–48. https://doi.org/10.1007/s11356-019-06805-4.
Shi P, Dong YQ, Yan HC, Zhao CK, Li XY, Liu W, et al. Impact of temperature on the dynamics of the COVID-19 outbreak in China. Sci Total Environ. 2020;728(1): 138890. https://doi.org/10.1016/j.scitotenv.2020.138890.
Ma YX, Zhou JD, Yang SX, Yu ZA, Wang F, Zhou J. Effects of extreme temperatures on hospital emergency room visits for respiratory diseases in Beijing, China. Environ Sci Pollut Res. 2019;26(3):3055–64. https://doi.org/10.1007/s11356-018-3855-4.
Ma YX, Zhao YX, Zhou JD, Jiang YY, Yang SX, Yu ZA. The relationship between diurnal temperature range and COPD hospital admissions in Changchun, China. Environ Sci Pollut Res. 2018;25(18):17942–9. https://doi.org/10.1007/s11356-018-2013-3.
Jiang F, Wang L, Wang S, Zhu L, Dong L, Zhang Z. Meteorological factors affect the epidemiology of hemorrhagic fever with renal syndrome via altering the breeding and hantavirus-carrying states of rodents and mites: a 9 years' longitudinal study. Emerg Microbes Infect. 2017;11(6):e104. https://doi.org/10.1038/emi.2017.92.
McMichael AJ, Wilkinson P, Kovats RS, Pattenden S, Hajat S, Armstrong B, et al. International study of temperature, heat and urban mortality: the "ISOTHURM" project. Int J Epidemiol. 2008;37(5):1121–31. https://doi.org/10.1093/ije/dyn086.
Li YD, Cazelles B, Yang GQ, Laine M, Huang ZXY, Cai J, et al. Intrinsic and extrinsic drivers of transmission dynamics of hemorrhagic fever with renal syndrome caused by Seoul hantavirus. PLoS Negl Trop Dis. 2019;13(9):e7757. https://doi.org/10.1371/journal.pntd.0007757.
Gasparrini A, Armstrong B, Kenward MG. Distributed lag non-linear models. Stat Med. 2010;29(21):2224–34. https://doi.org/10.1002/sim.3940.
Wei YH, Wang Y, Li XN, Qin PZ, Lu Y, Xu JM, et al. Meteorological factors and risk of hemorrhagic fever with renal syndrome in Guangzhou, southern China, 2006–2015. PLOS Negl Trop Dis. 2018;12(6):e6604. https://doi.org/10.1371/journal.pntd.0006604.
Hansen A, Cameron S, Liu QY, Sun YH, Weinstein P, Williams C, et al. Transmission of haemorrhagic fever with renal syndrome in china and the role of climate factors: a review. Int J Infect Dis. 2015;33:212–8. https://doi.org/10.1016/j.ijid.2015.02.010.
Bi P, Tong SL, Donald K, Parton K, Ni JF. Climatic, reservoir and occupational variables and the transmission of haemorrhagic fever with renal syndrome in China. Int J Epidemiol. 2002;31(1):189–93. https://doi.org/10.1093/ije/31.1.189.
Tian HY, Yu PB, Cazelles B, Xu L, Tan H, Yang J, et al. Interannual cycles of Hantaan virus outbreaks at the human–animal interface in Central China are controlled by temperature and rainfall. Proc Natl Acad Sci. 2017;114(30):8041–6. https://doi.org/10.1073/pnas.1701777114.
Sun WW, Liu XB, Li W, Mao ZY, Sun JM, Lu L. Effects and interaction of meteorological factors on hemorrhagic fever with renal syndrome incidence in Huludao City, northeastern China, 2007–2018. PLoS Negl Trop Dis. 2021;3(15):e9217. https://doi.org/10.1371/journal.pntd.0009217.
Wu HC, Wu C, Lu QB, Ding ZY, Xue M, Lin JF. Spatial-temporal characteristics of severe fever with thrombocytopenia syndrome and the relationship with meteorological factors from 2011 to 2018 in Zhejiang Province, China. PLoS Negl Trop Dis. 2020;14(4):e8186. https://doi.org/10.1371/journal.pntd.0008186.
Xu QQ, Li RZ, Rutherford S, Luo C, Liu YF, Li XJ. Using a distributed lag non-linear model to identify impact of temperature variables on haemorrhagic fever with renal syndrome in Shandong Province. Epidemiol Infect. 2018;146(13):1671–9. https://doi.org/10.1017/S095026881800184X.
Zhang WY, Guo WD, Fang LQ, Li CP, Bi P, Glass GE, et al. Climate variability and hemorrhagic fever with renal syndrome transmission in Northeastern China. Environ Health Perspect. 2010;118(7):915–20. https://doi.org/10.1289/ehp.0901504.
Bi P, Parton KA. El Niño and incidence of hemorrhagic fever with renal syndrome in China. JAMA. 2003;289(2):176–7. https://doi.org/10.1001/jama.289.2.176-d.
Tian H, Tie WF, Li HB, Hu XQ, Xie GC, Du LY, et al. Orthohantaviruses infections in humans and rodents in Baoji, China. PLoS Negl Trop Dis. 2020;14(10):e8778. https://doi.org/10.1371/journal.pntd.0008778.
Zhang R, Mao ZY, Yang J, Liu SL, Liu Y, Qin SW, et al. The changing epidemiology of hemorrhagic fever with renal syndrome in Southeastern China during 1963–2020: A retrospective analysis of surveillance data. PLoS Negl Trop Dis. 2021;15(8):e9673. https://doi.org/10.1371/journal.pntd.000967320.
Cao LN, Huo XY, Xiang JJ, Lu L, Liu X, Song XP, et al. Interactions and marginal effects of meteorological factors on haemorrhagic fever with renal syndrome in different climate zones: Evidence from 254 cities of China. Sci Total Environ. 2020;721:137564. https://doi.org/10.1016/j.scitotenv.2020.137564.
Wang P, Zhang X, Hashizume M, Goggins WB, Luo C. A systematic review on lagged associations in climate-health studies. Int J Epidemiol. 2021;50(4):1199–212. https://doi.org/10.1093/ije/dyaa286.
Ma YX, Jiao HR, Zhang YF, Feng FL, Cheng BW, Ma BJ, et al. Short-term effect of extreme air temperature on hospital emergency room visits for cardiovascular diseases from 2009 to 2012 in Beijing, China. Environ Sci Pollut Res. 2020;27(30):38029–37. https://doi.org/10.1007/s11356-020-09814-w.
Bai X, Peng C, Jiang T, Hu ZM, Huang DS, Guan PG. Distribution of geographical scale, data aggregation unit and period in the correlation analysis between temperature and incidence of HFRS in mainland China: a systematic review of 27 ecological studies. PLOS Negl Trop Dis. 2019;13(8):e7688. https://doi.org/10.1371/journal.pntd.0007688.
Fang LQ, Wang XJ, Liang S, Li YL, Song SX, Zhang WY, et al. Spatiotemporal trends and climatic factors of hemorrhagic fever with renal syndrome epidemic in Shandong Province, China. PLoS Negl Trop Dis. 2010;4(8):e789. https://doi.org/10.1371/journal.pntd.0000789.
Guo P, Liu T, Zhang Q, Wang L, Xiao J, Zhang Q, et al. Developing a dengue forecast model using machine learning: a case study in China. Plos Neglect Trop D. 2017;10(11):e5973. https://doi.org/10.1371/journal.pntd.0005973.
Xiang JJ, Hansen A, Liu QY, Tong MX, Liu XB, Sun YH, et al. Impact of meteorological factors on hemorrhagic fever with renal syndrome in 19 cities in China, 2005–2014. Sci Total Environ. 2018;636:1249–56. https://doi.org/10.1016/j.scitotenv.2018.04.407.
Zheng J, Han W, Jiang B, Ma W, Zhang Y. Infectious Diseases and Tropical Cyclones in Southeast China. Int J Environ Res Public Health. 2017;14(5):494. https://doi.org/10.3390/ijerph14050494.
Sun JM, Lu L, Liu KK, Yang J, Wu HX, Liu QY. Forecast of severe fever with thrombocytopenia syndrome incidence with meteorological factors. Sci Total Environ. 2018;626:1188–92. https://doi.org/10.1016/j.scitotenv.2018.01.196.
We would like to thank the Taizhou Municipal Health Organization, the Center for Disease Control and Prevention, and the China National Meteorological Information Center for providing data for our research.
This study was supported by the medical research program of Zhejiang Province (2019KY358).
Rong Zhang, Ning Zhang, Wanwan Sun and Haijiang Lin contributed equally to this work.
Key Laboratory of Vaccine, Prevention and Control of Infectious Disease of Zhejiang Province, Zhejiang Provincial Center for Disease Control and Prevention, Zhejiang Province, Hangzhou, 310051, China
Rong Zhang, Wanwan Sun, Ying Liu, Tao Zhang, Jimin Sun, Feng Ling & Zhen Wang
Puyan Street Community Health Service Center of Binjiang District, Zhejiang Province, Hangzhou, 310013, China
Ning Zhang
Taizhou City Center for Disease Control and Prevention, Zhejiang Province, Taizhou, 318000, China
Haijiang Lin
Ningbo University School of Medicine, Zhejiang Province, Ningbo, 315000, China
Mingyong Tao
Rong Zhang: conceptualization, methodology, writing (original draft preparation), and funding acquisition. Ning Zhang and Wanwan Sun: analysis and modeling. Haijiang Lin: resources. Mingyong Tao and Tao Zhang: visualization. Ying Liu: software. Jimin Sun: project administration, writing and editing. Feng Ling and Zhen Wang: writing review and supervision. All authors read and approved the final manuscript.
Correspondence to Jimin Sun, Feng Ling or Zhen Wang.
All methods were carried out in accordance with relevant guidelines and regulations. This study was reviewed and approved by the Ethics Committee of the Zhejiang Provincial Center for Disease Control and Prevention (No. 2020–021). All individual data were kept confidential as requested and in accordance with the ethical approval.
The authors have declared that no competing interest exist.
Study areas of Taizhou City in China. The map was created by ArcGIS 10.2 (Software, ESRI Inc., Redlands, CA, USA). The base layer of the map of Zhejiang Province was supported from National Earth System Science Data Center, National Science & Technology Infrastructure of China (http://www.geodata.cn). Figure S2. With the median as the reference, the lag effect between Avetemp, Maxtemp, Mintemp, RH and WTP and HFRS infection. Abbreviations: Avetemp, average temperature; CI, confidence interval; df, degree of freedom; DLNM, distributed lag non-linear model; GAM, generalized additive model; HFRS, Hemorrhagic fever with renal syndrome; Maxtemp, maximum temperature; Mintemp, minimum temperature; RH,relative humidity;WTP,weekly total precipitation; RR, relative risk.
Zhang, R., Zhang, N., Sun, W. et al. Analysis of the effect of meteorological factors on hemorrhagic fever with renal syndrome in Taizhou City, China, 2008–2020. BMC Public Health 22, 1097 (2022). https://doi.org/10.1186/s12889-022-13423-2
Hemorrhagic fever with renal syndrome
Distributed lag non-linear model
Generalized additive models
Lag effect
Interactive effect
Climate impacts on health
DOE PAGES Journal Article: Continuum of quantum fluctuations in a three-dimensional S=1 Heisenberg magnet
Title: Continuum of quantum fluctuations in a three-dimensional S=1 Heisenberg magnet
Conventional crystalline magnets are characterized by symmetry breaking and normal modes of excitation called magnons, with quantized angular momentum $$\hbar$$. Neutron scattering correspondingly features extra magnetic Bragg diffraction at low temperatures and dispersive inelastic scattering associated with single magnon creation and annihilation. Exceptions are anticipated in so-called quantum spin liquids, as exemplified by the one-dimensional spin-1/2 chain, which has no magnetic order and where magnons accordingly fractionalize into spinons with angular momentum $$\hbar$$/2. This is spectacularly revealed by a continuum of inelastic neutron scattering associated with two-spinon processes. Here, we report evidence for these key features of a quantum spin liquid in the three-dimensional antiferromagnet NaCaNi2F7. We show that despite the complication of random Na1+–Ca2+ charge disorder, NaCaNi2F7 is an almost ideal realization of the spin-1 antiferromagnetic Heisenberg model on a pyrochlore lattice. Magnetic Bragg diffraction is absent and 90% of the neutron spectral weight forms a continuum of magnetic scattering with low-energy pinch points, indicating NaCaNi2F7 is in a Coulomb-like phase. Our results demonstrate that disorder can act to freeze only the lowest-energy magnetic degrees of freedom; at higher energies, a magnetic excitation continuum characteristic of fractionalized excitations persists.
Plumb, K. W. [1];
Changlani, Hitesh J. [1]; Scheie, A. [1]; Zhang, Shu [1]; Krizan, J. W. [2]; Rodriguez-Rivera, J. A. [3]; Qiu, Yiming [4]; Winn, B. [5]; Cava, R. J. [2]; Broholm, C. L. [6]
Johns Hopkins Univ., Baltimore, MD (United States)
Princeton Univ., NJ (United States). Dept. of Chemistry
National Inst. of Standards and Technology (NIST), Gaithersburg, MD (United States). Center for Neutron Research; Univ. of Maryland, College Park, MD (United States). Dept. of Materials Science and Engineering
National Inst. of Standards and Technology (NIST), Gaithersburg, MD (United States). Center for Neutron Research
Johns Hopkins Univ., Baltimore, MD (United States); National Inst. of Standards and Technology (NIST), Gaithersburg, MD (United States). Center for Neutron Research; Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
AC05-00OR22725
Nature Physics
Journal Volume: 15; Journal Issue: 1; Journal ID: ISSN 1745-2473
Nature Publishing Group (NPG)
71 CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS
Plumb, K. W., Changlani, Hitesh J., Scheie, A., Zhang, Shu, Krizan, J. W., Rodriguez-Rivera, J. A., Qiu, Yiming, Winn, B., Cava, R. J., and Broholm, C. L. Continuum of quantum fluctuations in a three-dimensional S=1 Heisenberg magnet. United States: N. p., 2018. Web. doi:10.1038/s41567-018-0317-3.
Proof that convergent Taylor Series converge to actual value of function
Taylor series (or Maclaurin Series) are the only way to get values for some functions, such as
$$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\, dt = \frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{n!(2n+1)}.$$
However, while it is easy to show the students convergence of any such series (using the ratio or root tests) and have them find the radius and interval of convergence, the students might legitimately ask: "it converges to a number (if inside the interval of convergence), but how do we know this convergence is to the actual number this function should give if we can't check by another means?"
For a series like $\sin(x)$, we can get the value expected using trigonometry for comparison to the series expansion and show they match, and use this to compare the accuracy of $n$ terms of the expansion. For a function where you cannot simply compare the actual value to the series resultant because the series is the only way to get a result, how do you show that the series converges to the actual function value and not another value? How do you establish how good the approximation is for the first n terms of the series? Especially for functions with huge radii of convergence, why should the students expect derivative information taken around a single number to give accurate values extremely far from that number? How do you give students an intuition for the number of terms needed to get an accurate approximation vs distance from the expansion point?
calculus series
Xander Henderson
Elliot
Well, isn't some form of Taylor's remainder useful? I mean, the remainder $R_{n,x_0}(x)$ is the difference between the $n$-th Taylor approximation and the actual function, hence showing that the remainder vanishes is enough for most cases.
– Βασίλης Μάρκος
Elliot, does the textbook you use give exercises for students to practice working with the "remainder" -- the student-usable form of Taylor's Theorem? The answer to your question seems like it will be "teach your students what Taylor's Theorem says" but I'm wondering if the book you use doesn't touch on this topic, or if that is not the kind of answer you are looking for.
– Chris Cunningham ♦
The Taylor remainder is just the next term. It gives you an idea of the significance of the next term, so you can use it to choose the truncation point. But it says nothing about how accurately the Taylor series approximates the actual value of the function.
– Elliot
You need to take a deep dive into Taylor's theorem. I suggest focusing on the mean-value form for the remainder.
Btw Taylor Series are not the only way to compute the values of functions like erf.
– Michael Bächtold
Taylor series (or Maclaurin Series) are the only way to get values for some functions
This is not true in the example you give. There is at least one other way to get the value of erf, which is to do numerical integration of the integral you wrote down as a definition.
BTW, you don't need to say "Taylor series (or Maclaurin Series)," because a Maclaurin series is a Taylor series.
how do we know this convergence is to the actual number this function should give if we can't check by another means?
Well, if the only information you have about this function is its Taylor series, then you can't determine whether the Taylor series converges to the correct value (at a point inside its radius of convergence) -- because you have no other information about its correct value.
I'm sure there are functions we can define such that nobody on earth can prove any nontrivial facts about exact values of the function. For example, let $F_n$ be the nth Fibonacci number, and define the function
$$f(x)=\sum \frac{x^n}{F_n!}.$$
This function is analytic everywhere on the real line. I don't know, maybe someone can prove something about some exact value of this function other than the trivial fact that $f(0)=1$, but since I made this example up essentially at random, it seems unlikely.
But very few real-world examples are like this. In most cases, we have some reason why we're interested in this function, which implies other things about it. E.g., some books do define $e^x$ in terms of its Taylor series, but then they prove things like $(e^x)'=e^x$ and $e^{x+y}=e^xe^y$ based on that definition. This gives you a body of facts that can all be correlated with one another.
Especially for functions with huge radii of convergence, why should the students expect derivative information taken around a single number to give accurate values extremely far from that number?
In general, they should not expect this. Analytic functions are in some sense just an infinitesimal subset of the set of all functions. (WP gives the following more rigorous statement of this fact: "And in fact the set of functions with a convergent Taylor series is a meager set in the Fréchet space of smooth functions.") But many of the important functions we use a lot in math are ones that have nice properties, and those nice properties are the reason we study them. One such nice property is being analytic.
There are certainly techniques for proving whether certain Taylor series converge to certain values, but they may not be appropriate to teach in a second-semester freshman calc course. For example, if I'm remembering my long-ago complex analysis correctly, then the function $1/(x^2+1)$ is going to have a Taylor series about $x=0$ that converges to the correct value throughout its radius of convergence of 1, and this is because it's formed by the composition of functions that are analytic except at $x=\pm i$.
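To make the $1/(x^2+1)$ example concrete (a short supplementary derivation, not part of the original answer): for $|x| < 1$ the geometric series gives
$$ \frac{1}{1+x^2} = \sum_{n=0}^{\infty} (-x^2)^n = \sum_{n=0}^{\infty} (-1)^n x^{2n}, $$
and since a convergent power series is automatically the Taylor series of its sum, this single identity both produces the Maclaurin series and shows that it converges to the function on $(-1,1)$, with no separate remainder estimate needed.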
One can certainly say things about the error incurred by truncating a Taylor series, e.g., putting bounds on this error. But I don't think this has much to do with your question, since functions like $\exp(-1/x^2)$ would have small bounds on the truncation error, but the error relative to the desired value is large.
user507
$\begingroup$ When I say "only way to get" I mean that we teach in intro calculus and the students can use. A student in my class won't be able to pull out the numerical integral and check things with ti, so for them this is the only way until more advanced courses. $\endgroup$
$\begingroup$ @Elliot: Typically students at this level learn to approximate integrals using the trapezoid rule, for example. $\endgroup$
– user507
$\begingroup$ (+1) I was just about to write a comment for "only way to get", but decided to glance at the answers first. I was thinking, as you pointed out in your comment, that in the U.S. at least students often learn about Riemann sum approximations for definite integrals (and maybe things like the trapezoid rule, but even if not) at the end of first semester calculus, but won't see Taylor series until the middle or end of the second semester of calculus. And aside from things we can think of, I don't know how one could actually establish something like "only way to get" without more precision given. $\endgroup$
– Dave L Renfro
$\begingroup$ For rather detailed information about the relation between (1) convergence of a power series of a function and (2) convergence of a power series to the function, see this 9 May 2002 sci.math post and its follow-up 19 May 2002 sci.math post. $\endgroup$
If the power series $\sum_{j=0}^\infty a_j z^j$ converges to some function $f(z)$, then the Maclaurin series of $f(z)$ is $\sum_{j=0}^\infty a_j z^j$.
But the converse need not hold.
It could happen that the Maclaurin series of a function $f(z)$ is $\sum_{j=0}^\infty a_j z^j$, but $\sum_{j=0}^\infty a_j z^j$ converges to some function other than $f(z)$. Calculus textbooks often use the example $$ f(z) = \begin{cases} \exp\left(\frac{-1}{z^2}\right),\qquad &z>0\\ 0,\qquad &z \le 0\end{cases}. $$
Gerald Edgar
Understanding how the example above interacts with its Taylor expansion at zero is key to facing the difficulty. The fact that the Taylor series generated by $f$ at zero is identically zero shows clearly there is a distinction between the given function and its Taylor series at zero. The Taylor series exists and is even entire; it just fails to represent $f$ anywhere except at the center.
Functions whose Taylor series converge to the original function are called analytic. How do you know if a function is analytic? For teaching basic calculus it probably suffices to know that $\exp,\cos,\sin$ are analytic and that analytic functions are closed under sums, products, division, composition, inversion, derivation and anti-derivation. In particular all elementary functions are analytic at every point in the interior of their domain of definition and the function from you example is analytic. https://en.wikipedia.org/wiki/Analytic_function
Michael Bächtold
I agree with your assertion this issue is not dealt with in many introductory calculus texts for a variety of reasons. There are two rather different concepts of convergence at play:
Convergence of the Taylor series centered at $x_o$ let's say $\displaystyle \sum_{n=0}^{\infty} a_n(x-x_o)^n$. The Interval of Convergence (IOC) is the set of all $x$ for which numerical series $\displaystyle \sum_{n=0}^{\infty} a_n(x-x_o)^n$ converges. For a given $x$ and set of coefficients we have numerous tools to decide the IOC. Ratio or Root Tests cover much ground here.
Convergence of the Taylor series to the function for all $x$ near $x_o$; let $\displaystyle T_k(x) = \sum_{n=0}^{k} a_n(x-x_o)^n$; does $T_k \rightarrow f$ as $k \rightarrow \infty$? Actually, what sort of convergence is this? In truth, this is not the sort of convergence we actually give careful discussion of in Calculus II (in the typical US curriculum). This is convergence of a sequence of functions. The workaround for Calculus II is Taylor's Theorem with remainder. Essentially, it says $$ f(x) = T_k(x)+R_k(x) $$ where $R_k(x) = \frac{f^{(k+1)}(c)}{(k+1)!}(x-x_o)^{k+1}$ for some $c$ between $x$ and $x_o$. The Mean Value Theorem is the basic example of this result and the proof of the general result can follow a very similar path. So, we get to trade the original question of convergence for the easier task of somehow arguing $R_k(x) \rightarrow 0$ as $k \rightarrow \infty$ for appropriate $x$ (a worked instance is sketched just below). It is a pain. It is certainly harder than the series convergence and divergence analysis which already kills the given audience. So, many books try to downplay this topic. Personally, I try to prove cosine or sine is analytic because I have typically built sine and cosine from radian measure and limits etc... all without power series and I just need $-1 \leq \cos \theta, \sin \theta \leq 1$. I think the argument for the exponential function is not too bad. But generally, unless I was teaching an honors Calculus II it is something I touch on then walk away from, saying we will just assume the function in question is analytic. If the students are shown the example given by Gerald Edgar then they can appreciate the distinction in concepts of convergence. If you show them how to prove analyticity then perhaps they can appreciate why you don't usually ask them to show it. We have many other more pressing issues at this point in calculus: ability to apply convergence tests logically, ability to find Taylor series via non-ridiculous methods, mastery of geometric series techniques...
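For a concrete instance of the remainder argument above (a supplementary worked example, stated here for the exponential function): Taylor's theorem with the Lagrange remainder at $x_o = 0$ gives, for some $c$ between $0$ and $x$,
$$ e^{x} = \sum_{n=0}^{k} \frac{x^n}{n!} + R_k(x), \qquad R_k(x) = \frac{e^{c}}{(k+1)!}\,x^{k+1}, $$
so for any fixed $x$,
$$ |R_k(x)| \leq \frac{e^{|x|}\,|x|^{k+1}}{(k+1)!} \rightarrow 0 \quad \text{as } k \rightarrow \infty, $$
because the factorial eventually dominates the geometric factor $|x|^{k+1}$. Hence the Maclaurin series of $e^x$ converges to $e^x$ for every real $x$; the same bound with $e^{c}$ replaced by $1$ handles sine and cosine, since all their derivatives are bounded by $1$.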
Convergence of a sequence of functions is usually dealt with in a later analysis course where we consider pointwise convergence as well as uniform convergence. Anyway, the problem for Calculus II is we cannot even compare $f$ to its Taylor series unless we have a definition of $f$ which allows us to make the comparison.
One rather slippery solution here:
If $f$ is defined by a given power series then it necessarily converges to its Taylor expansion.
The above may seem tautological, but if you use power series to formulate the definition of a function then it really is just that simple. However, more generally, to even hope to answer if $f$ is analytic we need some formulation or information about $f$ which allows us to compare $f$ to its Taylor series.
Finally, if you want to see how to bound the error in a given Taylor series for an analytic function then I have a few examples worked out and visualized in my notes. See Pages 204-208 or so
Often we can use the alternating series estimation theorem to get around the sort of arguments I face in the notes. For example, $$ \sin(1) = 1- \frac{1}{3!}+ \frac{1}{5!}- \frac{1}{7!}+ \cdots $$ is an alternating series, hence $\sin(1) = 5/6$ to within an error of at most $1/5!$. Many problems allow this simple work-around.
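To make that estimate concrete (the decimals here are my own check, not part of the original discussion):
$$ \sin(1) \approx 0.84147, \qquad \tfrac{5}{6} \approx 0.83333, \qquad \left|\sin(1) - \tfrac{5}{6}\right| \approx 0.0081 < \tfrac{1}{5!} \approx 0.0083, $$
so the two-term partial sum really does land within the promised error bound.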
I wonder, have you thought about showing that the term-by-term derivative of a power series is in fact the derivative of the function as well? This is actually quite technical and I skipped it for years without a student ever noticing... or at least without noticing and saying something. Salas, Hille and Etgen actually does have arguments for this. But most schools choose lighter fare for their course. After all, there is retention, enrollment and, worse yet, what if engineering starts their own course (gulp)?
James S. Cook
IUPACpal: efficient identification of inverted repeats in IUPAC-encoded DNA sequences
Hayam Alamro1,2,
Mai Alzamel1,3,
Costas S. Iliopoulos1,
Solon P. Pissis ORCID: orcid.org/0000-0002-1445-19324,5 &
Steven Watts1
BMC Bioinformatics volume 22, Article number: 51 (2021)
An inverted repeat is a DNA sequence followed downstream by its reverse complement, potentially with a gap in the centre. Inverted repeats are found in both prokaryotic and eukaryotic genomes and they have been linked with countless possible functions. Many international consortia provide a comprehensive description of common genetic variation making alternative sequence representations, such as IUPAC encoding, necessary for leveraging the full potential of such broad variation datasets.
We present IUPACpal, an exact tool for efficient identification of inverted repeats in IUPAC-encoded DNA sequences allowing also for potential mismatches and gaps in the inverted repeats.
Within the parameters that were tested, our experimental results show that IUPACpal compares favourably to a similar application packaged with EMBOSS. We show that IUPACpal identifies many previously unidentified inverted repeats when compared with EMBOSS, and that this is also performed with orders of magnitude improved speed.
An inverted repeat (IR) is a single-stranded sequence of nucleotides with a subsequent downstream sequence consisting of its reverse complement [1]. Any sequence of nucleotides appearing between the initial component and its reverse complement is referred to as the gap (or the spacer) of the IR. The gap may be of any length, including zero. When the gap has length zero, the sequence as a whole is dubbed a palindromic sequence: reading from 5' to 3' in the forward direction on one strand gives the same sequence as reading from 5' to 3' on the complementary strand.
IRs are a widespread occurrence [2,3,4,5,6,7] in both prokaryotic and eukaryotic genomes, and are commonly associated with a wide range of functions. Some IRs are able to extrude into DNA cruciforms, structures in which the typical double-stranded DNA denatures and forms intrastrand double helices or stems, consisting of complementary arms from within the same strand. At the top of each stem, unpaired loops are created from the spacer regions, and the four-way junction where the bases of the stems intersect becomes equivalent to a Holliday junction. This potential for an IR to extrude into cruciforms is dependent on the sequence composition and size of both the arms and the spacer region [8]. The amount of energy required to cause such an extrusion into cruciforms via denaturing is lowered by unwinding torsional stress generated by local negative supercoiling [9, 10].
IRs are a particular class of DNA duplication in humans. Large IRs have been seen in physical maps of chromosome X, and are connected to chromosomal rearrangements and gene deletions [11,12,13,14]. The completed sequence of chromosome Y in humans indicates the existence of many large and substantially homologous IRs, up to 1.4 Mb in size and with \(99.97\%\) identity, which harbour Y-specific genes expressed in testes and considered to be essential for spermatogenesis [15]. It is apparent that gene conversion is responsible for maintaining the homology between the arms of such palindromes, and therefore the integrity of the sequence and gene functionality in the absence of meiotic recombination between homologs [16].
A common task, carried out by many international consortia, consists in providing a complete description of common genetic variation by applying whole-genome sequencing to a diverse range of subjects from multiple populations [17]. Therefore, new and qualitatively distinct computational methods and models are required to utilise the full potential of such broad datasets. One such key example of a computational paradigm shift is indicated by new representations of genomes as graphs [18] or as degenerate sequences [19] encoding the consensus of multiple sequences, marking a transition from their previous representation as regular sequences. In particular, in IUPAC encoding [20], specific symbols, referred to as degenerate, are employed to represent a sequence position that corresponds to a set of possible alternative nucleotides.
Various algorithmic tools and software have been published to enable the study of IRs in genomes [8, 21,22,23]. However, to the best of our knowledge, the only available tool that can meaningfully process IUPAC-encoded sequences is EMBOSS palindrome [21]. In this paper, we develop an exact and efficient tool called IUPACpal as an alternative to EMBOSS palindrome (henceforth EMBOSS). We have implemented IUPACpal to mimic the workflow, parameters and output format of EMBOSS to better enable direct comparisons in performance as well as to minimise the learning curve of using our software. We show that IUPACpal compares favourably to EMBOSS. Specifically, we show that IUPACpal identifies many previously unidentified IRs when compared with EMBOSS, and also performs this task with orders of magnitude improved speed.
We begin with basic definitions and notation following [24]. An alphabet \(\Sigma\) is a finite nonempty set whose elements are called symbols. Let \(X=X[0]X[1] \dots X[n-1]\) be a string (or sequence) of length \(|X|=n\) over \(\Sigma\). By \(\varepsilon\) we denote the empty string. For two positions i and j on X, we denote by \(X[i {.\,.}j]=X[i]\dots X[j]\) the substring of X that starts at position i and ends at position j. Let Y be a sequence of length m with \(0<m\le n\). We say that there exists an occurrence of Y in X, or more simply, that Y occurs in X, when Y is a substring of X. Every occurrence of Y can be characterised by a starting position in X. Thus we say that Y occurs at the (starting) position i in X when \(Y=X[i {.\,.}i + m - 1]\).
The Hamming distance between two sequences X and Y of the same length is defined as the number of corresponding positions in X and Y with different symbols, denoted by \(\delta _H(X, Y) = |\{i : X[i] \ne Y[i], i = 0, 1,\ldots , |X| - 1\}|\). If \(|X| \ne |Y|\), we set \(\delta _H(X, Y)=\infty\) for completeness. If two sequences X and Y are at Hamming distance k or less, we call this a k-match, written as \(X \approx _k Y\).
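As a concrete illustration of these two definitions, the short sketch below (our own naming conventions, not code taken from the IUPACpal sources) transcribes them directly into C++.

#include <string>
#include <cstddef>
#include <limits>

// Hamming distance between two sequences of equal length; "infinity" is returned when the
// lengths differ, mirroring the convention delta_H(X, Y) = infinity used above.
std::size_t hamming_distance(const std::string& X, const std::string& Y) {
    if (X.size() != Y.size()) return std::numeric_limits<std::size_t>::max();
    std::size_t d = 0;
    for (std::size_t i = 0; i < X.size(); ++i)
        if (X[i] != Y[i]) ++d;
    return d;
}

// k-match test: X and Y have the same length and are at Hamming distance at most k.
bool k_match(const std::string& X, const std::string& Y, std::size_t k) {
    return X.size() == Y.size() && hamming_distance(X, Y) <= k;
}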
Degenerate strings
We use the concept of a degenerate string to model IUPAC-encoded sequences. A degenerate symbol \({\tilde{x}}\) over an alphabet \(\Sigma\) is a nonempty subset of \(\Sigma\), i.e. \({\tilde{x}} \subseteq \Sigma\) and \({\tilde{x}} \ne \emptyset\). \(|{\tilde{x}}|\) denotes the size of the set and we have \(1 \le |{\tilde{x}}| \le |\Sigma |\). A finite sequence \({\tilde{X}}={\tilde{x}}_0 {\tilde{x}}_1 \dots {\tilde{x}}_{n-1}\) is said to be a degenerate string if \({\tilde{x}}_i\) is a degenerate symbol for each \(0 \le i \le n-1\). A degenerate string is built over the potential \(2^{|\Sigma |} - 1\) nonempty subsets of symbols belonging to \(\Sigma\). The length \(|{\tilde{X}}|=n\) of a degenerate string \({\tilde{X}}\) is the number of degenerate symbols.
For example, \({\tilde{X}}=[\texttt {A} \texttt {C}]\, [\texttt {A}]\, [\texttt {G}]\, [\texttt {C} \texttt {G}]\, [\texttt {A}]\, [\texttt {A} \texttt {C} \texttt {G}]\) is a degenerate string of length 6 over the alphabet \(\Sigma = \{\texttt {A},\texttt {C},\texttt {G}\}\) (or \(\{\texttt {A},\texttt {C},\texttt {G},\texttt {T}\}\) with no occurrences of \(\texttt {T}\)). If \(|{\tilde{x}}_i|=1\), that is \(\tilde{x_i}\) represents a single symbol of \(\Sigma\), we say that \({\tilde{x}}_i\) is a solid symbol and i is a solid position. Otherwise \({\tilde{x}}_i\) and i are said to be a non-solid symbol and non-solid position, respectively. For convenience we often write \({\tilde{x}}_i=\sigma\) where \(\sigma \in \Sigma\), instead of \({\tilde{x}}_i=[\sigma ]\), in the case where \({\tilde{x}}_i\) is a solid symbol. Consequently, the previous example \({\tilde{X}}\) may be written as \({\tilde{X}}=[\texttt {A} \texttt {C}]\, \texttt {A}\, \texttt {G}\, [\texttt {C} \texttt {G}]\, \texttt {A}\, [\texttt {A} \texttt {C} \texttt {G}]\). A degenerate string containing only solid symbols is a solid string and behaves the same as a standard string of symbols, and for such strings we may omit the \(\sim\) notation. In addition, a solid symbol \([\sigma ]\) and its corresponding symbol \(\sigma \in \Sigma\) may be treated as interchangeable for our purposes.
For degenerate strings, the notion of symbol equality is extended to symbol equality between degenerate symbols. Two degenerate symbols \({\tilde{x}}\) and \({\tilde{y}}\) are said to match (denoted by \({\tilde{x}} \approx {\tilde{y}}\)) if they have at least one symbol in common, i.e. \({\tilde{x}} \cap {\tilde{y}} \ne \emptyset\). Further extending this notion to degenerate strings, we say that two degenerate strings \({\tilde{X}}\) and \({\tilde{Y}}\) match (denoted by \({\tilde{X}} \approx {\tilde{Y}}\)) if \(|{\tilde{X}}|=|{\tilde{Y}}|\) and all corresponding symbols in \({\tilde{X}}\) and \({\tilde{Y}}\) match. Note that the relation \(\approx\) is not transitive. A degenerate string \({\tilde{X}}\) is said to occur at position i in another degenerate string \({\tilde{Y}}\) if \({\tilde{X}} \approx {\tilde{Y}}[i {.\,.}i+|{\tilde{X}}|-1]\).
Inverted repeats
For a given string X, we use the notation \(X^R\) to refer to the reversal of X, i.e. \(X^R=X[n-1]\cdots X[0]\). A palindrome is a string P which is equal to its reversal i.e. \(P= P^R\).
We further use the notation \({\bar{X}}\) to refer to the complement of a string X, where the complement is defined by some bijective function \(f:\Sigma \rightarrow \Sigma\). In the case of DNA alphabet, the natural choice of a complement function over the alphabet \(\Sigma = \{\texttt {A},\texttt {C},\texttt {G},\texttt {T}\}\) is such that \(\texttt {A} \longleftrightarrow \texttt {T}\) and \(\texttt {C} \longleftrightarrow \texttt {G}\). A complement string \({\bar{X}}\) is such that \({\bar{X}}[i]=f(X[i])\) for all i.
Closely related to palindromes, we define an inverted repeat (IR) as a string that can be expressed in the form \(W{\bar{W}}^R\) for some string W. We may generalise IRs by allowing a central gap, which we call a gapped inverted repeat. A gapped IR is therefore a string that can be expressed in the form \(WG{\bar{W}}^R\) for some pair of strings W and G where \(|G|\ge 0\). In particular note that if \(G=\varepsilon\) (empty string), then the IR is ungapped.
Finally, we may introduce mismatches by permitting the two occurrences of W within \(WG{\bar{W}}^R\) to differ by some number of symbols, i.e. some Hamming distance. We refer to a string as a gapped inverted repeat within k mismatches when it can be expressed in the form \(WG{\bar{W}}^R\) with \(\delta _H(W, {\bar{W}}^R)\le k\). In the remainder of this paper, we use the term inverted repeat irrespective of whether it contains a gap, unless making the distinction is necessary. An example of an IR, which makes use of a gap and mismatches is shown in Fig. 1. This illustrates the most commonly used diagrammatic representation of IRs.
Example of an IR. The sequence CT-CGCAGTCACCG-GA is an IR with a gap of 7 and a single mismatch towards the tail ends
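To keep the definitions above concrete, the following small sketch (ours, restricted to solid sequences over {A,C,G,T}; it is not code from the tool itself) tests whether a string of length \(2a+g\) is a gapped IR with arm length \(a\), gap \(g\) and at most \(k\) mismatches, simply by comparing symmetric positions against their complements.

#include <string>
#include <cstddef>

// Complement of a solid DNA symbol: A<->T and C<->G (other symbols are returned unchanged here).
char dna_complement(char c) {
    switch (c) {
        case 'A': return 'T';
        case 'T': return 'A';
        case 'C': return 'G';
        case 'G': return 'C';
        default:  return c;
    }
}

// True when S can be written as an arm W, a gap G and the reverse complement of W, with at most
// k mismatching arm positions, i.e. S[i] complements S[|S|-1-i] for all but at most k of
// i = 0, ..., arm-1.
bool is_gapped_inverted_repeat(const std::string& S, std::size_t arm, std::size_t gap, std::size_t k) {
    if (S.size() != 2 * arm + gap) return false;
    std::size_t mismatches = 0;
    for (std::size_t i = 0; i < arm; ++i)
        if (S[i] != dna_complement(S[S.size() - 1 - i])) ++mismatches;
    return mismatches <= k;
}

With this convention, extending an arm past a position where the complementarity test fails simply consumes one of the \(k\) permitted mismatches.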
IUPAC matching schemes
The International Union of Pure and Applied Chemistry (IUPAC) encoding is an extended alphabet \(\Sigma ^{+}\) of symbols [20], which provides a single symbol representation for every one of the 15 possible nonempty subsets of the standard 4-symbol DNA alphabet \(\Sigma = \{\texttt {A}, \texttt {C}, \texttt {G}, \texttt {T}\}\). For example, the symbol B represents the set {C,G,T}. This encoding provides a natural way to represent degenerate symbols using single symbols. The standard set of IUPAC symbols is \(\Sigma ^{+} = \{{{{{\mathbf {\mathtt{{A}}}}}}}, {{{\mathbf {\mathtt{{C}}}}}}, {{{\mathbf {\mathtt{{G}}}}}}, {{{\mathbf {\mathtt{{T}}}}}}, {{{\mathbf {\mathtt{{R}}}}}}, {{{\mathbf {\mathtt{{Y}}}}}}, {{{\mathbf {\mathtt{{S}}}}}}, {{{\mathbf {\mathtt{{W}}}}}}, {{{\mathbf {\mathtt{{K}}}}}}, {{{\mathbf {\mathtt{{M}}}}}}, {{{\mathbf {\mathtt{{B}}}}}}, {{{\mathbf {\mathtt{{D}}}}}}, {{{\mathbf {\mathtt{{H}}}}}}, {{{\mathbf {\mathtt{{V}}}}}}, {{{\mathbf {\mathtt{{N}}}}}}\}\). The symbol U may also be used instead of T, and the symbol * instead of N. We therefore treat these two ambiguous pairs interchangeably.
This raises the question of how to determine complements of such IUPAC symbols, extending the current matching scheme \(\texttt {A} \longleftrightarrow \texttt {T}\) and \(\texttt {C} \longleftrightarrow \texttt {G}\) over \(\Sigma\) to the full IUPAC alphabet \(\Sigma ^{+}\). The current palindrome application within the EMBOSS package uses a method by which every IUPAC symbol is assigned a single unique complement, by first taking complements of the underlying symbols of the represented subset of \(\Sigma\). For example, the complement of \({{{\mathbf {\mathtt{{B}}}}}}=\{\texttt {C},\texttt {G},\texttt {T}\}\) is \({{{\mathbf {\mathtt{{V}}}}}}=\{\texttt {G},\texttt {C},\texttt {A}\}\), and therefore \({{{\mathbf {\mathtt{{B}}}}}} \longleftrightarrow {{{\mathbf {\mathtt{{V}}}}}}\). We dub this scheme simple complement matching.
However, if we choose to interpret IUPAC symbols as representing a set of possibilities, then this type of matching does not take into account all possible match scenarios. Consider for example the symbol \({{{\mathbf {\mathtt{{R}}}}}}=\{\texttt {A},\texttt {G}\}\) when compared with the symbol \({{{\mathbf {\mathtt{{C}}}}}}=\{\texttt {C}\}\). Under simple complement matching, the R and C do not match, despite the fact that R contains G, the complement of C. Because of this potential shortcoming, we define the degenerate complement matching scheme over the IUPAC alphabet. Under this matching scheme, two IUPAC symbols \({{{\mathbf {\mathtt{{I}}}}}}_1\) and \({{{\mathbf {\mathtt{{I}}}}}}_2\) match if and only if there exists a pair of symbols \(\sigma _1 \in {{{\mathbf {\mathtt{{I}}}}}}_1\) and \(\sigma _2 \in {{{\mathbf {\mathtt{{I}}}}}}_2\) such that \(\sigma _1 \longleftrightarrow \sigma _2\).
Note that the underlying algorithm of IUPACpal is independent of the matching scheme used. Though currently implemented to use degenerate complement matching, a modification of the matching matrix within the open source code permits other potential matching schemes to be defined. The matching scheme may therefore be chosen to fit the intended use case.
We present a visualisation of both the simple and degenerate complement matching schemes in Fig. 2. Note that when considering sequences exclusively over the alphabet \(\Sigma =\{\texttt {A},\texttt {C},\texttt {G},\texttt {T}\}\), there is no distinction between simple and degenerate complement matching.
IUPAC matching schemes. Simple matching scheme on the left. Degenerate complement matching scheme on the right. White blocks indicate mismatch and filled blocks indicate match
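To make the two schemes concrete, here is one possible encoding (a sketch of our own; the matching matrix actually shipped with IUPACpal may be organised differently): every IUPAC symbol is stored as a 4-bit subset of {A,C,G,T}, complements are taken member-wise, and the two matching rules reduce to simple bit tests.

#include <cstdint>
#include <unordered_map>

// Each IUPAC symbol as a 4-bit subset of {A, C, G, T}: A=1, C=2, G=4, T=8.
const std::unordered_map<char, std::uint8_t> IUPAC_SET = {
    {'A', 1}, {'C', 2}, {'G', 4}, {'T', 8},
    {'R', 1 | 4}, {'Y', 2 | 8}, {'S', 2 | 4}, {'W', 1 | 8}, {'K', 4 | 8}, {'M', 1 | 2},
    {'B', 2 | 4 | 8}, {'D', 1 | 4 | 8}, {'H', 1 | 2 | 8}, {'V', 1 | 2 | 4}, {'N', 1 | 2 | 4 | 8}
};

// Member-wise complement of a subset: A<->T (bits 1<->8) and C<->G (bits 2<->4).
std::uint8_t complement_set(std::uint8_t s) {
    std::uint8_t c = 0;
    if (s & 1) c |= 8;
    if (s & 8) c |= 1;
    if (s & 2) c |= 4;
    if (s & 4) c |= 2;
    return c;
}

// Simple complement matching (the scheme described above for EMBOSS): i1 must be exactly the
// unique member-wise complement of i2, e.g. B <-> V.
bool simple_complement_match(char i1, char i2) {
    return IUPAC_SET.at(i1) == complement_set(IUPAC_SET.at(i2));
}

// Degenerate complement matching: some symbol of i1 is the complement of some symbol of i2,
// i.e. the set i1 intersects the member-wise complement of the set i2 (so R = {A,G} matches C).
bool degenerate_complement_match(char i1, char i2) {
    return (IUPAC_SET.at(i1) & complement_set(IUPAC_SET.at(i2))) != 0;
}

On the solid alphabet {A,C,G,T} both predicates coincide, in line with the remark above.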
Our algorithm exhaustively identifies all IRs by examining each position within a sequence and determining every valid IR with its centre at that position which adheres to the given input parameters.
This process first makes use of the kangaroo method to create a function with the ability to identify the longest matching prefix of any two substrings of a string [25, 26]. This function of two substrings is dubbed the longest common extension (LCE). For a given string X of length n and two indices i, j, we define the longest common extension \(\text {LCE}(X, i, j)\) as:
$$\begin{aligned} \text {LCE}(X,i,j)=\max (\{l : X[i {.\,.}i+l-1] = X[j {.\,.}j+l-1]\}\cup \{0\}) \end{aligned}$$
The kangaroo method requires an initial preprocessing of X, to generate indexing data structures known as the suffix array (SA) and the longest common prefix (LCP) array [27]. During preprocessing, the SA and LCP are generated twice: once for the original sequence and once for the reverse complement of the sequence. With these structures available, the kangaroo method makes it possible to find IRs with any number of mismatches with zero gap.
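To illustrate how LCE queries drive the mismatch counting, the following schematic version of the kangaroo jumps may help (a simplified sketch of our own: it uses a naive linear-scan LCE on a single string, whereas the tool answers such queries in constant time from the SA and LCP arrays built over the sequence and its reverse complement).

#include <string>
#include <cstddef>
#include <algorithm>

// Naive longest common extension: length of the longest common prefix of the suffixes of X
// starting at positions i and j. (After SA/LCP preprocessing this query costs O(1).)
std::size_t lce(const std::string& X, std::size_t i, std::size_t j) {
    std::size_t l = 0;
    while (i + l < X.size() && j + l < X.size() && X[i + l] == X[j + l]) ++l;
    return l;
}

// Kangaroo method: count mismatches between X[i..i+len-1] and X[j..j+len-1] by repeatedly
// jumping over maximal matching stretches; it gives up once more than k mismatches are seen.
// Both ranges are assumed to lie entirely inside X.
std::size_t mismatches_up_to_k(const std::string& X, std::size_t i, std::size_t j,
                               std::size_t len, std::size_t k) {
    std::size_t done = 0, mism = 0;
    while (done < len && mism <= k) {
        std::size_t l = std::min(lce(X, i + done, j + done), len - done);
        done += l;               // skip the maximal matching stretch
        if (done >= len) break;  // no further mismatch inside the range
        ++mism;                  // positions i+done and j+done disagree
        ++done;                  // jump over the mismatching position
    }
    return mism;                 // a value of k+1 signals "more than k mismatches"
}

Used with the comparison taken between the sequence and its reverse complement, a routine of this kind lets an arm with at most k mismatches around a fixed centre be grown with only about k+1 LCE queries.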
Our algorithm extends this capability by considering a range of possible gaps for each location in the sequence. For a given centre, the possible IRs are determined by first identifying symbols which are equidistant from the centre and are considered to mismatch.
Given that these mismatches can be identified, the procedure for finding IRs considers a minimal initial gap which is subsequently increased in order to reduce the number of mismatches inside the IR being considered, and thus permits a longer extension (inspect Fig. 3).
Example of 3 different IRs within a sequence. All have the same centre and are permitted 1 mismatch. The centre is marked in red. The size of the gap is given by \(\texttt {G}\). Mismatching symbols are marked with \(\times\). The IRs are indicated by shaded cells
This demonstrates the principle of finding several unique IRs with the same centre by extending the gap to effectively swallow an additional mismatch, such that the IR may be extended to the position directly adjacent to the next mismatch. This extending procedure is performed repeatedly to obtain all IRs for a given centre, while taking into account the parameters specifying the maximum gap and the size range for the IR itself. The algorithm maintains efficiency by calculating only the necessary mismatch locations needed for a given set of parameters, and no more.
We have implemented IUPACpal in \(\texttt {C++}\) under GNU/Linux. IUPACpal mimics the workflow, parameters and output format of EMBOSS to better enable direct comparisons in performance. By making the key features similar and output format identical, we also minimise the learning curve of using our software. Our application requests the following parameters: input file (0), sequence name (1), output file (2), minimum length (3), maximum length (4), maximum gap (5), maximum mismatches (6). IUPACpal is run with the following terminal command:
$$\begin{aligned} {\texttt {./IUPACpal -f <0> -s <1> -o <2> -m <3> -M <4> -g <5> -x <6>}} \end{aligned}$$
Output is given in an identical format to that of EMBOSS, in which all the discovered IRs are identified by their index locations (1-based indexing) alongside their symbol representation. An example as applied to the IR from Fig. 1 is shown below:
$$\begin{aligned}&\;\;{\texttt {4}}\;\;\;\;{\texttt {STACR}}\;\;\;\;{\texttt {8}} \\&\;\;\;\;\;\;\;\;{\texttt {|| ||} }\\&{\texttt {20}}\;\;\;\;{\texttt {GARGC}}\;\;\;\;{\texttt {16}} \end{aligned}$$
Run-time comparison. Comparison of run-time on 1,000,000 symbols of DNA. minimum length: 10. maximum length: 100. maximum gap: 100
Run-time analysis
All experiments were conducted on a computer system using one core of Intel Core CPU i5-4690 at 3.50GHz. Both EMBOSS and IUPACpal were compiled with g++ version 6.2.0 at optimization level 3 (-O3). For a fair comparison of efficiency, we ensured that IUPACpal found at least those IRs found by EMBOSS for a given sequence. Therefore some assumptions on what constitutes a unique IR are replicated in IUPACpal. The IRs found by both tools are also maximal, i.e. cannot be extended to the left or to the right (unless further mismatches are utilised). The leftmost and rightmost symbol in any reported IR must necessarily match.
We ran several performance tests, providing the palindrome tool from EMBOSS and IUPACpal the same input data, and considered both their respective run-times and numbers of IRs found. We generated real IUPAC-encoded DNA sequences by combining the Genome Reference Consortium Human Build 37 (GRCh37) with the variants obtained from the 1000 Genomes Project (October 2011 Integrated Variant Set release) [17]. Specifically, we made use of chromosome X data. Results are depicted in Figs. 4, 5, 6, and 7.
IUPACpal run-time. Run-time on 100,000 symbols of DNA for variable gap size and permitted mismatches. minimum length: 10. maximum length: 100
In Fig. 4 we see IUPACpal performing at a consistent run-time as the maximum number of permissible mismatches increases. By contrast, EMBOSS performs faster below 4 mismatches, yet above this threshold requires increased run-time. In practice, IUPACpal will naturally require a greater run-time for an increasing number of mismatches. However, for the given parameters, the change in the order of magnitude is negligible when compared to the increase for EMBOSS in the same scenario. In fact EMBOSS required such an exponentially increasing run-time that testing was limited to no more than 6 mismatches, at which point EMBOSS ran in excess of 3 hours compared to approximately 15 minutes for IUPACpal. The run-time for IUPACpal at this number of mismatches appears to be largely dominated by the preprocessing time, rather than the increased mismatch allowance. Thus IUPACpal dominates EMBOSS in terms of speed, as this overhead quickly becomes less significant.
EMBOSS run-time. Run-time on 100,000 symbols of DNA for variable gap size and permitted mismatches. minimum length: 10. maximum length: 100
In Fig. 5 we see IUPACpal run-time as the number of mismatches and maximum gap are both varied. This figure may be directly compared against Fig. 6, indicating a similar pattern of variation in run-time, but with significantly increased magnitude. We note some interesting details of the heat-map, such as the run-time not necessarily reducing as the permitted gap increases. For instance, within this particular testing window we see that with 0 mismatches the run-time is lowest with a gap of approximately 400 symbols. However this run-time becomes slower not only when the gap reduces to 300, but also as the gap increases to 500. However the analogous claim does not hold when keeping the gap fixed and increasing the maximum permitted mismatches. It appears that increasing mismatches always results in a slower run-time, which is to be expected when considering the algorithmic complexity of the kangaroo method.
IUPACpal versus EMBOSS: Number of IRs found. Shows the number of IRs found on 1,000,000 symbols of DNA for variable number of permitted mismatches. minimum length: 10. maximum length: 100. maximum gap: 100
In Fig. 6 we see EMBOSS run-time as the number of mismatches and maximum gap are both varied. We may see that the lighter colouring indicates an increase in run-time when compared to Fig. 5. Of special interest is the similar pattern of run-time distribution across the heat-map between the two figures. However we see that IUPACpal completes execution significantly faster than EMBOSS. Consider for example the run-time with 9 permitted mismatches and a maximum gap of 500, where IUPACpal requires \(10^{1.5}\approx 30\)s and EMBOSS requires \(10^4\)s to complete.
Further to the comparisons with EMBOSS, an investigation was also made of the Inverted Repeats Finder (IRF) program [8], which targets a similar problem of identifying IRs. To enable a preliminary comparison, a test run of IRF was performed in accordance with the authors' example page [28]. Using the same testing environment as the previous tests on IUPACpal, IRF was able to process human chromosome 21 (approximately 46 million base pairs) within an average of 930 s, equivalent to a rate of roughly 50,000 DNA symbols per second. Scaling the timing tests of IUPACpal results in a speed of 130,000 DNA symbols per second. It is worth noting that the number of IRs found by IRF was relatively low (30,966 repeats found), due to the more restrictive parameters of the example run.
Let us stress that the efficacy of IUPACpal and IRF is not easily compared directly, as they utilise different paradigms of input parameters which do not naturally correspond. IRF requests a series of user-defined weights, which implicitly define the IRs to be identified. In contrast, IUPACpal (and likewise EMBOSS) takes as input a set of constraints on the minimum and maximum size of the IR's key features, namely the IR length and the gap size. IUPACpal places emphasis on the simplicity of input parameters, and on broader matching criteria that permit a larger number of potential IRs to be identified.
Accuracy of output
The final testing performed verified that IUPACpal is capable of exhaustively identifying at least the same IRs as EMBOSS. In addition to ensuring the usefulness of our tool, this also serves to ensure that the increases in speed performance are a result of improved algorithmic efficiency and not the result of merely solving a simpler version of the problem. A Python script was written and included as part of the software package, to verify the commonalities of the output of both tools, in addition to identifying discrepancies between the two. It was found across numerous tests that IUPACpal does indeed identify at least the same IRs as identified by EMBOSS. In a small number of cases, it was found that EMBOSS did not identify certain instances of IRs, perhaps due to considering them equivalent to some smaller IR at the same centre. However this equivalence did not seem to apply to other pairs of IRs sharing the same centre, and therefore may represent an error or small inconsistency in EMBOSS output, reported also by [23]. The results showing a comparison of the overall number of IRs found are shown in Fig. 7. Note that with a mismatch of 0, the number of IRs found by EMBOSS was relatively small (less than 1000), and thus barely registers on the figure. We see that IUPACpal consistently identifies a greater number of IRs than EMBOSS.
We have presented IUPACpal, an exact and efficient tool for identifying IRs in IUPAC-encoded DNA sequences. IUPACpal has been shown to perform significantly faster than the popularly used EMBOSS tool. This speed increase appears to hold across several variations of the problem, whereby mismatches and gaps are included as additional parameters. IUPACpal also retains the ability to identify the same IRs as EMBOSS, in addition to increasing the number of IRs found. Finally, IUPACpal is designed in such a way that it could be effortlessly plugged into any pipeline, which currently relies on EMBOSS for IR identification.
Availability and requirements
Project name: IUPACpal
Project home page: https://sourceforge.net/projects/iupacpal/
Operating system(s): GNU/Linux
Other requirements: Not applicable
Programming language: C++
License: GNU GPL
Any restrictions to use by non-academics: License needed
The datasets analysed and generated during the current study are available in the test_data and test_results repositories, respectively:
https://sourceforge.net/p/iupacpal/code/ci/master/tree/test_data/
https://sourceforge.net/p/iupacpal/code/ci/master/tree/test_results/
EMBOSS :
The European molecular biology open software suite
Inverted repeat
IRF:
Inverted repeats finder
IUPAC:
International Union of Pure and Applied Chemistry
IUPACpal :
IUPAC palindrome tool
Ussery DW, Wassenaar TM, Borini S. Computing for comparative microbial genomics: bioinformatics for microbiologists, vol. 8. Berlin: Springer; 2009.
Pearson CE, Zorbas H, Price GB, Zannis-Hadjopoulos M. Inverted repeats, stem-loops, and cruciforms: significance for initiation of DNA replication. J Cell Biochem. 1996;63(1):1–22.
Brázda V, Bartas M, Lỳsek J, Coufal J, Fojta M. Global analysis of inverted repeat sequences in human gene promoters reveals their non-random distribution and association with specific biological pathways. Genomics. 2020.
Čutová M, Manta J, Porubiaková O, Kaura P, Št'astnỳ J, Jagelská EB, Goswami P, Bartas M, Brázda V. Divergent distributions of inverted repeats and g-quadruplex forming sequences in saccharomyces cerevisiae. Genomics. 2020;112(2):1897–901.
Tao X, Yuan S, Chen F, Gao X, Wang X, Yu W, Liu S, Huang Z, Chen S, Xu A. Functional requirement of terminal inverted repeats for efficient protorag activity reveals the early evolution of v (d) j recombination. Natl Sci Rev. 2020;7(2):403–17.
Zhou R, Macaya-Sanz D, Carlson CH, Schmutz J, Jenkins JW, Kudrna D, Sharma A, Sandor L, Shu S, Barry K, et al. A willow sex chromosome reveals convergent evolution of complex palindromic repeats. Genome Biol. 2020;21(1):1–19.
Martínez-Alberola F, Barreno E, Casano LM, Gasulla F, Molins A, Moya P, González-Hourcade M, Del Campo EM. The chloroplast genome of the lichen-symbiont microalga trebouxia sp. tr9 (trebouxiophyceae, chlorophyta) shows short inverted repeats with a single gene and loss of the rps4 gene, which is encoded by the nucleus. J. Phycol. 2020;56(1):170–84.
Warburton PE, Giordano J, Cheung F, Gelfand Y, Benson G. Inverted repeat structure of the human genome: the x-chromosome contains a preponderance of large, highly homologous inverted repeats that contain testes genes. Genome Res. 2004;14(10a):1861–9.
Shlyakhtenko LS, Hsieh P, Grigoriev M, Potaman VN, Sinden RR, Lyubchenko YL. A cruciform structural transition provides a molecular switch for chromosome structure and dynamics. J Mol Biol. 2000;296(5):1169–73.
Benham CJ, Savitt AG, Bauer WR. Extrusion of an imperfect palindrome to a cruciform in superhelical DNA: complete determination of energetics using a statistical mechanical model. J Mol Biol. 2002;316(3):563–81.
Lafrenlere RG, Brown CJ, Rider S, Chelly J, Taillon-Miller P, Chinault AC, Monaco AP, Willard HF. 2.6 mb yac contig of the human x inactivation center region in xq13: physical linkage of the rps4x, phka1, xist and dxs128e genes. Hum Mol Genet. 1993;2(8):1105–15.
Small K, Iber J, Warren ST. Emerin deletion reveals a common X-chromosome inversion mediated by inverted repeats. Nat Genet. 1997;16:96–7.
McDonell N, Ramser J, Francis F, Vinet MC, Rider S, Sudbrak R, Riesselman L, Yaspo ML, Reinhardt R, Monaco AP, et al. Characterization of a highly complex region in xq13 and mapping of three isodicentric breakpoints associated with preleukemia. Genomics. 2000;64(3):221–9.
Small K, Iber J, Warren ST. Emerin deletion reveals a common x-chromosome inversion mediated by inverted repeats. Nat Genet. 1997;16(1):96–9.
Skaletsky H, Kuroda-Kawaguchi T, Minx PJ, Cordum HS, Hillier L, Brown LG, Repping S, Pyntikova T, Ali J, Bieri T, et al. The male-specific region of the human y chromosome is a mosaic of discrete sequence classes. Nature. 2003;423(6942):825–37.
Rozen S, Skaletsky H, Marszalek JD, Minx PJ, Cordum HS, Waterston RH, Wilson RK, Page DC. Abundant gene conversion between arms of palindromes in human and ape y chromosomes. Nature. 2003;423(6942):873–6.
Consortium GP, et al. A global reference for human genetic variation. Nature. 2015;526(7571):68–74.
Marschall T, Marz M, Abeel T, Dijkstra L, Dutilh B, Ghaffaari A, Kersey P, Kloosterman W, Makinen V, Novak A, et al. Computational pan-genomics: status, promises and challenges. Brief Bioinform. 2018;19(1):118–35.
Cisłak A, Grabowski S, Holub J. SOPanG: online text searching over a pan-genome. Bioinformatics. 2018;34(24):4290–2.
Comm, IUPAC-IUB: Abbreviations and symbols for nucleic acids, polynucleotides, and their constituents. Biochemistry. 1970;9(20):4022–7.
Rice P, Longden I, Bleasby A. EMBOSS: the european molecular biology open software suite. 2000.
Kolpakov R, Kucherov G. Searching for gapped palindromes. Theor Comput Sci. 2009;410(51):5365–73.
Sreeskandarajan S, Flowers MM, Karro JE, Liang C. A matlab-based tool for accurate detection of perfect overlapping and nested inverted repeats in dna sequences. Bioinformatics. 2014;30(6):887–8.
Crochemore M, Hancart C, Lecroq T. Algorithms on strings. Cambridge: Cambridge University Press; 2007.
Galil Z, Giancarlo R. Improved string matching with k mismatches. ACM SIGACT News. 1986;17(4):52–4.
Landau GM, Vishkin U. Efficient string matching with k mismatches. Theor Comput Sci. 1986;43:239–49.
Manber U, Myers G. Suffix arrays: a new method for on-line string searches. SIAM J Comput. 1993;22(5):935–48.
Benson G. Inverted repeats finder program. https://tandem.bu.edu/irf/Human21.fa.2.3.5.80.10.40.100000.500000.26.html.
This project was supported by EPSRC DTA grant EP/M50788X-1. The funding body did not influence the study, collection, analysis or interpretation of any data. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 872539.
Department of Informatics, King's College London, 30 Aldwych, London, UK
Hayam Alamro, Mai Alzamel, Costas S. Iliopoulos & Steven Watts
Department of Information Systems, Princess Nourah bint Abdulrahman University, Riyadh, Kingdom of Saudi Arabia
Hayam Alamro
Computer Science Department, King Saud University, Riyadh, Kingdom of Saudi Arabia
Mai Alzamel
Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
Solon P. Pissis
Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
Costas S. Iliopoulos
Steven Watts
HA assisted with proof-reading and formatting. MA assisted with the creation of graphs. CSI supervised the project. SPP wrote the background context and supervised the project. SW created the IUPACpal implementation and performed the analysis. All authors read and approved the final manuscript.
Correspondence to Solon P. Pissis.
Alamro, H., Alzamel, M., Iliopoulos, C.S. et al. IUPACpal: efficient identification of inverted repeats in IUPAC-encoded DNA sequences. BMC Bioinformatics 22, 51 (2021). https://doi.org/10.1186/s12859-021-03983-2
LATEX-L Archives
Mailing list for the LaTeX3 project
[email protected]
LATEX-L Home
Re: latexsearch.com: a new resource for mathematical typesetting
Juergen Fenn <[log in to unmask]>
Mailing list for the LaTeX3 project <[log in to unmask]>
Tue, 8 Feb 2011 09:46:06 +0100
On 08.02.11 09:37, Juergen Fenn wrote:
> But unfortunately I did not get any code snippets from there,
I have to correct this: you get the LaTeX source by simply copying and
pasting the formulae... it's as simple as that, e.g.:
Let {X n d }n≥0be a uniform symmetric random walk on Zd, and Π(d)
(a,b)={X n d ∈ Zd : a ≤ n ≤ b}. Suppose f(n) is an integer-valued
function on n and increases to infinity as n↑∞, and let
$$E_n^{\left( d \right)} = \left\{ {\prod {^{\left( d \right)} } \left(
{0,n} \right) \cap \prod {^{\left( d \right)} } \left( {n + f\left( n
\right),\infty } \right) \ne \emptyset } \right\}$$
Estimates on the probability of the event $$E_n^{\left( d \right)} $$
are obtained for $$d \geqq 3$$ . As an application, a necessary and
sufficient condition to ensure $$P\left( {E_n^{\left( d \right)}
,{\text{i}}{\text{.o}}{\text{.}}} \right) = 0\quad {\text{or}}\quad
{\text{1}}$$ is derived for $$d \geqq 3$$ . These extend some results
obtained by Erdős and Taylor about the self-intersections of the simple
random walk on Zd.
Jürgen. | CommonCrawl |
Matrix sequence
While a one-dimensional sequence is represented by a list, a two-dimensional sequence (a sequence of sequences) can be represented by a matrix. Find the pattern behind the following sequence, where a section of the matrix with unknown coordinates is shown below.
$\begin{bmatrix} &\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ \dots& 1 & 5 & 3 & 7 & 1 & 9 & 5 & 13 & 3 &\dots\\ \dots& 4 & 7 & 2 & 5 & 8 & 1 & 10 & 19 & 4 &\dots\\ \dots& 1 & 5 & 9 & 13 & 2 & 6 & 10 & 14 & 3 &\dots\\ \dots& 4 & 1 & 6 & 11 & 16 & 21 & 2 & 7 & 12 &\dots\\ &\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ \end{bmatrix}$
mathematics pattern calculation-puzzle number-sequence
crb233
The pattern is that the number in the $m$th row and $n$th column is
the number $n$, written in base $m$, reversed, and converted back into base 10. The section shown is the 2nd through 5th lines, 4th through 12th columns.
For example, the $4$th row, $11$th column would be calculated like this: $11_{10}=23_4\rightarrow32_4=14_{10}$
f''
$\begingroup$ How did you ever figure that out?! $\endgroup$
– GentlePurpleRain ♦
$\begingroup$ @GentlePurpleRain Noticing that the second row does +3s, the third row does +4s, and the fourth row does +5s, then converting into the corresponding bases. $\endgroup$
– f''
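For anyone who wants to verify the rule mechanically, here is a tiny program (mine, not the answerer's) that reverses the base-$m$ digits of $n$; it reproduces the worked example and the four rows of the shown section (bases 2 through 5, columns 4 through 12).

#include <iostream>

// Entry in row m (the base), column n: write n in base m, reverse the digits,
// and read the result back in base m.
long long reverse_in_base(long long n, long long m) {
    long long r = 0;
    while (n > 0) {
        r = r * m + n % m;   // append the lowest remaining digit of n
        n /= m;
    }
    return r;
}

int main() {
    // Example from the answer: 11 in base 4 is 23; reversed it is 32, i.e. 14 in base 10.
    std::cout << reverse_in_base(11, 4) << "\n";   // prints 14
    // Reproduce the shown section of the matrix: rows are bases 2..5, columns are 4..12.
    for (long long base = 2; base <= 5; ++base) {
        for (long long n = 4; n <= 12; ++n)
            std::cout << reverse_in_base(n, base) << ' ';
        std::cout << "\n";
    }
}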
About Radius and Surface Area
I am researching the radius of a star and its surface area. One question I have is about the effect of changing radii in stars. If for example we have one star with radius $r$ and another one with radius $2r$, we know that the area would mathematically be $A$ for the first star and $4A$ for the second star, since $A=4 \pi r^2$. However, I intuitively think that this wouldn't be the case for stars: if I doubled the radius, the surface area would quadruple, but that means that there will be more molecules and whatever 'materials' the star is made of, meaning that there would be a greater gravitational force towards the core of the star. Would it thus make sense to say that there would be some sort of 'compression' that occurs which pulls all of this extra mass towards the center (effectively making the star denser), and thus the star with $2r$ actually has a surface area of $3\frac{1}{2}A$ rather than $4A$?
(So technically I'm asking if $A\neq 4\pi r^2$ in the case of stars, because more mass (due to the extra radius) would mean that the gravitational force towards the core would be greater, making the star denser in the process, but also giving it less surface area than what doubling the radius would mathematically correspond to.)
star stellar-structure
Anurag
$\begingroup$ A curious thing about stars is as they get more massive they become less dense. The internal heat pushes the outer layers further outward and more massive stars generate much more internal heat. Granted end of life stars don't follow this neat equation, but the densest main sequence stars are the small ones, the smaller red dwarfs or brown dwarfs if you count them as stars. The higher mass main sequence blue stars have much lower density and considerably lower surface gravity. $\endgroup$ – userLTK Sep 9 '19 at 7:22
You seem to be confusing the simple mathematical relationship between radius and surface area, and the more complex relationship between mass and size.
If you double the radius of a sphere the surface area quadruples. This is pure maths, and is not particular to stars. The volume is multiplied by 8.
But in a star, increasing the amount of matter by a factor of 8 will not result in the volume increasing by a factor of 8. The relationship between the mass of a star and its volume is not simple and depends on factors like age and the particular combination of elements in the star.
Doubling the radius does quadruple the surface area.
Doubling, or even increasing 8 fold, the mass does not quadruple the surface area.
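Spelling out the purely geometric part (my arithmetic, for completeness):
$$ A(2r) = 4\pi(2r)^2 = 4\big(4\pi r^2\big) = 4A(r), \qquad V(2r) = \tfrac{4}{3}\pi(2r)^3 = 8\big(\tfrac{4}{3}\pi r^3\big) = 8V(r), $$
so the factor of 4 in area (and 8 in volume) is forced by geometry alone; whether adding mass actually doubles the radius is the separate, astrophysical question addressed above.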
Integrability, Supersymmetry and Coherent States, pp 411–426
On Some Aspects of Unitary Evolution Generated by Non-Hermitian Hamiltonians
A Unitary Way Towards Quantum Collapse
Miloslav Znojil
First Online: 13 July 2019
Part of the CRM Series in Mathematical Physics book series (CRM)
The possibility of nontrivial quantum-catastrophic effects caused by the mere growth of the imaginary component of a non-Hermitian but \({\mathcal {P}\mathcal {T}}\)-symmetric ad hoc local-interaction potential \(V(x)\) is revealed and demonstrated. Via a replacement of the coordinate \(x \in \mathbb {R}\) by a non-equidistant discrete lattice \(x_n\) with n = 0, 1, …, N + 1 the model is made exactly solvable at all N. By construction, the energy spectrum shrinks with the growth of the imaginary strength. The boundary of the unitarity of the model is reached in a certain strong non-Hermiticity limit. The loss-of-stability instant is identified with the Kato exceptional point of order N at which the model exhibits a complete N-state degeneracy. This phase-transition effect is accessible as a result of a unitary-evolution process in an amended physical Hilbert space.
Quantum systems Unitary evolution Three-Hilbert-space representation of states Non-Hermitian observables Quantum phase transitions Quantum catastrophes Exactly solvable model
Work supported by the GAČR Grant No. 16-22945S.
Appendix: The Metric as a Degree of Model-Building Freedom
The specification of quantum system \({\mathcal {S}}\) requires not only the knowledge of its Hamiltonian H(N)(z) [i.e., at any preselected dimension N and parameter z, the knowledge of matrix (20) in our case] but also a constructive access to the correct probabilistic interpretation of experiments. In other words, having solved the time-dependent Schrödinger equation (2) we still need to replace our manifestly unphysical working Hilbert space \({\mathcal {H}}^{{\mathrm{(auxiliary)}}}\) by the correct physical Hilbert space, i.e., we must modify the inner product accordingly [3].
The Abstract Theory Revisited
In the context of quantum theory of many-body systems it was Freeman Dyson [8] who conjectured that in some cases, an enormous simplification of the variational determination of the bound-state spectra could be achieved via a suitable non-unitary similarity transformation of the given realistic Hamiltonians
$$\displaystyle \begin{aligned} \mathfrak{h} \ \to \ H = \varOmega^{-1}\mathfrak{h} \varOmega\,,\ \ \ \ \varOmega^\dagger\varOmega \neq I. {} \end{aligned} $$
The trick proved particularly efficient in nuclear physics [9]. An amendment of the calculations has been achieved via a judicious choice of the operators Ω converting, e.g., the strongly correlated pairs of nucleons into weakly interacting effective bosons.
In spite of the initial success, the trial-and-error nature of the Dyson-inspired recipes and the fairly high formal mathematical costs of the replacement of the self-adjoint "realistic" operator \(\mathfrak {h}=\mathfrak {h}^\dagger \) by its manifestly non-Hermitian, quasi-Hermitian [9] alternative
$$\displaystyle \begin{aligned} H = \varTheta^{-1} H^\dagger \varTheta \neq H^\dagger\,, \ \ \ \ \ \ \varTheta=\varOmega^\dagger\varOmega {} \end{aligned} $$
have been found, beyond the domain of nuclear physics, strongly discouraging (cf., e.g., [25]).
Undoubtedly, the idea itself is sound. In the context of abstract quantum theory its appeal has been rediscovered by Bender with Boettcher [4]. In effect, these authors just inverted the arrow in Eq. (25). They conjectured that one might start a model-building process directly from Eq. (26), i.e., directly from a suitable trial-and-error choice of a sufficiently simple non-Hermitian Hamiltonian with real spectrum. Their conjecture was illustrated by the family of perturbed imaginary cubic oscillator Hamiltonians
$$\displaystyle \begin{aligned} H_{\epsilon}=-\frac{d^2}{dx^2} + V_{\epsilon}(x) \neq H^\dagger_{\epsilon}\,, \ \ \ \ V_{\epsilon}(x) = {\mathrm{i}}x^3 ({\mathrm{i}}x)^\epsilon \,, \ \ \ \ x \in (-\infty,\infty), \ \ \ \ \epsilon \in (-1,1). {} \end{aligned} $$
Technical details may be found in reviews [2, 3, 9, 18] in which several formulations of the "inverted" stationary version of the quantum model-building strategy may be found.
The Unitarity of Evolution Reestablished
It is worth adding that strictly speaking, the latter strategies are not always equivalent (cf. also further comments in [22, 32]). For our present purposes we may distinguish between the older, more restrictive "quasi-Hermitian" formulation of quantum mechanics (QHQM) of Ref. [9], and the "\({\mathcal {P}\mathcal {T}}\)-symmetric" version of quantum mechanics (PTQM, [2]).
The key difference between the latter two pictures of quantum reality lies in the strictly required non-admissibility of the unbounded Hamiltonians in the QHQM framework of Ref. [9]. This requirement is by far not only formal, and it also makes the QHQM theory mathematically better understood. In contrast, the process of the rigorous mathematical foundation of the extended, phenomenologically more ambitious PTQM theory (admitting the unbounded Hamiltonians as sampled by Eq. (27)) is still unfinished (cf., e.g., the concise progress reports [23, 33]). Hence, also the toy models with the local but not real potentials are far from being widely accepted as fully understood and consistent at present (cf., e.g., [20, 21]).
One is forced to conclude that the ordinary differential (but, unfortunately, unbounded) benchmark model (27) of a \({\mathcal {P}\mathcal {T}}\)-symmetric quantum system (where \({\mathcal {P}}\) means parity, while symbol \({\mathcal {T}}\) denotes the time reversal [4]) is far from satisfactory. At the same time, its strength may be seen in its methodical impact as well as in its simplicity and intuitive appeal. For all of these reasons one is forced to search for alternative \({\mathcal {P}\mathcal {T}}\)-symmetric quantum models which share the merits while not suffering of the inconsistencies.
Needless to add that the unitarity of the quantum evolution can be reestablished for many non-Hermitian models with real spectra. One just has to return to the standard quantum theory in QHQM formulation. The details of the implementation of the idea may vary. Thus, Bender [2] works with an auxiliary nonlinear requirement \(H {\mathcal {P}\mathcal {T}}={\mathcal {P}\mathcal {T}}H\) called "\({\mathcal {P}\mathcal {T}}\)-symmetry of H." In a slightly more general setting Mostafazadeh [3] makes use of the same relation written in the equivalent form \(H^\dagger {\mathcal {P}}={\mathcal {P}}H\), and he calls it "\({\mathcal {P}}\)-pseudo-Hermiticity of H." Still, both of these authors respect the Stone theorem. This means that both of them introduce the correct physical Hilbert-space metric Θ and both of them use it in the postulate
$$\displaystyle \begin{aligned} H =\varTheta^{-1}H^\dagger \varTheta := H^\ddagger. {} \end{aligned} $$
of the so-called quasi-Hermiticity property of the acceptable Hamiltonians. Rewritten in the form
$$\displaystyle \begin{aligned} H^\dagger \varTheta =\varTheta\,H {} \end{aligned} $$
the equation can be interpreted as a linear-algebraic system which defines, for a given Hamiltonian matrix H with real spectrum, the N-parametric family of all of the eligible matrices of metric Θ. For the tridiagonal input matrices H, the solution is particularly straightforward because the algorithm can be given a recurrent form implying that the solutions exist at any input H [34].
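As a quick consistency check (a short derivation of our own, not part of the original chapter), the Dyson-type factorisation \(\varTheta=\varOmega^\dagger\varOmega\) of Eq. (26) indeed solves Eq. (29) whenever H originates from a self-adjoint \(\mathfrak{h}\) as in Eq. (25): using \(\mathfrak{h}^\dagger=\mathfrak{h}\) and \((\varOmega^{-1})^\dagger\varOmega^\dagger=(\varOmega\,\varOmega^{-1})^\dagger=I\),
$$\displaystyle \begin{aligned} H^\dagger\,\varTheta &=\left(\varOmega^{-1}\mathfrak{h}\,\varOmega\right)^\dagger\varOmega^\dagger\varOmega =\varOmega^\dagger\,\mathfrak{h}^\dagger\,\left(\varOmega^{-1}\right)^\dagger\varOmega^\dagger\,\varOmega =\varOmega^\dagger\,\mathfrak{h}\,\varOmega\,,\\ \varTheta\,H &=\varOmega^\dagger\varOmega\,\varOmega^{-1}\mathfrak{h}\,\varOmega =\varOmega^\dagger\,\mathfrak{h}\,\varOmega\,, \end{aligned} $$
so that \(H^\dagger\varTheta=\varTheta H\), i.e., H is quasi-Hermitian with respect to the metric \(\varTheta=\varOmega^\dagger\varOmega\). Conversely, solving the linear system (29) for a given H recovers the admissible metrics without reference to any particular \(\varOmega\).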
N. Moiseyev, Non-Hermitian Quantum Mechanics (Cambridge University Press, Cambridge, 2011), pp. 1–394CrossRefGoogle Scholar
C.M. Bender, Making sense of non-Hermitian Hamiltonians. Rep. Prog. Phys. 70, 947–1018 (2007)ADSMathSciNetCrossRefGoogle Scholar
A. Mostafazadeh, Pseudo-Hermitian representation of quantum mechanics. Int. J. Geom. Meth. Mod. Phys. 7, 1191–1306 (2010)MathSciNetCrossRefGoogle Scholar
C.M. Bender, S. Boettcher, Real spectra in non-Hermitian Hamiltonians having PT symmetry. Phys. Rev. Lett. 80, 5243–5246 (1998)ADSMathSciNetCrossRefGoogle Scholar
C.M. Bender, K.A. Milton, Nonperturbative calculation of symmetry breaking in quantum field theory. Phys. Rev. D 55, 3255–3259 (1997)ADSCrossRefGoogle Scholar
C.M. Bender, K.A. Milton, Model of supersymmetric quantum field theory with broken parity symmetry. Phys. Rev. D 57, 3595–3608 (1998)ADSMathSciNetCrossRefGoogle Scholar
M. Znojil, Non-Hermitian SUSY and singular PT-symmetrized oscillators. J. Phys. A Math. Gen. 35, 2341–2352 (2002)ADSCrossRefGoogle Scholar
F.J. Dyson, General theory of spin-wave interactions. Phys. Rev. 102, 1217 (1956)ADSMathSciNetCrossRefGoogle Scholar
F.G. Scholtz, H.B. Geyer, F.J.W. Hahne, Quasi-Hermitian operators in quantum mechanics and the variational principle. Ann. Phys. (NY) 213, 74–101 (1992)ADSMathSciNetCrossRefGoogle Scholar
H. Langer, C. Tretter, A Krein space approach to PT symmetry. Czech. J. Phys. 54, 1113–1120 (2004)ADSMathSciNetCrossRefGoogle Scholar
R. El-Ganainy, K.G. Makris, M. Khajavikhan, et al., Non-Hermitian physics and PT symmetry. Nat. Phys. 14, 11 (2018)CrossRefGoogle Scholar
M.H. Stone, On one-parameter unitary groups in Hilbert space. Ann. Math. 33, 643–648 (1932)MathSciNetCrossRefGoogle Scholar
T. Kato, Perturbation Theory for Linear Operators (Springer, Berlin, 1966), pp. 1–592CrossRefGoogle Scholar
M. Znojil, Hermitian-to-quasi-Hermitian quantum phase transitions. Phys. Rev. A 97, 042117 (2018)ADSCrossRefGoogle Scholar
M. Znojil, Quantum catastrophes: a case study. J. Phys. A Math. Theor. 45, 444036 (2012); G. Lévai, F. Růžička, M. Znojil, Three solvable matrix models of a quantum catastrophe. Int. J. Theor. Phys. 53, 2875 (2014)Google Scholar
V.V. Konotop, J.-K. Yang, D.A. Zezyulin, Rev. Mod. Phys. 88, 035002 (2016)ADSCrossRefGoogle Scholar
M. Znojil, Time-dependent version of cryptohermitian quantum theory. Phys. Rev. D 78, 085003 (2008); M. Znojil, Three-Hilbert space formulation of quantum theory. SIGMA 5, 001 (2009) (e-print overlay: arXiv:0901.0700)Google Scholar
F. Bagarello, J.-P. Gazeau, F.H. Szafraniec, M. Znojil (eds.), Non-Selfadjoint Operators in Quantum Physics: Mathematical Aspects (Wiley, Hoboken, 2015), pp. 1–407Google Scholar
L.N. Trefethen, M. Embree, Spectra and Pseudospectra (Princeton University Press, Princeton, 2005)zbMATHGoogle Scholar
P. Siegl, D. Krejčiřík, On the metric operator for the imaginary cubic oscillator. Phys. Rev. D 86, 121702(R) (2012)Google Scholar
D. Krejčiřík, P. Siegl, M. Tater, J. Viola, Pseudospectra in non-Hermitian quantum mechanics. J. Math. Phys. 56, 103513 (2015)ADSMathSciNetCrossRefGoogle Scholar
M. Znojil, in Non-Selfadjoint Operators in Quantum Physics: Mathematical Aspects, ed. by F. Bagarello, J.-P. Gazeau, F.H. Szafraniec, M. Znojil (Wiley, Hoboken, 2015), pp. 7–58zbMATHGoogle Scholar
J.-P. Antoine, C. Trapani, in Non-Selfadjoint Operators in Quantum Physics: Mathematical Aspects, ed. by F. Bagarello, J.-P. Gazeau, F.H. Szafraniec, M. Znojil (Wiley, Hoboken, 2015), pp. 345–402Google Scholar
M. Znojil, N-site-lattice analogues of V (x) = ix 3. Ann. Phys. (NY) 327, 893–913 (2012)Google Scholar
J. Dieudonné, Quasi-Hermitian operators, in Proceedings of the International Symposium on Linear Spaces (Pergamon, Oxford, 1961), pp. 115–122Google Scholar
M. Znojil, Solvable quantum lattices with nonlocal non-Hermitian endpoint interactions. Ann. Phys. (NY) 361, 226–246 (2015)MathSciNetCrossRefGoogle Scholar
M. Znojil, H.B. Geyer, Phys. Lett. B 640, 52–56 (2006); M. Znojil, Gegenbauer-solvable quantum chain model. Phys. Rev. A 82, 052113 (2010); M. Znojil, I. Semorádová, F. Růžička, H. Moulla, I. Leghrib, Problem of the coexistence of several non-Hermitian observables in PT-symmetric quantum mechanics. Phys. Rev. A 95, 042122 (2017); M. Znojil, Bound states emerging from below the continuum in a solvable PT-symmetric discrete Schrodinger equation. Phys. Rev. A 96, 012127 (2017)Google Scholar
Z. Ahmed, S. Kumar, D. Sharma, Ann. Phys. (NY) 383, 635 (2017)ADSCrossRefGoogle Scholar
N. Sukumar, J.E. Bolander, Numerical computation of discrete differential operators on non-uniform grids. Comput. Model. Eng. Sci. 4, 691–706 (2003), eq. (27)Google Scholar
M. Znojil, Maximal couplings in PT-symmetric chain models with the real spectrum of energies. J. Phys. A Math. Theor. 40, 4863–4875 (2007); M. Znojil, Tridiagonal PT-symmetric N by N Hamiltonians and a fine-tuning of their observability domains in the strongly non-Hermitian regime. J. Phys. A Math. Theor. 40, 13131–13148 (2007)Google Scholar
S. Longhi, PT-symmetric mode-locking. Optics Lett. 41, 4518–4521 (2016); C.-F. Huang, J.-L. Zeng, Opt. Laser Technol. 88, 104 (2017)Google Scholar
M. Znojil, Admissible perturbations and false instabilities in PT-symmetric quantum systems. Phys. Rev. A 97, 032114 (2018)ADSCrossRefGoogle Scholar
F. Bagarello, M. Znojil, Nonlinear pseudo-bosons versus hidden Hermiticity. II: the case of unbounded operators. J. Phys. A Math. Theor. 45, 115311 (2012)ADSzbMATHGoogle Scholar
M. Znojil, Quantum inner-product metrics via recurrent solution of Dieudonne equation. J. Phys. A Math. Theor. 45, 085302 (2012); F. Růžička, Hilbert space inner product for PT-symmetric Su-Schrieffer-Heeger models. Int. J. Theor. Phys. 54, 4154–4163 (2015)Google Scholar
© Springer Nature Switzerland AG 2019
1. NPI ASCR, Řež, Czech Republic
Znojil M. (2019) On Some Aspects of Unitary Evolution Generated by Non-Hermitian Hamiltonians. In: Kuru Ş., Negro J., Nieto L. (eds) Integrability, Supersymmetry and Coherent States. CRM Series in Mathematical Physics. Springer, Cham
First Online 13 July 2019
exponential distribution calculator
A continuous random variable $X$ is said to have an exponential distribution with parameter $\theta$ if its p.d.f. is given by
$$ f(x)= \begin{cases} \theta e^{-\theta x}, & x>0;\ \theta>0 \\ 0, & \text{Otherwise}, \end{cases} $$
where $x$ is the value of the random variable and $\theta$ (often written $\lambda$) is the rate parameter of the distribution. The mean of the exponential distribution is $1/\lambda$, and the variance is the square of $1/\lambda$. In statistics and probability theory, the exponential distribution is a particular case of the gamma distribution.

Also, there is a strong relationship between the exponential distribution and the Poisson distribution: in a process where events (for example, defects or system failures) occur independently and constantly at an average rate, the number of events in a fixed interval is Poisson distributed, while the waiting time between events is exponentially distributed. The lack of "memory" of the exponential distribution means that a product or part that has already survived a period of time $t_0$ is still like a new product, in the sense that its remaining life has the same distribution as the life of a new product; for the same reason, the exponential distribution cannot describe wear-out failure mechanisms such as creep.

Example 1. Let $X$ denote the time (in hours) required to repair a machine, exponentially distributed with mean repair time 2 hours, so that $F(x)=1-e^{-x/2}$.

a. The probability that a repair time exceeds 4 hours is
$$ \begin{aligned} P(X> 4) &= 1- P(X\leq 4)\\ & = 1- F(4)\\ & = 1- \big[1- e^{-4/2}\big]\\ &= e^{-2}\\ & = 0.1353 \end{aligned} $$

b. The probability that a repair time takes at most 3 hours, and c. the probability that a repair time takes between 2 and 4 hours, are computed in the same way from the distribution function $F$.

Example 2. Let $X$ denote the time (in hours) to failure of a machine, exponentially distributed with rate $\theta = 0.01$ per hour.

a. The probability that the machine fails between $100$ and $200$ hours is
$$ \begin{aligned} P(100< X< 200) &= F(200)-F(100)\\ &=\big[1- e^{-200\times0.01}\big]-\big[1- e^{-100\times0.01}\big]\\ &= e^{-1}-e^{-2}\\ & = 0.3679-0.1353\\ & = 0.2326 \end{aligned} $$

b. The probability that the machine fails within $100$ hours is
$$ \begin{aligned} P(X\leq 100) &= F(100)\\ &=1- e^{-100\times0.01}\\ &= 1-e^{-1}\\ & = 0.6321 \end{aligned} $$

c. The value of $x$ such that $P(X>x)=0.5$ is
$$ \begin{aligned} & P(X> x) = 0.5\\ \Rightarrow & P(X\leq x)= 0.5\\ \Rightarrow & F(x)= 0.5\\ \Rightarrow & 1- e^{-0.01x}= 0.5\\ \Rightarrow & e^{-0.01x}= 0.5\\ \Rightarrow & -0.01x= \ln 0.5\\ \Rightarrow & -0.01x= -0.693\\ \Rightarrow & x= 69.3 \end{aligned} $$

The exponential distribution calculator is a free online tool that displays the mean, median, variance, standard deviation and the probability distribution of the given data in a fraction of seconds. After entering the parameter $\theta$ and the values $A$ and $B$ and clicking "Calculate", it returns $P(X < A)$, $P(X > B)$, $P(A < X < B)$, and the mean, variance and standard deviation of the exponential distribution. To read more, refer to the step-by-step tutorial on the exponential distribution.
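For readers who prefer to verify these numbers programmatically rather than with the online calculator, the worked probabilities above can be reproduced with a few lines of Python. This is a minimal sketch using scipy.stats (not part of the original calculator page); the parameter values 1/2 and 0.01 are simply those of the two examples above.

```python
from scipy.stats import expon

# Example 1: repair time, mean 2 hours (theta = 1/2, scale = 1/theta = 2)
repair = expon(scale=2)
print(repair.sf(4))                         # P(X > 4)         ~ 0.1353
print(repair.cdf(3))                        # P(X <= 3)        ~ 0.7769
print(repair.cdf(4) - repair.cdf(2))        # P(2 < X < 4)     ~ 0.2325

# Example 2: time to failure, rate theta = 0.01 per hour (scale = 100 hours)
failure = expon(scale=100)
print(failure.cdf(200) - failure.cdf(100))  # P(100 < X < 200) ~ 0.2326
print(failure.cdf(100))                     # P(X <= 100)      ~ 0.6321
print(failure.isf(0.5))                     # x with P(X > x) = 0.5, ~ 69.3
print(failure.mean(), failure.var())        # 1/theta = 100 and (1/theta)^2 = 10000
```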
| CommonCrawl
Evidence for European presence in the Americas in ad 1021
Margot Kuitems (ORCID: orcid.org/0000-0002-8803-2650)1,
Birgitta L. Wallace2,
Charles Lindsay2,
Andrea Scifo (ORCID: orcid.org/0000-0002-7174-3966)1,
Petra Doeve (ORCID: orcid.org/0000-0002-8322-2068)3,4,
Kevin Jenkins2,
Susanne Lindauer (ORCID: orcid.org/0000-0001-5363-2755)5,
Pınar Erdil (ORCID: orcid.org/0000-0001-7463-6034)1,
Paul M. Ledger6,7,
Véronique Forbes (ORCID: orcid.org/0000-0002-1302-7603)6,
Caroline Vermeeren8,
Ronny Friedrich (ORCID: orcid.org/0000-0001-5199-1957)5 &
Michael W. Dee (ORCID: orcid.org/0000-0002-3116-453X)1
Nature volume 601, pages 388–391 (2022)
Transatlantic exploration took place centuries before the crossing of Columbus. Physical evidence for early European presence in the Americas can be found in Newfoundland, Canada1,2. However, it has thus far not been possible to determine when this activity took place3,4,5. Here we provide evidence that the Vikings were present in Newfoundland in ad 1021. We overcome the imprecision of previous age estimates by making use of the cosmic-ray-induced upsurge in atmospheric radiocarbon concentrations in ad 993 (ref. 6). Our new date lays down a marker for European cognisance of the Americas, and represents the first known point at which humans encircled the globe. It also provides a definitive tie point for future research into the initial consequences of transatlantic activity, such as the transference of knowledge, and the potential exchange of genetic information, biota and pathologies7,8.
The Vikings (or Norse) were the first Europeans to cross the Atlantic9. However, the only confirmed Norse site in the Americas is L'Anse aux Meadows, Newfoundland9,10,11,12 (Extended Data Figs. 1 and 2). Extensive field campaigns have been conducted at this UNESCO (United Nations Educational, Scientific, and Cultural Organization) World Heritage Site, and much knowledge has been gained about the settlement and its contemporary environment2,13,14,15 (Supplementary Note 1). Evidence has also revealed that L'Anse aux Meadows was a base camp from which other locations, including regions further south, were explored15.
The received paradigm is that the Norse settlement dates to the close of the first millennium9; however, the precise age of the site has never been scientifically established. Most previous estimates have been based on stylistic analysis of the architectural remains and a handful of artefacts, as well as interpretations of the Icelandic sagas, oral histories that were only written down centuries later2,16 (Supplementary Note 2). Radiocarbon (14C) analysis has been attempted at the site, but has not proved especially informative3,17,18. More than 150 14C dates have been obtained, of which 55 relate to the Norse occupation19. However, the calibrated age ranges provided by these samples extend across and beyond the entire Viking Age (ad 793–1066) (Fig. 1 and Extended Data Fig. 3). This is in contrast with the archaeological evidence and interpretations of the sagas. The latter offer differing scenarios for the frequency and duration of Norse activity in the Americas, but both the archaeological and written records are consistent with a very brief occupation (Supplementary Note 3 and Extended Data Fig. 4). The unfavourable spread in the 14C dates is down to the limitations of this chronometric technique in the 1960s and 1970s when most of these dates were obtained. Such impediments included far greater measurement uncertainty and restrictive sample size requirements. Furthermore, many of these samples were subject to an unknown amount of inbuilt age. The term inbuilt age refers to the difference in time between the contextual age of the sample and the time at which the organism died (returned by 14C analysis), which can potentially reach hundreds of years. This offset was also sometimes inappropriately incorporated into summary estimates3.
Fig. 1: Date ranges obtained from our wiggle matches in comparison with legacy 14C data.
a, b, Averaged probability density functions for different sample types (Extended Data Fig. 3, Supplementary Note 5 and Supplementary Data 1). a, Samples susceptible to inbuilt age. Light blue, whale bone (n = 1, uncorrected for marine reservoir effect); red, wood (n = 17); brown, burnt wood (n = 7); black, charcoal (n = 22). b, Short-lived samples. Light green, turf or sod from the Norse buildings (n = 4); olive, outermost rings and twigs from Norse-modified wood (n = 4). c, Wiggle-matched probability density functions for the last growth ring of each wood item. Dark green, 4A 59 E3-1; navy, 4A 68 J4-6; orange, 4A 68 E2-2.
Cosmic radiation events as absolute time markers
In our study, we use an advanced chronometric approach to anchor Norse activity in the Americas to a precise point in time. Exact-year 14C results can be achieved by high-precision accelerator mass spectrometry (AMS) in combination with distinct features in the atmospheric 14C record20,21,22. Measurements on known-age (dendrochronological) tree rings show that 14C production usually fluctuates by less than 2‰ per year23. However, such time series have also revealed that production of the isotope rapidly increased in the years ad 775 and ad 993 by about 12‰ (which manifests as a decrease of about 100 14C yr)24 and about 9‰ (about 70 14C yr)6, respectively. These sudden increases were caused by cosmic radiation events, and appear synchronously in dendrochronological records all around the world25,26,27,28,29. By uncovering these features in tree-ring samples of unknown age, it is possible to effect precise pattern matching between such samples and reference series. In so doing, if the bark edge (or more specifically, the waney edge) is also present, it becomes possible to determine the exact felling year of the tree20. Moreover, it is not necessary to have 14C dates for the outermost growth rings, because once the ring that contains the ad 993 anomaly has been detected, it simply becomes a matter of counting the number of rings to the waney edge. On the basis of the state of development of the earlywood and latewood cells in the waney edge, one can even determine the precise felling season.
Precise dating of Norse activity in the Americas
Here we present 127 14C measurements, of which 115 were performed at the Centre for Isotope Research (CIO; Groningen), and 12 at the Curt-Engelhorn-Center Archaeometry (CEZA; Mannheim). The samples consisted of 83 individual tree rings from a total of 4 wooden items with find numbers 4A 59 E3-1, 4A 68 E2-2, 4A 68 J4-6 and 4A 70 B5-14 (Extended Data Fig. 5, Supplementary Note 4 and Supplementary Data 2). Unfortunately, the last item is excluded from the remainder of our analysis because it spans only nine years and does not include the ad 993 anomaly and therefore cannot be precisely dated (Supplementary Data 2). Anatomical characteristics such as different numbers of growth rings, varying growth-ring widths and the presence–absence of features such as missing rings show that wood items 4A 59 E3-1, 4A 68 E2-2 and 4A 68 J4-6 come from different trees. Furthermore, they comprise at least two different species, specifically fir, possibly balsam fir (Abies cf. balsamea), and juniper/thuja (Juniperus/Thuja type; Extended Data Fig. 6). In addition, the waney edge could be identified in all cases.
The items were found at the locations shown on the site map in Extended Data Fig. 2. The association of these pieces with the Norse is based on detailed research previously conducted by Parks Canada. The determining factors were their location within the Norse deposit and the fact that they had all been modified by metal tools, evident from their characteristically clean, low angle-in cuts30. Such implements were not manufactured by the Indigenous inhabitants of the area at the time30 (Supplementary Note 4).
Our individual 14C results are consistently better than ±2.5‰ (1σ), with some averaged results better than ±1.5‰ (about 12 14C yr). Our corpus of replicated measurements is consistent with statistical expectation, and no statistically significant offset (5.1 ± 7.9 14C yr, 1σ) was evident between the two 14C facilities involved (Supplementary Data 2).
Two steps are used to determine the exact cutting year of each piece of wood. First, the range of possible dates for the waney edges is obtained by standard 14C wiggle matching against the Northern Hemisphere calibration curve, IntCal20 (ref. 23). Here we use the D_Sequence function in the software OxCal (ref. 31) to match the full 14C time-series for each item. The resultant 95% probability (2σ) ranges for the waney edges all lie between ad 1019 and ad 1024 (Fig. 1c). This indicates that the ad 993 anomaly should be present in each of the wood pieces 26 to 31 years before they were cut. In our numbering system, this corresponds to rings −31 to −26, where the waney edge is assigned to be 0, the penultimate ring is assigned to be −1, and so forth.
A second step is then used to determine the exact cutting year of each item. This process hinges on identifying the precise ring in which the ad 993 anomaly is found, and hence the precise date of the waney edge. For this purpose, we use the Classical χ2 approach20,32 to match the 14C data from the six rings (−31 to −26) most likely to contain the ad 993 anomaly against a second Northern Hemisphere reference (henceforth B2018)28. This dataset is preferred because the ad 993 anomaly is less distinct in the smoothed IntCal20 curve (Fig. 2). The six-ring subsets are compared with B2018 such that χ2 becomes minimal for the cutting date of each item. The matches are conducted over a range for each waney edge of ad 1016−1026 (Fig. 2a).
Fig. 2: Exact date matches obtained from the χ2 tests.
The wood items are identified as follows: 4A 59 E3-1 (dark green); 4A 68 J4-6 (navy); 4A 68 E2-2 (orange). a, Outputs of the χ2 test against B2018 (ref. 28; d.f. = 5, critical value = 11.07, 95% probability), where the gold cross marks the year of best fit for the waney edge. b, All of the 14C data from 4A 59 E3-1 (n = 12, 1σ), 4A 68 J4-6 (n = 35, 1σ) and 4A 68 E2-2 (n = 29, 1σ) superimposed on IntCal20 (light blue, 1σ). Inset: detail of the 14C results (error bars omitted for legibility) for growth rings −31 to −26 against B2018 (grey, 1σ)28 and IntCal20 (light blue).
The optimal χ2 value for goodness-of-fit for the waney edge in all three cases is ad 1021 (Fig. 2a). While other solutions pass the χ2 test at 95% probability (ad 1022 for 4A 59 E3-1; ad 1022 for 4A 68 E2-2; ad 1019, ad 1020 and ad 1022 for 4A 68 J4-6), the ideal positioning for the precipitous drop in 14C years in each case is when ring −29 corresponds to ad 992 (inset of Fig. 2b). Furthermore, the formation of a small band of earlywood cells in 4A 68 J4-6 indicates a felling season in spring (Extended Data Fig. 7a). The felling season of 4A 68 E2-2 is summer/autumn (Extended Data Fig. 7b). Past polyethylene glycol (Methods) consolidation hinders determination of the felling season of 4A 59 E3-1.
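To spell out the ring-counting arithmetic implied by this match (a short worked step added here for clarity, using only the numbers quoted above): if ring −29 corresponds to ad 992, then the waney edge (ring 0) was formed 29 growth years later, that is,
$$992 + 29 = \text{ad } 1021,$$
in agreement with the felling year stated above.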
Our result of ad 1021 for the cutting year constitutes the only secure calendar date for the presence of Europeans across the Atlantic before the voyages of Columbus. Moreover, the fact that our results, on three different trees, converge on the same year is notable and unexpected. This coincidence strongly suggests Norse activity at L'Anse aux Meadows in ad 1021. Further evidence reinforces this conclusion. First, the modifications are extremely unlikely to have taken place before this year, because the globally observed sudden decrease in 14C values is evident in ring −29. Second, the probability that the items would have been modified at a later stage is also negligible. This is largely because of the fact that they all had their waney edges preserved. This layer would almost certainly have been stripped off during water transport, so the possibility of driftwood is effectively discounted33. Further, the Norse would have had no need to reclaim deadwood because fresh wood was abundant in the region at the time13. Finally, if it were scavenged material, the probability that all three items would exhibit precisely the same amount of inbuilt age would be vanishingly small.
The Icelandic sagas suggest that the Norse engaged in cultural exchanges with the Indigenous groups of North America34. If these encounters indeed occurred, they may have had inadvertent outcomes, such as pathogen transmission7, the introduction of foreign flora and fauna species, or even the exchange of human genetic information. Recent data from the Norse Greenlandic population, however, show no evidence of the last of these8. It is a matter for future research how the year ad 1021 relates to overall transatlantic activity by the Norse. Nonetheless, our findings provide a chronological anchor for further investigations into the consequences of their westernmost expansion.
We provide evidence that the Norse were active on the North American continent in the year ad 1021. This date offers a secure juncture for late Viking chronology. More importantly, it acts as a new point-of-reference for European cognisance of the Americas, and the earliest known year by which human migration had encircled the planet. In addition, our research demonstrates the potential of the ad 993 anomaly in atmospheric 14C concentrations for pinpointing the ages of past migrations and cultural interactions. Together with other cosmic-ray events, this distinctive feature will allow for the exact dating of many other archaeological and environmental contexts.
After careful examination of the transversal and radial sections of the wood, and ring counting, individual samples were collected under a microscope for annual-ring measurement using a steel blade, following the standard procedure for cleaving tree rings. Sample extraction started at the waney edge. For each wood item, the sample of the waney edge was given the number 0, the second-to-last ring was given the number −1, and so forth.
Sample preparation and measurement
The tree-ring samples were cut into small fragments again using a steel blade. All of the wood samples were chemically pretreated and analysed at CIO, Groningen. For independent control, 12 of the samples were also chemically pretreated and analysed at CEZA, Mannheim. CEZA and CIO recently took part in a multi-laboratory intercomparison exercise to ensure the effectiveness of their pretreatment protocols in which tree-ring samples of unknown age were pretreated to α-cellulose and then analysed for 14C concentration by AMS35.
Procedures at CIO, University of Groningen
The first step involves pretreating the samples to α-cellulose, the most rigid and immobile fraction of the wood36. The method has previously been described in full37. In brief, the wet chemistry involves a series of strong solutions of acid–base–acid and an oxidant, with rinses to neutrality using deionized and ultrapure water after each step. The samples are then either freeze-dried or air-dried at room temperature for 72 h. To eliminate the additive polyethylene glycol (PEG), which was present in all wood items except 4A 68 E2-2, the aqueous pretreatment is preceded by placement of the samples in ultrapure water at 80 °C for 36 h. This latter step builds on past studies of this contaminant38,39,40. In cases where the starting weight was <20 mg, and the wood was not treated with PEG, the holocellulose protocol used at CIO was deemed sufficient37 .
Aliquots (about 5 mg, where possible) of the (alpha-)cellulosic product are weighed into tin capsules for combustion in an elemental analyser (IsotopeCube, Elementar). A small amount of the CO2(g) released is directed into an isotope ratio mass spectrometer (Isoprime 100) for determination of the stable isotope ratios of C and N, but the majority is cryogenically trapped into Pyrex rigs and reduced to graphite under a stoichiometric excess of H2(g) over an Fe(s) catalyst. The graphite (about 2 mg) is subsequently pressed into Al(s) cathodes for measurement by AMS (MICADAS, Ionplus). The data were refined using BATS 4.0 and stored in FileMaker Pro 14.6.0. For quality control purposes, full pretreatment and radioisotope measurements were concurrently conducted on known-age standards (for example, tree-ring material from ad 1503, UK) and background wood (Pleistocene deposit Kitzbühel, Austria). Community-wide isotope ratio mass spectrometry and AMS standards (for example, National Institute of Standards and Technology oxalic acid II, Merck caffeine, and International Atomic Energy Agency C7 and C8) were used to validate the isotope measurements.
Procedures at CEZA, Mannheim
Samples MAMS-45877–45879 and MAMS-47884–47886 are pretreated as holocellulose and are pretreated using the acid–base–acid method (acid/base/acid, HCl/NaOH/HCl) followed by bleaching with NaClO2 to extract the cellulose41. The second batch of samples (MAMS-50444–50449) is pretreated as alpha-cellulose following the protocol used by CIO described above. PEG contamination is removed in the same way as at CIO by washing in hot ultrapure water. The cellulose is combusted to CO2 in an elemental analyser. CO2 is then converted catalytically to graphite. 14C is analysed in-house using an AMS instrument of the MICADAS type. The isotopic ratios (14C/12C of samples, calibration standard oxalic acid II), blanks and control standards are measured simultaneously in the AMS. 14C ages are normalized to δ13C = −25‰ (ref. 42), where δ13C = (((13C/12C)sample/(13C/12C)standard) − 1) × 1,000.
Models in the program OxCal
All models employ OxCal 4.4 and use its standard Metropolis–Hastings Markov chain Monte Carlo algorithm and default priors31. The code for these models is provided in Supplementary Note 5 and in the repository https://github.com/mwdee/LAM1021.
Averaging
Averages are produced for each sample type using the Sum function in OxCal 4.4. In each case, all of the relevant 14C dates are included in bounded phases. The main prior information used by this model is that each date is assumed to be part of a defined group31.
Wiggle matching
14C data for each beam are wiggle matched against the IntCal20 calibration curve in OxCal 4.4 using its D_Sequence function31. All models show high convergence and run to completion.
Pattern matching using the χ2 test
The measured 14C concentrations of tree-ring samples are matched to a reference curve through the classical statistical method of the χ2 test20,22, using the following χ2 function:
$$\chi^2(x)=\sum_{i=1}^{n}\frac{\big(R_i-C(x-r_i)\big)^2}{\delta R_i^{2}+\delta C(x-r_i)^{2}}$$
Here $R_i \pm \delta R_i$ are the measured 14C dates of the sample; $C(x-r_i) \pm \delta C(x-r_i)$ are the 14C concentrations of the reference curve for the year $(x-r_i)$, where $r_i$ are the tree-ring numbers of the samples analysed; and $x$ is a trial age for the waney edge. Measured dates are matched to the reference data (that is, either higher or lower) in such a way that the χ2 becomes minimal for a certain value of $x$, which is the best estimate for the felling date of the tree20. To match the event accurately, a reference dataset is needed that has single-year resolution. We use B2018 as this reference, which combines many annual 14C results for the years relevant to this study28. The pattern-matching analyses are predominantly carried out using Python 3 in Jupyter Notebook 6.3.0. The results on each of the wood items studied are shown in Fig. 2.
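To illustrate how such a minimisation can be carried out in practice, the following is a minimal Python sketch of the procedure described above. It is not the authors' published code from the repository; the arrays of measured dates, ring numbers and the single-year reference curve are placeholders (numpy arrays) to be supplied by the user, and the ring numbering follows the convention used above, with 0 denoting the waney edge.

```python
import numpy as np

def chi2(x, R, dR, rings, ref_year, ref_c14, ref_err):
    """Chi-squared misfit for a trial waney-edge (felling) year x.

    R, dR   : measured 14C ages and 1-sigma errors of the sampled rings
    rings   : ring numbers as used above (0 = waney edge, -1 = penultimate ring),
              so the calendar year of ring r under trial year x is x + r
    ref_*   : single-year reference curve (sorted calendar years, 14C ages,
              1-sigma errors); every needed year is assumed to be present
    """
    R, dR, rings = map(np.asarray, (R, dR, rings))
    years = x + rings
    idx = np.searchsorted(ref_year, years)   # exact-match lookup in the sorted curve
    C, dC = ref_c14[idx], ref_err[idx]
    return np.sum((R - C) ** 2 / (dR ** 2 + dC ** 2))

def best_felling_year(trial_years, R, dR, rings, ref_year, ref_c14, ref_err,
                      critical=11.07):
    """Scan trial felling years and keep those passing the 95% criterion
    (critical value 11.07 for 5 degrees of freedom, as quoted in the text)."""
    scores = {x: chi2(x, R, dR, rings, ref_year, ref_c14, ref_err)
              for x in trial_years}
    passing = {x: s for x, s in scores.items() if s <= critical}
    return min(scores, key=scores.get), passing
```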
Wood taxonomy
From the three main fragments of wood (4A 59 E3-1, 4A 68 E2-2 and 4A 68 J4-6), thin sections are prepared under a stereomicroscope with magnifications of up to 50×. They are cut in three directions (transverse, radial and tangential). As the wood was dry, the sections had to be soaked in soapy water to get rid of air bubbles and to be able to see the diagnostic anatomical features. The slides are examined under a transmitted light microscope with magnifications up to ×400 and identified with the help of relevant literature43,44,45. The three samples do not have any vessels, and therefore must be softwood from conifer species. The most important characteristics for identification are the lack of resin canals, the height of the rays (on average much lower in 4A 68 J4-6 than in the other two samples) and the type, number and distribution of the crossfield pits. Also, presence/absence of axial parenchyma, the shape of the ray cells in crossfields, the pitting in side walls and end walls of the ray cells, and the geographical provenance are taken into account. As wood sample 4A 68 J4-6 is compression wood (reaction wood on the lower side of branches and leaning stems), the distinction between cupressoid and taxoidoid pits cannot be made. The identification for this sample is therefore uncertain with juniper and thuja as possible candidates (Juniperus/Thuja type). The other two samples are identified with confidence as fir (Abies). Within this genus further identification is impossible, but balsam fir (A. balsamea), a very common North American species, would be a good match.
Further information on research design is available in the Nature Research Reporting Summary linked to this paper.
All of the data that support the findings of this study are available in the main text or Supplementary Information. Source data are provided with this paper.
Code availability
The codes of the OxCal models are provided in the Supplementary Information and in the repository https://github.com/mwdee/LAM1021.
Ingstad, H. & Ingstad, A. S. The Viking Discovery of America: The Excavations of a Norse Settlement at L'Anse aux Meadows, Newfoundland (Breakwater Books, 2000).
Wallace, B. L. in Contact, Continuity, and Collapse: the Norse Colonization of the North Atlantic (ed. Barrett, J.) 207–238 (Brepols, 2003).
Nydal, R. A critical review of radiocarbon dating of a Norse settlement at L'Anse aux Meadows, Newfoundland Canada. Radiocarbon 31, 976–985 (1989).
Ledger, P. M., Girdland-Flink, L. & Forbes, V. New horizons at L'Anse aux Meadows. Proc. Natl Acad. Sci. USA 116, 15341–15343 (2019).
Dee, M. W. & Kuitems, M. Duration of activity inestimable due to imprecision of the data. Proc. Natl Acad. Sci. USA 116, 22907 (2019).
Miyake, F., Masuda, K. & Nakamura, T. Another rapid event in the carbon-14 content of tree rings. Nat. Commun. 4, 1748 (2013).
Mühlemann, B. et al. Diverse variola virus (smallpox) strains were widespread in northern Europe in the Viking Age. Science 369, 6502 (2020).
Margaryan, A. et al. Population genomics of the Viking world. Nature 585, 390–396 (2020).
Wallace, B. L. The Norse in Newfoundland: L'Anse aux Meadows and Vinland. Newfoundl. Labrador Stud. 19, 50–43 (2003).
Ingstad, H. The Discovery Norse House-Sites in North America (Harper & Row, 1966).
Lindsay, C. S. Was L'Anse aux Meadows a Norse outpost? Can. Geogr. J. 94, 36–43 (1977).
Ingstad, A. S. & Ingstad, H. The Norse Discovery of America Vols I and II (Univ. Oslo Press, 1986).
Davis, A. M., McAndrews, J. H. & Wallace, B. L. Paleoenvironment and the archaeological record at the L'Anse aux Meadows site, Newfoundland. Geoarchaeology 3, 53–64 (1988).
Ogilvie, A. E., Barlow, L. K. & Jennings, A. E. North Atlantic climate c.AD 1000: millennial reflections on the Viking discoveries of Iceland, Greenland and North America. Weather 55, 34–45 (2000).
Wallace, B. L. L'Anse aux Meadows, Leif Eriksson's home in Vinland. J. North Atl. Special Vol. 2, 114–125 (2009).
Smiley, J. The Sagas of the Icelanders (Penguin, 2005).
Martindale, A. et al. Canadian Archaeological Radiocarbon Database (CARD 2.1) (accessed 13 April 2021) (2016).
Kristensen, T. J. & Curtis, J. E. Late Holocene hunter-gatherers at L'Anse aux Meadows and the dynamics of bird and mammal hunting in Newfoundland. Arctic Anthropol. 49, 68–87 (2012).
Wallace, B. L. in Archaeology in America: An Encyclopedia (eds Cordell, L. S. et al.) 78–83 (ABC-CLIO, 2009).
Wacker, L. et al. Radiocarbon dating to a single year by means of rapid atmospheric 14C changes. Radiocarbon 56, 573–579 (2016).
Oppenheimer, C. et al. Multi-proxy dating the 'Millennium Eruption' of Changbaishan to late 946 CE. Quat. Sci. Rev. 158, 164–171 (2017).
Kuitems, M. et al. Radiocarbon-based approach capable of subannual precision resolves the origins of the site of Por-Bajin. Proc. Natl Acad. Sci. USA 117, 14038–14041 (2020).
Reimer, P. et al. The IntCal20 Northern Hemisphere radiocarbon age calibration curve (0–55 cal kBP). Radiocarbon 62, 725–757 (2020).
Miyake, F., Nagaya, K., Masuda, K. & Nakamura, T. A signature of cosmic-ray increase in AD 774–775 from tree rings in Japan. Nature 486, 240–242 (2012).
Usoskin, I. G. et al. The AD775 cosmic event revisited: the Sun is to blame. Astron. Astrophys. 552, L3 (2013).
Jull, A. T. et al. Excursions in the 14C record at A.D. 774–775 in tree rings from Russia and America. Geophys. Res. Lett. 41, 3004–3010 (2014).
Güttler, D. et al. Rapid increase in cosmogenic 14C in AD 775 measured in New Zealand kauri trees indicates short-lived increase in 14C production spanning both hemispheres. Earth Planet. Sci. Lett. 411, 290–297 (2015).
Büntgen, U. et al. Tree rings reveal globally coherent signature of cosmogenic radiocarbon events in 774 and 993 CE. Nat. Commun. 9, 3605 (2018).
Scifo, A. et al. Radiocarbon production events and their potential relationship with the Schwabe cycle. Sci. Rep. 9, 17056 (2019).
Wallace, B. L. Westward to Vinland: the Saga of L'Anse aux Meadows (Historic Sites Association of Newfoundland and Labrador, 2012).
Bronk Ramsey, C. Bayesian analysis of radiocarbon dates. Radiocarbon 51, 337–360 (2009).
Bronk Ramsey, C., van der Plicht, J. & Weninger, B. 'Wiggle matching' radiocarbon dates. Radiocarbon 43, 381–389 (2001).
Mooney, D. E. A. 'North Atlantic island signature' of timber exploitation: evidence from wooden artefact assemblages from Viking Age and Medieval Iceland. J. Archaeol. Sci. Rep. 7, 280–289 (2016).
Odess, D., Loring, S. & Fitzhugh W. W. in Vikings: the North Atlantic Saga (eds Fitzhugh, W. W. & Ward, E. I.) (Smithsonian Institution Press, 2000).
Wacker, L. et al. Findings from an in-depth annual tree-ring radiocarbon intercomparison. Radiocarbon 62, 873–882 (2020).
Loader, N. J., Robertson, I. & McCarroll, D. Comparison of stable carbon isotope ratios in the whole wood, cellulose and lignin of oak tree-rings. Palaeogeogr. Palaeoclimatol. Palaeoecol. 196, 395–407 (2003).
Dee, M. W. et al. Radiocarbon dating at Groningen: new and updated chemical pretreatment procedures. Radiocarbon 62, 63–74 (2020).
Brock, F. et al. Testing the effectiveness of protocols for removal of common conservation treatments for radiocarbon dating on dating. Radiocarbon 60, 35–50 (2018).
Bruhn, F., Duhr, A., Grootes, P. M., Mintrop, A., Nadeau, M.-J. Chemical removal of conservation substances by 'Soxhlet'-type extraction. Radiocarbon 43, 229–237 (2001).
Ensing, B. et al. On the origin of the extremely different solubilities of polyethers. Nat. Commun. 10, 2893 (2019).
Friedrich, R. et al. Annual 14C tree-ring data around 400 AD: mid- and high-latitude records. Radiocarbon 61, 1305–1316 (2019).
Stuiver, M. & Polach, H. A. Discussion reporting of 14C data. Radiocarbon 19, 355–363 (1977).
Schweingruber F. H. Anatomy of European Woods (Bern and Stuttgart, 1990).
Wheeler, E. A. InsideWood - a web resource for hardwood anatomy. International Association of Wood Anatomists Journal 32, 199–211 (2011).
IAWA Committee. IAWA list of microscopic features for softwood identification. IAWA J. 25, 1–70 (2004).
This work was funded by the European Research Council (grant 714679, ECHOES). M.K., A.S., P.E. and M.W.D. were supported by this grant. We thank Parks Canada for providing samples; the CIO staff, especially S. W. L. Palstra, D. van Zonneveld, R. Linker, S. de Bruin, R. A. Schellekens, P. Wietzes-Land, D. Paul, H. A. J. Meijer, J. J. Spriensma, H. G. Jansen, A. Th. Aerts-Bijma and A. C. Neocleous; and R. Doeve, E. van Hees, A. J. Huizinga, B. J. S. Pope and J. Higdon for their help and support.
Centre for Isotope Research, University of Groningen, Groningen, the Netherlands
Margot Kuitems, Andrea Scifo, Pınar Erdil & Michael W. Dee
Parks Canada Agency, Government of Canada, Dartmouth, Nova Scotia, Canada
Birgitta L. Wallace, Charles Lindsay & Kevin Jenkins
Laboratory for Dendrochronology at BAAC, 's-Hertogenbosch, the Netherlands
Petra Doeve
Cultural Heritage Agency of The Netherlands, Amersfoort, the Netherlands
Curt-Engelhorn-Center Archaeometry, Mannheim, Germany
Susanne Lindauer & Ronny Friedrich
Department of Archaeology, Queens College, Memorial University of Newfoundland, St Johns, Newfoundland, Canada
Paul M. Ledger & Véronique Forbes
Department of Geography, Memorial University of Newfoundland, St Johns, Newfoundland, Canada
Paul M. Ledger
BIAX Consult, Zaandam, the Netherlands
Caroline Vermeeren
M.W.D. conceived the idea, directed the research and co-wrote the paper; M.K. helped to design the research, conducted most of it and co-wrote the paper; B.L.W. was principal advisor on archaeology and sagas; C.L. advised on archaeology; A.S. mainly performed the χ2 analyses; P.D. advised on tree-ring anatomy; K.J. took samples; S.L. conducted pretreatments (Mannheim); P.E. conducted pretreatments (Groningen); P.M.L. and V.F. advised on archaeology and palaeoecology; C.V. analysed wood taxonomy; R.F. oversaw AMS analyses (Mannheim). All co-authors contributed to the final draft of the manuscript.
Correspondence to Margot Kuitems or Michael W. Dee.
Peer review information Nature thanks James Barrett, Dagfinn Skre, Lukas Wacker and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Extended data figures and tables
Extended Data Fig. 1 North Atlantic regions explored by the Norse.
LAM lies on the Northern Peninsula of Newfoundland. The map shows the main settlements on Greenland from where the Norse embarked, and the regions they named Helluland, Markland and Vinland. Map: R. Klaarenbeek.
Extended Data Fig. 2 Schematic overview of the site (after Wallace 2003)2 and origin of our samples.
Indicated are the contours of different Norse structures (A–J) and the locations (brown) at which the wood items were found that are used in the current study.
Extended Data Fig. 3 The 55 legacy 14C dates on Norse contexts at LAM.
Samples susceptible to inbuilt age: light blue, whale bone (n = 1, uncorrected for Marine Reservoir Effect); red, wood (n = 17); brown, burnt wood (n = 7); black, charcoal (n = 22). Short-lived samples: light green, turf or sod samples from the walls of the Norse buildings (n = 4); olive, outermost rings and twigs from Norse-modified wood (n = 4). See Supplementary Data 1.
Extended Data Fig. 4 Overview of the number and order of the different voyages by the Norse to the Americas based on the information from the Sagas.
Indicated for each voyage are the expedition leader (EL), the duration (D), the number of attending crew and the number of ships. Top, summary of the information from the Saga of the Greenlanders, which indicates that the number of winters spent at Vinland is seven. Given the short sailing seasons and the impossibility of making round trips between Greenland and Vinland in one year, the time between the first arrival of the Norse at Vinland and their ultimate return is estimated to be about thirteen years; bottom, summary of the information from the Saga of Erik the Red, with the estimated minimum time between the first arrival of the Norse at Vinland and their ultimate return, amounting to about three years.
Extended Data Fig. 5 Pictures of the wood items studied.
White X indicates the location from where samples were taken. The black bars represent 5 cm. Top left, 4A 59 E3-1; top right, 4A 68 E2-2; bottom left, 4A 68 J4-6; bottom right, 4A 70 B5-14. Photos: M. Kuitems.
Extended Data Fig. 6 Microscope pictures of the thin slices from the wood samples studied.
The white bars represent 0.05 mm, the black bars 0.1 mm. From left to right: radial, tangential and transversal sections of respectively: top, 4A 68 J4-6; middle, 4A 68 E2-2; bottom, 4A 59 E3-1. Photos: M. van Waijjen, BIAX Consult.
Extended Data Fig. 7 Microscopic depiction of the felling season of the waney edge.
The black bars represent 1 mm. ew = early wood, which is formed during the first stage of the growth year; lw = late wood, which is formed at the end of the growth season. a, Wood item 4A 68 E2-2; b, Wood item 4A 68 J4-6. Photos: P. Doeve.
This file contains Supplementary Notes 1–5 and references. (1) L'Anse aux Meadows; (2) Dating of the site; (3) Length of occupation; (4) Sample materials; (5) Codes.
Source Data Fig. 1
Kuitems, M., Wallace, B.L., Lindsay, C. et al. Evidence for European presence in the Americas in ad 1021. Nature 601, 388–391 (2022). https://doi.org/10.1038/s41586-021-03972-8
Issue Date: 20 January 2022
| CommonCrawl
Session code: ps
Session type: Posters
Simon Huang and Svenja Huntemann (Studc)
Unscheduled [no location]
Sarah Malick (Clemson University), A connection between grad-div stabilized FE solutions and pointwise divergence-free FE solutions on general meshes
Ismail Abouamal (Université de Montréal), A fifth-order quantum superintegrable system and its relation with the Painlevé property.
Ahmed Ashraf (Western University), Combinatorial Characters of Symmetric Group
BEATRIZ MOLINA-SAMPER (UNIVERSITY OF VALLADOLID), Combinatorial Maximal Contact Theory
Ahmed Zerouali (University of Toronto), Duistermaat-Heckman Measure of a Twisted q-Hamiltonian Space
Mariia Myronova (Université de Montréal), Dynamical generation of graphene
Pavel Zenon Sejas Paz (University of Brasília), EM heating stimulated water flooding for medium-heavy oil recovery
Masoumeh sajedi (Universite de Montreal), Fourth order Superintegrable systems separating in Cartesian coordinates- Exotic quantum potentials
Felipe Yukihide Yasumura (State University of Campinas), Gradings on upper triangular matrices and their graded automorphisms
RAMIRO PEÑAS GALEZO (Universidad del Atlántico), Mathematical model of coupled elasto plastic membranes
Sadia Ansari (Loyola University Chicago), Minimal Generating Sets of the Symmetric Group
Jeovanny de Jesus Muentes Acevedo (Universidade de Sao Paulo), On the Continuity of the Topological Entropy of Non-autonomous Dynamical Systems
Bruna Cassol dos Santos (Institute of Mathematics and Statistics - University of São Paulo), Qualitative study for a vector-borne epidemic model
Carlos Valero (University of Waterloo), Separation of Variables on Spaces of Constant Curvature
Garcia Gallegos Monica del Rocio (Université du Québec à Montréal (UQÀM)), Stability Conditions and Non Crossing Tree Partitions
Zofia Grabowiecka (Université de Montréal), Subsymmetry decomposition of $H_3$ polytopes
Bruno Costa (University of São Paulo), Symmetries and Lie groupoids
Héctor Barge (Universidad Politécnica de Madrid), Topology and dynamics of quasi-attractors and IFS attractors
Santiago Miler Quispe Mamani (Universidade de Brasilia), Torsion Free Modules Decomposition as Direct Sum of Modules with Rank 1
Sarah Malick
A connection between grad-div stabilized FE solutions and pointwise divergence-free FE solutions on general meshes
We prove, for Stokes, Oseen, and Boussinesq finite element discretizations on general meshes, that grad-div stabilized Taylor-Hood velocity solutions converge to the pointwise divergence-free solution (found with the iterated penalty method) at a rate of $\gamma^{-1}$, where $\gamma$ is the grad-div parameter. However, pressure is only guaranteed to converge when $(X_h, \nabla \cdot X_h)$ satisfies the LBB condition, where $X_h$ is the finite element velocity space. For the Boussinesq equations, the temperature solution also converges at the rate $\gamma^{-1}$. We provide several numerical tests that verify our theory. This extends work that required special macroelement structure in the mesh.
Supervisor: Leo Rebholz
Ismail Abouamal
A fifth-order quantum superintegrable system and its relation with the Painlevé property.
We consider a two-dimensional quantum Hamiltonian in Cartesian coordinates and its coexistence with a fifth-order integral of motion. We impose the superintegrability condition and find explicitly all exotic superintegrable potentials allowing the existence of such an integral. All of these potentials are found to have the Painlevé property and some of them are expressed in terms of Painlevé transcendents and elliptic functions.
Contributor(s): Dr. Pavel Winternitz
Supervisor: Dr. Pavel Winternitz
Ahmed Ashraf
Combinatorial Characters of Symmetric Group
We derive an expression for the generating function of the irreducible character of $\mathfrak{S}_n$ corresponding to the two-row partition $(n-k, k)$ and the hook partition $(n-k, 1^k)$ in terms of cycle statistics of the evaluating permutation. We use the Doubilet inversion formula and the homology of the poset of tilings for our derivation. As an application we give a new proof of M. Rosas' formula for the Kronecker coefficients of two-row and hook partitions.
Supervisor: Graham Denham
BEATRIZ MOLINA-SAMPER
UNIVERSITY OF VALLADOLID
Combinatorial Maximal Contact Theory
Hironaka's characteristic polyhedra represent the combinatorial steps in almost any procedure of reduction of singularities. This is implicit in Hironaka's formulation of the polyhedra game. The main arguments to solve the combinatorial part for the reduction of singularities are contained in Spivakovsky's solution to Hironaka's game. On the other hand, the globalization of the strategies as well as the geometrical structure of the induction to obtain reduction of singularities are the main ideas in the Maximal Contact Theory, developed by Hironaka, Aroca and Vicente for the case of complex analytic spaces. We present here a way of considering the combinatorial problems in terms of Systems of Newton Polyhedra and Characteristic Polyhedra. In this formulation, the combinatorial features of the problems are reflected without losing the global aspects. We give a solution of the problem following the classical lines and in particular we need to project the problem over a "Maximal Contact Support Fabric" that plays the role of the maximal contact variety. This combinatorial structure is free of restrictions on the characteristic and can be applied simultaneously to varieties, foliations, vector fields and differential forms among other possible objects.
Supervisor: FELIPE CANO
Ahmed Zerouali
Duistermaat-Heckman Measure of a Twisted q-Hamiltonian Space
A q(uasi)-Hamiltonian $G$-space $(M,\omega,\Phi)$ can be viewed as a generalization of a symplectic manifold with a Hamiltonian action of a Lie group $G$, where one has an Ad-equivariant group-valued moment map $\Phi:M\rightarrow G$, along with an invariant 2-form $\omega$ on $M$ satisfying a minimal degeneracy condition and whose differential is the pullback of the Cartan 3-form on $G$. As in the symplectic setup, a q-Hamiltonian space has a notion of Liouville form, and its push-forward under the moment map defines a Duistermaat-Heckman (DH) measure on the Lie group $G$ that encodes the volumes of ``symplectically'' reduced spaces. Building on work of Alekseev, Bursztyn and Meinrenken, we give a characterization of the DH measure of a twisted q-Hamiltonian $G$-space. This is a generalization in which the moment map $\Phi$ is equivariant with respect to twisted conjugation: $\mbox{Ad}_{g}^{(\tau)}(h)=g\cdot h\cdot\tau(g^{-1})$ for $g,h\in G$, where $\tau$ is a Dynkin diagram automorphism. Our main result is a localization formula for the Fourier coefficients of the DH measure, and we illustrate its use with examples relevant to Lie theory and mathematical physics.
Supervisor: Eckhard Meinrenken
Mariia Myronova
Dynamical generation of graphene
In recent decades, the astonishing physical properties of carbon nanostructures have been discovered and are nowadays intensively studied. We describe how to obtain a graphene sheet using group-theoretical methods and how to construct a graphene layer using the method of dynamical generation of quasicrystals. Both approaches can be formulated in such a way that the points of an infinite graphene sheet are generated. Moreover, they provide identical graphene layers. The main objective is to describe how to generate graphene step by step from a single point of the Euclidean plane $\mathbb{R}^{2}$. Some 2D examples will be shown.
Copresenter(s): Emmanuel Bourret
Supervisor: Jiri Patera
Pavel Zenon Sejas Paz
University of Brasília
EM heating stimulated water flooding for medium-heavy oil recovery
We report a study of heavy oil recovery by combined water flooding and electromagnetic (EM) heating at a frequency of $2.54$ GHz used in domestic microwave ovens. A mathematical model describing this process was developed. Model equations were solved and the solution is presented in an integral form for the one dimensional case. Experiments consisting of water injection into Bentheimer sandstone cores, either fully water-saturated or containing a model heavy oil, were also conducted, with and without EM heating. The model prediction is found to be in rather good agreement with the experiments. EM energy was efficiently absorbed by water and, under dynamic conditions, was transported deep into the porous medium. The amount of EM energy absorbed increases with water saturation. Oil recovery by water flooding combined with EM heating was up to $37.0\%$ larger than for cold water flooding. These observations indicate that EM heating induces an overall improvement of the mobility ratio between the displacing water and the displaced heavy oil.
Contributor(s): Pacelli L. J. Zitha
Supervisor: Grigori Chapiro
Masoumeh sajedi
Fourth order Superintegrable systems separating in Cartesian coordinates- Exotic quantum potentials
We consider two-dimensional quantum superintegrable Hamiltonians with separation of variables in Cartesian coordinates. We focus on systems that allow fourth-order integrals of motion, as well as potentials satisfying nonlinear ODEs with the Painlevé property. We classify all potentials expressed in terms of Painlevé transcendents and their integrals.
Supervisor: Pavel Winternitz
Felipe Yukihide Yasumura
State University of Campinas
Gradings on upper triangular matrices and their graded automorphisms
It is well known that every automorphism of a central simple associative algebra is inner. The same statement was proved to be true for the associative algebra of upper triangular matrices. Similar questions can be raised for algebras with additional structure, for example, in the context of graded algebras. Recently, graded algebras constitute a subject of intense investigation, due to its naturalness in Physics and Mathematics. The polynomial algebras (in one or more commutative variables) are the most natural structure of an algebra with a grading - given by the usual degree of polynomials. For instance, the classification of finite dimensional semisimple Lie algebras gives rise to naturally $\mathbb{Z}^m$-graded algebras. Kemer solved a very difficult problem known as the Specht property in the theory of algebras with Polynomial Identities in the setting of associative algebras over fields of characteristic zero, using $\mathbb{Z}_2$-graded algebras as a tool. After the works of Kemer, interest in graded algebras increased greatly. In this poster, we present all the gradings on the algebra of upper triangular matrices and show the self-equivalences, the graded automorphisms, the Weyl and diagonal groups, considered as associative, Lie and Jordan algebras. We also cite their graded involutions on the associative case.
Supervisor: Prof. Dr. Plamen E. Koshlukov
RAMIRO PEÑAS GALEZO
Universidad del Atlántico
Mathematical model of coupled elasto plastic membranes
We present the partial differential equations of a model of two flat elastoplastic membranes, of different material, coupled, with tensions and deformations parallel to the plane. The variational formulation of the coupled problem uses the development of Matthias Liero and Alexander Mielke on elasto plastic plates. The existence and uniqueness of solutions is demonstrated by the Lax-Milgram theorem.
Supervisor: Jairo Hernández Monzón
Sadia Ansari
Minimal Generating Sets of the Symmetric Group
The goal of this project is to analyze the minimal generating sets of the symmetric group $S_n$. To accomplish the task, we first build all of them for $S_3$ through $S_5$. We use group automorphisms and cycle-type to facilitate this. Specifically, we organize our search for minimal generating sets by the cycle-types of its elements, and we identify any such $X$ with any of its images under conjugation. As such, "orbit size" becomes the first interesting aspect of the project. Given a minimal generating set $X$ from an orbit, we construct the rooted tree such that each node is an element $w$ of $S_n$. Its path to the root represents a shortest expression for $w$ in terms of the generators. The properties (such as depth and width) of such trees, uniqueness up to automorphism, the posets of minimal generating sets not of the form $\{(1,2), (2,3), \dots, (n-1, n)\}$, and the minimal generating sets (for $n = 3,4,5$) that fit into a family for any $n\geq3$ are studied. (Preliminary report of work started under the auspices of the McNair program at Loyola Chicago.)
Supervisor: Dr. Aaron Lauve
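As an aside on the construction described in the abstract above, the rooted tree of shortest expressions can be viewed as a breadth-first search of the Cayley graph of $S_n$ over the chosen generators. The sketch below is an illustrative example only, not the author's code; the generating set $\{(1\,2), (1\,2\,3\,4\,5)\}$ of $S_5$ is just one convenient minimal choice, written here in 0-indexed tuple form.

```python
from collections import deque

def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def cayley_bfs_tree(n, generators):
    """Breadth-first search of S_n from the identity over the given generators.
    Returns parent pointers; following them back to the identity spells out
    a shortest expression for each permutation in terms of the generators."""
    identity = tuple(range(n))
    parent = {identity: None}
    queue = deque([identity])
    while queue:
        w = queue.popleft()
        for g in generators:
            v = compose(g, w)
            if v not in parent:
                parent[v] = (w, g)
                queue.append(v)
    return parent

def word_length(perm, parent):
    """Length of the shortest expression recorded for perm."""
    k = 0
    while parent[perm] is not None:
        perm = parent[perm][0]
        k += 1
    return k

# Minimal generating set {(1 2), (1 2 3 4 5)} of S_5, 0-indexed
transposition = (1, 0, 2, 3, 4)
five_cycle = (1, 2, 3, 4, 0)
tree = cayley_bfs_tree(5, [transposition, five_cycle])
print(len(tree))                                # 120, so the set generates S_5
print(max(word_length(p, tree) for p in tree))  # depth of the rooted tree
```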
Jeovanny de Jesus Muentes Acevedo
Universidade de Sao Paulo
On the Continuity of the Topological Entropy of Non-autonomous Dynamical Systems
Let $M$ be a compact Riemannian manifold. The set $\text{F}^{r}(M)$ consisting of sequences $(f_{i})_{i\in\mathbb{Z}}$ of $C^{r}$-diffeomorphisms on $M$ can be endowed with the compact topology or with the strong topology. A notion of topological entropy is given for these sequences. I will prove this entropy is discontinuous on each sequence if we consider the compact topology on $\text{F}^{r}(M)$. On the other hand, if $ r\geq 1$ and we consider the strong topology on $\text{F}^{r}(M)$, this entropy is a continuous map.
Supervisor: Albert Meads Fisher
Bruna Cassol dos Santos
Institute of Mathematics and Statistics - University of São Paulo
Qualitative study for a vector-borne epidemic model
Many efforts have been made to describe the dynamics of infectious diseases and to identify which parameters have the most epidemiological importance. We study a classical SIR-SI model for arboviruses that allows for variation in the size of the human population. Under this hypothesis, we develop a qualitative study of the mathematical model, analysing the local and global stability of the equilibria. The disease-free equilibrium is globally stable if $Ro \leq 1$ and unstable if $Ro>1$. For the endemic equilibrium we show that if $Ro>1$ then this equilibrium is globally stable. The global stability results were verified using the Poincaré–Bendixson criterion for competitive systems. Finally, we perform a sensitivity analysis with the aim of identifying the most important parameters for the spread of the disease, through the $Ro$ parameter, and for the prevalence of the disease, through the sensitivity of the endemic equilibrium. We found that the bite rate and the mortality rate of the vector are the most sensitive parameters.
Contributor(s): Sergio Muniz Oliva Filho, Joyce da Silva Bevilacqua,
Supervisor: Sergio Muniz Oliva Filho
Carlos Valero
Separation of Variables on Spaces of Constant Curvature
Given a pseudo-Riemannian manifold $(M,g)$, an important and ubiquitous partial differential equation one can define is the Laplace-Beltrami equation $$g^{ij}(q) \nabla_i \nabla_j \psi + V(q)\psi = E\psi$$ which reduces to the Schrodinger equation in the Riemannian case, and a (generalized) wave equation in the Lorentzian case. Separation of variables is an old but powerful method for obtaining exact solutions to this equation, but it is not always possible. So the question we address is the following: how can we determine and classify the coordinate systems on $M$ which admit a separable solution of the Laplace-Beltrami equation? We restrict ourselves to spaces of constant curvature, in which the theory of conformal Killing tensors yields an efficient and exhaustive approach to this problem. We review some of the recent work done on this problem, highlighting interesting results, and focusing on the much more interesting Lorentzian cases, which include Minkowski, de Sitter, and anti-de Sitter spaces.
Contributor(s): Raymond G. McLenaghan
Supervisor: Raymond G. McLenaghan
Garcia Gallegos Monica del Rocio
Université du Québec à Montréal (UQÀM)
Stability Conditions and Non Crossing Tree Partitions
Noncrossing tree partitions were introduced by Garver and McConville to obtain an explicit description of the wide subcategories in the module category of a family of representation finite gentle algebras. Very recently, it was proven by Yurikusa that for any finite dimensional algebra of finite representation type its wide subcategories are realizable as semi-stable subcategories in the sense of King. Our goal is to provide a combinatorial construction of Yurikusa's stability conditions for the wide subcategories defined by noncrossing tree partitions. This project is the result of a Mitacs Globalink Research Internship hosted by UQAM.
Supervisor: Alexander Garver
Zofia Grabowiecka
Subsymmetry decomposition of $H_3$ polytopes
Polytopes of a non-crystallographic Coxeter group in 3D are considered. The method of decorating Coxeter–Dynkin diagrams, which allows one to describe polytopes in all dimensions, is presented. The method of decomposing the vertices of $H_3$ polytopes into orbits of lower symmetry groups is explained. The decomposition is provided for polytopes with 60 vertices.
Bruno Costa
University of São Paulo
Symmetries and Lie groupoids
Starting from a given action of a Lie groupoid on a fiber bundle, we show how to construct induced actions of certain Lie groupoids, derived from the original one, on certain fiber bundles, derived from the original one: this is an essential technical feature needed to understand what is meant by invariance of a tensor field under the action of a Lie groupoid. As the most important example, we are able to show in which sense the multicanonical form $\theta$ and the multisymplectic form $\omega$ of the covariant hamiltonian formalism are invariant under the appropriate induced action, and similarly, the forms $\theta_\mathcal{H}$ and $\omega_\mathcal{H}$, given by the pull-back of the forms $\theta$ and $\omega$ by the hamiltonian $\mathcal{H}$, respectively, are invariant under the action of a Lie groupoid leaving the hamiltonian invariant. This is joint work with Frank Michael Forger (University of São Paulo).
Supervisor: Frank Michael Forger
Héctor Barge
Topology and dynamics of quasi-attractors and IFS attractors
In this poster some results about quasi-attractors of flows and attractors of IFS (Iterated Function Systems) are presented. For instance, we show that every compact subset of the Euclidean space is a quasi-attractor of some flow and that every attractor of a contractive and invertible IFS has either trivial shape or the shape of the Hawaiian earring, provided that it has empty interior. All the results presented have been obtained in collaboration with Antonio Giraldo and José M.R. Sanjurjo.
Contributor(s): Antonio Giraldo and José M.R. Sanjurjo
Supervisor: José M.R. Sanjurjo
Santiago Miler Quispe Mamani
Universidade de Brasilia
Torsion Free Modules Decomposition as Direct Sum of Modules with Rank 1
The aim of this paper is to present the result given by Bass in [1], which determines a condition on the integral domain R so that every finitely generated torsion free module is written as a direct sum of modules of rank 1. We show that a necessary condition is that every ideal in R is generated by two elements, in other words, that these domains are almost Dedekind domains. Then, we apply the result in the description of torsion free modules of finite rank over the coordinate rings of singular curves, whose singularities are nodal or cuspidal. Key-words: Torsion free modules. Modules of rank 1. Nodal and Cuspidal. [1] BASS, H. Torsion free and projective modules, Trans. Amer. Math. Soc. 102, p. 319-327, 1962.
Supervisor: Flaviana Andrea Ribeiro | CommonCrawl |
original innovation
Classifying bridges for the risk of fire hazard via competitive machine learning
V. K. Kodur1 &
M. Z. Naser2
This study presents a machine learning (ML) approach to identify the vulnerability of bridges to fire hazard. To develop this ML approach, data on a series of bridge fires were first collected and then analyzed through three algorithms: random forest (RF), support vector machine (SVM) and generalized additive model (GAM), competing to yield the highest accuracy. As part of this analysis, 80 steel bridges and 38 concrete bridges were assessed. The outcome of this analysis shows that the proposed ML-based approach can be effectively applied to arrive at a risk-based classification of bridges from a fire hazard point of view. In addition, the developed ML algorithms are also capable of identifying the most critical features that govern the vulnerability of bridges to fire hazard. In parallel, this study showcases the potential of integrating ML into structural engineering applications as a supporting tool for analysis (i.e. in lieu of experimental tests, advanced simulations, and analytical approaches). This work emphasizes the need to compile data on bridge fires from around the world into a centralized and open-source database to accelerate the integration of ML into fire hazard evaluation.
Bridges are strategic structures that facilitate transportation and supply chain operations. As such, bridges are to be designed to withstand normal and extreme load conditions. However, in current practice, bridge design is carried out to mitigate most loading conditions (including wind and earthquakes), with the exception of fire hazard (AASHTO LRFD, 2017). From this perspective, there exist only a few general guidelines aimed at limiting the vulnerability of bridges to fire hazard, given in the National Fire Protection Association (NFPA) Report 502 (NFPA, 2017). It should be stressed that even the NFPA guidelines are general and qualitative in nature and are only applicable to bridges with spans greater than 300 m. Such bridges constitute only a small percentage of the total number of bridges in a given region.
Unlike building fires, which involve the burning of cellulosic materials, bridge fires are often triggered by the burning of hydrocarbon fuels and have been shown to rapidly reach temperatures exceeding 1000 °C within a short period of time (Kodur & Naser, 2020; Peris-Sayol et al., 2017). Similarly, while structural systems in buildings are often insulated and protected by active fire measures (i.e. sprinklers), load-bearing structural systems in bridges continue to be designed with or without any active or passive fire protection features. Given the above, and noting that bridges are often far from the nearest fire department (to fight fires), continuously exposed to the surrounding environment, and in service for extended periods, bridges are vulnerable to extreme events, especially fire hazard. Recent incidents have shown that fires on bridges can lead to the development of significant thermally-induced forces on connections and result in collapse (NTSB, 2017; Eisel et al., 2007). Fortunately, bridge fires are often extinguished quickly due to burnout of the limited fuel present or firefighting activities. However, although such incidents may not cause collapse, they can still induce significant damage to load-bearing members, which can result in closure of the bridge for weeks for repair and retrofitting (Garlock et al., 2012).
The above discussion indicates that it is of the highest importance to properly identify vulnerable bridges from a fire hazard perspective to enable authorities to take appropriate actions at the design stage itself to improve the resilience of such bridges. However, the large number of bridges (e.g. over 660,000 and over 878,000 operational bridges in the US and China, respectively) implies that identifying bridges vulnerable to fire can be challenging (Statista, 2020; LTBP, 2020). It is due to such challenges that little research has been directed towards identifying fire-vulnerable bridges (Giuliani et al., 2012; Quiel et al., 2015; Aziz & Kodur, 2013; Kodur et al., 2017; Kodur & Naser, 2019; Alos-Moya et al., 2017; Ma et al., 2019). Of the existing limited works, the majority applied methods similar to those adopted in identifying bridges vulnerable to wind and seismic hazard (i.e. importance factors) (Naser & Kodur, 2015a). Other works applied statistical and fragility analysis methods to arrive at a methodology to enable assessment of bridges against fire (Gidaris et al., 2017).
Beyond the above noted traditional methods, machine learning (ML) continues to present itself as a novel and effective approach to tackle data-oriented problems in the civil engineering domain (Naser, 2018; Naser, 2019a; Gandomi et al., 2011; Solhmirzaei et al., 2020; Taffese & Sistonen, 2017; Hodges et al., 2019). For example, ML methods have proven effective when applied to a variety of problems within the domain of bridge design and maintenance, including bridge assessment (Mangalathu et al., 2019), seismic analysis of bridges (Mangalathu & Jeon, 2019), maintenance of bridges (Okazaki et al., 2020), and traffic path planning (Zuo et al., 2019). However, such ML approaches are yet to be applied to classifying bridges for fire hazard.
This paper aims to bridge the above knowledge gap by applying ML to identify and classify bridges according to their vulnerability to fire hazard. Three algorithms, namely Random forest (RF), Support vector machine (SVM) and Generalized additive model (GAM), are developed and applied to examine how various features extracted from a large set of bridges, traffic flows and fire incidents can influence vulnerability to fire. These algorithms are trained to analyze 80 steel bridges and 38 concrete bridges in pursuit of learning hidden patterns responsible for bridges' vulnerability to fire hazard. Overall, all algorithms performed well, with an accuracy of about 70% and a classification rate of about 100 bridges per minute. Due to the learning nature of ML algorithms, the algorithms developed herein can be further fine-tuned with the addition of new bridge-related features and fire incidents. A key take-home message is that ML can be a valuable tool to automatically analyze large bridge populations to identify those of high vulnerability to fire.
Development of bridge fire database
To effectively develop an ML-based approach, a good set of fire incidents that occurred on bridges is needed. Thus, a comprehensive literature review was first carried out to document notable bridge fire incidents. This review documented key and common factors that govern the response of bridges to fire, as documented in departments of transportation (DOTs) reports and from consultation with practicing engineers (Eisel et al., 2007; Quiel et al., 2015; NYDOT, 2008; Bocchini et al., 2014; Qiang et al., 2009; Davis & Tremel, 2008; Guthrie et al., 2009; Culliton, 2018). These documented factors include bridge (structural) features, traffic flow patterns, and fire characteristics. Overall, this survey led to collecting data on fire incidents in 118 bridges (see Fig. 1). While this study considered three main feature groups, other features can also be included once information on them is reliably obtained or collected. It is our intention to present a general approach to enable adoption of ML into this domain, and we invite interested readers to extend and update the presented database and approach as shown in earlier works (Naser & Kodur, 2015b).
Details on the compiled bridge database
Bridge features
The identified physical features that govern the vulnerability of bridges against fire hazard include: the structural system and construction materials used in load-bearing elements, and the span and age of the bridge. Figure 1 shows that the compiled database features 80 steel bridges and 38 concrete bridges that experienced fire incidents over the last three decades. The same figure also shows that, of these bridges, 17 were box-based, 15 were cable-based, 65 were girder-type bridges, and 22 were truss-like bridges. In terms of bridge span, the average span of all compiled bridges is 117 m. The full distribution of spans in all bridges is shown in Fig. 1. Finally, the average age of the collected bridges is 45 years, which coincides with that reported by US DOTs (LTBP, 2020).
Traffic features
Within traffic features, both geographical significance and the number of lanes on the bridge were included, as they represent the significance of the bridge to the region, the expected traffic flow and the availability of alternative routes – factors that indirectly imply the adverse consequences of the loss of functionality of the bridge due to fire. The geographical significance of bridges is grouped under three classes: rural, sub-urban and urban, as noted in a previous work by the authors (Kodur & Naser, 2013). Figure 1 shows that there are 32 rural bridges and 43 sub-urban and urban bridges. In terms of the number of lanes, 50% of the bridges contain 1–3 lanes and the other half contain 4–11 lanes.
Fire characteristics
Herein, two features are identified to be of importance: the possible fuel type involved in burning and the position of the fire breakout on the bridge. For the first feature, fuel type varied between gasoline/diesel, or hydrocarbon fuels, and other types of flammables (i.e. chemicals, wildfires etc.) – see Fig. 1. For simplicity, three positions for fire breakout scenarios were considered: in the vicinity of the bridge, above the bridge and under the bridge, with 4, 56 and 58 bridge fires belonging to these positions, respectively.
Damage magnitude
Contingent upon the severity of fire, the magnitude of damage the bridge experiences and any possible traffic stress to the surrounding transportation network can vary. On one hand, if a bridge does not experience significant structural damage from fire, then this bridge can be re-opened for traffic in short order. On the other hand, moderate to major damage to structural members of a bridge requires proper inspection and repair, which in turn necessitates closure of the bridge from safety considerations. To enable such inspection and timely repairs, through traffic needs to be reduced on the route and has to be detoured. Thus, two classes of damage are considered herein: no damage to the bridge structure (does not necessitate full shutdown), and damage (necessitates shutdown). Overall, 69 of the surveyed bridges experienced nil to minor damage, and 66 underwent major damage (including collapse).
Description of machine learning approach
This section presents a general description of, and the steps associated with, the development of the ML approach and the associated ML algorithms.
General approach
For the application of an ML approach to a problem, a user must select a series of ML algorithms. The selection process can be purely arbitrary or can result from a sensitivity analysis (Barber, 2012). Oftentimes, the use of a single ML algorithm to understand a phenomenon can be sufficient. However, recent experience has shown that this practice might lead to biased ML-based solutions in some situations, and in a few other instances it may not produce a near-optimal solution in a timely manner. With this consideration, this study explores the use of multiple algorithms to harness the advantages of a multi-algorithm search. In this approach, ML algorithms search in a competitive arrangement to look for the best possible solutions (which, from the view of this study, refers to accurately classifying bridges for the risk of fire hazard). Once a solution is identified by each algorithm, a series of fitness metrics is applied to identify the fittest solution for the problem (Naser & Alavi, 2020). Following this procedure, the identified solution is not only vetted across different search mechanisms but is also vetted through different ML analysis stages (see Fig. 2). Once an ML algorithm is properly validated, it is ready for deployment to assess new bridges for fire hazard. With the addition of new bridge fires and information, the algorithm can be re-tuned to improve its prediction capability.
A flowchart illustrating the various steps for the application of the ML approach
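To make the competitive arrangement concrete, the sketch below shows one way such a multi-algorithm search could be wired together; the synthetic data, the logistic-regression stand-in for GAM, and the choice of ROC-AUC as the fitness metric are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of a competitive multi-algorithm search (assumed setup, not the study's code).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression  # stand-in for the GAM stage here
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for the 118-bridge feature table (binary damage label).
X, y = make_classification(n_samples=118, n_features=8, random_state=0)

candidates = {
    "RF":  RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0)),
    "GLM": LogisticRegression(max_iter=1000),  # a GAM would replace this with additive terms
}

# Each algorithm "competes": the cross-validated fitness metric decides the winner.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> fittest:", best)
```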
Once vulnerable bridges are identified for fire risk, these bridges can be fitted with the measures needed to enhance fire safety and minimize their vulnerability to fire risk. Such measures include the provision of fire insulation to steel members or putting in place measures to minimize the occurrence of fire in the vicinity of the bridge (e.g. no storage of flammable materials under bridges). Other solutions can also be adopted as noted in recent works (Naser, 2019b).
Random forest (RF)
Random forest (RF) is an algorithm that capitalizes on the principles of ensemble learning, in which a tree-like algorithm is applied multiple times and the resulting trees are joined together to form a more powerful prediction model that applies a majority-voting principle – see Fig. 3. RF can be used in classification and is defined as a nonparametric classifier (i.e. it does not require assumptions to be made on the form of the relationship between the predictors and the response variable). In a classification problem, the majority-voting method is used to arrive at the final output of the RF analysis. A typical formulation of RF is presented herein:
$$ Y=\frac{1}{J}\sum \limits_{j=1}^J{C}_{j, full}+\sum \limits_{k=1}^K\left(\frac{1}{J}\sum \limits_{j=1}^J{contribution}_j\left(x,k\right)\right) $$
where J is the number of trees in the forest, k represents a feature in the observation, K is the total number of features, and $C_{j,full}$ is the average of the entire dataset (the initial node) for tree j.
Representation of a typical RF algorithm topology (Sirakom, 2020)
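As a concrete illustration of the ensemble and majority-voting idea described above, the following sketch fits a random forest to a small bridge-feature table; the column names, toy rows and parameter values are hypothetical and only mirror the feature groups of Section 2, not the authors' dataset.

```python
# Illustrative random-forest classifier for bridge fire damage (assumed features/labels).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "material": ["steel", "concrete", "steel", "steel"],
    "span_m":   [120, 45, 300, 80],
    "age_yr":   [40, 60, 25, 50],
    "fuel":     ["hydrocarbon", "gasoline", "other", "hydrocarbon"],
    "damage":   [1, 0, 1, 0],       # 1 = damage requiring shutdown, 0 = nil/minor
})                                   # toy rows; the real database holds 118 incidents

X = pd.get_dummies(df.drop(columns=["damage"]))   # one-hot encode categorical features
y = df["damage"]

rf = RandomForestClassifier(
    n_estimators=500,      # J trees in Eq. (1); the majority vote gives the class
    max_features="sqrt",   # random feature subset per split (the "random" in RF)
    random_state=42,
)
rf.fit(X, y)
print(rf.predict(X))       # each tree votes; the forest returns the majority class
```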
Support vector machine (SVM)
Support vector machine (SVM) is an algorithm often applied in classification problems. SVM arrives at solutions by obtaining a separating hyperplane among classes (see Fig. 4). The SVM algorithm can be illustrated by considering a training data set T = {(xi, yi), i = 1, 2, …, N}. This data set consists of N m-dimensional feature vectors xi and their corresponding labels yi ∈ {−1, 1}. SVM aims to find the separating boundary between two or more classes. This is done by maximizing the margin between the decision hyperplane and the data set while minimizing the misclassification. The decision/separating hyperplane is defined as
$$ {w}^tx+b=0 $$
where w represents the weight vector defining the direction of the separating boundary, whereas b denotes the bias. The decision function is defined as
$$ f(x)=\mathit{\operatorname{sgn}}\left({w}^t{x}_i+b\right) $$
where \( \operatorname{sgn}\left(\alpha \right)=\left\{\begin{array}{c}1,\ \alpha \ge 0\\ -1,\ \alpha <0\end{array}\right. \). The SVM algorithm aims to maximize the margin through minimizing \( \left\Vert w\right\Vert \), which results in the following constrained optimization problem
$$ \underset{w,\xi }{\min }{\tau}_1\left(w,\xi \right)=\underset{w,\xi }{\min}\left[\frac{1}{2}{\left|\left|w\right|\right|}^2+C\sum \limits_{i=1}^N{\xi}_i\right] $$
subject to yi(wtxi + b) ≥ 1 − ξi, ξi > 0, C > 0, i = 1, 2, …, N.
Demonstration of SVM space
where τ1(.), ‖.‖2, and ξi denote the objective function, the L2-norm, and the slack variables, respectively. When the data are linearly inseparable, SVM offers an alternative solution for classification. To this end, SVM employs a kernel trick, projecting the data into a higher-dimensional feature space to make the data separable, as illustrated in Fig. 4 (Han et al., 2012). The kernel function, in fact, defines the nonlinear mapping from the input space into a high-dimensional feature space.
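For completeness, the following sketch shows how such a soft-margin, RBF-kernel SVM could be set up in practice; the placeholder data and parameter values are assumptions for illustration only.

```python
# Sketch of the soft-margin RBF-kernel SVM of Eqs. (2)-(4); values are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=118, n_features=8, random_state=1)  # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

svm = make_pipeline(
    StandardScaler(),        # margins are scale-sensitive, so standardize the features
    SVC(kernel="rbf",        # kernel trick: implicit map to a higher-dimensional space
        C=1.0,               # penalty C on the slack variables xi_i
        gamma="scale"),      # RBF width; tuned jointly with C in practice
)
svm.fit(X_tr, y_tr)
print("held-out accuracy:", svm.score(X_te, y_te))
```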
Generalized additive model (GAM)
The generalized additive model (GAM) is a nonparametric extension of the generalized linear model (GLM). GAM can be useful in scenarios where a user may not have an a priori reason or preference for choosing a particular algorithm or response function (such as linear, quadratic, etc.). GAM separates features into knots and then attempts to fit polynomial functions between such knots. In GAM, the model fit follows a deviance/likelihood, and hence fitted models are directly comparable using likelihood techniques. In a GLM, the outcome class (Y) of a phenomenon is assumed to be a linear combination of the coefficients (β) and features (x1, …, xn), as seen in Eq. 5.
$$ Y={\beta}_0+{\beta}_1{x}_{1,i}+\dots +{\beta}_n{x}_{n,i} $$
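A minimal logistic-GAM sketch is given below, assuming the third-party pyGAM package (the paper does not name its software): smooth spline terms s(·) are fitted between knots for the continuous features, while factor terms f(·) handle integer-coded categorical features. The data are placeholders.

```python
# Illustrative logistic GAM; pyGAM is an assumed implementation choice.
import numpy as np
from pygam import LogisticGAM, s, f

rng = np.random.default_rng(0)
n = 118
X = np.column_stack([
    rng.uniform(20, 300, n),   # column 0: span (m), continuous
    rng.uniform(5, 90, n),     # column 1: age (years), continuous
    rng.integers(0, 3, n),     # column 2: fire position, integer-coded category
])
y = rng.integers(0, 2, n)      # placeholder damage label

gam = LogisticGAM(s(0) + s(1) + f(2))   # additive: smooth(span) + smooth(age) + factor(position)
gam.fit(X, y)
print(gam.predict_proba(X)[:5])         # probability of the "damage" class
```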
Machine learning model development and validation
The algorithms discussed in Section 3 are applied to analyze the database compiled in Section 2. For a start, the compiled database was randomly shuffled to minimize bias that might arise from a particular feature or fire incident. After that, the database was split into a training set (80%) and a testing/validation set (20%), the latter used to evaluate the performance (i.e. fitness) of the machine learning techniques once the training process is complete (Hasni et al., 2018). In addition, k-fold cross-validation was also applied. In this technique, the database is further divided into k subsets. Each subset is then held out in turn, while the shuffling of data is repeated k times, such that each time one of the k subsets is used as the test/validation set and the other k−1 subsets are put together to form a training set. This method significantly reduces bias and variance as well as limits overfitting of the algorithms. A fold of k = 5 is used herein.
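The sketch below mirrors the 80/20 split and the 5-fold cross-validation described above; the data and the classifier are placeholders for illustration, not the study's actual pipeline.

```python
# Sketch of the 80/20 split and 5-fold cross-validation (placeholder data and model).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=118, n_features=8, random_state=0)

# 80% training, 20% testing/validation, shuffled to reduce bias from any one incident
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, shuffle=True, stratify=y, random_state=0)

# k = 5: each fold is held out once while the remaining k-1 folds train the model
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X_train, y_train,
                         cv=cv, scoring="accuracy")
print("5-fold accuracies:", scores.round(3), "mean:", scores.mean().round(3))
```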
In all cases, the results of the ML analysis are examined via the following performance metrics:
Area under the ROC curve (AUC)
This metric measures the two-dimensional area underneath the entire Receiver Operating Characteristic (ROC) curve, with the best performance reaching 100%, such that:
$$ AUC=\frac{1}{2}w\ \left(h+{h}^{\prime}\right) $$
where w = width, and h and h′ = heights of the sides of a trapezoid of the ROC histogram.
$$ Precision=\frac{TP}{TP+FP},\qquad Recall=\frac{TP}{TP+FN} $$
where TP, FP and FN denote the numbers of true positive, false positive and false negative incidents, respectively.
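These fitness metrics can be computed directly from the held-out predictions, as in the brief sketch below; it reuses the placeholder model and split from the previous sketch and does not reproduce the study's reported values.

```python
# Computing AUC, precision, recall and the Table 1-style confusion matrix (illustrative).
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, precision_score, recall_score, roc_auc_score

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
y_prob = model.predict_proba(X_test)[:, 1]     # probability of the "damage" class
y_pred = (y_prob >= 0.5).astype(int)

print("AUC      :", roc_auc_score(y_test, y_prob))
print("Precision:", precision_score(y_test, y_pred))   # TP / (TP + FP)
print("Recall   :", recall_score(y_test, y_pred))      # TP / (TP + FN)
print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))
```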
Table 1 shows the confusion matrix and fitness metrics for all algorithms. It is worth noting that the overall accuracy of these techniques is quite promising. All of the aforementioned metrics reveal the higher accuracy of the RF algorithm as compared to SVM or GAM. Overall, the listed metrics show that the proposed ML approach can be used to classify fire damage in bridges with confidence.
Table 1 Performance of selected algorithms (%)
In addition, a sensitivity analysis was carried out on the proposed ML approach to identify the relative impact of each feature on the overall vulnerability of bridges. In Table 2, feature impact refers to the likelihood that increasing a specific feature leads to an increase in the outcome (i.e. if a feature has an impact of 80%, then 80% of the time an increase in this specific feature would increase the likelihood of the bridge undergoing damage). Table 2 shows that the two features with the highest sensitivity (i.e. impact) are the fuel type involved in the fire incident and the span of the bridge (primarily for girder bridges), with the age of the bridge and geographical significance coming next.
Table 2 Significant features, as per sensitivity analysis, influencing the fire risk on a bridge
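One common way to produce a Table 2-style ranking of feature impact is permutation importance, sketched below; this is an assumed technique for illustration and not necessarily the sensitivity analysis the authors performed. It reuses the placeholder model and split from the earlier sketches, and the feature names are hypothetical.

```python
# Ranking feature impact via permutation importance (assumed method, illustrative data).
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test, n_repeats=30,
                                random_state=0, scoring="roc_auc")
feature_names = [f"feature_{i}" for i in range(X_test.shape[1])]   # hypothetical names
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:12s} {imp: .3f}")   # larger = greater influence on predicted damage
```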
The ML analysis was finally used to examine the association between the influencing features and explore the degree of dependence between the selected features. Table 3 lists such associations. It is clear that the association between features is minimal (less than 0.3), which implies the independence of these features from each other. This independence furthers our confidence in the analysis, as the selected features were only indirectly related to each other.
Table 3 Degree of dependence between the influencing features that govern fire risk on a bridge
The above ML approach can now be deployed by authorities to identify bridges vulnerable to fire hazard. The outcome of this example shows that predictions from the RF, SVM and GAM algorithms may not always agree with the actual incident, given that the above accuracy metrics fall short of 100%. Still, the proposed ML approach remains feasible, as it can readily be extended beyond the above three algorithms.
The ML revolution is being implemented in parallel areas right now (Hamet & Tremblay, 2017; Litman, 2014), and it is of merit to the bridge community to start planting seeds to allow the use of ML in bridge applications. In addition, recent engineering graduates are becoming very familiar with ML, partly due to their engagement with modern technologies. These same students will be leading our area in the coming 10 years or so, and hence current works can start seeding a wide implementation of ML in the near future. For example, the approach proposed herein can be extended to evaluate the fire resistance of bridge components (i.e. girders and piers). To extend the applicability of the proposed approach to bridge fire resistance design, a larger dataset is to be compiled, which will require tremendous effort as data on bridge fires are not easily accessible. We hope future works will be able to compile such a database to allow developing improved ML algorithms.
The use of ML will allow engineers to draw conclusions on the vulnerability of a given bridge towards certain extreme events by comparing its key features to those of the general population of bridges that failed under various conditions. The use of ML will help engineers to identify such bridges and the associated events that may lead to failure with ease. More specifically, properly developed ML tools can be trained to identify the combinations of factors that are associated with bridge failures (whether fire or other hazards). Based on the identified patterns, bridges with similar patterns will be flagged under a certain criterion, and DOT engineers can examine such bridges in more detail. This will not only reduce the amount of inspection work to be carried out by DOT engineers to thoroughly analyze every single bridge, but will also provide a new set of eyes to the same engineers to examine bridges from a new perspective.
Based on the findings of this work, the following conclusions can be drawn.
Bridge fire incidents continue to rise around the world due to urbanization and the increasing transportation of fuels and hazardous chemicals. However, there is currently a lack of methodologies for identifying bridges at risk of fire hazard, as well as of guidelines for designing bridges for fire safety.
ML can be successfully applied to develop bridge assessment tools that can identify vulnerability of a bridge to fire hazard. These ML based techniques can be specifically tailored to account for varying features, such as those related to physical, traffic, and fire characteristics, in evaluating the risk of fire hazard on a specific bridge.
The proposed ML approaches can be improved further with the compilation of reliable data and observations of fire incidents on bridges and also the method can be extended to assess vulnerability of tunnels to fire hazard.
The data can be requested from the authors.
GAM:
Generalized additive model
ML:
Machine learning
NFPA:
National Fire Protection Association
RF:
Random forest
AASHTO LRFD (2017) Bridge design specifications, 8th edn https://store.transportation.org/item/collectiondetail/152 (Accessed 10 June 2019)
Alos-Moya J, Paya-Zaforteza I, Hospitaler A, Rinaudo P (2017) Valencia bridge fire tests: experimental study of a composite bridge under fire. J Constr Steel Res. https://doi.org/10.1016/j.jcsr.2017.08.008
Aziz E, Kodur V (2013) An approach for evaluating the residual strength of fire exposed bridge girders. J Constr Steel Res 88:34–42. https://doi.org/10.1016/J.JCSR.2013.04.007
Barber D (2012) Bayesian reasoning and machine learning. https://books.google.com/books?hl=en&lr=&id=yxZtddB_Ob0C&oi=fnd&pg=PR5&ots=A0UGQfbSAs&sig=hdHOx9r5CMuDXgk3DAyXQR65iUA (Accessed 10 Apr 2019)
Bocchini P, Frangopol DM, Ummenhofer T, Zinke T (2014) Resilience and sustainability of civil infrastructure: toward a unified approach. J Infrastruct Syst. https://doi.org/10.1061/(ASCE)IS.1943-555X.0000177
Culliton K (2018) Brooklyn bridge Car fire kills 1, FDNY says | Brooklyn Heights. Patch, NY Patch https://patch.com/new-york/heights-dumbo/1-dead-brooklyn-bridge-car-fire-fdny-says
Davis M, Tremel P (2008) Bill Williams river concrete bridge fire damage assessment. Struct Mag https://www.structuremag.org/wp-content/uploads/2014/08/SF-Bill-Williams-Bridge-Fire-Assessment-July-08.pdf
Eisel H, Palm N, Prehn W, Sedlacek G (2007) Brandschaden und Instandsetzung der Wiehltalbrücke im Zuge der A4. Köln - Olpe, Stahlbau. https://doi.org/10.1002/stab.200710011
Gandomi AH, Tabatabaei SM, Moradian MH, Radfar A, Alavi AH (2011) A new prediction model for the load capacity of castellated steel beams. J Constr Steel Res. https://doi.org/10.1016/j.jcsr.2011.01.014
Garlock M, Paya-Zaforteza I, Kodur V, Gu L (2012) Fire hazard in bridges: review, assessment and repair strategies. Eng Struct. https://doi.org/10.1016/j.engstruct.2011.11.002
Gidaris I, Padgett JE, Barbosa AR, Chen S, Cox D, Webb B, Cerato A (2017) Multiple-hazard fragility and restoration models of highway bridges for regional risk and resilience assessment in the United States: State-of-the-art review. J Struct Eng. https://doi.org/10.1061/(ASCE)ST.1943-541X.0001672
Giuliani L, Crosti C, Gentili F (2012) Vulnerability of bridges to fire, in: Bridg. Maintenance, Safety, Manag. Resil. Sustain. - Proc. Sixth Int. Conf. Bridg. Maintenance, Saf. Manag. https://doi.org/10.1201/b12352-225
Guthrie D, Goodwill V (2009) Tanker fire shuts down I-75, collapses Nine Mile bridge. News report
Hamet P, Tremblay J (2017) Artificial intelligence in medicine. Metabolism 69:S36–S40. https://doi.org/10.1016/j.metabol.2017.01.011
Han S, Cao Q, Han M (2012) Parameter selection in SVM with RBF kernel function. In: World Automation Congress
Hasni H, Jiao P, Lajnef N, Alavi AH (2018) Damage localization and quantification in gusset plates: a battery-free sensing approach. Struct Control Health Monit. https://doi.org/10.1002/stc.2158
Hodges JL, Lattimer BY, Luxbacher KD (2019) Compartment fire predictions using transpose convolutional neural networks. Fire Saf J. https://doi.org/10.1016/j.firesaf.2019.102854
Kodur V, Naser M (2020) Structural Fire Engineering, 1st edn. McGraw Hill Professional
Kodur VK, Aziz EM, Naser MZ (2017) Strategies for enhancing fire performance of steel bridges. Eng Struct 131. Elsevier, Netherlands. https://doi.org/10.1016/j.engstruct.2016.10.040
Kodur VKR, Naser MZ (2013) Importance factor for design of bridges against fire hazard. Eng Struct 54:207–220. https://doi.org/10.1016/j.engstruct.2013.03.048
Kodur VKR, Naser MZ (2019) Designing steel bridges for fire safety. J Constr Steel Res. https://doi.org/10.1016/j.jcsr.2019.01.020
Litman T (2014) Autonomous vehicle implementation predictions: implications for transport planning. Transp Res Board Annu Meet. https://doi.org/10.1613/jair.301
LTBP InfoBridge - analytics, (2020). https://infobridge.fhwa.dot.gov/BarStackChart Accessed 28 Jan 2020
Ma R, Cui C, Ma M, Chen A (2019) Performance-based design of bridge structures under vehicle-induced fire accidents: basic framework and a case study. Eng Struct. https://doi.org/10.1016/j.engstruct.2019.109390
Mangalathu S, Hwang SH, Choi E, Jeon JS (2019) Rapid seismic damage evaluation of bridge portfolios using machine learning techniques. Eng Struct. https://doi.org/10.1016/j.engstruct.2019.109785
Mangalathu S, Jeon JS (2019) Machine learning-based failure mode recognition of circular reinforced concrete bridge columns: comparative study. J Struct Eng. https://doi.org/10.1061/(ASCE)ST.1943-541X.0002402
Naser MZ (2018) Deriving temperature-dependent material models for structural steel through artificial intelligence. Constr Build Mater 191:56–68. https://doi.org/10.1016/J.CONBUILDMAT.2018.09.186
Naser MZ (2019a) AI-based cognitive framework for evaluating response of concrete structures in extreme conditions. Eng Appl Artif Intell 81:437–449. https://doi.org/10.1016/J.ENGAPPAI.2019.03.004
Naser MZ (2019b) Can past failures help identify vulnerable bridges to extreme events? A biomimetical machine learning approach. Eng Comput. https://doi.org/10.1007/s00366-019-00874-2
Naser MZ, Alavi A (2020) Insights into performance fitness and error metrics for machine learning. http://arxiv.org/abs/2006.00887 (Accessed 4 Aug 2020)
Naser MZ, Kodur VKR (2015a) A probabilistic assessment for classification of bridges against fire hazard. Fire Saf J 76:65–73. https://doi.org/10.1016/j.firesaf.2015.06.001
Naser MZ, Kodur VKR (2015b) Application of importance factor for classification of bridges for mitigating fire hazard, in: Struct. Congr. 2015 - Proc. 2015 Struct. Congr, pp 1206–1214. https://doi.org/10.1061/9780784479117.103
NFPA, NFPA 502: standard for road tunnels, bridges, and other limited access highways, 2017
NTSB, Fire damage to bridge and subsequent collapse, Atlanta, Georgia, march 30, 2017, 2017. https://www.ntsb.gov/investigations/AccidentReports/Reports/HAB1802.pdf (Accessed 27 June 2019)
NYDOT, Bridge fire incidents in New York state, 2008
Okazaki Y, Okazaki S, Asamoto S, Chun PJ (2020) Applicability of machine learning to a crack model in concrete bridges. Comput Civ Infrastruct Eng. https://doi.org/10.1111/mice.12532
Peris-Sayol G, Payá-Zaforteza I (2017) Bridge Fires Database. https://www.researchgate.net/publication/317561066_Bridge_Fires_Database
Peris-Sayol G, Paya-Zaforteza I, Balasch-Parisi S, Alós-Moya J (2017) Detailed analysis of the causes of bridge fires and their associated damage levels. J Perform Constr Facil. https://doi.org/10.1061/(ASCE)CF.1943-5509.0000977
Qiang H, Xiuli D, Jingbo L, Zhongxian L, Liyun L, Jianfeng Z (2009) Seismic damage of highway bridges during the 2008 Wenchuan earthquake. Earthq Eng Vib. https://doi.org/10.1007/s11803-009-8162-0
Quiel SE, Yokoyama T, Bregman LS, Mueller KA, Marjanishvili SM (2015) A streamlined framework for calculating the response of steel-supported bridges to open-air tanker truck fires. Fire Saf J. https://doi.org/10.1016/j.firesaf.2015.03.004
Sirakom, Ensemble Bagging - File: Ensemble Bagging.svg - Wikimedia Commons, (2020). https://commons.wikimedia.org/wiki/File:Ensemble_Bagging.svg#/media/File:Ensemble_Bagging.svg (Accessed 3 Dec 2020)
Solhmirzaei R, Salehi H, Kodur V, Naser MZ (2020) Machine learning framework for predicting failure mode and shear capacity of ultra high performance concrete beams. Eng Struct. https://doi.org/10.1016/j.engstruct.2020.111221
Statista, China: number of road bridges 2019, (2020). https://www.statista.com/statistics/258358/number-of-road-bridges-in-china/ (Accessed 11 Nov 2020)
Taffese WZ, Sistonen E (2017) Machine learning for durability and service-life assessment of reinforced concrete structures: recent advances and future directions. Autom Constr. https://doi.org/10.1016/j.autcon.2017.01.016
Zuo Y, Wu Y, Min G, Cui L (2019) Learning-based network path planning for traffic engineering. Futur Gener Comput Syst. https://doi.org/10.1016/j.future.2018.09.043
The authors would like to acknowledge Ignacio Paya-Zaforteza and Guillem Peris-Sayol for sharing their database on bridge fires from their recent works (Peris-Sayol et al., 2017; Peris-Sayol & Payá-Zaforteza, 2017).
No funding was received during this research.
Department of Civil and Environmental Engineering, Michigan State University, East Lansing, MI, USA
V. K. Kodur
Glenn Department of Civil Engineering, Clemson University, Clemson, SC, USA
M. Z. Naser
Both author contributed equally to the formulation and drafting of this paper. The authors read and approved the final manuscript.
Correspondence to V. K. Kodur.
Kodur, V.K., Naser, M.Z. Classifying bridges for the risk of fire hazard via competitive machine learning. ABEN 2, 2 (2021). https://doi.org/10.1186/s43251-020-00027-2
Antibacterial and antibiofilm potential of Lacticaseibacillus rhamnosus YT and its cell-surface extract
Chengran Guan1 (ORCID: 0000-0001-6145-2949),
Wenjuan Zhang1,
Jianbo Su1,
Feng Li1,
Dawei Chen1,
Xia Chen1,
Yujun Huang1,
Ruixia Gu1 &
Chenchen Zhang1
BMC Microbiology volume 23, Article number: 12 (2023)
Foodborne pathogens and spoilage bacteria surviving in biofilms pose a serious threat to food safety and human health. It is urgent to find safe and effective methods to control planktonic bacteria as well as biofilm formation. The substances with antibacterial and antibiofilm activity found in lactic acid bacteria are mainly metabolites secreted into the cell-free supernatant. Previously, Lacticaseibacillus rhamnosus YT was isolated because its cell pellets displayed distinguished antibacterial activity under neutral conditions. This study aimed to investigate the antibacterial and antibiofilm properties of the L. rhamnosus YT cells and their crude cell-surface extract.
The antibacterial activity of the L. rhamnosus YT cells constantly increased with cell growth and reached its peak value after the cells entered the stationary phase. After cocultivation with the L. rhamnosus YT cells, the biofilm formation of B. subtilis and S. enterica was reduced. The antibacterial activity of the L. rhamnosus YT cells varied with culture conditions (carbon sources, nitrogen sources, medium pH and culture temperatures), and the antibacterial intensity (antibacterial activity per cell) was disproportional to the biomass. Furthermore, the cell-surface extract was isolated and displayed a broad antimicrobial spectrum with a bacteriostatic mode of action. The antibiofilm activity of the extract was concentration-dependent. In addition, the extract was stable to physicochemical treatments (heat, pH and protease). The extract exhibited favorable emulsifying properties: it reduced the water surface tension from 72.708 mN/m to 51.011 mN/m, and the critical micelle concentration (CMC) value was 6.88 mg/mL. Besides, the extract was also able to emulsify hydrocarbon substrates, with the emulsification index (E24) ranging from 38.55% (for n-hexane) to 53.78% (for xylene). The E24 for the xylene/extract emulsion decreased by merely 5.77% after standing for 120 h. The main components of the extract were polysaccharide (684.63 μg/mL) and protein (120.79 μg/mL).
The properties of the extract indicated that it might be a kind of biosurfactant. These data suggest that L. rhamnosus YT and its cell-surface extract could be used as alternative antimicrobial and antibiofilm agents against foodborne pathogens and spoilage bacteria in the food industry.
Foodborne disease is one of the most important public health issues around the world, arising from the ingestion of food contaminated by foodborne pathogens and spoilage bacteria [1]. Most foodborne bacteria survive in the form of biofilms, in which planktonic bacteria aggregate, adhere to each other and become encapsulated in a structured colony [2]. Compared with planktonic bacteria, biofilms make the bacteria 1000-fold more resistant to antibiotics and to the immune system of the host, which is critical and a matter of concern for many industries such as medical instrumentation, food, dairy, brewery, drinks and juices, aquaculture, etc. [3, 4]. Therefore, to control food contamination caused by pathogens and spoilage bacteria, it is urgent to find safe and effective methods to control planktonic bacteria as well as biofilm formation.
Some lactic acid bacteria (LAB), with probiotic functions and GRAS (generally regarded as safe) status, are widely used in the food and pharmaceutical industries [5]. These LAB have been shown to be effective strains for inhibiting foodborne pathogenic bacteria in numerous studies [6]. The substances with antibacterial activity are mainly organic acids, carbon dioxide, hydrogen peroxide, diacetyl, ethanol and bacteriocins [7, 8]. Nisin, produced by Lactococcus lactis ssp. lactis, is highly active against Gram-positive bacteria such as Listeria monocytogenes, Staphylococcus aureus, Bacillus cereus, Lactiplantibacillus plantarum, Micrococcus luteus and Micrococcus flavus. As the oldest known and most widely studied natural antibacterial bacteriocin, nisin is permitted as a safe food additive in over 50 countries around the world [9, 10].
Recently, LAB with both antibacterial and antibiofilm activity were found [11]. The cell-free supernatant of Limosilactobacillus fermentum TCUESC01 and L. plantarum TCUESC02 was demonstrated to inhibit the growth and biofilm formation of S. aureus [12]. Exopolysaccharides produced by L. plantarum YW32 showed the ability to suppress biofilm formation by Gram-positive and Gram-negative pathogens [13]. To date, most of the substances with antibacterial and antibiofilm capabilities discovered in LAB have been metabolites secreted into the cell-free supernatant [14]. However, a few studies showed that substances on the surface of Lactobacillus also exhibit antibacterial and antibiofilm activities [15, 16]. For example, surface proteins containing cytoplasmic hydrolases from L. acidophilus inhibited E. coli growth by damaging the cell wall [17]. Jung et al. [18] showed that lipophosphatidic acid of L. plantarum inhibited the biofilm formation of Enterococcus faecalis in a dose-dependent manner. An L. rhamnosus cell-surface-derived biosurfactant displayed potent antiadhesion and antibiofilm ability by inhibiting bacterial attachment to surfaces [19].
Studies have shown that the physicochemical properties of antibacterial agents vary widely among strains. Previously, L. rhamnosus YT was isolated in our lab because its cell pellets displayed distinguished antibacterial activity against Gram-positive and Gram-negative spoilage bacteria. In this study, the antibacterial and antibiofilm properties of L. rhamnosus YT cells and the crude materials extracted from the cell surface were evaluated.
Growth kinetics and antibacterial potential of L. rhamnosus YT cells
L. rhamnosus YT was cultivated in deMan, Rogosa and Sharpe (MRS) broth and sampled at specific time intervals. The cell pellets were obtained and resuspended in phosphate buffered saline (PBS) buffer to measure the viable counts and the antibacterial/antibiofilm activity. Cell growth reached the stationary phase with the highest biomass of 10.31 log CFU/mL after 20 h of cultivation. The antibacterial activity of the cells increased constantly, and the largest diameters of the inhibitory zones against B. subtilis and S. enterica were 8.17 mm at 20 h and 9.83 mm at 16 h, respectively (Fig. 1 a). Moreover, the biofilm formation of the indicator strains cocultured with L. rhamnosus YT was reduced (Fig. 1 b). The reduction rate rose with increasing cell concentration. With 8.0 log CFU/mL of L. rhamnosus YT, the formed biofilm of B. subtilis and S. enterica was reduced by 63 and 35%, respectively.
Growth profile of L. rhamnosus YT and its antibacterial and antibiofilm activity against B. subtilis and S. enterica. L. rhamnosus YT was cultivated and sampled at time intervals. The cell growth was measured and the antibacterial capacity of the cell pellets were assessed by the diameter of the inhibition zone on the plate (a). Inhibition of biofilm was proceeded with L. rhamnosus YT suspending in PBS buffer (10 mM, pH 7.0) (b)
Effect of culture conditions on the antibacterial activity of L. rhamnosus YT cells
To explore the factors affecting the antibacterial activity of the L. rhamnosus YT cells, the culture conditions including varied carbon sources, nitrogen sources, successive medium pH and temperatures were tested. After cultivation for 24 h, the cell pellets were separated for detection of viable counts and antibacterial activity.
When cultivated in MRS-glucose, MRS-maltose, MRS-lactose and MRS-rhamnose broth, the viable counts varied from 9.57 log CFU/mL (with rhamnose) to 10.78 log CFU/mL (with maltose). The diameters of the inhibitory zones against B. subtilis and S. enterica ranged from 0 mm to 10.5 mm and from 3.5 mm to 10.5 mm, respectively. L. rhamnosus YT cultivated with glucose and maltose possessed similar biomass, while the corresponding antibacterial activity was quite different. Furthermore, using rhamnose as the carbon source, the L. rhamnosus YT cells produced only a very small inhibitory zone although the cell concentration was higher than 9.0 log CFU/mL (Fig. 2 a).
Effect of culture conditions on the biomass and antibacterial activity of L. rhamnosus YT. The culture conditions including varied carbon (a) and nitrogen (b) sources were tested at successive medium pH value (c) and temperatures (d). After cultivation for 24 h, the cell pellets were separated to detect the viable counts and antibacterial activity
When cultivated in MRS broth using tryptone, soy peptone or fish peptone as the nitrogen source, the viable counts varied from 9.92 log CFU/mL (with fish peptone) to 10.25 log CFU/mL (with soy peptone). There was only a slight difference in the cell biomass and the antibacterial activity of L. rhamnosus YT cells cultivated with the different nitrogen sources (Fig. 2 b).
To determine the effect of initial medium pH, L. rhamnosus YT was cultivated in MRS broth with a series of initial pH values from 3.0 to 9.0. L. rhamnosus YT grew better when the initial medium pH value was higher than or equal to 7.0. Meanwhile, the antibacterial activity of the L. rhamnosus YT cells against B. subtilis and S. enterica was steadily enhanced as the pH value increased from 4.0 to 7.0. Then, as the pH value continued to rise to 9.0, the antibacterial activity of L. rhamnosus YT remained essentially stable against B. subtilis while decreasing significantly against S. enterica (Fig. 2 c).
L. rhamnosus YT was cultivated at 22 °C, 27 °C, 32 °C, 37 °C and 42 °C, respectively. With increasing temperature, the biomass and the antibacterial activity showed a similar pattern of first increasing and then decreasing. The viable counts ranged from 9.84 log CFU/mL (at 22 °C) to 10.34 log CFU/mL (at 32 °C). The largest inhibitory zones against B. subtilis and S. enterica were obtained for L. rhamnosus YT cultivated at 27 °C (Fig. 2 d).
Extraction of the cell-surface antibacterial substances from L. rhamnosus YT and its antibacterial potential
Ultrasonication was employed to isolate the antibacterial substances from the cell surface of L. rhamnosus YT. Firstly, the ultrasonic procedure was optimized to avoid leakage of intracellular materials. After ultrasonication, the cell pellets were much easier to compact and separate, and the viable counts remained consistent, suggesting that the cell structure remained intact. The ultrasonic extract displayed an apparent bacteriostatic ring on the plate, while the ultrasonicated cells showed no visible inhibitory zone (Fig. 3a). These results indicated that the antibacterial extract was obtained solely from the cell surface of L. rhamnosus YT.
Extraction of the cell-surface antibacterial substances from L. rhamnosus YT (a) and its antibacterial spectrum (b), mode of action (c) and antibiofilm potential (d). Ultrasonic procedure was optimized to obtain the maximal amount of extract under the premise of keeping the cell integrity. After sonification and centrifugation, the viable counts and antibacterial activity were measured (a). Antibacterial ability of the extract to multiple bacteria was measured by agar well diffusion method (b). The extract concentrated at 15 mg/mL was cocultivated with indicator strains to analyze its antibacterial action mode (c). The antibiofilm potential of the extract with different concentration was determined (d)
The extract showed varied inhibitory capacity to typical spoilage bacteria detected in food contamination (Fig. 3 b). By co-culture with the extract, the lag growth phase of B. subtilis and S. enterica was obviously delayed by 2 h and 4 h, respectively. The viable counts of B. subtilis and S. enterica were independently decreased by 65.7 and 31.67% after cultivation for 10 h (Fig. 3 c). The biofilm formation of B. subtilis and S. enterica was significantly reduced with the ultrasound extract at 36.50 mg/mL and 73.30 mg/mL (Fig. 3 d).
Properties of the cell-surface extract
The physicochemical sensitivity of the extract to heat, pH and proteases was measured. The antibacterial activity of the extract against B. subtilis was only slightly influenced by the different temperature treatments. However, the antibacterial ability against S. enterica was largely reduced at temperatures higher than 70 °C, and the corresponding activity was reduced by more than 20.57% after incubation at 80 °C and above (Fig. 4 a). Furthermore, the extract at different pH values showed a similar effect tendency on the antibacterial activity against B. subtilis and S. enterica, and the antibacterial zone of the extract at pH 7.0 was slightly larger than that at the other pH values (Fig. 4 b). Unlike heat and pH, proteases played an insignificant role in the antibacterial activity of the extract against B. subtilis and S. enterica (Fig. 4 c).
Stability and emulsifying property of the cell-surface extract. Using 15 mg/mL of the ultrasound extract, the antibacterial stability to different temperatures (a), pH values (b) and protease (c) were tested. And then the emulsifying characteristics of surface tension (d), E24 against different substrates (e) and emulsification stability to xylene (f) were evaluated
Moreover, the extract was evaluated for reduction of surface tension and the critical micelle concentration (CMC). The results showed that the extract could reduce the surface tension from 72.708 mN/m to 51.011 mN/m as the concentration increased from 0.33 mg/mL to 7.5 mg/mL, after which the surface tension remained essentially stable even as the concentration continued to increase (Fig. 4 d). From the logarithmic plot of the extract concentration, a CMC of 6.88 mg/mL was obtained. Besides, the extract was able to emulsify different hydrocarbon substrates, such as n-hexane, isooctane, xylene, rapeseed oil and olive oil (Fig. 4 e). The highest emulsification index (E24) of 53.78% was achieved for the xylene/extract emulsion, which was equal to the emulsifying capacity of Tween 80. The lowest E24 was obtained for the n-hexane/extract emulsion (28.69%). In addition, the E24 for the xylene/extract emulsion decreased by merely 5.77% after standing for 120 h (Fig. 4 f).
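As a brief illustration of how a CMC can be read off such a plot, the sketch below fits the descending branch and the plateau of the surface-tension vs. log10(concentration) curve separately and takes their intersection; the data points are placeholders, not the measured values reported above.

```python
# Sketch of CMC estimation as the break point of a surface-tension vs. log10(concentration) plot.
import numpy as np

conc = np.array([0.33, 0.75, 1.5, 3.0, 5.0, 7.5, 10.0, 15.0])          # mg/mL (assumed)
tension = np.array([72.7, 68.5, 64.0, 59.0, 54.5, 51.0, 50.9, 50.8])   # mN/m (assumed)

logc = np.log10(conc)
m1, b1 = np.polyfit(logc[:6], tension[:6], 1)   # descending (pre-CMC) branch
m2, b2 = np.polyfit(logc[5:], tension[5:], 1)   # plateau (post-CMC) branch
cmc = 10 ** ((b2 - b1) / (m1 - m2))             # x-coordinate of the intersection
print(f"Estimated CMC ~ {cmc:.2f} mg/mL")
```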
Foodborne pathogens and spoilage bacteria are the major cause of foodborne illnesses and pose a huge challenge to food security around the world. Most foodborne pathogens and spoilage bacteria survive in the form of biofilms by adsorbing onto biological and abiotic surfaces. Compared with planktonic cells, bacteria in biofilms are much more resistant towards antimicrobial agents, harsh environments and host immunity. Foodborne pathogens and spoilage bacteria in biofilm form therefore cause critical concern for many food industries. It is attractive to develop agents with specific antibacterial and antibiofilm activity [3]. Lactobacillus has been shown to be effective for inhibiting foodborne pathogenic bacteria with antibacterial metabolites including organic acids, bacteriocins, hydrogen peroxide (H2O2), etc. [20]. To date, there has been scarce work on cell-surface compounds of Lactobacillus origin with a clear molecular structure and antibacterial-antibiofilm mechanism. In our lab, L. rhamnosus YT was isolated because its cells suspended in ddH2O showed distinctive inhibition zones against various Gram-positive and Gram-negative foodborne spoilage bacteria on agar plates. In this work, using the typical foodborne spoilage bacteria B. subtilis and S. enterica, the antibacterial and antibiofilm properties of L. rhamnosus YT cells and their cell-surface substances were investigated.
The kinetic profile of the antibacterial capacity of the L. rhamnosus YT cells was represented by plotting the growth curve together with the corresponding inhibitory diameters against B. subtilis and S. enterica. The antibacterial activity of the L. rhamnosus YT cells was growth-dependent. The antibacterial substance might be composed of more than one component, given the different culture times at which the largest antibacterial diameters against B. subtilis and S. enterica were observed. Moreover, the antibiofilm activity of the L. rhamnosus YT cells was detected in LB broth because L. rhamnosus YT could not grow and form biofilm in this medium. The biofilm formation of the indicator strains was obviously inhibited by co-incubation with L. rhamnosus YT cells. These data indicated that L. rhamnosus YT cells had antibacterial and antibiofilm capacity against B. subtilis and S. enterica. Moreover, the antibacterial capacity of the cell pellets increased at a constant rate during the exponential growth phase and reached its peak value about 20 h after the start of the fermentation process. The antibacterial substance was thus produced within a short time, which would be beneficial for its production in terms of saving energy, convenient separation, and synthesis of more product within a given work schedule.
As the antibacterial activity was closely related to L. rhamnosus YT growth, the factors usually affecting strain growth were selected to further explore the relationship between growth and antibacterial activity. L. rhamnosus YT grew well, with biomass higher than 9.0 log CFU/mL, in broth containing the various carbon and nitrogen sources. Generally, more biomass displayed higher antibacterial activity. L. rhamnosus YT cultivated with maltose displayed the highest biomass and the largest inhibitory zone against both B. subtilis and S. enterica. However, the differences in biomass and the parallel antibacterial activity of L. rhamnosus YT cultivated with the various carbon and nitrogen sources were disproportional. For instance, despite a small difference in biomass, the antibacterial activity of L. rhamnosus YT cultivated with maltose (10.78 log CFU/mL) was obviously higher than that with glucose (10.22 log CFU/mL) and with soy peptone (10.25 log CFU/mL). Moreover, even though L. rhamnosus YT cultivated with glucose and with soy peptone displayed similar biomass and antibacterial activity against B. subtilis, the corresponding inhibitory zones against S. enterica were different. These results suggested that the carbon and nitrogen sources might affect the composition of the antibacterial substances acting specifically against B. subtilis and S. enterica. In many reported studies, the structure and content of cell-bound active compounds were significantly influenced by carbon and nitrogen sources. The total cell surface antigenicity of L. rhamnosus GG was increased by switching the carbohydrate source from glucose to fructose [21]. In the study carried out by Mouafo et al. [22], the biosurfactant yields of three indigenous bacterial strains (L. delbrueckii N2, L. cellobiosus TM1 and L. plantarum G88) grown on molasses or glycerol were significantly higher compared to those obtained with MRS broth as substrate, and the crude biosurfactants were mainly glycoproteins and glycolipids with molasses and glycerol as substrates, respectively. In contrast to carbon sources, nitrogen sources mainly affect the expression of proteins or peptides. In L. acidophilus NCC2628, both peptone and yeast extract had a considerable influence on the bacterial cell wall, as witnessed by changes in surface charge, hydrophobicity, and the nitrogen-to-carbon ratio. In particular, expression of the surface-layer protein was dependent on the protein source of the fermentation medium [23].
Besides carbon and nitrogen sources, the cell counts of L. rhamnosus YT rose steadily with the increase of the initial medium pH. However, the antibacterial activity of L. rhamnosus YT displayed a trend of first increasing and then decreasing, with pH 7.0 as the dividing point. According to these data, the yield of the antibacterial substance did not continually increase along with cell growth, and the most favorable initial pH value for antibacterial activity against S. enterica was different from that for B. subtilis. Moreover, L. rhamnosus YT grew well over a broad temperature range from 27 °C to 37 °C. The highest antibacterial activity against S. enterica was found at 27 °C, which is lower than the optimum temperature for growth (32 °C), and temperatures higher or lower than the optimum growth temperature (32 °C) showed reduced antibacterial activity. Comparatively, the largest bacteriostatic ring against B. subtilis seemed to occur under conditions favorable for bacterial growth. In brief, the biomass and the antibacterial capacity of L. rhamnosus YT could be disproportionally affected by the experimental conditions tested in this work.
The antibacterial intensity (antibacterial activity per cell) under these culture conditions was evaluated. The antibacterial intensity was disproportional to the biomass. Moreover, the antibacterial intensity against S. enterica was commonly higher than that against B. subtilis. However, when soy peptone or fish peptone was used as the nitrogen source, or when the initial medium pH was 8.0 or 9.0, the antibacterial intensity against S. enterica was lower than that against B. subtilis. These results suggested that the antibacterial intensity, composition and content could be influenced by these growth-related factors. This is in accordance with several studies showing that bacteriocin titers can be modified by altering the cultivation conditions of the producing bacteria and certain combinations of influencing factors [22, 24].
At present, it is of paramount importance for commercial exploitation to optimize the factors affecting production of the antibacterial substances. Specific requirements with reference to the production of metabolites through microbial fermentation and the influencing factors may be strain dependent and could vary with different types of metabolites. The properties of the growth media, including amino acid composition, carbon/nitrogen ratio, pH and lactose levels, play important roles in the variation of biomass and the level of bacteriocin production [25]. In this study, among these factors, the antibacterial activity of L. rhamnosus YT was most strongly influenced by the carbon source. In particular, L. rhamnosus YT cultured with rhamnose grew to more than 9.0 log CFU/mL yet barely showed antibacterial activity. Therefore, the antibacterial capacity of L. rhamnosus YT was tightly related to carbon source metabolism. To increase the yield of the antibacterial substance, less costly and more readily available carbon substrates should first be sought. From this perspective, several studies have been carried out using sugar cane molasses and glycerol as promising substrates for biosurfactant production [22]. In China, there are many by-products from the growing industries of sugar cane processing, biodiesel and oleochemical production. Therefore, it would be very interesting to test these by-products with the L. rhamnosus YT strain for antibacterial substance production.
Antagonistic substances isolated from the cell surface of Lactobacillus have been reported, such as teichoic acids from L. plantarum IMB19, capsular polysaccharides from L. casei NA-2, cell-bound exopolysaccharide from L. fermentum S1, chitinase from L. rhamnosus GG and glycolipid from L. helveticus M5 [26,27,28,29,30]. Here, combining the effects of the culture conditions and the physicochemical properties of the reported cell-bound antibacterial materials, phenol and LiCl were separately used to extract the antibacterial substance from the cell surface of L. rhamnosus YT. Although the cells lost antibacterial activity after being treated with phenol or LiCl, the corresponding extracts barely showed antibacterial effects against B. subtilis and S. enterica (Fig. S1). It was speculated that the concentration of the extracted substance was not sufficient to exert the antibacterial function, or that the extraction methods were not suitable, as phenol and LiCl are conventionally used to extract polysaccharides and proteins from the Lactobacillus surface [31, 32]. Then, ultrasonication was employed to isolate capsular material from L. rhamnosus YT. Firstly, the ultrasonic procedure was optimized to obtain most of the surface components while keeping the integrity of the cellular structure, avoiding leakage of intracellular substances. With the optimized ultrasonic procedure, the cell pellets lost their antibacterial activity and the extracted materials displayed broad inhibitory capacity against the usual spoilage and pathogenic strains in food contamination. In addition, the antibacterial ability of the extract against Gram-negative strains was much stronger than that against Gram-positive strains. This might be caused by the different cell wall composition of Gram-positive and Gram-negative bacteria. Moreover, upon incubation with the indicator strains, the extract exhibited a bacteriostatic mode of action. Besides, the extract showed concentration-dependent antibiofilm performance against B. subtilis and S. enterica. These data implied that the extract could be used as an excellent candidate to control microorganism and biofilm pollution in the food industry.
To date, the antibacterial materials separated from the cell surface of Lactobacillus have included biosurfactants, peptides, surface proteins and teichoic acid, etc. [18, 33, 34]. These reported substances differ in their physicochemical characteristics. Most of the bacteriostatic substances (extracellular polysaccharides, phosphopeptides, bacteriocins, etc.) possess good inhibitory ability under acidic to neutral conditions while showing reduced or even no activity under alkaline conditions [35, 36]. In this work, the properties of the crude extract were tentatively explored to determine the identity of the extract. The inhibitory activity of the extract was sensitive to temperatures higher than 70 °C while remaining stable over a wide range of pH values and multiple protease treatments. Combined with the unsuccessful extraction with phenol and LiCl, this suggests that the extract was probably not surface protein, polysaccharide or teichoic acid, which are the usual antibacterial substances separated from the cell surface of Lactobacillus. Recently, some cell-bound biosurfactants with antibacterial activity were isolated from L. rhamnosus. A biosurfactant derived from L. rhamnosus ATCC7469 exhibited a significant inhibitory effect on the biofilm formation of S. mutans due to down-regulation of the biofilm-formation-associated genes gtfB/C and ftf [13]. Biosurfactants isolated from L. rhamnosus of human breast milk origin displayed potent antibiofilm ability by inhibiting surface attachment [37]. In this work, the emulsifying properties of the extract were evaluated, and the extract displayed favorable emulsifying capacity. Moreover, there were 684.63 μg/mL of polysaccharide and 120.79 μg/mL of protein in 1 mg/mL of the extract. Hence, the extract was probably a kind of biosurfactant.
So far, most of the biosurfactants reported to have antibacterial and antibiofilm activity are crude extracts, and their properties vary widely among strains. Information about the chemical composition and structure of biosurfactants derived from LAB remains limited, mainly because of their complexity [38]. Therefore, in future work the specific antibacterial component of the extract obtained here will be purified and identified, and its antibacterial and antibiofilm mechanisms will be explored.
In this study, the antibacterial and antibiofilm characteristics of L. rhamnosus YT cells were investigated. The antibacterial activity of the L. rhamnosus YT cells varied with the culture conditions, and the antibacterial intensity (antibacterial activity per cell) was not proportional to the biomass. Furthermore, a cell-surface extract was isolated that displayed a broad antimicrobial spectrum and antibiofilm capacity. The extract exhibited a bacteriostatic mode of action, concentration-dependent antibiofilm activity, stability to physicochemical treatments and favorable emulsifying properties. The main components of the extract were polysaccharide and protein. These properties indicate that the extract might be a kind of biosurfactant.
Strains and growth conditions
Bacillus subtilis CICC10012 (B. subtilis), Salmonella enterica WX29 (S. enterica), Staphylococcus aureus CICC10201 (S. aureus), Bacillus cereus ATCC11778 (B. cereus), Escherichia coli CICC10899 (E. coli) and Pseudomonas brenneri CICC10271 (P. brenneri) were purchased from the China Center of Industrial Culture Collection (Beijing, China) and cultivated in Luria-Bertani (LB) broth with aeration at 37 °C. L. rhamnosus YT, preserved by the Key Lab of Dairy Biotechnology and Safety Control of Jiangsu Province, was isolated from the feces of long-lived residents of Bama, Guangxi Province, China. L. rhamnosus YT was grown in dMRS broth at 37 °C under static conditions. Biomass was determined by viable counts [39].
Preparation of ultrasonic extract
Two hundred milliliters of MRS broth was inoculated with 6 mL of an overnight culture of L. rhamnosus YT and cultivated at 37 °C for 24 h. Cell pellets were collected by centrifugation (10,000 rpm, 4 °C, 10 min), washed twice with double-distilled water (ddH2O) and resuspended in the same volume of ddH2O for ultrasonic treatment. The ultrasonic extract was obtained with a high-intensity ultrasonic liquid processor (Sonics & Materials, Inc., USA) using the following settings: power 160 W, 3 s on followed by 2 s off, 60 cycles. After that, bacteria were removed by centrifugation and the supernatant was filtered through a 0.22 μm filter. The filtered sterile supernatant was lyophilized with the following program (quick-freezing to − 50 °C; heating to − 5 °C within 2 h and holding for 14 h; heating to 5 °C within 2 h and holding for 2 h; heating to 15 °C within 1 h and holding for 24 h) in a lyophilizer (LGJ-50, Sihuan Scientific Instrument Factory, Beijing, China). The freeze-dried ultrasonic extract was resuspended in PBS buffer (10 mM, pH 7.0) and stored at − 20 °C.
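As a quick check of the sonication program, the effective and total treatment times follow directly from the cycle settings (a minimal Python sketch; the 3 s on/2 s off, 60-cycle values come from the protocol above, while the helper function itself is only illustrative):

# Sketch: effective vs. total sonication time for the program above
# (3 s on, 2 s off, 60 cycles). The function name is ours, not part of the protocol.
def sonication_times(on_s, off_s, cycles):
    effective = on_s * cycles          # time the probe is actually emitting
    total = (on_s + off_s) * cycles    # wall-clock duration of the program
    return effective, total

eff, total = sonication_times(on_s=3, off_s=2, cycles=60)
print(f"effective sonication: {eff} s, program duration: {total} s")
# -> effective sonication: 180 s, program duration: 300 s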
Measurement of antibacterial activity
B. subtilis and S. enterica were used as indicator bacteria. Antibacterial activity was determined by the agar well diffusion method. Briefly, a colony of the indicator bacterium was inoculated into a tube containing 5 mL of LB medium and cultivated at 37 °C for 12 h. Then, 100 μL of the indicator bacteria suspension, diluted to 1.0 × 10⁶ CFU/mL, was spread on a plate prepared by pouring 30 mL of LB agar medium into a 90-mm-diameter plate. After drying for 2 h, 7-mm-diameter wells were made in the plate using a sterile punch. Two hundred microliters of the tested sample solution (1.0 × 10⁸ CFU/mL of L. rhamnosus YT cells or 15 mg/mL of the extract) were added into the wells and allowed to diffuse at 4 °C for 4 h. The plate was then transferred to an incubator at 37 °C for 8 h. Antimicrobial activity was determined by measuring the diameter of the clear zone around the well (excluding the 7-mm well itself).
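Since the reported inhibition zone excludes the 7-mm well, the conversion from the measured halo diameter is simple; a minimal sketch (with a hypothetical measured value, not study data) is:

# Sketch: clear-zone size from an agar well diffusion assay.
# The 7-mm well diameter comes from the protocol above; the measured
# diameter below is a hypothetical example.
WELL_DIAMETER_MM = 7.0

def inhibition_zone(measured_diameter_mm):
    return max(measured_diameter_mm - WELL_DIAMETER_MM, 0.0)

print(inhibition_zone(18.5))  # an 18.5 mm halo corresponds to an 11.5 mm inhibition zone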
Co-incubation with indicator bacteria assay
B. subtilis and S. enterica were used as indicator bacteria. The ultrasonic extract at 15 mg/mL was used for co-incubation with the indicator bacteria. Using the same volume of PBS buffer (10 mM, pH 7.0) as control, 1 mL of the extract was added into LB broth containing 1% (v/v) indicator bacteria. The co-incubation was carried out in a shaker at 37 °C for 10 h, during which the culture was sampled every 2 h. Viable counts of the indicator bacteria in each sample were performed as described [39].
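Viable counts are commonly converted to CFU/mL from the colony count, plated volume and dilution factor; the sketch below illustrates this standard calculation (the numbers are hypothetical examples, not data from this study):

# Sketch: converting a plate count to CFU/mL.
# CFU/mL = colonies / plated volume (mL) x dilution factor.
def cfu_per_ml(colonies, plated_volume_ml, dilution_factor):
    return colonies / plated_volume_ml * dilution_factor

# e.g. 43 colonies from 0.1 mL of a 10^-6 dilution:
print(f"{cfu_per_ml(43, 0.1, 1e6):.2e} CFU/mL")  # -> 4.30e+08 CFU/mL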
Inhibition of biofilm formation
The indicator bacteria suspensions of B. subtilis and S. enterica were diluted to 1.0 × 10⁸ CFU/mL with fresh LB liquid medium. The L. rhamnosus YT suspension was adjusted to 1.0 × 10⁷ or 1.0 × 10⁸ CFU/mL with PBS buffer (10 mM, pH 7.0). Then, 100 μL of the indicator bacteria solution and 100 μL of the L. rhamnosus YT suspension at the different concentrations were added to each well of a sterile 96-well microplate; wells with 100 μL of the indicator bacteria solution and 100 μL of PBS buffer (10 mM, pH 7.0) were used as controls. To prevent boundary effects, 200 μL of distilled water was added to the peripheral wells of the 96-well microplate. After incubation at 37 °C for 24 h, the biofilm biomass was determined by the crystal violet staining method [40]. The inhibition of biofilm biomass was calculated according to the following formula.
$$\textrm{Inhibition}\ \textrm{rate}\ \left(\%\right)=\left(1-\frac{{\textrm{OD}}_{\textrm{sample}}}{{\textrm{OD}}_{\textrm{control}}}\right)\times 100\%$$
Likewise, 100 μL of the ultrasonic extract solution at different concentrations (7.30 mg/mL, 36.50 mg/mL and 73.30 mg/mL) was used for the inhibition of biofilm formation following the procedure described above.
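The inhibition rate is a simple ratio of crystal-violet OD readings; a minimal sketch of the calculation defined above (with hypothetical OD values) is:

# Sketch: biofilm inhibition rate from OD readings, following the formula above.
# The OD values are hypothetical examples.
def inhibition_rate(od_sample, od_control):
    return (1.0 - od_sample / od_control) * 100.0

print(f"{inhibition_rate(od_sample=0.42, od_control=1.05):.1f} %")  # -> 60.0 %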
Properties of ultrasonic extract
Determination of physicochemical characteristics: using PBS buffer as control, the ultrasonic extract at 15.0 mg/mL was subjected to heat treatment (independently at 30 °C, 40 °C, 50 °C, 60 °C, 70 °C, 80 °C, 90 °C and 100 °C for 15 min), pH treatment (15.0 mg of freeze-dried ultrasonic extract dissolved in 1 mL of PBS buffer at pH 3.0, 5.0, 7.0 and 9.0, respectively) and enzyme treatment (separately treated at 37 °C for 2 h with 2 mg/mL of pepsin, trypsin, papain, α-amylase and β-amylase). The antibacterial ability of the treated extract was then determined by the agar well diffusion method.
Determination of surface tension: the surface tension of the ultrasonic extract at different concentrations (0.33–10.00 mg/mL) was measured as previously described [41]. Using ddH2O and ethanol as controls, surface tension values (mN/m) were measured at 25 °C with a tensiometer (DCAT11, Dataphysics, Germany).
Determination of emulsifying activity: according to a previously reported method [42], n-hexane, isooctane, xylene, olive oil and sunflower seed oil were separately added to a 1 mg/mL solution of the ultrasonic extract at a 1:1 (v/v) ratio. The mixtures were vortexed for 2 min to obtain maximum emulsification and then left to stand at 20 °C for 24 h, after which the height of the emulsion layer (H24) and the total liquid height (H) were measured. The emulsification index (E24) was calculated as (H24/H) × 100%. Water and 1 mg/mL Tween 80 were used as negative and positive controls, respectively.
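The emulsification index is likewise a direct ratio of the two measured heights; a minimal sketch (with hypothetical heights) is:

# Sketch: emulsification index E24 = (emulsion layer height / total liquid height) x 100%,
# as defined above. The heights are hypothetical examples.
def emulsification_index(emulsion_height_mm, total_height_mm):
    return emulsion_height_mm / total_height_mm * 100.0

print(f"E24 = {emulsification_index(21.0, 40.0):.1f} %")  # -> E24 = 52.5 %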
Statistical analysis was performed with SPSS 19.0 software (SPSS Inc., Chicago). Each trial was performed in triplicate, and data from three independent experiments were analyzed by one-way analysis of variance (ANOVA) and expressed as mean ± standard deviation (SD).
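The same kind of analysis (mean ± SD of triplicates and a one-way ANOVA across groups) can be reproduced outside SPSS; the sketch below uses SciPy, and the measurements are hypothetical examples rather than data from this study:

# Sketch: mean ± SD per group and one-way ANOVA with SciPy (hypothetical data).
import numpy as np
from scipy import stats

control = np.array([1.02, 0.98, 1.05])
low_dose = np.array([0.81, 0.77, 0.84])
high_dose = np.array([0.44, 0.40, 0.47])

for name, values in [("control", control), ("low dose", low_dose), ("high dose", high_dose)]:
    print(f"{name}: {values.mean():.2f} ± {values.std(ddof=1):.2f}")  # sample SD

f_stat, p_value = stats.f_oneway(control, low_dose, high_dose)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")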
All data generated or analyzed during this study are included in this published article.
Zhang Z, Chen YH, Wu LH. Effects of governmental intervention on foodborne disease events: evidence from China. Int J Environ Res Public Health. 2021;18(24):13311.
Roy R, Tiwari M, Donelli G, Tiwari V. Strategies for combating bacterial biofilms: a focus on anti-biofilm agents and their mechanisms of action. Virulence. 2018;9(1):522–54.
Li XH, Lee JH. Antibiofilm agents: a new perspective for antimicrobial strategy. J Microbiol. 2017;55:753–66.
Li J, Chen DR, Lin HC. Antibiofilm peptides as a promising strategy: comparative research. Appl Microbiol Biotechnol. 2021;105:1647–56.
Mokoena MP. Lactic acid bacteria and their bacteriocins: classification, biosynthesis and applications against uropathogens: a mini-review. Molecules. 2017;22(8):1255.
Abramov VM, Kosarev IV, Priputnevich TV, Machulin AV, Khlebnikov VS, et al. S-layer protein 2 of Lactobacillus crispatus 2029, its structural and immunomodulatory characteristics and roles in protective potential of the whole bacteria against foodborne pathogens. Int J Biol Macromol. 2020;150:400–12.
Wasfi R, Abd El-Rahman OA, Zafer MM, Ashour HM. Probiotic Lactobacillus sp. inhibit growth, biofilm formation and gene expression of caries-inducing Streptococcus mutans. J Cell Mol Med. 2018;22:1972–83.
Valyshev AV. Antimicrobial compounds of enterococci. Zh Mikrobiol Epidemiol Immunobiol. 2014;5:119–26.
Tong Z, Ni L, Ling J. Antibacterial peptide nisin: a potential role in the inhibition of oral pathogenic bacteria. Peptides. 2014;60:32–40.
Gharsallaoui A, Oulahal N, Joly C, Degraeve P. Nisin as a food preservative: part 1: physicochemical properties, antimicrobial activity, and main uses. Crit Rev Food Sci. 2016;56:1262–74.
Krishnamoorthi R, Srinivash M, Mahalingam PU, Malaikozhundan B, Suganya P, et al. Antimicrobial, anti-biofilm, antioxidant and cytotoxic effects of bacteriocin by Lactococcus lactis strain CH3 isolated from fermented dairy products-an in vitro and in silico approach. Int J Biol Macromol. 2022;220:291–306.
Melo TA, Dos Santos TF, de Almeida ME, Junior LA, Andrade EF, et al. Inhibition of Staphylococcus aureus biofilm by Lactobacillus isolated from fine cocoa. BMC Microbiol. 2016;16:250.
Wang J, Zhao X, Yang Y, Zhao A, Yang Z. Characterization and bioactivities of an exopolysaccharide produced by Lactobacillus plantarum YW32. Int J Biol Macromol. 2015;74:119–26.
Scillato M, Spitale A, Mongelli G, Privitera GF, Mangano K, et al. Antimicrobial properties of Lactobacillus cell-free supernatants against multidrug-resistant urogenital pathogens. Microbiologyopen. 2021;10:e1173.
Giordani B, Costantini PE, Fedi S, Cappelletti M, Abruzzo A, et al. Liposomes containing biosurfactants isolated from Lactobacillus gasseri exert antibiofilm activity against methicillin-resistant Staphylococcus aureus strains. Eur J Pharm Biopharm. 2019;139:246–52.
Englerová K, Nemcová R, Styková E. Biosurfactants and their role in the inhibition of the biofilm forming pathogens. Ceska Slov Farm. 2018;67(3):107–12.
Meng J, Gao SM, Zhang QX, Lu RR. Murein hydrolase activity of surface layer proteins from Lactobacillus acidophilus against Escherichia coli. Int J Biol Macromol. 2015;79:527–32.
Jung S, Park OJ, Kim AR, Ahn KB, Lee D, et al. Lipoteichoic acids of lactobacilli inhibit Enterococcus faecalis biofilm formation and disrupt the preformed biofilm. J Microbiol. 2019;57:310–5.
Tahmourespour A, Kasra-Kermanshahi R, Salehi R. Lactobacillus rhamnosus biosurfactant inhibits biofilm formation and gene expression of caries-inducing Streptococcus mutans. Dent Res J. 2019;16:87–94.
De Keersmaecker SC, Verhoeven TL, Desair J, Marchal K, Vanderleyden J, et al. Strong antimicrobial activity of Lactobacillus rhamnosus GG against Salmonella typhimurium is due to accumulation of lactic acid. FEMS Microbiol Lett. 2006;259:89–96.
Savijoki K, Nyman TA, Kainulainen V, Miettinen I, Siljamaki P, et al. Growth mode and carbon source impact the surfaceome dynamics of Lactobacillus rhamnosus GG. Front Microbiol. 2019;10:1272.
Mouafo TH, Mbawala A, Ndjouenkeu R. Effect of different carbon sources on biosurfactants' production by three strains of Lactobacillus spp. Biomed Res Int. 2018;2018:5034783.
Schar-Zammaretti P, Dillmann ML, D'Amico N, Affolter M, Ubbink J. Influence of fermentation medium composition on physicochemical surface properties of Lactobacillus acidophilus. Appl Environ Microbiol. 2005;71:8165–73.
Satpute SK, Kulkarni GR, Banpurkar AG, Banat IM, Mone NS, et al. Biosurfactant/s from lactobacilli species: properties, challenges and potential biomedical applications. J Basic Microbiol. 2016;56:1140–58.
Yang E, Fan L, Yan J, Jiang Y, Doucette C, et al. Influence of culture media, pH and temperature on growth and bacteriocin production of bacteriocinogenic lactic acid bacteria. AMB Express. 2018;8:10.
Garcia-Vello P, Sharma G, Speciale I, Molinaro A, Im SH, et al. Structural features and immunological perception of the cell surface glycans of Lactobacillus plantarum: a novel rhamnose-rich polysaccharide and teichoic acids. Carbohydr Polym. 2020;233:115857.
Wang K, Niu MM, Yao D, Zhao J, Wu Y, et al. Physicochemical characteristics and in vitro and in vivo antioxidant activity of a cell-bound exopolysaccharide produced by Lactobacillus fermentum S1. Int J Biol Macromol. 2019;139:252–61.
Xu XQ, Peng Q, Zhang YW, Tian DD, Zhang PB, et al. Antibacterial potential of a novel Lactobacillus casei strain isolated from Chinese northeast sauerkraut and the antibiofilm activity of its exopolysaccharides. Food Funct. 2020;11:4697–706.
Allonsius CN, Vandenheuvel D, Oerlemans EFM, Petrova MI, Donders GGG, et al. Inhibition of Candida albicans morphogenesis by chitinase from Lactobacillus rhamnosus GG. Sci Rep. 2019;9:2900.
Kadhum MKH, Haydar NH. Production and characterization of biosurfactant (glycolipid) from Lactobacillus helveticus M5 and evaluate its antimicrobial and antiadhesive activity. Iraqi J Agric Sci. 2020;51:1543–58.
Zhao BB, Meng J, Zhang QX, Kang TT, Lu RR. Protective effect of surface layer proteins isolated from four Lactobacillus strains on hydrogen-peroxide-induced HT-29 cells oxidative stress. Int J Biol Macromol. 2017;102:76–83.
Bonhomme D, Werts C. Purification of LPS from Leptospira. Methods Mol Biol. 2020;2134:53–65.
Paraszkiewicz K, Moryl M, Plaza G, Bhagat D, Satpute SK, et al. Surfactants of microbial origin as antibiofilm agents. Int J Environ Heal R. 2021;31:401–20.
Sun ZL, Li PP, Liu F, Bian H, Wang DY, et al. Synergistic antibacterial mechanism of the Lactobacillus crispatus surface layer protein and nisin on Staphylococcus saprophyticus. Sci Rep-UK. 2017;7(1):265.
Seo SH, Jung M, Kim WJ. Antilisterial and amylase-sensitive bacteriocin producing Enterococcus faecium SH01 from Mukeunji, a Korean over-ripened kimchi. Food Sci Biotechnol. 2014;23:1177–84.
Grosu-Tudor SS, Stancu MM, Pelinescu D, Zamfir M. Characterization of some bacteriocins produced by lactic acid bacteria isolated from fermented foods. World J Microbiol Biotechnol. 2014;30:2459–69.
Patel M, Siddiqui AJ, Hamadou WS, Surti M, Awadelkareem AM, et al. Inhibition of bacterial adhesion and antibiofilm activities of a glycolipid biosurfactant from Lactobacillus rhamnosus with its physicochemical and functional properties. Antibiotics-Basel. 2021;10(12):1546.
Ghasemi A, Moosavi-Nasab M, Setoodeh P, Mesbahi G, Yousefi G. Biosurfactant production by lactic acid bacterium Pediococcus dextrinicus SHU1593 grown on different carbon sources: strain screening followed by product characterization. Sci Rep-UK. 2019;9:5287.
Guan C, Chen X, Zhao R, Yuan Y, Huang X, et al. A weak post-acidification Lactobacillus helveticus UV mutant with improved textural properties. Food Sci Nutr. 2021;9:469–79.
Lee D, Im J, Park DH, Jeong S, Park M, et al. Lactobacillus plantarum lipoteichoic acids possess strain-specific regulatory effects on the biofilm formation of dental pathogenic bacteria. Front Microbiol. 2021;12:758161.
Abruzzo A, Giordani B, Parolin C, Vitali B, Protti M, et al. Novel mixed vesicles containing lactobacilli biosurfactant for vaginal delivery of an anti-Candida agent. Eur J Pharm Sci. 2018;112:95–101.
Madhu AN, Prapulla SG. Evaluation and functional characterization of a biosurfactant produced by Lactobacillus plantarum CFR 2194. Appl Biochem Biotechnol. 2014;172:1777–89.
This work was supported by the National Natural Science Foundation of China (31972094, 31700079), the Natural Science Foundation of Jiangsu Province (BK20170496), the China Post-Doctorate Foundation, and the Scientific and Technological Innovation Platform Co-built by Yangzhou City and Yangzhou University (YZ2020265).
Key Lab of Dairy Biotechnology and Safety Control, College of Food Science and Engineering, Yangzhou University, Yangzhou, Jiangsu, China
Chengran Guan, Wenjuan Zhang, Jianbo Su, Feng Li, Dawei Chen, Xia Chen, Yujun Huang, Ruixia Gu & Chenchen Zhang
Chengran Guan
Wenjuan Zhang
Jianbo Su
Feng Li
Dawei Chen
Xia Chen
Yujun Huang
Ruixia Gu
Chenchen Zhang
CG designed this research and wrote the draft manuscript. WZ performed most of the experiments. JS and FL performed the antibacterial experiments. DC and CZ analyzed the antibiofilm data and revised the manuscript. XC, YH and RG provided experimental materials and supervised the experiments. All authors read and approved the final manuscript.
Correspondence to Chengran Guan or Chenchen Zhang.
Antibacterial activity of L. rhamnosus YT cells and the cell-surface extracts isolated with phenol (a) and LiCl (b). L. rhamnosus YT cells were treated with phenol or LiCl to obtain the cell-bound substances. The antibacterial activity of the treated cells and of the extracts against B. subtilis and S. enterica was then evaluated.
Guan, C., Zhang, W., Su, J. et al. Antibacterial and antibiofilm potential of Lacticaseibacillus rhamnosus YT and its cell-surface extract. BMC Microbiol 23, 12 (2023). https://doi.org/10.1186/s12866-022-02751-3
Antibiofilm
Lactobacillus rhamnosus YT
Cell-surface extract