Inequality with Constraint from Dan Sitaru's Math Phenomenon
The problem, under the constraints $a\ge b\ge c\gt 0\,$ and $a+b+c=10,\,$ is to prove that
$\displaystyle 4a+3b+2c\ge 2\sum_{cycl}\frac{a^2+ab+b^2}{a+b}\ge b+2c+20.$
Solution 1
Note that
$\displaystyle\begin{align} 2\sum_{cycl}\frac{a^2+ab+b^2}{a+b}&\ge 2\sum_{cycl}\frac{\frac{3}{4}(a+b)^2}{a+b}\\ &= 2\sum_{cycl}\frac{3}{4}(a+b)\\ &=2\cdot\frac{3}{2}(a+b+c)\\ &=3(a+b+c)\\ &= (a+b+c)+2(a+b+c)\\ &\ge b+2c+20, \end{align}$
because $a\ge c\,$ and $a+b+c=10.\,$ This proves the right inequality. The left inequality is equivalent to
$\displaystyle 4a+3b+2c\ge 2\sum_{cycl}\frac{a^2+ab+b^2}{a+b},$
which, in turn, is equivalent to
$\displaystyle\begin{align} \left(2a+b-\frac{2(a^2+ab+b^2)}{a+b}\right)&+\left(2b+c-\frac{2(b^2+bc+c^2)}{b+c}\right)\\ &+\left(2a+c-\frac{2(a^2+ac+c^2)}{a+c}\right)\ge 0, \end{align}$
i.e.,
$\displaystyle \frac{b(a-b)}{a+b}+\frac{c(b-c)}{b+c}+\frac{c(a-c)}{c+a}\ge 0,$
which is true because $a\ge b\ge c.$
Solution 2
The left inequality is equivalent to
$\displaystyle \begin{align} 4a+3b+2c &\ge 2\sum_{cycl}\frac{a^2+ab+b^2}{a+b}\\ &=2\sum_{cycl}\frac{a^2+2ab+b^2}{a+b}-2\sum_{cycl}\frac{ab}{a+b}\\ &=2\sum_{cycl}(a+b)-2\sum_{cycl}\frac{ab}{a+b}\\ &=4(a+b+c)-2\sum_{cycl}\frac{ab}{a+b}, \end{align}$
which can be rewritten as
$\displaystyle 2\sum_{cycl}\frac{ab}{a+b}\ge b+2c.$
Now note that
$\displaystyle\begin{align}\frac{2ab}{a+b}+\frac{2bc}{b+c}&\ge\frac{2ab}{2a}+\frac{2bc}{2b}\\ &=b+c. \end{align}$
Also $\displaystyle \frac{2ca}{c+a}\ge c\,$ is equivalent to $2ca\ge c^2+ca,\,$ or $a\ge c,\,$ which is true. Now, adding this to
$\displaystyle\frac{2ab}{a+b}+\frac{2bc}{b+c}\ge b+c$
completes the proof of the left inequality. For the right inequality, observe that, as we just showed, $\displaystyle 2\sum_{cycl}\frac{ab}{a+b}\ge b+2c,\,$ so that
$\displaystyle 2\sum_{cycl}\frac{a^2+ab+b^2}{a+b}=2\sum_{cycl}\frac{a^2+b^2}{a+b}+2\sum_{cycl}\frac{ab}{a+b}\ge 2\sum_{cycl}\frac{a^2+b^2}{a+b}+b+2c.$
Thus it suffices to prove that
$\displaystyle 2\sum_{cycl}\frac{a^2+b^2}{a+b}\ge 20.$
This is indeed so due to Bergstrom's inequality:
$\displaystyle\begin{align}2\sum_{cycl}\frac{a^2+b^2}{a+b}&=2\sum_{cycl}\frac{a^2}{a+b}+2\sum_{cycl}\frac{b^2}{a+b}\\ &\ge 2\frac{(a+b+c)^2}{2(a+b+c)}+2\frac{(a+b+c)^2}{2(a+b+c)}\\ &=2(a+b+c)=20. \end{align}$
Solution 3
Let $\displaystyle f=2\sum_{cycl}\frac{a^2+ab+b^2}{a+b}.\,$ We re-express:
$\displaystyle f=2\sum_{cycl}\left(\frac{(a+b)^2-ab}{a+b}\right)=2\left(20-\sum_{cycl}\frac{ab}{a+b}\right).$
Let us further establish that from the assumptions $a\ge b\ge c\gt 0\,$ and $a+b+c=10,\,$ it is necessary that $\displaystyle a\ge \frac{10}{3}\,$ and $\displaystyle c\le\frac{10}{3}\,$ (indeed, $3a\ge a+b+c=10\ge 3c$).
We have, for the right side inequality,
$\displaystyle b+2c+20=(b+c)+c+20\le \left(10-\frac{10}{3}\right)+\frac{10}{3}+20= 30.$
We also have $f\ge 30,\,$ for, $\displaystyle \frac{ab}{a+b}\le\frac{1}{4}(a+b)\,$ from which
$\displaystyle \sum_{cycl}\frac{ab}{a+b}\le\frac{1}{2}(a+b+c)=5,$
so that $\displaystyle f=2\left(20-\sum_{cycl}\frac{ab}{a+b}\right)\ge 2(20-5)=30\ge b+2c+20,\,$ which proves the right inequality.
For the left inequality, we have
$\displaystyle \frac{ab}{a+b}+\frac{cb}{b+c}+\frac{ca}{c+a}\ge\frac{ab}{2a}+\frac{bc}{2b}+\frac{ca}{2a}=\frac{b}{2}+c,$
so $f\le 40-b-2c.\,$ Since $4a+3b+2c=2a+b+2(a+b+c)=2a+b+20,\,$ the left side inequality becomes $2a+b+20\ge 40-b-2c,\,$ i.e., $2(a+b+c)\ge 20,\,$ which is true.
The above problem, originally from Dan Sitaru's book Math Phenomenon, was posted by him at the CutTheKnotMath facebook page. Solution 1 is by Diego Alvariz; Solution 3 is by N. N. Taleb.
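As a quick numerical sanity check (our addition, not part of the original solutions), the double inequality can be tested on random triples satisfying the constraints $a\ge b\ge c\gt 0,$ $a+b+c=10$:

```python
# Randomized check of  b + 2c + 20 <= f <= 4a + 3b + 2c  for
# f = 2*sum_cycl (a^2 + ab + b^2)/(a + b), under a >= b >= c > 0, a + b + c = 10.
import random

def f(a, b, c):
    pairs = [(a, b), (b, c), (c, a)]
    return 2 * sum((x * x + x * y + y * y) / (x + y) for x, y in pairs)

random.seed(1)
for _ in range(100_000):
    a, b, c = sorted((random.uniform(0.01, 1.0) for _ in range(3)), reverse=True)
    s = a + b + c
    a, b, c = 10 * a / s, 10 * b / s, 10 * c / s   # rescale so that a + b + c = 10
    val = f(a, b, c)
    assert b + 2 * c + 20 - 1e-9 <= val <= 4 * a + 3 * b + 2 * c + 1e-9

print("No counterexample found.")
```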
A Cyclic But Not Symmetric Inequality in Four Variables $\left(\displaystyle 5(a+b+c+d)+\frac{26}{abc+bcd+cda+dab}\ge 26.5\right)$
An Inequality with Constraint $\left((x+1)(y+1)(z+1)\ge 4xyz\right)$
An Inequality with Constraints II $\left(\displaystyle abc+\frac{2}{ab+bc+ca}\ge\frac{5}{a^2+b^2+c^2}\right)$
An Inequality with Constraint III $\left(\displaystyle \frac{x^3}{y^2}+\frac{y^3}{z^2}+\frac{z^3}{x^2}\ge 3\right)$
An Inequality with Constraint IV $\left(\displaystyle\sum_{k=1}^{n}\sqrt{x_k}\ge (n-1)\sum_{k=1}^{n}\frac{1}{\sqrt{x_k}}\right)$
An Inequality with Constraint VII $\left(|(2x+3y-5z)-3(x+y-5z)|=|-x+10z|\le\sqrt{101}\right)$
An Inequality with Constraint VIII $\left(\sqrt{24a+1}+\sqrt{24b+1}+\sqrt{24c+1}\ge 15\right)$
An Inequality with Constraint IX $\left(x^2+y^2\ge x+y\right)$
An Inequality with Constraint X $\left((x+y+p+q)-(x+y)(p+q)\ge 1\right)$
Problem 11804 from the AMM $\left(10|x^3 + y^3 + z^3 - 1| \le 9|x^5 + y^5 + z^5 - 1|\right)$
Sladjan Stankovik's Inequality With Constraint $\left(abc+bcd+cda+dab-abcd\le\displaystyle \frac{27}{16}\right)$
An Inequality with Constraint XII $\left(abcd\ge ab+bc+cd+da+ac+bd-5\right)$
An Inequality with Constraint XIV $\left(\small{64(a^2+ab+b^2)(b^2+bc+c^2)(c^2+ca+a^2) \le 3(a+b+c)^6}\right)$
An Inequality with Constraint XVII $\left(a^3+b^3+c^3\ge 0\right)$
An Inequality with Constraint in Four Variables II $\left(a^3+b^3+c^3+d^3 + 6abcd \ge 10\right)$
An Inequality with Constraint in Four Variables III $\left(\displaystyle\small{abcd+\frac{15}{2(ab+ac+ad+bc+bd+cd)}\ge\frac{9}{a^2+b^2+c^2+d^2}}\right)$
An Inequality with Constraint in Four Variables V $\left(\displaystyle 5\sum \frac{abc}{\sqrt[3]{(1+a^3)(1+b^3)(1+c^3)}}\leq 4\right)$
An Inequality with Constraint in Four Variables VI $\left(\displaystyle \sum_{cycl}a^2+6\cdot\frac{\displaystyle \sum_{cycl}abc}{\displaystyle \sum_{cycl}a}\ge\frac{5}{3}\sum_{sym}ab\right)$
A Cyclic Inequality in Three Variables with Constraint $\left(\displaystyle a\sqrt{bc}+b\sqrt{ca}+c\sqrt{ab}+2abc=1\right)$
Dorin Marghidanu's Cyclic Inequality with Constraint $\left(\displaystyle 2a^2-2\sqrt{2}(b+c)a+3b^2+4c^2-2\sqrt{bc}\gt 0\right)$
Dan Sitaru's Cyclic Inequality In Three Variables with Constraints $\left(\displaystyle \frac{1}{\sqrt{a+b^2}}+ \frac{1}{\sqrt{b+c^2}}+ \frac{1}{\sqrt{c+a^2}}\ge\frac{1}{\sqrt{a+b+c}}\right)$
Dan Sitaru's Cyclic Inequality In Three Variables with Constraints II $\left(\displaystyle \sum_{cycl}\frac{\displaystyle \frac{x}{y}+1+\frac{y}{x}}{\displaystyle \frac{1}{x}+\frac{1}{y}}\le 9\right)$
Dan Sitaru's Cyclic Inequality In Three Variables with Constraints III $\left(\displaystyle 12+\sum_{cycl}\left(\sqrt{\frac{x^3}{y}}+\sqrt{\frac{x^3}{y}}\right)\ge 8(x+y+z)\right)$
Another Problem from the 2016 Danubius Contest $\left(\displaystyle \frac{1}{a^2+2}+\frac{1}{b^2+2}+\frac{1}{c^2+2}\le 1\right)$
Gireaux's Theorem (If a continuous function of several variables is defined on a hyperbrick and is convex in each of the variables, it attains its maximum at one of the corners)
An Inequality with a Parameter and a Constraint $\left(\displaystyle a^4+b^4+c^4+\lambda abc\le\frac{\lambda +1}{27}\right)$
Unsolved Problem from Crux Solved $\left(a_1a_2a_3a_4a_5a_6\le\displaystyle \frac{5}{2}\right)$
An Inequality With Six Variables and Constraints Find the range of $\left(a^2+b^2+c^2+d^2+e^2+f^2\right)$
Cubes Constrained $\left(3(a^4+b^4)+2a^4b^4\le 8\right)$
Dorin Marghidanu's Inequality with Constraint $\left(\displaystyle \frac{1}{a_1+1}+\frac{2}{2a_2+1}+\frac{3}{3a_3+1}\ge 4\right)$
Dan Sitaru's Integral Inequality with Powers of a Function $\left(\displaystyle\left(\int_0^1f^5(x)dx\right)\left(\int_0^1f^7(x)dx\right)\left(\int_0^1f^9(x)dx\right)\ge 2\right)$
Michael Rozenberg's Inequality in Three Variables with Constraints $\left(\displaystyle 4\sum_{cycl}ab(a^2+b^2)\ge\sum_{cycl}a^4+5\sum_{cycl}a^2b^2+2abc\sum_{cycl}a\right)$
Dan Sitaru's Cyclic Inequality In Three Variables with Constraints IV $\left(\displaystyle \frac{(4x^2y^2+1)(36y^2z^2+1)(9x^2z^2+1)}{2304x^2y^2z^2}\geq \frac{1}{(x+2y+3z)^2}\right)$
Refinement on Dan Sitaru's Cyclic Inequality In Three Variables $\left(\displaystyle \frac{(4x^2y^2+1)(36y^2z^2+1)(9x^2z^2+1)}{2304x^2y^2z^2}\geq \frac{1}{3\sqrt{3}}\right)$
An Inequality with Arbitrary Roots $\left(\displaystyle \sum_{cycl}\left(\sqrt[n]{a+\sqrt[n]{a}}+\sqrt[n]{a-\sqrt[n]{a}}\right)\lt 18\right)$
Leo Giugiuc's Inequality with Constraint $\left(\displaystyle 2\left(\frac{1}{a+1}+\frac{1}{b+1}+\frac{1}{c+1}\right)\le ab+bc+ca\right)$
Problem From the 2016 IMO Shortlist $\left(\displaystyle \sqrt[3]{(a^2+1)(b^2+1)(c^2+1)}\le\left(\frac{a+b+c}{3}\right)^2+1\right)$
Dan Sitaru's Cyclic Inequality with a Constraint and Cube Roots $\left(\displaystyle \sum_{cycl}\sqrt[3]{\frac{abc}{(a+1)(b+1)(c+1)}}\le\frac{4}{5}\right)$
Dan Sitaru's Cyclic Inequality with a Constraint and Cube Roots II $\left(\displaystyle \sqrt[3]{a}+\sqrt[3]{b}+\sqrt[3]{c}+\sqrt[3]{d}\le\sqrt[3]{abcd}\right)$
A Simplified Version of Leo Giugiuc's Inequality from the AMM $\left(\displaystyle a^3+b^3+c^3\ge 3\right)$
Kunihiko Chikaya's Inequality $\displaystyle \small{\left(\frac{(a^{10}-b^{10})(b^{10}-c^{10})(c^{10}-a^{10})}{(a^{9}+b^{9})(b^{9}+c^{9})(c^{9}+a^{9})}\ge\frac{125}{3}[(a-b)^3+(b-c)^3+(c-a)^3]\right)}$
A Cyclic Inequality on [-1,1] $\left(xy+yz+zx\ge 1\right)$
An Inequality with Two Triples of Variables $\left(\displaystyle\sum_{cycl}ux\ge\sqrt{\left(\sum_{cycl}xy\right)\left(2\sum_{cycl}uv-\sum_{cycl}u^2\right)}\right)$
6th European Mathematical Cup (2017), Junior Problem 4 $\left(x^3 - (y^2 + yz + z^2)x + y^2z + yz^2 \le 3\sqrt{3}\right)$
Dorin Marghidanu's Example $\left(\displaystyle\frac{\displaystyle\frac{1}{b_1}+\frac{2}{b_2}+\frac{3}{b_3}}{1+2+3}\ge\frac{1+2+3}{b_1+2b_2+3b_3}\right)$
A Trigonometric Inequality with Ordered Triple of Variables $\left((x+y)\sin x+(x-z)\sin y\lt (y+z)\sin x\right)$
Three Variables, Three Constraints, Two Inequalities (Only One to Prove) - by Leo Giugiuc $\bigg(a+b+c=0$ and $a^2+b^2+c^2\ge 2$ Prove that $abc\ge 0\bigg)$
Hung Nguyen Viet's Inequality with a Constraint $\left(1+2(xy+yz+zx)^2\ge (x^3+y^3+z^3+6xyz)^2\right)$
A Cyclic Inequality by Seyran Ibrahimov $\left(\displaystyle \sum_{cycl}\frac{x}{y^4+y^2z^2+z^4}\le\frac{1}{(xyz)^2}\right)$
Dan Sitaru's Cyclic Inequality In Three Variables with Constraints V $\left(\displaystyle \frac{1}{\sqrt{ab(a+b)}}+\frac{1}{\sqrt{bc(b+c)}}+\frac{1}{\sqrt{ca(c+a)}}\le 3+\frac{a+b+c}{abc}\right)$
Cyclic Inequality In Three Variables From Kvant $\left(\displaystyle \frac{a}{bc+1}+\frac{b}{ca+1}+\frac{c}{ab+1}\le 2\right)$
Cyclic Inequality In Three Variables From Vietnam by Rearrangement $\left(\displaystyle \frac{x^3+y^3}{y^2+z^2}+\frac{y^3+z^3}{z^2+x^2}+\frac{z^3+x^3}{x^2+y^2}\le 3\right)$
A Few Variants of a Popular Inequality And a Generalization $\left(\displaystyle \frac{1}{(a+b)^2+4}+\frac{1}{(b+c)^2+4}+\frac{1}{(c+a)^2+4}\le \frac{3}{8}\right)$
Two Constraints, One Inequality by Qing Song $\left(|a|+|b|+|c|\ge 6\right)$
A Moscow Olympiad Question with Two Inequalities $\left(\displaystyle b^2\gt 4ac\right)$
A Problem from the Short List of the 2018 JBMO $\left(ab^3+bc^3+cd^3+da^3\ge a^2b^2+b^2c^2+c^2d^2+d^2a^2\right)$
An Inequality from a Mongolian Exam $\left(\displaystyle 2\sum_{i=1}^{2n-1}(x_i-A)^2\ge \sum_{i=1}^{2n-1}(x_i-x_n)^2\right)$
(a) What is the hybridization of N in the molecule? (b) Which structure has a dipole moment? 10.83 Cyclopropane (C3H6) has the shape of a triangle in which a C atom is bonded to two H atoms and two other C atoms at each corner. Cubane (C8H8) has the shape of a cube in which a C atom is bonded to one H atom and three other C atoms at each corner.
The in situ hybridization protocol described here allows a direct localization of mRNA and small RNA expression at the... Note: All steps up to and including the hybridization step are sensitive to RNAse activity. It is therefore essential to work in clean conditions.
In this case, carbon will sp2 hybridize; in sp2 hybridization, the 2s orbital mixes with only two of the three available 2p orbitals, forming a total of three sp hybrid orbitals with one p-orbital remaining. The three hybridized orbitals explain the three sigma bonds that each carbon forms.
Jan 02, 2010 · What is the hybridization on CH4, NH3, O2, N2, and H2O? I'm getting really confused about the hybridization of O2, N2, and H2O. I think CH4 and NH3 is sp3. Also I was wondering if what and how many sigma and pi bonds c3h6 had? Any help would be appreciated! Thanks
Red Caps fit 2 mL screw cap tubes and are ideal for high-speed sample preparation instruments. Application Notes . Red Caps fit EMPTY FASTPREP® Tubes and are used in Fastprep systems, to hold sample for Homogenization.
Alkenes and alkynes can be transformed into almost any other functional group you can name! We will review their nomenclature, and also learn about the vast possibility of reactions using alkenes and alkynes as starting materials.
What hybridization is predicted for the nitrogen atom in the NO3− ion? (a) sp2, (b) sp3, (c) sp3d, (d) sp3d2, (e) none is correct. ... C2H4, (c) C3H6, (d) C4H8, (e ...
Dispersion formula (at 20 °C): $$n^2-1=\frac{0.83189λ^2}{λ^2-0.00930}+\frac{-0.15582λ^2}{λ^2+49.45200}$$ Reference: S. Kedenburg, M. Vieweg, T. Gissibl ...
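For illustration (our addition), the formula can be evaluated directly; we assume the wavelength is expressed in micrometres, which is the usual convention for such dispersion fits but is not stated explicitly above.

```python
# Evaluate n(lambda) from the dispersion formula above.
# Assumption: lambda is given in micrometres.
import math

def refractive_index(lam_um):
    lam2 = lam_um ** 2
    n2 = 1 + 0.83189 * lam2 / (lam2 - 0.00930) - 0.15582 * lam2 / (lam2 + 49.45200)
    return math.sqrt(n2)

for lam in (0.5, 1.0, 1.55):
    print(f"n({lam} um) = {refractive_index(lam):.4f}")
```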
Student Solutions Manual to accompany Organic Chemistry, Seventh Edition | Francis Carey, Neil Allison
In chemistry, isovalent or second order hybridization is an extension of orbital hybridization, the mixing of atomic orbitals into hybrid orbitals which can form chemical bonds, to include fractional numbers of atomic orbitals of each type (s, p, d)...
The hybridization wavelength was designed at 1562 nm and observed at 1537 nm. Potential applications include fundamental mode conversion, polarization rotation, polarization splitter, and polarization insensitive waveguides in optical receiver module.
1. What is the ground-state electronic configuration of a carbon atom? A. 1s², 2s², 2p⁵ B. 1s², 2s², 2p² C. 1s², 2s², 2p⁶ D. 1s², 2s², 2p⁴
The separation of DNA restriction enzyme digestion products by gel electrophoresis and immobilization of the fragments onto a solid support (or filter) have been described in the preceding chapters (see Chapters 3 and 4). This chapter describes the detection of specific DNA sequences by hybridization...
A) 6.40V B) 31.0V C) 1.24V D) 6.00V Both DNA and RNA are synthesized by the process of: A) Transcription B) Replication C) Polymerization D) PCR The cross between two dissimilar individuals is called: A) Test cross B) Interbreeding C) Epistasis D) Hybridization 'CHUCKLE' means: A) Bouquet of flowers B) displeasing manner C) suppressed ...
Propane, a colourless, easily liquefied, gaseous hydrocarbon (compound of carbon and hydrogen), the third member of the paraffin series following methane and ethane. The chemical formula for propane is C3H8. It is separated in large quantities from natural gas, light crude oil, and oil-refinery
AP Chemistry. C2H2(g) + 2H2(g) --> C2H6(g) Information about the substances involved in the reaction represented above is summarized in the following Hello! my question is. Cumulene has chemical formula C4H4 with 7sigma and 3pie bonds. the 2 outer C atoms have a hybridization of "sp2", the H...
• Pauling said to modify VB approach with ORBITAL HYBRIDIZATION. • — mix available orbitals to form a new set of orbitals — HYBRID ORBITALS — that will give the maximum overlap in The bonds between C and H are all sigma bonds between sp2 hybridized C atoms and the s-orbitals of Hydrogen.
What Is The Hybridization At Each Carbon Atom In The Molecule
Decision: The molecular geometry of BrF3 is T-shaped with an asymmetric charge distribution on the central atom. Therefore this molecule is polar.
Gas composition by gas type:
Gas type | GN | C1 | C2 | C3 | C4
CH4 | 97 | 55 | 70 | 25 | 35
C2H6 | 1 | 10 | 0 | 8 | 3
C3H8 | 1 | 0 | 16 | 25 | 35
C4H10 | 0 | 4 | 5 | 10 | 12
C2H4 | 0.5 | 5 | 3 | 10 | 7
C3H6 | 0.5 | 2 | 0 | 5 | 8
H2S | 0 | 4 | 1 | 2 | 0
H2 | 0 | 20 | 5 | 15 | 0
PCI | 913 | 955 | 1200 | 1530 | 1800
Source: own elaboration.
Chapter 2: Alkanes. sp3 hybridisation: the three C-H bonds formed from the p orbitals might be expected to have H-C-H bond angles of 90 degrees, but these bonds will have different lengths and strengths.
Drawing Structural Formulas. For each of the following problems, draw a formula using the drawing window on the left. When you are finished, check your answer by pressing the Check Molecule # button under the name.
4. For the ClNO2 molecule, show the geometry (shape) and hybridization by using the VSEPR theory. 5. A large flask is evacuated and found to weigh 134.567 g. It is then filled to a pressure of 735 torrs at 31 degrees Celsius with a gas of unknown molar mass and then reweighed. Its new mass is 137.456 g.
Hybridization - Honors Chemistry (Mrs. Coyle). Hybridization of orbitals: the merging of several atomic orbitals to form hybrid orbitals. Alkanes, CnH2n+2; all C are sp3 hybridized:
n | root | suffix | formula
3 | prop | ane | C3H8
4 | but | ane | C4H10
5 | pent | ane | C5H12
6 | hex | ane | C6H14
7 | hept | ane | C7H16
8 | oct | ane | C8H18...
sp3 Hybridization: Molecules that have tetrahedral geometry like CH4, NH3, H2O, SO4²⁻, and ClO3⁻ exhibit sp3 hybridization on the central atom. Methane with Hybridized Orbitals: Overlap of the Hydrogen 1s orbitals with the hybridized sp3 orbitals from the central Carbon.
Alibaba.com offers 895 hybridization system products. A wide variety of hybridization system options are available to you, such as applicable industries, warranty, and after-sales service provided.
EMPIRICAL FORMULAS (FORMULAE) "Empirical Formula is the simplest representation of a compound with its atoms shown in correct ratios of small whole numbers" ...
Hybridization is a model that attempts to remedy the shortcomings of simple valence bond theory. Below, the concept of hybridization is described using four simple organic molecules Experimentally, ethane contains two elements, carbon and hydrogen, and the molecular formula of ethane is C2H6.
I need the hybridization of the following isomers of C3H4: H2C=C=CH2 and H3C-C≡CH. Please help! Hybridization of C3H4?
Nov 08, 2010 · The carbon and oxygen hybridizations are correct - we generally don't use principal quantum numbers with hybridization and just say sp2.
Oct 07, 2014 · Molecules, the 'sequel' to Elements: A Visual Exploration of Every Known Atom in the Universe, isn't quite as good as the original but still a fascinating book.The main draw for the book are the outstanding photos on every single page.
The sp3 hybridization is the type of hybridization that is found in ammonia, NH3. I think there are two isomers of C3H6. One is propene (CH2=CHCH3) and the other is cyclopropane (looks like a triangle). Hope this helps.
Hence this hybridization is called trigonal hybridization. The sp2 hybrid orbital is slightly smaller in size than the sp3 hybrid orbital. Therefore the shape of an sp2 hybridized atom is trigonal planar with a bond angle of 120°. Examples: BCl3: Boron trichloride; BF3: Boron trifluoride.
I also go over hybridization, shape and bond angle. Lone pairs, unpaired electrons, and single, double, or triple bonds are used to indicate where the valence electrons are located around each atom in a Lewis structure.
The Shapes of Molecules: The chemical bonding in a compound is very obviously related to its reactivity and properties, Na2O and H2O being quite different materials.
Introgressive hybridization of divergent species has been important in increasing variation, leading to new morphologies and even new species, but how that By hybridizing 2 species of Australian fruit flies in the laboratory and following them for several generations, Lewontin and Birch (1) showed that...
A quick explanation of the molecular geometry of C3H8 (Propane) including a description of the C3H8 bond angles.Looking at the C3H8 Lewis structure we can se...
What is the hybridization of the central atom in each of the following? (a) BeH2 = sp (b) SF6 = sp3d2 (c) PO4³⁻ = sp3 (d) PCl5 = dsp3. 16. Two important industrial chemicals, ethene, C2H4, and propene, C3H6, are produced by the steam (or thermal) cracking process: 2C3H8(g) → C2H4(g) + C3H6(g) + CH4(g) + H2(g). For each of the four carbon ...
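As a quick arithmetic aside (ours, not from the source), the cracking equation quoted above can be checked for atom balance:

```python
# Check that 2 C3H8 -> C2H4 + C3H6 + CH4 + H2 is balanced.
from collections import Counter

def formula(c, h, n=1):
    """Atom counts of n molecules with c carbons and h hydrogens."""
    return Counter({"C": c * n, "H": h * n})

left = formula(3, 8, n=2)                                  # 2 C3H8
right = formula(2, 4) + formula(3, 6) + formula(1, 4) + formula(0, 2)
print(left == right, dict(left), dict(right))              # True: C6 H16 on both sides
```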
Feb 18, 2013 · To find the hybridization of a central atom is basically (1) counting the number of valence electrons in the molecule, (2) draw the Lewis structure of the molecule, (3) count the number of electron groups (this includes lone pairs, free radicals, bonds, etc.), (4) determine the shape based on the number of electron groups, and (5) determine the hybridization based on the shape.
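The final two steps of this procedure reduce to a lookup from the electron-group count to the hybridization; a minimal illustrative sketch (ours) is shown below.

```python
# Map the number of electron groups (bonding domains + lone pairs) around the
# central atom to the corresponding hybridization.
HYBRIDIZATION = {2: "sp", 3: "sp2", 4: "sp3", 5: "sp3d", 6: "sp3d2"}

def hybridization(electron_groups):
    return HYBRIDIZATION.get(electron_groups, "outside this simple model")

# e.g. CO2 (2 groups), BF3 (3), CH4 (4), PCl5 (5), SF6 (6)
for name, groups in [("CO2", 2), ("BF3", 3), ("CH4", 4), ("PCl5", 5), ("SF6", 6)]:
    print(f"{name}: {hybridization(groups)}")
```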
This diagram of CH 4 illustrates the standard convention of displaying a three-dimensional molecule on a two-dimensional surface. The straight lines are in the plane of the page, the solid wedged line is coming out of the plane toward the reader, and the dashed wedged line is going out of the plane away from the reader.
Dec 10, 2015 · Notes. Note 1: Although alkoxides (RO–, the conjugate base of alcohols, pKa 16-18) are not on anyone's list of Great Leaving Groups, they are some 25 orders of magnitude better leaving groups than hydrides (H–, the conjugate base of hydrogen, pKa 40) and more than 30 orders of magnitude better than alkyl groups (R–, the conjugate base of alkanes, pKa 50).
Question: In the Workbook on Quiz 2 preparation, number 4 asks, what is the hybridization of each nitrogen atom in N2? A linear shape only requires 2 orbitals: s and p; hence, sp hybridization. Table 3.2 on page 109 in the textbook is a great guide for this type of problems: connecting...
Chemical Kinetics: Chemical Kinetics is the study of the rate at which a chemical process occurs. Besides information about the speed at
OSA Continuum, Vol. 4, https://doi.org/10.1364/OSAC.442447
MEMS–VCSEL design method using a diffraction loss map
Chikako Kurokawa,1,* Yuta Suzuki,2 Yuma Kitagawa,2 and Shin–ichiro Tezuka2
1Marketing Headquarters, Innovation Center, Life Research & Development Department, Yokogawa Electric Corporation, 2–9–32 Nakacho, Musashino City, Tokyo, Japan
2Marketing Headquarters, Innovation Center, Sensing Research & Development Department, Yokogawa Electric Corporation, 2–9–32 Nakacho, Musashino City, Tokyo, Japan
*Corresponding author: [email protected]
Chikako Kurokawa, Yuta Suzuki, Yuma Kitagawa, and Shin–ichiro Tezuka, "MEMS–VCSEL design method using a diffraction loss map," OSA Continuum 4, 3129-3138 (2021)
Topics: Lasers and Laser Optics; Bragg reflectors; Finite difference time domain; Fresnel number; Propagation methods; Vertical cavity surface emitting lasers
Original Manuscript: September 10, 2021
Revised Manuscript: October 20, 2021
Manuscript Accepted: October 24, 2021
The micro–optical resonator of the microelectromechanical system(MEMS) tunable vertical cavity surface emitting laser(VCSEL), which is a gain–guided laser, consists of two mirrors fabricated on semiconductor chips of different materials. To understand the relationship between the curvature radius of the concave mirror and the diameter of the tunnel junction forming the active region, the diffraction loss map was obtained. The Fox–Li method was used with the integral kernel of the Rayleigh–Sommerfeld diffraction to simulate an optical resonator with a large Fresnel number. We derived the guideline for the actual bonding process of the different material chips by simulating with each parameter.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Vertical cavity surface emitting lasers(VCSELs) are commonly used in several areas, including optical communication, three–dimensional sensing, and gas analysis utilizing laser spectroscopy [1–3]. In this study, a widely–tunable VCSEL with a microelectromechanical system(MEMS) movable mirror, called a MEMS–VCSEL, was developed. MEMS–VCSEL exhibits rapid tuning of 500 kHz with a tunable range of 50 nm–60 nm in three bands of ${1.55}\;\mu \textrm {m}$, ${1.62}\;\mu \textrm {m}$, and ${1.69}\;\mu \textrm {m}$ [4–6].
Figure 1 shows the schematic of the MEMS–VCSEL. The MEMS–VCSEL consists of a VCSEL chip with an in–plane distributed Bragg reflector(DBR) on the upper side and a Si–MEMS chip with a concave DBR on the lower side. To form the optical resonator, the Au–electrode of each chip is bonded using a thermocompression method. To tune the wavelength, the length of the resonator is controlled by displacing the concave mirror on a membrane in the direction of the optical axis using an electrostatic force [7].
Fig. 1. Schematic of the MEMS–VCSEL. The VCSEL also has a TJ to form the active region, which introduces a gain–guided mechanism. The distance between the two mirrors is equivalent to the length of the cavity.
The MEMS–VCSEL is a gain–guided laser consisting of semiconductors made of different materials. This difference causes diffraction loss of the resonator, as the active layer and the optical axis of the mirror are inevitably inclined and/or misaligned during fabrication. As a result, the lasing threshold can be increased with an increase in the diffraction loss. Although the tolerance of the mirror inclination and the misalignment of the optical axis are important for the bonding process, fabricating several resonators to obtain the tolerance is time–consuming. Therefore, numerical simulation is preferable to conducting multiple experiments.
The beam propagation method(BPM), the conventional method of calculating the mode of the optical resonator, uses paraxial approximation [8–11]. Therefore, it is not applicable to micro–optical resonators such as MEMS–VCSEL, in which the diameter of the mirror is larger than the cavity length(the Fresnel number $N_{F}$ [12,13] is high). Although the finite difference time domain(FDTD) method [14,15], which discretizes Maxwell's equations, does not require approximation, it has certain disadvantages, such as difficulty in calculating the mode and high consumption of computational resources.
On the other hand, the Fox–Li method can efficiently simulate an optical resonator with less approximation [16,17]. Therefore, in the present study, this method was adopted to calculate the mode profile and the dependence of the diffraction loss on the mirror's concave curvature radius, the inclination of the mirror, and the misalignment of the optical axis [18,19]. Further, it allows the integral kernel to be derived without the paraxial approximation [20].
However, previous applications of the Fox–Li method did not consider the gain–guided structure, which squeezes the beam and plays an important role in the optical resonator. Thus, the calculated mode profiles were dependent on the mirror area, and the results differ significantly from the actual mode profile of the MEMS–VCSEL.
In this study, an optical aperture(AP) [21] is introduced in addition to the existing model using the Fox–Li method to simulate the gain–guided structure, that is, the tunnel junction(TJ) in the laser active layer. The dependence of the diffraction loss on the misalignment of the optical axis of the mirror and the TJ, as well as on the mirror inclination, was calculated using the modified model. Additionally, the diffraction loss map, which demonstrates the effect of mirror inclination and misalignment on the diffraction loss, is introduced to investigate the characteristics of the resonator. The guidelines for the actual bonding process of the different semiconductor chips were derived by investigating the dependence of the diffraction loss on the TJ diameter and curvature radius of the concave mirror from the diffraction loss map.
2. Calculation model and method
2.1 Calculation model
Figure 2 shows a calculation model composed of two circular mirrors, $\mathrm {m}_1$ and $\mathrm {m}_2$, and a circular AP. $\mathrm {m}_1$ and $\mathrm {m}_2$ are equivalent to a concave mirror on the Si–MEMS chip and a plane mirror on the VCSEL chip. A concave mirror was fabricated onto the MEMS membrane by chemical mechanical polishing(CMP) [4]. The CMP process condition determines the curvature radius $R$ [22], which is also a design parameter for the MEMS–VCSEL. The AP with diameter $a_{0}$ is the approximate structure of the active region formed by the TJ, as explained below. $\theta$ is an inclination angle of $\mathrm {m}_2$. $d$ is the position shift of the optical axis of $\mathrm {m}_2$ and AP, which shows the difference in the distances between the centers of $\mathrm {m}_1$ and $\mathrm {m}_2$ in the $xy$–plane. Here, $a$ is the diameter of the two mirrors, and $b$ is the cavity length, which indicates the distance between $\mathrm {m}_1$ and $\mathrm {m}_2$. $b^{\prime }$ and $b^{\prime \prime }$ are the distances between $\mathrm {m}_1$ and AP and AP and $\mathrm {m}_2$, respectively. The reflectivities of $\mathrm {m}_1$ and $\mathrm {m}_2$ are $R_{1}$ and $R_{2}$, respectively, which allows consideration of the reflection loss. However, $R_{1}=R_{2}=100\%$ were set in this study.
Fig. 2. The calculation model is composed of two circular mirrors, $\mathrm {m}_1$ and $\mathrm {m}_2$, and a circular optical aperture AP. Here, $a$ is the diameter of the two mirrors, and $b$ denotes the cavity length. $b^{\prime }$ are the distances between $\mathrm {m}_1$ and AP, $b^{\prime \prime }$ are the distances between AP and $\mathrm {m}_2$. $R$ is the curvature radius of $\mathrm {m}_1$. $d$ is the position shift of the optical axis of $\mathrm {m}_2$ and AP. $\theta$ is an inclination angle of $\mathrm {m}_2$. The reflectivity of $\mathrm {m}_1$ and $\mathrm {m}_2$ are $R_{1}$ and $R_{2}$, respectively. Here, the parameters were set as $a={24}\lambda$, $b^{\prime }={7}\lambda$, $b^{\prime \prime }={3}\lambda$, and $R_{1}=R_{2}=100\%$. $s_i$, $s_j$, and $s_k$ are the coordinates of the mirror $\mathrm {m}_1$, $\mathrm {m}_2$, and AP, respectively. $K^{\left (1\right )}$, $K^{\left (2\right )}$, $K^{\left (3\right )}$, and $K^{\left (4\right )}$ are integral kernels in one way correspond to the region $\mathrm {m}_1$–AP, AP–$\mathrm {m}_2$, $\mathrm {m}_2$–AP, and AP–$\mathrm {m}_1$.
The MEMS–VCSEL has a gain–guiding mechanism owing to the active region formed by a TJ in the VCSEL. The transverse mode is strongly confined by the active region because the light is gained when passing through the region. In fact, because of the sufficiently short distance between TJ and Si–MEMS(several wavelengths), the beam diameter on the Si–MEMS mirror is almost equal to the TJ diameter. From this experimental result, the active region was simply approximated using the AP.
Therefore, the design parameters of a micro–optical resonator are the mirror diameter $a$, cavity length $b$, TJ diameter $a_{0}$, and curvature radius $R$. When the TJ diameter is larger than the mirror diameter($a_{0} \gg a$), the former can be disregarded in the design parameters. In this study, the parameters are assumed to be $a={24}\lambda$, $b^{\prime }={7}\lambda$, $b^{\prime \prime }={3}\lambda$, and $b=b^{\prime } + b^{\prime \prime }={10}\lambda$, as shown in Fig. 2. The other design parameters are the TJ diameter and curvature radius.
When the beam diameter is sufficiently confined by the TJ, the Fresnel number, which expresses the characteristics of the optical resonator, is approximated as below:
(1)$$N_\mathrm{F, eff}=\cfrac{(a_{0}/2)^2}{\lambda b}.$$
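For orientation (our illustration, not code from the paper), Eq. (1) can be evaluated directly with lengths measured in units of the wavelength, as they are quoted in the text:

```python
# Effective Fresnel number of Eq. (1), with lengths in units of the wavelength.
def fresnel_number_eff(a0, b, lam=1.0):
    """N_F,eff = (a0/2)^2 / (lambda * b)."""
    return (a0 / 2.0) ** 2 / (lam * b)

b = 10.0                             # cavity length b = 10*lambda, as in the text
for a0 in (4.0, 8.0, 14.0, 24.0):    # TJ diameters in units of lambda
    print(f"a0 = {a0:4.1f} lambda -> N_F,eff = {fresnel_number_eff(a0, b):5.2f}")
```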
2.2 Fox–Li method
The Fox–Li method enables the calculation of the characteristics of Fabry–Pérot resonators, that is, the diffraction loss and mode profile. This method is governed by an integral equation that can be solved by a computer by discretizing and diagonalizing the equation. In this section, the applied governing equation, integral kernel, and integral method are introduced.
The electrical fields propagated from one mirror to another in the resonator are expressed using diffraction equations. Now, $f^q_{\mathrm {m}_1}$ is the electrical field on $\mathrm {m}_1$ after being reflected $q$ times. $f^{q+1}_{\mathrm {m}_2}$ on $\mathrm {m}_2$ and $f^{q+2}_{\mathrm {m}_1}$ on $\mathrm {m}_1$ are also the fields after being reflected $q+1$ times and $q+2$ times, respectively. These electrical fields are expressed using the following equations:
(2)$$f^{q+1}_{\mathrm{m}_2}\left(s_{j}\right) = \int_{S_{\mathrm{m}_1}} K\left(s_{j},s_{i}\right) f^{q}_{\mathrm{m}_1}\left(s_{i}\right)\textrm{d}s_{i}$$
(3)$$f^{q+2}_{\mathrm{m}_1}\left(s_{i}\right) = \int_{S_{\mathrm{m}_2}} K\left(s_{i},s_{j}\right) f^{q+1}_{\mathrm{m}_2}\left(s_{j}\right)\textrm{d}s_{j}$$
where $s_i$ and $s_j$ are the coordinates on the mirrors $\mathrm {m}_1$ and $\mathrm {m}_2$, and $K\left(s_{j},s_{i}\right)$ and $K\left(s_{i},s_{j}\right)$ are integral kernels, which denote the optical propagation from $\mathrm {m}_1$ to $\mathrm {m}_2$ and from $\mathrm {m}_2$ to $\mathrm {m}_1$, respectively. Following several transitions in the optical resonator, the electrical fields on $\mathrm {m}_1$ will not change significantly during the round–trip, and they will converge to a steady state. When the electrical fields resonate in the optical cavity, those on $\mathrm {m}_1$ become identical, except for a complex constant: $f^{q+2}_{\mathrm {m}_1}=\gamma f^{q}_{\mathrm {m}_1}$. From Eqs. (2), (3), and the above assumption, the governing equation can be obtained as follows:
(4)$$\gamma f\left(s_{i^{\prime}}\right) = \int_{S_{\mathrm{m}_1}} K\left(s_{i^{\prime}},s_{i}\right) f\left(s_{i}\right)\textrm{d}s_{i}$$
where $K\left(s_{i^{\prime}},s_{i}\right)$ is the integral kernel that describes the back and forth optical propagation in a resonator. $\gamma$ is the eigenvalue of the equation, and $\gamma$ represents the variation in the amplitude of the electric fields. The diffraction loss of the power is defined as
(5)$$\alpha_\mathrm{D}=1 - \left| \gamma \right|^{2}$$
In this study, the integral kernel $K\left(s_{i^{\prime}},s_{i}\right)$ is expressed as follows:
(6)$$K\left(s_{i^{\prime}},s_{i}\right) = \int_{S_{\mathrm{AP}}}\int_{S_{\mathrm{m}_2}}\int_{S_{\mathrm{AP}}} K^{\left(4\right)}\left(s_{i^{\prime}},s_{k^{\prime}}\right) K^{\left(3\right)}\left(s_{k^{\prime}},s_{j}\right) K^{\left(2\right)}\left(s_{j},s_{k}\right) K^{\left(1\right)}\left(s_{k},s_{i}\right) \textrm{d}s_{k^{\prime}}\textrm{d}s_{j}\textrm{d}s_{k}$$
where $S_{\mathrm {m}_1}$, $S_{\mathrm {m}_2}$, and $S_{\mathrm {\mathrm {AP}}}$ are the integral regions of $\mathrm {m}_1$, $\mathrm {m}_2$, and AP. $K^{\left (1\right )}$, $K^{\left (2\right )}$, $K^{\left (3\right )}$, and $K^{\left (4\right )}$ are integral kernels corresponding to the regions $\mathrm {m}_1$–AP, AP–$\mathrm {m}_2$, $\mathrm {m}_2$–AP, and AP–$\mathrm {m}_1$. The integral kernel $K^{\left (\delta \right )}\left (s^{\prime }, s\right )$ is expressed by the Rayleigh–Sommerfeld diffraction integral [13]; it is obtained by differentiating $e^{-ikd_{s^{\prime },s}}/d_{s^{\prime },s}$ along the normal to the mirror surface and is expressed as follows:
(7)$$K^{\left(\delta\right)}\left(s^{\prime},s\right) = \frac{1}{2\pi} \frac{\partial}{\partial\boldsymbol{n}} \left(\frac{e^{{-}ikd_{s^{\prime},s}}}{d_{s^{\prime},s}}\right)$$
where $\boldsymbol {n}$ is a unit vector on the mirror surface, $k$ is the wavenumber of the light in the resonator($k=2\pi /\lambda$), and $d_{s^\prime, s}$ is the length between $s$ and $s^\prime$, which represent any coordinates. Transforming Eq. (7), the integral kernel is finally obtained as follows:
(8)$$K^{\left(\delta\right)}\left(s^{\prime},s\right) ={-}\frac{e^{{-}ikd_{s^{\prime},s}}}{2\pi d^2_{s^{\prime},s}} \left(ik + \frac{1}{d_{s^{\prime},s}}\right) \boldsymbol{d}_{s^{\prime},s}\cdot\boldsymbol{n},$$
where $\boldsymbol {d}_{s^\prime, s}$ is the vector from $s$ to $s^\prime$. The surface shape and slope of the mirror are determined by the coordinates of the mirror and its unit normal vector $\boldsymbol {n}$, and do not appear directly in the integral kernel. The integral kernel involves less approximation than the Fox–Li integral kernel; further, the characteristics of the micro–optical resonator, which do not match the paraxial approximation, can be calculated more precisely.
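For concreteness, Eq. (8) translates almost verbatim into code; the function below is our illustrative sketch (names and conventions are ours, not the authors'), with $\boldsymbol{n}$ taken as the unit normal of the source surface:

```python
# Rayleigh-Sommerfeld kernel of Eq. (8): propagation from a point s on a
# surface with unit normal n_src to an observation point s_prime.
import numpy as np

def rs_kernel(s_prime, s, n_src, k):
    d_vec = np.asarray(s_prime, float) - np.asarray(s, float)   # vector from s to s'
    d = np.linalg.norm(d_vec)                                    # distance d_{s',s}
    return (-np.exp(-1j * k * d) / (2.0 * np.pi * d**2)
            * (1j * k + 1.0 / d)
            * float(np.dot(d_vec, n_src)))

k = 2 * np.pi   # wavenumber for lambda = 1
print(rs_kernel([0.1, 0.0, 10.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], k))
```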
The integral equation was discretized to calculate Eq. (4) and Eq. (6) as follows:
(9)$$\gamma f\left(s_{i^{\prime}}\right) = \sum^N_{i, j, k, k^{\prime}} K^{\left(4\right)}\left(s_{i^{\prime}},s_{k^{\prime}}\right) W_{k^{\prime}}\cdot K^{\left(3\right)}\left(s_{k^{\prime}},s_{j}\right) W_{j}\cdot K^{\left(2\right)}\left(s_{j},s_{k}\right) W_{k}\cdot K^{\left(1\right)}\left(s_{k},s_{i}\right) W_{i}\cdot f\left(s_{i}\right),$$
(10)$$W_{\eta=i, j, k, k^{\prime}}=\cfrac{4} {NP^{\prime}_{N_{x}} \left(x_\eta\right) P^{\prime}_{N_{y}}\left(y_\eta\right) P_{N_{x}-1}\left(x_\eta\right) P_{N_{y}-1}\left(y_\eta\right)},$$
(11)$$P_{N_{x}}(x_\eta)=P_{N_{y}}(y_\eta)=0.$$
where $N_{x}$ and $N_{y}$ are the mesh sizes along the $x$–and $y$–axes, respectively, and $N=N_{x}N_{y}$ denotes the total mesh size. $W_{\eta }(\eta =i, j, k, k^\prime )$ are the weights of the numerical integration. $P_{N_{x}}\left (x_\eta \right )$ and $P_{N_{y}}\left (y_\eta \right )$ are the $N_{x}, N_{y}$–degree Legendre polynomials, and $P^{\prime }_{N_{x}}\left (x_\eta \right )$ and $P^{\prime }_{N_{y}}\left (y_\eta \right )$ are the derivatives. $x_\eta$ and $y_\eta$ is obtained by Eq. (11). Equation (9) can be solved using a computer by discretizing and diagonalizing the equation. However, Fox and Li solved it by propagating light between mirrors several times to converge the eigenvalues to a constant value. In a previous study [19], we performed the numerical integration using the higher–order Gaussian–Legendre quadrature formula. This method enables precise and simultaneous calculation of the mode profiles and diffraction loss to higher orders with a low mesh size.
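A stripped-down sketch of the whole procedure of Eqs. (9)-(11) is given below. To keep it short it treats only two parallel square plane mirrors, without the aperture, tilt, or shift, and with a deliberately coarse mesh; the aperture, the concave surface, and the misalignments would enter through the additional kernel factors of Eq. (6) and through the surface coordinates and normals. It illustrates the structure of the calculation and does not reproduce the paper's numbers.

```python
# Gauss-Legendre nodes/weights on the mirrors, a one-way propagation matrix
# built from the Rayleigh-Sommerfeld kernel, and round-trip eigenvalues
# giving the diffraction loss of Eq. (5).  Illustrative sketch only.
import numpy as np
from numpy.polynomial.legendre import leggauss

lam = 1.0
k = 2 * np.pi / lam
a = 6.0 * lam          # mirror side (smaller than the paper's diameter, so a
b = 10.0 * lam         # coarse mesh suffices); cavity length as in Fig. 2
N1 = 16                # quadrature order per axis (increase for convergence)

x, wx = leggauss(N1)                   # nodes/weights on [-1, 1]
X, Y = np.meshgrid(a / 2 * x, a / 2 * x, indexing="ij")
W = np.outer(a / 2 * wx, a / 2 * wx)   # 2-D quadrature weights
pts = np.column_stack([X.ravel(), Y.ravel()])
w = W.ravel()

# One-way propagation matrix between the two parallel mirrors separated by b.
# The source normal points towards the destination mirror, so d_vec . n = b.
dx = pts[:, None, 0] - pts[None, :, 0]
dy = pts[:, None, 1] - pts[None, :, 1]
d = np.sqrt(dx**2 + dy**2 + b**2)
M = (-np.exp(-1j * k * d) / (2 * np.pi * d**2) * (1j * k + 1 / d) * b) * w[None, :]

round_trip = M @ M                     # m1 -> m2 -> m1 (identical legs here)
gammas = np.linalg.eigvals(round_trip)
gamma0 = gammas[np.argmax(np.abs(gammas))]
print("diffraction loss of the dominant mode:", 1 - abs(gamma0) ** 2)
```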
This section first presents the calculation of the dependency of the beam diameter on the curvature radius of the concave mirror and the TJ diameter in the ideal case, where both the inclination angle and the position shift are zero($\theta =0, d=0$). Then, the effects of the curvature radius and TJ diameter on the beam diameter are discussed. Next, a diffraction loss map is introduced to visualize the dependence of the diffraction loss of the optical resonator on the position shift and inclination angle. Lastly, the dependency of the diffraction loss map on the curvature radius and TJ diameter is discussed.
3.1 Dependency of the beam diameter on curvature radius and TJ diameter
Figure 3 shows the dependency of the beam diameter on curvature radius $R$ and TJ diameter $a_{0}$ when the inclination angle of the plane mirror and the position shift are zero($\theta =0, d=0$). $R$ and $a_{0}$ are changed from ${50}\lambda$ to ${1200}\lambda$ and from ${4}\lambda$ to ${24}\lambda$, respectively. The beam diameter was calculated from the TEM$_{00}$ mode on the mirror $\mathrm {m}_1$ and defined as the diameter at which the electric fields are $1/e$ from the peak. The beam diameter without TJ is represented by the black line in Fig. 3. Therefore, the beam diameter depends exclusively on $R$, and not on the TJ diameter, even in the case where a TJ with $a_{0}={24}\lambda$ is present. This is indicated by the overlap between the black line(without TJ) and the green line($a_{0}={24}\lambda$) in Fig. 3. When the TJ diameter is smaller than the green line, the beam diameter remains almost constant even if $R$ changes(orange and blue lines in Fig. 3). This means that the beam is confined by the TJ rather than the concave mirror. When the TJ diameter is $a_{0}={4}\lambda$, the beam diameter is constant for $R > {50}\lambda$, indicating that the beam confinement by TJ is always dominant in that region. In the case of $a_{0}={12}\lambda$, the beam diameter begins to decrease at approximately $R={200}\lambda$, which clearly shows the change in the main cause of beam confinement from the TJ diameter to the curvature radius.
Fig. 3. Dependency of the beam diameter on TJ diameter $a_{0}$ and curvature radius $R$.
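The 1/e beam-diameter criterion used above can be applied to a sampled mode profile with a few lines of code; the helper below is ours, and a Gaussian is used only as a stand-in for the TEM$_{00}$ eigenmode on mirror $\mathrm{m}_1$.

```python
# Extract the 1/e beam diameter from a sampled radial field profile.
# Assumes the profile does fall below 1/e within the sampled radius.
import numpy as np

def beam_diameter(r, field):
    amp = np.abs(field) / np.abs(field).max()
    i = int(np.argmax(amp <= 1 / np.e))              # first sample at or below 1/e
    r_e = np.interp(1 / np.e, [amp[i], amp[i - 1]], [r[i], r[i - 1]])
    return 2 * r_e

r = np.linspace(0.0, 12.0, 600)                      # radius in units of lambda
w0 = 4.0
print(beam_diameter(r, np.exp(-(r / w0) ** 2)))      # expect 2*w0 = 8
```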
3.2 Diffraction loss maps
The diffraction loss map is a contour plot of the dependence of the diffraction loss $\alpha _\mathrm {D}$ of the optical resonator on the position shift $d$ and the inclination angle $\theta$. Specifically, $d$ is taken as the horizontal axis, $\theta$ is the vertical axis, and the diffraction loss $\alpha _\mathrm {D}$ is a contour. In this study, the case of the position shift $d$ and inclination angle $\theta$ in the $y$–axis were considered. $d$ varies over a range of $\pm {2}\lambda$ in increments of ${0.2}\lambda$, and $\theta$ varies over a range of $\pm 1^{\circ }$ in increments of $0.1^{\circ }$.
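The map itself is assembled by sweeping $d$ and $\theta$ over the stated grids; a schematic driver loop is sketched below. It is our illustration, and `diffraction_loss` is a smooth placeholder so that the script runs; in the real calculation it would rebuild the round-trip operator of Section 2.2 for the shifted and tilted geometry and return $1-|\gamma|^2$.

```python
# Assemble and plot a diffraction loss map over the (d, theta) grid.
import numpy as np
import matplotlib.pyplot as plt

lam = 1.0
d_values = np.arange(-2.0, 2.0 + 1e-9, 0.2) * lam               # position shift
theta_values = np.deg2rad(np.arange(-1.0, 1.0 + 1e-9, 0.1))     # inclination angle

def diffraction_loss(d, theta):
    # Placeholder only: stands in for the full round-trip eigenvalue calculation.
    return min(1.0, (d / (2 * lam)) ** 2 + 0.3 * np.degrees(theta) ** 2)

loss_map = np.array([[diffraction_loss(d, th) for d in d_values]
                     for th in theta_values])

cf = plt.contourf(d_values / lam, np.rad2deg(theta_values), loss_map, levels=20)
plt.contour(d_values / lam, np.rad2deg(theta_values), loss_map,
            levels=np.arange(0.05, 1.0, 0.05), colors="white")  # 5 % contour lines
plt.xlabel("position shift d (units of the wavelength)")
plt.ylabel("inclination angle theta (degrees)")
plt.colorbar(cf, label="diffraction loss")
plt.show()
```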
3.3 Dependency of the diffraction loss map on the TJ diameter
Figure 4 shows the dependency of the diffraction loss map on the TJ diameter $a_{0}$ for a curvature radius $R={500}\lambda$, calculated by changing $a_{0}$ from ${4}\lambda$ to ${14}\lambda$. The white line shows the contour line of diffraction loss in $5\%$ increments. The diffraction loss map is symmetrical at the origin $(d, \theta )=(0, 0)$ owing to the same optical resonator structure between the resonator with position shift $d$ and inclination angle $\theta$ and that with $-d$ and $-\theta$. The diffraction loss is always lowest when there is no position shift or inclination angle, that is, $(d, \theta )=(0, 0)$. The contour line of the diffraction loss becomes elliptical because the beam transmission is higher at the AP when the beam shift by tilt is in the same direction as the AP shift. On the other hand, the diffraction loss increases at a higher rate when the beam shift direction by tilt differs from the AP shift direction. The cusps of the contour line are observed in some diffraction loss maps because of the rough mesh size, where a position shift of ${0.2}\lambda$ or less is negligible.
Fig. 4. Dependency of the diffraction loss maps on the TJ diameter when the curvature radius is $R={500}\lambda$. The white line shows the contour line of diffraction loss in $5\%$ increments.
When $a_{0}={4}\lambda$, the diffraction loss $\alpha _\mathrm {D}$ primarily depends on the position shift $d$. However, as $a_{0}$ increases, the effect of the inclination angle $\theta$ is observed. When $a_{0}={14}\lambda$, the diffraction loss primarily depends on $\theta$. As the TJ diameter decreases, the diffraction loss increases. This can be interpreted as more of the light in the resonator failing to pass through the active region.
3.4 Dependency of the diffraction loss map on the curvature radius
Figure 5 shows the obtained results of the dependency of the diffraction loss map on the curvature radius $R$ for the TJ diameter $a_{0}={8}\lambda$. The curvature radius $R$ changed from ${20}\lambda$ to $\infty$. The white lines show the contour lines of diffraction loss, mostly in $5\%$ increments, and the numbers on the lines show the value of the diffraction loss. The basic characteristics of the diffraction loss map are the same as shown in Section 3.3. The diffraction loss is always lowest when there is no position shift or inclination angle. The symmetry at the origin $(d, \theta )=(0, 0)$ is present and the contour line of the diffraction loss becomes elliptical. As the curvature radius decreases, the diffraction loss decreases, and the dependency on $\theta$ reduces owing to the focusing effect of the concave mirror on the optical axis.
Fig. 5. Dependency of the diffraction loss maps on the radius of the curvature when the TJ diameter is $a_{0}={8}\lambda$. The white line shows the contour line of diffraction loss in $5\%$ increments.
The design guideline for the MEMS–VCSEL is discussed in this section. From the diffraction loss maps in Section 3, when the beam diameter becomes small, which means the curvature radius and/or the TJ diameter is small, the diffraction loss becomes insensitive to the inclination angle of the mirror, while it remains sensitive to the deviation of the optical axis. Therefore, by designing a small beam diameter, the focus in the process of bonding the Si–MEMS chip and the VCSEL chip can be placed on decreasing the deviation of the optical axis, because misalignment of the optical axis is dominant in the diffraction loss. In the actual device, a small TJ diameter is usually designed to decrease the threshold current in laser operation by increasing the current density at the active region. However, large diffraction losses were obtained for TJ diameters smaller than $a_{0}={10}\lambda$ in Fig. 4. The results indicate that most of the electrical fields in the resonator do not pass through the active region. A small curvature radius can improve the diffraction loss; for example, the diffraction loss was $2.9\%$ when the TJ diameter is $a_{0}={6}\lambda$ and the curvature radius is $R={20}\lambda$. Therefore, beam confinement by the concave mirror is preferred to that by the TJ because of the lower diffraction loss. Especially for the MEMS–VCSEL, a low diffraction loss leads to favourable laser characteristics such as a low threshold current, high power, and a wide tunable range.
We have calculated the micro–optical resonator to study the influence of the inclination angle of the mirror and the deviation of the optical axis, which can be problematic when bonding the Si–MEMS chip and the VCSEL chip. In the numerical calculation, the Fox–Li method was used, and the Rayleigh–Sommerfeld integral kernel was adopted in place of the existing one so that the calculation could be performed in cases involving a very large Fresnel number. Moreover, the AP, which approximates the laser active layer (the TJ), was adopted, and the gain–guided mechanism of the active region formed by the TJ was replicated as an approximate design model for bonding the Si–MEMS chip and the VCSEL chip in the MEMS–VCSEL.
In the resonator configuration of the concave mirror – the TJ – the plane mirror, we demonstrate the dependence of the curvature radius of the concave mirror and the TJ diameter on the beam diameter. The smaller the curvature radius of the concave mirror and the TJ diameter, the smaller the beam diameter. In addition, when comparing the curvature radius and the TJ diameter, the beam diameter is determined by the parameter with a larger confinement effect of the beam.
We also showed, by means of the diffraction loss map, how the diffraction loss caused by the mirror inclination and misalignment depends on the curvature radius of the concave mirror and on the TJ diameter. The diffraction loss map, which visualizes the effect of the mirror inclination and misalignment on the diffraction loss, facilitates understanding of this dependence.
The dependencies of the diffraction loss map on the TJ diameter and the curvature radius were investigated. The smaller the TJ diameter, the greater the dependence on the optical misalignment, and the larger the TJ diameter, the greater the dependence on the inclination angle of the mirror. The smaller the curvature radius, the less the dependence on inclination angle of the mirror owing to the focusing effect of the concave mirror on the optical axis.
Finally, the design guideline for bonding different materials was obtained from this study. Designing a small beam diameter makes it possible to focus on decreasing the misalignment of the optical axis in the process of bonding the Si–MEMS chip and the VCSEL chip, because the diffraction loss is not sensitive to the inclination angle.
In the actual design, the curvature radius was designed to have a smaller impact on the diffraction loss, and the diameter of the TJ was made larger to reduce the impact on the diffraction loss. This magnitude relation was designed using diffraction loss maps. In this manner, the dependency of the inclination angle of the mirror on the diffraction loss was decreased, and focus was placed on joining the two chips while considering the deviation of the optical axis.
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
1. T. Anan, N. Suzuki, K. Yashiki, K. Fukatsu, H. Hatakeyama, T. Akagawa, K. Tokutome, and M. Tsuji, "High-speed 1.1-μm-range InGaAs VCSELs," in Optical Fiber Communication Conference/National Fiber Optic Engineers Conference, (OSA, 2008), pp. 1–3.
2. A. Larsson, "Advances in VCSELs for Communication and Sensing," IEEE J. Sel. Top. Quantum Electron. 17(6), 1552–1567 (2011). [CrossRef]
3. H. Moench, M. Carpaij, P. Gerlach, S. Gronenborn, R. Gudde, J. Hellmig, J. Kolb, and A. van der Lee, "VCSEL-based sensors for distance and velocity," in Vertical-Cavity Surface-Emitting Lasers XX, vol. 9766K. D. Choquette and J. K. Guenter, eds., International Society for Optics and Photonics (SPIE, 2016), pp. 40–50.
4. N. Kanbara, S. Tezuka, and T. Watanabe, "MEMS Tunable VCSEL with Concave Mirror using the Selective Polishing Method," in IEEE/LEOS International Conference on Optical MEMS and Their Applications Conference, 2006., (IEEE, 2006), pp. 9–10.
5. T. Yano, H. Saitou, N. Kanbara, R. Noda, S. Tezuka, N. Fujimura, M. Ooyama, T. Watanabe, T. Hirata, and N. Nishiyama, "Wavelength Modulation Over 500 kHz of Micromechanically Tunable InP-Based VCSELs With Si-MEMS Technology," IEEE J. Sel. Top. Quantum Electron. 15(3), 528–534 (2009). [CrossRef]
6. Y. Kitagawa, Y. Suzuki, T. Saruya, K. Tezuka, N. Kanbara, and N. Nishiyama, "1.6um to 1.7um micromachined tunable VCSELs," (2016), pp. 1–5.
7. T. Watanabe, T. Hirata, N. Kanbara, N. Fujimura, T. Yano, S. Tezuka, H. Saitou, M. Ooyama, and R. Noda, "Tunable Laser Using Silicon MEMS Mirror," IEEJ Trans. SM 130(5), 176–181 (2010). [CrossRef]
8. J. V. Roey, J. van der Donk, and P. E. Lagasse, "Beam-propagation method: analysis and assessment," J. Opt. Soc. Am. 71(7), 803–810 (1981). [CrossRef]
9. L. Thylen and D. Yevick, "Beam propagation method in anisotropic media," Appl. Opt. 21(15), 2751–2754 (1982). [CrossRef]
10. D. Yevick and L. Thylén, "Analysis of gratings by the beam-propagation method," J. Opt. Soc. Am. 72(8), 1084–1089 (1982). [CrossRef]
11. D. Yevick, P. Meissner, and E. Patzak, "Modal analysis of inhomogeneous optical resonators," Appl. Opt. 23(13), 2127–2133 (1984). [CrossRef]
12. F. A. Jenkins and H. E. White, Fundamentals of optics (McGraw-Hill, 1957).
13. M. Born and E. Wolf, Principles of optics (Cambridge University, 2000).
14. K. Yee, "Numerical solution of initial boundary value problems involving maxwell's equations in isotropic media," IEEE Trans. Antennas Propag. 14(3), 302–307 (1966). [CrossRef]
15. A. Taflove, "Application of the Finite-Difference Time-Domain Method to Sinusoidal Steady-State Electromagnetic-Penetration Problems," IEEE Trans. Electromagn. Compat. EMC-22(3), 191–202 (1980). [CrossRef]
16. A. G. Fox and T. Li, "Resonant Modes in a Maser Interferometer," Bell Syst. Tech. J. 40(2), 453–488 (1961). [CrossRef]
17. A. Fox and T. Li, "Modes in a maser interferometer with curved and tilted mirrors," Proc. IEEE 51(1), 80–89 (1963). [CrossRef]
18. S. Tezuka and N. Kanbara, "Stability diagram of higher modes of the Fabry-Perot resonator," Opt. Rev. 15(1), 1–5 (2008). [CrossRef]
The impact of food fortification on stunting in Zimbabwe: does gender of the household head matter?
Terrence Kairiza1,
George Kembo2,
Asankha Pallegedara3 &
Lesley Macheka4
High prevalence of stunting in children under 5 years poses a major threat to child development in developing countries. It is associated with micronutrient deficiency arising from poor diets fed to children under 5 years. Food fortification is amongst the interventions aimed at reducing the incidence of stunting in children under 5 years.
Using large-scale household data from Zimbabwe, we investigated the gender-based importance of household adoption of food fortification for the proportion of stunted children in the household. We employed propensity score matching to mitigate self-selection bias associated with household adoption of food fortification.
We offer three major findings. Firstly, we find statistically weak evidence that female headed households are more likely to adopt food fortification than their male counterparts. Secondly, food fortification reduces the proportion of stunted children in the household. Finally, in comparison to non-adopters, female headed households that adopt food fortification are more able to reduce the proportion of stunted children in their households than their male counterparts.
The results highlight the need for policy makers to actively promote food fortification, as such interventions are likely to contribute to the reduction of stunting and to involve men in fortification interventions to improve on their knowledge and appreciation of fortified foods and the associated benefits.
The preponderance of stunting (anthropometric indicator: height-for-age z-score, HAZ, more than 2 standard deviations below the WHO International Growth Reference) in children under 5 years poses an intractable threat to child development in developing countries [34]. Stunting in children under 5 years is associated with elevated risk of child morbidity and mortality, as well as poor cognitive and psychomotor development [20]. Long-term consequences of stunting include deficits in school achievement and work capacity [17]. Low micronutrient density and poor protein quality in cereal based diets availed to children under 5 years have been identified amongst the preeminent causes of stunting in resource poor settings. Whilst the prevalence of stunting in children under five has been decreasing worldwide over the past two decades, [23] estimated that globally at least 165 million children under 5 years were stunted in 2011. In Zimbabwe, the proportion of children under 5 years who were stunted in 2018 was 26%, representing a decline from the 34% recorded in 2010 [35]. The 2018 stunting rate of 26% in Zimbabwe, however, still falls short of the acceptable target of 20% set by UNICEF. Iron deficiency among children under 5 years is exceedingly high, estimated at 72%, and anemia is also prevalent in this age group [25]. Iron deficiency is even higher among infants of 6 to 11 months (81%); hence micronutrient stores among infants are a major determinant of stunting in the Zimbabwean setting. Poor complementary diets after 6 months mean that for the entire first 1000 days the Zimbabwean child develops in an environment that lacks iron, calcium, vitamin A, and high-quality protein [25]. These are all linked to stunting in the Zimbabwean setting.
Interventions aimed at ameliorating the low micronutrient intake in children under 5 years include the promotion of household adoption of food fortification, which can be industrial fortification (adding micronutrients and minerals to industrially processed and widely consumed edible products), supplementation (addition of an essential micronutrient during food preparation) or biofortification (improving the nutritional quality of food crops through agronomic practices, conventional plant breeding, or modern biotechnology). In June 2017, the Government of Zimbabwe made it mandatory for major local food manufacturers to fortify processed staple foods with micronutrients. The food vehicles targeted for fortification included sugar (vitamin A), cooking oil (vitamin A and D), maize meal and wheat flour (A, B1, B2, B3, B6, B12, folic acid, iron and zinc). In addition to mandatory fortification, the Government of Zimbabwe is promoting three biofortified food crops, which are biofortified orange maize (vitamin A), Nua 45 beans (zinc and iron), and protein maize.
Notwithstanding the efforts to improve it, the adoption of food fortification in Zimbabwe, and in developing countries in general, is still low [33]. Adoption of food fortification at the household level is confounded by a host of systemic and idiosyncratic factors, including the gender of the household head. Sachs (1996) [31] and Quisumbing et al. [27] note that while women are typically responsible for the preparation of the food for children under 5 years, men tend to exercise control over the economic availability of the food, which points to the need to incorporate household head gender into studies that seek to identify the impact of food fortification on stunting. Identification of the impact of food fortification using observational data is confounded by self-selection bias associated with household adoption of food fortification [5, 6, 18]. Randomized controlled trials circumvent the self-selection bias due to the exogenous assignment of households into treatment and control groups [18]. The findings of randomized controlled trials on the impact of household adoption of food fortification on stunting and other child development outcomes are however ambivalent [13]. The reason for the inconclusive results could be that the pathogenesis of stunting is poorly understood. Furthermore, to the best of our knowledge, extant studies have not incorporated heterogeneity in the impact of food fortification on the basis of the gender of the household head, who determines both access to and the preparation of fortified foods in the household.
The main aim of this study was to investigate the impact of food fortification on the basis of the gender of the household head, who usually determines both access to and the preparation of fortified foods in the household. We address the aforementioned gaps in the literature by examining the gender attributes of the impact of household adoption of food fortification on the proportion of stunted children in the household using the 2018 nationally representative sample of 25,297 Zimbabwean households surveyed by the Food and Nutrition Council of Zimbabwe (FNC). We measure the adoption of food fortification using five proxies that indicate both knowledge and usage of food fortification. To identify the average treatment effect of household adoption of fortified foods on the proportion of stunted children under 5 years, we employ propensity score matching techniques to counter the self-selection bias associated with the household adoption of food fortification [3].
The high incidence and grievous consequences of childhood undernutrition in sub-Saharan Africa have necessitated an emphasis on early prevention [26]. Food fortification, in its three forms (industrial fortification, supplementation fortification, and biofortification), is one of the strategies that has been used to prevent vitamin and mineral deficiencies [8]. Notwithstanding these efforts, the adoption of food fortification in Zimbabwe, and in developing countries in general, is still low [33]. We therefore propose the following hypothesis linking the gender of the household head and the household's probability of adopting food fortification.
Hypothesis 1
Female headed households are more likely to adopt food fortification than their male counterparts.
Several efficacy and impact studies have shown that food fortification in its three forms can have a nutritional impact [7, 9, 12, 14, 22]. For example, studies conducted in rural Uganda showed that the introduction of Orange-fleshed sweet potato (OFSP) resulted in increased vitamin A intakes among children and women, and improved vitamin A status among children [19]. An efficacy study conducted in Zambia with 5–7-year-old children showed that, after 3 months of consumption of biofortified provitamin A, the total body stores of vitamin A in the children who were in the orange maize group increased significantly compared with those in the control group [15]. Results from community-based, randomized controlled supplementation trials (zinc, iron and vitamin A) [29] showed that provision of iron supplements to anemic infants or young children resulted in improved growth. Furthermore, vitamin A supplementation had a significant positive effect on stunting reduction in subgroups of children of low socioeconomic status. Against this background, we therefore propose the following hypothesis linking the adoption of food fortification and the proportion of stunted children in the household.
Household adoption of food fortification reduces the proportion of stunted children in that household.
Although there is burgeoning evidence on the impact of food fortification on stunting, research on gender heterogeneity in the impact of food fortification on stunting is still limited. Several studies have shown that increasing women's bargaining power is associated with improved child outcomes, e.g. reduced stunting [4, 28]. Richards et al. [28] discussed the significant and positive nutritional outcomes relating to women's household authority. Intra-household gender dynamics regarding decisions about crop choice and child-feeding practices have proven to play a role in adoption decisions. If women are household heads, food fortification is likely to have a higher impact, since women are also largely responsible for food preparation in the household [27, 31]. We therefore propose the following hypothesis linking the gender of the household head and the impact of food fortification on the proportion of stunted children in the household.
Female headed households that adopt fortification are more able to reduce the proportion of stunted children in their households than their male counterparts.
The data employed herein stem from the 2018 Zimbabwe National Nutrition Survey (NNS), which was carried out by the Food and Nutrition Council of Zimbabwe (FNC) supported by the multisectoral National Nutrition Survey Technical Committee (NNSTC). NNSTC is a consortium of Government Ministries, UN partners, Technical Organisations and NGOs. The data comprise a sample of 25,297 households with at least one child under 5 years. The sample households are randomly drawn from the sampling frame of the 2012 National Census so that they are representative of the national population of households with children under 5 years.
Measurement of key variables
Proportion of stunted children in the household
Height for children under 5 years was measured during the survey using standard equipment and methods. Children younger than 24 months were measured lying down, whilst children between 24 and 59 months were measured whilst standing. The anthropometric index height-for-age Z-score (HAZ) was computed using WHO Anthro software version 3.2.2. A child under 5 years is categorized as stunted if HAZ is more than 2 standard deviations below the WHO International Growth Reference, i.e. HAZ < −2.
Our analysis seeks to identify the drivers of stunting at the household level, our outcome of interest is therefore the proportion of stunted children in the household which is measured as follows:
Proportion of stunted children in household i = (Number of children under 5 years with HAZ < −2 in household i) / (Number of children under 5 years in household i).
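As a minimal illustration (not the authors' processing pipeline), the household-level outcome can be computed from child-level records as in the following Python sketch; the data frame and column names are hypothetical.

import pandas as pd

# Hypothetical child-level records: one row per child under 5 years
children = pd.DataFrame({
    "household_id": [1, 1, 2, 3, 3, 3],
    "haz":          [-2.4, -1.0, -2.1, 0.3, -2.6, -1.9],
})

# A child is stunted if HAZ < -2 (WHO International Growth Reference)
children["stunted"] = (children["haz"] < -2).astype(int)

# Proportion of stunted children under 5 years in each household
prop_stunted = children.groupby("household_id")["stunted"].mean()
print(prop_stunted)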
Household adoption of food fortification
The 2018 Zimbabwe National Nutrition Survey asked five questions that relate to the adoption and knowledge of food fortification. We use these five questions as proxies for the household adoption of food fortification. Firstly, the survey asked whether the household head had ever heard about fortified foods. Secondly, it asked whether the household head is able to identify fortified foods in the market. Thirdly, it asked whether the household had purchased any fortified food product in the past 30 days. These three questions relate to industrial fortification. Fourthly, the survey asked whether the household had fed an under 5 child meals containing micronutrient powders in the past 30 days, relating to supplementation. Finally, the survey asked whether the household head had ever heard about biofortified crops. The final question relates to biofortification adoption. The five proxies of fortification take the value of 1 if the household head answered Yes, and 0 otherwise.
Other control variables
The survey also asked other questions pertaining to the socio-demographic characteristics of the household head as well as the household, among which are the gender, age, marital status and education of the household head. Household level control variables include household size, proportion of economically active household members, number of children with chronically or mentally ill mothers or fathers, as well as whether the household is located in rural areas. We also control for the province where the household is resident.
Empirical estimation
To test Hypothesis 1 of this study, we employ binary response models to estimate the impact of the gender of the household head on the probability of household adoption of food fortification, and present the results in Table 4. Assessing the impact, or treatment effect, of food fortification on stunting using observational data such as ours is confounded by incomplete information arising from the self-selection of observations into adopting food fortification [18]. We therefore employ Propensity Score Matching (PSM) to eliminate the self-selection bias. Using PSM, we can reduce or eliminate the problem of systematic differences in baseline characteristics between treated and untreated groups [18].
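For readers who prefer a concrete illustration, a probit model with average marginal effects of the kind reported in Table 4 can be estimated as in the following minimal Python sketch; the data are synthetic and the variable names (female, educ_years, hh_size, fort) are hypothetical, not the survey's variable names.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
# Hypothetical covariates: female household head, years of education, household size
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "educ_years": rng.integers(0, 14, n),
    "hh_size": rng.integers(1, 10, n),
})
# Hypothetical adoption indicator (1 = household adopted food fortification)
latent = -1.0 + 0.05 * df["female"] + 0.12 * df["educ_years"] + rng.normal(size=n)
df["fort"] = (latent > 0).astype(int)

X = sm.add_constant(df[["female", "educ_years", "hh_size"]])
probit = sm.Probit(df["fort"], X).fit(disp=False)
# Average marginal effects, comparable in spirit to the marginal effects in Table 4
print(probit.get_margeff(at="overall").summary())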
We define an indicator variable, Forti, which takes the value of 1 for household i if the household adopted food fortification, and 0 otherwise. We also define the dependent variable, the proportion of stunted children in household i, as Yi. The counterfactual problem is that for each household we can only observe either Yi1 or Yi0, which are the proportions of stunted children in the household given Forti = 1 and Forti = 0, respectively.
Propensity score matching techniques circumvent the counterfactual problem by matching Forti = 1 and Forti = 0 households using Pr (Forti = 1| X), which is the probability of household i having Forti = 1 on the basis of observed covariates, X. In this study, we use the nearest neighbour matching technique which, for each treated individual, chooses the individual from the comparison group that is closest in terms of the propensity score. We estimate the average treatment effect on the treated (ATT) that provides the impact of food fortification on the proportion of stunted children in the household as follows:
$$ \mathrm{ATT}=\mathrm{E}\left(Y_{i1} \mid \mathrm{Fort}_{i}=1\right)-\mathrm{E}\left\{\mathrm{E}\left(Y_{i0} \mid \mathrm{Fort}_{i}=0,\ \Pr\left(\mathrm{Fort}_{i}=1\mid X\right)\right) \mid \mathrm{Fort}_{i}=1\right\} $$
We employ the user-written Stata module PSMATCH2 developed by Leuven and Sianesi [21] to implement the matching and estimate the treatment effects, and present the results in Table 5, which provides the test of Hypothesis 2.
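The matching itself is conceptually simple; as an illustration only (not the authors' PSMATCH2 implementation), a minimal Python sketch of single-nearest-neighbour matching on the propensity score and the resulting ATT could look as follows. Function and variable names are ours.

import numpy as np
import statsmodels.api as sm
from sklearn.neighbors import NearestNeighbors

def att_nearest_neighbour(y, treat, X):
    """1-to-1 nearest-neighbour matching on the propensity score.

    y     : outcome (proportion of stunted children in the household)
    treat : 1 if the household adopted food fortification, 0 otherwise
    X     : matrix of observed covariates
    """
    # Step 1: estimate the propensity score Pr(Fort_i = 1 | X) with a probit model
    ps = sm.Probit(treat, sm.add_constant(X)).fit(disp=False).predict()

    treated, control = np.where(treat == 1)[0], np.where(treat == 0)[0]
    # Step 2: for each treated household, find the control with the closest score
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched = control[idx.ravel()]

    # Step 3: ATT = mean outcome difference between treated and matched controls
    return np.mean(y[treated] - y[matched])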
To test Hypothesis 3, which examines gender heterogeneity in the impact of Forti = 1 on the basis of the gender of the household head, we define Femalei, which takes the value of 1 if the household head is female and 0 otherwise, estimate the ATT presented in Eq. 1 separately for Femalei = 1 and Femalei = 0, and present the results in Table 6.
The validity of the ATT requires the Conditional Independence Assumption (CIA), i.e. that assignment to Forti = 1 or Forti = 0 is random after controlling for the observed covariates X [18]. We perform a covariate balance check before and after propensity score matching as a robustness check and present the results in Table 7.
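A common way to operationalize this balance check is the standardized bias of each covariate before and after matching, the diagnostic also summarized in Table 7. The following minimal sketch (our illustration, not the study's code) computes it for a covariate matrix; values of the mean absolute bias below roughly 5% are commonly taken to indicate adequate balance.

import numpy as np

def standardized_bias(x, treat):
    """Standardized difference in means (in %) for one covariate."""
    xt, xc = x[treat == 1], x[treat == 0]
    pooled_sd = np.sqrt(0.5 * (xt.var(ddof=1) + xc.var(ddof=1)))
    return 100.0 * (xt.mean() - xc.mean()) / pooled_sd

def mean_absolute_bias(X, treat):
    """Mean absolute standardized bias over all covariates (columns of X)."""
    return np.mean([abs(standardized_bias(X[:, j], treat)) for j in range(X.shape[1])])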
Descriptive analysis
Differences in background characteristics by food fortification adoption status of the household
On the basis of whether the household head had ever heard about fortified foods, Table 1 shows the differences in the background characteristics of the sample households by the food fortification status of the household. The table shows that out of the 25,297 households with children under 5 years that were surveyed, 3038 (12%) knew about fortified foods. These households are taken to have adopted food fortification. Furthermore, the table reveals that households that had adopted food fortification are less likely to be female headed than those that had not adopted food fortification by 1.9% at the 5% level of significance, before controlling for observed confounders.
Table 1 Background characteristics of households by treatment status
Table 1 further reveals that households that adopted food fortification tend to be more educated than those that had not adopted food fortification. Specifically, 39.6% of the households that adopted food fortification achieved O′ level education versus 33.6% of the households that had not adopted food fortification. This finding seems reasonable given that those household heads that are more educated are likely to have acquired knowledge about fortification through education. Furthermore, households that adopted food fortification tend to be larger and have more economically active members than those households that did not adopt food fortification. Moreover, save for Mash Central, there are statistically significant province differences in the food fortification adoption status of the household. The differences in the background characteristics between those that adopted food fortification and those households that did not adopt point to self-selection bias in the adoption of food fortification (e.g., Heckman [18]).
Gender differences in food fortification adoption
Table 2 shows that, in comparison to male headed households, female headed households were less likely to have heard about fortified foods, to identify them on the market or to have purchased them in the 30 days preceding the survey. There is however no statistically significant gender difference in the probability of having fed an under 5 child meals with micronutrient powders in the past 30 days. Moreover, the table also shows that only 4.5% of female household heads had heard about biofortified crops versus the 5.6% of male household heads. In summary, the findings presented in Table 2 show that female household heads are less likely to have adopted food fortification than their male counterparts, before controlling for other confounders. Furthermore, knowledge and usage of food fortification are generally low in Zimbabwe, as only 11.2% of female and 12.3% of male household heads had heard about fortified foods. These results are consistent with the findings of Talsma et al. [33] who also reported low (below 15%) knowledge and usage of fortified foods in Benin, Brazil, Nigeria and South Africa.
Table 2 Uptake of fortified foods by gender of the household head
Proportion of stunted children
Table 3 shows the proportion of stunted children in the household by the gender of the household head as well as the food fortification adoption status. The table reveals that there is no statistically significant gender difference in the proportion of household heads that adopted food fortification. When looking at the subsample of female headed households, those who adopted food fortification have a lower proportion of stunted children under 5 years of 24.2% in comparison to the 30.2% for those female headed households who did not adopt food fortification. Furthermore, the difference of 6% is statistically significant at the 1% level. The respective proportions for the male headed households are 25.7 and 29.0%, establishing a difference of 3.3%.
Table 3 Proportion of stunted children in household by gender and food adoption status
The sum total of these findings is that adoption of food fortification is correlated with a reduction of stunting and, furthermore, that female headed households that adopt fortification are more able to reduce stunting than their male counterparts, before controlling for the self-selection bias associated with adoption of food fortification. However, it is important to control for the self-selection bias to estimate the true relationship between food fortification and the reduction of child stunting.
Estimation results
The impact of gender on the adoption of fortification
Table 4 shows the probit estimates of the marginal effects of the gender of the household head on the adoption of food fortification. Columns (I) to (III) of the table indicate no statistically significant impact of household head gender on the probability of ever having heard of fortified foods, being able to identify fortified foods on the market or purchasing any fortified foods in the past 30 days. Columns (IV) and (V) of the table display statistically weak evidence of female household heads having fed an under 5 child meals with micronutrient powders in the past 30 days or having ever heard about biofortified crops. The table reveals that rather than gender, the most important variable determining the adoption of food fortification is education of the household head. Columns (I) to (V) of Table 4 show that, compared to uneducated household heads (the base category), attaining any level of education increases the probability of adopting food fortification. Moreover, the impact of education on the probability of adopting food fortification increases as the level of education increases. This result is consistent with earlier studies such as [1, 2, 10, 16, 24] which reported significant adoption of fortified foods by mothers who had secondary/tertiary education in comparison with uneducated mothers.
Table 4 Probit estimates of the impact of gender on the adoption of fortified foods
Homogeneous treatment effects of food fortification on stunting
Table 5 shows the impact of the adoption of food fortification on the proportion of stunted children in the household. The table reveals that all five proxies of food fortification adoption reduce the proportion of stunted children in the household. Specifically, Column (I) of the table shows that having heard about fortification reduces the proportion of stunted children by 4.69%. Furthermore, Columns (II) and (III) show that being able to identify fortified foods in the market or actually purchasing the fortified foods in the past 30 days reduces the proportion of stunted children in the household by 2.08 and 3.33%, respectively. Moreover, Columns (IV) and (V) indicate that having fed an under 5 child meals with micronutrient powders in the past 30 days or having ever heard about biofortified crops reduces the proportion of stunted children by 2.73 and 3.56%, respectively.
Table 5 PSM estimates of homogeneous treatment effects on stunting
The findings in Table 5 show that the adoption of food fortification reduces the proportion of stunted children in the household and confirm earlier results from both observational studies [29] and randomized controlled experiments [15]. The findings indicate that, to reduce stunting, governments should promote food fortification through projects such as those currently being undertaken.
Gender heterogeneous treatment impacts of fortification on stunting
We explore potential heterogeneities in the impact of food fortification on the proportion of stunted children in the household by the gender of the household head and present the results in Table 6. Table 6 shows that, for all measures of food fortification, the impact of food fortification on stunting is higher when the household head is female than when the household head is male. Specifically, purchasing any fortified food in the past 30 days reduces the proportion of stunted children by 6.02% when the household head is female versus the statistically insignificant impact when the household head is male. Furthermore, having heard about biofortified crops reduces the proportion of stunted children by 9.42% when the household head is female versus the 4.15% when the household head is male. The findings presented in Table 6 imply that food fortification has a higher impact in reducing the proportion of stunted children in the household when the household head is female rather than male. These findings suggest that when the household head is female, she is in charge of both the preparation and the economic availability of food for the children under 5 years, which gives them an extra benefit.
Table 6 PSM estimates of gender heterogeneous treatment effects on stunting
Robustness checks to observed heterogeneity
Table 7 presents results from covariate balance tests to appraise the comparability of covariates before and after matching. P-values for the equality of means of covariates such as female household head, widow/widower, education dummies, proportion of economically active household members and household size, as well as several province dummies, are smaller than 0.05 before matching but larger than 0.1 after matching, indicating that covariates were unbalanced before matching but became balanced after matching. Failure to reject the hypothesis of joint equality of means after matching, indicated by a p-value larger than 0.05, shows that covariates for households that adopted food fortification and those that did not adopt food fortification are drawn from comparable distributions [11]. Additionally, a mean absolute bias of 1.5% is far smaller than the 5% recommended to yield reliable estimates [30].
Table 7 Covariate balance check before and after propensity score matching
The paper analysed the impact of food fortification on stunting in Zimbabwe. It has three major findings. Firstly, we found little evidence for gender differences in the knowledge or adoption of fortified foods. Secondly, we found that the adoption or knowledge of fortified foods reduces the proportion of stunted children in the household. Finally, we found that female headed households that adopt or know about fortified foods are more able to reduce the proportion of stunted children than their male counterparts. These results highlight the need for policy makers to actively promote fortification and biofortification programmes, as such promotions can contribute to the reduction of stunting. Moreover, there is a need to involve men in all fortification programmes to improve their knowledge and appreciation of fortified foods and the associated benefits. Efficacy studies to gain insights into the bioaccessibility of micronutrients from the fortified foods are essential to clearly understand the impact of fortification and biofortification on stunting.
Policy implications
The results of this study are important for informing policy makers and programmers involved in fortification and biofortification programmes on the need to positively influence adoption of food fortification. The low knowledge of fortified foods (Table 2) reflects the need to integrate fortification and biofortification programmes into public and private policies, programmes, and investments. Policymakers should also give higher priority to the role of agriculture in improving health. At the national level, there is a need to include fortification and biofortification on the nutrition agenda. Moreover, food processors and other actors along the value chain must include fortified crops in their processed products.
The datasets analysed during the current study are available from the Food and Nutrition Council of Zimbabwe (FNC) but restrictions apply to the availability of these data, which were used under a Memorandum of Understanding for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of FNC.
Abeshu M, Geleta B. The role of fortification and supplementation in mitigating the 'hidden hunger'. J Nutr Food Sci. 2016;6(1):1–4. https://doi.org/10.4172/2155-9600.1000459.
Abuya BA, Ciera J, Kimani-Murage E. Effect of mother's education on child's nutritional status in the slums of Nairobi. BMC Pediatr. 2012;12:80. https://doi.org/10.1186/1471-2431-12-80.
Allen L, de Benoist D, Dary O, Hurrell R. Guidelines on food fortification with micronutrients. WHO Library. 2006. https://www.who.int/nutrition/publications/guide_food_fortification_micronutrients.pdf. Accessed 21 Jan 2020.
Anderson CL, Reynolds TW, Gugerty MK. Husband and wife perspectives on farm household decision-making authority and evidence on intra-household accord in rural Tanzania. World Development. 2017;90:169–83. https://doi.org/10.1016/j.worlddev.2016.09.005.
Austin PC. Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples. Stat Med. 2009;28(25):3083–107.
Austin PC. An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies. Multivar Behav Res. 2011;46(3):399–424. https://doi.org/10.1080/00273171.2011.568786.
Biebinger R, Hurrell RF. Chapter 3, Vitamin and mineral fortification of foods In: Ottaway P. B, (eds) Food Fortification and Supplementation. United Kingdom: Woodhead Publishing Series in Food Science. 2008. https://doi.org/10.1533/9781845694265.1.27.
Bouis HE, Saltzman A. Improving nutrition through biofortification: a review of evidence from HarvestPlus, 2003 through 2016. Global Food Security. 2017;12:49–58. https://doi.org/10.1016/j.gfs.2017.01.009.
Boy E, Haas J, Petry N, Cercamondi C, Gahutu J, Mehta S, Finkelstein J, Hurrell R. Efficacy of iron-biofortified crops. Afr J Food Agric Nutr Dev. 2017;17(2):11879–92.
Buvinić M, Gupta G. Female-headed households and female-maintained families: are they worth targeting to reduce poverty in developing countries? Econ Dev Cult Chang. 1997;45(2):259–80.
Caliendo M, Kopeinig S. Some practical guidance for the implementation of propensity score matching. J Econ Surv. 2008;22(1):31–72. https://doi.org/10.1111/j.1467-6419.2007.00527.x.
Das JK, Khan RS, Bhutta ZA. Chapter 21, Zinc fortification. In: Mannar MGV, Hurrell RF (Eds.). Food Fortification in a Globalized World. London: Academic press; 2018, p. 213–9.
Dewey KG. Reducing stunting by improving maternal, infant and young child nutrition in regions such as South Asia: evidence, challenges and opportunities. Matern Child Nutr. 2016;12(S1):27–38. https://doi.org/10.1111/mcn.12282.
Finkelstein JL, Haas JD, Mehta S. Iron-biofortified staple food crops for improving iron status: a review of the current evidence. Curr Opin Biotechnol. 2017;44:138–45. https://doi.org/10.1016/j.copbio.2017.01.003.
Gannon B, Kaliwile C, Arscott SA, Schmaelzle S, Chileshe J, Kalungwana N, Mosonda M, Pixley K, Masi C, Tanumihardjo SA. Biofortified orange maize is as efficacious as a vitamin a supplement in Zambian children even in the presence of high liver reserves of vitamin a: a community-based, randomized placebo-controlled trial. Am J Clin Nutr. 2014;100(6):1541–50. https://doi.org/10.3945/ajcn.114.087379.
García Cruz LM, González Azpeitia G, Reyes Súarez D, Santana Rodríguez A, Loro Ferrer JF, Serra-Majem L. Factors associated with stunting among children aged 0 to 59 months from the central region of Mozambique. Nutrients. 2017;9(5):491. https://doi.org/10.3390/nu9050491.
Grantham-McGregor S, Cheung YB, Cueto S, Glewwe P, Richter L, Strupp B, the International Child Development Steering, G. Developmental potential in the first 5 years for children in developing countries. Lancet. 2007;369(9555):60–70. https://doi.org/10.1016/S0140-6736(07)60032-4.
Heckman JJ. Sample selection bias as a specification error. Econometrica. 1979;47:153–61.
Hotz C, Loechl C, de Brauw A, Eozenou P, Gilligan D, Moursi M, Munhaua B, van Jaarsveld P, Carriquiry A, Meenakshi JV. A large-scale intervention to introduce orange sweet potato in rural Mozambique increases vitamin a intakes among children and women. Br J Nutr. 2012;108(1):163–76. https://doi.org/10.1017/s0007114511005174.
Larson LM, Phiri KS, Pasricha SR. Iron and cognitive development: what is the evidence? Annals of Nutrition and Metabolism. 2017;71(suppl 3):25–38. https://doi.org/10.1159/000480742.
Leuven E, Sianesi B. PSMATCH2: Stata Module to Perform Full Mahalanobis and Propensity Score Matching, Common Support Graphing, and Covariate Imbalance Testing. Software. 2003. http://ideas.repec.org/c/boc/bocode/s432001.html.
Lividini K, Fiedler JL, De Moura FF, Moursi M, Zeller M. Biofortification: a review of ex-ante models. Glob Food Secur. 2018;17:186–95. https://doi.org/10.1016/j.gfs.2017.11.001.
Lutter C, Pena-Rosas J, Perez-Escamilla R. Maternal andchild nutrition. Lancet. 2013;382(9904):1550–1. https://doi.org/10.1016/S0140-6736(13)62319-3.
Makoka D, Masibo PK. Is there a threshold level of maternal education sufficient to reduce child undernutrition? Evidence from Malawi, Tanzania and Zimbabwe. BMC Pediatr. 2015;15:96. https://doi.org/10.1186/s12887-015-0406-8.
National Nutrition Micronutrient Survey. Ministry of Health and Child Care. Zimbabwe. 2015. https://www.worldcat.org/title/zimbabwe-national-micronutrient-survey-report/oclc/1016031596. Accessed 18 Jan 2020.
Phuka JC, Maleta K, Thakwalakwa C, et al. Complementary feeding with fortified spread and incidence of severe stunting in 6- to 18-month-old rural malawians. Arch Pediatr Adolesc Med. 2008;162(7):619–26. https://doi.org/10.1001/archpedi.162.7.619.
Quisumbing AR, Haddad L, Meinzen-Dick R, Brown LR. Gender Issues for Food Security in Developing Countries: Implications for Project Design and Implementation. Can J Dev Stud. 1998;19(4):185–208.
Richards E, Theobald S, George A, Kim JC, Rudert C, Jehan K, Tolhurst R. Going beyond the surface: gendered intra-household bargaining as a social determinant of child health and nutrition in low and middle income countries. Soc Sci Med. 2013;95:24–33. https://doi.org/10.1016/j.socscimed.2012.06.015.
Rivera JA, Hotz C, González-Cossío T, Neufeld L, García-Guerra A. The effect of micronutrient deficiencies on child growth: a review of results from community-based supplementation trials. J Nutr. 2003;133(11):4010S–20S. https://doi.org/10.1093/jn/133.11.4010S.
Rosenbaum PR, Rubin DB. Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. Am Stat. 1985;39:33–38.
Sachs, C. Rural Women: Agriculture and the Environment in Vietnam. In: Barry K, (eds). Vietnam's Women in Transition. International Political Economy Series. London: Palgrave Macmillan; 1996.
Sianesi B. An Evaluation of the Active Labour Market Programmes in Sweden. Rev Econ Stat. 2004;86(1):133–55.
Talsma EF, Melse-Boonstra A, Brouwer ID. Acceptance and adoption of biofortified crops in low- and middle-income countries: a systematic review. Nutr Rev. 2017;75(10):798–829. https://doi.org/10.1093/nutrit/nux037.
Voth-Gaeddert LE, Stoker M, Cornell D, Oerther DB. What causes childhood stunting among children of San Vicente, Guatemala: employing complimentary, system-analysis approaches. Int J Hygiene Environ Health. 2018;221(3):391–9. https://doi.org/10.1016/j.ijheh.2018.01.001.
ZimVAC Report. Zimbabwe Vulnerability Assessment Committee. 2018. http://fnc.org.zw/documents/. Accessed 19 Feb 2020.
The authors would like to thank Food and Nutrition Council of Zimbabwe for providing the National Nutrition Survey Data which was used in this paper. Our gratitude goes to participants of the Scaling Up Nutrition Research and Academia Platform (SUNRAP) for providing useful comments on the early drafts of this paper.
The study was not funded. Authors used own resources.
Department of Economics, Bindura University of Science Education, P. Bag 1020, Bindura, Zimbabwe
Terrence Kairiza
Food and Nutrition Council of Zimbabwe, 1574 Alpes Road, Hatcliffe, Harare, Zimbabwe
George Kembo
Department of Industrial Management, Wayamba University of Sri Lanka and Chair of Development Economics, Passau University, Passau, Germany
Asankha Pallegedara
Centre for Innovation and Technology Transfer, Marondera University of Agricultural Sciences and Technology, P. Bag 35, Marondera, Zimbabwe
Lesley Macheka
TK and AP performed the statistical analyses, and GK and LM contributed by writing the paper. However, all four authors equally scrutinized all sections of the paper, ensuring the high quality of the paper. All authors read and approved the final manuscript.
Correspondence to Asankha Pallegedara.
The data analysed in this study were collected during the 2018 Zimbabwe National Nutrition Survey. Ethical approval was granted by the Medical Research Council of Zimbabwe. All interviews were conducted after participants gave consent, which was expressed through signing of the consent form.
Consent for publication was sought from all participants interviewed, and they all consented by signing the consent form granting authority to use anonymized data.
Kairiza, T., Kembo, G., Pallegedara, A. et al. The impact of food fortification on stunting in Zimbabwe: does gender of the household head matter?. Nutr J 19, 22 (2020). https://doi.org/10.1186/s12937-020-00541-z
Child stunting; gender
Biofortification
Fortification
Micronutrient deficiency
Wall pressure beneath a transitional hypersonic boundary layer over an inclined straight circular cone
Siwei Dong1,2,
Jianqiang Chen1,2,
Xianxu Yuan1,2,
Xi Chen1,2 &
Guoliang Xu1,2
Properties of wall pressure beneath a transitional hypersonic boundary layer over a 7∘ half-angle blunt cone at angle of attack 6∘ are studied by Direct Numerical Simulation. The wall pressure has two distinct frequency peaks. The low-frequency peak with f≈10−50 kHz is very likely the unsteady crossflow mode based on its convection direction, i.e. along the axial direction and towards the windward symmetry ray. Frequencies of the high-frequency peaks scale roughly with the inverse of the local boundary layer thickness. Along the trajectories of stationary crossflow vortices, the location of intense high-frequency wall pressure moves from the bottom of the trough, where the boundary layer is thin, to the bottom of the shoulder, where the boundary layer is thick. By comparing the pressure field with that inside a high-speed transitional swept-wing boundary layer dominated by the z-type secondary crossflow mode, we found that the high-frequency signal originates from the Mack mode and evolves into the secondary crossflow instability.
The boundary layers over realistic aircraft are usually three-dimensional, i.e., there is crossflow perpendicular to the streamwise direction. The instability mechanism is accordingly much more complicated than that in extensively studied two-dimensional boundary layers (e.g. [1–3]), especially in hypersonic conditions. Therefore, in the last decades, considerable efforts have been made towards building a fundamental understanding of the transition of hypersonic three-dimensional boundary layers by wind tunnel experiments [4–9], theoretical analyses [10–13], direct numerical simulations (Chen et al.: Transition of hypersonic boundary layer over a yawed blunt cone, submitted) (Chen et al.: Stationary cross-flow breakdown in a high-speed swept-wing boundary layer, submitted) and flight tests (see overview [14]). Nevertheless, much is still unknown.
Due to its simple configuration of practical significance, an inclined (i.e., non-zero angle of attack) straight circular cone has become a canonical model to study the transition of hypersonic boundary layers with crossflow. Stability analyses have found that, at the side of the hypersonic inclined straight circular cone, the Mack mode [15] and the secondary instabilities of crossflow vortices [16] are the two principal high-frequency instability groups [10, 12]. However, not all the unstable modes identified by the stability analyses will be present or appear with equal importance during the natural transition process. A typical example is the breakdown of crossflow vortices. In both low- and high-speed flows with crossflow, it has been found that the z-type (produced by the spanwise or azimuthal gradients of the streamwise mean flow) secondary crossflow instability is responsible for the natural transition process [17] (Chen et al.: Stationary cross-flow breakdown in a high-speed swept-wing boundary layer, submitted), although stability analyses [13, 17] also detected the y-type (produced by the wall-normal gradient of the streamwise mean flow) secondary crossflow instabilities.
In experiments, there is still controversy about the transition route in the hypersonic boundary layers over inclined straight circular cones. The key point at issue is whether the transition is triggered by the Mack mode or by the z-type secondary crossflow mode. Based on the wall pressure, [4] claimed that the Mack mode was responsible for the transition. In contrast, [5, 7, 8] preferred the view that the transition was triggered by the z-type secondary crossflow instability, which is in agreement with [6], who for the first time obtained the mean and fluctuations of velocity fields inside the boundary layer by employing hot-wire probes instead of only measuring wall pressure signals. However, the frequency response in [6] only reaches 180 kHz. It is possible that not all the instabilities were detected. Meanwhile, however, [8] also speculated that the transition could be attributed to the crossflow-modulated Mack mode.
The reason for the above controversy can largely be attributed to the limited information that surface pressure sensors can afford. Surface pressure sensors have been widely used in wind tunnel experiments and flight tests. The propagation direction, wavelength, convection velocity and temporal spectra of instability modes reflected by the wall pressure, for instance, can be measured by these sensors. Usually, however, these sensors provide the wall pressure only at a limited number of positions, rather than the whole pressure field available in numerical simulations (e.g. [18–21]). For flow with potential instabilities associated with separate frequency bands, the frequency alone can sometimes be sufficient to detect the principal instability responsible for the transition. However, in the hypersonic three-dimensional boundary layer with crossflow, frequencies of the most amplified Mack mode and the z-type secondary crossflow mode are in the same range [10]. It is then not trivial to differentiate the Mack mode and the z-type secondary crossflow mode only by frequency [8, 10].
The best way to distinguish the secondary crossflow instability modes and the Mack mode is by measuring their eigenfunctions within the boundary layer as in [10, 12], which is, however, impossible with surface pressure sensors. The Mack mode has a unique characteristic, i.e. its mode shapes of velocity and temperature have two peaks in the wall-normal direction. Li et al. [10] differentiate the two modes by the pressure distribution inside the boundary layer. Specifically, the Mack mode has two pressure extremes, with a stronger one beneath the trough and a weaker one at the shoulder of stationary crossflow vortices, while the z-type secondary crossflow instability has only one pressure extreme, at the shoulder of the crossflow vortices. However, recently (Chen et al.: Stationary cross-flow breakdown in a high-speed swept-wing boundary layer, submitted) found that the vortical structure of the z-type secondary crossflow instability in a high-speed swept-wing boundary layer reaches further towards the wall, which should be reflected by the wall pressure. We will see later that this speculation proves to be correct.
Understanding the transition mechanism is a prerequisite for any control strategy, which depends on the transition route. For the Mack mode, ultrasonically absorptive coatings or wavy walls are usually used (see overview [22]). For the crossflow modes, plasma actuators [23], localized suction [24] and discrete-roughness-elements [25] are possible options. Consequently, the transition route in experiments needs to be further confirmed, preferably with more information from inside the boundary layer. For this purpose, we intend to find what else is required beyond the wall pressure signals, because direct numerical simulation is able to offer all the information, in contrast with wind tunnel experiments or flight tests. The present paper studies the wall pressure in a highly-resolved direct numerical simulation of the transition process in a hypersonic boundary layer over a 7∘ half-angle straight circular cone at an angle of attack (AoA) of 6∘. Properties of pressure around the transition front and also along the trajectories of stationary crossflow vortices are compared with those during the natural breakdown of stationary crossflow vortices in a high-speed swept-wing boundary layer in the absence of the Mack mode (Chen et al.: Stationary cross-flow breakdown in a high-speed swept-wing boundary layer, submitted). Hopefully, our results may shed some light on the interpretation of signals measured by surface pressure sensors in experiments.
The present work is a companion to [26] (Chen et al.: Transition of hypersonic boundary layer over a yawed blunt cone, submitted), in which the results of stability analyses and the evolution of vortical structures have been presented. To the best knowledge of the authors, the direct numerical simulation in the present paper and [26] (Chen et al.: Transition of hypersonic boundary layer over a yawed blunt cone, submitted) is the largest one for the transition process in hypersonic boundary layer over an inclined straight circular cone.
Numerical simulation
As shown in Fig. 1, we are considering flow around a straight circular cone at AoA 6∘ with half-angle 7∘, nose radius 1 mm and length 700 mm. The free stream conditions are from a wind tunnel experiment with Mach number M∞=6, unit Reynolds number 10^7/m and temperature T∞=79 K. The wall temperature is 300 K. The freestream condition is the same as that in [27], where a blunt cone with a similar configuration is placed at AoA= 1∘. The 6∘ AoA is achieved by inclining the freestream velocity vector. Two coordinate systems are introduced, namely the Cartesian coordinate system (x,y,z) with its origin at the circle center of the nose and the local body-fitted orthogonal coordinate system (ξ,η,ζ). Velocity components in the two coordinate systems are respectively (u,v,w) and (un,vn,wn). The total velocity is denoted by \(q=\sqrt {u^{2}+v^{2}+w^{2}}\). Note that the azimuthal angle θ changes from 0∘ at the leeward symmetry plane to 180∘ at the windward symmetry plane, which is the opposite of the definition used in most experiments.
The sketch of the simulation, notations and coordinates
The simulation procedure in [27] is adopted in the present paper. First, the laminar baseflow including the nose and the shock wave is obtained with a second-order finite volume method with grid size Nξ×Nη×Nζ=1000×301×181. The wall distance of the first layer grid varies linearly from 4×10^−3 mm at the nose to 8×10^−3 mm at the axial outlet. In this step, only half of the model is used considering the symmetry of the flow. Figure 2a and b respectively display the axial slice of nondimensional u and T on the leeward side of the base flow. Due to the pressure gradient, the flow accumulates toward the leeward symmetry plane where a mushroom vortex is formed.
Axial slice of nondimensional u and T on the leeward side of undisturbed basic state computed by second order finite volume method
Second, a subdomain without the nose (x≥50 mm) and the shock wave is extracted, with the inlet and farfield interpolated from the previous step. At the inlet, the respective distances of the farfield to the wall are about four and two times the local boundary layer thickness δ at θ=0∘ and 180∘. A subdomain excluding the nose (leading edge) and the shock is also employed by (Chen et al.: Stationary cross-flow breakdown in a high-speed swept-wing boundary layer, submitted), who studied the breakdown of stationary crossflow vortices in a high-speed swept-wing boundary layer. In this step, the whole azimuthal domain is computed since the symmetry is not satisfied for instantaneous flow fields. To alleviate the computation cost, a buffer region with a coarse grid is used between θ=202.5∘ and θ=337.5∘. A steady, azimuthally random blowing-suction with intensity 0.1% q∞ is introduced between 90 mm<x<100 mm. Ideally, steady blowing-suction can only generate stationary crossflow vortices. It was only after the simulation finished that we realized that the initial flow field is actually a copy of the inlet (the same as in [27]), which inevitably introduces considerable numerical errors that lead to secondary instabilities of crossflow vortices and the subsequent breakdown process.
Compressible Navier-Stokes equations in conservative formulation are discretized on the curvilinear mesh and solved using the widely used parallel software OPENCFD developed by [28]. This code has been validated [27, 28] and used widely in compressible boundary-layer transition and turbulence [29–33]. A no-slip condition is applied at the wall. The inviscid and viscous terms are discretized with a modified 7th-order WENO scheme [34] and a 6th-order central difference, respectively. Time is advanced with an explicit third-order Runge-Kutta method. In the wall-normal direction, at each azimuthal ray (k=1,2,...,Nζ), exponentially stretched grid points with wall-normal distances satisfying
$$\eta\left(i,j\right)=\frac{\textup{exp}\left(\frac{j-1}{N_{\eta}-1}b\right)-1}{\textup{exp}(b)-1}L_{\eta}(i,k), \ i=1,2,...,N_{\xi} \ \textup{and} \ j=1,2,...,N_{\eta} $$
are applied to capture strong derivatives near the wall, where Lη(i,k) is the local wall-normal distance of the farfield to the wall. The factor b is determined by prescribing the wall distance of the first layer grid η(i,2), i=1,2,...,Nξ (10^−2 mm in the present work). The grid size excluding the buffer region is Nξ×Nη×Nζ=2800×300×1039, yielding grid resolutions in the axial, wall-normal and azimuthal directions of Δξ+≈10, Δηw+≈0.4 and Δζ+≈7−10 downstream of the transition front, where Δηw is measured at the wall. Quantities with superscript '+' are in wall units based on the wall-friction velocity \(u_{\tau }=\sqrt {\tau _{w}/\rho _{w}}\) and kinematic viscosity νw=μw/ρw, where \(\tau _{w}=\mu \partial \overline {q}_{n}/\partial \eta |_{\eta =0}\) and \(\overline {q}_{n}\) is the total mean velocity that is tangent to the wall, where \(\overline {\bullet }\) denotes the temporal average.
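As an illustration of the stretching described above, the factor b can be obtained by a one-dimensional root find once the first-layer spacing is prescribed. The sketch below is ours and uses a representative farfield distance (Lη = 10 mm) rather than the actual local values Lη(i,k) from the simulation.

import numpy as np
from scipy.optimize import brentq

N_eta = 300          # wall-normal grid points, as in the present simulation
L_eta = 10.0         # local farfield distance to the wall [mm]; representative value only
d_wall = 1.0e-2      # prescribed wall distance of the first layer grid [mm]

def eta(j, b):
    """Exponentially stretched wall-normal coordinate of grid point j (1-based)."""
    return (np.exp((j - 1) / (N_eta - 1) * b) - 1.0) / (np.exp(b) - 1.0) * L_eta

# The stretching factor b follows from prescribing eta(2, b) = d_wall
b = brentq(lambda bb: eta(2, bb) - d_wall, 1.0e-6, 50.0)

grid = eta(np.arange(1, N_eta + 1), b)
print(f"b = {b:.3f}, first spacing = {grid[1] - grid[0]:.2e} mm, last spacing = {grid[-1] - grid[-2]:.2e} mm")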
Transition front and the temporal spectra of wall pressure
Since most of the details about the transitional flow fields, such as instantaneous snapshots in the wall-parallel or azimuthal planes, vortical structures, temporal spectra inside the boundary layer and stability analyses, have been studied thoroughly in [26] (Chen et al.: Transition of hypersonic boundary layer over a yawed blunt cone, submitted), we will only give a brief introduction to the transition process before discussing properties of the wall pressure.
Figure 3 gives an overall picture of the transition process by showing the slices of instantaneous u in wall-normal planes at different axial locations. At the side of the cone, the growth and breakdown of crossflow vortices are clearly observed.
Contours of instantaneous u in wall-normal planes at several axial locations
Figure 4 shows the temporal averaged mean wall heat flux
$$ \bar{Q}_{w}=2\frac{\mu_{w}C_{p}}{RePr}\partial_{\eta}T|_{\eta=0} $$
on the unrolled cone surface, where \(C_{p}=\gamma /[M_{\infty }^{2}(\gamma -1)]\) with γ the specific heat ratio and the Prandtl number Pr=0.7. In Fig. 4, ζ represents the distance with respect to the θ=90∘ ray in terms of the arc length. The windward and leeward symmetry planes are respectively at the lower and upper limits of ζ. The transition onset is located at x≈300 mm and θ≈52∘, yielding Re_x=3×10^6, which is close to that in a Mach 8 noisy wind tunnel experiment [7]. Obviously, \(\bar {Q}_{w}\) appears in streaky patterns on the leeward side (θ<90∘), suggesting that the transition is due to the breakdown of stationary crossflow vortices in that area. The dashed lines represent trajectories of two representative stationary crossflow vortices determined by the local maximum of \(\bar {Q}_{w}\). The trajectories are roughly inclined by 12−13∘ with respect to the θ=90∘ ray, which is consistent with both experiments [35] and numerical simulations [10, 36] under similar conditions. The axial evolution of \(\bar {Q}_{w}\) along the two trajectories is given in Fig. 4b, suggesting that the two stationary crossflow vortices grow exponentially between x=260 mm and 320 mm. On the windward side (θ>90∘), the transition is most likely caused by unsteady modes.
a The temporal mean of the wall heat flux \(\bar {Q}_{w}\). Dashed lines represent two trajectories of stationary crossflow vortices V1 and V2 determined by the local maximum of \(\bar {Q}_{w}\). \(\bar {Q}_{w}\) along the two trajectories are given in b with solid and dashed lines respectively for V1 and V2
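As a minimal illustration of how the wall heat flux defined above can be evaluated from discrete near-wall data, the following Python sketch uses a second-order one-sided difference on the stretched grid. The numerical values are illustrative only and are not taken from the simulation, and the nondimensional Cp is read here as γ/[M∞²(γ−1)] as written in the text.

import numpy as np

def wall_heat_flux(T, eta, mu_w, Re, Pr=0.7, gamma=1.4, Mach=6.0):
    """Nondimensional mean wall heat flux, Q_w = 2*mu_w*Cp/(Re*Pr) * dT/deta at the wall.

    T, eta : temperature and wall-normal coordinate at the first three grid points,
             with eta[0] = 0 at the wall (stretched, non-uniform spacing).
    """
    Cp = gamma / (Mach**2 * (gamma - 1.0))
    h1, h2 = eta[1] - eta[0], eta[2] - eta[0]
    # Second-order one-sided derivative on a non-uniform grid
    dTdeta = (-(h1 + h2) / (h1 * h2) * T[0]
              + h2 / (h1 * (h2 - h1)) * T[1]
              - h1 / (h2 * (h2 - h1)) * T[2])
    return 2.0 * mu_w * Cp / (Re * Pr) * dTdeta

# Illustrative near-wall values (nondimensional, not from the DNS)
print(wall_heat_flux(T=np.array([3.80, 3.92, 4.05]),
                     eta=np.array([0.0, 1.0e-4, 2.2e-4]),
                     mu_w=2.0, Re=1.0e7))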
We will focus on the wall pressure
$$ p_{w}=\frac{\rho_{w}T_{w}}{\gamma M_{\infty}^{2}} $$
hereinafter. pw is sampled with a frequency ≈3.2 MHz, which should be sufficient to capture all the possible instabilities at the side. The duration of the sampling is roughly twice the flow-through period xL/q∞. Results do not change when using half of the samples.
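For readers wishing to reproduce this kind of analysis on their own data, the temporal spectra can be estimated, for example, with Welch's method. The sketch below is our illustration on a synthetic signal with the sampling frequency quoted above; it is not the DNS data.

import numpy as np
from scipy.signal import welch

fs = 3.2e6                            # sampling frequency of p_w [Hz], as in the text
t = np.arange(0, 1.0e-2, 1.0 / fs)    # a 10 ms synthetic record (placeholder for the DNS series)
rng = np.random.default_rng(1)
# Two narrow-band components mimicking the low- (30 kHz) and high-frequency (450 kHz) peaks
pw = (0.02 * np.sin(2 * np.pi * 3.0e4 * t)
      + 0.01 * np.sin(2 * np.pi * 4.5e5 * t)
      + 0.005 * rng.standard_normal(t.size))

f, Pxx = welch(pw, fs=fs, nperseg=4096)
print(f"strongest spectral peak near {f[np.argmax(Pxx)] / 1e3:.0f} kHz")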
Figure 5a shows one instantaneous snapshot of fluctuating pw. Upstream of the transition front (where the wall heat flux grows violently in Fig. 4a), especially beyond x=400 mm, there are quasi two-dimensional structures, indicating the origin of the Mack mode, whose presence is justified by quasi two-dimensional vortical structures shown in (Chen et al.: Transition of hypersonic boundary layer over a yawed blunt cone, submitted). Wall pressure of a similar pattern was also identified by [27] on the surface of a cone placed at AoA= 1∘ with the same freestream condition. Figure 5b displays the r.m.s. of the wall pressure. Similar to the wall heat flux in Fig. 4a, (pw)rms is also trapped by stationary crossflow vortices at θ≲90∘. We will investigate the properties of pw at several probe points marked in Fig. 5b around the transition front. They are divided into two groups, namely Group 1 and Group 2, respectively upstream and downstream of the transition front. Their coordinates are tabulated in Table 1.
The instantaneous a and r.m.s. b of the fluctuating wall pressure. The dashed lines in b represent the trajectories of stationary crossflow vortices shown in Fig. 4. The markers are probe points with those upstream and downstream of the transition front labeled as Group 1 and Group 2, respectively
Table 1 The coordinates of probe points in the Group 1 (upstream of the transition front) and Group 2 (downstream of the transition front)
Figure 6 shows the temporal spectra of pw at these probe points. Two obvious peaks can be observed: a low-frequency one centered around 30 kHz, independent of location, and a high-frequency one centered at 450-500 kHz for probe points in Group 1 and at 350-450 kHz in Group 2. The frequencies of the high-frequency peaks roughly follow the estimate \(f=\bar {q}_{e}/2\delta \), given by the filled circles in the figure. The high-frequency peaks downstream of the transition front are broader, which is likely due to nonlinear effects. Harmonics of the high-frequency peaks are also clearly seen, indicating the presence of self-interactions.
The temporal spectra of pw at probe points in the Group 1 a and Group 2 b. Symbols denote the estimation \(f=\bar {q}_{e}/2\delta \)
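A rough sketch of how such spectra and the thickness-based peak estimate can be obtained is given below (Welch's method from SciPy applied to a synthetic signal; the edge speed, boundary-layer thickness and window length are placeholder values, not the simulation data).

```python
import numpy as np
from scipy.signal import welch

fs = 3.2e6                                  # wall-pressure sampling frequency, ~3.2 MHz as stated above
t = np.arange(0, 0.01, 1.0 / fs)
# synthetic stand-in for a probe signal with a 30 kHz and a 450 kHz component plus noise
p_w = (np.sin(2 * np.pi * 30e3 * t)
       + 0.3 * np.sin(2 * np.pi * 450e3 * t)
       + 0.05 * np.random.randn(t.size))

f, Pxx = welch(p_w, fs=fs, nperseg=8192)    # temporal power spectrum of p_w

# thickness-based estimate of the high-frequency (Mack-mode) peak, f ~ q_e / (2*delta)
q_e, delta = 870.0, 1.0e-3                  # placeholder edge speed [m/s] and thickness [m]
f_est = q_e / (2.0 * delta)                 # ~435 kHz for these illustrative values
```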
Based on the above temporal spectra, we further filtered pw into the frequency bands 10-50 kHz and 300-600 kHz. The instantaneous pw of low and high frequencies are shown in Fig. 7a, c. The low-frequency pw exhibits large-scale streaky structures, roughly following the transition front in Fig. 4a, while the high-frequency pw shows small-scale wavepacket structures. Moreover, the wavepackets reside closer to the windward ray than the streaks, especially at the rear of the cone, where they are almost two-dimensional. Combined, the low- and high-frequency pw form a pattern in which small-scale wavepackets ride on large-scale streaks in the vicinity of the transition front, as shown in Fig. 5a. The resulting (pw)rms of the low- and high-frequency pw are given in Fig. 7b, d.
The instantaneous a, c and r.m.s. b, d of low- and high-frequency pw. a, b for low-frequency pw with frequency 10-50 kHz and c, d for high-frequency pw with 300-600 kHz. Details of high-frequency pw are given in two regions by the subfigures
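One possible implementation of the frequency-band decomposition described above is sketched below (a zero-phase Butterworth band-pass filter from SciPy applied to a synthetic signal; the filter type and order are choices of this illustration and are not specified in the text).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 3.2e6                                          # sampling frequency [Hz]

def bandpass(signal, f_lo, f_hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter between f_lo and f_hi."""
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

t = np.arange(0, 0.005, 1.0 / fs)
p_w = np.sin(2 * np.pi * 30e3 * t) + 0.3 * np.sin(2 * np.pi * 450e3 * t)   # synthetic signal

p_low  = bandpass(p_w, 10e3, 50e3, fs)              # low-frequency band, 10-50 kHz
p_high = bandpass(p_w, 300e3, 600e3, fs)            # high-frequency band, 300-600 kHz
```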
Distribution of fluctuating pressure inside stationary crossflow vortices
For a sharp, 7∘ half-angle cone at 6∘ AoA in a Mach 6 quiet wind tunnel, [8] found that the high-frequency (175-325 kHz) wall-pressure disturbances coincide with the local maximum of the wall heat flux, while the low-frequency (100-175 kHz) wall-pressure disturbances lie between heat streaks. They argued that if the high-frequency signal is the crossflow-modified Mack mode, then the local maximum of (pw)rms should lie under the thin trough of the crossflow vortex where the wall heat flux is high, as they had observed. [4] also detected high-frequency signals, regarded as the Mack mode, in positions where the boundary layer is thin.
In a similar way to [8], Fig. 8 displays (pw)rms and \(\bar {Q}_{w}\) in the neighbourhood of the trajectories of the stationary crossflow vortices V1 and V2. Dashed lines in the figure correspond to the local maximum of \(\bar {Q}_{w}\). There is no particular pattern relating the (pw)rms of the low-frequency pw to \(\bar {Q}_{w}\) (Fig. 8a-b, I). Specifically, around V1, (pw)rms shows no preference with respect to the heat flux, while around V2 it lies between heat streaks upstream of x≈260 mm but coincides with the heat peaks once the stationary crossflow vortices become strong. For the high-frequency pw, however, Fig. 8a-b, II clearly shows that the spatially localized (pw)rms presented in Fig. 7 is concentrated under the peak heating areas up to x≈280 mm, beyond which (pw)rms and \(\bar {Q}_{w}\) deviate.
The distribution of the r.m.s. (shaded contour) of the low- (10-50 kHz, I) and high-frequency (300-600 kHz, II) wall pressure and the wall heat flux (dashed lines) in the neighbourhood of stationary crossflow vortex trajectories V1 a and V2 b. Trajectories V1 and V2 are colored in black
Figure 9a shows the amplitude evolution of (pw)rms along the trajectories V1 and V2. The corresponding growth rates
$$ \mathrm{d}\ln\left[(p_{w})_{\mathrm{rms}}\right]/\mathrm{d}x_{\mathrm{vortex}} $$
are shown in Fig. 9b. The high-frequency pw along V1 experiences the most violent growth, while the others grow only mildly, especially the low-frequency pw along V1. This dependence of the growth rates on the trajectories of the stationary crossflow vortices is probably due to the inhomogeneity in the azimuthal direction. The peaks of (pw)rms occur at x≈280 mm, where the wall heat flux starts to grow exponentially, as indicated by Fig. 4b.
Axial evolution of (pw)rms (a) and its growth rate (b) along the stationary crossflow vortex trajectories V1 (solid) and V2 (dashed). Lines with and without symbols are respectively for the low- and high-frequency wall pressure
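Numerically, the growth rate defined above can be evaluated from the r.m.s. amplitude sampled along a vortex trajectory, for instance as follows (with purely illustrative amplitude values):

```python
import numpy as np

# (p_w)_rms sampled along a crossflow-vortex trajectory; the values below are illustrative only
x_vortex = np.linspace(0.22, 0.32, 11)                       # arc length along the vortex [m]
prms = 1.0e-4 * np.exp(40.0 * (x_vortex - x_vortex[0]))      # synthetic exponential growth

# growth rate d ln[(p_w)_rms] / d x_vortex, evaluated by central differences
growth_rate = np.gradient(np.log(prms), x_vortex)            # ~40 1/m everywhere in this example
```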
Since we have all the information inside the boundary layer, Fig. 10 further shows the evolution of prms inside the boundary layer along the vortex trajectory V2 at several axial locations from x=220 mm to 320 mm. prms has its maximum near the wall and is distributed locally in the azimuthal direction even before the boundary layer has been markedly modulated by the stationary crossflow vortex (Fig. 10a). When the rolling up of the stationary crossflow vortex becomes prominent, the intense prms is located roughly beneath the trough of the crossflow vortex, where the boundary layer is thin. However, when the stationary crossflow vortex is about to saturate (Fig. 10e-f), as indicated by Fig. 4b, the maximum of the near-wall prms shifts to the bottom of the shoulder, where the boundary layer is thicker and the wall heat transfer is accordingly weaker. Meanwhile, another weaker peak of prms emerges at the shoulder. The evolution of prms inside the boundary layer gives a clearer explanation of the relation between (pw)rms and \(\bar {Q}_{w}\).
The evolution of prms in the wall-normal plane along the vortex path V2. The dashed lines represent the temporal mean axial velocity. The markers in c, d are probe points. Axial locations are from x=220 mm to 320 mm by an increment of 20 mm
The distribution of prms inside the boundary layer when the stationary crossflow vortices are still weak agrees well with that of the Mack mode extracted by stability analysis [10]. This means that the high-frequency signals here are in fact the Mack mode, which is supported by the distribution of the r.m.s. of the axial velocity shown in Fig. 11. The r.m.s. of the axial velocity inside the boundary layer clearly has two peaks, which is a distinctive feature of the Mack mode.
The same as Fig. 10 but for the r.m.s. of axial velocity with frequency between 250 kHz and 550 kHz
Also in a Mach 6 quiet wind tunnel, [5] found that when pressure sensors at the side of an inclined cone are rotated by a small azimuthal angle (≲±5∘), the measured peak frequency at f≈400 kHz disappears, whereas a lower-frequency peak at f≈150 kHz appears. Edelman and Schneider [8] also found a similar phenomenon. They speculated that the frequency jump can probably be ascribed to the relative position between the sensors and the crossflow vortices. In a Mach 8 conventional wind tunnel [7], however, such a frequency jump was not observed; instead, the peak frequency decreases smoothly as a function of the local boundary-layer thickness. The frequency jump observed by [5, 8] is also examined in the present work. Results are shown in Fig. 12, which displays the temporal spectra at different locations inside a stationary crossflow vortex following the trajectory V2, as marked in Fig. 10c, d. The two probes at the wall differ in azimuthal angle by ≈3.5∘. Apparently, inside this crossflow vortex, the peak frequency varies neither with the azimuthal angle nor with the wall-normal distance. But the intensities at the high-frequency peaks vary significantly, as also indicated in Fig. 7d. The same conclusion holds along the vortex path V1. Our results are consistent with [7], probably because the present transition process also takes place in a noisy environment.
The temporal spectra measured at the probe points marked in Fig. 10c, d. a x=260 mm and b x=280 mm
Based on the results shown in Figs. 8 and 10, what [4, 8] observed regarding the relation between (pw)rms and \(\bar {Q}_{w}\) was probably confined to the region where the crossflow vortices were still weak. The relative position between the pressure footprint and the stationary crossflow vortices during the whole transition process shown above needs to be further verified by experiments.
Space-time correlation
In this section, we study the space-time correlation coefficient of wall pressure defined by
$$ C_{pp}\left(\Delta_{\xi},\Delta_{\zeta},\Delta_{t}\right)=\frac{\overline{p_{w}\left(\xi',\zeta',t'\right)p_{w}\left(\xi,\zeta,t\right)}}{(p_{w})_{\textup{rms}}\left(\xi',\zeta',t'\right)\left(p_{w}\right)_{\textup{rms}}\left(\xi,\zeta,t\right)}, $$
where Δξ=ξ−ξ′ and Δζ=ζ−ζ′ are respectively the spatial separations in the axial and azimuthal directions and Δt=t−t′ is the time delay. (ξ′,ζ′) is the position of the reference point marked in Fig. 5b and (ξ,ζ) is the position of the moving point. Δζ<0 means the moving point is closer to the windward symmetry plane than the reference point, and the opposite holds for Δζ>0. Cpp(Δξ,Δζ,0) reveals the spatial organization of pw, while Cpp(Δξ,0,Δt) or Cpp(0,Δζ,Δt) reveals the temporal propagation in space. The space-time correlation has been widely used in turbulent boundary layers (e.g. [18, 19]).
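For one reference point, one moving point and a range of time delays, the coefficient defined above can be evaluated as in the minimal sketch below; repeating this over all probe points and separations yields the correlation maps discussed next. This is an illustration only, not the authors' post-processing code.

```python
import numpy as np

def cpp_time(p_ref, p_mov, max_lag):
    """Correlation coefficient between the wall pressure at a fixed reference point (p_ref)
    and at a moving point (p_mov) as a function of the time-delay index.

    Returns (lags, C); a positive lag means the moving-point signal lags the reference."""
    p_ref = p_ref - p_ref.mean()
    p_mov = p_mov - p_mov.mean()
    norm = p_ref.std() * p_mov.std()
    lags = np.arange(-max_lag, max_lag + 1)
    C = np.empty(lags.size)
    for i, lag in enumerate(lags):
        if lag >= 0:
            C[i] = np.mean(p_ref[:p_ref.size - lag] * p_mov[lag:]) / norm
        else:
            C[i] = np.mean(p_ref[-lag:] * p_mov[:p_mov.size + lag]) / norm
    return lags, C
```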
We computed Cpp(Δξ,Δζ,Δt) for the low- and high-frequency pw separately. The spatial correlation coefficients Cpp(Δξ,Δζ,0) are presented first. The results for the low-frequency pw at probe points in Group 1 are shown in Fig. 13. The inclination angle αp of the low-frequency pw streaks, i.e. the angle of the wave front with respect to the θ=90∘ meridian, is roughly 22∘ except near the rear of the cone, where αp≈26∘. This angle is very close to that of the low-frequency wall-pressure disturbances detected in a Mach 6 wind tunnel [4]. The length lt of a low-frequency pw streak in its inclined direction is roughly 27–40 mm, taking Cpp(Δξ,Δζ,0)=0.5. The distance ln between low-frequency pw streaks, measured as the distance between the positive and negative lobes normal to the inclined direction, increases from ≈2.1 mm to ≈4.8 mm with axial position. The low-frequency pw at probe points in Group 2 have similar inclination angles (not shown) to those in Group 1, but with a higher noise-to-signal ratio, since more small scales are expected to be generated downstream of the transition front. There, the length lt of the low-frequency pw streaks is in the range ≈21−32 mm and the distance ln between streaks increases from ≈2.7 mm to ≈4.2 mm with axial position.
Cpp(Δξ,Δζ,0) for low-frequency wall pressure at probe points in Group 1. Contour levels are [-1:0.1:-0.2 0.2:0.1:1]. The dashed lines have slopes arctan(Δζ/Δξ)=20∘ in a, 22.5∘ in b, d, 21.5∘ in c, 24.5∘ in e and 26∘ in f, g
On the other hand, the spatial organization of the high-frequency pw, displayed in Figs. 14 and 15, is totally different from that of the low-frequency pw, as already noticed in the instantaneous snapshots shown in Fig. 7. The high-frequency pw appears as wavepackets whose wavelength, measured as the distance between consecutive positive lobes, is roughly double the local boundary-layer thickness, as tabulated in Table 2; this is a feature of the Mack mode [37]. At the positions closer to the windward symmetry plane (Fig. 14f-g), the wavepackets tend to be nearly two-dimensional, with an azimuthal length roughly five times the wavelength. The size of the wavepackets shrinks significantly in both the axial and azimuthal directions downstream of the transition front. For instance, the axial length of the wavepacket is roughly 20 mm at probe points in Group 1 and decreases to 10 mm in Group 2. The inclination angles αp of the high-frequency wavepackets with respect to the θ=90∘ ray are represented by the dashed lines. The values of αp are listed in Table 3 and compared to the inclination of the flow at the edge of the boundary layer
$$ \alpha_{e}=\arctan\left(\bar{w}_{n}/\bar{u}_{n}|_{\eta=\delta}\right), $$
and to that of the near-wall flow at the first grid layer
$$ \alpha_{w}=\arctan\left(\bar{w}_{n}/\bar{u}_{n}|_{\eta=\eta_{w}}\right). $$
αp does not follow either αw or αe but lies roughly between them. αw is larger than αe, indicating that the streamlines near the wall are more inclined toward the leeward ray than those at the boundary-layer edge.
Cpp(Δξ,Δζ,0) for high-frequency wall pressure at probe points in Group 1. Contour levels are the same as in Fig. 13. The dashed lines have slopes arctan(Δζ/Δξ)=1∘ in a, 3∘ in b, 8∘ in c, 10∘ in d, 15∘ in e, 15∘ in f and 3∘ in g
Cpp(Δξ,Δζ,0) for high-frequency wall pressure at probe points in Group 2. Contour levels are the same as in Fig. 13. The dashed lines have slopes arctan(Δζ/Δξ)=10∘ in a, b, 12∘ in c, 15∘ in d, e and 8∘ in f
Table 2 The boundary layer thickness δ and the wavelength λ of high-frequency pw at probe points in Group 1 (upstream of the transition front) and in Group 2 (downstream of the transition front)
Table 3 The inclination angles of the high-frequency pw (αp) and those of the wall-parallel velocity components at the edge of the boundary layer (αe) and at the first grid layer (αw)
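The two reference angles αe and αw introduced above are simply the directions of the mean wall-parallel velocity at the boundary-layer edge and at the first grid layer, e.g. (placeholder velocity values, for illustration only):

```python
import numpy as np

# mean wall-parallel velocity components (placeholders): axial u_n, azimuthal w_n
u_edge, w_edge = 800.0, 150.0     # at eta = delta (boundary-layer edge)
u_wall, w_wall = 6.0, 2.0         # at the first grid layer off the wall

alpha_e = np.degrees(np.arctan2(w_edge, u_edge))   # inclination of the edge flow
alpha_w = np.degrees(np.arctan2(w_wall, u_wall))   # inclination of the near-wall flow
```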
Next, the temporal propagation is presented. The results for the low-frequency pw at probe points in Group 2 are shown in Figs. 16 and 17, respectively for Cpp(Δξ,0,Δt) and Cpp(0,Δζ,Δt). The shape of the contours reflects the convective nature of the pressure signals. The convection velocity Uc can be estimated by the ratio Δξ(Δζ)/Δt at a given time delay Δt, with the maximum of Cpp obtained at Δξ(Δζ). When Uc does not depend on Δt, as in the present case, the convection velocity can be estimated from the slopes of the contour lines, as indicated by the dashed lines. The low-frequency pw is convected from the leeward side to the windward side, with convection velocities ranging from 0.175 q∞ to 0.25 q∞. In the axial direction, the convection velocity is about 0.45-0.50 q∞, yielding a propagation angle roughly between −29∘ and −19∘. Similar values are found for the probe points in Group 1, so those results are not shown.
Cpp(Δξ,0,Δt) for low-frequency wall pressure at probe points in Group 2. Contour levels are [0.5:0.1:1] for solid and [-1:0.1:-0.5] for dashed. The slopes of the dashed lines are 0.45 q∞ in a, 0.5 q∞ in b, 0.425 q∞ in c, 0.43 q∞ in d, 0.46 q∞ in e and 0.45 q∞ in f
Cpp(0,Δζ,Δt) for low-frequency wall pressure at probe points in Group 2. Contour levels are the same as in Fig. 16. The slopes of the dashed lines are -0.175 q∞ in a-c, -0.225 q∞ in d, -0.2 q∞ in e and -0.25 q∞ in f
In summary, the low-frequency pw travels downstream and toward the windward symmetry plane, which is the expected direction for travelling crossflow modes. Muñoz et al. [4] believed that low-frequency signals measured in experiments and travelling in this manner may be either the first-mode instability or the travelling crossflow mode, whereas [7] argued that they are actually noise in the wind tunnel. In quiet wind tunnels, however, [5, 6, 8] attributed them to travelling crossflow modes.
The high-frequency pw is convected in the axial direction much faster than the low-frequency one, with a convection velocity Uc≈0.85q∞ (\(\approx 0.85\bar {q}_{e}\)) independent of position, as shown by Cpp(Δξ,0,Δt) in Fig. 18 for the probe points in Group 2. In the azimuthal direction, however, Cpp(0,Δζ,Δt) is almost vertical, yielding a convection velocity up to 10 q∞ at the probe point F′ and around 4 q∞ at other points (not shown). Such a high velocity is probably only a mathematical artefact and is non-physical, because at such a velocity the time to travel one millimetre is less than 0.25/q∞, which is already smaller than our sampling time interval of 0.32/q∞. We therefore suggest that these results only reflect the azimuthal coherence of pw as it is convected as wavepackets along the axial direction, rather than a realistic propagation in the azimuthal direction.
Cpp(Δξ,0,Δt) for high-frequency wall pressure at probe points in Group 2. Solid and dashed contours for 0.5 and -0.5 respectively. Solid lines represent Δξ/Δt=0.85q∞
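The convection-velocity estimate used above (the separation of the correlation maximum divided by the corresponding time delay) can be written compactly as follows; the array names and layout are assumptions of this sketch.

```python
import numpy as np

def convection_velocity(Cpp, dxi, dt):
    """Estimate U_c from C_pp(Delta_xi, 0, Delta_t).

    Cpp : 2-D array of correlation coefficients, rows = time delays, columns = axial separations
    dxi : 1-D array of axial separations Delta_xi
    dt  : 1-D array of (non-zero) time delays Delta_t
    For each delay, U_c is the separation of the correlation peak divided by the delay."""
    peak = np.argmax(Cpp, axis=1)           # column index of the correlation peak at each delay
    return dxi[peak] / dt
```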
The above results (Figs. 16, 17 and 18) only show Cpp(Δξ,0,Δt) and Cpp(0,Δζ,Δt) for probe points in Group 2, i.e., downstream of the transition front. The convection velocities for both the low- and high-frequency pw in Group 1 are similar to those in Group 2, but the averaging window for computing the correlation coefficient, i.e., the extent of Δξ and Δt, is much larger, especially for the large-scale low-frequency pw near the rear of the cone.
The footprints of genuine z-type secondary crossflow instability
We have seen in Figs. 8 and 10 that the intense prms shifts from the bottom of the trough to the bottom of the shoulder as the stationary crossflow vortices develop. We believe that the high-frequency signals originate from the Mack mode and evolve into the secondary crossflow mode as they are modulated by the crossflow vortices. In other words, the pressure footprint of the genuine z-type secondary crossflow instability may lie beneath the shoulder of the crossflow vortices.
Recently, (Chen et al.: Stationary cross-flow breakdown in a high-speed swept-wing boundary layer, submitted) investigated the natural transition process in a high-speed swept-wing boundary layer. The transition is triggered by the z-type secondary crossflow instability riding on stationary crossflow vortices. In that work, the freestream Mach number is 6, but the Mach number at the boundary-layer edge is less than 4, and no unstable Mack modes were identified by stability analysis. Therefore, the pressure properties in (Chen et al.: Stationary cross-flow breakdown in a high-speed swept-wing boundary layer, submitted) could probably be representative of the genuine z-type secondary crossflow instability under hypersonic conditions. Readers are referred to (Chen et al.: Stationary cross-flow breakdown in a high-speed swept-wing boundary layer, submitted) for more details about the setup of the numerical simulation and how the z-type secondary crossflow instability is excited.
First, we show the distribution of prms in wall-normal planes at three axial locations, x=200 mm, 230 mm and 260 mm, in Fig. 19. x=230 mm and 260 mm roughly correspond to the saturation points of the stationary crossflow vortices and of the disturbances excited by the blowing-suction, respectively (Chen et al.: Stationary cross-flow breakdown in a high-speed swept-wing boundary layer, submitted). It is clear that in the high-speed swept-wing boundary layer the pressure is distributed separately in two regions: one at the shoulder of the crossflow vortex, coinciding with the r.m.s. of the vortex-axial oriented velocity, and another just below the shoulder and close to the wall, with similar intensity. Although they are spatially separated, the peak frequencies at the two (pw)rms lobes are the same as that of the vortex-axial oriented velocity, indicating that they are the same thing or, more precisely, different parts of the z-type secondary crossflow instability. Obviously, the pressure extreme of the z-type secondary crossflow instability beneath the shoulder contradicts [10]. The reason for the difference is unclear.
The evolution of u2,rms (a, I-III) and prms (b, I-III) in the wall-normal planes at x=200 mm (I), 230 mm (II) and 260 mm (III) during the breakdown of stationary crossflow vortices in a high-speed swept wing boundary layer (Chen et al.: Stationary cross-flow breakdown in a high-speed swept-wing boundary layer, submitted). The dashed lines represent the temporal mean of the vortex-axial oriented velocity u2
Next, Fig. 20a shows the r.m.s. of the pressure beneath the high-speed swept-wing boundary layer. Since there is only one principal frequency, we did not filter the pressure field. Note that the wall is adiabatic in (Chen et al.: Stationary cross-flow breakdown in a high-speed swept-wing boundary layer, submitted); thus the dashed lines in Fig. 20a represent the local maximum of the skin-friction coefficient Cf instead of the wall heat flux. In the presence of stationary crossflow vortices, the streaky (pw)rms in Figs. 20a and 8 are not intrinsically different, but (pw)rms and Cf are always separated because the former lies beneath the thicker part of the boundary layer, as shown in Fig. 19.
a The r.m.s. of the wall pressure. Dashed lines represent the local maximum of the mean skin-friction coefficient; b The spatial correlation coefficient Cpp(Δξ,Δz,0) at the probe point in a. Contour lines are [-1:0.1:-0.2 0.2:0.1:1]. The dashed line has a slope of 45∘; c and d are respectively Cpp(Δξ,0,Δt) and Cpp(Δz,0,Δt). Contour levels are [-1:0.1:-0.5 0.5:0.1:1]. Dashed lines are the best fit for the center of the contour
The spatial correlation coefficient of pw at the probe point (x=260 mm) in Fig. 20a is shown in Fig. 20b. The wavelength of pw is about δ, where δ is defined as the wall-normal height at which \(\overline {w}(x)\) reaches 0.99w∞. This differs from the high-frequency pw seen in Figs. 14 and 15, whose wavelength is λ≈2δ. The spanwise direction of the swept-wing boundary layer is periodic; thus both the axial and spanwise extents of the wavepacket reach 40 mm, roughly 10 times the local boundary-layer thickness. The wavepacket as a whole is inclined with respect to the axial direction by 45∘, which is equal to the sweep angle, but the high-correlation lobes do not follow the dashed line.
The temporal propagation of the pw beneath the high-speed swept-wing boundary layer is shown in Fig. 20c, d. As indicated by the dashed lines, the convection velocity Uc is not constant. The general trend is that pw propagates more slowly with time, in both the streamwise and spanwise directions. Specifically, Uc,ξ varies roughly between 1.0\(\bar {q}_{e}\) and 2.0\(\bar {q}_{e}\) and Uc,z varies roughly between 0.8\(\bar {q}_{e}\) and 0.96\(\bar {q}_{e}\). It is unclear why the high-frequency signals on the cone surface only propagate in the axial direction.
What we observed in Figs. 19 and 20a is different from the evolution of prms along the trajectories of the stationary crossflow vortices shown in Figs. 8 and 10e, f. This further verifies that the high-frequency disturbances on the side of the cone have their origin in the Mack mode, which is then modulated by the crossflow vortices and turns into the secondary crossflow instability. This can probably also be inferred from [10], who found that the modulated Mack mode was more highly amplified than the unmodulated Mack mode, and that the Mack mode grows much farther upstream and achieves higher N-factors than the z-type secondary crossflow instability.
We have conducted a direct numerical simulation of the transition process of a Mach 6 boundary layer over a blunt cone with a half-angle of 7∘ placed at an AoA of 6∘. The transition is achieved by disturbing the boundary layer with blowing-suction at the wall. The pressure field around the transition front is investigated and compared with that in a transitional high-speed swept-wing boundary layer without an unstable Mack mode. The main conclusion is that the transition is due to the crossflow-modulated Mack mode instead of the genuine secondary crossflow modes. This probably also holds for experiments in quiet tunnels and flight tests.
The wall pressure has two apparent frequency peaks. The low-frequency peak lies at f≈10−50 kHz, independent of the axial and azimuthal locations. Disturbances at the low-frequency peak manifest as large-scale streaks inclined by ≈20∘−26∘ with respect to the θ=90∘ ray. They propagate downstream and toward the windward symmetry plane, with respective convection velocities of 0.45−0.50q∞ and 0.175−0.25q∞. They are likely the travelling crossflow mode.
The high-frequency peak roughly follows the relationship \(f\approx \bar {q}_{e}/2\delta \). The high-frequency wall pressure consists of small-scale, quasi two-dimensional wavepackets with wavelength λ≈2δ, indicating that they are the Mack mode. They are convected in the axial direction at a velocity of 0.85q∞. When the stationary crossflow vortices are still weak, the intense (pw)rms lies beneath the trough of the stationary crossflow vortices, where the wall heat flux (isothermal wall) or the skin-friction coefficient is high. As the stationary crossflow vortices develop, the Mack mode is modulated and the intense (pw)rms shifts to the bottom of the shoulder, where the boundary layer is thicker; this is the same behaviour as that of the z-type secondary crossflow instability in a high-speed swept-wing boundary layer without the Mack mode.
Finally, [8] and our results suggest that comparing the locations of the maximum wall heat flux and the maximum (pw)rms could be the simplest way, based only on wall information available in experiments, to determine the origin of the high-frequency signals measured by surface pressure sensors.
All the data for the current study are available from the corresponding author on reasonable request.
Su C, Zhou H (2007) Stability analysis and transition prediction of hypersonic boundary layer over a blunt cone with small nose bluntness at zero angle of attack. Appl Math Mech 28:563–572.
Hader C, Fasel HF (2019) Three-dimensional wave packet in a Mach 6 boundary layer on a flared cone. J Fluid Mech 885:3.
Paredes P, Choudhari MM, Li F (2020) Mechanism for frustum transition over blunt cones at hypersonic speeds. J Fluid Mech 894:22.
Muñoz F, Heitmann D, Radespiel R (2014) Instability modes in boundary layers of an inclined cone at Mach 6. J Spacecr Rocket 51:442–454.
Ward CAC, Henderson RO, Schneider SP (2015) Possible secondary instability of stationary crossflow vortices on an inclined cone at Mach 6. AIAA Paper 46:2773–2913.
Craig SA, Saric WS (2016) Crossflow instability in a hypersonic boundary layer. J Fluid Mech 808:224–244.
Edelman JB, Casper KM, Henfling JF, Spillers RW (2017) Crossflow transition on a pitched cone at Mach 8. AIAA Paper 46:2899–2913.
Edelman JB, Schneider SP (2018) Secondary instabilities of hypersonic stationary crossflow waves. AIAA J 56:182–192.
Neel IT, Leidy AN, Tichenor NR, Bowersox RDW (2018) Influence of environmental disturbances on hypersonic crossflow instability on the HIFiRE-5 elliptic cone In: SciTech Forum, AIAA, 1821. https://doi.org/10.2514/6.2018-1821.
Li F, Choundhari M, Paredes P, Duan L (2016) High-frequency instabilities of stationary crossflow vortices in a hypersonic boundary layer. Phys Rev Fluids 1:053603.
Paredes P, Gosse R, Theofilis V, Kimmel R (2016) Linear modal instabilities of hypersonic flow over an elliptic cone. J Fluid Mech 804:442–466.
Moyes AJ, Paredes P, Kocian TS, Reed HL (2017) Secondary instability analysis of crossflow on a hypersonic yawed straight circular cone. J Fluid Mech 812:370–397.
Xu G, Chen J, Dong S, Fu S (2019) The secondary instabilities of stationary cross-flow vortices in a Mach 6 swept wing flow. J Fluid Mech 873:914–941.
Kimmel RL, Adamczak DW, Borg MP, Jewell JS, Juliano TJ, Stanfield SA, Berger KT (2019) First and fifth hypersonic international flight research experiment's flight and ground tests. J Spacecr Rocket 56(2):421–431.
Mack LM (1984) Boundary-layer linear stability theory. AGARD Rep. 709.
Malik MR, Li F, Choudhari MM, Chang C (1999) Secondary instability of crossflow vortices and swept-wing boundary layer transition. J Fluids Mech 399:85–115.
Bonfigli G, Kloker M (2007) Secondary instability of crossflow vortices: validation of the stability theory by direct numerical simulation. J Fluid Mech 583:229–272.
Bernardini M, Pirozzoli S (2011) Wall pressure fluctuations beneath supersonic turbulent boundary layers. Phys Fluids 23:085102.
Bernardini M, Pirozzoli S, Grasso F (2011) The wall pressure signature of transonic shock/boundary layer interaction. J Fluid Mech 671:288–312.
Choi H, Moin P (1990) On the space-time characteristics of wall-pressure fluctuations. Phys Fluids 2(8):1450.
Ritos K, Drikakis D, Kokkinakis IW (2019) Acoustic loading beneath hypersonic transitional and turbulent boundary layers. J Sound Vib 441:50–62.
Fedorov AV (2015) Prediction and control of laminar-turbulent transition in high-speed boundary-layer flows. Procedia IUTAM 14:3–14.
Dörr PC, Kloker MJ (2016) Transition control in a three-dimensional boundary layer by direct attenuation of nonlinear crossflow vortices using plasma actuators. Intl J Heat Fluid Flow 61:449–465.
Friederich T, Kloker MJ (2012) Control of the secondary cross-flow instability using localized suction. J Fluid Mech 706:470–495.
Malik M, Liao W, Li F, Choudhari M (2015) Discrete-roughness-element-enhanced swept-wing natural laminar flow at high Reynolds numbers. AIAA J 53:2321–2334.
Chen X, Chen J, Dong S, Xu G, Yuan X (2020) Stability analyses of leeward streamwise vortices for a hypersonic yawed cone at 6 degree angle of attack. Acta Aerodynamica Sin 38(2):299–307.
Li X, Fu D, Ma Y (2010) Direct numerical simulation of hypersonic boundary layer transition over a blunt cone with a small angle of attack. Phys Fluids 22:025105.
Li X, Fu D, Ma Y (2008) Direct numerical simulation of hypersonic boundary-layer transition over a blunt cone. AIAA J 46:2899–2913.
Zhang Y, Bi W, Hussain F, She Z (2014) A generalized Reynolds analogy for compressible wall-bounded turbulent flows. J Fluid Mech 739:392–420.
Zheng W, Yang Y, Chen S (2016) Evolutionary geometry of Lagrangian structures in a transitional boundary layer. Phys Fluids 28:035110.
Tong FL, Li XL, Duan YH, Yu CP (2017) Direct numerical simulation of supersonic turbulent boundary layer subjected to a curved compression ramp. Phys Fluids 29:125101.
Chen X, Huang GL, Lee CB (2019) Hypersonic boundary layer transition on a concave wall: stationary Görtler vortices. J Fluid Mech 865:1–40.
Zheng W, Ruan S, Yang Y, He L, Chen S (2019) Image-based modeling of the skin-friction coefficient in compressible boundary-layer transition. J Fluid Mech 875:1175–1203.
Jiang G, Shu C (1996) Efficient implementation of weighted ENO schemes. J Comput Phys 126:202.
Swanson EO, Schneider SP (2010) Boundary layer transition on cones at angle of attack in a Mach-6 quiet tunnel. AIAA Paper 2010-1062.
Gronvall JE, Johnson HB, Candler GV (2012) Hypersonic three-dimensional boundary layer transition on a cone at angle of attack. AIAA Paper 2012-2822.
Mack LM (1987) Stability of axisymmetric boundary layers on sharp cones at hypersonic Mach numbers. AIAA Conference Paper 87:1413.
We would like to acknowledge Prof. Li Xinliang and Dr. Tong Fulin for their generous help in setting up the simulation and for fruitful discussions. We would also like to acknowledge Dr. Zhao Lei at Tianjin University for fruitful discussions.
This work was supported by the National Key Research and Development Program of China 2016YFA0401200 and 2019YFA0405200, the National Numerical Wind tunnel (NNW) project, and National Natural Science Foundation of China under contract 11702307.
State Key Laboratory of Aerodynamics, China Aerodynamics Research and Development Center, Mianyang, 621000, China
Siwei Dong, Jianqiang Chen, Xianxu Yuan, Xi Chen & Guoliang Xu
Computational Aerodynamics Institute, China Aerodynamics Research and Development Center, Mianyang, 621000, China
Siwei Dong
Jianqiang Chen
Xianxu Yuan
Xi Chen
Guoliang Xu
XY conceived of the study. All the authors analysed the data. SD prepared the manuscript and all the authors read and approved the final manuscript.
Correspondence to Xianxu Yuan.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Dong, S., Chen, J., Yuan, X. et al. Wall pressure beneath a transitional hypersonic boundary layer over an inclined straight circular cone. Adv. Aerodyn. 2, 29 (2020). https://doi.org/10.1186/s42774-020-00057-4
Wall pressure
Mack mode
Secondary crossflow instability
Inclined cone | CommonCrawl |
Plasticity varies with boldness in a weakly-electric fish
Kyriacos Kareklas1,
Gareth Arnott1,
Robert W. Elwood1 &
Richard A. Holland1,2
The expression of animal personality is indicated by patterns of consistency in individual behaviour. Often, the differences exhibited between individuals are consistent across situations. However, between some situations, this can be biased by variable levels of individual plasticity. The interaction between individual plasticity and animal personality can be illustrated by examining situation-sensitive personality traits such as boldness (i.e. risk-taking and exploration tendency). For the weakly electric fish Gnathonemus petersii, light condition is a major factor influencing behaviour. Adapted to navigate in low-light conditions, this species chooses to be more active in dark environments, where the risk from visual predators is lower. However, G. petersii also exhibits individual differences in the degree of behavioural change from light to dark. The present study therefore aims to examine whether an increase in the motivation to explore in the safety of the dark affects not only mean levels of boldness, but also the variation between individuals, as a result of differences in individual plasticity.
Boldness was consistent between a novel-object and a novel-environment situation in bright light. However, no consistency in boldness was noted between a bright (risky) and a dark (safe) novel environment. Furthermore, there was a negative association between boldness and the degree of change across novel environments, with shier individuals exhibiting greater behavioural plasticity.
This study highlights that individual plasticity can vary with personality. In addition, the effect of light suggests that variation in boldness is situation specific. Finally, there appears to be a trade-off between personality and individual plasticity, with shy but plastic individuals minimizing costs when perceiving risk and bold but stable individuals consistently maximizing rewards, which can be maladaptive.
Variation in behaviour between individuals has been shown extensively in many animal populations and linked to the way animals cope with their environment [1, 2]. Often, the variation is indicated on a continuum ranging from the lowest to the highest level of behavioural response within the population [3] and as such indicates the degree to which each individual exhibits the behaviour in relation to the rest of the population. This variation can be consistent across contexts (i.e. functional behavioural categories such as feeding), situations (i.e. sets of current conditions such as feeding with and without predators) and time [4–6]. Each behaviour that is consistently variable between individuals is termed an animal personality trait and a number of such traits can be used to describe personality in animals [7]. One of the most examined animal personality traits is boldness, which is indicated on a shy–bold axis [8]. Human-derived terminology defines boldness as the consistent willingness to take risks in unfamiliar situations [9]. This definition is often appropriated when studies consider its evolutionary and ecological consequences [10]. However, 'ecologically-based' approaches typically define bolder individuals as those that are the least affected by risk and more willing to approach and explore novel objects or environments [11, 12].
Boldness, like all personality traits, remains consistent depending on the degree to which behavioural plasticity varies between individuals [13]. On one hand, individuals can adjust their behaviour, but the extent of adjustment may be relatively uniform within the population. Thus, even if mean levels of behaviour change, between-individual variation is maintained, i.e. all individuals show similar plasticity [14]. For example, the mean boldness (propensity to exit shelter) of salamander larvae decreases in the presence of predators, but the variation between individuals is maintained across situations with and without predators [15]. On the other hand, environmental changes can affect the behaviour and physiology of some individuals more than others [16, 17], e.g. rainbow trout that exhibit lower activity and aggressiveness are affected more by increasing environmental stressors [18]. Consequently, behavioural variability within populations can be biased by the variable degree to which environmental changes affect individuals. Individuals may be more or less flexible over an environmental gradient of changing conditions, i.e. they exhibit variable levels of individual plasticity [19].
Links between personality and individual plasticity have been reported when testing boldness across situations varying in their level of risk and familiarity [20]. Lima and Bednekoff suggest that behavioural response depends on the level of perceived risk, which can vary between individuals [21]. A greater response can thus be associated with a greater perception of risk, even when uncertain about its presence, while the ability to adjust response, depending on risk levels, can be overall more beneficial for surviving in the wild [22]. This manifests in risk-taking behaviour, with individuals that respond more to risk (i.e. those taking less risk) also showing greater changes across shifting levels of perceived risk. For example, between situations that vary in perceived predatory risk (presence or absence of sparrowhawk model), shy chaffinches (least active in a novel environment) show greater behavioural plasticity than bold chaffinches (most active in a novel environment) [23]. Mortality, growth and fecundity can all be affected by an individual's response to changes in risk [24], e.g. shier damselfish show lower mortality rates by being less active in unfamiliar environments [25]. It is therefore imperative to examine how changes in levels of perceived risk can affect boldness and individual plasticity.
For weakly-electric fish, the level of perceived risk in their environment is most significantly affected by light conditions. Most species prefer lower light transmission, where they can integrate their electric-sensing with other senses in the absence of light [26, 27]. One example is the Central African mormyrid Gnathonemus petersii, which favours nocturnal activity and turbid, vegetated waters [28, 29]. This species can perceive spatial features, navigate and explore objects and environments by using active electrolocation, i.e. the sensing of changes to a self-produced electric discharge [30, 31]. Though G. petersii is often prey to bigger electric fish, it is argued that one function of electrolocation is avoiding risk from visually-guided predators in darker environments [31, 32]. The lower predation risk would increase their motivation to approach and explore objects and environments, hence their preference to be active in the dark [26, 27]. However, the change in motivation can be greater in some individuals, depending on how plastic they are, which can affect mean boldness levels. This is supported by evidence of differences between individuals in the degree of change in food searching times across light conditions [32]. The aim of the present study was to examine boldness and changes in boldness across situations, with a particular interest in the effect of light conditions on individuals.
Boldness was indicated by the willingness of G. petersii to approach (latency times) and inspect (exploration times) novel objects and environments. First, fish were tested with a different novel object on four occasions, to control for differences in object characteristics. The tests were carried out in a bright, familiar environment. Then, individuals were tested in two separate novel-environment situations differing in light condition, i.e. a dark and a bright novel-environment. Finally, an intra-individual variance statistic was used to measure individual plasticity across the environmental gradient between bright and dark [19, 33]. It was tested whether boldness from the novel-object tests 1) was consistent with boldness in the bright and dark novel-environment situations and 2) related to individual plasticity across these novel-environment situations.
Animal maintenance and housing
Twelve juvenile (70–100 mm length), wild-caught G. petersii of unknown gender (external sexual dimorphism is lost in captivity) [34] were imported and commercially supplied by Grosvenor's Tropicals, Lisburn, Northern Ireland. Fish were housed individually in ~25 L of water, fed 15–20 chironomid larvae daily and kept on a 12 h:12 h light to dark photoperiod. Housing tanks were enriched with shelter (plastic pipes), sediment and plastic plants, stones and ceramics. Housing and experimental tanks were fitted with filtering and heating equipment and kept on same-level benches. Water quality in all tanks was tested twice-weekly and maintained by partial water changes (mixed tap and reverse osmosis water). The pH was kept at 7.2 ± 0.4, temperature at 26 ± 1 °C and conductivity at a range between 150–300 μS/cm.
Behavioural tests
Test conditions and procedures
Light conditions varied between those within (bright light at 350–600 nm and 300 lux at water surface) and those exceeding (dark in infra-red light >800 nm and 0 lux at water surface) the visible spectrum of G. petersii [35]. Water conductivity in the test tanks was 150 ± 50 μS/cm. External cues were limited by attaching visual barriers (opaque blue plastic sheets) around both the novel-environment test tanks and the housing tanks, during testing. Behavioural variables were measured live during the novel-object test and from recordings of the novel-environment test. This was carried out by a single observer (KK), with a response latency of 1–2 s, using a stopwatch with a ±0.2 s measuring error.
Novel-object tests
Novel-object tests were in bright light. These were carried out following a 2 week acclimatisation period to ensure that the objects were novel to the fish, but not the environment (housing tank). Each individual received four separate novel-object tests, with a 5 min interval between each test. The test was repeated with different novel objects in order to control for variation in potential effects elicited by the differences in the characteristics of novel objects. These effects could result from how each object is perceived by individuals. G. petersii can sense multiple properties of objects, some of which are typically not perceived by non-electrosensing fish, such as resistance and capacitance [29]. To that end, the novel objects not only differed in shape, colour and size, but also material. Objects included: a ~ 5 cm long black fishing weight (A), a ~7 cm long stainless-steel fishing lure without the hook (B), a ~15 cm long yellow/green plastic dinosaur toy (C) and a 10 cm3 multicolour wooden cubic toy attached to a small brass weight (D). Following recommendations from Wilson et al. [36], objects were presented to each fish in the same order (A-B-C-D) to control for carryover effects. The objects were lowered in housing tanks at the furthest non enriched area from the individual's shelter using a monofilament-line pulley-system. Fish were given up to five minutes to approach each object (within ~1.5 body-lengths), which was measured as latency time [11]. Then a further 1 min was allowed for exploration (75 % of individuals explored new objects under 55 s in preliminary studies; see Additional file 1), during which the time spent performing electrosensing movements (motor probing acts, e.g. lateral and chin probing) [37] within the 1.5 body-length distance was measured as exploration time.
Novel environment tests
The recording of the novel-environment tests was carried out both under bright light and in the dark and started a week after the novel-object tests (overall 3 weeks in the laboratory), which allowed individuals to acclimatise to laboratory light conditions. Timers switched between bright light and dark photoperiods every 12 h (lights went on at 7 am and off at 7 pm), daily. Novel-environment tests were carried out with a random light-condition order between fish. Individuals randomly selected to be tested first in the dark, were tested between 5 am and 6 am and then in bright light between 8 am and 9 am. Those randomly selected for being tested first in bright light, were tested between 5 pm and 6 pm and then in the dark between 8 pm and 9 pm. This procedure of recording during normal laboratory photoperiods controlled for the risk of effects from circadian rhythms [31]. Each individual was introduced to a segregated housing section (30 cm Length by 30 cm Width and 30 cm Height, ~27 L) of the experimental tank with shelter and enrichments. Here, individuals were allowed to habituate for ~12 h prior to their first novel-environment test, and ~2 h during photoperiod changes between tests (~ an hour before and ~ an hour after lights turned on or off). Tests began by lifting the plastic opaque divider creating the housing section via a pulley system, allowing the fish entry to the rest of the tank (60 cm Length by 30 cm Width and 30 cm Height, ~54 L). This area constituted the novel environment and included items that were similar to enrichments in their housing tanks i.e. shelters (plastic pipes), ceramics, stones and plastic plants of variable sizes. The items within the novel area were rearranged and/or replaced between bright and dark tests for all fish. A wall-mounted infra-red camera provided a live feed of the entire novel-environment test-tank from a birds-eye view. This was relayed through a recorder to a computer placed out of view from the tank. During recording, fish where allowed up to a maximum of 1 h to enter the novel environment (i.e. until an individual's tail passed the mark on the bottom of the tank) and a further 10 min to explore. During the later viewing of the recordings, latency time was measured until an individual entered the novel environment or until the hour-mark was reached, in which case latency was recorded at 3600 s and exploration at 0 s (this was the case for only one individual in the bright novel environment). Exploration was measured as the time actively moving in the novel area and performing electrosensory probing acts.
Calculations, statistical analyses and graphical representations were all produced in Minitab® statistical software (version 17; Minitab Inc., State College, PA). Data from the novel-object tests were either normally or approximately normally distributed. Only exploration times from the novel-environment test data were normally distributed. Measures were summed to produce composite, standardized boldness scores. This was carried out by adding positive (time exploring) and subtracting negative (latency time to approach) indicators and then standardising (z-scores).
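A minimal sketch of this score construction is given below (the original analysis was carried out in Minitab; the order of combining and standardising follows one reading of the description above, and the example values are hypothetical).

```python
import numpy as np

def boldness_scores(latency, exploration):
    """Composite boldness score: (exploration - latency), then standardised to z-scores.

    latency, exploration : 1-D arrays of times, one value per individual (or per test)."""
    raw = np.asarray(exploration, dtype=float) - np.asarray(latency, dtype=float)
    return (raw - raw.mean()) / raw.std(ddof=1)

# illustrative call with placeholder times (seconds) for five fish
print(boldness_scores(latency=[12, 45, 300, 90, 20], exploration=[50, 30, 5, 20, 48]))
```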
In novel-object tests, some individuals were both less latent to approach and more explorative than others (Fig. 1a). Preliminary analyses of the novel-object tests indicated a strong linear relationship between latency and exploration (R2 = 0.500, F1,47 = 47.32, P < 0.01). Even though some differences were apparent between objects (Fig. 1a), these were not significant (R2 = 0.065, F3,47 = 2.04, P = 0.122). This suggested that boldness levels were indicated by both measures with no effect from object characteristics. Measures from all four novel-object tests were thus used to create boldness scores. Inter-individual differences in latency and exploration were not similar between the bright and dark novel environments (Fig. 1b). Separate boldness scores were therefore produced for each novel-environment situation, dark and bright. Composite scores were used to test consistency in boldness across novel-environment situations and between novel-environment and novel-object situations. For this, two Linear Regression models (LR) were used. The first (LR1) tested the relationship between bright and dark novel-environment scores. The second (LR2) tested whether novel-environment scores were predicted by situation (dark or bright), by novel-object scores, and by their interaction.
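The second model could be formulated, for example, as below (a sketch using the statsmodels formula API on placeholder data; this is an equivalent formulation for illustration, not the Minitab models themselves).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# placeholder data frame: one row per fish and novel-environment situation (12 fish x 2 situations)
df = pd.DataFrame({
    "env_score":    np.random.randn(24),               # novel-environment boldness scores
    "object_score": np.tile(np.random.randn(12), 2),   # novel-object boldness scores
    "situation":    ["bright"] * 12 + ["dark"] * 12,
})

# LR2: situation, novel-object score and their interaction as predictors
lr2 = smf.ols("env_score ~ situation * object_score", data=df).fit()
print(lr2.summary())
```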
Latency and exploration times for each individual, as measured in all novel-object tests (a) and each of the novel-environment situations (b). Individuals that were more explorative, were also less latent to approach objects. Similarly, some individuals were more explorative and less latent in the bright novel environment. However, in the dark novel environment individuals were overall more explorative and less latent
To calculate individual plasticity statistics, typically a measure of each individual's variance between two situations is used [38]. Following Asendorpf's [33] suggestions, here, this was measured as the intra-individual variance (Var) of each fish such that
$$ \mathrm{Var}_{xy}=\frac{\left(z_{x}-z_{y}\right)^{2}}{2} $$
where z is the standardized phenotypic score (here the novel-environment boldness score) in situation x (bright) and y (dark). Higher intra-individual variance values indicate a greater degree of change and therefore greater individual plasticity. To test whether individual plasticity varied with boldness, the intra-individual variance statistics were then correlated with the novel-object boldness scores (Spearman's, r s ).
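The plasticity statistic and its rank correlation with novel-object boldness can be computed as in the following sketch (using SciPy's Spearman correlation on placeholder scores; the original analysis used Minitab).

```python
import numpy as np
from scipy.stats import spearmanr

def intra_individual_variance(z_x, z_y):
    """Var_xy = (z_x - z_y)^2 / 2 for each individual."""
    return (np.asarray(z_x, float) - np.asarray(z_y, float))**2 / 2.0

# placeholder boldness scores for 12 fish (the study's sample size)
rng = np.random.default_rng(1)
z_bright, z_dark, z_object = rng.standard_normal((3, 12))

var_xy = intra_individual_variance(z_bright, z_dark)
rho, p = spearmanr(z_object, var_xy)     # does plasticity vary with novel-object boldness?
```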
Individual scores were not consistent between novel-environment situations (LR1, R2 = 0.251, F1,11 = 3.35, P = 0.097) (Fig. 2a). Boldness differed significantly between the bright and dark novel environments (LR2, R2 = 0.211, F1,23 = 6.85, P = 0.016), being on average greater and less variable in the dark (x̄ = 0.45, s = 0.09) than in the bright (x̄ = -0.45, s = 1.28) novel environment (Fig. 2a). However, the change between bright and dark was greater for some fish (Fig. 2b). Those with the greatest change were also those with below-median novel-object boldness (Fig. 3). The change between bright and dark affected the relationship between novel-object and novel-environment scores (LR2, interaction: R2 = 0.143, F1,11 = 4.65, P = 0.043), which was stronger with the bright than with the dark novel-environment scores (Fig. 3). The intra-individual variance in boldness between the two novel-environment situations was strongly negatively correlated with the boldness score from the novel-object tests (Spearman's, r s = -0.776, P = 0.003) (Fig. 4).
Comparisons between the bright and dark novel environment. The marginal plot (a) shows an average increase in boldness and a decrease in variability in the dark novel environment (box-plots), but also no significant linear relationship between boldness scores from the two novel-environment situations (regression). The individual line plot (b) shows some individuals changing more than others between bright and dark
Linear relationships in boldness between the novel-object situation and each of the novel-environment situations, bright and dark. Novel-object boldness scores were significantly more consistent with those in the bright than those in the dark environment. Those with novel-object boldness scores below the median (dotted line) showed more change between light and dark
Rank correlation between intra-individual variance and boldness scores from the novel-object tests. Bolder individuals were less plastic between the bright and dark novel environment
This study provides compelling evidence supporting the hypothesis that the degree of individual plasticity varies significantly with personality. Boldness was inconsistent between the bright and dark novel environments (Fig. 2a), and the intra-individual variance exhibited across these environments depended on boldness (Fig. 4). However, when bright light conditions are maintained, changes in the level of familiarity/novelty (whether a single unfamiliar object in a familiar environment or a completely unfamiliar environment) seem to have little effect on the behavioural variability between individuals (Fig. 3a). These findings emphasize the overwhelming effect of light condition and indicate a boldness trait that is specific to higher-risk situations, given that bright light is naturally avoided by G. petersii [27].
An indirect effect of the environment can be seen when regularly changing conditions (e.g. light, temperature and turbidity) influence the motivational state of individuals. For example, small within-day increases in temperature relate to an increase in the tendency of damselfish to exit a shelter (measure of boldness), but more so in some individuals than others [39]. It is suggested that an increased motivation to exit shelter and look for food can be associated with the need to compensate for the increased metabolic rates under elevated temperatures [39, 40]. The present study reaffirms that a similar effect is induced by perceived risk through manipulations of light. The decrease in risk in the dark (lower predator threat) increases the motivation to explore a novel environment in some individuals and as a result impacts mean boldness in that situation. Notably, the results presented here also show that the effect varies with boldness (Fig. 3), i.e. perceived risk affects the motivation of shier individuals more. Motivation levels can vary as a function of personality [41] and therefore the impact on motivation by changing conditions may also vary depending on personality traits like boldness.
The negative relation between boldness and individual plasticity (Fig. 4) indicates trade-offs that enable bolder individuals to out-compete shier ones (e.g. for food) in higher-risk situations. However, maintaining bold behaviour in risky situations can be disadvantageous and in the long-term maladaptive [42]. Shier individuals, which are more responsive to change and more plastic [43], gain less when risks are high but compensate in safer environments. This manifests in the behaviour of G. petersii, which is more variable in situations with greater selective pressure (i.e. in bright light with high predatory risk) where risk-aversion is elicited in shier fish, while in the safe dark situation boldness scores are overall high (Fig. 2).
The selection of plastic or consistent behaviour with changing conditions can depend on both the physiological and cognitive state of individuals [44, 45]. Differences between individuals in their physiological stress response [16, 17] and cognitive risk-assessment [22] can explain the differences in strategy, i.e. plastic boldness vs. stable boldness [46]. For example, recent evidence suggests that bolder fish make faster decisions [47]. There is therefore a need to examine mechanisms further, including those used for sensing and processing information, and test how they relate to individual plasticity and personality.
The current study highlights that individuals can vary in the degree of behavioural plasticity exhibited between situations differing in risk level depending on their position along an important animal personality axis, the shy-bold continuum. This strongly suggests that the ability to cope with changing conditions, especially ones associated with the perception of risk, vary between individuals as a function of their personality. Finally, it accentuates that individual variation can be a significant predictor of behaviour and behavioural change in wild populations.
Dall SRX, Bell AM, Bolnick DI, Ratnieks FL. An evolutionary ecology of individual differences. Ecol Lett. 2012;15:1189–98.
Wolf M, Weissing FJ. Animal personalities: consequences for ecology and evolution. Trends Ecol Evol. 2012;27:452–61.
Koolhaas JM, Korte SM, De Boer SF, Van Der Vegt BJ, Van Reenen CG, Hopster H, De Jong IC, Ruis MAW, Blokhuis HJ. Coping styles in animals: current status in behavior and stress-physiology. Neurosci Biobehav R. 1999;23:925–35.
Sih A, Bell A, Johnson JC. Behavioral syndromes: an ecological and evolutionary overview. Trends Ecol Evol. 2004;19:372–8.
Sih A, Bell AM, Johnson JC, Ziemba RE. Behavioral syndromes: an integrative overview. Q Rev Biol. 2004;79:241–77.
Bell AM, Hankison SJ, Laskowski KL. The repeatability of behaviour: a meta-analysis. Anim Behav. 2009;77:771–83.
Biro PA, Stamps JA. Are animal personality traits linked to life-history productivity? Trends Ecol Evol. 2008;23:361–8.
Wilson DS, Clark AB, Coleman K, Dearstyne T. Shyness and boldness in humans and other animals. Trends Ecol Evol. 1994;9:442–6.
Coleman K, Wilson D. Shyness and boldness in pumpkinseed sunfish: individual differences are context-specific. Anim Behav. 1998;56:927–36.
Wilson DS, Coleman K, Clark AB, Biederman L. Shy-bold continuum in pumpkinseed sunfish (Lepomis gibbosus): An ecological study of a psychological trait. J Comp Psychol. 1993;107:250.
Toms CN, Echevarria DJ, Jouandot DJ. A methodological review of personality-related studies in fish: focus on the shy-bold axis of behavior. Int J Comp Psychol. 2010;23:1–25.
Mowles SL, Cotton PA, Briffa M. Consistent crustaceans: the identification of stable behavioural syndromes in hermit crabs. Behav Ecol Sociobiol. 2012;66:1087–94.
Brown AL, Robinson BW. Variation in behavioural plasticity regulates consistent individual differences in Enallagma damselfly larvae. Anim Behav. 2016;112:63–73.
Briffa M, Bibost AL. Effects of shell size on behavioural consistency and flexibility in hermit crabs. Can J Zoolog. 2009;87:597–603.
Sih A, Kats LB, Maurer EF. Behavioural correlations across situations and the evolution of antipredator behaviour in a sunfish–salamander system. Anim Behav. 2003;65:29–44.
Coppens CM, de Boer SF, Koolhaas JM. Coping styles and behavioural flexibility: towards underlying mechanisms. Philos T Roy Soc B. 2010;365:4021–8.
Sørensen C, Johansen IB, Øverli Ø. Neural plasticity and stress coping in teleost fishes. Gen Comp Endocr. 2013;181:25–34.
Øverli Ø, Pottinger TG, Carrick TR, Øverli E, Winberg S. Differences in behaviour between rainbow trout selected for high- and low-stress responsiveness. J Exp Biol. 2002;205:391–5.
Dingemanse NJ, Kazem AJ, Réale D, Wright J. Behavioural reaction norms: animal personality meets individual plasticity. Trends Ecol Evol. 2010;25:81–9.
Dammhahn M, Almeling L. Is risk taking during foraging a personality trait? A field test for cross-context consistency in boldness. Anim Behav. 2012;84:1131–9.
Lima SL, Bednekoff PA. Temporal variation in danger drives antipredator behavior: the predation risk allocation hypothesis. Am Nat. 1999;153:649–59.
Mathot KJ, Wright J, Kempenaers B, Dingemanse NJ. Adaptive strategies for managing uncertainty may explain personality‐related differences in behavioural plasticity. Oikos. 2012;121:1009–20.
Quinn JL, Cresswell W. Personality, anti-predation behaviour and behavioural plasticity in the chaffinch Fringilla coelebs. Behaviour. 2005;142:1377–402.
Dingemanse NJ, Wolf M. Recent models for adaptive personality differences: a review. Philos T Roy Soc B. 2010;365:3947–58.
White JR, Meekan MG, McCormick MI, Ferrari MC. A comparison of measures of boldness and their relationships to survival in young fish. PLoS One. 2013. doi:10.1371/journal.pone.0068900.
Moller P. Electric Fishes: History and behaviour. London: Chapman and Hall; 1995.
Berra TM. Freshwater fish distribution. California: Academic Press; 2001.
Onyeche VEO, Onyeche LE, Akankali JA, Enodiana IO, Ebenuwa P. Food and fish feeding habits in Anwai stream ichthyofauna, Niger-Delta. Int J Fish Aquac. 2013;5:286–94.
von der Emde G, Amey M, Engelmann J, Fetz S, Folde C, Hollmann M, Metzen M, Pusch R. Active electrolocation in Gnathonemus petersii: behaviour, sensory performance, and receptor systems. J Physiol-Paris. 2008;102:279–90.
Kramer B. Electric Organ Discharge. In: Binder MD, Nobutaka H, Windhorst U, editors. Encyclopedia of Neuroscience. Berlin: Springer; 2009. p. 1050–6.
Moller P. Multimodal sensory integration in weakly electric fish: a behavioral account. J Physiol-Paris. 2002;96:547–56.
von der Emde G, Bleckmann H. Finding food: senses involved in foraging for insect larvae in the electric fish Gnathonemus petersii. J Exp Biol. 1998;201:969–80.
Asendorpf JB. Beyond stability: Predicting inter-individual differences in intra-individual change. Eur J Pers. 1992;6:103–17.
Landsman RE. Captivity affects behavioral physiology: plasticity in signaling sexual identity. Experientia. 1991;47:31–8.
Ciali S, Gordon J, Moller P. Spectral sensitivity of the weakly discharging electric fish Gnathonemus petersi using its electric organ discharges as the response measure. J Fish Biol. 1997;50:1074–87.
Wilson CD, Arnott G, Elwood RW. Freshwater pearl mussels show plasticity of responses to different predation risks but also show consistent individual differences in responsiveness. Behav Process. 2012;89:299–303.
Toerring MJ, Moller P. Locomotor and electric displays associated with electrolocation during exploratory behavior in mormyrid fish. Behav Brain Res. 1984;12:291–306.
Cleasby IR, Nakagawa S, Schielzeth H. Quantifying the predictability of behaviour: statistical approaches for the study of between‐individual variation in the within‐individual variance. Methods Ecol Evol. 2015;6:27–37.
Biro PA, Beckmann C, Stamps JA. Small within-day increases in temperature affects boldness and alters personality in coral reef fish. P Roy Soc B. 2010;277:71–7.
Careau V, Thomas D, Humphries MM, Réale D. Energy metabolism and animal personality. Oikos. 2008;117:641-53.
David M, Auclair Y, Giraldeau LA, Cézilly F. Personality and body condition have additive effects on motivation to feed in Zebra Finches Taeniopygia guttata. Ibis. 2012;154:372–8.
Jandt JM, Bengston S, Pinter‐Wollman N, Pruitt JN, Raine NE, Dornhaus A, Sih A. Behavioural syndromes and social insects: personality at multiple levels. Biol Rev. 2014;89:48–67.
de Lourdes Ruiz-Gomez M, Huntingford FA, Øverli Ø, Thörnqvist PO, Höglund E. Response to environmental change in rainbow trout selected for divergent stress coping styles. Physiol Behav. 2011;102:317–22.
Luttbeg B, Sih A. Risk, resources and state-dependent adaptive behavioural syndromes. Philos T Roy Soc B. 2010;365:3977–90.
Mathot KJ, van den Hout PJ, Piersma T, Kempenaers B, Réale D, Dingemanse NJ. Disentangling the roles of frequency‐vs. state‐dependence in generating individual differences in behavioural plasticity. Ecol Lett. 2011;14:1254–62.
Rodríguez-Prieto I, Martín J, Fernández-Juricic E. Individual variation in behavioural plasticity: direct and indirect effects of boldness, exploration and sociability on habituation to predators in lizards. P Roy Soc B. 2011;278:266–73.
Mamuneas D, Spence AJ, Manica A, King AJ. Bolder stickleback fish make faster decisions, but they are not less accurate. Behav Ecol. 2015;26:91–6.
Guidelines for the treatment of animals in behavioural research and teaching. Anim Behav. 2012; doi: 10.1016/j.anbehav.2011.10.031
We thank Clair McAroe, Gillian Riddel and Iolanda Rocha for husbandry and ideas. The project and K.K. are funded and supported by the Department for Employment and Learning, NI, and the School of Biological Sciences, Queen's University Belfast.
The datasets supporting the conclusions of this article are included within the article and its additional files.
KK carried out the set-up, tests, recordings and data collection, participated in the study conception and design, carried out statistical analysis, results illustration and data interpretation, and drafted the manuscript; GA offered critical revisions and input for the final version of the manuscript; RWE contributed significantly to the design of the project, assisted with data analysis, interpretation and results illustration, participated in the writing of the manuscript and carried out manuscript revisions; RAH conceived and coordinated the study, participated in the design, data analysis and interpretation of results, and revised the manuscript. All authors gave final approval for publication and agreed to be accountable for all the aspects of the work.
No animal was harmed. Strict procedures were followed [48] and the sample size was the minimum required. Procedures and laboratory conditions were inspected by the veterinary services of the DHSSPS Northern Ireland, which deemed that no licensing was required. Fish were kept for separate experiments.
School of Biological Sciences, Queen's University Belfast, Medical Biology Centre, 97 Lisburn Road, Belfast, BT9 7BL, UK
Kyriacos Kareklas
, Gareth Arnott
, Robert W. Elwood
& Richard A. Holland
School of Biological Sciences, Bangor University, Deiniol Road, Bangor, Gwynedd, LL57 2UW, UK
Richard A. Holland
Correspondence to Kyriacos Kareklas.
Datasets and calculated statistics. The file includes: 1) datasets of recordings from preliminary and experimental (novel object and novel environment) tests, and 2) tables with calculated boldness scores and intra-individual variance statistics. (XLSX 15 kb)
Kareklas, K., Arnott, G., Elwood, R.W. et al. Plasticity varies with boldness in a weakly-electric fish. Front Zool 13, 22 (2016) doi:10.1186/s12983-016-0154-0
Received: 26 February 2016
Behavioural plasticity
Individual variation
Weakly-electric fish
Population genomics reveals that an anthropophilic population of Aedes aegypti mosquitoes in West Africa recently gave rise to American and Asian populations of this major disease vector
Jacob E. Crawford†1, 2,
Joel M. Alves†3, 4,
William J. Palmer†3,
Jonathan P. Day3,
Massamba Sylla5,
Ranjan Ramasamy6,
Sinnathamby N. Surendran6, 7,
William C. Black IV5,
Arnab Pain8 and
Francis M. Jiggins3
© Jiggins et al. 2017
The mosquito Aedes aegypti is the main vector of dengue, Zika, chikungunya and yellow fever viruses. This major disease vector is thought to have arisen when the African subspecies Ae. aegypti formosus evolved from being zoophilic and living in forest habitats into a form that specialises on humans and resides near human population centres. The resulting domestic subspecies, Ae. aegypti aegypti, is found throughout the tropics and largely blood-feeds on humans.
To understand this transition, we have sequenced the exomes of mosquitoes collected from five populations from around the world. We found that Ae. aegypti specimens from an urban population in Senegal in West Africa were more closely related to populations in Mexico and Sri Lanka than they were to a nearby forest population. We estimate that the populations in Senegal and Mexico split just a few hundred years ago, and we found no evidence of Ae. aegypti aegypti mosquitoes migrating back to Africa from elsewhere in the tropics. The out-of-Africa migration was accompanied by a dramatic reduction in effective population size, resulting in a loss of genetic diversity and rare genetic variants.
We conclude that a domestic population of Ae. aegypti in Senegal and domestic populations on other continents are more closely related to each other than to other African populations. This suggests that an ancestral population of Ae. aegypti evolved to become a human specialist in Africa, giving rise to the subspecies Ae. aegypti aegypti. The descendants of this population are still found in West Africa today, and the rest of the world was colonised when mosquitoes from this population migrated out of Africa. This is the first report of an African population of Ae. aegypti aegypti mosquitoes that is closely related to Asian and American populations. As the two subspecies differ in their ability to vector disease, their existence side by side in West Africa may have important implications for disease transmission.
Anthropophilic
Arboviral diseases
Mosquito evolution
Arthropod-borne viruses (arboviruses) are a major threat to human health in many tropical and subtropical countries. The most important vector of human arboviruses is the mosquito Aedes aegypti, which transmits dengue, chikungunya, yellow fever and Zika viruses. A widespread epidemic of the Zika virus has recently occurred across South America, Central America and the Caribbean and has been linked to fetal brain abnormalities [1]. Over the last decade, chikungunya virus, which is transmitted by both Aedes albopictus and Ae. aegypti, has emerged as a major cause for concern, causing epidemics in Asia and many Indian Ocean islands as well as in southern Europe and the Americas [2]. Dengue virus, which is responsible for the most common human arboviral disease infecting millions of people every year, has greatly increased its range in tropical and subtropical regions [3, 4].
Ae. aegypti occurs throughout the tropics and subtropics, but populations vary in their ability to transmit disease (vector capacity) [5–11]. Outside of Africa, Ae. aegypti has a strong genetic preference for entering houses to blood-feed on humans and an ability to survive and oviposit in relatively clean water in man-made containers in the human environment [5, 6]. However, across sub-Saharan Africa there is considerable variation among populations in their ecology, behaviour and appearance [10, 12–15]. Some populations are less strongly human associated, being found in forests, ovipositing in tree holes and feeding on other mammals [5–8]. Elsewhere, populations have become 'domesticated,' developing in water in and around homes and feeding on humans. Aside from a few locations on the coast of Kenya that appear to have been colonised by non-African populations, African populations tend to cluster together genetically regardless of whether they are forest or domestic forms [12]. This was interpreted as suggesting that these human-associated populations in Africa have arisen independently from the domestic populations found elsewhere in the tropics [12]. However, as we discuss later, such interpretations of genetic data can be misleading.
Ae. aegypti has long been hypothesised to have originated in Africa and subsequently spread through the tropics, probably travelling in ships along trading routes [7, 8]. This out-of-Africa model has been supported by genetic data, as African populations have higher genetic diversity than those from elsewhere in the tropics [16]. Furthermore, rooted trees constructed from the sequences of a small number of nuclear genes have consistently found that the genetic diversity in Asian and New World populations is a subset of that found in Africa [16]. The exact origin of this migration out of Africa remains uncertain. Moreover, it is not known whether the species evolved to specialise on humans in Africa or after it had migrated out of Africa [17].
The species Ae. aegypti has been split into two subspecies [7]. Outside Africa, nearly all populations belong to the subspecies Ae. aegypti aegypti, which is light in colour and strongly anthropophilic. In Africa the subspecies Ae. aegypti formosus is darker in colour and lives in forested habitats. The two subspecies were originally defined based on these differences in colouration, with Ae. aegypti aegypti having pale scales on the first abdominal tergite [7]. However, West African populations that have these pale scales appear to be genetically more similar to Ae. aegypti formosus populations than Ae. aegypti aegypti from elsewhere in the tropics [10, 14, 15]. This has led some authors to call all African populations Ae. aegypti formosus, while others have continued to use the original morphological definition.
Population genetics studies of Ae. aegypti have a long history, but until recently they were limited by the small numbers of genetic markers available. Whole genome sequencing is prohibitively expensive due to the large genome size [18], but three approaches have made genome-scale analyses possible. Restriction site-associated DNA (RAD) sequencing has been used to score large numbers of single nucleotide polymorphisms (SNPs) [16, 19, 20], although the repetitive genome coupled with PCR duplicates due to the low DNA yield of mosquitoes can complicate this approach [20]. An Ae. aegypti SNP chip can genotype more than 25,000 SNPs [21], although the analysis of these data can be complicated because a biased set of SNPs is genotyped [22]. Finally, we recently developed exome capture probes, which allow the protein-coding regions of the genome to be selectively resequenced [23]. This makes sequencing affordable, minimises ascertainment bias and avoids repetitive regions where it is difficult to map short sequence reads.
Here we have used exome sequencing to investigate the origins of the domestic Ae. aegypti aegypti populations that are the main vectors of human viruses. To do this, we sampled mosquitoes from two nearby populations in Senegal, West Africa, one of which was from a forested region and has the classical phenotype of Ae. aegypti formosus, and the other of which was from an urban location and resembled Ae. aegypti aegypti. These samples were then compared to populations from East Africa, Mexico and Sri Lanka. We found that the domestic population in West Africa is most closely related to domestic populations in Mexico and Sri Lanka. We conclude that the species likely became domesticated in Africa, and the migration out of Africa came from populations related to extant domestic African populations. Furthermore, the out-of-Africa migration and probably the original domestication event in Africa were associated with population bottlenecks.
Mosquito samples
We investigated Ae. aegypti from five populations (the sample details are given in Additional file 1). Wherever possible, mosquitoes were sampled from multiple nearby sites. Mexican mosquitoes were all collected from independent sites in Yucatán state and supplied as extracted DNA by William Black. This group of mosquitoes was a mixture of males and females, with the sex of individuals unknown. The collection sites were urban and peri-urban. Female Sri Lankan Ae. aegypti were supplied by Ranjan Ramasamy and Sinnathamby Surendran. Nine individuals from the Jaffna district [24] and one from the Batticaloa district [24] had been collected from separate oviposition traps in 2012 and reared to adulthood in the laboratory. These specimens were from urban and peri-urban areas. Female Ugandan Ae. aegypti were supplied by Jeff Powell. They had been collected in Lunyo, Entebbe in 2012 using oviposition traps and reared in the laboratory.
The samples from two populations in Senegal were supplied as extracted DNA by William Black [10]. They fell into two phenotypically and geographically distinct groups. The first of these we called 'Senegal Forest'; this group is from the rural forested locations near Kedougou [10]. Here the mosquitoes lacked pale scales on the first abdominal tergite, which is the classical phenotype associated with Ae. aegypti formosus [10, 25]. This group of mosquitoes was a mixture of males and females, with the sex of individuals unknown. The second group of mosquitoes, which we call 'Senegal Urban', came from the urban location of Kaolack and had the pale scales on the first abdominal tergite that are classically associated with Ae. aegypti aegypti [10, 25]. This sample consisted of 2 males and 10 females. The two locations are approximately 420 km apart.
Aedes bromeliae eggs were collected in July 2010 from Kilifi in coastal Kenya using oviposition traps. Eggs were hatched in the laboratory in the UK and reared to maturity. A single female was then used for sequencing.
Library preparation and sequencing
DNA was extracted from Ae. aegypti mosquitoes using the DNeasy Blood and Tissue Kit (Qiagen). Illumina sequencing libraries were constructed from individual mosquitoes using the Illumina TruSeq Library Prep Kit. The concentration of each library was estimated by quantitative PCR, and four equimolar pools of the libraries from Mexico, Senegal, Uganda and Sri Lanka were made. Exome capture was then performed to enrich for coding sequences using custom SeqCap EZ Developer probes (Nimblegen) [23]. Overlapping probes covering the protein-coding sequence, not including untranslated regions (UTRs), in the AaegL1.3 gene annotations [18] were produced by Nimblegen based on coding sequence coordinates (covering 22.2 Mb) specified by us. In total, 26.7 Mb representing 2% of the genome was targeted by capture probes, which includes regions flanking the coding sequence that were added during the proprietary design process. Exome capture coordinates are available in Additional file 2 (from [23]). Each of the four exome-captured pools of libraries was then separately sequenced in one lane each of 100-bp paired-end HiSeq2000 runs by the Beijing Genomics Institute (China).
DNA was then extracted from a single Ae. bromeliae individual using the QIAamp DNA Mini Kit. A whole-genome sequencing library was constructed using the Illumina Nextera DNA Library Prep Kit. This library was sequenced in one lane of MiSeq (2 × 250 bp paired-end reads; Oxford Genomics) and two lanes of HiSeq2000 (2 × 100 bp paired-end reads; King Abdullah University of Science and Technology, KAUST, sequencing core).
Sequence alignment and variant calling
Aedes aegypti reads were first demultiplexed using fastq-grep [26] with exact (hard) matching of Illumina barcodes, so reads with any errors in the barcode sequence were discarded. The following steps were then performed separately on the reads from each population and on Aedes bromeliae.
Paired reads were quality trimmed from the 3′ end, cutting when average quality scores in sliding windows of 5 bp dropped below 30, and trimmed when the quality score at the end of the read dropped below 30 using Trimmomatic version 0.27 [27]. As the insert size from some individuals was shorter than the length of two sequencing reads, we initially observed some sequence overlap of paired-end reads. This is undesirable, as when mapped they violate the later sampling assumption that a given SNP observation results from a single molecule. As such, overlapping reads were merged into single pseudoreads with FLASH version 1.2.11 [28] and then treated as single-end sequencing reads. Both paired- and single-end pseudoreads were then aligned to the Aedes aegypti reference genome AaegL3.3 using BWA-MEM version 0.7.10 [29]. Unmapped reads as well as those mapping below a mapQ of 30 were then discarded using SAMtools view [30]. SAMtools was then used to merge and sort the paired- and single-end pseudoreads read alignments into a single BAM file, which was used for all subsequent analyses. We observed a number of Ae. bromeliae reads mapping with coordinates outside the normal range, so for this set we used a custom script to remove read pairs with mapping start positions less than 100 bp or greater than 400 bp. Reads were then realigned around indels using GATK version 3.4-0 [31], and both optical and PCR duplicates were removed using Picard [32] version 1.90. An uncompressed BCF was generated using SAMtools mpileup version 0.1.19 with Indel calling disabled; skipping bases with a baseQ/BAQ less than 30; and mapQ adjustment (-C) set to 30. This was finally converted to a VCF using bcftools. Low-quality SNPs were removed by using SNPcleaner version 2.2.4 [33] to remove sites that had a total depth across all individuals of >1500 or had less than 10 individuals with at least 10 reads. Additional sites were filtered based on default settings within the SNPcleaner script. VCF files were queried using SNPcleaner for each population separately in order to obtain a set of robust sites for analysis. This list was used as a -sites file input for ANGSD [34], such that subsequent analysis within ANGSD was restricted to these sites. For some analyses that require comparison among populations, we found the intersect between the lists of high-quality sites for each population and used this common set for analysis. Minimum map quality and base quality thresholds of 30 and 20 were used. For some analyses we converted genotype likelihoods into hard-called genotypes using the doGeno function in ANGSD with a cutoff of 0.95 for posterior probabilities on the genotype calls and a minimum read depth of 8. This read processing and genotype calling process was repeated for the sequence reads from Ae. bromeliae, except that the Ae. aegypti sites list was used since SNPcleaner is not intended for single diploid samples.
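To make the genotype hard-calling step concrete, the short sketch below applies the posterior cutoff of 0.95 and the minimum depth of 8 described above. It is an illustrative re-implementation in Python, not the ANGSD doGeno code itself, and the (n_sites, 3) posterior layout and the -1 missing code are assumptions made for the example.

```python
import numpy as np

def hard_call_genotypes(posteriors, depths, min_posterior=0.95, min_depth=8):
    """Convert per-site genotype posteriors into hard calls.

    posteriors : (n_sites, 3) array of posterior probabilities for the
                 genotypes {0: hom-ref, 1: het, 2: hom-alt}.
    depths     : (n_sites,) array of read depths.
    Sites failing the 0.95 posterior or depth-8 filter are set to -1
    (treated as missing), mirroring the thresholds described above.
    """
    posteriors = np.asarray(posteriors, dtype=float)
    depths = np.asarray(depths, dtype=int)
    best = posteriors.argmax(axis=1)                     # most probable genotype
    confident = posteriors.max(axis=1) >= min_posterior  # posterior cutoff
    covered = depths >= min_depth                        # depth cutoff
    return np.where(confident & covered, best, -1)

# Example: two confident sites and one site that fails the depth filter.
post = [[0.98, 0.01, 0.01], [0.02, 0.96, 0.02], [0.97, 0.02, 0.01]]
depth = [15, 22, 4]
print(hard_call_genotypes(post, depth))  # -> [ 0  1 -1]
```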
Population genetics analysis
We estimated the nucleotide diversity π using ANGSD, which calculates π based on estimates of per-site allele frequencies across each population sample (i.e. without the need to call genotypes), directly accounting for sample size and read depth. We estimated 95% bootstrap confidence intervals (CIs) by resampling scaffolds with replacement 500 times and recalculating the statistic. As nucleotide diversity is reduced in coding sequence due to purifying selection, we only used sites >500 bp from exons in this analysis (≥399,259 in each population).
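The following sketch shows the core of such a π calculation from per-site allele frequencies, together with the scaffold-resampling bootstrap used for the confidence intervals. It is a simplification made for illustration: ANGSD estimates allele frequencies from genotype likelihoods and read depths, whereas this version assumes allele counts are already known and weights every scaffold equally in the bootstrap.

```python
import numpy as np

def nucleotide_diversity(alt_counts, n_chromosomes):
    """Average pairwise diversity over the supplied sites.

    alt_counts    : per-site counts of the alternate allele.
    n_chromosomes : number of sampled chromosomes (2 x individuals).
    Uses the unbiased per-site estimator 2*p*(1-p)*n/(n-1); to obtain a
    genome-wide pi, monomorphic sites must be included in alt_counts.
    """
    counts = np.asarray(list(alt_counts), dtype=float)
    p = counts / n_chromosomes
    per_site = 2.0 * p * (1.0 - p) * n_chromosomes / (n_chromosomes - 1)
    return per_site.sum() / len(counts)

def bootstrap_ci(pi_by_scaffold, n_boot=500, seed=1):
    """95% CI by resampling scaffolds with replacement, as in the text.
    Each scaffold is weighted equally here, which is a simplification."""
    rng = np.random.default_rng(seed)
    stats = [np.mean(rng.choice(pi_by_scaffold, size=len(pi_by_scaffold),
                                replace=True)) for _ in range(n_boot)]
    return np.percentile(stats, [2.5, 97.5])
```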
To construct a neighbour-joining tree of our samples, we first estimated the pairwise genetic distance (D xy ) between all pairs of samples based on genotype calls. D xy was calculated from the called genotypes as (h + 2H)/(2L), where h is the number of sites where one or both individuals carry heterozygous genotypes, H is the number of sites where the two individuals are homozygous for different alleles and L is the number of sites where both individuals have called genotypes.
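A minimal sketch of this D xy calculation, assuming genotypes have been hard-called and coded as alternate-allele counts (0, 1, 2) with -1 for missing data (the coding is an assumption for the example, not necessarily the format used by the pipeline):

```python
import numpy as np

def pairwise_dxy(geno_i, geno_j, missing=-1):
    """Pairwise genetic distance D_xy = (h + 2H) / (2L) for two diploid
    individuals, following the definition in the text.

    h : sites where one or both individuals are heterozygous
    H : sites where the two individuals are homozygous for different alleles
    L : sites where both individuals have called genotypes
    """
    gi, gj = np.asarray(geno_i), np.asarray(geno_j)
    called = (gi != missing) & (gj != missing)
    gi, gj = gi[called], gj[called]
    L = called.sum()
    h = ((gi == 1) | (gj == 1)).sum()               # at least one heterozygote
    H = ((gi != 1) & (gj != 1) & (gi != gj)).sum()  # opposite homozygotes
    return (h + 2 * H) / (2 * L)

# A matrix of such distances can then be passed to a neighbour-joining
# routine (e.g. in scikit-bio or Biopython) to build a tree like Fig. 4a.
```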
To investigate population structure and the ancestry of individual mosquitoes, we performed an admixture analysis using NGSadmix, which makes inferences based on genotype likelihoods [35]. We also analysed data from the three chromosomes separately using the chromosome assignments of Juneja et al. [20]. As an alternative approach to investigate genetic structure, we performed a principal component analysis (PCA). The PCA was based on a covariance matrix among individuals that was computed while accounting for genotype uncertainty using the function ngsCovar in ngsTools [33].
We calculated F ST [36] between populations from allele frequencies estimated for each population directly from read data using ANGSD. This analysis used data from 17,351,731 coding and non-coding sites with no minimum minor allele frequency.
We investigated the historical relationships between our populations by reconstructing a population maximum likelihood tree based on allele frequencies using the program TreeMix [37]. This analysis used all high-quality coding and non-coding sites in our dataset, and Ae. bromeliae was used as an outgroup. We chose this species, as the more closely related outgroup Ae. mascarensis frequently shares polymorphisms with Ae. aegypti [16]. To account for the non-independence of sites due to linkage disequilibrium, we used a block size (k) of 100 SNPs. To evaluate the confidence in the inferred tree topology, 1000 bootstrap replicates were conducted by resampling blocks of 100 SNPs. To test whether there had been migrations between the populations after they split, we used the three- and four-population tests of Reich et al. [38], also implemented in TreeMix.
We estimated one- and two-dimensional site frequency spectra (SFS) using the doSaf function within ANGSD to estimate per-site allele frequencies combined with the realSFS program [39] to optimize the genome-wide SFS. We minimised the effect of natural selection on the SFS by including only third codon position sites as well as non-coding sites more than 100 bp from the nearest exon, and as before, only sites passing all filters were included for analysis. Approximately 6.44 Mb was included in this dataset. To facilitate comparison among populations, we down-sampled the larger population samples and chose 10 randomly selected individuals from each population. Two-dimensional (2D) spectra were plotted using dadi [40].
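For illustration, the sketch below builds 1D and 2D spectra directly from per-site derived allele counts, which is a simplification of the ANGSD/realSFS approach that optimises the spectrum from genotype likelihoods. With 10 diploid individuals per population there are 20 chromosomes, so the unfolded spectrum has 21 bins, matching the 0-20 scale in Fig. 3a.

```python
import numpy as np

def unfolded_sfs(derived_counts, n_chromosomes):
    """1D unfolded SFS: entry k is the number of sites where the derived
    allele (polarised with an outgroup such as Ae. bromeliae) occurs on
    exactly k of the n sampled chromosomes."""
    counts = np.asarray(list(derived_counts), dtype=int)
    return np.bincount(counts, minlength=n_chromosomes + 1)

def joint_sfs(counts_pop1, counts_pop2, n1, n2):
    """2D SFS: entry (i, j) is the number of sites with derived count i in
    population 1 and derived count j in population 2 (same sites in both)."""
    sfs2d = np.zeros((n1 + 1, n2 + 1), dtype=int)
    for i, j in zip(counts_pop1, counts_pop2):
        sfs2d[i, j] += 1
    return sfs2d

# With 10 diploid individuals per population, n = 20 chromosomes and the
# unfolded spectrum has 21 bins (0..20).
print(unfolded_sfs([1, 1, 2, 5, 20], n_chromosomes=20))
```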
We fit two classes of demographic models to the data from Senegal Forest, Senegal Urban and Mexico using fastsimcoal2 version 2.5.2 [41] to distinguish between the hypotheses that Senegal Urban is evolutionarily intermediate because it (1) is admixed with domesticated, non-African ancestry, or (2) represents the domesticated form within Africa that is the genetic ancestor of non-African domesticated populations. We first fit simple three-population models with no size changes for each of the two classes, and then fit a second version of the model including size changes in each of the three populations. Schematics of the two models and their parameters can be found in Additional file 3.
We note that for the admixture models, the order of divergence times for Mexico and Senegal Urban was not specified such that either could diverge before the other from Senegal Forest. In addition, we fixed the current effective size of Senegal Forest to 1,000,000 in order to anchor the models and reduce the number of free variables. To obtain best-fit parameter values, we first conducted a round of 500 optimizations for each model using wide parameter ranges and the following fastsimcoal2 parameters: -n 1000 -N 100000 -c0 -d -M 0.001 -l 10 -L 40. Simulations were structured to model exomes by simulating 17,000 independent regions using the mutation rate estimated for Drosophila melanogaster, 3.5 × 10⁻⁹ [42], since this parameter is not available for mosquitoes, and an equivalent within-region recombination rate. We then conducted a second round of 500 optimizations using a narrower set of possible starting parameter values tuned on the first set of optimizations in order to improve model fitting. We used the parameter values from the replicate with the highest likelihood value from the second set of optimizations as the best-fit model, and used this model for a final likelihood calculation by conducting a set of 10⁶ simulations to obtain a more accurate likelihood value. Confidence values were estimated for model parameters using block-bootstrapping, where 100 bootstrapped datasets were generated by arbitrarily assembling scaffolds into a contiguous pseudochromosome, dividing this 'chromosome' into 1000 identically sized blocks and resampling with replacement. Best-fit models were obtained for each bootstrapped dataset using a set of 50 optimizations with broad starting parameter value ranges. The same bootstrapping approach was performed to obtain 95% CIs for 1D site frequency spectra as well.
We scanned the exome for regions with exceptional genetic differentiation consistent with the action of recent positive selection using a normalised version of the population branch statistic (PBSn1) [43]:
$$ PBS_{n1} = \frac{PBS_1}{1 + PBS_1 + PBS_2 + PBS_3} $$
where PBS 1 indicates PBS calculated with the domesticated population as the focal population, PBS 2 indicates PBS calculated with the Ugandan population as the focal population and PBS 3 indicates PBS calculated with Senegal Forest as the focal population. For this analysis, we obtained admixture-corrected allele frequencies using NGSadmix analysis but with no minimum allele frequency filter. We then used allele frequencies to calculate F ST between the focal population (Sri Lanka, Senegal Urban or Mexico) and both Senegal Forest and Uganda. These values were then used to calculate PBSn1 for non-overlapping blocks of 5 SNPs. We annotated top windows by identifying the gene (Ae. aegypti, AaegL3.3) with the exon on or nearest the most differentiated SNP within the window and pulling external metadata for these genes from VectorBase [44].
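The window-level calculation can be illustrated as follows. The sketch assumes the usual population branch statistic construction, PBS = (T12 + T13 - T23)/2 with T = -log(1 - F ST), which is how the cited statistic is normally defined; because the Methods do not spell this step out, treat the code as an illustration of the idea rather than the exact pipeline used.

```python
import math

def branch_length(fst):
    """T = -log(1 - Fst); Fst is clipped just below 1 to avoid log(0)."""
    return -math.log(max(1.0 - fst, 1e-12))

def pbs(fst_12, fst_13, fst_23):
    """Population branch statistic for population 1 from the three
    pairwise Fst values (1 vs 2, 1 vs 3, 2 vs 3)."""
    t12, t13, t23 = (branch_length(f) for f in (fst_12, fst_13, fst_23))
    return (t12 + t13 - t23) / 2.0

def pbs_n1(fst_1_2, fst_1_3, fst_2_3):
    """Normalised statistic as defined in the text:
    PBSn1 = PBS1 / (1 + PBS1 + PBS2 + PBS3), with population 1 as the
    focal (domesticated) population."""
    pbs1 = pbs(fst_1_2, fst_1_3, fst_2_3)  # focal population branch
    pbs2 = pbs(fst_1_2, fst_2_3, fst_1_3)  # reference population 2 branch
    pbs3 = pbs(fst_1_3, fst_2_3, fst_1_2)  # reference population 3 branch
    return pbs1 / (1.0 + pbs1 + pbs2 + pbs3)

# A window where the focal population is strongly differentiated from both
# reference populations while they stay similar to each other (~0.44).
print(round(pbs_n1(0.6, 0.55, 0.05), 2))
```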
For each population pairwise comparison we calculated the Weir and Cockerham F ST at each variant position (using the hard-called genotypes generated from ANGSD) with VCFtools version 0.1.12 [45]. All positions with less than 10 individuals in each population comparison were excluded. The annotation for each candidate SNP was determined using SnpEff, version 4.1 [46].
Final plots were generated in R [47] using the built-in functions and the R package ggplot2 [48].
High-coverage population exome sequences and an Ae. bromeliae genome sequence
The Ae. aegypti genome is large (1.4 GB), repetitive and poorly assembled, which makes it expensive and challenging to resequence [18, 23]. To overcome this, we used probes to capture the predicted protein-coding sequence [23], which both reduces the cost of sequencing and avoids the repetitive and most poorly assembled regions of the genome. In total, we sequenced 15 mosquitoes from Uganda, 22 from Senegal, 10 from Sri Lanka and 24 from Mexico. Each mosquito was individually barcoded in the sequencing library. The exome capture was efficient, with 89% of mapped reads on target, resulting in >400X greater coverage of the exome compared to the genome average. The mean on-target coverage of the exomes was 29X, with the mean coverage of individual mosquitoes ranging from 15X to 48X. In total we genotyped 17,351,731 sites, 1,321,924 of which were variable when genotypes were called. We called 436,559 polymorphisms in Mexico, 782,744 in Senegal Forest, 464,665 in Senegal Urban, 286,307 in Sri Lanka and 645,547 in Uganda.
For many types of analyses it is helpful to have the genome sequence of a relatively closely related species as an outgroup. For this reason we sequenced the whole genome of Ae. bromeliae and mapped the reads to the Ae. aegypti reference genome. In total we called genotypes at 104,017,808 sites. Of the 17,351,731 sequenced sites in the Ae. aegypti dataset, 13,806,549 (80%) had called genotypes in Ae. bromeliae. The mean coverage of the exome was 6.54X; coverage of intergenic regions was substantially lower (presumably due to low rates of mapping).
Reduced genetic diversity and fewer rare variants support the out-of-Africa migration of Ae. aegypti
Ae. aegypti is believed to have originated in Africa and subsequently colonised Asia and the Americas [7, 8, 12]. We found that the genetic diversity (π) of our three African populations was substantially higher than those from Mexico and Sri Lanka, which is consistent with a population bottleneck during the out-of-Africa migration (Fig. 1a). Interestingly, our domestic population from West Africa (Senegal Urban) has a nucleotide diversity that is intermediate between the other African populations and those from outside Africa (Fig. 1a). This indicates that historically the effective population size of this population has been reduced below that of the nearby Senegal Forest population.
Nucleotide diversity (a) and site frequency spectrum (b) of five populations of Ae. aegypti. a Nucleotide diversity (π) was estimated for non-coding sites >500 bp from exons. b The site frequency spectrum was estimated for 10 individuals from each population using third codon positions and non-coding sites >100 bp from exons. Ae. bromeliae was used to polarize sites. The grey bars are the expected frequencies assuming variant sites are neutral and the effective population size is constant. In both panels, error bars are 95% confidence intervals from block-bootstrapping
Population bottlenecks and other changes in the effective population size not only alter the nucleotide diversity but also the allele frequency spectrum [49]. There has been a striking reduction in the number of rare alleles in the Mexican and Sri Lankan populations relative to both the neutral, equilibrium expectation and the populations in Uganda and Senegal Forest (Fig. 1b). This loss of rare variants is expected if these populations have experienced a population bottleneck [50]. Unexpectedly, the domestic Senegal Urban population has a similar reduction in rare variants, suggesting that it too may have experienced a population bottleneck in its history (Fig. 1b). Interestingly, the Senegal Forest population has an excess of rare variants compared to the neutral expectation. This may indicate a recent increase in population size in this population, but it could also reflect the fact that a large proportion of our data is protein-coding sequences, and it is common to find that purifying selection keeps slightly deleterious amino acid polymorphisms at a low frequency [51].
Anthropophilic Ae. aegypti from Senegal are genetically distinct from other African populations and populations outside of Africa
There is clear genetic structure among the five populations we studied, with principal component analysis (PCA) clustering samples from the same location together. This analysis revealed three major groups in our data: Mexico + Sri Lanka, Uganda + Senegal Forest and Senegal Urban (Fig. 2a). Therefore, the Senegal Forest population is grouping with the population in Uganda rather than with the nearby Senegal Urban population.
Genetic structure in Ae. aegypti populations. a Principal component analysis of Ae. aegypti exome sequences from five populations. The PCA was calculated from a covariance matrix calculated from all variants in the genome while accounting for genotype uncertainty. The percentage of the variance explained by each component is shown on the axis. b Ancestry proportions for Ae. aegypti individuals from five populations. Ancestry is conditional on the number of genetic clusters (K = 2–5) and is inferred from all sites in our dataset
This division between the Senegal Urban population and other populations in Africa is also apparent when an admixture analysis is used to infer the ancestry of the individuals from the five populations [35]. When we assumed that there were three ancestral populations (K = 3, Fig. 2b), the populations again grouped into Mexico + Sri Lanka, Uganda + Senegal Forest and Senegal Urban. Allowing higher levels of K recovers the division between Mexico and Sri Lanka and the genetic structure within the Ugandan population (Fig. 2b).
These patterns of population structure were broadly supported when we compared allele frequencies between populations using 2D site frequency spectra (SFS). Strikingly, the allele frequencies were markedly more similar when Senegal Forest was compared to Uganda than when it was compared to the relatively nearby Senegal Urban population (Fig. 3a). This is reflected in F ST , which was greater between Senegal Urban and Senegal Forest (Fig. 3b; F ST = 0.08) than between Uganda and Senegal Forest (F ST = 0.03). Therefore, genetic differentiation between our African populations does not reflect geographic distance, but the Senegal Urban population is distinct from the other African populations. This is consistent with this population morphologically resembling the Ae. aegypti aegypti subspecies.
Differences in allele frequencies between populations. a Two-dimensional site frequency spectra. Colours represent the number of sites at a given frequency within each population (0-20) with frequency increasing from left to right and bottom to top in each spectrum. Allele frequencies were estimated using 10 randomly sampled individuals from each population. b Pairwise F ST
The frequency of alleles was strongly correlated in Sri Lanka versus Mexico (Fig. 3a), and F ST between these populations was low (Fig. 3b). This supports a single out-of-Africa migration giving rise to these two populations. The non-African populations are clearly distinct from the African ones (Fig. 3; F ST > 0.19 and different 2D SFS). Strikingly, the 2D SFS suggest that the Senegal Urban population is intermediate between the other African and the non-African populations (Fig. 3a). When Sri Lanka and Mexico are compared to Senegal Urban, there are more intermediate frequency polymorphisms in common than when these populations are compared to the other two African populations (Fig. 3a).
In Senegalese populations of Ae. aegypti there is evidence of polymorphic chromosomal inversions [52]. These are expected to suppress recombination and may lead to elevated differentiation between populations or species in these regions of the genome. This might be especially important around the sex-determining locus (sex in Ae. aegypti is determined by a single locus on an autosome) [52]. To examine this, we performed the principal component and admixture analyses on the three chromosomes separately and plotted F ST in a sliding window across the genome. Although there appears to be some variation across chromosomes, we found no evidence that the patterns we see are driven by a single region of the genome or a single chromosome (Additional file 4).
Domestic populations of Ae. aegypti in Senegal and outside of Africa share a different common ancestor from other African populations
Understanding the historical relationships between populations based on approaches like PCA, F statistics or admixture analysis is not straightforward [37, 53]. For example, the main groups distinguished by PCA are African versus non-African populations. PCA reflects the average coalescent times between pairs of samples [54], so this clustering may result from a bottleneck that occurred during the out-of-Africa migration rather than all the African populations sharing a different common ancestor from the non-African populations.
To reconstruct historical relationships between the populations, we made rooted trees using Ae. bromeliae as an outgroup. The first approach we took was to draw a neighbour-joining tree based on the pairwise genetic distance (D xy ) between our samples. With the exception of a single mosquito, the five populations formed five monophyletic groups (Fig. 4a). The major split within the tree separated Uganda + Senegal Forest from Sri Lanka + Mexico + Senegal Urban. Therefore, the pan-tropical Ae. aegypti aegypti populations shared a common ancestor with the population in Senegal that shares a similar ecology and has the classical phenotype associated with the Ae. aegypti aegypti subspecies.
Historical relationships between Ae. aegypti populations. a Neighbour-joining tree of Ae. aegypti exome sequences from five populations. The tree is rooted with the sequence of Ae. bromeliae. Branches leading to samples from different populations are colour-coded. The scale is genetic distance (D xy ). b Relationships between populations. The branch lengths are proportional to the amount of genetic drift that has occurred. The scale bar shows ten times the average standard error of the entries in the sample covariance matrix. The numbers on branches are percent bootstrap support calculated by resampling blocks of 100 SNPs. The population tree was reconstructed using allele frequency data using the TreeMix program [37]. Both panels use all sites in our dataset
To investigate these relationships further, we used allele frequency data to reconstruct the relationships among our populations (Fig. 4b). This again supported the hypothesis that among the populations sampled there has been a single 'domestication' of Ae. aegypti that presumably occurred in Africa, and this ancestral population has given rise to human-associated Ae. aegypti populations in Senegal, Asia and the Americas. This approach also estimates the amount of genetic drift that has occurred in these populations, which is a measure of their effective population size (branch lengths in Fig. 4b). From this it is clear that the effective population size of the Senegal Urban population has been reduced relative to Ae. aegypti formosus populations found elsewhere in Africa. There was a further increase in the rate of drift in the non-African populations, likely reflecting a bottleneck during the out-of-Africa migration.
Populations need not be related by a simple bifurcating tree, since they may also subsequently mix. An alternative hypothesis to explain the similarity of the Senegal Urban population to populations in Mexico and Sri Lanka is that Ae. aegypti aegypti from outside Africa have migrated back to Africa and mixed with the local Ae. aegypti formosus population [12]. This hypothesis has some support from the admixture analysis under the model that separates African and non-African populations (K = 2) with the Senegal Urban individuals all showing evidence of non-African ancestry (Fig. 2b; note this pattern is not seen at K > 2). We further tested whether the Senegal Urban population was a mixture of the nearby forest population and non-African populations using the three-population test of Reich et al. [38]. Regardless of whether we tested for admixture between Mexico or Sri Lanka and Senegal Forest, the f3 statistic was positive, indicating that there was no evidence of admixture (source populations Senegal Forest and Mexico: f3 = 0.008; source populations Senegal Forest and Sri Lanka: f3 = 0.007). Furthermore, when we added migration events between the populations in Fig. 4b in the TreeMix model [37], we never detected any migration from outside Africa into Senegal Urban.
Although the three-population test provided no evidence that the Senegal Urban population is a mixture of African and non-African populations, we do find evidence for admixture among our five populations. We used the four-population test [38] to examine whether the allele frequencies were compatible with groups of four populations being related by a simple unrooted bifurcating tree without any mixing. We were able to reject this hypothesis in three cases ([[Mexico, Senegal Urban], [Senegal Forest, Uganda]]: z = –13.9, p << 0.0001; [[Mexico, Sri Lanka], [Senegal Forest, Senegal Urban]]: z = –29.6, p << 0.0001; [[Mexico, Sri Lanka], [Senegal Urban, Uganda]]: z = –27.2, p << 0.0001). When we attempted to infer specific migrations between these populations using either f3 statistics or TreeMix, we found that the results were inconsistent. Importantly, however, allowing migration does not alter the topology of the tree in Fig. 4b. Therefore, we can conclude that there has been some mixing between populations (possibly involving populations that we did not sample), but we are unable to infer which populations have mixed with each other.
Domestic populations in Mexico and Senegal diverged very recently and experienced strong reductions in population size
We next fitted explicit demographic models to our genetic data, both to provide an additional test of how our populations are related to each other, and to understand when population splits occurred and how population sizes changed [41]. We fitted two demographic models to pairwise 2D SFS from the Senegal Forest, Senegal Urban and Mexico populations (see Methods and Additional file 3). In the admixture-back-to-Africa model, Senegal Urban is admixed with non-African ancestry, while in the serial founder model Senegal Urban shares a common ancestor with non-African populations (Additional file 3). After extensive optimization of each model with and without population size changes, we found that a serial founder model with population size changes fit the data substantially better than any other model tested, with both a higher log likelihood (despite fewer parameters) and a considerably lower Akaike information criterion (AIC) value than the other models (Fig. 5a, Additional file 3). Therefore, modelling of demography supports the population relationships inferred above with an absence of gene flow back to Senegal Urban.
Demographic modelling for African and non-African populations does not support admixture-back-to-Africa model. a Statistical support for four demographic models. Log likelihood indicates likelihood of data given each model, with higher values corresponding to better fit. Lower Akaike information criterion (AIC) values indicate better support for model (AIC = 2d – 2(Log Likelihood), where d is the number of model parameters estimated). b Admixture analysis of data simulated under best-fit demographic model generates evidence for mixed ancestry in Senegal Urban similar to Fig. 2, despite including no admixture in model. Five thousand 500-bp exons were simulated using fastsimcoal2 and analysed using admixture [67]. c Schematic representing the maximum likelihood estimated model. Parameters are effective population sizes, and times when populations split or changed in size. d Confidence intervals (CIs) for model parameters
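The model comparison in Fig. 5a reduces to the AIC formula quoted in the legend. The snippet below applies it to hypothetical log-likelihood values and parameter counts (the numbers are placeholders, not the fitted values from the study):

```python
def aic(log_likelihood, n_parameters):
    """AIC = 2d - 2*logL, as given in the Fig. 5a legend; lower is better."""
    return 2 * n_parameters - 2 * log_likelihood

# Placeholder numbers purely for illustration (not the fitted values):
models = {
    "serial founder, with size changes": (-105000.0, 9),
    "admixture-back-to-Africa, with size changes": (-106200.0, 11),
}
for name, (logL, d) in models.items():
    print(f"{name}: AIC = {aic(logL, d):.1f}")
```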
In apparent contradiction of these conclusions, our admixture analysis (Fig. 2b; K = 2) suggested that there may have been migration back to Senegal Urban from non-African populations. Similar results have been reported in previous admixture analyses of populations from Senegal [12]. However, changes in population size are known to create false signals of population mixing in such analyses [53]. To examine if this was the case here, we used our best-fit serial founder model (i.e. with no population mixing) to simulate sequence data. Repeating the admixture analysis on this simulated data, we found that Senegal Urban is assigned a similar level of mixed ancestry as we inferred from the real data (Fig. 5b versus Fig. 2b). Furthermore, this plot gives the incorrect impression that the two African populations are closely related (Fig. 2b). Therefore, our admixture analysis is compatible with the demographic model.
The demographic model allows us to infer when populations split and how their population size has changed (Fig. 5c, CIs in Fig. 5d). Following the split from the Senegal Urban lineage 4366 generations ago, the effective population size of the 'Mexican' lineage was initially large (~10⁶), suggesting that this ancestral population was still in Africa. Therefore, the two populations likely separated shortly before the out-of-Africa migration. Approximately 3000 generations ago, there was a strong reduction in the effective population size of the Mexican population, presumably reflecting a bottleneck associated with the out-of-Africa migration. Alongside this, the Senegal Urban population experienced a reduction in its effective population size ~4150 generations ago. The divergence of the Senegal Forest population from Senegal Urban and Mexico was considerably more ancient (163,825 generations ago).
Adaptation during domestication
When the anthropophilic subspecies Ae. aegypti aegypti arose, it evolved a suite of characters that increased its capacity to vector dengue virus and yellow fever virus [10, 11, 14, 15]. Alongside this there were changes in colouration [14, 15], and the expansion into a novel ecological niche will likely have involved adaption to many other challenges. We examined sites that were strongly differentiated between the two subspecies, as these are likely to be enriched for sites that were selected in this transition. This is complicated, because the out-of-Africa migration was accompanied by large shifts in allele frequencies which are likely to obscure any effects of selection — we found 786 sites fixed for different alleles (F ST = 1) in Senegal Forest versus Sri Lanka and 254 such sites when Senegal Forest was compared to Mexico (Additional file 5). By contrast there were just 3 such sites when Senegal Forest was compared to Senegal Urban (Additional file 5). Therefore, we focussed our analysis on the three African populations where the confounding effects of genetic drift are less strong. We scanned exomes from the three African populations using a normalised version of the population branch statistic (PBSn1) [55] to identify regions with strong differentiation specific to the Senegal Urban population. Our scan included 1,237,042 variable sites grouped into 240,609 non-overlapping windows of 5 SNPs spanning 13.17 Mb of the exome and nearby regions. We provide lists of strongly differentiated genes based on PBSn1 and per-SNP F ST in Additional files 5 and 6.
McBride et al. [5] found that odorant receptor 4 (Or4; AAEL015147) plays a key role in Ae. aegypti aegypti's preference for feeding on humans. Three windows in our dataset tag this gene, but they show little evidence for genetic differentiation in any of the three domesticated populations (maximum windows: Senegal Urban PBSn1 = 0.0701; Mexico PBSn1 = 0.3135; Sri Lanka PBSn1 = 0.2892). We similarly found no individual SNPs in this gene that were strongly differentiated between the subspecies (F ST ; Additional file 5). Nonetheless, the 25 most differentiated genes included three odorant receptor/binding genes and a gustatory receptor (Table 1 and Additional file 6). Furthermore, the most differentiated gene encodes a pickpocket sodium channel, a member of a protein family whose functions include olfaction and taste; an ortholog (ppk10) of the gene we identified is associated with genetic variation in Drosophila olfaction [56]. While these are interesting candidates, to our knowledge, none of these genes have previously been implicated in habitat or host-seeking behaviour, nor were genes involved in taste or olfaction significantly overrepresented in this list relative to the genome average [57].
Table 1 Genes that are highly differentiated in the Senegal Urban population relative to Uganda and Senegal Forest, ranked by the normalised population branch statistic (PBSn1ᵃ; per-window statistic values not shown). The annotated functions of the top-ranked genes (window locations include near-exon and intergenic regions) are: pickpocket sodium channel (AAEL013219, near exon)ᵇ,ᶜ; importin alphaᵇ; DNA bindingᵇ; lipaseᵇ; chitin bindingᵇ; cytochrome P450 (CYP12F7)ᵇ; odorant binding protein [68]; gustatory receptorᵇ; tyrosine catabolismᵇ; lipid transportᵇ; Krebs cycleᵇ,ᶜ; tRNA editingᵇ,ᶜ; sugar transporterᵇ; class B scavenger receptor (SCRBQ2)ᵇ; vesicle transportᵇ; lachesinᵇ; odorant receptorᵇ; sulfonylurea receptorᵇ; ion channelᵇ; Or50.
ᵃNormalised population branch statistic
ᵇVectorBase gene description or Gene Ontology (GO) term
ᶜFlyBase Drosophila ortholog
A key selection pressure on many Ae. aegypti aegypti populations is insecticides. An important resistance mechanism involves changes to the target of DDT and pyrethroids that make it insensitive to these insecticides (the voltage-gated sodium channel, aka VGSC; knock-down resistance, kdr; AAEL006019) [58]. The gene encoding this protein is not exceptionally differentiated in this analysis — 53 windows fall within the coding region of VGSC, and we find only marginal evidence of differentiation (maximum window: Senegal Urban PBSn1 = 0.5457). However, two amino acid variants known to be associated with insecticide resistance are at frequencies of 73% and 85% in Mexico but absent elsewhere (Additional files 5 and 7; V756I and F1249C, which are referred to as V1016I and F1534C in previous annotations of the genome). Two genes in our top 25 (CYP12F7, AAEL001960) encode cytochrome P450s, a family of proteins whose functions include breaking down insecticides in Aedes aegypti [59] (Table 1).
Using exome sequence data, we found that an urban population from Senegal was considerably more closely related to populations in Mexico and Sri Lanka than to a forest population just 420 km away. We estimate that the populations in urban Senegal and Mexico diverged just 4366 generations ago — 291 years ago if we assume 15 generations per year and a mutation rate of 3.5 × 10⁻⁹. By contrast, with the same assumptions, we estimate that the two nearby populations in Senegal split 10,921 years ago. The urban population in Senegal has the typical characteristics of the subspecies Ae. aegypti aegypti that is found throughout the tropics outside Africa: it lives alongside humans and has the characteristic pale scales on the first abdominal tergite [10, 14, 15]. Therefore, we can conclude that this population is a descendant of an ancestral African population of Ae. aegypti aegypti that evolved to be anthropophilic and subsequently colonised other continents, ultimately resulting in global pandemics of dengue virus, Zika virus and chikungunya virus.
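The conversion from generations to calendar years used here is simple arithmetic under the stated assumption of 15 generations per year; a minimal check:

```python
def generations_to_years(generations, generations_per_year=15):
    """Convert a divergence time in generations to years under the
    assumed 15 mosquito generations per year."""
    return generations / generations_per_year

print(round(generations_to_years(4366)))  # 291: Senegal Urban vs Mexico split
print(round(generations_to_years(2938)))  # 196: out-of-Africa bottleneck
```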
Our conclusions contradict the prevailing model of Ae. aegypti evolution. Previous genetic studies have concluded that populations across sub-Saharan Africa are closely related and distinct from non-African populations (excluding some populations in coastal Kenya) [12]. Under this model, populations outside Africa belonged to the subspecies Ae. aegypti aegypti, while populations within Africa were Ae. aegypti formosus. Furthermore, anthropophilic populations in sub-Saharan Africa evolved independently from those outside Africa. Our data and analyses consistently reject this model.
An alternative scenario is that the urban population in Senegal arose when Ae. aegypti aegypti from elsewhere in the world migrated back to Africa. It is clear that this population is not directly derived from non-African populations, as it has greater genetic diversity than the Mexican or Sri Lankan populations (and this pattern has been consistently reported for other populations within and outside Africa [16]). Furthermore, the more plausible hypothesis that the Senegal Urban population was a mixture of African and non-African populations was rejected by three separate analyses: the formal test of admixture from Reich et al. [38], inferences of migration events in our population tree [37] and comparisons of explicit demographic models [41]. Therefore, we can conclude that the Senegal Urban population represents a close relative of an African population of Aedes aegypti aegypti that colonised other regions of the tropics.
Recent population bottlenecks result in a loss of rare genetic variants and reductions in genetic diversity. There was a considerably lower proportion of rare genetic variants in the Ae. aegypti aegypti populations from Senegal Urban, Mexico and Sri Lanka than in the Ae. aegypti formosus populations. Furthermore, genetic diversity was lowest outside of Africa, intermediate in the Senegal Urban population of Ae. aegypti aegypti and highest in the African Ae. aegypti formosus populations. This was reflected in the rates of genetic drift in these populations (Fig. 4b). Our demographic model confirmed that there was a sharp reduction in the effective population size during the out-of-Africa migration, presumably due to the small number of mosquitoes migrating out of Africa. Furthermore, genetic diversity is lower in Sri Lanka than in Mexico, which is consistent with other analyses that suggest that Ae. aegypti migrated to the New World first and subsequently colonised Asia [16, 17] (although a population bottleneck when this island was colonised from the mainland would produce the same pattern). Intensive control efforts may also have reduced population sizes and affected genetic diversity. However, the highest rate of genetic drift was in the common ancestor of the Sri Lankan and Mexican populations (Fig. 4b), suggesting that the reduction in the genetic diversity of these populations was due to a bottleneck caused by the out-of-Africa migration.
The sharp reduction in population size in the Mexican lineage (Fig. 5) allows us to estimate the date of the out-of-Africa migration as 2938 generations ago. Assuming 15 generations per year, this would be 196 years ago (95% CI: 152–242 years). The first historical record of the appearance of yellow fever in the New World that we are aware of was in 1648 [17], more than 100 years before our lower CI for the arrival of Aedes aegypti. Given that our estimates depend on the generation time of the mosquitoes and assumptions of our model such as the mutation rate, this small difference between genetic and historical data is expected.
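As a quick arithmetic check on these dates, here is a minimal sketch (ours, not code from the paper) of the generations-to-years conversion used in the text, assuming 15 mosquito generations per year:

```python
# Convert an estimated number of generations into calendar years,
# assuming a fixed number of generations per year (15 in the text).
def generations_to_years(generations, generations_per_year=15):
    return generations / generations_per_year

print(round(generations_to_years(4366)))  # ~291 years: urban Senegal / Mexico split
print(round(generations_to_years(2938)))  # ~196 years: out-of-Africa migration date
```

Changing the assumed generation time rescales these dates proportionally, which is one reason the text treats them as rough estimates.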
The finding that close relatives of American and Asian Ae. aegypti aegypti exist side by side with Ae. aegypti formosus in Africa — and have remained genetically distinct — may have important implications for disease transmission. For example, Ae. aegypti is responsible for urban yellow fever outbreaks in West Africa but is not known to transmit the disease in East Africa [60], and it is tempting to speculate that this is because Ae. aegypti aegypti is restricted to West Africa. Initial studies in Senegal indicated that Ae. aegypti aegypti populations have a substantially higher vector competence for dengue virus (DENV-2) than Ae. aegypti formosus [10], and similar results have been reported for yellow fever virus [11]. However, more work is needed, as this pattern was subsequently found not to hold when other virus genotypes were used [61]. In addition to high vector competence, Ae. aegypti aegypti's importance as a disease vector results from it living alongside and biting humans [5]. It will be important to examine whether the genetic forms that we describe consistently differ in their ecology, behaviour and vector competence. For example, while our population of Ae. aegypti formosus in Senegal was from a forested area, our Ugandan population was from a human-disturbed region outside Kampala. Furthermore, previous studies in West Africa have found mosquitoes that morphologically resemble Ae. aegypti formosus breeding indoors [13]. Therefore, the extent to which Ae. aegypti formosus lives alongside and feeds on humans in Africa is unclear.
Another unanswered question is the distribution of the two forms across Africa. Further sampling and analysis will not only resolve this, but will also reveal the extent of gene flow between the two subspecies. This may help us understand why they have remained genetically distinct in Africa. In East Africa crosses have found no evidence of assortative mating or intrinsic reproductive incompatibilities [62]. However, a recent study in Senegal found that the two subspecies showed evidence of post-zygotic reproductive isolation [52]. It will also be of interest to understand how our populations are related to Ae. aegypti aegypti populations on the coast of Kenya which appear genetically distinct from other African populations [16].
Our results have important implications for the definition of the two subspecies of Ae. aegypti. The subspecies were originally defined based on colouration [7], but genetic studies have led many to view all populations in sub-Saharan Africa as Ae. aegypti formosus (excluding coastal Kenya; see Background). However, our results demonstrate that Ae. aegypti aegypti occurs in Senegal, and there is no conflict between genetic and morphological definitions of the subspecies in our dataset. Therefore, an important question is whether other African populations fall neatly into the two subspecies and whether they can be identified from morphological characteristics.
Why do our conclusions differ from those of previous studies? There have been numerous population genetics studies of Ae. aegypti in the past, most of which have used small numbers of genetic markers. Where datasets are small, there can be a lack of statistical power; for example, a previous study of 11 SNPs in Senegal found no significant genetic differentiation between the subspecies [10]. Many studies used mitochondrial DNA [63], but making inferences about the history of the entire genome from a single locus is problematic, with patterns inferred from mitochondrial DNA frequently differing from the nuclear genome [64–66]. Other studies have used microsatellites and the sequences of small numbers of nuclear loci and, more recently, larger datasets from RAD tag sequencing or SNP chips [12, 16, 19, 21].
In contrast to our results, previous studies of microsatellites and SNPs concluded that domestic Ae. aegypti populations in Africa arose separately from domestic populations elsewhere in the tropics [12, 16]. This conclusion was reached because African and non-African populations cluster separately in admixture and principal component analyses [12]. We see this same pattern (Fig. 2). However, drawing conclusions about the order of population splits from such analyses or from summary statistics like FST is not straightforward [37]. For example, principal component analysis is based on the average coalescent times between pairs of genomes, and this will be strongly affected by population bottlenecks [54]. Therefore, the reason that non-African populations do not cluster with Ae. aegypti aegypti from Senegal is not because these populations are unrelated, but is instead due to the population bottleneck associated with the out-of-Africa migration, which caused large changes in allele frequencies that differentiate African from non-African populations. We confirmed this argument for our dataset by simulating genomic data under our demographic model, and demonstrated that this led to distantly related African populations being incorrectly grouped together in an admixture analysis.
The genetic basis of the changes in vector competence and behaviour that occurred when Ae. aegypti aegypti evolved remains an important question. One approach to identifying these changes is to look for regions of the genome that are strongly differentiated between the subspecies. This is greatly helped by comparing African populations of the two subspecies, as the shifts in allele frequencies that occurred during the out-of-Africa migration would otherwise be likely to obscure any effects of natural selection. We have catalogued the most strongly differentiated genes between subspecies in our dataset, and we hope that this list of candidate genes will be of interest to researchers interested in specific traits. However, to conclusively identify the genetic basis of adaptation, it will be necessary to include more populations, sequence the genome outside the exome to allow more powerful tests of selection and ultimately link these differences to phenotypic changes.
We conclude that a domestic population of Ae. aegypti in Senegal and domestic populations on other continents share a different common ancestor from other African populations. The most parsimonious explanation of this observation is that an ancestral population of Ae. aegypti evolved to specialise on humans in Africa, giving rise to the subspecies Ae. aegypti aegypti. The descendants of this population are still found in Africa today. The rest of the world was colonised when mosquitoes from this population migrated out of Africa. Non-African populations are genetically distinct from African ones due to the population bottleneck that accompanied this migration.
We thank Jeff Powell for supplying mosquitoes from Uganda.
This work was funded by European Research Council grant Drosophila Infection 281668 to FMJ, a KAUST AEA award to FMJ and AP, a Medical Research Council Centenary Award to WJP and a National Institutes of Health Ruth L. Kirschstein National Research Service Award to JC.
The raw sequencing data supporting the conclusions of this article are available in the NCBI Sequence Read Archive repository with accession number SRP092518. The sequence alignment data (BAM files) and genotype calls (VCF format) supporting the conclusions of this article are available in the University of Cambridge data repository (http://dx.doi.org/10.17863/CAM.6367).
FMJ, WJP and AP conceived the project. RR, SNS, WB and MS collected samples, with WB providing insights into the biology of Senegal populations. JD made the sequencing libraries and performed exome captures. WJP, JC and JMA analysed the data. FMJ coordinated the project and wrote the paper with assistance from the other authors. All authors read and approved the final manuscript.
Additional file 1: Details of mosquito samples used in this study. (XLSX 12 kb)
Additional file 2: Genome coordinates of regions that the exome capture probes were designed to target. The file is in BED format. (BED 2005 kb)
Additional file 3: Demographic models fitted. (A) Schematics of the demographic models tested and respective parameters. (B) Maximum likelihood estimate parameter values from admixture and serial founder demographic models. (C) Comparison of observed and expected 2D site frequency spectra under the different demographic models. (PDF 890 kb)
Additional file 4: Genetic structure across different chromosomes and regions of the genome. (A) Mean FST values for 1000-bp non-overlapping windows for each pairwise population comparison. The x-axis represents a physical map (bp) made by arranging scaffolds along the genetic map, with scaffolds mapping to the same genetic map position ordered randomly. Scaffolds are according to Juneja et al. [23]. All positions with fewer than 10 individuals in each population comparison were excluded. Only windows containing at least 10 SNPs were plotted. (B) Ancestry proportions for Ae. aegypti individuals from five populations calculated for each chromosome separately. (C) Principal component analysis of Ae. aegypti exome sequences from five populations calculated for each chromosome separately. The PCA was calculated from a covariance matrix computed from all variants in the dataset, accounting for genotype uncertainty. The percentage of the variance explained by each component is shown at the top of the plot. Ancestry is conditional on the number of genetic clusters (K = 2–5). (PDF 10034 kb)
Additional file 5: FST calculated per SNP from called genotypes for all pairs of populations. A SNP is included if it is among the top 1000 highest values in any pairwise comparison of populations. FST is only reported where it is in the top 1000 for that comparison. Positions with fewer than 10 individuals in each population were excluded from the top highest values. (XLSX 1209 kb)
Additional file 6: The most differentiated regions of the genome between Ae. aegypti aegypti populations and Ae. aegypti formosus populations based on the normalised population branch statistic. The three tabs show highly differentiated regions between Senegal Forest + Uganda and Senegal Urban, Mexico or Sri Lanka. Data are analysed in 5-bp non-overlapping windows. (XLSX 117 kb)
Additional file 7: Polymorphisms in the kdr gene. Note that the annotation of this gene has changed, so the numbering refers to the genome version used in this manuscript and is different from that of most published work on this gene. (XLSX 204 kb)
Department of Integrative Biology, University of California, Berkeley, CA 94720-3140, USA
Present Address: Verily Life Sciences, South San Francisco, CA 94080, USA
Department of Genetics, University of Cambridge, Downing Street, Cambridge, CB2 3EH, UK
CIBIO/InBIO, Centro de Investigação em Biodiversidade e Recursos Genéticos, Campus Agrário de Vairão, Universidade do Porto, 4485-661 Vairão, Portugal
Department of Microbiology, Immunology and Pathology, Colorado State University, Fort Collins, CO, USA
ID-FISH Technology, Palo Alto, CA 94303, USA
Department of Zoology, University of Jaffna, Jaffna, Sri Lanka
Biological and Environmental Sciences and Engineering Division, KAUST, Thuwal, Kingdom of Saudi Arabia
Fauci AS, Morens DM. Zika virus in the Americas — yet another arbovirus threat. N Engl J Med. 2016;363:1–3.
Rezza G. Dengue and chikungunya: long-distance spread and outbreaks in naïve areas. Pathog Glob Health. 2014;108:349–55.
Gubler DJ. Resurgent vector-borne diseases as a global health problem. Emerg Infect Dis. 1998;4:442–50.
Mackenzie JS, Gubler DJ, Petersen LR. Emerging flaviviruses: the spread and resurgence of Japanese encephalitis, West Nile and dengue viruses. Nat Med. 2004;10:S98–S109.
McBride CS, Baier F, Omondi AB, Spitzer SA, Lutomiah J, Sang R, et al. Evolution of mosquito preference for humans linked to an odorant receptor. Nature. 2014;515:222–7.
Trpis M, Hausermann W. Genetics of house-entering behaviour in East African populations of Aedes aegypti (L.) (Diptera: Culicidae) and its relevance to speciation. Bull Entomol Res. 1978;68:521.
Mattingly PF. Genetical aspects of the Aedes aegypti problem. I. Taxonomy and bionomics. Ann Trop Med Parasitol. 1957;51:392–408.
Tabachnick WJ. Evolutionary genetics and arthropod-borne disease: the yellow fever mosquito. Am Entomol. 1991;37:14–26.
Kraemer MUG, Sinka ME, Duda KA, Mylne AQN, Shearer FM, Barker CM, et al. The global distribution of the arbovirus vectors Aedes aegypti and Ae. albopictus. Elife. 2015;4:e08347.
Sylla M, Bosio C, Urdaneta-Marquez L, Ndiaye M, Black IV WC. Gene flow, subspecies composition, and dengue virus-2 susceptibility among Aedes aegypti collections in Senegal. PLoS Negl Trop Dis. 2009;3:e408.
Black IV WC, Bennett KE, Gorrochótegui-Escalante N, Barillas-Mury CV, Fernández-Salas I, Muñoz MDL, et al. Flavivirus susceptibility in Aedes aegypti. Arch Med Res. 2002;33:379–88.
Brown JE, McBride CS, Johnson P, Ritchie S, Paupy C, Bossin H, et al. Worldwide patterns of genetic differentiation imply multiple "domestications" of Aedes aegypti, a major vector of human diseases. Proc Biol Sci. 2011;278:2446–54.
Nasidi A, Monath TP, Decock K, Tomori O, Oialeye OD, Adeniyi JA, et al. Urban yellow fever epidemic in western Nigeria, 1987. Trans R Soc Trop Med Hyg. 1989;83:401–6.
Sylla M, Ndiaye M, Black WC. Aedes species in treeholes and fruit husks between dry and wet seasons in southeastern Senegal. J Vector Ecol. 2013;38:237–44.
Paupy C, Brengues C, Ndiath O, Toty C, Herve JP, Simard F. Morphological and genetic variability within Aedes aegypti in Niakhar, Senegal. Infect Genet Evol. 2010;10:473–80.
Brown JE, Evans BR, Zheng W, Obas V, Barrera-Martinez L, Egizi A, et al. Human impacts have shaped historical and recent evolution in Aedes aegypti, the dengue and yellow fever mosquito. Evolution. 2014;68:514–25.
Powell JR, Tabachnick WJ. History of domestication and spread of Aedes aegypti — a review. Mem Inst Oswaldo Cruz. 2013;108:11–7.
Nene V, Wortman JR, Lawson D, Haas B, Kodira C, Tu ZJ, et al. Genome sequence of Aedes aegypti, a major arbovirus vector. Science. 2007;316:1718–23.
Rašić G, Filipović I, Weeks AR, Hoffmann AA. Genome-wide SNPs lead to strong signals of geographic structure and relatedness patterns in the major arbovirus vector, Aedes aegypti. BMC Genomics. 2014;15:275.
Juneja P, Osei-Poku J, Ho YS, Ariani CV, Palmer WJ, Pain A, et al. Assembly of the genome of the disease vector Aedes aegypti onto a genetic linkage map allows mapping of genes affecting disease transmission. PLoS Negl Trop Dis. 2014;8:e2652.
Evans BR, Gloria-Soria A, Hou L, McBride C, Bonizzoni M, Zhao H, et al. A multipurpose high throughput SNP chip for the dengue and yellow fever mosquito, Aedes aegypti. G3 (Bethesda). 2015;5:711–8.
Lachance J, Tishkoff SA. SNP ascertainment bias in population genetic analyses: why it is important, and how to correct it. Bioessays. 2013;35:780–6.
Juneja P, Ariani CV, Ho YS, Akorli J, Palmer WJ, Pain A, et al. Exome and transcriptome sequencing of Aedes aegypti identifies a locus that confers resistance to Brugia malayi and alters the immune response. PLoS Pathog. 2015;11:1–32.
Ramasamy R, Surendran SN, Jude PJ, Dharshini S, Vinobaba M. Larval development of Aedes aegypti and Aedes albopictus in peri-urban brackish water and its implications for transmission of arboviral diseases. PLoS Negl Trop Dis. 2011;5:e1369.
McClelland GAH. A worldwide survey of variation in scale pattern of the abdominal tergum of Aedes aegypti (L.) (Diptera: Culicidae). Trans R Entomol Soc London. 2009;126:239–59.
Droop AP. Fqtools: an efficient software suite for modern FASTQ file manipulation. Bioinformatics. 2016;32:1883–4.
Bolger AM, Lohse M, Usadel B. Trimmomatic: a flexible trimmer for Illumina sequence data. Bioinformatics. 2014;30:2114–20.
Magoc T, Salzberg SL. FLASH: fast length adjustment of short reads to improve genome assemblies. Bioinformatics. 2011;27:2957–63.
Li H. Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. arXiv preprint arXiv:1303.3997. 2013.
Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, et al. The sequence alignment/map format and SAMtools. Bioinformatics. 2009;25:2078–9.
McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, et al. The genome analysis toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010;20:1297–303.
Picard [Internet]. https://broadinstitute.github.io/picard/.
Fumagalli M, Vieira FG, Linderoth T, Nielsen R. NgsTools: methods for population genetics analyses from next-generation sequencing data. Bioinformatics. 2014;30:1486–7.
Korneliussen TS, Albrechtsen A, Nielsen R. ANGSD: analysis of next generation sequencing data. BMC Bioinformatics. 2014;15:356.
Skotte L, Korneliussen TS, Albrechtsen A. Estimating individual admixture proportions from next generation sequencing data. Genetics. 2013;195:693–702.
Weir B, Cockerham CC. Estimating F-statistics for the analysis of population structure. Evolution. 1984;38:1358–70.
Pickrell JK, Pritchard JK. Inference of population splits and mixtures from genome-wide allele frequency data. PLoS Genet. 2012;8:e1002967.
Reich D, Thangaraj K, Patterson N, Price AL, Singh L. Reconstructing Indian population history. Nature. 2009;461:489–94.
Nielsen R, Korneliussen T, Albrechtsen A, Li Y, Wang J. SNP calling, genotype calling, and sample allele frequency estimation from new-generation sequencing data. PLoS One. 2012;7:e37558.
Gutenkunst RN, Hernandez RD, Williamson SH, Bustamante CD. Inferring the joint demographic history of multiple populations from multidimensional SNP frequency data. PLoS Genet. 2009;5:e1000695.
Excoffier L, Dupanloup I, Huerta-Sánchez E, Sousa VC, Foll M. Robust demographic inference from genomic and SNP data. PLoS Genet. 2013;9:e1003905.
Keightley PD, Trivedi U, Thomson M, Oliver F, Kumar S, Blaxter ML. Analysis of the genome sequences of three Drosophila melanogaster spontaneous mutation accumulation lines. Genome Res. 2009;19:1195–201.
Malaspinas AS, Westaway MC, Muller C, Sousa VC, Lao O, Alves I, Bergström A, Athanasiadis G, Cheng JY, Crawford JE, Heupink TH, Macholdt E, Peischl S, Rasmussen S, Schiffels S, et al. The genomic history of Australia. Nature. 2016;538:207–14.
Giraldo-Calderon GI, Emrich SJ, MacCallum RM, Maslen G, Emrich S, Collins F, et al. VectorBase: an updated bioinformatics resource for invertebrate vectors and other organisms related with human diseases. Nucleic Acids Res. 2015;43:D707–13.
Danecek P, Auton A, Abecasis G, Albers CA, Banks E, DePristo MA, et al. The variant call format and VCFtools. Bioinformatics. 2011;27:2156–8.
Cingolani P, Platts A, Wang LL, Coon M, Nguyen T, Wang L, et al. A program for annotating and predicting the effects of single nucleotide polymorphisms, SnpEff. Fly (Austin). 2012;6:80–92.
R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2015.
Wickham H. ggplot2: elegant graphics for data analysis. London: Springer; 2009.
Tajima F. Statistical method for testing the neutral mutation hypothesis by DNA polymorphism. Genetics. 1989;123:585–95.
Maruyama T, Fuerst PA. Population bottlenecks and nonequilibrium models in population genetics. II. Number of alleles in a small population that was formed by a recent bottleneck. Genetics. 1985;111:675–89.
Fay JC, Wyckoff GJ, Wu C-I. Testing the neutral theory of molecular evolution with genomic data from Drosophila. Nature. 2002;415:1024–6.
Dickson LB, Sharakhova MV, Timoshevskiy VA, Fleming KL, Caspary A, Sylla M, et al. Reproductive incompatibility involving Senegalese Aedes aegypti (L) is associated with chromosome rearrangements. PLoS Negl Trop Dis. 2016;10:e0004626.
Falush D, van Dorp L, Lawson D. A tutorial on how (not) to over-interpret STRUCTURE/ADMIXTURE bar plots. bioRxiv. 2016. https://doi.org/10.1101/066431.
McVean G. A genealogical interpretation of principal components analysis. PLoS Genet. 2009;5:e1000686.
Yi X, Liang Y, Huerta-Sanchez E, Jin X, Cuo ZX, Pool JE, et al. Sequencing of 50 human exomes reveals adaptation to high altitude. Science. 2010;329:75–8.
Arya GH, Magwire MM, Huang W, Serrano-Negron YL, Mackay TFC, Anholt RRH. The genetic basis for variation in olfactory behavior in Drosophila melanogaster. Chem Senses. 2015;40:233–43.
Reimand J, Arak T, Adler P, Kolberg L, Reisberg S, Peterson H, et al. g:Profiler — a web server for functional interpretation of gene lists (2016 update). Nucleic Acids Res. 2016;44:W83–9.
Hemingway J, Hawkes NJ, McCarroll L, Ranson H. The molecular basis of insecticide resistance in mosquitoes. Insect Biochem Mol Biol. 2004;34:653–65.
Stevenson BJ, Pignatelli P, Nikou D, Paine MJI. Pinpointing P450s associated with pyrethroid metabolism in the dengue vector, Aedes aegypti: developing new tools to combat insecticide resistance. PLoS Negl Trop Dis. 2012;6:e1595.
Mutebi JP, Barrett ADT. The epidemiology of yellow fever in Africa. Microbes Infect. 2002;4:1459–68.
Dickson LB, Sanchez-Vargas I, Sylla M, Fleming K, Black WC. Vector competence in West African Aedes aegypti is flavivirus species and genotype dependent. PLoS Negl Trop Dis. 2014;8:e3153.
Moore DF. Hybridization and mating behavior in Aedes aegypti (Diptera: Culicidae). J Med Entomol. 1979;16:223–6.
Moore M, Sylla M, Goss L, Burugu MW, Sang R, Kamau LW, et al. Dual African origins of global Aedes aegypti s.l. populations revealed by mitochondrial DNA. PLoS Negl Trop Dis. 2013;7:e2175.
Galtier N, Nabholz B, Glémin S, Hurst GDD. Mitochondrial DNA as a marker of molecular diversity: a reappraisal. Mol Ecol. 2009;18:4541–50.
Hurst GDD, Jiggins FM. Problems with mitochondrial DNA as a marker in population, phylogeographic and phylogenetic studies: the effects of inherited symbionts. Proc Biol Sci. 2005;272:1525–34.
Toews DPL, Brelsford A. The biogeography of mitochondrial and nuclear discordance in animals. Mol Ecol. 2012;21:3907–30.
Alexander DH, Novembre J, Lange K. Fast model-based estimation of ancestry in unrelated individuals. Genome Res. 2009;19:1655–64.
Manoharan M, Chong MNF, Vaitinadapoule A, Frumence E, Sowdhamini R, Offmann B. Comparative genomics of odorant binding proteins in Anopheles gambiae, Aedes aegypti, and Culex quinquefasciatus. Genome Biol Evol. 2013;5:163–80.
Search results for: '19853'
Showing 1 - 12 of 28 results for "19853"
C. Zhang et al. (jan 2020) Cell metabolism 31 1 148--161.e5
STAT3 Activation-Induced Fatty Acid Oxidation in CD8+ T Effector Cells Is Critical for Obesity-Promoted Breast Tumor Growth.
Although obesity is known to be critical for cancer development, how obesity negatively impacts antitumor immune responses remains largely unknown. Here, we show that increased fatty acid oxidation (FAO) driven by activated STAT3 in CD8+ T effector cells is critical for obesity-associated breast tumor progression. Ablating T cell Stat3 or treatment with an FAO inhibitor in obese mice spontaneously developing breast tumor reduces FAO, increases glycolysis and CD8+ T effector cell functions, leading to inhibition of breast tumor development. Moreover, PD-1 ligation in CD8+ T cells activates STAT3 to increase FAO, inhibiting CD8+ T effector cell glycolysis and functions. Finally, leptin enriched in mammary adipocytes and fat tissues downregulates CD8+ T cell effector functions through activating STAT3-FAO and inhibiting glycolysis. We identify a critical role of increased oxidation of fatty acids driven by leptin and PD-1 through STAT3 in inhibiting CD8+ T effector cell glycolysis and in promoting obesity-associated breast tumorigenesis. View Publication
19853 EasySep™ Mouse CD8+ T Cell Isolation Kit
A. Mansurov et al. ( 2020) Nature biomedical engineering 4 5 531--543
Collagen-binding IL-12 enhances tumour inflammation and drives the complete remission of established immunologically cold mouse tumours.
Checkpoint-inhibitor (CPI) immunotherapy has achieved remarkable clinical success, yet its efficacy in 'immunologically cold' tumours has been modest. Interleukin-12 (IL-12) is a powerful cytokine that activates the innate and adaptive arms of the immune system; however, the administration of IL-12 has been associated with immune-related adverse events. Here we show that, after intravenous administration of a collagen-binding domain fused to IL-12 (CBD-IL-12) in mice bearing aggressive mouse tumours, CBD-IL-12 accumulates in the tumour stroma due to exposed collagen in the disordered tumour vasculature. In comparison with the administration of unmodified IL-12, CBD-IL-12 induced sustained intratumoural levels of interferon-γ, substantially reduced its systemic levels as well as organ damage and provided superior anticancer efficacy, eliciting complete regression of CPI-unresponsive breast tumours. Furthermore, CBD-IL-12 potently synergized with CPI to eradicate large established melanomas, induced antigen-specific immunological memory and controlled tumour growth in a genetically engineered mouse model of melanoma. CBD-IL-12 may potentiate CPI immunotherapy for immunologically cold tumours. View Publication
W. Wang et al. (may 2019) Nature 569 7755 270--274
CD8+ T cells regulate tumour ferroptosis during cancer immunotherapy.
Cancer immunotherapy restores or enhances the effector function of CD8+ T cells in the tumour microenvironment1,2. CD8+ T cells activated by cancer immunotherapy clear tumours mainly by inducing cell death through perforin-granzyme and Fas-Fas ligand pathways3,4. Ferroptosis is a form of cell death that differs from apoptosis and results from iron-dependent accumulation of lipid peroxide5,6. Although it has been investigated in vitro7,8, there is emerging evidence that ferroptosis might be implicated in a variety of pathological scenarios9,10. It is unclear whether, and how, ferroptosis is involved in T cell immunity and cancer immunotherapy. Here we show that immunotherapy-activated CD8+ T cells enhance ferroptosis-specific lipid peroxidation in tumour cells, and that increased ferroptosis contributes to the anti-tumour efficacy of immunotherapy. Mechanistically, interferon gamma (IFNgamma) released from CD8+ T cells downregulates the expression of SLC3A2 and SLC7A11, two subunits of the glutamate-cystine antiporter system xc-, impairs the uptake of cystine by tumour cells, and as a consequence, promotes tumour cell lipid peroxidation and ferroptosis. In mouse models, depletion of cystine or cysteine by cyst(e)inase (an engineered enzyme that degrades both cystine and cysteine) in combination with checkpoint blockade synergistically enhanced T cell-mediated anti-tumour immunity and induced ferroptosis in tumour cells. Expression of system xc- was negatively associated, in cancer patients, with CD8+ T cell signature, IFNgamma expression, and patient outcome. Analyses of human transcriptomes before and during nivolumab therapy revealed that clinical benefits correlate with reduced expression of SLC3A2 and increased IFNgamma and CD8. Thus, T cell-promoted tumour ferroptosis is an anti-tumour mechanism, and targeting this pathway in combination with checkpoint blockade is a potential therapeutic approach. View Publication
17953 EasySep™ Human CD8+ T Cell Isolation Kit
K. E. Sivick et al. (dec 2018) Cell reports 25 11 3074--3085.e5
Magnitude of Therapeutic STING Activation Determines CD8+ T Cell-Mediated Anti-tumor Immunity.
Intratumoral (IT) STING activation results in tumor regression in preclinical models, yet factors dictating the balance between innate and adaptive anti-tumor immunity are unclear. Here, clinical candidate STING agonist ADU-S100 (S100) is used in an IT dosing regimen optimized for adaptive immunity to uncover requirements for a T cell-driven response compatible with checkpoint inhibitors (CPIs). In contrast to high-dose tumor ablative regimens that result in systemic S100 distribution, low-dose immunogenic regimens induce local activation of tumor-specific CD8+ effector T cells that are responsible for durable anti-tumor immunity and can be enhanced with CPIs. Both hematopoietic cell STING expression and signaling through IFNAR are required for tumor-specific T cell activation, and in the context of optimized T cell responses, TNFalpha is dispensable for tumor control. In a poorly immunogenic model, S100 combined with CPIs generates a survival benefit and durable protection. These results provide fundamental mechanistic insights into STING-induced anti-tumor immunity. View Publication
Y. Nakanishi et al. (dec 2018) Immunity 49 6 1132--1147.e7
Simultaneous Loss of Both Atypical Protein Kinase C Genes in the Intestinal Epithelium Drives Serrated Intestinal Cancer by Impairing Immunosurveillance.
Serrated adenocarcinoma, an alternative pathway for colorectal cancer (CRC) development, accounts for 15%–30% of all CRCs and is aggressive and treatment resistant. We show that the expression of atypical protein kinase C zeta (PKCzeta) and PKClambda/iota was reduced in human serrated tumors. Simultaneous inactivation of the encoding genes in the mouse intestinal epithelium resulted in spontaneous serrated tumorigenesis that progressed to advanced cancer with a strongly reactive and immunosuppressive stroma. Whereas epithelial PKClambda/iota deficiency led to immunogenic cell death and the infiltration of CD8+ T cells, which repressed tumor initiation, PKCzeta loss impaired interferon and CD8+ T cell responses, which resulted in tumorigenesis. Combined treatment with a TGF-beta receptor inhibitor plus anti-PD-L1 checkpoint blockade showed synergistic curative activity. Analysis of human samples supported the relevance of these kinases in the immunosurveillance defects of human serrated CRC. These findings provide insight into avenues for the detection and treatment of this poor-prognosis subtype of CRC. View Publication
J. H. Choi et al. ( 2019) Science (New York, N.Y.) 364 6440
LMBR1L regulates lymphopoiesis through Wnt/beta-catenin signaling.
Precise control of Wnt signaling is necessary for immune system development. In this study, we detected severely impaired development of all lymphoid lineages in mice, resulting from an N-ethyl-N-nitrosourea-induced mutation in the limb region 1-like gene (Lmbr1l), which encodes a membrane-spanning protein with no previously described function in immunity. The interaction of LMBR1L with glycoprotein 78 (GP78) and ubiquitin-associated domain-containing protein 2 (UBAC2) attenuated Wnt signaling in lymphocytes by preventing the maturation of FZD6 and LRP6 through ubiquitination within the endoplasmic reticulum and by stabilizing "destruction complex" proteins. LMBR1L-deficient T cells exhibited hallmarks of Wnt/beta-catenin activation and underwent apoptotic cell death in response to proliferative stimuli. LMBR1L has an essential function during lymphopoiesis and lymphoid activation, acting as a negative regulator of the Wnt/beta-catenin pathway. View Publication
17961 EasySep™ Human Naïve Pan T Cell Isolation Kit
K.-L. Chu et al. (NOV 2018) Mucosal immunology
GITRL on inflammatory antigen presenting cells in the lung parenchyma provides signal 4 for T-cell accumulation and tissue-resident memory T-cell formation.
T-cell responses in the lung are critical for protection against respiratory pathogens. TNFR superfamily members play important roles in providing survival signals to T cells during respiratory infections. However, whether these signals take place mainly during priming in the secondary lymphoid organs and/or in the peripheral tissues remains unknown. Here we show that under conditions of competition, GITR provides a T-cell intrinsic advantage to both CD4 and CD8 effector T cells in the lung tissue, as well as for the formation of CD4 and CD8 tissue-resident memory T cells during respiratory influenza infection in mice. In contrast, under non-competitive conditions, GITR has a preferential effect on CD8 over CD4 T cells. The nucleoprotein-specific CD8 T-cell response partially compensated for GITR deficiency by expansion of higher affinity T cells; whereas, the polymerase-specific response was less flexible and more GITR dependent. Following influenza infection, GITR is expressed on lung T cells and GITRL is preferentially expressed on lung monocyte-derived inflammatory antigen presenting cells. Accordingly, we show that GITR+/+ T cells in the lung parenchyma express more phosphorylated-ribosomal protein S6 than their GITR-/- counterparts. Thus, GITR signaling within the lung tissue critically regulates effector and tissue-resident memory T-cell accumulation. View Publication
C. L. Araujo Furlan et al. ( 2018) Frontiers in immunology 9 2555
Limited Foxp3+ Regulatory T Cells Response During Acute Trypanosoma cruzi Infection Is Required to Allow the Emergence of Robust Parasite-Specific CD8+ T Cell Immunity.
While it is now acknowledged that CD4+ T cells expressing CD25 and Foxp3 (Treg cells) regulate immune responses and, consequently, influence the pathogenesis of infectious diseases, the regulatory response mediated by Treg cells upon infection by Trypanosoma cruzi was still poorly characterized. In order to understand the role of Treg cells during infection by this protozoan parasite, we determined in time and space the magnitude of the regulatory response and the phenotypic, functional and transcriptional features of the Treg cell population in infected mice. Contrary to the accumulation of Treg cells reported in most chronic infections in mice and humans, experimental T. cruzi infection was characterized by sustained numbers but decreased relative frequency of Treg cells. The reduction in Treg cell frequency resulted from a massive accumulation of effector immune cells, and inversely correlated with the magnitude of the effector immune response as well as with emergence of acute immunopathology. In order to understand the causes underlying the marked reduction in Treg cell frequency, we evaluated the dynamics of the Treg cell population and found a low proliferation rate and limited accrual of peripheral Treg cells during infection. We also observed that Treg cells became activated and acquired a phenotypic and transcriptional profile consistent with suppression of type 1 inflammatory responses. To assess the biological relevance of the relative reduction in Treg cells frequency observed during T. cruzi infection, we transferred in vitro differentiated Treg cells at early moments, when the deregulation of the ratio between regulatory and conventional T cells becomes significant. Intravenous injection of Treg cells dampened parasite-specific CD8+ T cell immunity and affected parasite control in blood and tissues. Altogether, our results show that limited Treg cell response during the acute phase of T. cruzi infection enables the emergence of protective anti-parasite CD8+ T cell immunity and critically influences host resistance. View Publication
S. R. Walsh et al. (NOV 2018) The Journal of clinical investigation
Type I IFN blockade uncouples immunotherapy-induced antitumor immunity and autoimmune toxicity.
Despite showing success in treating melanoma and haematological malignancies, adoptive cell therapy (ACT) has generated only limited effects in solid tumors. This is, in part, due to a lack of specific antigen targets, poor trafficking/infiltration and immunosuppression in the tumor microenvironment. In this study, we combined ACT with oncolytic virus vaccines (OVV) to drive expansion and tumor infiltration of transferred antigen-specific T cells, and demonstrated that the combination is highly potent for the eradication of established solid tumors. Consistent with other successful immunotherapies, this approach elicited severe autoimmune consequences when the antigen targeted was a self-protein. However, modulation of IFNα/β signaling, either by functional blockade or rational choice of an OVV backbone, ameliorated autoimmune side effects without compromising antitumor efficacy. Our study uncovers a pathogenic role for IFNα/β in facilitating autoimmune toxicity during cancer immunotherapy and offers a safe and powerful combinatorial regimen with immediate translational applications. View Publication
A. Wroblewska et al. (NOV 2018) Cell 175 4 1141--1155.e16
Protein Barcodes Enable High-Dimensional Single-Cell CRISPR Screens.
CRISPR pools are being widely employed to identify gene functions. However, current technology, which utilizes DNA as barcodes, permits limited phenotyping and bulk-cell resolution. To enable novel screening capabilities, we developed a barcoding system operating at the protein level. We synthesized modules encoding triplet combinations of linear epitopes to generate >100 unique protein barcodes (Pro-Codes). Pro-Code-expressing vectors were introduced into cells and analyzed by CyTOF mass cytometry. Using just 14 antibodies, we detected 364 Pro-Code populations; establishing the largest set of protein-based reporters. By pairing each Pro-Code with a different CRISPR, we simultaneously analyzed multiple phenotypic markers, including phospho-signaling, on dozens of knockouts. Pro-Code/CRISPR screens found two interferon-stimulated genes, the immunoproteasome component Psmb8 and a chaperone Rtp4, are important for antigen-dependent immune editing of cancer cells and identified Socs1 as a negative regulator of Pd-l1. The Pro-Code technology enables simultaneous high-dimensional protein-level phenotyping of 100s of genes with single-cell resolution. View Publication
Chen Z et al. (SEP 2017) Cell reports 20 11 2584--2597
miR-150 Regulates Memory CD8 T Cell Differentiation via c-Myb.
MicroRNAs play an important role in T cell responses. However, how microRNAs regulate CD8 T cell memory remains poorly defined. Here, we found that miR-150 negatively regulates CD8 T cell memory in vivo. Genetic deletion of miR-150 disrupted the balance between memory precursor and terminal effector CD8 T cells following acute viral infection. Moreover, miR-150-deficient memory CD8 T cells were more protective upon rechallenge. A key circuit whereby miR-150 repressed memory CD8 T cell development through the transcription factor c-Myb was identified. Without miR-150, c-Myb was upregulated and anti-apoptotic targets of c-Myb, such as Bcl-2 and Bcl-xL, were also increased, suggesting a miR-150-c-Myb survival circuit during memory CD8 T cell development. Indeed, overexpression of non-repressible c-Myb rescued the memory CD8 T cell defects caused by overexpression of miR-150. Overall, these results identify a key role for miR-150 in memory CD8 T cells through a c-Myb-controlled enhanced survival circuit. View Publication
Xu MM et al. (AUG 2017) Immunity 47 2 363--373.e5
Dendritic Cells but Not Macrophages Sense Tumor Mitochondrial DNA for Cross-priming through Signal Regulatory Protein α Signaling.
Inhibition of cytosolic DNA sensing represents a strategy that tumor cells use for immune evasion, but the underlying mechanisms are unclear. Here we have shown that CD47-signal regulatory protein α (SIRPα) axis dictates the fate of ingested DNA in DCs for immune evasion. Although macrophages were more potent in uptaking tumor DNA, increase of DNA sensing by blocking the interaction of SIRPα with CD47 preferentially occurred in dendritic cells (DCs) but not in macrophages. Mechanistically, CD47 blockade enabled the activation of NADPH oxidase NOX2 in DCs, which in turn inhibited phagosomal acidification and reduced the degradation of tumor mitochondrial DNA (mtDNA) in DCs. mtDNA was recognized by cyclic-GMP-AMP synthase (cGAS) in the DC cytosol, contributing to type I interferon (IFN) production and antitumor adaptive immunity. Thus, our findings have demonstrated how tumor cells inhibit innate sensing in DCs and suggested that the CD47-SIRPα axis is critical for DC-driven antitumor immunity. View Publication
70025 Human Peripheral Blood Mononuclear Cells, Frozen
18780 EasySep™ Mouse CD11c Positive Selection Kit II
Large Countable Ordinals (Part 1)
I love the infinite.
It may not exist in the physical world, but we can set up rules to think about it in consistent ways, and then it's a helpful concept. The reason is that infinity is often easier to think about than very large finite numbers.
Finding rules to work with the infinite is one of the great triumphs of mathematics. Cantor's realization that there are different sizes of infinity is truly wondrous—and by now, it's part of the everyday bread and butter of mathematics.
Trying to create a notation for these different infinities is very challenging. It's not a fair challenge, because there are more infinities than expressions we can write down in any given alphabet! But if we seek a notation for countable ordinals, the challenge becomes more fair.
It's still incredibly frustrating. No matter what notation we use it fizzles out too soon… making us wish we'd invented a more general notation. But this process of 'fizzling out' is fascinating to me. There's something profound about it. So, I would like to tell you about this.
Today I'll start with a warmup. Cantor invented a notation for ordinals that works great for ordinals less than a certain ordinal called ε0. Next time I'll go further, and bring in the 'single-variable Veblen hierarchy'! This lets us describe all ordinals below a big guy called the 'Feferman–Schütte ordinal'.
In the post after that I'll bring in the 'multi-variable Veblen hierarchy', which gets us all the ordinals below the 'small Veblen ordinal'. We'll even touch on the 'large Veblen ordinal', which requires a version of the Veblen hierarchy with infinitely many variables. But all this is really just the beginning of a longer story. That's how infinity works: the story never ends!
To describe countable ordinals beyond the large Veblen ordinal, most people switch to an entirely different set of ideas, called 'ordinal collapsing functions'. I may tell you about those someday. Not soon, but someday. My interest in the infinite doesn't seem to be waning. It's a decadent hobby, but hey: some middle-aged men buy fancy red sports cars and drive them really fast. Studying notions of infinity is cooler, and it's environmentally friendly.
I can even imagine writing a book about the infinite. Maybe these posts will become part of that book. But one step at a time…
Cardinals versus ordinals
Cantor invented two different kinds of infinities: cardinals and ordinals. Cardinals say how big sets are. Two sets can be put into 1-1 correspondence iff they have the same number of elements—where this kind of 'number' is a cardinal. You may have heard about cardinals like aleph-nought (the number of integers), 2 to the power aleph-nought (the number of real numbers), and so on. You may have even heard rumors of much bigger cardinals, like 'inaccessible cardinals' or 'super-huge cardinals'. All this is tremendously fun, and I recommend starting here:
• Frank R. Drake, Set Theory, an Introduction to Large Cardinals, North-Holland, 1974.
There are other books that go much further, but as a beginner, I found this to be the most fun.
But I don't want to talk about cardinals! I want to talk about ordinals.
Ordinals say how big 'well-ordered' sets are. A set is well-ordered if it comes with a relation ≤ obeying the usual rules:
• Transitivity: if x ≤ y and y ≤ z then x ≤ z
• Reflexivity: x ≤ x
• Antisymmetry: if x ≤ y and y ≤ x then x = y
and one more rule: every nonempty subset has a smallest element!
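A quick non-example, not from the original post, may make that last rule vivid: a linear order can satisfy the first three rules and still fail it. The integers with their usual order are linearly ordered, but the nonempty subset {…, −3, −2, −1} has no smallest element, so the integers are not well-ordered. The natural numbers {0, 1, 2, 3, …} with their usual order are.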
For example, the empty set
∅
is well-ordered in a trivial sort of way, and the corresponding ordinal is called
0
Similarly, any set with just one element, like this:
{0}
is well-ordered in a trivial sort of way, and the corresponding ordinal is called
1
Similarly, any set with two elements, like this:
{0, 1}
becomes well-ordered as soon as we decree which element is bigger; the obvious choice is to say 0 < 1. The corresponding ordinal is called
2
Similarly, any set with three elements, like this:
{0, 1, 2}
becomes well-ordered as soon as we linearly order it; the obvious choice here is to say 0 < 1 < 2. The corresponding ordinal is called
3
Perhaps you're getting the pattern — you've probably seen these particular ordinals before, maybe sometime in grade school. They're called finite ordinals, or "natural numbers".
But there's a cute trick they probably didn't teach you then: we can define each ordinal to be the set of all ordinals less than it:
0 = {} (since no ordinal is less than 0)
1 = {0} (since only 0 is less than 1)
2 = {0, 1} (since 0 and 1 are less than 2)
3 = {0, 1, 2} (since 0, 1 and 2 are less than 3)
and so on. It's nice because now each ordinal is a well-ordered set of the size that ordinal stands for. And, we can define one ordinal to be "less than or equal" to another precisely when it's a subset of the other.
Infinite ordinals
What comes after all the finite ordinals? Well, the set of all finite ordinals
{0, 1, 2, 3, …}
is itself well-ordered. So, there's an ordinal corresponding to this — and it's the first infinite ordinal. It's usually called ω, pronounced 'omega'. Using the cute trick I mentioned, we can actually define
ω = {0, 1, 2, 3, …}
What comes after this? Well, it turns out there's a well-ordered set
{0, 1, 2, 3, …, ω}
containing the finite ordinals together with ω, with the obvious notion of "less than": ω is bigger than the rest. Corresponding to this set there's an ordinal called
ω + 1
As usual, we can simply define
ω + 1 = {0, 1, 2, 3, …, ω}
At this point you could be confused if you know about cardinals, so let me throw in a word of reassurance. The sets ω and ω + 1 have the same cardinality: they are both countable. In other words, you can find a 1-1 and onto function between these sets. But ω and ω + 1 are different as ordinals, since you can't find a 1-1 and onto function between them that preserves the ordering. This is easy to see, since ω + 1 has a biggest element while ω does not.
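To make that concrete, here is one such 1-1 and onto function (an illustration of ours, not from the post): define f from ω + 1 to ω by
f(ω) = 0, and f(n) = n + 1 for each finite n.
This matches the two sets up perfectly, so they have the same cardinality, but it is not order-preserving: the biggest element ω gets sent to the smallest element 0.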
Indeed, all the ordinals in this series of posts will be countable! So for the infinite ones, you can imagine that all I'm doing is taking your favorite countable set and well-ordering it in ever more sneaky ways.
Okay, so we got to ω + 1. What comes next? Well, not surprisingly, it's
ω + 2
Then comes
ω + 3, ω + 4, ω + 5, …
and so on. You get the idea.
I haven't really defined ordinal addition in general. I'm trying to keep things fun, not like a textbook. But you can read about it here:
• Wikipedia, Ordinal arithmetic: addition.
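The recursive definition given there is short enough to state here (a standard fact, included for convenience):
α + 0 = α,   α + (β + 1) = (α + β) + 1,   α + λ = sup { α + β : β < λ } for limit ordinals λ.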
The main surprise is that ordinal addition is not commutative. We've seen that ω + 1 ≠ 1 + ω, since
ω + 1
is an infinite list of things… and then one more thing that comes after all those! But
1 + ω = ω
because one thing followed by a list of infinitely many more is just a list of infinitely many things.
With ordinals, it's not just about quantity: the order matters!
ω+ω and beyond
Okay, so we've seen these ordinals:
1, 2, 3, …, ω, ω + 1, ω + 2, ω + 3, …
Well, the ordinal after all these is called ω + ω. People often call it "omega times 2", or ω·2 for short. So,
ω·2 = ω + ω
It would be fun to have a book with ω pages, each page half as thick as the previous page. You can tell a nice long story with an ω-sized book. I think you can imagine this. And if you put one such book next to another, that's a nice picture of ω·2.
It's worth noting that ω·2 is not the same as 2·ω. We have
2·ω = 2 + 2 + 2 + ⋯ = ω
where we add ω of these terms. But
ω·2 = ω + ω
which is bigger than ω.
This is not a proof, because I haven't given you the official definition of how to multiply ordinals. You can find it here:
• Wikipedia, Ordinal arithmetic: multiplication.
Using this you can prove that what I'm saying is true. Nonetheless, I hope you see why what I'm saying might make sense. Like ordinal addition, ordinal multiplication is not commutative! If you don't like this, you should study cardinals instead.
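For reference, the standard recursive definition of ordinal multiplication (the same one the article above uses) is:
α · 0 = 0,   α · (β + 1) = α · β + α,   α · λ = sup { α · β : β < λ } for limit ordinals λ.
With this convention, 2·ω = sup { 2·n : n < ω } = ω, while ω·2 = ω·1 + ω = ω + ω, which is exactly the asymmetry described above.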
What next? Well, then comes
ω·2 + 1, ω·2 + 2, ω·2 + 3, …
and so on. But you probably have the hang of this already, so we can skip right ahead to
ω·3
In fact, you're probably ready to skip right ahead to ω·4, and ω·5, and so on.
In fact, I bet now you're ready to skip all the way to "omega times omega", or ω^2 for short:
ω·ω = ω^2
Suppose you had an encyclopedia with ω volumes, each one being a book with ω pages. If each book is twice as thin as one before, you'll have ω^2 pages — and it can still fit in one bookshelf! Here's the idea:
What comes next? Well, we have
ω^2 + 1, ω^2 + 2, ω^2 + 3, …
and so on, and after all these come
ω^2 + ω, ω^2 + ω·2, ω^2 + ω·3, …
and so on — and eventually
ω^2 + ω^2 = ω^2·2
and then a bunch more, and then
ω^2·3
and then a bunch more, and more, and eventually
ω^2·ω = ω^3
You can probably imagine a bookcase containing ω encyclopedias, each with ω volumes, each with ω pages, for a total of ω^3 pages. That's
ω^3
ω^ω
I've been skipping more and more steps to keep you from getting bored. I know you have plenty to do and can't spend an infinite amount of time reading this, even if the subject is infinity.
So if you don't mind me just mentioning some of the high points, there are guys like ω^4 and ω^5 and so on, and after all these comes
ω^ω
Let's try to imagine this! First, imagine a book with ω pages. Then imagine an encyclopedia of books like this, with ω volumes. Then imagine a bookcase containing ω encyclopedias like this. Then imagine a room containing ω bookcases like this. Then imagine a floor of a library with ω rooms like this. Then imagine a library with ω floors like this. Then imagine a city with ω libraries like this. And so on, ad infinitum.
You have to be a bit careful here, or you'll be imagining an uncountable number of pages. To name a particular page in this universe, you have to say something like this:
the 23rd page of the 107th book of the 20th encyclopedia in the 7th bookcase in the 0th room on the 1000th floor of the 973rd library in the 6th city on the 0th continent on the 0th planet in the 0th solar system in the…
But it's crucial that after some finite point you keep saying "the 0th". Without that restriction, there would be uncountably many pages! This is just one of the rules for how ordinal exponentiation works. For the details, read:
• Wikipedia, Ordinal arithmetic: exponentiation.
As they say, for finite exponents ordinal exponentiation is just iterated multiplication. But for infinite exponents, the definition may not be obvious.
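For the case we need here, the standard recursive definition (a standard fact, stated just for ω to keep things simple) is:
ω^0 = 1,   ω^(β + 1) = ω^β · ω,   ω^λ = sup { ω^β : β < λ } for limit ordinals λ.
In particular ω^ω = sup { ω^n : n < ω }, and every ordinal below ω^ω is described by finitely many nonzero 'digits', which is exactly why the addresses above must end in an infinite run of "the 0th".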
Here's a picture of ω^ω, taken from David Madore's wonderful interactive webpage:
On his page, if you click on any of the labels for an initial portion of an ordinal shown here, the picture will expand to show that portion!
And here's another picture, where each turn of the clock's hand takes you to a higher power of ω:
Ordinals up to ε0
Okay, so we've reached ω^ω. Now what?
Well, then comes ω^ω + 1, then ω^ω + 2, and so on, but I'm sure that's boring by now. And then come ordinals like
ω^ω·2, ω^ω·3, ω^ω·4, …
leading up to
ω^ω·ω = ω^(ω+1)
Then eventually come ordinals like
ω^(ω+2), ω^(ω+3), ω^(ω+4), …
and so on, leading up to
ω^(ω+ω) = ω^(ω·2)
This actually reminds me of something that happened driving across South Dakota one summer with a friend of mine. We were in college, so we had the summer off, so we drove across the country. We drove across South Dakota all the way from the eastern border to the west on Interstate 90.
This state is huge — about 600 kilometers across, and most of it is really flat, so the drive was really boring. We kept seeing signs for a bunch of tourist attractions on the western edge of the state, like the Badlands and Mt. Rushmore — a mountain that they carved to look like faces of presidents, just to give people some reason to keep driving.
Anyway, I'll tell you the rest of the story later — I see some more ordinals coming up:
ω^(ω·3), ω^(ω·4), ω^(ω·5), …
We're really whizzing along now just to keep from getting bored — just like my friend and I did in South Dakota. You might fondly imagine that we had fun trading stories and jokes, like they do in road movies. But we were driving all the way from Princeton to my friend Chip's cabin in California. By the time we got to South Dakota, we were all out of stories and jokes.
Hey, look! It's
ω^(ω·ω) = ω^(ω^2)
That was cool. Then comes
ω^(ω^3)
Anyway, back to my story. For the first half of our trip across the state, we kept seeing signs for something called the South Dakota Tractor Museum.
Oh, wait, here's an interesting ordinal:
ω^(ω^ω)
Let's stop and take a look:
That was cool. Okay, let's keep driving. Here comes
ω^(ω^ω) + 1, ω^(ω^ω) + 2, …
After a while we reach
ω^(ω^ω + 1)
This is pretty boring; we're already going infinitely fast, but we're still just picking up speed, and it'll take a while before we reach something interesting.
Anyway, we started getting really curious about this South Dakota Tractor Museum — it sounded sort of funny. It took 250 kilometers of driving before we passed it. We wouldn't normally care about a tractor museum, but there was really nothing else to think about while we were driving. The only things to see were fields of grain and these signs, which kept building up the suspense, saying things like
ONLY 100 MILES TO THE SOUTH DAKOTA TRACTOR MUSEUM!
We're zipping along really fast now:
What comes after all these?
At this point we need to stop for gas. Our notation for ordinals just ran out!
The ordinals don't stop; it's just our notation that fizzled out. The set of all ordinals listed up to now — including all the ones we zipped past — is a well-ordered set called $\epsilon_0$,
or "epsilon-nought". This has the amazing property that $\omega^{\epsilon_0} = \epsilon_0$.
And it's the smallest ordinal with this property! It looks like this:
It's an amazing fact that every countable ordinal is isomorphic, as a well-ordered set, to some subset of the real line. David Madore took advantage of this to make his pictures.
Cantor normal form
I'll tell you the rest of my road story later. For now let me conclude with a bit of math.
There's a nice notation for all ordinals less than $\epsilon_0$, called 'Cantor normal form'. We've been seeing lots of examples. Here is a typical ordinal in Cantor normal form:
The idea is that you write it out using just + and exponentials and 1 and $\omega$.
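For instance, one such expression, chosen here just as an illustration (not necessarily the example from the original figure), is

$\omega^{\omega^{\omega} + \omega^{2}\cdot 3}\cdot 2 + \omega^{\omega}\cdot 5 + \omega^{3} + 7.$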
Here is the theorem that justifies Cantor normal form:
Theorem. Every ordinal $\alpha$ can be uniquely written as
$\alpha = \omega^{\beta_1} c_1 + \omega^{\beta_2} c_2 + \cdots + \omega^{\beta_k} c_k$
where $k$ is a natural number, $c_1, \dots, c_k$ are positive integers, and $\beta_1 > \beta_2 > \cdots > \beta_k$ are ordinals.
It's like writing ordinals in base $\omega$.
Note that every ordinal can be written this way! So why did I say that Cantor normal form is nice notation for ordinals less than $\epsilon_0$? Here's the problem: the Cantor normal form of $\epsilon_0$ is $\epsilon_0 = \omega^{\epsilon_0}$.
So, when we hit $\epsilon_0$, the exponents can be as big as the ordinal we're trying to describe! So, while the Cantor normal form still exists for ordinals $\ge \epsilon_0$, it doesn't give a good notation for them unless we already have some notation for ordinals this big!
This is what I mean by a notation 'fizzling out'. We'll keep seeing this problem in the posts to come.
But for an ordinal $\alpha$ less than $\epsilon_0$ something nice happens. In this case, when we write $\alpha$ in Cantor normal form,
all the exponents are less than $\alpha$. So we can go ahead and write them in Cantor normal form, and so on… and because ordinals are well-ordered, this process ends after finitely many steps.
So, Cantor normal form gives a nice way to write any ordinal less than $\epsilon_0$ using finitely many symbols! If we abbreviate $\omega^0$ as $1$ and write multiplication by positive integers in terms of addition, we get expressions like this:
They look like trees. Even better, you can write a computer program that does ordinal arithmetic for ordinals of this form: you can add, multiply, and exponentiate them, and tell when one is less than another.
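To make that concrete, here is a hedged sketch of how such a program might start (written in Python, handling only comparison and addition, with an ordinal below $\epsilon_0$ stored as its Cantor normal form; this is an illustration, not code from the post):

    # An ordinal below epsilon_0 is a tuple of (exponent, coefficient) pairs,
    # with exponents strictly decreasing and themselves ordinals of the same kind.
    # The empty tuple represents 0.
    ZERO = ()
    ONE = (((), 1),)        # omega^0 * 1
    OMEGA = ((ONE, 1),)     # omega^1 * 1

    def cmp_ord(a, b):
        """Return -1, 0 or 1 according to whether a < b, a == b or a > b."""
        for (ea, ca), (eb, cb) in zip(a, b):
            c = cmp_ord(ea, eb)        # compare exponents recursively
            if c != 0:
                return c
            if ca != cb:               # then compare coefficients
                return -1 if ca < cb else 1
        if len(a) != len(b):           # a proper prefix is the smaller ordinal
            return -1 if len(a) < len(b) else 1
        return 0

    def add(a, b):
        """Ordinal addition: terms of a below b's leading exponent get absorbed."""
        if not b:
            return a
        e = b[0][0]
        kept = tuple(t for t in a if cmp_ord(t[0], e) > 0)
        same = [t for t in a if cmp_ord(t[0], e) == 0]
        head = (e, same[0][1] + b[0][1]) if same else b[0]
        return kept + (head,) + b[1:]

    # 1 + omega = omega, but omega + 1 > omega: addition is not commutative.
    assert add(ONE, OMEGA) == OMEGA
    assert cmp_ord(add(OMEGA, ONE), OMEGA) == 1

Multiplication and exponentiation can be layered on top in the same style; the point is just that every ordinal below $\epsilon_0$ is a finite piece of data.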
So, there's really no reason to be scared of $\epsilon_0$. Remember, each ordinal is just the set of all smaller ordinals. So you can think of $\epsilon_0$ as the set of tree-shaped expressions like the one above, with a particular rule for saying when one is less than another. It's a perfectly reasonable entity. For some real excitement, we'll need to move on to larger ordinals. We'll do that next time.
• Wikipedia, Cantor normal form.
69 Responses to Large Countable Ordinals (Part 1)
Well, the part of large ordinals I like is the theorem that states that for every increasing function f(x) with certain properties there is a fixed point — an ordinal a such that f(a) = a. Epsilon is the smallest fixed point of omega^x, but the function that assigns epsilon-x to x (the x-th ordinal with this property) also has a fixed point, and the function that assigns to x the x-th fixed point of THIS function also has a fixed point…
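(For what it's worth, the usual short argument for that theorem: if $f$ is increasing and continuous on the ordinals, then for any starting ordinal $\alpha_0$, the ordinal $\alpha = \sup\{\alpha_0,\ f(\alpha_0),\ f(f(\alpha_0)),\ \dots\}$ satisfies $f(\alpha) = \alpha$, so fixed points exist above every ordinal.)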
Yes, that theorem is coming in Part 3, after lots of examples in Part 2!
Carsten Führmann says:
I really enjoyed this. You are a fabulous writer, John, maybe I can learn from this.
Thanks! I have some tricks for writing, which I should explain someday. I started writing a paper about this:
• Why mathematics is boring.
but I got bored and never finished it.
Thanks for the reference! It turns out that your text is a clearer, more eloquent version of principles I figured out over the years, but haven't completely mastered for lack of practice. But here is something that just occurred to me: if one were to write a journal paper, or a conference paper, should one maybe pretend that the reader hails from outside the field and must be seduced to read the text? Just to make things more pleasant for all involved? (I'm saying this because I still remember how tiring reviewing of papers could be.)
Christopher Menzel says:
Really enjoyable exposition. One minor terminological suggestion: Since you've said that cardinals measure how big sets are, it introduces an unnecessary ambiguity also to say that ordinals measure how big well-ordered sets are, since (as is clear from your exposition) the two notions of bigness come apart for infinite ordinals: ω+1 is bigger than ω in the ordinal sense but not in the cardinal sense. Perhaps it would be better to say that ordinals measure how "long" well-ordered sets are? Likewise, while you introduce the idea of one ordinal being "less than or equal to" another, you subsequently more often speak of one being smaller/bigger than another instead of being less than/greater than another. Similarly for "least/greatest": It might be preferable, e.g., to express well-foundedness as the rule that every nonempty subset has a least (rather than smallest) element, and to say that ω+1 has a greatest (rather than biggest) element while ω does not.
You're probably planning to offer some recommendations for further study of the ordinals, but I'll mention two that have been very useful to me over the years: Levy's Basic Set Theory and Devlin's The Joy of Sets.
The distinction between cardinals and ordinals is also reflected in ordinary language. When we say, "one", "two", etc., we are referring to cardinals. When we say, "first", "second", etc., we are referring to ordinals. Of course Cantor went much further in clarifying what this really means.
Abel Wolman says:
Yet we don't ordinarily say first plus first equals second, though we do identify the finite ordinals with the natural numbers.
So? Obviously I wasn't trying to suggest that all aspects of ordinal arithmetic are going to be reflected in ordinary language in such a simplistic way.
However, we do have the idea of placing one ordered set after another (which comes first, which comes second), as when people form a queue; this corresponds to ordinal addition. Whereas in cardinal arithmetic (a la Cantor), when we add cardinals, such a conception (which comes first, which comes second) is not the relevant matter — cardinals are treated as abstract sets, and the process is one of decategorifying the category of sets (as John has emphasized many times in various places).
I am burdened by a nineteenth century mentality here. I am still trying to understand what the set-theoretic conception of the natural numbers is (aware that this sort of questioning has been preempted by topos-theoretic relativism and other modern developments). As children we are taught to imagine the natural numbers as ordered, equally-spaced points on the number line. This geometric intuition still informs our thought processes as when, for example, we debate whether or not the ordinals are 'long' rather than 'big,' or when we think of cardinal and ordinal bigness as analogous to measures of weight or height (restricted, that is, to finite sets). Now, naively, I imagine that in ZF, or whatever set-theoretic foundations, it is sets all the way down, and therefore I have tried to locate in set theory this childhood geometric intuition. Specifically, what sets instantiate the equal spacing of the natural numbers? Finite ordinals, von Neumann's, are strange in this regard in that they are transitive, satisfy Peano's axioms, but seem to be unequally spaced by virtue of the definition of the successor operation. I suspect this issue (if it is one) accrues to any construction of the ordinals (natural numbers) in a set theory with an axiom of extension.
As you point out, if we follow von Neumann's 'neat trick' and identify cardinals with ordinals (though the trick seems necessitated by a broader foundational goal, the hierarchy of sets), then cardinality is also a nonintuitive measure of (finite) bigness since we don't seem to have a way of saying how much bigger one cardinal is than another, that is, to reckon the spacing between sets in terms of their cardinality.
When you speak of 'abstract sets,' you are referring, I think, to Cantor's (and Lawvere's) 'lauter Einsen' and cardinals are then his 'Kardinalen' formed by 'double abstraction' (as Christopher notes). In this case, there is indeed a notion of equal spacing in that 'lauter Einsen' means 'nothing but many units' (Lawvere, "Cohesive Toposes and Cantor's 'lauter Einsen'"), and one of these 'distinct but otherwise indistinguishable units' (Christopher) can be used to define the spacing. However, as Lawvere points out, Cantor's 'Kardinalen' are completely different from the 'cardinals' discussed in most set theory texts. Zermelo found Cantor's double abstraction construction inconsistent based as it was on distinct elements having no distinguishing properties. And while Lawvere argues for the productivity of this contradiction, I think his categorical reasoning, lovely as it is, absorbs the contradiction rather than solves it at the level of sets with extensionality: the axiom of extensionality seems to preclude the existence of unit sets. (Cantor's 'lauter Einsen' also satisfy a novel statistics, but that is a different matter.)
Decategorification of the category of sets has the natural numbers waiting so to speak; we descend to a familiar and well-known, albeit impoverished, mathematical object. Unfortunately, as revealed above, I am still confused about the set-theoretic nature of even this impoverished (finite) world.
Dear Abel: interesting though this discussion could be (and I see you've put a lot of thought into your remarks), I'm a little worried about hijacking the comment section by discussing philosophies of foundations, which as experience shows often continue for a long time. (I thought my initial remark about ordinals in natural language to be innocuous, but one thing leads to another…) Plus, experience shows that foundational discussions can get pretty heated, which I'd much rather avoid if possible.
I'll try to reply, but I fear this won't be adequate. I fear in fact that everything I'm about to say is already known to you, although possibly not known to everyone reading this.
Since you seem to have read Lawvere, you probably well know that he is totally opposed to the idea of "sets all the way down" with concomitant iterated membership chains, which is a signal feature of set theories based on a global membership relation, where you can take two arbitrary sets $x$ and $y$ and always ask "$x \in y$?" (In the nLab, these are called "material set theories": one has the idea that elements of sets have themselves "substance". As opposed to "structural set theories", where the focus is not on what sets and their elements "are" ontologically, but what they "do" operationally, as expressed e.g. in terms of universal elements.) Lawvere rejects material set theory in part because focusing on what "elements are, really" is sterile, fruitless from a mathematician's perspective.
So what does he propose to do about it? Does he have an alternative? Again, you may well be aware of his Elementary Theory of the Category of Sets, ETCS for short, which may be viewed as a prototype structural set theory. I do think this proposal resolves the alleged contradiction brought up by Zermelo: in this axiomatic set-up, elements in and of themselves are indeed featureless. Perhaps a more accurate description is that they acquire features only in terms of their interplay within a structure, and are evidently distinct within that structure.
The notion of a natural numbers object (also due to Lawvere) is a good example. Instead of attempting to say what the successor function is exactly (e.g., $S(n) = n \cup \{n\}$ or something), we simply axiomatize it, giving enough properties so that we can do recursion. In fact his definition is beautifully elegant, given in terms of a universal property that says the structure $(0, S)$ realizes $\mathbb{N}$ as an initial algebra of the endofunctor $X \mapsto 1 + X$ (taking the coproduct with a terminal object). Being given by a universal property, $\mathbb{N}$ is uniquely determined up to uniquely specified isomorphism, which is all we ever need (or even want). I think this conception of $\mathbb{N}$ fits well with your childhood conception: the definable elements are equally spaced on a line, separated in unit distance by an application of $S$. Other than their roles within the structure, these elements in and of themselves have no distinguishable features to speak of, but they are clearly distinct as part of the structure.
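(Spelled out, with generic letters as placeholders: a natural numbers object is an object $\mathbb{N}$ equipped with maps $0: 1 \to \mathbb{N}$ and $S: \mathbb{N} \to \mathbb{N}$ such that for any object $X$ with a point $x: 1 \to X$ and a map $f: X \to X$, there is a unique $h: \mathbb{N} \to X$ with $h \circ 0 = x$ and $h \circ S = f \circ h$; this is equivalent to the initial-algebra formulation just mentioned.)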
The last question, about extensionality, is perhaps the most important one. How does Lawvere propose to test for equality of sets? Isn't that an important consideration in mathematics? His answer is I think very interesting, and far-reaching: it's that a mathematician is interested in asking "is " essentially only in situations where are already given as subsets of a third set (via injective functions ). That is, a mathematician would never think to ask if two sets plucked at random from the universe are equal, or it would be anyway pointless to ask: it's only when the two sets partake within the same prior-ly given contextual framework, as subsets of the universe of discourse as they say, that a mathematician even thinks to pose such a question. And for that, the axioms of ETCS do allow for an extensionality principle, where subsets of a given set can be considered equal iff those elements of the given ambient set that they contain are the same. More precisely, we say a subset "contains" an element if factors through , and then the extensionality principle gives the condition that qua subsets of .
(Apologies for such a long comment.)
Todd wrote:
I'm a little worried about hijacking the comment section by discussing philosophies of foundations, which as experience shows often continue for a long time.
Since this is a post about infinity, perhaps that's appropriate.
Seriously, I don't mind people talking about the philosophy of mathematics in this thread. Such discussions are only annoying if you care who wins.
Currently I'm interested in large countable ordinals not for philosophical or foundational reasons, but just for ordinary mathematical reasons: in other words, because it's a fun challenge to try to understand them.
Of course this challenge has a particular flavor, which David Madore has tried to psychoanalyze (in French here, or here in a bad English translation). He mentions that children will ask questions like "who would win in a fight, Darth Vader or Spiderman?", and adds:
[…] as children we have a taste for total order relations, and that taste does not disappear completely, at least not at home, when we discover that in fact the world is not so simple, and that two things are not always comparable.
Ordinals are somehow a sublimation of these children's games: given two ordinals, one of them is always larger (stronger, more powerful, more infinite); also, in almost all cases we meet, the larger is so monstrously larger that the smaller could essentially be the number 1; and every time we have a set of ordinals, there is one that is the smallest of all, and there is an ordinal that is larger than everything in this set.
Todd: I appreciate your concerns about discussions of philosophies of foundations, and your thoughtful, gentle response to my detour into the same. I also appreciate John's tolerance of potentially long finite threads in a post about the infinite. Of course, no good deed or deeds can go unpunished:
Benacerraf (in McLarty, nLab) maintains that since ZF-educated Ernie and Johnny have "equal claims to know what numbers are," but differ as to which sets are the numbers, numbers cannot be sets. Instead, says Benacerraf, Ernie and Johnny have learned about "progressions" and arithmetic is "the science that elaborates the abstract structure that all progressions have in common merely in virtue of being progressions." Now, not all progressions are equally spaced, so why should we expect a structural account of equally spaced numbers? In particular, how does axiomatizing a successor operation with enough properties to do recursion lead to the conception that the definable elements are equally spaced on a line? Rather, wouldn't equal spacing be one of those "uniquely individuating properties … irrelevant to arithmetic" (McLarty) and therefore superfluous to structural set theory?
Suppose fundamental to Ernie and Johnny's conceptualization of natural number is that numbers be equally spaced. In this case, it seems neither Ernie nor Johnny have actually learned about numbers in ZF despite the fact that they can do arithmetic. Would you agree with the assertion that in material set theories with an extensionality axiom equally-spaced natural numbers cannot be defined? If they cannot be defined in material set theories, are they definable in a structural theory?
In the category of sets, the coproduct is disjoint union. In material set theories disjoint unions of copies of sets rely on some sort of indexing. Is there a similar requirement in structural set theories?
I initially thought that equal spacing was accounted for in structural set theories because structural 'elements' were essentially Cantor's 'lauter Einsen,' but I suspect now that even 'lauter Einsen' are too material to be considered elements of a structural set. True?
My 'fascination' (Madore) with the (finite) ordinals stems in part from measurement theory. From a measurement perspective, ordinals (at least of the material sort) are ordinal. They define a heterogeneous hierarchy (in ZF, the backbone of the cumulative hierarchy) having heterogeneous differences (when defined), and therefore are not quantities. In fact, from a measurement point of view, material set theories cannot be used to represent either multitudes or magnitudes, that is, quantities as traditionally (Euclid, Newton, Maxwell) understood. Although finite ordinals support arithmetic, without control over the spacing between ordinals it is not clear how this arithmetic relates to measurement. Now, it is claimed that "ordinals are often used to measure things," and that the rank function "is an operation which measures, in a sense, the number of steps in which a set is built up from the empty set" (van Dolen et al., Sets: Naive, Axiomatic and Applied, 1978), but if ordinals are ordinal, then ordinals and the rank function don't quantitatively measure anything.
Abel, it's entirely possible I'm not grasping what you mean to say, but if I'm reading you right, then frankly I'm not interested in pursuing this line of inquiry; sorry. I'll say that if one has an absolute sense of what 'equally spaced' should mean, one which escapes being characterized up to isomorphism, then I'll very gladly cede the argument and move on.
Maybe it would help to consider an example. The first example will be the natural numbers, however you'd like to picture them that makes them "equally spaced" in your mind — if you believe in that. The second is the set of powers of 2: $\{1, 2, 4, 8, \dots\}$. The successor operation in the first case is "add 1"; in the second it's "multiply by 2". (For each of these examples let us also distinguish the initial elements: 0 in the first, 1 in the second.)
As algebras of the endofunctor $X \mapsto 1 + X$ on the category of sets, these two structures are isomorphic, and uniquely so as algebras. Structural set theory has nothing more to add: for all mathematical purposes, these structures can be regarded as 'identical' in nature, in that any mathematically relevant property you wish to assert of one translates perfectly to the other.
If you nonetheless wish to maintain that the powers of 2 are not "equally spaced" — as is your right — then I really have nothing more to say, except perhaps that some might argue that "equally spaced" is in the eye of the beholder (e.g., just look at things on a logarithmic scale). But it's not an argument I want to spend time pursuing: it is very much part of the "structuralist conception" (if I may put it that way) that mathematically relevant properties are invariant under isomorphism, meaning that other considerations are unimportant qua mathematics.
You did ask something else: "In the category of sets, the coproduct is disjoint union. In material set theories disjoint unions of copies of sets rely on some sort of indexing. Is there a similar requirement in structural set theories?"
In some set-ups, such as Lawvere's original account of ETCS, one just posits as an axiom that coproducts exist (an adequate response, since any two constructions of $A + B$ are in any case isomorphic up to unique structure-preserving isomorphism). Now in other accounts, such as ones starting with the notion of topos as finitely complete cartesian closed category with subobject classifier, one does have to go to the trouble of constructing coproducts (because their existence is not given as an axiom), but the question of whether any two such constructions are literally 'the same', as opposed to being 'merely isomorphic', can't really be answered: it has no answer that is invariant under equivalence. This makes things rather different from material set theory, where extensionality gives an absolute meaning.
So I think the answer to your question is 'no', if by different ways of relying on indexing in a construction, you mean things that can be distinguished within the theory, as would be the case in ZF.
This is not to sweep matters under the carpet! The issues that surround the notion of equality in structural accounts can be quite tricky and subtle. You can find careful discussion in the HoTT book. The nLab has this article.
Todd: Thank you for pursuing this line of inquiry as long as you have.
My surmise is that (extensional) material set theories do not have unit sets instantiating material successor operations. Note, unit sets are not ZF singletons. Cantor's 'Kardinalen' allow for abstract unit sets, but I cannot find these abstract sets among Zermelo et al.'s material 'Mengen'. The existence of material unit sets is what I mean by equal spacing.
If I understand your example correctly, structural set theories have a surfeit of 'units' corresponding to different successor operations; however these are all identified, lying in a sense in the same orbit under the transitive action of monotone endofunctors.
Now, if equal spacing is in the eye of the structuralist beholder, and all mathematically relevant properties of the natural numbers are invariant, for example, under the endofunctor , are we indifferent to 4 sheep versus 8? More generally, in measurement theory counting is understood to be an absolute scale, the automorphism group of the relevant empirical structure is trivial, how to (need one?) square this with the existence of nontrivial endofunctors on natural numbers?
The above pertains to what decategorification signifies. The first shepherd to count her sheep did so without benefit of a natural number object. Indeed, as legend has it this shepherd's 'stroke of mathematical genius' was the invention of decategorified numbers. Where do these decategorified numbers, and the counting process they support, live?
Abel, if you're still reading: sorry for the delay in response. The "equal spacing" in response times might be on a logarithmic scale. :-) (I hear an echo of a famous story about Esenin-Volpin.)
So I have to admit that I had never sat down to read the Lawvere paper you referred to before (Cohesive Toposes and Cantor's 'lauter Einsen'), although I have read a fair amount of Lawvere in my time. But I'm looking through it now. Lawvere is in my opinion sometimes not so easy to read and requires a certain amount of hermeneutic analysis — hopefully not of the editorial kind that he finds troublesome as e.g. Zermelo's interpolations on Cantor.
I'd like first to respond a little more to an earlier comment of yours, and I'll come to your most recent comment later.
Let me first take a stab at a dictionary. A 'unit set' is nothing more and nothing less than a terminal object in a category of sets, here conceived structurally as in for example ETCS. In such a structural set-up, we don't have a definitional notion of equality of objects, so it is meaningless to ask if two unit sets are literally the same or not; all we can say is they are uniquely isomorphic. Anyway, we do postulate the existence of a terminal object; let us pick one and give it the generic name .
There is some likely ambiguity in the term 'Kardinale', due to this two-stage abstraction process. But at the first stage it just means an abstract set, intuitively a bag of dots, an object of a category , as when Lawvere says (page 8) "a map from a cardinal to the points of a Menge …", or indeed in the very first paragraph of the paper where Myhill observes the identification of Cantor's Kardinalen with abstract sets. (I would guess passage to the second stage is really from Kardinale to Kardinalität.)
So at this stage, we are conceiving a Kardinale as an ensemble of 'lauter Einsen' without any particular geometric cohesion holding it together (as an ellipse or hyperbola, etc.). But an 'Eins' in this context is not identified with a unit set = terminal object, but rather with a map from a terminal set to the cardinal , or a morphism in our category to be more specific. It is important to realize that while there is no definitional equality of objects in a structural set theory, it is part and parcel of such an account that it is meaningful to speak of whether two morphisms (aka elements of ) are equal. (I believe this speaks, partly anyway, to the productive contradiction Lawvere refers to, and which Zermelo apparently regards as a confusion in Cantor.) In fact,one could say that this is the hallmark of sets or Kardinalen: that their only property is that one can identify or distinguish their elements (page 1, first paragraph).
(Actually, Lawvere makes even more precise the sense in which points of a 'Menge' can be seen as distinct in one sense and indistinguishable in another, in terms of two distinct ways of reconstituting the underlying Kardinale as a Menge, via a discrete functor on the one hand and a chaotic functor on the other. This particular passage in his analysis, around page 9, deserves close textual attention in terms of how these concepts are made precise in a cohesive topos, e.g., what is meant by a 'connected space' in this context — this involves a further left adjoint to the discrete functor.)
So the meaning of a Kardinale as consisting of 'lauter Einsen', in my reading, is that is a coproduct of copies of a unit set , where the coproduct inclusions (aka coproduct coprojections) are just the distinct elements . Meaning in particular that a map is uniquely determined by what it does to points . This is one rendering of what extensionality might mean structurally, namely (in categorical lingo), that a terminal object is a generator in the category .
(Although I personally usually reserve 'extensionality' to refer to a principle that is more reminiscent of its use in Zermelo set theory, saying that two subsets and $j: B \hookrightarrow X$ of a set are 'equal' if 'they have the same elements'. Here subobjects are by definition equal if there is a (necessarily unique) isomorphism such that ; by 'having the same elements', I mean that every element "belongs to" (i.e., factors through , necessarily uniquely) iff it belongs to .)
Anyway, to pass to your most recent comment. I think there's probably a problem in how words are being used. But to answer your question, "are we indifferent to 4 sheep versus 8?", my answer is 'of course not'. If you want to use a natural numbers object (say in a suitable theory of a category of categories) to do ordinary counting, a way to formalize that is by reference to a particular category with certain properties such as existence of an initial object (a functor denoted as , where now denotes the terminal category), a terminal object , and existence of chosen coproducts therein (denoted by ). If is a natural numbers object in the category of categories , which will turn out to be a discrete category, then there will be a unique functor which takes the object 'zero' of to the object of , and for which for every object of .
At this point I sound just like the imperious voice of Mathematics Made Difficult. "Thus we can count." Of course the intuition behind the formalism is clear: objects of can be used to parametrize finite sets formed in this recursive manner. So one context in which might live is in a suitable "category of categories", which Lawvere famously attempted to formalize.
A slightly different answer is right within the (better developed) theory ETCS, in which one assumes (postulates) the existence of a natural numbers object in , and then one cleverly uses to internally define within a category object which might be denoted , playing the role of "endlichen Kardinalen" (listen, I don't know German well, so I might have my own problems in word usage). I'll forego the exact construction, in the interest of not making a very long response still longer. But by hook or by crook, one constructs an internal full subcategory of (consisting of "finite sets", precisely parametrized by elements ), and this should give a reasonable answer to your question about where these constructs can be thought of as living.
Todd: Likewise sorry for the delayed response. Your dictionary is most helpful, and yes an Eins is not a unit set. A unit set is an abstract set containing an Eins. Roughly then the traditional definition of a natural number is the numerical relation (ratio) between an abstract set of Einsen and a unit set of one Eins. However, despite Lawvere's efforts (and your lucid exegesis of the same), I agree with Zermelo that these abstract sets (bags of dots, Kardinalen) are not Mengen (the appealing categorical way of relating these notions notwithstanding) and that ZF axiomatizes Mengen. Whatever extensionality might mean structurally, materially, it excludes bags of dots. In particular, there are no unit Mengen. That structural set theories (as theories of abstract sets) have unit sets (terminal objects in the category of sets) is a fortuitous, but I think unintended consequence of the structural approach. Unfortunately, I have not yet, even with your patient efforts, come to terms with how units in this sense (or natural number objects, generally), unique up to isomorphism, represent ordinary counting, meaning counting as a form of quantitative measurement.
My less than fortuitous use of words (previous comment) relates to this last issue. Consider constructing a frequency histogram from a (discrete) data set (bag, multiset). While we recognize that the shape of this histogram depends on certain choices, bin width, for example, we do not normally concern ourselves with the choice of unit set, that is, the (equal) spacing between counts determining the heights of the histogram bars (ignoring the fact that bin width too depends on a choice of unit set). Put differently, we recognize that the shape of a histogram is an artifact of bin selection; we do not normally acknowledge that histogram shape is also an artifact of our choice of unit set (from my problematic "orbit" of isomorphic structural unit sets). This is what I meant earlier by being indifferent between 4 and 8, relating to your observation that "as algebras of the endofunctor …these two structures are isomorphic…Structural set theory has nothing more to add: for all mathematical purposes, these structures can be regarded as 'identical' in nature, in that any mathematical relevant property you wish to assert of one translates perfectly to the other." Given unit spacing is not part of the structuralist conception, I am struggling to translate quotidian matters, such as histogram shape, a statistically, if not mathematically relevant property, into this "invariant under isomorphism" setting.
I suspect an answer to the above is that one must make a choice. For example, you suggest formalizing ordinary counting by "reference to a particular category, ." The downside, from a realist perspective, is that this structuralist choice seems to promote instrumentalist measurement theories.
I use 'big' in different ways in different categories, for example, I'd say that the dimension of a vector space says how big that vector space is, while the rank of a free group says how big that free group is, and either the weight or height or volume of a person says how big that person is. The word 'big' is flexible in this way, so I don't think it's ambiguous in a bad way to say that cardinality says how big sets are while 'ordinality' says how big well-ordered sets are.
I agree that 'long' is more descriptive than 'big' when it comes to ordinals. But if I merely said ordinals say how 'long' well-ordered sets are, without any further explanation, I bet just as many people would be confused as if I'd said 'big'. The only real solution to this problem is to give a good explanation of the difference between cardinals and ordinals, with examples.
Thanks, I haven't seen those! In later posts I'll give lots of references to notations for large countable ordinals. I haven't found any introductory textbooks that explain these nicely, especially when you go beyond Do those books talk about large countable ordinals, or just ordinals in general?
I don't think I can bring myself to agree about your usage. The ambiguity of "big" is indeed innocuous when you are dealing with such conceptually distinct categories as vector spaces, free groups, and persons. But in the case at hand, the potential for confusion due to ambiguity is high because the categories in question are so closely connected. Indeed, they aren't merely connected — in the context of set theory, one is a subcategory of the other, as cardinals are merely a species of ordinal. Hence, your uses of "big" introduce outright ambiguities in a single context: for example, "ω₃ (= ℵ₃) is bigger than all its predecessors" expresses two different propositions depending on which notion of "bigger than" is in play. Granted, by convention, "ω" is usually used in contexts where we're talking about ordinals and "ℵ" when we're talking about cardinals. But that is not uniformly the case; and even if it were, it seems dicey to depend on notational conventions only to resolve potential ambiguities. And surely the risk of confusion is greater for people new to the subject than would be the case if you chose a different term like "long", since the conceptual challenges would be the same but the potential pitfalls the ambiguity in question introduces would be absent. Doesn't that seem right?
As long as we're bringing in categories, we should consider what the morphisms are. In the case of cardinals and their arithmetic, it seems we have in mind using functions between their underlying sets as the morphisms (i.e., following Cantor and treating a cardinal as representing a class of sets). In effect, cardinals are the objects of a skeleton of the category Set. In the case of ordinals and their arithmetic, one has several options, one being to consider them as posets and taking order-preserving functions between them as morphisms. (Another is to take simulations as the morphisms, but that's another story.) Going with either option, it's surely not the case that the category of cardinals is a subcategory of the category of ordinals.
The idea of defining a cardinal as a special type of ordinal is a neat trick (due to von Neumann I believe), but relying too much on that description results in an unfortunate conflation, drawing us away from the root conceptions that Cantor was pursuing in these types of arithmetic, and in particular his conception of what "bigness" should mean for these types (considered sui generis).
Those are all valid points, of course, Todd. I was (perhaps unwisely) using "category" in the informal sense of "conceptual category"; and, as I noted, my comments were relative to the context of set theory and its usual reduction of ordinals to von Neumann ordinals and cardinals to initial ordinals. I do agree, however, that, the elegance and convenience of that reduction aside, it does "draw us away from the root conceptions that Cantor was pursuing" in his development of cardinal and ordinal arithmetic. (This is one reason why I like Levy's approach in his text, since he introduces ordinals first as the order types of well-ordered sets (without explicitly defining what order types are), and only subsequently defines them explicitly à la von Neumann.)
On a related historical note, although it is clear that Cantor made something like the set/proper class distinction (distinguishing well-behaved, mathematically determinable sets from "absolutely infinite" collections like the collection of all ordinals), for him cardinals were the product of a "double act of abstraction" from any given set S — the first, an abstraction from the natures of the elements S, and the second from the order in which they are given. Thus, his account (at least in his famous 1895 text, Contributions to the Founding of the Theory of Transfinite Numbers) was explicitly psychologistic — the double abstraction resulted in an aggregate of distinct but otherwise indistinguishable "units" existing "in our minds."
Thanks for your clarifications and scholarly historical notes, Christopher!
Both Levy and Devlin deal with ordinals in general; I don't know of any texts that focus specifically on large countable ordinals, though I do not know more than a small fraction of all the texts that are out there. I really like Levy's treatment because it's closer to Cantor's exposition in the Beiträge, where cardinals and ordinals are not identified with sets, but, intuitively, with properties of unordered and well-ordered sets, respectively. Thus, initially, Levy defines cardinals and ordinals only implicitly via two (class) functions, Card and Ord, where Card(x) = Card(y) iff x and y can be put into 1-1 correspondence, and Ord(A) = Ord(B), for wosets A and B, just in case A and B are isomorphic. (Wosets are defined as ordered pairs ⟨x,≤⟩, where ≤ well-orders x, and orderings, as usual, are just sets of ordered pairs satisfying certain conditions.) Only after developing the cardinals and ordinals intuitively in this way are the usual explicit definitions of them in terms of the von Neumann ordinals introduced.
arch1 says:
John, I find this topic similar to cosmology, in that it's hard to read about omega-to-the-whatever while taking too seriously my daily frets and cares. Thanks!
I'm glad you consider it a good thing to drop your daily frets and cares for a while and think about
Tracy Hall says:
Very nice post! I think you did skip a step: Instead of , it should be
The equation you suggest is indeed correct; I'll have to see where I made that mistake, and fix it. Thanks!
davidweber2 says:
This seems like a job for up arrow notation at this point, since it seems we've run out of ways to express the ordinals. Although Marek14's comment suggests that there's also an ordinal which is a fixed point of
I enjoyed the tie-in with driving. It gives the sense of accelerating at an ever increasing rate, sort of like an inverse light-speed limit, where instead of more energy giving less speed, we're getting more speed with less energy.
Up arrow notation doesn't seem to work very well with ordinals. There are amateur attempts to figure it out, e.g. here:
• Tetration and higher-order operations on transfinite ordinals , Tetration Forum.
but people who try to be careful about this note difficulties. I'm having trouble finding good discussions of those—the first comment on the above page merely says there are problems but "I'm leaving out the details 'cos it's not very interesting."
In any event, the good way to go beyond $\epsilon_0$ is rather different. See Part 2, coming soon!
Jonathan Bowers has a notation here (http://polytope.net/hedrondude/array.htm) though I'm not sure how good it actually is.
Deedlit says:
Hi John. The natural extension of Knuth up arrow notation to ordinals would be to define
and to take appropriate limits at limit ordinals.
The problem with this is that the notation gets "stuck" at $\epsilon_0$. Clearly we have
but then
and therefore
for all .
But arrow notation actually works out alright, provided you use "down arrow notation" rather than "up arrow notation", where
and again you take appropriate limits at limit ordinals.
Unlike up arrow notation, down arrow notation gives you all strictly increasing functions, so it never gets stuck.
Interestingly, on the natural numbers down arrow notation grows half as fast as the up arrow notation; specifically,
Similarly, more or less matches up with the Veblen hierarchy , but for every step increases, you need to increase by two!
Thanks a lot! I think you're stating the problem I'd seen with up arrow notation for infinite ordinals, and also offering a work-around. That's really nice!
Łukasz Lew says:
Here is some quite interesting discussion. Would you care to comment on it?
Alas, I haven't thought about ordinal tetration for a long time, so I couldn't say anything interesting without some time to review what I once knew!
Gro-Tsen says:
If I'm allowed to drop a shameless self-plug here, I made an interactive JavaScript page some years ago that might help with visualizing the first countable ordinals or navigate them: http://www.madore.org/~david/math/drawordinals.html#?v=e (there's no instruction manual, but various bits of the page are clickable). For those who can read JavaScript, the source might be interesting also.
Yay! Thanks! I'd been looking around for that webpage, without success, while writing these articles. If you don't mind, I'll use some of the pictures to illustrate my articles. I'll cite you, of course.
I see that "Gro-Tsen" ≅ "David Madore".
Please feel free to reuse these images in any way you like, both they and the code used to generate them are in the public domain. Of course, for some specific small ordinals, hand-crafted images can be more illuminating than computer-generated ones. (Incidentally, I also created the image you got from Wikipedia for ω². Not that there's anything remarkable about it, but there's this question of whether it's more enlightening to have the positions of the successive sticks follow a geometric progression or a harmonic one like a perspective effect. I think the ω² picture uses a harmonic placement whereas my JavaScript page uses a geometric one.)
Also, if I'm allowed one more self-plug, for those who can read French, here's a not-so-much-mathematical-as-psychological reflection on why I think ordinals are fascinating: http://www.madore.org/~david/weblog/d.2015-12-11.2341.html#d.2015-12-11.2341
Another good introduction to ordinals can be found on David Madore's blog at http://www.madore.org/~david/weblog/d.2011-09-18.1939.nombres-ordinaux-intro.html#d.2011-09-18.1939
Good except that it's written in a language I don't understand very well.
Darko Mulej says:
Am I missing something elementary, in that I don't find the sentence
and because ordinals are well-ordered, this process ends after finitely many steps.
evident?
What is wrong with infinite descending sequence of natural numbers? Of course, when one specific number is picked, then remain only finitely many below it.
The statement you quote relies on this fact. Suppose we have a sequence of ordinals $\alpha_1 \ge \alpha_2 \ge \alpha_3 \ge \cdots$.
Then for some $N$ we have $\alpha_n = \alpha_N$ for all $n \ge N$.
In words: a non-increasing sequence of ordinals must become constant after finitely many steps.
This is easy to prove given the fact that every nonempty set of ordinals has a least element: just consider the set $\{\alpha_1, \alpha_2, \alpha_3, \dots\}$, let $\alpha_N$ be its least element, and note that a non-increasing sequence can never drop below $\alpha_N$, so it equals $\alpha_N$ from the $N$th term on.
Poetically speaking: while ordinals can be infinitely high, when they fall it only takes them a finite time to land. While this fact is built into the definition, I never cease to find it wonderful.
Yes, but my next 'counterexample' (which is infinite and descending) sequence will be different: the first element is omega, followed by all natural numbers (in descending order, of course).
Admittedly, I cannot write down explicitly k-th element …
Layra Idarani says:
That last bit is precisely the problem. A sequence has more structure than a set. In particular, a sequence is (has?) a map from the natural numbers to whatever set the sequence is taken from; in the case of a descending sequence, the map would be order-reversing.
So in order to have a sequence, the k-th element must at least be well-defined, if not necessarily writable.
"Yes, but my next counterexample…" Darko: you should carefully write down your sequence, and then examine it against John's argument. (I'll put it slightly differently: there can be no infinite strictly decreasing sequence, because that would give a nonempty subset with no least element.)
Your ordering ω, …, 2, 1, 0 of the ordinal ω+1 = {0, 1, 2, …, ω} is fine, it's just not a well-ordering of ω+1 (precisely because the subset {0, 1, 2, …} has no "least" element on that ordering). When speaking of an ordinal α, the relevant ordering is left implicit, because it is always the same — in John's definition, the subset relation (restricted to α). And that relation provably well-orders α. That α can be ordered by other relations is not a counterexample to anything he said.
By the way, two very interesting proofs make use of the well-ordering of ordinals:
• Heads of the Hydra.
• Goodstein sequence.
Oh, let me continue to flood this comment thread with self-plugs! I wrote a JavaScript+SVG implementation of the hydra game, http://www.madore.org/~david/math/hydra0.xhtml (works with Firefox and Chrome, I don't know about other browsers), and a similar one against an even more powerful(?) kind of hydra described by the Bachmann-Howard ordinal: http://www.madore.org/~david/math/hydra.xhtml
Cool! I've included some of your pictures of ordinals in my post here—readers should click on them to have some fun.
My series of posts do not go up to the Bachmann–Howard ordinal, but I'm beginning to want to tackle that.
Large Countable Ordinals (Part 2) | Azimuth says:
Last time I took you on a road trip to infinity. We zipped past a bunch of countable ordinals
and stopped for gas at the first one after all these. It's called $\epsilon_0$. Heuristically, you can imagine it like this:
Refurio Anachro says:
Nice stuff, David Madore! Let me chime in and put another must see (shamelessly promoted) below:
https://plus.google.com/+RefurioAnachro/posts/cHtWqA3ZAV4
It links my recent posts about ordinals (and sheep and trees) and explains a tidbit that was missing in the previous one.
And that hydra of yours? It snapped at me, but my reaction sufficed to leave before I lost my head.
linasv says:
J. H. Conway starts his book "On Numbers and Games" by introducing 5 axioms: one for ordering, and one for addition, subtraction, multiplication and division. This allows him to construct not only the ordinals, in more-or-less exactly the same fashion as above, but to also define and show that it is larger than any finite ordinal, but less than . Likewise, he can define and so on, obeying the more-or-less expected inequalities.
I always found the construction elegant, intuitive, interesting … and am continually surprised, to this day, why his construction has not completely overtaken the field, and made the standard set-theoretic ordinal construction obsolete and archaic. Is it because this book is simply obscure, or is there some technical reason? Is there an objection to having these additional axioms? Foundationally, there seem to be no problems that I can see .. it can all be anchored in set theory. So why is his approach not more widespread?
Wikipedia has an article – "Surreal numbers" – on this. It will give a flavor of it. Conway is a much wittier and more lucid writer, though.
Maybe it's because surreals aren't well-ordered? Plus, classic ordinals are still an important subset (numbers whose right set is empty).
One thing that bothers me a bit is also that not all rational numbers are on the same level — binary fractions can be expressed finitely, but nonbinary fractions like 1/3 require infinite construction like general reals.
Royce Peng says:
I also am a little disappointed that surreal numbers have not become popular and been researched more actively. However, Conway's notion that surreal numbers could overthrow the foundations of mathematics seems too far-fetched. (but perhaps he wasn't being serious)
As for using surreal numbers for the construction of ordinals, I don't really see how it helps; surreals are defined by adding {L | R} for sets of surreals L and R, for ordinals this just becomes {L |} for a set of ordinals L, which further simplifies to either taking a successor or taking a limit of ordinals, which is the standard construction.
If we use the sign-expansion definition of surreals, then an ordinal alpha is just a sequence of +'s of length alpha, which doesn't help at all.
linasv (aka Linas Vepstas) wrote:
I always found the construction elegant, intuitive, interesting … and am continually surprised, to this day, why his construction has not completely overtaken the field, and made the standard set-theoretic ordinal construction obsolete and archaic.
The main consumers of infinite ordinals are set theorists and logicians. They like ordinals not just because they're big numbers, but because they are powerful tools. For example:
• ordinals index the cumulative hierarchy, which is a way of breaking up the universe of all sets into 'layers'. $V_0$ is the empty set. $V_{\alpha+1}$ is the power set of $V_\alpha$. For a limit ordinal $\lambda$ we say $V_\lambda$ is the union of all the lower layers: $V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha$.
This idea is really important in set theory.
• ordinals index the constructible hierarchy, a smaller version of the cumulative hierarchy invented by Gödel, which can be used to find a model of ZFC in which the Continuum Hypothesis holds.
• countable ordinals index the Borel hierarchy, where we start with open sets and build up all the Borel sets by taking countable unions, countable intersections and complements. The Borel hierarchy is fundamental to descriptive set theory.
• countable ordinals index the lightface Borel hierarchy, which is an effective (= computable) analogue of the Borel hierarchy.
• countable ordinals index the strength of axioms for arithmetic: the proof-theoretic ordinal of a theory is the smallest countable ordinal that the theory is unable to prove well-ordered.
• large cardinal axioms are the most systematic way known to generate stronger and stronger axioms of set theory.
You'd have to do these things better with surreal numbers, or do a bunch of equally cool things, for surreal numbers to really catch on the way ordinals have. Maybe they will someday! But anyway, while my posts have been treating ordinals as a fun game, their importance comes from their applications.
Thanks. As Royce points out, the ordinals are a subset of the surreals, so nothing is particularly lost, other than the simplicity of the definition of the successor function.
Off-topic, but while prowling around, I just found out that the Levi-Civita field is the smallest field that contains the formal power series over the reals, and is also closed and Cauchy complete (in the order topology). That seems to make it ideal for non-standard analysis which is something that I've not seen before, when skimming the topic. Now, could there be something interesting, when interested with the Borel hierarchies? Hmmm…
Re: Conway's maybe-joke about overthrowing the foundations of mathematics, well, surely it won't replace all the highly developed mechanisms of germs and sheaves and jets and what-not, for analysis, but, given what I just read about the Levi-Civita field, it does suggest that it could provide a "concrete" setting for doing analysis, side-stepping the various issues about convergence and closure and completeness, etc. that germs and sheaves and etc. are, in part, intended to solve. It does suggest a world turned a bit sideways. But of course, I'm speculating about the view from a mountain-top, while standing at its base…
There is a misreference in the article: You say
This is not a proof, because I haven't given you the official definition of how to add ordinals. You can find the definition here:
and then link to the definition of ordinal multiplication.
Also, I thought it might be interesting to mention that Cantor Normal Form works with any ordinal base $\gamma \ge 2$, although then the coefficients range over ordinals less than $\gamma$ rather than the natural numbers.
Thanks for noticing that misreference. I actually meant
This is not a proof, because I haven't given you the official definition of how to multiply ordinals.
since I'd already pointed to the definition of addition earlier.
That's a neat fact about Cantor normal form!
Ordinals | Point at Infinity says:
For an extensive tour of the countable ordinals, I highly recommend you take a look at the recent series at John Baez's excellent blog, Azimuth. Also, make sure to visit David Madore's delightful interactive visualization […]
bugsbunnysan says:
I have a general question about this.
Sometime ago, some guy (Hilbert) is like, "all these proofs in maths are tedious and written in natural languages and not easy to verify. We should invent a language that can express, logically, all of mathematics in all generality!" But then some other guy (Goedel) is like "yeah, nice try, but, surprise! self-references!, no can do!" And another guy (Turing) is like, "I made a machine for all the things that we can calculate!" And ta-da: informatics/computer science!
And the funny thing is, in informatics it's widely accepted that some things can't be calculated in the very general case. And especially, all the things that can't be calculated can always be mapped to the same problem (i.e. they are all in the halting-problem class). Similarly, there's the question of P != NP (?) where if this is solved for even one NP-hard problem, it's solved for all the NP problems.
This whole thing about infinities and sets of infinite things and sets of sets and ordinals and on and on and on… It seems that, here also the "problem" (if you want to call it that) is that of self-reference. I.e. what happens if I put infinity in infinity… It seems like a repeat of what Hilbert tried to do (do something for the extremely general case) and Goedel showed that there's a limit to how general case you can get, which is when you hit the point where you allow self-references….
There's more to logic than self-reference, but indeed self-reference affects its overall structure, including how infinities work. For example, we know that induction up to the ordinal $\epsilon_0$ would let us prove Peano arithmetic consistent. But we know Peano arithmetic can't prove itself consistent unless it's _in_consistent, thanks to Gödel—self-reference is relevant here. So we know Peano arithmetic can't handle induction up to the ordinal $\epsilon_0$, even though this principle seems "obviously true" if you think about it for a while.
Similarly, every other formalism for arithmetic has its "proof-theoretic ordinal", roughly the first ordinal that it can't handle. And this is a great way to rate formalisms for arithmetic: to see how strong they are. This is called ordinal analysis, and people who care a lot about large countable ordinals tend to do so because of this subject.
In the world of set theory, there are many important kinds of uncountable cardinals that are too big for their existence to be proven given whatever axioms you happen to be using (unless those axioms are inconsistent). This is again a concrete manifestation of Gödel's incompleteness theorem, which can be seen as a consequence of the ability of sufficiently powerful axiom systems to reason about themselves.
Haitao Zhang says:
I studied the ordinal numbers a little last year while trying to understand a counter-example to a topological statement: does every limit point have a countable sequence that converges to it?
My aha moment of understanding the ordinal numbers is realizing that the function next(x) is total, but not the function prev(x). This is actually due to the well-ordering even though the relationship is a little counterintuitive at first sight. Since every nonempty set has a minimum, we can first find a minimum min0 for a set A and then find the next minimum min1 for A\{min0}. But prev(ω) is not defined. This is likely equivalent to your observation that addition is not commutative on the ordinals.
To finish the story: in the order topology of ordinal numbers ω1 is a limit point but there is no countable sequence that converges to it. https://en.wikipedia.org/wiki/Order_topology#Topology_and_ordinals
Sorry for the late comment as I only just discovered this treasure trove of yours from the link to your Nobel physics commentary. I have always found the chronological nature of the blog format a bit of a hindrance to the discussion of mathematical ideas, where timeliness is not high on the agenda.
Here's an overview of what's on this blog:
• Azimuth blog.
It's a bit out of date, and it's a huge pile of stuff, but you'll see there are different series of posts on different topics, and it will help you navigate these posts.
Born says:
Cantor needed ordinals because the natural numbers aren't "enough" to index processes which take a transfinite amount of time to complete. For example, applying the Cantor-Bendixson process to a closed set might not terminate in a finite number of steps. Thus ordinals are a natural extension of the naturals to index transfinite processes.
But why did Cantor need to invent ordinals, when he could just use the elements of ℝ to index transfinite processes? ℝ is an ordered set.
Immediately after asking the question, I realised I was being silly. Please ignore my question.
Just so other people know: the problem is that ℝ with its usual ordering is not well-ordered. Every countable ordinal is isomorphic to a subset of ℝ with its induced ordering. So, if you're only interested in countable ordinals, you can use subsets of ℝ. But Cantor, of course, was not only interested in countable ordinals.
Andrés Sicard-Ramírez says:
The main surprise is that ordinal addition is not commutative. We've seen that , since
Note the missing in the definition of .
Thanks for catching that! Fixed.
Non-standard Models of Arithmetic 8 | Diagonal Argument says:
Getting back to the Paris-Harrington theorem: the original proof used non-standard models of PA. (Kanamori and McAloon soon simplified it; the treatment in the recent book by Katz and Reimann is particularly easy to follow.) That's the one I want to look at here.
But since you had such fun with ordinals here (and here and here), I better add that Ketonen and Solovay later gave a proof based on the ε0 stuff and the hierarchy of fast-growing functions. (The variation due to Loebl and Nešetřil is nice and short.) We should talk about this sometime! I wish I understood all the connections better. (Stillwell's Roads to Infinity offers a nice entry point, though he does like to gloss over details.)
Improving quality of foreign direct investment attraction in Vietnam
Ngo Phuc Hanh1,2,
Đao Van Hùng1,2,
Nguyen Thac Hoat1,2 &
Dao Thi Thu Trang3
Foreign direct investment (FDI) enterprises are playing a key role in Vietnam's economy. By the end of 2016, there were more than 21,398 FDI projects in force, with total registered capital of nearly 293 billion USD. One hundred six countries and territories have invested in 19 industries in 68 provinces and cities of Vietnam. These investments have added a large amount of capital to the economy, which has basically been used effectively, contributing to the economic growth of Vietnam. In this context, the study focuses on the analysis of statistical data from 1988 to 2016 on the sources of funds, the number of projects, the invested sectors, and the countries that have invested in Vietnam; the research also examines three main factors that affect the quality of FDI attraction in Vietnam, namely resources, infrastructure, and support policies. In this study, the support policy factor is found to have the greatest impact. In addition to the use of statistical techniques, quantitative research is also applied with three data analysis techniques, including descriptive statistics, scale reliability analysis, and regression analysis, to verify the hypotheses. Policy implications are also proposed in this study to improve the quality of FDI attraction in Vietnam in the coming years.
As one of the most critical points of the economic reform policies, the Foreign Investment Law in Vietnam was first enacted in December 1987 and then became the basic legal framework specifying Vietnam's point of view about opening and integration. There have been some fluctuations, but the FDI sector in particular and external economic activities in general have played a positive role in the growth and development of Vietnam for nearly 30 years. According to the General Statistics Office of Vietnam (GSO), average annual economic growth was 7.3% and GDP per capita rose by 5.7% over the period 1990–2004, and GDP expanded 6.40% in the September quarter of 2016. The GDP growth rate in Vietnam averaged 6.17% from 2000 until 2016, reaching an all-time high of 8.46% in the fourth quarter of 2007 and a record low of 3.14% in the first quarter of 2009. Meanwhile, the poverty rate fell from roughly 80% in 1986 to around 29% in 2002, and further from 14.2% in 2010 to 4.5% in 2015. Vietnam aims to reduce its poverty rate by 1.3–1.5% in 2016. For the past decade, Vietnam has been among the rapidly growing economies with the sharpest poverty reduction in the world [1,2,3].
In the first phase of opening, FDI was an effective solution to help Vietnam out of the tricky situation of siege and embargo. In the next stage, FDI is an important additional capital in the total investment of the whole society, contributing significantly to the promotion of economic restructuring, the increase of production capacity, the innovation of technology, the breakthrough in international markets, the increase in exports, the improvement of the international balance of payments, the contribution of state budget, the development of high-quality human resources, and the creation of additional jobs.
FDI in Vietnam has a major influence on other economic sectors, namely stimulating domestic investment, creating competition, promoting innovation and the transfer of technology, improving production efficiency, and developing the supporting industries that all help Vietnam participate in the global production value chain. Today, Vietnam has become an appealing destination for many leading corporations around the world in different fields, such as BP, Total, Toyota, Canon, Samsung, Intel, Unilever, etc., with products of international quality, which not only contributes greatly to consolidating the position of Vietnam in the region and the world, but also creates competitive motivation for domestic enterprises to adapt in the context of globalization. FDI also plays an active role in supporting the reform of state enterprises, encouraging the reform of administrative procedures, and completing the market economy.
Up to now, Vietnam has attracted nearly 290 billion USD in foreign direct investment (FDI) with more than 22,000 projects from 114 countries and territories and has disbursed nearly $145 billion [4].
After nearly 30 years, FDI is distributed throughout Vietnam. Funds come primarily from Asian countries such as Japan, China, Hong Kong, Taiwan, Korea, and Singapore (accounting for 70.6%), from European countries such as Germany, France, and the UK (8.8%), from the Americas including the USA and Canada (7.7%), and from Australia (2.7%); the rest are other partners. Realized FDI accounts for about 25% of total social investment annually on average. This is an important fund to support economic development [1, 5].
FDI sector has a positive impact on the restructuring of economic sectors and the orientation of industrialization in Vietnam. From 2000 to 2015, the percentage of FDI in economic structure increased by 5.4%, while the public sector and the private sector decreased respectively. FDI sector accounted for about 45% of the total industrial production value, contributing to the formation of the key industrial sectors including telecommunications, oil and gas, electronics, chemicals, automotive, motorcycle, public information technology, steel, cement, food processing agricultural products, footwear, garment, etc. The majority of FDI enterprises operate in the fields of high-tech industries such as mining and oil and gas, electronics, telecommunications, office equipment, and computers. FDI restructured agricultural structure, diversified the types of product, improved the value of expectedly agricultural goods, and acquired a number of advanced technologies and high-quality international standard seeds and breeds. However, the percentage of FDI accounted for less than 3% of the output of the agricultural industry [1, 5].
The motivation fueling the research is that the quality of FDI projects in Vietnam also contributed to improving the quality of banking services, insurance, and auditing with the modern methods of payment, credit, and card. FDI in the tourism sector, hotels, and office leasing has changed the appearance of some major urban and coastal areas. Many recreation areas such as golf, bowling, and gambling areas created attractive conditions for investors and international tourists. In Vietnam, the other areas such as education, training, and health care did not initially attract FDI but later were invested in several high-quality institutions, some modern hospitals and clinics which served the needs of the high-income population and foreigners living in Vietnam.
This study will analyze the specific situation of attracting foreign direct investment in Vietnam during the period from 1988 to 2015 and propose some suggestions to improve the quality of FDI attraction in Vietnam. Additionally, for the purpose of verifying the claims, the authors of the study also applied the survey method to collect additional opinions of company groups in assessing the factors affecting the quality of attracting FDI projects in Vietnam with the focus on three main factors: resources, infrastructure, and other support policies.
Literature review, research model, and methodology
Literature review and research model
Foreign direct investment (FDI) is defined as an investment involving a long-term relationship and reflecting a lasting interest and control by a resident entity in one economy (foreign direct investor or parent enterprise) in an enterprise resident in an economy other than that of the foreign direct investor (FDI enterprise or affiliated enterprise or foreign affiliate) [6]. According to the World Trade Organization [7], FDI occurs when an investor based in one country (the home country) acquires an asset in another country (the host country) with the intent to manage that asset [8]. The management dimension is what distinguishes FDI from portfolio investment in foreign stocks, bonds, and other financial instruments. In most instances, both the investor and the asset it manages abroad are business firms. In such cases, the investor is typically referred to as the "parent firm" and the asset as the "affiliate" or "subsidiary." FDI is the net inflows of investment to acquire a lasting management interest (10% or more of voting stock) in an enterprise operating in an economy other than that of the investor. It is the sum of equity capital, reinvestment of earnings, other long-term capital, and short-term capital as shown in the balance of payments [6]. Studies have shown the special role of FDI in developing economies such as resolving employment issues [9], addressing the lack of investment capital [10], economic restructuring [10], providing modern technology, or transferring management experiences to local businesses. FDI is a type of long-term investment of individuals or companies in one country to another country by establishing the subsidiary companies or new businesses. Those individuals and companies will take the control of these enterprises. According to Law on Foreign Investment in Vietnam, 1996, FDI is "The foreign investors bring into Vietnam the capital or any assets to carry out investment activities in accordance with the law of Vietnam."
Factors influencing the quality of FDI attraction have been tested through numerous studies and theories; typically, Mayer [11] points out that the rationale for long-term investment of multinationals is that they want to take advantage of the local resources of emerging economies such as the cheap and abundant labor force or precious natural resources. However, Mayer et al. [12] state that access to economic resources is becoming more important due to the growing concerns of local authorities on the adverse effects of FDI. Lipsey [13] also emphasized that foreign companies want to increase long-term investment in developing countries to seek resources while the host country considers FDI as a source of capital to improve economic development and access to modern technology. Another factor, according to Sullivan and Sheffrin [14], infrastructure, is defined as the whole of the productive relationships that constitute the economic structure of a given society. Khadaroo and Seetanah [8] have argued that growth in infrastructure is defined as an indicator for higher transport performance or lower transportation costs. Iwanow and Kirkpatrick [15] determined that when infrastructure was improved by about 10%, the efficiency of developing country exports would be increased by 8%. In a study on the relationship between policies and FDI, according to Prokopenko [16] FDI inflows are influenced by a series of local government policies to improve globalization and national competitiveness.
The research model has been established as follows (Fig. 1):
Research model
Research questions and hypotheses:
H1: Resources have a positive impact on the quality of FDI attraction in Vietnam.
H2: Infrastructure has a positive impact on the quality of FDI attraction in Vietnam.
H3: Support policies have a positive impact on the quality of FDI attraction in Vietnam.
Methodology and data
This study is based on an annual time series data set for Vietnam ranging from 1998 to 2015. The data were obtained and calculated from the General Statistics Office (GSO), the Foreign Investment Agency (FIA) of the Ministry of Planning and Investment of Vietnam, the World Development Indicators published by the World Bank (WB), and the International Monetary Fund (IMF) for Vietnam. Because the FDI data were registered in US dollars, they were converted into Vietnamese dong using the yearly average exchange rate. The authors analyzed the quality of foreign direct investment attraction in Vietnam in the period 1998–2015 by applying time series analysis techniques and statistical methods.
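As a minimal illustration of the USD-to-VND conversion step described above (not the authors' actual procedure), the series and yearly average exchange rates below are placeholder values:

```python
import pandas as pd

# Illustrative only: the FDI figures and yearly average VND/USD rates below
# are placeholders, not the series used in the paper.
fdi_usd_bn = pd.Series({2013: 22.35, 2014: 21.92, 2015: 24.12})      # billion USD
vnd_per_usd = pd.Series({2013: 21036, 2014: 21246, 2015: 21890})     # assumed yearly averages
fdi_vnd_bn = fdi_usd_bn * vnd_per_usd                                # billions of VND
print(fdi_vnd_bn)
```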
However, to find more support for its claims, the study applies a survey method to collect opinions from the group of enterprises affected by the three factors influencing the quality of FDI attraction in Vietnam: resources, infrastructure, and support policies. Data were collected via questionnaire surveys sent to FDI enterprises in Vietnam. Five hundred questionnaires were sent, and 485 valid answers were collected. Based on the data collected, the authors performed a number of statistical analyses, including the calculation of Cronbach's alpha coefficient to test the reliability of the research scale and linear regression to estimate the influence of the research factors on the quality of FDI attraction in Vietnam. Mathematically, these effects can be measured through linear regression, in which the quality of FDI attraction in Vietnam is the dependent variable and the three factors mentioned are the independent variables. The linear regression equation was cited by Creswell [17] and Hair et al. [18] as follows:
$$\mathrm{QFDI} = W_0 + W_1 \times \mathrm{Rs} + W_2 \times I + W_3 \times \mathrm{Ps} + e$$
in which QFDI is the quality of FDI attraction in Vietnam, Rs represents resources, I is infrastructure, Ps is the support policy factor, e is the estimation error, and W0 is the intercept while W1–W3 are the regression coefficients. Data are analyzed with the support of SPSS 20. Muijs [19] argues that SPSS is not the best tool, but it is the most popular software in academic research.
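A minimal sketch of how this regression could be reproduced outside SPSS, for example with Python and statsmodels; the file name and column names are assumptions, since the survey data are not published with the paper:

```python
import pandas as pd
import statsmodels.api as sm

# hypothetical file with 485 rows (one per valid questionnaire)
survey = pd.read_csv("fdi_survey.csv")

# assumed column names: QFDI (dependent), Rs, Infra, Ps (independents)
X = sm.add_constant(survey[["Rs", "Infra", "Ps"]])
model = sm.OLS(survey["QFDI"], X).fit()

print(model.rsquared)    # compare with the reported R-squared of 0.668
print(model.params)      # W0 (const) and the coefficients W1..W3
print(model.pvalues)     # significance of each factor (reported Sig. values)
```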
Data analysis of the reality of attracting foreign direct investment (FDI) in Vietnam between 1988 and 2015
FDI attraction through the registered capital and implemented capital
Based on the volatility of FDI flow into Vietnam, the development process of FDI can be divided into four (04) stages as follows (Table 1):
In the period of 1988–1997:
Table 1 The number of projects and registered FDI from 1988 to 2016
The three-year period of 1988–1990 is considered the warm-up period. Since 1991, it was followed by the first FDI wave, with a fast pace of FDI attraction: registered capital increased by an annual average of 50% and realized capital by 45%, higher than the average growth rate of total social investment (23%). Registered capital reached $31.6 billion; realized capital was $13.37 billion, equivalent to 37.5% of the registered capital [1, 2].
This is a recession period of FDI. The registered FDI decreased to $5590.7 million in 1997, $2012.4 million in 2000, and $4547.6 million in 2004. The annually averaged realized capital was $2.54 billion, equivalent to 78% of the realized capital in 1997. The registered FDI reached $23.88 billion; the realized capital was $17.84 billion, accounting for 75% of registered capital [1, 2].
This is the emerging second FDI wave. Registered FDI was $6.838 billion in 2005, $12.004 billion in 2006, $21.347 billion in 2007, and $71.7126 billion in 2008. Registered capital reached $111.918 billion and realized capital was $26.934 billion, accounting for 24% of the registered capital; this was 4.68 times the registered capital and 1.5 times the realized capital of the previous period [1, 2].
In the period from 2000 to April 2016:
Registered capital reached a peak in 2008 before decreasing in recent years; however, realized capital remained stable, averaging $10–11 billion. Registered capital reached $67.1 billion, and realized capital was $39.28 billion, accounting for 58.5% of the registered capital.
Over nearly 30 years of attracting foreign direct investment, the foreign direct investment (FDI) played a significant role in economic and social development. The total amount of the registered FDI (cumulative) reached $313,552.6 million, while the total amount of the realized capital reached $138,692.9 million, equivalent to 44.23% by the end of 2015 [1, 2].
Until 20 November 2016, Vietnam attracted 2240 new FDI projects with a total registered capital of $18,103.0 million, a 96.1% increase in the number of projects and an 89.5% increase in registered capital compared to the same period in 2015. At the same time, 1075 projects increased their registered capital by a total of $5075 million [4]. The FDI projects are expected to improve human resources quality, develop local supply systems, and increase domestic enterprises' competitive capability in joining global supply chains (Fig. 2)
The number of projects
Types of foreign direct investment in Vietnam
Of all the valid FDI projects in Vietnam today, there are mainly traditional types of investment. The investments could be 100% foreign investment, joint venture, build-operate-transfer (BOT), build-transfer (BT), build-transfer-operate (BTO), and business cooperation contracts.
For the 100% foreign investment type, there were only 854 new businesses in 2000 but the number of businesses increased to 7543 enterprises in 2013 (accounting for 83% of all FDI enterprises), about 8.8 times higher than the year 2000. The average of the period of 2000–2015 increased approximately 20% per year [2].
For the type of joint venture, the number of enterprises increased from 671 units to 1550 units between 2000 and 2013, respectively (accounting for 17% of the number of FDI enterprises), 2.3 times as high compared to that in 2000; the annual average of the period of 2000–2015 increased by 7.2% [2] (Table 2).
Table 2 Different types of FDI in 2015
The direct investment of countries and territories into Vietnam
Presently, there are 116 countries and territories having FDI projects in Vietnam, led by Korea, Japan, and Singapore; the specific data is presented in Table 3; Fig. 3.
Table 3 Nations and territories have invested the largest FDI in Vietnam
Nations and territories with the largest FDI in Vietnam
Among the countries investing FDI in Vietnam, Korea currently leads with total new and expanded FDI of $44,452.4 million. In the period 1995–1997, investment from Korea into Vietnam was moderate (less than $1 billion); most of it went to small and medium-sized projects focusing on light industries such as textiles and footwear. From 1997 to 2004, investments declined, with the lowest amount of $15.2 million in 1997. However, investment from Korea increased dramatically between 2005 and 2011, with 3112 projects representing a total of $23,960.5 million. There were 3197 projects with capital of $24,816.0 million in 2012, and the figures later rose to 3611 projects and $29,653.0 million; Korea was the biggest investor in Vietnam in both 2014 and 2015 [2].
The second largest investor is Japan, with 2830 projects representing a total investment of $39,176.2 million. Japanese investments were stable at over $500 million between 1995 and 1998. However, investment decreased significantly from 1998 to 2003, reaching its lowest level of $71.6 million in 1999. Since 2004, investment from Japan improved markedly, continuing to increase and peaking at $7.6 billion of registered capital in 2008. Because of the impact of the global economic crisis, FDI from Japan fell to $715 million in 2009, roughly one tenth of the 2008 level. Since 2010, investment has recovered, and by 2015 total registered investment reached $39,176.2 million [1, 2].
Singapore is the third country to invest substantially in Vietnam, with 1497 projects and a total investment of $34,168.2 million. Over the period 1995–2015, Singapore maintained its position as a major investment partner in Vietnam (except for 2008). However, the regional monetary crisis of 1997–1998 negatively affected investment from Singapore to Vietnam. In the years following the crisis (from 1999 to 2005), Singapore's investment fell considerably and remained at its lowest point compared to the period 1995–1996. It was not until 2006, when Vietnam implemented the Vietnam-Singapore Framework Agreement to connect the two economies, that investment from Singapore rebounded; it decreased in 2009 due to the world economic crisis, increased again in 2010, and has decreased slightly in recent years [20, 21].
Up to now (December 2016), with 65 countries and territories having investment projects in Vietnam, Korea is leading with total newly registered and additional capital of 5.58 billion USD, accounting for 34% of total investment in Vietnam; Singapore is second with newly registered and additional capital of $1.84 billion, accounting for 11.2% of total registered capital; Japan is in third place with newly registered and additional capital of $1.7 billion, accounting for 10.3% of total investment.
Foreign direct investment by the key industries and fields
By the end of 2015, the processing and manufacturing industries had attracted the highest amount of FDI as well as the highest number of projects, with $156,739.9 million and 10,555 projects, accounting for 56.89% of total registered investment. Investments in real estate were in second place; although the number of projects was not high, their scale was large, with $50,674.5 million in total, accounting for 18.39% of total FDI (Table 4).
Table 4 FDI in Vietnam by the industries
Although the agriculture, forestry, and fishery industries were encouraged, these fields attracted very few projects. By the end of 2015, there were only 546 valid FDI projects with total investment of $3989.3 million, accounting for 1.44% of total FDI in Vietnam. The scale of FDI of the projects was small; they were mainly used in livestock production, poultry feed production, and processing of poultry products for domestic consumption and export [22].
Foreign direct investment by the region
The southeast region is the region attracting the highest amount of FDI with 10,631 projects and $112,053.9 million of the registered capital, accounting for 42.75%. The second was the Red River Delta with 5978 projects and $65,789.7 million of the registered capital, accounting for 25.10%. North Central and Central Coast regions also had 1185 projects and registered capital of $51,834.5 million, accounting for 19.77%. The Highland was the region that attracted the least FDI with 156 projects and total registered capital of $859.9 million, about 0.32% [22] (Table 5).
Table 5 FDI in Vietnam by the regions (including oil and gas)
Therefore, it can be seen clearly that there are significant differences between regions: between the plains and the mountains, and between wealthy and poor areas. FDI projects are concentrated mainly in the Red River Delta, the Southeast, and the North Central and Central Coast regions. Because most of the largest industrial zones are concentrated there, with good infrastructure, convenient credit services such as banking, and developed transport systems, these regions attract many investors.
Key findings and discussions
Analyzing the reliability tests of Vietnam's resources
As Table 6 shows, Cronbach's alpha for the resource factor is 0.658, higher than 0.6. However, the Item-Total Statistics table shows that the third attribute has a Corrected Item-Total Correlation of 0.095; since this is less than 0.3, the attribute is excluded from the reliability analysis. When the third attribute is excluded, the Cronbach's alpha value is 0.734, higher than 0.6. Therefore, all the requirements of the scale reliability analysis are fulfilled. Among the attributes, cheap local labor scores the highest Corrected Item-Total Correlation, which means that cheap labor is viewed as an important part of assessing the quality of FDI attraction in Vietnam. In fact, Vietnam is known to be a good place for foreign investment because local labor is plentiful and cheaper than in other countries in the region. On the other hand, Vietnam should consider labor in rural areas, where labor costs are lower than in other areas; labor quality factors are not included in this case.
Table 6 Descriptive statistics on Vietnam's resources
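For readers who want to reproduce this kind of reliability analysis outside SPSS, a minimal sketch is given below; the item names and responses are hypothetical, not the paper's survey data:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # items: one row per respondent, one column per scale item
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    # correlation of each item with the sum of the remaining items
    return pd.Series({c: items[c].corr(items.drop(columns=c).sum(axis=1))
                      for c in items.columns})

# hypothetical Likert-scale responses for the "resources" items
resources = pd.DataFrame({
    "cheap_labor":   [4, 5, 4, 3, 5, 4],
    "raw_materials": [3, 4, 4, 3, 5, 4],
    "land_access":   [2, 5, 1, 4, 2, 3],
})
print(cronbach_alpha(resources))
print(corrected_item_total(resources))
# an item with corrected item-total correlation below 0.3 would be dropped
# and alpha recomputed, as done for the third attribute in Table 6
```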
Reliability test on the infrastructure of Vietnam
As Table 7 shows, the Cronbach's alpha value of this factor is greater than 0.6, while all the attributes of this factor have a Corrected Item-Total Correlation value of at least 0.3, so the requirements of the reliability analysis are fulfilled. Among the five attributes of the infrastructure factor, the third attribute has the highest Corrected Item-Total Correlation (0.722), which indicates that Vietnam's existing transportation network has the highest impact on the quality of FDI attraction. This is plausible, since infrastructure in the delivery network ensures the efficiency of the operation of FDI projects in Vietnam, and an inadequate transport network will degrade the quality of FDI attraction because foreign investors consider the efficiency of business operations.
Table 7 Descriptive statistics on Vietnam's infrastructure
Testing the reliability of the support policies of Vietnam
As Table 8 shows, the Cronbach's alpha value is 0.852, higher than 0.6. In addition, all attributes of this factor have a Corrected Item-Total Correlation of at least 0.3. Therefore, all requirements for reliability testing are fulfilled. Among the five attributes of the support policy factor, the way local governments implement policies to support administrative procedures plays a vital role in increasing the quality of attracting FDI projects into Vietnam. In fact, when Vietnam reforms administrative procedures, it will be of particular interest to investors.
Table 8 Statistics describing Vietnam's support policies
Linear regression results and hypothesis testing
These three factors can explain 66.8% of the variance in the dependent variable, the quality of FDI attraction in Vietnam in the coming years. This is rather high, because Hair et al. [18] assert that any R-squared value in linear regression greater than 0.5 indicates a good correlation between dependent and independent variables. Furthermore, all factors are statistically significant at 5% of the confidence interval, as the Sig value is lower than 0.05 (all Sig values are 0.000 < 0.05). That means that support policies, infrastructure, and resources have significant implications for attracting FDI into Vietnam. In addition, the support policy factor shows the highest impact on the quality of FDI attraction, as the beta of this component is higher than those of the other factors; if Vietnam can improve the practical effectiveness of its policies by 1%, the quality of FDI attraction to Vietnam will be improved by 0.324% (Table 9).
Table 9 Linear regression results
Hypothesis 1: Resources have a positive impact on the quality of FDI attraction in Vietnam
This hypothesis is supported because the correlation coefficient is statistically significant at 5% of the confidence interval. The partial correlation coefficient is equal to 0.270, which means that when Vietnam can improve its resource efficiency by more than 1%, the FDI attraction in Vietnam will be improved by 0.270%.
Hypothesis 2: Infrastructure has a positive impact on the quality of FDI attraction in Vietnam
This hypothesis is verified because the correlation coefficient is statistically significant at 5% of the confidence interval. The partial correlation coefficient is 0.266, which means that when Vietnam improves its effective use of infrastructure by more than 1%, the quality of FDI attraction in Vietnam will be improved by 0.266%.
Hypothesis 3: Supportive policies (socio-economic) have a positive impact on the quality of FDI attraction in Vietnam
This hypothesis is supported because the correlation coefficient is statistically significant at 5% of the confidence interval. The correlation coefficient is 0.324, which means that when Vietnam can improve its effective support policies by more than 1%, the quality of FDI attraction in Vietnam will be improved by 0.324%.
For nearly 30 years, Vietnam has sought to foster the development of FDI enterprises. During that time, the quality of FDI attraction has improved significantly.
Attracting FDI has made a remarkable contribution to economic growth. In some ways, it helps to improve the efficiency of domestic investment resources. Foreign direct investment has been the most dynamic sector, with GDP growth higher than the national growth rate. In 1995, the GDP of the foreign direct investment sector increased by 14.98%, while national GDP increased by only 9.54%. In 2000, 2005, and 2010, the corresponding figures were 11.44% and 6.79%, 13.22% and 8.44%, and 8.12% and 6.78%, respectively. The contribution of the FDI sector has increased gradually, from 2% of GDP (1982) to 12.7% (2000), 16.98% (2006), 18.97% (2011), 19% (2015), and 23.4% (2016) [1, 23].
FDI plays a fundamental role in total social investment. It contributes significantly to Vietnam's exports and changes the structure of exports toward reducing the share of mining products and raw materials while increasing the proportion of manufactured goods. FDI enterprises have a positive impact on expanding Vietnam's export markets, especially to the USA and the EU. This change, in some ways, has reshaped the export structure, making the USA the largest export market. Export revenue including crude oil from FDI enterprises reached only 45.2% of total turnover before 2001. However, since 2003, foreign direct investment has become a major export driver, outpacing the domestic sector; it accounted for roughly 64% of total exports in 2012. The total export turnover of FDI enterprises in 2015 reached nearly 208 billion US dollars, an increase of approximately 16.7%, equivalent to 29.69 billion US dollars, compared with 2014. That accounts for 63.4% of national export turnover [1, 23].
Foreign direct investment helps promote economic restructuring in Vietnam toward industrialization and modernization. In Vietnam, FDI is concentrated in industrial sectors with a technological level higher than the country's average. The growth rate of FDI industry is nearly 18% on average [1, 4], higher than the current growth rate of the whole industry. FDI plays a leading role in developing several key industrial sectors such as telecommunications, mining, oil and gas processing, electronics, media technology, steel, cement, etc.
In addition, FDI helps create more jobs, improves the quality of human resources, and changes the Vietnamese labor structure. On average, FDI companies generate roughly 2 million jobs directly and about 3–4 million jobs indirectly each year, with a strong impact on Vietnamese labor restructuring towards industrialization and modernization [1].
Foreign investment is an important channel for technology transfer, contributing to raising the technological level of the Vietnamese economy. Since 1993, Vietnam has had 951 technology transfer contracts approved/registered, of which 605 contracts are from FDI enterprises, accounting for 63.6% [1]. FDI activities help bring worldwide technological developments into Vietnam.
It is clear that FDI projects have a huge impact on improving competitiveness at all three levels: national, enterprise, and product. In fact, many Vietnamese products are considered competitive in the US, EU, and Japanese markets. The FDI sector helps boost the competitiveness of other domestic sectors and of the whole national economy by raising productivity, exports, the balance of international payments, the level of technology, labor skills, and labor restructuring.
FDI projects have helped to improve economic management and have contributed significantly to Vietnam's international integration. FDI attraction has helped in breaking the national embargo, expanding external economic relations, joining ASEAN, and signing several framework agreements with the EU, the Bilateral Trade Agreement with the United States, the Economic Partnership Agreement (EPA) with Japan, etc.
Conclusion and recommendations
The research is based on statistical techniques applied to a data series covering more than 30 years, collected by the authors, together with a three-factor research model of the quality of FDI attraction in Vietnam comprising support policies, resources, and infrastructure. Support policies should always be concerned with how to attract more foreign investors; infrastructure should ensure the sustainability of FDI projects; and the last factor, resources, should focus on increasing the benefits to foreign investors by reducing costs.
The current position of Vietnam in improving the quality of FDI attraction is reflected through the quantitative impact of infrastructure, support policies, and resources to increase FDI in the coming years. Linear regression demonstrates that all factors are statistically significant for improving the quality of FDI attraction in Vietnam and that government support policies are the factors that show the greatest impact on quality of FDI attraction. The study strongly encourages the process of improving and cleaning up the state management apparatus in the provinces of Vietnam. This is quite accurate as FDI inflows into developing countries, including Vietnam, are increasing dramatically as Vietnam strives to improve its policy of attracting foreign investment.
To further improve the quality of FDI attraction in Vietnam in the coming years, the Government of Vietnam should continue to improve the policy towards transparency and access to international practices and reform the administrative procedures; continue to adjust and invest in infrastructure, giving priority to water supply and drainage, environmental sanitation, road, and sea port systems; and continue to improve resources including quality of labor and financial institutions. These policy implications are also relatively relevant to the current situation in Vietnam as well as the data from this study.
FDI:
Foreign direct investment
FIA:
Foreign Investment Agency
GSO:
General Statistics Office of Vietnam
WTO:
World Trade Organization
General Statistics Office (GSO) (1998-2015) Vietnam's statistical year book. Statistical Publishing House, Vietnam
Foreign Investment Agency (FIA) (1998 – 2015), Situation of foreign direct investment attraction, The Ministry of Planning and Investment. Vietnam
PricewaterhouseCoopers (2008-2015) Vietnam, a guide for business and investment. PricewaterhouseCoopers, Vietnam
Foreign Investment Agency (FIA) (2016) Vietnam gains noted economic achievements after 30 years of Doi Moi. The Ministry of Planning and Investment, Statistical Publishing House, Vietnam
General Statistics Office, the investigation report on the results of production and business situation of the Vietnam enterprises with foreign investment, stage 2000 to 2015. http://gso.gov.vn/Default.aspx?tabid=382&ItemID=14002. Accessed 7 July 2014
This general definition of FDI is based on OECD, Detailed Benchmark Definition of Foreign Direct Investment, third edition (OECD, 1996), and International Monetary Fund, Balance of Payments Manual, fifth edition (IMF, 1993). http://www.oecd.org/investment/investment-policy/2090148.pdf. https://www.imf.org/external/pubs/ft/bopman/bopman.pdf
World Trade Organization (2002) Annual report 2002. WTO Publications. Printed in France VII-2002-3,000 © World Trade Organization. https://www.wto.org/english/res_e/booksp_e/anrep_e/anrep02_e.pdf
Khadaroo AJ, Seetanah B (2008) Transport and economic performance: the case of Mauritius. J Trans Econ Policy 42(2):1–13
Alfaro L (2003) Foreign Direct Investment and Growth: Does the Sector Malter. Harvard Business School, Mimeo, Boston, pp 1–31
Blomstrom M, Wang J-Y (1992) Foreign investment and technology transfer: a simple model, published. Eur Econ Rev 36:137–155
Mayer RE (2005). Cognitive theory of multimedia learning. The Cambridge handbook of multimedia learning. [Google Books version]. University Press, Cambridge. Retrieved February 15, 2011
Mayer JD, Roberts RD, Barsade SG (2008) Human abilities: emotional intelligence. Annu Rev Psychol 59:507–536
Lipsey RE (2000), "Interpreting developed countries' foreign direct investment", NBER Working paper no. 7810, National Bureau of Economic Research, Cambridge
Sullivan A, Sheffrin MS (2000) Economics: principles and tools. Prentice Hall, New Jersey, p 712
Iwanow T, Kirkpatrick C (2006) Trade facilitation, regulatory quality and export performance. J Int Dev 19(6):735–753
Prokopenko J (2000) "Globalization, competitiveness and productivity strategies" Enterprise and Management Development Working Paper - EMD/22/E, January. International Labour Organization, Geneva. http://oracle02.ilo.org/dyn/empent/empent
Creswell J (2002) Educational research: planning, conducting, and evaluating quantitative and qualitative research. Merrill Prentice Hall, Upper Saddle River, NJ
Hair JF, Black WC, Babin BJ, Anderson RE (2011) Multivariate data analysis, 7th edn. China Machine Press, Beijing
Muijs D (2011) Doing quantitative research in education with SPSS. Sage Publications Ltd, Thousand Oaks, CA
Ishida M (2012) In: Lim H, Yamada Y (eds) "Attracting FDI: experiences of east Asian countries", economic reforms in Myanmar: pathways and prospects, BRC research report no.10. Bangkok Research Center, IDE-JETRO, Bangkok, Thailand
World Bank (2015) The World Bank Annual Report 2015. World Bank, Washington, DC. https://openknowledge.worldbank.org/handle/10986/22550 License: CC BY 3.0 IGO
Foreign Investment Agency (2015) Attracting foreign direct investment may 12, 2015. The Ministry of Planning and Investment, Vietnam
Malesky E (2007) "Provincial Governance and Foreign Direct Investment in Vietnam", 20 Years of Foreign Investment: Reviewing and Looking Forward (1987–2007). Knowledge Publishing House, Vietnam
We acknowledge the General Statistics Office of Vietnam; Vietnam Foreign Investment Agency for supporting us to complete this study.
This study was conducted without any financial support.
Raw data from the General Statistics Office of Vietnam and Foreign Investment Agency are stated in the references. The data are analyzed with the support of SPSS 20.
Academy of Policy and Development—APD, Hanoi, Vietnam
Ngo Phuc Hanh, Đao Van Hùng & Nguyen Thac Hoat
Ministry of Planning and Investment, Hanoi, Vietnam
Phuong Dong University, Hanoi, Vietnam
Dao Thi Thu Trang
Ngo Phuc Hanh
Đao Van Hùng
Nguyen Thac Hoat
NPH synthesized and analyzed quantitative research and finalized the research. DVH worked on the latest update of FDI data. NTH collected and analyzed statistical data of FDI projects from 1988 to 2016. DTT worked for the General Statistics Office of Vietnam; Foreign Investment Agency to get the most objective assessment. All authors read and approved the final manuscript.
Correspondence to Ngo Phuc Hanh.
Hanh, N.P., Van Hùng, Đ., Hoat, N.T. et al. Improving quality of foreign direct investment attraction in Vietnam. Int J Qual Innov 3, 7 (2017). https://doi.org/10.1186/s40887-017-0016-7
Anniversaries (Ukrainian)
Yurii Dmitrievich Sokolov (on his 100th birthday)
Gorbachuk M. L., Luchka A. Y., Mitropolskiy Yu. A., Samoilenko A. M.
Ukr. Mat. Zh. - 1996. - 48, № 11. - pp. 1443-1445
Chronicles (Ukrainian)
Just people of the world
Zukhovitskii S. I.
Article (Russian)
On the optimal rate of convergence of the projection-iterative method and some generalizations of it on a class of equations with smoothing operators
Azizov M.
For some classes of operator equations of the second kind with smoothing operators, we find the exact order of the optimal rate of convergence of generalized projection-iterative methods.
On boundary-value problems for a second-order differential equation with complex coefficients in a plane domain
Burskii V. P.
We study boundary-value problems for a homogeneous partial differential equation of the second order with arbitrary constant complex coefficients and a homogeneous symbol in a bounded domain with smooth boundary. Necessary and sufficient conditions for the solvability of the Cauchy problem are obtained. These conditions are written in the form of a moment problem on the boundary of the domain and applied to the investigation of boundary-value problems. This moment problem is solved in the case of a disk.
Article (Ukrainian)
Multipoint problem for hyperbolic equations with variable coefficients
Klyus I. S., Ptashnik B. I., Vasylyshyn P. B.
By using the metric approach, we study the problem of classical well-posedness of a problem with multipoint conditions with respect to time in a tube domain for linear hyperbolic equations of order 2n (n ≥ 1) with coefficients depending on x. We prove metric theorems on lower bounds for small denominators appearing in the course of the solution of the problem.
Estimate of error of an approximated solution by the method of moments of an operator equation
Gorbachuk M. L., Yakymiv R. Ya.
For an equation Au = f, where A is a closed densely defined operator in a Hilbert space H and f ∈ H, we estimate the deviation of its approximated solution obtained by the moment method from the exact solution. All presented theorems are of direct and inverse character. The paper refers to direct methods of mathematical physics, the development of which was promoted by Yu. D. Sokolov, the well-known Ukrainian mathematician and mechanician, a great humanitarian and righteous man. We dedicate this paper to his blessed memory.
On characteristic properties of singular operators
Koshmanenko V. D., Ota S.
For a linear operator S in a Hilbert space ℋ, the relationship between the following properties is investigated: (i) S is singular (= nowhere closable), (ii) the set ker S is dense in ℋ, and (iii) D(S) ∩ ℛ(S) = {0}.
On one variational criterion of stability of pseudoequilibrium forms
Lukovsky I. O., Mykhailyuk O. V., Timokha A. N.
We establish a variational criterion of stability for the problem of the vibrocapillary equilibrium state which appears in the theory of interaction of limited volumes of liquid with vibrational fields.
Methods for the solution of equations with restrictions and the Sokolov projection-iterative method
Luchka A. Y.
We establish consistency conditions for equations with additional restrictions in a Hilbert space, suggest and justify iterative methods for the construction of approximate solutions, and describe the relationship between these methods and the Sokolov projection-iterative method.
Variational schemes for vector eigenvalue problems
Makarov I. L.
We construct and study exact and truncated self-adjoint three-point variational schemes of any degree of accuracy for self-adjoint eigenvalue problems for systems of second-order ordinary differential equations.
Potential fields with axial symmetry and algebras of monogenic functions of a vector variable. I
Mel'nichenko I. P., Plaksa S. A.
We obtain a new representation of potential and flow functions for space potential solenoidal fields with axial symmetry. We study principal algebraic-analytical properties of monogenic functions of a vector variable with values in an infinite-dimensional Banach algebra of even Fourier series and describe the relationship between these functions and the axially symmetric potential and Stokes flow function. The suggested method for the description of the above-mentioned fields is an analog of the method of analytic functions in the complex plane for the description of plane potential fields.
On the optimization of projection-iterative methods for the approximate solution of ill-posed problems
Pereverzev S. V., Solodkii S. G.
We consider a new version of the projection-iterative method for the solution of operator equations of the first kind. We show that it is more economical in the sense of amount of used discrete information.
Moduli of continuity defined by zero continuation of functions and K-functionals with restrictions
Radzievskii G. V.
We consider the following K-functional:
$$K(\delta, f)_p := \inf_{g \in W_{p,U}^r} \left\{ \|f - g\|_{L_p} + \delta \sum_{j=0}^{r} \|g^{(j)}\|_{L_p} \right\}, \quad \delta \geq 0,$$
where $f \in L_p := L_p[0,1]$ and $W_{p,U}^r$ is a subspace of the Sobolev space $W_p^r[0,1]$, $1 \leq p \leq \infty$, which consists of functions $g$ such that $\int_0^1 g^{(l_j)}(\tau)\, d\sigma_j(\tau) = 0$, $j = 1, \ldots, n$. Assume that $0 \leq l_1 \leq \ldots \leq l_n \leq r-1$ and there is at least one point $\tau_j$ of jump for each function $\sigma_j$, and if $\tau_j = \tau_s$ for $j \neq s$, then $l_j \neq l_s$. Let $\hat{f}(t) = f(t)$ for $0 \leq t \leq 1$, let $\hat{f}(t) = 0$ for $t < 0$, and let the modulus of continuity of the function $f$ be given by the equality
$$\hat{\omega}_0^{[l]}(\delta, f)_p := \sup_{0 \leq h \leq \delta} \left\| \sum_{j=0}^{l} (-1)^j \binom{l}{j} \hat{f}(\cdot - hj) \right\|_{L_p}, \quad \delta \geq 0.$$
We obtain the estimates $K(\delta^r, f)_p \leq c\, \hat{\omega}_0^{[l_1]}(\delta, f)_p$ and $K(\delta^r, f)_p \leq c\, \hat{\omega}_0^{[l_1+1]}(\delta^\beta, f)_p$, where $\beta = (pl_1 + 1)/(p(l_1 + 1))$ and the constant $c > 0$ does not depend on $\delta > 0$ and $f \in L_p$. We also establish some other estimates for the considered K-functional.
Sobolev problem in the complete scale of Banach Spaces
Roitberg Ya. A., Sklyarets A. V.
In a bounded domainG ⊂ ℝ n , whose boundary is the union of manifolds of different dimensions, we study the Sobolev problem for a properly elliptic expression of order 2m. The boundary conditions are given by linear differential expressions on manifolds of different dimensions. We study the Sobolev problem in the complete scale of Banach spaces. For this problem, we prove the theorem on a complete set of isomorphisms and indicate its applications.
Brief Communications (Russian)
Coercive solvability of a generalized Cauchy-Riemann system in the Space $L_p (E)$
Ospanov K. N.
For an inhomogeneous generalized Cauchy-Riemann system with nonsmooth coefficients separated from zero, we establish conditions for the solvability and estimation of a weighted solution and its first-order derivatives.
Brief Communications (Ukrainian)
Periodic solutions of Quasilinear Hyperbolic integro-differential equations of second order
Petrovskii Ya. B.
We study a periodic boundary-value problem for a quasilinear integro-differential equation with the d'Alembert operator on the left-hand side and a nonlinear integral operator on the right-hand side. We establish conditions under which the uniqueness theorems are true.
On averaging of differential inclusions in the case where the average of the right-hand side does not exist
Plotnikov V. A., Savchenko V. M.
We consider the problem of application of the averaging method to the asymptotic approximation of solutions of differential inclusions of standard form in the case where the average of the right-hand side does not exist.
Boundary-Value problems for systems of integro-differential equations with Degenerate Kernel
Boichuk О. A., Krivosheya S. A., Samoilenko A. M.
By using methods of the theory of generalized inverse matrices, we establish a criterion of solvability and study the structure of the set of solutions of a general linear Noether boundary-value problem for systems of integro-differential equations of Fredholm type with degenerate kernel.
On the instability of lagrange solutions in the three-body problem
Sosnitskii S. P.
We consider the relation between the Lyapunov instability of Lagrange equilateral triangle solutions and their orbital instability. We present a theorem on the orbital instability of Lagrange solutions. This theorem is extended to the planar n-body problem.
why isn't the net force considered while calculating potential due to a system of charges?
The textbook I'm reading defines potential at a point as
work done per unit charge by an external agent to move the test charge from the reference point to the point under consideration (without changing its kinetic energy)
During calculation of potential at a point due to a system of charges, why isn't the work done against the net force due to the system considered instead of simply adding up the work done against separate forces caused by individual charges?
PS: Wherever the explanation requires math, kindly also provide its physical implications
electrostatics potential
Apoorv
...why isn't the work done against the net force due to the system considered instead of simply adding up the work done against separate forces caused by individual charges?
They're both equivalent, due to the principle of superposition.
Basically, the net force is what you get when you add up the separate forces from the individual charges acting on the test charge, so when you calculate the work done against the net force, it's the same as adding up the work done against the separate forces.
General two particle system:
Imagine you have two charged particles in space, $Q_1$ and $Q_2$, and your test charge. When you move your test charge, the work done against the electrostatic force, $\mathbf{F_1}$, of $Q_1$ is $W_1=-\int{\mathbf{F_1}\cdot\mathrm{d}\mathbf{r}}$. Similarly, the work done against charge 2 is $W_2=-\int{\mathbf{F_2}\cdot\mathrm{d}\mathbf{r}}$.
What is the total work done? $$W_T=W_1+W_2=-\int{\mathbf{F_1}\cdot\mathrm{d}\mathbf{r}}-\int{\mathbf{F_2}\cdot\mathrm{d}\mathbf{r}}$$ $$=-\int{(\mathbf{F_1}+\mathbf{F_2})\cdot\mathrm{d}\mathbf{r}}$$ But wait, what is the resultant force on the test charge? It's $\mathbf{F_T}=\mathbf{F_1}+\mathbf{F_2}$. Therefore $$W_T=-\int{\mathbf{F_T}\cdot\mathrm{d}\mathbf{r}}$$
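If it helps, the superposition argument can also be checked numerically. The sketch below (a rough illustration, not part of the standard derivation) moves a test charge along a straight path near two point charges and compares the work done against each force with the work done against their sum; the charges, positions, and path are made-up values:

```python
import numpy as np

k = 8.9875517923e9  # Coulomb constant in N*m^2/C^2

def coulomb_force(q_src, r_src, q_test, r):
    # force on the test charge at position r due to a point charge q_src at r_src
    d = r - r_src
    return k * q_src * q_test * d / np.linalg.norm(d) ** 3

def work_by_external_agent(force, path):
    # W_ext = -∫ F·dr, approximated with midpoint segments
    W = 0.0
    for a, b in zip(path[:-1], path[1:]):
        W -= np.dot(force(0.5 * (a + b)), b - a)
    return W

q1, r1 = 2e-9, np.array([0.0, 0.0])    # hypothetical source charges and positions
q2, r2 = -3e-9, np.array([1.0, 0.0])
qt = 1e-9                              # test charge
path = np.linspace([0.2, 0.5], [0.8, 1.5], 2001)  # straight path, away from the sources

F1 = lambda r: coulomb_force(q1, r1, qt, r)
F2 = lambda r: coulomb_force(q2, r2, qt, r)
W1 = work_by_external_agent(F1, path)
W2 = work_by_external_agent(F2, path)
W_total = work_by_external_agent(lambda r: F1(r) + F2(r), path)
print(W1 + W2, W_total)   # equal: work against the net force = sum of the separate works
```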
binaryfunt
$\begingroup$ @Apoorv Are you familiar with line integrals, i.e., that $W=\int \mathbf{F}\cdot \mathrm{d} \mathbf{r}$? $\endgroup$ – binaryfunt May 17 '15 at 12:17
$\begingroup$ yes, I'm pretty comfortable with those $\endgroup$ – Apoorv May 17 '15 at 12:21
A Student's Guide to the Schrödinger Equation
Daniel A. Fleisch
Vectors and Functions
p. 1 - Intro
p. 2 - Section 1.1
p. 14 - Section 1.3
Interactive Simulations Code
Figure 1.2
1.2 MATLAB
1.2 OCTAVE
Worked Problems
Problem 1 (Analytical Approach)
Find the components of vector \(\vec{C}=\vec{A}+\vec{B}\) if $$\vec{A}=3\hat{\imath}-2\hat{\jmath}$$ and $$\vec{B}=\hat{\imath}+\hat{\jmath}$$ using Eq. 1.4. Verify your answer using graphical addition.
Hint 1a
Add the x-component of vector \(\vec{A}\) (which is \(3\hat{\imath}\)) to the x-component of vector \(\vec{B}\) (which is \(\hat{\imath}=1\hat{\imath}\)).
Add the y-component of vector \(\vec{A}\) (which is \(-2\hat{\jmath}\)) to the y-component of vector \(\vec{B}\) (which is \(\hat{\jmath}=1\hat{\jmath}\))
The x-component of vector \(\vec{C}=\vec{A}+\vec{B}\) is
\begin{equation*}
C_x=A_x+B_x=3\hat{\imath}+1\hat{\imath}=4\hat{\imath}
\end{equation*}
and the y-component of vector \(\vec{C}=\vec{A}+\vec{B}\) is
\begin{equation*}
C_y=A_y+B_y=-2\hat{\jmath}+1\hat{\jmath}=-1\hat{\jmath}=-\hat{\jmath}
\end{equation*}
so the vector \(\vec{C}\) is
\begin{equation*}
\vec{C}=4\hat{\imath}-\hat{\jmath}.
\end{equation*}
Full Solution (Analytical approach)
To find the x-component of vector \(\vec{C}\), add the x-component of vector \(\vec{A}\) (which is \(3\hat{\imath}\)) to the x-component of vector \(\vec{B}\) (which is \(\hat{\imath}=1\hat{\imath}\)).
To find the y-component of vector \(\vec{C}\), add the y-component of vector \(\vec{A}\) (which is \(-2\hat{\jmath}\)) to the y-component of vector \(\vec{B}\) (which is \(\hat{\jmath}=1\hat{\jmath}\)).
This makes the x-component of vector \(\vec{C}=\vec{A}+\vec{B}\)
\begin{equation*}
C_x=A_x+B_x=3\hat{\imath}+1\hat{\imath}=4\hat{\imath}
\end{equation*}
and the y-component of vector \(\vec{C}=\vec{A}+\vec{B}\)
\begin{equation*}
C_y=A_y+B_y=-2\hat{\jmath}+1\hat{\jmath}=-\hat{\jmath}
\end{equation*}
so \(\vec{C}=4\hat{\imath}-\hat{\jmath}\), as found above.
Problem 1 (Graphical Approach)
Find the components of vector \(\vec{C}=\vec{A}+\vec{B}\) if \(\vec{A}=3\hat{\imath}-2\hat{\jmath}\) and \(\vec{B}=\hat{\imath}+\hat{\jmath}\) using Eq. 1.4. Verify your answer using graphical addition.
Hint 1b
Draw horizontal (x) and vertical (y) axes with unit vector \(\hat{\imath}\) extending one unit along the x-axis and unit vector \(\hat{\jmath}\) extending one unit along the y-axis. Sketch vector \(\vec{A}=3\hat{\imath}-2\hat{\jmath}\) with its base at the origin (\(x=0, y=0)\) and its tip at the point (\(x=3, y=-2)\).
Sketch vector \(\vec{B}=\hat{\imath}+\hat{\jmath}\) with its base at the origin (\(x=0, y=0)\) and its tip at the point (\(x=1, y=1)\).
Displace vector \(\vec{B}\) (without changing its length or its direction) so that its base is at the tip of vector \(\vec{A}\).
Sketch vector \(\vec{C}\) from the beginning (the base) of vector \(\vec{A}\) to the end (the tip) of vector \(\vec{B}\). Note that vector \(\vec{C}=4\hat{\imath}-\hat{\jmath}\), in agreement with the result of the analytical approach. Note also that you could have achieved the same result by displacing vector \(\vec{A}\) (again without changing its length or its direction) so that its base is at the tip of vector \(\vec{B}\).
Full Solution (Graphical Approach)
Begin by drawing horizontal (x) and vertical (y) axes with unit vector \(\hat{\imath}\) extending one unit along the x-axis and unit vector \(\hat{\jmath}\) extending one unit along the y-axis. Then sketch vector \(\vec{A}=3\hat{\imath}-2\hat{\jmath}\) with its base at the origin (x=0,y=0) and its tip at the point (x=3, y=-2).
Now sketch vector \(\vec{B}=\hat{\imath}+\hat{\jmath}\) with its base at the origin (x=0,y=0) and its tip at the point (x=1,y=1).
The next step is to displace vector \(\vec{B}\) (without changing its length or its direction) so that its base is at the tip of vector \(\vec{A}\).
Finally, sketch vector \(\vec{C}\) from the beginning (the base) of vector \(\vec{A}\) to the end (the tip) of vector \(\vec{B}\). Note that vector \(\vec{C}=4\hat{\imath}-\hat{\jmath}\), in agreement with the result of the analytical approach. Note also that you could have achieved the same result by displacing vector \(\vec{A}\) (again without changing its length or its direction) so that its base is at the tip of vector \(\vec{B}\).
Problem 2
What are the lengths of vectors \(\vec{A}\), \(\vec{B}\), and \(\vec{C}\) from Problem 1? Verify your answers using your graph from Problem 1.
Hint 1
According to Eq. 1.3, the magnitude of a vector can be found by squaring and adding the vector's components and taking the square root of the result.
For the two-dimensional vectors \(\vec{A}\), \(\vec{B}\), and \(\vec{C}\) of Problem 1, the magnitudes can be found using
\begin{align*}
|\vec{A}|&=\sqrt{A_x^2+A_y^2}=\sqrt{(3)^2+(-2)^2}\\
|\vec{B}|&=\sqrt{B_x^2+B_y^2}=\sqrt{(1)^2+(1)^2}\\
|\vec{C}|&=\sqrt{C_x^2+C_y^2}=\sqrt{(4)^2+(1)^2}.
\end{align*}
The values
\begin{align*}
|\vec{A}|&=\sqrt{13}=3.6\\
|\vec{B}|&=\sqrt{2}=1.4\\
|\vec{C}|&=\sqrt{17}=4.1
\end{align*}
can be verified using a ruler to measure the lengths of the vectors in the graph from Problem 1.
Full Solution
For the two-dimensional vectors \(\vec{A}\), \(\vec{B}\), and \(\vec{C}\) of Problem 1, the magnitudes are
\begin{align*}
|\vec{A}|&=\sqrt{A_x^2+A_y^2}=\sqrt{(3)^2+(-2)^2}=\sqrt{13}=3.6\\
|\vec{B}|&=\sqrt{B_x^2+B_y^2}=\sqrt{(1)^2+(1)^2}=\sqrt{2}=1.4\\
|\vec{C}|&=\sqrt{C_x^2+C_y^2}=\sqrt{(4)^2+(1)^2}=\sqrt{17}=4.1
\end{align*}
as you can verify by measuring the lengths of the vectors in the graph from Problem 1 using a ruler.
Find the scalar product \(\vec{A}\circ \vec{B}\) for vectors \(\vec{A}\) and \(\vec{B}\) from Problem 1. Use your result to find the angle between \(\vec{A}\) and \(\vec{B}\) using Eq. 1.10 and the magnitudes \(|\vec{A}|\) and \(|\vec{B}|\) that you found in Problem 2. Verify your answer for the angle using your graph from Problem 1.
Eq. 1.6 tells you that the scalar product between vectors \(\vec{A}\) and \(\vec{B}\) can be found by multiplying the corresponding Cartesian components and summing those products.
For the two-dimensional vectors \(\vec{A}\) and \(\vec{B}\) of Problem 1, the scalar product is
$$(\vec{A},\vec{B})=\vec{A}\circ\vec{B}=A_xB_x+A_yB_y=(3)(1)+(-2)(1)$$
(remember that these two products are scalars which may be added together to give the scalar value for \(\vec{A}\circ \vec{B}\))
The cosine of the angle between two vectors can be found by dividing the scalar product of the two vectors by the product of the vectors' magnitudes, as shown in Eq. 1.10.
In this case, the scalar product of the vectors is one and the magnitudes of the two vectors are \(|\vec{A}|=\sqrt{13}\) and \(|\vec{B}|=\sqrt{2}\), so the cosine of the angle between \(\vec{A}\) and \(\vec{B}\) is
$$\cos{\theta}=\frac{\vec{A}\circ\vec{B}}{|\vec{A}||\vec{B}|}=\frac{1}{\sqrt{13}\sqrt{2}}=\frac{1}{\sqrt{26}}$$
so the angle \(\theta\) may be found by taking the arccosine of this value.
The angle \(\theta=\arccos{\frac{1}{\sqrt{26}}}=78.7^\circ\) between vectors \(\vec{A}\) and \(\vec{B}\) may be confirmed using a protractor to find the angle between the vectors on your graph from Problem 1.
Eq. 1.6 tells you that the scalar product between vectors \(\vec{A}\) and \(\vec{B}\) can be found by multiplying the corresponding Cartesian components and summing those products. For the two-dimensional vectors \(\vec{A}\) and \(\vec{B}\) of Problem 1, the scalar product is
(remember that these two products are scalars which may be added together to give the scalar value for \(\vec{A}\circ \vec{B}\), so the scalar product \(\vec{A}\circ \vec{B}=1\) in this case.)
The cosine of the angle between two vectors can be found by dividing the scalar product of the two vectors by the product of the vectors' magnitudes, as shown in Eq. 1.10. In this case, the scalar product of the vectors is one and the magnitudes of the two vectors are \(|\vec{A}|=\sqrt{13}\) and \(|\vec{B}|=\sqrt{2}\), so the cosine of the angle between \(\vec{A}\) and \(\vec{B}\) is
That gives the value of the angle between vectors \(\vec{A}\) and \(\vec{B}\) as $$\theta=\arccos{\frac{1}{\sqrt{26}}}=78.7^\circ$$ which may be confirmed using a protractor to find the angle between the vectors on your graph from Problem 1.
Are the 2D vectors \(\vec{A}\) and \(\vec{B}\) from Problem 1 orthogonal? Consider what happens if you add a third component of \(+\hat{k}\) to \(\vec{A}\) and \(-\hat{k}\) to \(\vec{B}\); are the 3D vectors \(\vec{A}=3\hat{\imath}-2\hat{\jmath}+\hat{k}\) and \(\vec{B}=\hat{\imath}+\hat{\jmath}-\hat{k}\) orthogonal? This illustrates the principle that vectors (and abstract N-dimensional vectors) may be orthogonal over some range of components but non-orthogonal over a different range.
According to Eq. 1.8, the scalar product between two orthogonal vectors is zero.
As shown in the solution for Problem 3, the dot product between the two-dimensional vectors \(\vec{A}\) and \(\vec{B}\) of Problem 1 is
$$(\vec{A},\vec{B})=\vec{A}\circ\vec{B}=A_xB_x+A_yB_y=(3)(1)+(-2)(1)=1$$
which means that these two-dimensional vectors are not perpendicular.
To determine whether the three-dimensional vectors \(\vec{A}=3\hat{\imath}-2\hat{\jmath}+\hat{k}\) and \(\vec{B}=\hat{\imath}+\hat{\jmath}-\hat{k}\) are perpendicular, take the scalar product between them to check whether it's zero.
The scalar product between the 3-D vectors \(\vec{A}\) and \(\vec{B}\) is
\begin{align*}
(\vec{A},\vec{B})&=\vec{A}\circ\vec{B}=A_xB_x+A_yB_y+A_zB_z\\
&=(3)(1)+(-2)(1)+(1)(-1)=0
\end{align*}
so these 3-D vectors are orthogonal.
According to Eq. 1.8, the scalar product between two orthogonal vectors is zero. As shown in the solution for Problem 3, the dot product between the two-dimensional vectors \(\vec{A}\) and \(\vec{B}\) of Problem 1 is
$$(\vec{A},\vec{B})=\vec{A}\circ\vec{B}=A_xB_x+A_yB_y=(3)(1)+(-2)(1)=1,$$
which means that these two-dimensional vectors are not perpendicular.
To determine whether the three-dimensional vectors \(\vec{A}=3\hat{\imath}-2\hat{\jmath}+\hat{k}\) and \(\vec{B}=\hat{\imath}+\hat{\jmath}-\hat{k}\) are perpendicular, take the scalar product between them to check whether it's zero. The scalar product between the 3-D vectors \(\vec{A}\) and \(\vec{B}\) is
$$(\vec{A},\vec{B})=\vec{A}\circ\vec{B}=A_xB_x+A_yB_y+A_zB_z=(3)(1)+(-2)(1)+(1)(-1)=0,$$
so these 3-D vectors are orthogonal.
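The arithmetic in Problems 1–4 can be spot-checked numerically. The following is a minimal NumPy sketch (not part of the book's materials; the arrays simply hold the components used above):

```python
import numpy as np

# Components of the vectors used in Problems 1-4
A2 = np.array([3.0, -2.0])           # A = 3i - 2j
B2 = np.array([1.0, 1.0])            # B = i + j

C2 = A2 + B2                          # Problem 1: C = A + B -> [4, -1]
print("C =", C2)

# Problem 2: magnitudes
print("|A|, |B|, |C| =", np.linalg.norm(A2), np.linalg.norm(B2), np.linalg.norm(C2))

# Problem 3: scalar product and the angle between A and B
dot2 = np.dot(A2, B2)                                        # = 1
theta = np.degrees(np.arccos(dot2 / (np.linalg.norm(A2) * np.linalg.norm(B2))))
print("A.B =", dot2, " angle =", round(theta, 1), "deg")     # ~78.7 deg

# Problem 4: add a third component (+k to A, -k to B) and re-test orthogonality
A3 = np.array([3.0, -2.0, 1.0])
B3 = np.array([1.0, 1.0, -1.0])
print("2D A.B =", np.dot(A2, B2), " 3D A.B =", np.dot(A3, B3))   # 1 (not orthogonal) vs 0 (orthogonal)
```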
If ket \(\ket{\psi}=4\ket{\epsilon_1}-2i\ket{\epsilon_2}+i\ket{\epsilon_3}\) in a coordinate system with orthonormal basis kets \(\ket{\epsilon_1}\), \(\ket{\epsilon_2}\), and \(\ket{\epsilon_3}\), find the norm of \(\ket{\psi}\). Then "normalize" \(\ket{\psi}\) by dividing each component of \(\ket{\psi}\) by the norm of \(\ket{\psi}\).
As described in Section 1.2, the square of the norm of ket \(\ket{A}\) can be found by operating on ket \(\ket{A}\) with bra \(\bra{A}\).
For ket
$$\ket{\psi}=4\ket{\epsilon_1}-2i\ket{\epsilon_2}+i\ket{\epsilon_3}=\begin{pmatrix}4\\-2i\\i \end{pmatrix}$$
the corresponding bra is
$$\bra{\psi}=\left(4^*\; -2i^*\; i^*\right)=(4\; 2i\; -i).$$
The inner product of \(\psi\) with itself is
\begin{align*}
\braket{\psi\vert\psi}&=(4\; 2i\; -i)\begin{pmatrix}4\\-2i\\i \end{pmatrix}=(4)(4)+(2i)(-2i)+(-i)(i)\\
&=16+4+1.
\end{align*}
Since \(|\psi|^2=\braket{\psi\vert\psi}=21\), the norm of \(\psi\) is
$$|\psi|=\sqrt{21}.$$
Dividing ket \(\ket{\psi}\) by its norm gives the normalized version of \(\ket{\psi}\):
$$\ket{\psi}=\frac{1}{\sqrt{21}}\left(4\ket{\epsilon_1}-2i\ket{\epsilon_2}+i\ket{\epsilon_3}\right)=\frac{1}{\sqrt{21}}\begin{pmatrix}4\\-2i\\i \end{pmatrix}.$$
As described in Section 1.2, the square of the norm of ket \(\ket{A}\) can be found by operating on ket \(\ket{A}\) with bra \(\bra{A}\). For ket
The inner product of \(\psi\) with itself is therefore
&=16+4+1,
and since \(|\psi|^2=\braket{\psi\vert\psi}=21\), the norm of \(\psi\) is
For ket \(\ket{\psi}\) from Problem 5 and ket \(\ket{\phi}=3i\ket{\epsilon_1}+\ket{\epsilon_2}-5i\ket{\epsilon_3}\), find the inner product \(\braket{\phi\vert\psi}\) and show that \(\braket{\phi\vert\psi}=\braket{\psi\vert\phi}^*\).
To form the inner product \(\braket{\phi\vert\psi}\), start by finding the bra \(\bra{\phi}\) that is the dual of ket \(\ket{\phi}\).
The bra \(\bra{\phi}\) that is the dual of ket \(\ket{\phi}=3i\ket{\epsilon_1}+\ket{\epsilon_2}-5i\ket{\epsilon_3}\) is
$$\bra{\phi}=(3i^*\; 1^*\; -5i^*)=(-3i\; 1\; 5i).$$
Forming the inner product using the bra \(\bra{\phi}\) from the previous hint and ket \(\ket{\psi}\) gives
\begin{align*}
\braket{\phi\vert\psi}&=(-3i\; 1\; 5i)\begin{pmatrix}4\\-2i\\i \end{pmatrix}\\
&=(-3i)(4)+(1)(-2i)+(5i)(i).
\end{align*}
To show that \(\braket{\phi\vert\psi}=\braket{\psi\vert\phi}^*\), find the bra \(\bra{\psi}\) that is the dual of ket \(\ket{\psi}\) and multiply that bra by the ket \(\ket{\phi}\).
The bra \(\bra{\psi}\) that is the dual of ket \(\ket{\psi}\) is
$$\bra{\psi}=(4^*\; -2i^*\; i^*)=(4\; 2i\; -i).$$
Multiplying bra \(\bra{\psi}\) by ket \(\ket{\phi}\) gives
\begin{align*}
\braket{\psi\vert\phi}&=(4\; 2i\; -i)\begin{pmatrix}3i\\1\\-5i \end{pmatrix}\\
&=12i+2i-5.
\end{align*}
To form the inner product \(\braket{\phi\vert\psi}\), start by finding the bra \(\bra{\phi}\) that is the dual of ket \(\ket{\phi}\). This bra is
$$\bra{\phi}=(3i^*\; 1^*\; -5i^*)=(-3i\; 1\; 5i),$$
and forming the inner product of this bra with ket \(\ket{\psi}\) gives
$$\braket{\phi\vert\psi}=(-3i)(4)+(1)(-2i)+(5i)(i)=-14i-5.$$
To show that \(\braket{\phi\vert\psi}=\braket{\psi\vert\phi}^*\), find the bra \(\bra{\psi}\) that is the dual of ket \(\ket{\psi}\) and multiply that bra by the ket \(\ket{\phi}\). This bra is
$$\bra{\psi}=(4^*\; -2i^*\; i^*)=(4\; 2i\; -i),$$
and multiplying this bra by ket \(\ket{\phi}\) gives
$$\braket{\psi\vert\phi}=12i+2i-5=14i-5.$$
Comparing the inner-product result for \(\braket{\phi\vert\psi}\) with the result for \(\braket{\psi\vert\phi}\) shows
\begin{align*}
\braket{\phi\vert\psi}&=-14i-5\\
\braket{\psi\vert\phi}&=14i-5
\end{align*}
so \(\braket{\phi\vert\psi}=\braket{\psi\vert\phi}^*\).
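Since bras act as conjugate transposes of kets, these manipulations are ordinary complex dot products and can be verified with a short NumPy sketch (illustrative only, not part of the book's materials; note that `np.vdot` conjugates its first argument, which is exactly the bra–ket rule used above):

```python
import numpy as np

psi = np.array([4, -2j, 1j])          # |psi> from Problem 5
phi = np.array([3j, 1, -5j])          # |phi> from Problem 6

# Problem 5: norm of |psi> and the normalized ket
norm_psi = np.sqrt(np.vdot(psi, psi).real)      # sqrt(21)
print("norm =", norm_psi, " normalized:", psi / norm_psi)

# Problem 6: <phi|psi> and <psi|phi> should be complex conjugates of each other
phi_psi = np.vdot(phi, psi)           # -5 - 14i
psi_phi = np.vdot(psi, phi)           # -5 + 14i
print(phi_psi, psi_phi, np.isclose(phi_psi, np.conj(psi_phi)))
```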
If \(m\) and \(n\) are different positive integers, are the functions \(\sin{mx}\) and \(\sin{nx}\) orthogonal over the interval \(x=0\) to \(x=2\pi\)? What about over the interval \(x=0\) to \(x=\frac{3\pi}{2}\)?
As described in Section 1.5, two functions are orthogonal if the inner product between the functions equals zero:
$$(f(x),g(x))=\braket{f(x)\vert g(x)}=\int_{-\infty}^{\infty}f^*(x)g(x)dx=0.$$
If you make the function \(f(x)=\sin{mx}\) and the function \(g(x)=\sin{nx}\), the inner product is
$$(f(x),g(x))=\int_{0}^{2\pi}(\sin{mx})^*(\sin{nx})\:dx.$$
Since \((\sin{mx})^*=\sin{mx}\), you can use the integral relation
$$\int\sin{mx}\sin{nx}\;dx=\left[\frac{\sin{(m-n)x}}{2(m-n)}-\frac{\sin{(m+n)x}}{2(m+n)}\right]$$
in which \(m\) and \(n\) are different integers.
Using the integral relation from the previous hint gives
\begin{align*}
\int_0^{2\pi}\sin{mx}\sin{nx}\;dx&=\left[\frac{\sin{(m-n)x}}{2(m-n)}-\frac{\sin{(m+n)x}}{2(m+n)}\right]\Bigr|_{0}^{2\pi}\\
&=\left[\frac{\sin{2\pi(m-n)}}{2(m-n)}-\frac{\sin{2\pi(m+n)}}{2(m+n)}\right]\\
&\hspace{1cm}-\left[\frac{\sin{0}}{2(m-n)}-\frac{\sin{0}}{2(m+n)}\right].
\end{align*}
Since \(m\) and \(n\) are different integers, the difference \(m-n\) and the sum \(m+n\) are also integers, and the sine of any integer multiple of \(2\pi\) is zero. The sine of zero is also zero.
For the interval \(x=0\) to \(x=\frac{3\pi}{2}\), use the same process with these limits on the integrals:
\begin{align*}
\int_0^{\frac{3\pi}{2}}\sin{mx}\sin{nx}\;dx&=\left[\frac{\sin{(m-n)x}}{2(m-n)}-\frac{\sin{(m+n)x}}{2(m+n)}\right]\Bigr|_{0}^{\frac{3\pi}{2}}\\
&=\left[\frac{\sin{\frac{3\pi}{2}(m-n)}}{2(m-n)}-\frac{\sin{\frac{3\pi}{2}(m+n)}}{2(m+n)}\right].
\end{align*}
The term \(\frac{\sin{(m-n)}\frac{3\pi}{2}}{2(m-n)}\) and the term
\(\frac{\sin{(m+n)}\frac{3\pi}{2}}{2(m+n)}\) can take on various values, depending on the values of \(m\) and \(n\), but the difference between these two terms is not, in general, equal to zero (specifically, when \(m-n\) is odd).
As described in Section 1.5, two functions are orthogonal if the inner product between the functions equals zero:
Using this integral relation gives
Since \(m\) and \(n\) are different integers, the difference \(m-n\) is also an integer, and the sine of any integer multiple of \(2\pi\) is zero. The sine of zero is also zero, so
$$(f(x),g(x))=\int_{0}^{2\pi}(\sin{mx})^*(\sin{nx})\:dx=0$$
and these two functions are orthogonal over the interval of 0 to \(2\pi\).
The term \(\frac{\sin{(m-n)}\frac{3\pi}{2}}{2(m-n)}\) and the term \(\frac{\sin{(m+n)}\frac{3\pi}{2}}{2(m+n)}\) can take on various values, depending on the values of \(m\) and \(n\), but the difference between these two terms is not, in general, equal to zero (specifically, when \(m-n\) is odd). Hence these two functions are not orthogonal over the interval of 0 to \(3\pi/2\) if \(m-n\) is odd.
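The same conclusion can be reached numerically: approximating the inner-product integral on a fine grid shows that \(\sin{mx}\) and \(\sin{nx}\) integrate to zero over \(0\) to \(2\pi\) but generally not over \(0\) to \(3\pi/2\). The sketch below is an illustration only (any sufficiently fine grid and any distinct integers \(m,n\) will do):

```python
import numpy as np

def inner(m, n, upper):
    """Approximate the integral of sin(mx)*sin(nx) from 0 to `upper`."""
    x = np.linspace(0.0, upper, 200001)
    return np.trapz(np.sin(m * x) * np.sin(n * x), x)

m, n = 3, 2
print(inner(m, n, 2 * np.pi))        # ~0: orthogonal over [0, 2*pi]
print(inner(m, n, 1.5 * np.pi))      # generally nonzero (here m - n is odd)
```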
Can the functions \(e^{i\omega t}\) and \(e^{2i\omega t}\) with \(\omega=\frac{2\pi}{T}\) form an orthonormal basis over the interval \(t=0\) to \(t=T\)?
To serve as an orthonormal basis over the specified interval, these functions must be orthogonal over this interval and must have unity norm.
To determine if these two functions are orthogonal over the interval \(t=0\) to \(t=T\), evaluate the integral
$$\int_0^Te^{-i\omega t}e^{2i\omega t}dt$$
with \(\omega=\frac{2\pi}{T}\).
To evaluate the integral given in the previous hint, combine the two terms by adding their exponents and integrating:
$$\int_0^Te^{-i\omega t}e^{2i\omega t}dt=\int_0^T e^{i\omega t}dt=\frac{1}{i\omega}e^{i\omega t}\Bigr|_0^T.$$
Since
$$\frac{1}{i\omega}\left(e^{i\omega T}-e^{i\omega 0}\right)=\frac{1}{i\omega}\left(e^{i\frac{2\pi}{T} T}-e^{i\frac{2\pi}{T} 0}\right)=\frac{1}{i\omega}\left(e^{i(2\pi)}-e^0\right),$$
and \(e^{i(2\pi)}=\cos{2\pi}+i\sin{2\pi}=1\), the integral
$$\int_0^T e^{-i\omega t}e^{2i\omega t}dt=\frac{1}{i\omega}(1-1)=0.$$
To determine if the functions \(e^{i\omega t}\) and \(e^{2i\omega t}\) have unity norm, use Eq. 1.27:
$$|{f(x)}|=\sqrt{\braket{f(x)|f(x)}}=\sqrt{\int_{-\infty}^{\infty}f^*(x)f(x)dx}$$
which in this case is
$$|{e^{-i\omega t}}|=\sqrt{\int_{0}^{T}e^{+i\omega t}e^{-i \omega t}dt}$$
and
$$|{e^{2i\omega t}}|=\sqrt{\int_{0}^{T}e^{-2i\omega t}e^{2i \omega t}dt}.$$
The integrals of the previous hint evaluate to
$$|{e^{-i\omega t}}|=\sqrt{\int_{0}^{T}e^{0}dt}=\sqrt{T}$$
and
$$|{e^{2i\omega t}}|=\sqrt{\int_{0}^{T}e^{0}dt}=\sqrt{T},$$
so these functions do not have unity norm, but they can be normalized by dividing each by \(\sqrt{T}\).
To serve as an orthonormal basis over the specified interval, these functions must be orthogonal over this interval and must have unity norm. To determine if these two functions are orthogonal over the interval \(t=0\) to \(t=T\), evaluate the integral
To evaluate this integral, combine the two terms by adding their exponents and integrating:
$$\int_0^Te^{-i\omega t}e^{2i\omega t}dt=\int_0^T e^{i\omega t}dt=\frac{1}{i\omega}e^{i\omega t}\Bigr|_0^T=\frac{1}{i\omega}\left(e^{i(2\pi)}-e^0\right).$$
And since \(e^{i(2\pi)}=\cos{2\pi}+i\sin{2\pi}=1\), the integral
$$\int_0^T e^{-i\omega t}e^{2i\omega t}dt=\frac{1}{i\omega}(1-1)=0,$$
so these two functions are orthogonal to one another.
To determine if the functions \(e^{i\omega t}\) and \(e^{2i\omega t}\) have unity norm, use Eq. 1.27:
$$|{f(x)}|=\sqrt{\braket{f(x)|f(x)}}=\sqrt{\int_{-\infty}^{\infty}f^*(x)f(x)dx},$$
which in this case gives
$$|{e^{-i\omega t}}|=\sqrt{\int_{0}^{T}e^{+i\omega t}e^{-i \omega t}dt} \quad\text{and}\quad |{e^{2i\omega t}}|=\sqrt{\int_{0}^{T}e^{-2i\omega t}e^{2i \omega t}dt}.$$
These integrals evaluate to
$$|{e^{-i\omega t}}|=\sqrt{\int_{0}^{T}e^{0}dt}=\sqrt{T} \quad\text{and}\quad |{e^{2i\omega t}}|=\sqrt{\int_{0}^{T}e^{0}dt}=\sqrt{T},$$
so these functions do not have unity norm, but they can be normalized by dividing each by \(\sqrt{T}\).
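A quick numerical cross-check (an illustrative sketch only; the value of \(T\) is chosen arbitrarily) confirms both the orthogonality and the \(\sqrt{T}\) norms:

```python
import numpy as np

T = 2.0                      # arbitrary period for the check
w = 2 * np.pi / T
t = np.linspace(0.0, T, 200001)

f = np.exp(1j * w * t)       # e^{i w t}
g = np.exp(2j * w * t)       # e^{2i w t}

overlap = np.trapz(np.conj(f) * g, t)                  # ~0 -> orthogonal
norm_f = np.sqrt(np.trapz(np.conj(f) * f, t).real)     # ~sqrt(T)
norm_g = np.sqrt(np.trapz(np.conj(g) * g, t).real)     # ~sqrt(T)
print(abs(overlap), norm_f, norm_g)
```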
Given the basis vectors \(\vec{\epsilon}_1=3\hat{\imath}\), \(\vec{\epsilon}_2=4\hat{\jmath}+4\hat{k}\), and \(\vec{\epsilon}_3=-2\hat{\jmath}+\hat{k}\), what are the components of vector \(\vec{A}=6\hat{\imath}+6\hat{\jmath}+6\hat{k}\) along the direction of each of these basis vectors?
You can find the components of a vector in the direction of each basis vector by taking the inner product between the vector and each basis vector, as shown in Eq. 1.32.
In this case, inserting the basis vector \(\vec{\epsilon}_1\) and vector \(\vec{A}\) into Eq. 1.32 gives
$$A_1=\frac{\vec{\epsilon}_1 \circ \vec{A}}{\vert\vec{\epsilon}_1\vert^2}=\frac{3\hat{\imath} \circ (6\hat{\imath}+6\hat{\jmath}+6\hat{k})}{\vert3\hat{\imath}\vert^2}.$$
Since \(\hat{\imath}\circ \hat{\imath}=1\) and \(\hat{\imath}\circ \hat{\jmath}=\hat{\imath} \circ \hat{k}=0\), the expression in the previous hint is
$$A_1=\frac{3\hat{\imath} \circ (6\hat{\imath}+6\hat{\jmath}+6\hat{k})}{\vert3\hat{\imath}\vert^2}=\frac{(3)(6)(1)+(3)(6)(0)+(3)(6)(0)}{(3^2)+(0^2)+(0^2)}$$
The same process using basis vector \(\vec{\epsilon}_2\) and vector \(\vec{A}\) gives
$$A_2=\frac{(4\hat{\jmath}+4\hat{k}) \circ (6\hat{\imath}+6\hat{\jmath}+6\hat{k})}{\vert4\hat{\jmath}+4\hat{k}\vert^2}=\frac{(4)(6)(1)+(4)(6)(1)}{(4^2)+(4^2)}$$
and using basis vector \(\vec{\epsilon}_3\) and vector \(\vec{A}\) gives
$$A_3=\frac{(-2\hat{\jmath}+\hat{k}) \circ (6\hat{\imath}+6\hat{\jmath}+6\hat{k})}{\vert-2\hat{\jmath}+\hat{k}\vert^2}=\frac{(-2)(6)(1)+(1)(6)(1)}{((-2)^2)+(1^2)}.$$
You can find the components of a vector in the direction of each basis vector by taking the inner product between the vector and each basis vector, as shown in Eq. 1.32. In this case, inserting the basis vector \(\vec{\epsilon}_1\) and vector \(\vec{A}\) into Eq. 1.32 gives
$$A_1=\frac{\vec{\epsilon}_1 \circ \vec{A}}{\vert\vec{\epsilon}_1\vert^2}=\frac{3\hat{\imath} \circ (6\hat{\imath}+6\hat{\jmath}+6\hat{k})}{\vert3\hat{\imath}\vert^2},$$
and since \(\hat{\imath}\circ \hat{\imath}=1\) and \(\hat{\imath}\circ \hat{\jmath}=\hat{\imath} \circ \hat{k}=0\), this expression is
\begin{align*}
A_1=\frac{3\hat{\imath} \circ (6\hat{\imath}+6\hat{\jmath}+6\hat{k})}{\vert3\hat{\imath}\vert^2}&=\frac{(3)(6)(1)+(3)(6)(0)+(3)(6)(0)}{(3^2)+(0^2)+(0^2)}\\
&=\frac{18}{9}=2.
\end{align*}
The same process using basis vector \(\vec{\epsilon}_2\) gives
\begin{align*}
A_2=\frac{(4\hat{\jmath}+4\hat{k}) \circ (6\hat{\imath}+6\hat{\jmath}+6\hat{k})}{\vert4\hat{\jmath}+4\hat{k}\vert^2}&=\frac{(4)(6)(1)+(4)(6)(1)}{(4^2)+(4^2)}\\
&=\frac{48}{32}=1.5
\end{align*}
and using basis vector \(\vec{\epsilon}_3\) gives
\begin{align*}
A_3=\frac{(-2\hat{\jmath}+\hat{k}) \circ (6\hat{\imath}+6\hat{\jmath}+6\hat{k})}{\vert-2\hat{\jmath}+\hat{k}\vert^2}&=\frac{(-2)(6)(1)+(1)(6)(1)}{((-2)^2)+(1^2)}\\
&=\frac{-6}{5}=-1.2.
\end{align*}
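Equation 1.32 is just a dot product divided by a squared length, so the three components can be confirmed numerically (a sketch, not part of the book's materials):

```python
import numpy as np

e1 = np.array([3.0, 0.0, 0.0])       # eps_1 = 3i
e2 = np.array([0.0, 4.0, 4.0])       # eps_2 = 4j + 4k
e3 = np.array([0.0, -2.0, 1.0])      # eps_3 = -2j + k
A  = np.array([6.0, 6.0, 6.0])       # A = 6i + 6j + 6k

for name, e in [("A_1", e1), ("A_2", e2), ("A_3", e3)]:
    print(name, "=", np.dot(e, A) / np.dot(e, e))   # 2.0, 1.5, -1.2
```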
Given square-pulse function \(f(x)=1\) for \(0\leq x \leq L\) and \(f(x)=0\) for \(x < 0\) and \(x > L\), find the values of \(c_1\), \(c_2\), \(c_3\), and \(c_4\) for the basis functions \(\psi_1=\sin{(\frac{\pi x}{L})}\), \(\psi_2=\cos{(\frac{\pi x}{L})}\), \(\psi_3=\sin{(\frac{2\pi x}{L})}\), and \(\psi_4=\cos{(\frac{2\pi x}{L})}\).
To determine values of \(c_1\), \(c_2\), \(c_3\), and \(c_4\) (the "amount" of each basis function \(\psi_1(x)\), \(\psi_2(x)\), \(\psi_3(x)\), and \(\psi_4(x)\) contained in function \(f(x)\)), use Eq. 1.36:
\begin{align*}
c_1&=\frac{\braket{\psi_1\vert\psi}}{\braket{\psi_1\vert\psi_1}}=\frac{\int_{-\infty}^{\infty}\psi_1^*(x)\psi(x)dx}{\int_{-\infty}^{\infty}\psi_1^*(x)\psi_1(x)dx}\\
&\;\;\vdots\\
c_N&=\frac{\braket{\psi_N\vert\psi}}{\braket{\psi_N\vert\psi_N}}=\frac{\int_{-\infty}^{\infty}\psi_N^*(x)\psi(x)dx}{\int_{-\infty}^{\infty}\psi_N^*(x)\psi_N(x)dx}
\end{align*}
with \(\psi(x)=f(x)=1\) between \(x=0\) and \(x=L\).
For \(c_1\), the "amount" of \(\psi_1=\sin{(\frac{\pi x}{L})}\) contained in the function \(f(x)=1\) between \(x=0\) and \(x=L\), the first portion of Eq. 1.36 looks like this:
\begin{align*}
c_1&=\frac{\int_{-\infty}^{\infty}\psi_1^*(x)\psi(x)dx}{\int_{-\infty}^{\infty}\psi_1^*(x)\psi_1(x)dx}\\
&=\frac{\int_{0}^{L}\left[\sin{(\frac{\pi x}{L})}\right]^*\left[1\right]dx}{\int_{0}^{L}\left[\sin{(\frac{\pi x}{L})}\right]^*\left[\sin{(\frac{\pi x}{L})}\right]dx}.
\end{align*}
The integrals in the previous hint can be evaluated using
$$\int_{0}^{L}\left[\sin{\left(\frac{\pi x}{L}\right)}\right]dx=
\left[-\cos{\left(\frac{\pi x}{L}\right)}\right]\left[\frac{L}{\pi}\right]\Bigr|_0^L$$
and
$$\int_{0}^{L}\left[\sin{(\frac{\pi x}{L})}\right]^2dx=\left[\frac{x}{2}-\frac{\sin{\left(\frac{2\pi x}{L}\right)}}{4\left(\frac{\pi}{L}\right)}\right]\Bigr|_0^L.$$
Plugging in the limits gives
\begin{align*}
\int_{0}^{L}\left[\sin{\left(\frac{\pi x}{L}\right)}\right]dx&=
\left[-\cos{\left(\frac{\pi L}{L}\right)}\right]\left[\frac{L}{\pi}\right]-\left[-\cos{\left(\frac{\pi (0)}{L}\right)}\right]\left[\frac{L}{\pi}\right]\\
&=-(-1)\frac{L}{\pi}+\frac{L}{\pi}=\frac{2L}{\pi}
\end{align*}
and
\begin{align*}
\int_{0}^{L}\left[\sin{(\frac{\pi x}{L})}\right]^2dx&=\left[\frac{L}{2}-\frac{\sin{\left(\frac{2\pi L}{L}\right)}}{4\left(\frac{\pi}{L}\right)}\right]-\left[\frac{0}{2}-\frac{\sin{\left(\frac{2\pi (0)}{L}\right)}}{4\left(\frac{\pi}{L}\right)}\right]\\
&=\frac{L}{2},
\end{align*}
so
$$c_1=\frac{\frac{2L}{\pi}}{\frac{L}{2}}=\frac{4}{\pi}.$$
Using the same approach gives \(c_2\):
$$c_2=\frac{\int_{0}^{L}\left[\cos{(\frac{\pi x}{L})}\right]^*\left[1\right]dx}{\int_{0}^{L}\left[\cos{(\frac{\pi x}{L})}\right]^*\left[\cos{(\frac{\pi x}{L})}\right]dx},$$
which can be evaluated using
$$\int_{0}^{L}\left[\cos{\left(\frac{\pi x}{L}\right)}\right]dx=
\left[\sin{\left(\frac{\pi x}{L}\right)}\right]\left[\frac{L}{\pi}\right]\Bigr|_0^L=0$$
and
$$\int_{0}^{L}\left[\cos{(\frac{\pi x}{L})}\right]^2dx=\left[\frac{x}{2}+\frac{\sin{\left(\frac{2\pi x}{L}\right)}}{4\left(\frac{\pi}{L}\right)}\right]\Bigr|_0^L=\frac{L}{2}.$$
Thus \(c_2=\frac{0}{\frac{L}{2}}=0\).
Similarly,
$$c_3=\frac{\int_{0}^{L}\left[\sin{(\frac{2\pi x}{L})}\right]^*\left[1\right]dx}{\int_{0}^{L}\left[\sin{(\frac{2\pi x}{L})}\right]^*\left[\sin{(\frac{2\pi x}{L})}\right]dx},$$
which can be evaluated using
\begin{align*}
\int_{0}^{L}\left[\sin{\left(\frac{2\pi x}{L}\right)}\right]dx&=
\left[-\cos{\left(\frac{2\pi x}{L}\right)}\right]\left[\frac{L}{2\pi}\right]\Bigr|_0^L\\
&=\left[-1\right]\left[\frac{L}{2\pi}\right]-\left[-1\right]\left[\frac{L}{2\pi}\right]=0
\end{align*}
and
$$\int_{0}^{L}\left[\sin{(\frac{2\pi x}{L})}\right]^2dx=\left[\frac{x}{2}-\frac{\sin{\left(\frac{4\pi x}{L}\right)}}{8\left(\frac{\pi}{L}\right)}\right]\Bigr|_0^L=\frac{L}{2}.$$
Thus \(c_3=\frac{0}{\frac{L}{2}}=0\).
Finally,
$$c_4=\frac{\int_{0}^{L}\left[\cos{(\frac{2\pi x}{L})}\right]^*\left[1\right]dx}{\int_{0}^{L}\left[\cos{(\frac{2\pi x}{L})}\right]^*\left[\cos{(\frac{2\pi x}{L})}\right]dx},$$
which can be evaluated using
$$\int_{0}^{L}\left[\cos{\left(\frac{2\pi x}{L}\right)}\right]dx=
\left[\sin{\left(\frac{2\pi x}{L}\right)}\right]\left[\frac{L}{2\pi}\right]\Bigr|_0^L=0$$
and
$$\int_{0}^{L}\left[\cos{(\frac{2\pi x}{L})}\right]^2dx=\left[\frac{x}{2}+\frac{\sin{\left(\frac{4\pi x}{L}\right)}}{8\left(\frac{\pi}{L}\right)}\right]\Bigr|_0^L=\frac{L}{2},$$
so \(c_4=\frac{0}{\frac{L}{2}}=0\).
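The four coefficients can also be obtained by numerical integration, which provides a handy check on the algebra above (a sketch with \(L\) arbitrarily set to 1; since the basis functions are real, the conjugation in Eq. 1.36 has no effect):

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 200001)
f = np.ones_like(x)                       # square pulse: f(x) = 1 on [0, L]

basis = {
    "c1": np.sin(np.pi * x / L),
    "c2": np.cos(np.pi * x / L),
    "c3": np.sin(2 * np.pi * x / L),
    "c4": np.cos(2 * np.pi * x / L),
}

for name, psi in basis.items():
    c = np.trapz(psi * f, x) / np.trapz(psi * psi, x)   # Eq. 1.36 with real psi
    print(name, "=", round(c, 4))         # ~1.2732 (= 4/pi), 0, 0, 0
```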
After working through this chapter, readers will be able to manipulate vectors, express vectors using Dirac notation, explain how vectors are related to functions, and apply vector mathematics to complex abstract vectors and functions.
p. 3-4
Add vectors graphically and algebraically
Multiply vectors by scalars and by other vectors
Express vectors as kets and covectors as bras
Graph 2D vectors on the complex plane
Determine whether two functions are orthogonal
Find the components of a vector or function using the inner product
Welcome to the Chapter 1 Quiz
1) Which of the following procedures gives the length of a vector?
a) Adding the vector to itself and dividing the result by 2
b) Subtracting the vector from itself and squaring the result
c) Taking the dot product of the vector with itself
d) Taking the dot product of the vector with itself and taking the square root of the result
2) All basis vectors must be perpendicular to one another and must have unit length.
a) True
b) False
c) Maybe
3) If you know the components of a ket in some basis system and you wish to find the corresponding bra for that ket, which of these processes should you use?
4) Multiplying a bra times a ket produces
a) A bra
b) A ket
c) A vector
d) A scalar
5) Just like vectors, functions can be considered to have "directions."
6) Imaginary numbers are every bit as real as real numbers.
7) What is the effect of multiplying a complex number by i (the square root of minus one)?
a) The line from the origin to the number rotates 90 degrees counter-clockwise
b) The complex number becomes entirely real
c) The complex number becomes entirely imaginary
d) The magnitude of the number becomes negative
8) The inner product between two functions is zero.
9) The components of an N-dimensional abstract vector in a given basis can be found simply by dotting each basis vector into the vector.
10) Since sinusoids have values that range from -1 to +1, all sinusoidal functions are already normalized over all intervals.
Elementary PDEs
1 The PDE of heat propagation
2 Transfer depends on geometry
3 The PDE of wave propagation
4 Simulating wave propagation with a spreadsheet
The PDE of heat propagation
To model heat propagation, imagine a grid of square rooms and the temperature of each room changes by a proportion of the average of the temperature of the four adjacent rooms. Its spreadsheet simulation is seen in the two images below; they are the initial state (a single initial hot spot) and the results (after $1,700$ iterations) for the temperature at the border fixed at $0$:
We will postpone the topic of heat propagation in higher dimensional spaces until later and concentrate on the $1$-dimensional case: the heat is contained in a row of rooms and each room exchanges the material with its two neighbors through its walls.
$a=AB$ is one of the rooms;
$b,c$ are the two adjacent rooms, left and right;
$A,B$ are the walls of $a$, left and right;
$p=A^\star,q=B^\star$ are the two pipes from $a$, left and right.
What makes this different from ODEs is that the cochains will have two variables and two degrees -- with respect to location and with respect to time. The amount of heat $U=U(a,t)$ is simply a number assigned to each room $a$ which makes it a $1$-cochain. It also depends on time which makes it also a $0$-cochain.
A careful look reveals that to model heat transfer, we need to separately record the exchange of heat with each of the adjacent rooms.
The process we are to study obeys the following law of physics.
Newton's Law of Cooling: The rate of cooling of an object is proportional to the difference between its temperature and the ambient temperature.
This law is nothing but a version of the ODE of population growth and decay -- with respect to the exterior derivative $d_t$ over time. For each cell there are two adjacent cells and two temperature differences, $d_x$, to be taken into account. The result is a partial differential equation (PDE).
$d_t$ is the exterior derivative with respect to time; and
$d_x$ is the exterior derivative with respect to location. $\\$
Either is simply the difference in ${\mathbb R}$.
The conservation of energy in cell $a$ gives us the following. The change of the amount of heat in room $a$ over the increment of time is equal to $$d_t U(a)=-\bigg( \text{ sum of the outflow } F \text{ across the walls of } a\bigg).$$ The outflow gives the amount of flow across a node (from the room to its neighbor) per unit of time. It is a dual $1$-cochain. Specifically, the flow is positive at $A$ if it is from left to right and the opposite for $B$; then: $$d_t U(a)=-\big( F^\star(B)-F^\star(A) \big) = F^\star(A)-F^\star(B),$$ which is the exterior derivative of this $0$-cochain.
Now, we need to express $F$ in terms of $U$. The flow $F(A^\star)=F^\star(A)$ through wall $A$ of room $a$ is proportional to the difference of the amounts of heat in $a$ and the other room adjacent to $A$. So, $$F^\star(A) = - k(A)d_x(U^\star)(A^\star).$$ Here, $k(A) \ge 0$ represents the permeability of the wall $A$ at a given time. Specifically, $$\begin{array}{lll} F^\star(A) &= - k(A) \big( U(a) - U(b) \big),\\ F^\star(B) &= - k(B) \big( U(c) - U(a) \big). \end{array}$$
Naturally, the walls form the boundary $\partial a$ of $a$. Therefore, we can rewrite: $$d_t U(a)= -F^\star(\partial a)$$ or, by the Stokes Theorem, $$d_t U(a) = -(d_x F^\star)(a).$$
The result of the substitution is a PDE of second degree called the heat equation of cochains: $$d_t U = d_x \big( kd_x U^\star \big)^\star.$$ Specifically, we have: $$d_t U(a) = \Big(- k(A) \big( U(a) - U(b)\big)\Big) - \Big(- k(B) \big( U(c) - U(a)\big)\Big). $$ The right-hand side becomes the increment in the recursive formula for the simulation: $$U(a, t+1):= U(a,t) + \Big[- k(a) \big( U(a) - U(a-1)\big) + k(a+1) \big( U(a+1) - U(a)\big)\Big].$$ The initial state is shown below:
Note: When the domain isn't the whole space, the pipes at the border of the region have to be removed. In the spreadsheet, we use boundary conditions to substitute for the missing data.
The result after $1,500$ iterations is shown next:
One can clearly see how the density becomes uniform eventually -- but not across an impenetrable wall ($k(A_5)=0$).
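The recursive formula above maps directly onto a few lines of code. Below is a minimal Python sketch of the same simulation (not part of the original page; the number of rooms, the number of iterations, the permeability value, and the initial spike are made-up illustration values), with the pipes at the border removed and an impenetrable interior wall, as in the example:

```python
import numpy as np

N, steps = 20, 1500
U = np.zeros(N)
U[N // 2] = 100.0                # a single initial hot spot

k = np.full(N + 1, 0.25)         # permeability of wall i (between rooms i-1 and i)
k[0] = k[N] = 0.0                # border: the outer pipes are removed (no flow)
k[5] = 0.0                       # an impenetrable interior wall, as in the example

for _ in range(steps):
    flow = np.zeros(N + 1)
    flow[1:N] = -k[1:N] * (U[1:] - U[:-1])     # F*(A) = -k(A) (U(a) - U(b))
    U = U + (flow[:N] - flow[1:])              # d_t U(a) = F*(A) - F*(B)

print(U)                          # density becomes uniform, except across the k=0 wall
```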
Link to file: Spreadsheets.
Exercise. For an infinite sequence of rooms, what is the limit of $U$ as $t\to \infty$?
Exercise. Create a simulation for a circular sequence of rooms. What is the limit state?
Exercise. Generalize the formula to the case when the permeability of walls depends on the direction.
Transfer depends on geometry
We now consider the general case of edges/rooms of arbitrary size. Then what determines the dynamics of the heat transfer isn't the amount of heat $U(a)$ in room $a$ but its temperature, i.e., the average amount of heat: $U(a)/|a|$.
In summary, the amount of heat exchanged between two rooms is proportional to:
the temperature difference,
the permeability of the wall (dually: the conductance of the pipe),
the area of the wall that separates them (dually: the cross section of the pipe), and
inversely, to the distance between the centers of mass of the rooms (dually: the length of the pipe).
Let's split the data into three categories:
the adjacency of rooms (and the pipes) is the topology,
the areas of the walls (and the lengths of the pipes) is the geometry, and
the properties of the material of the walls (and the pipes) is the physics. $\\$
They are given by, respectively:
the domain ${\mathbb R}$,
the lengths of the edges of ${\mathbb R}$ and ${\mathbb R}^\star$, and
the $0$-cochain $k$ over ${\mathbb R}$.
In addition to the above list, the amount of material exchanged between two rooms is also proportional to the length of the current time interval $|t|$. Then our PDE takes the form: $$d_tUq^{-1}=(kU_x)_x|t|,$$ with both sides $(n,0)$-cochains. Consider the first derivative with respect to time: $$U_{t}:=U'=\star d_t U=\tfrac{1}{|t|}d_tU.$$
The heat equation is: $$U_t q^{-1}= (kU_x)_x.$$
The abbreviated version is below. $$\begin{array}{|c|} \hline \\ \quad U_t = (kU_x)_x \quad \\ \\ \hline \end{array}$$
Second, it's the lengths of the $1$-cells dual to the endpoints of $a=AB$: $$\frac{1}{|A^\star|},\frac{1}{|B^\star|}.$$ The denominators are the lengths of the pipes.
Conclusion: The amount of heat exchanged by a primal $1$-cell $a$ with its neighbor $a'$ is
directly proportional to the difference of density in $a$ and $a'$, and
inversely proportional to the length of the pipe that leaves $a$ for $a'$.
To confirm these ideas, we run a spreadsheet simulation below:
With a single initial spike in the middle, we see that the amounts of material in the smaller cells on the right: $$|a_5|=|a_6|=|a_7|=|a_8|=|a_9|=1,$$ quickly become uniform, while the larger ones on the left: $$|a_1|=|a_2|=|a_3|=|a_4|=10,$$ develop slower.
Exercise. Find the speed of propagation of the material in a uniform grid as a function of the length of the cell.
Exercise. Suppose we have two rods made of two different kinds of metal soldered together side by side. The cells will expand at two different rates when heated. Model the change of its geometry and illustrate with a spreadsheet. Hint: Assign a thickness to both.
The PDE of wave propagation
Previously, we studied the motion of an object attached to a wall by a (mass-less) spring. Imagine this time a string of objects connected by springs:
Just as above, we will provide the mathematics to describe the following three parts of the setup:
the topology of the cell complex $L$ of the objects and springs,
the geometry given to that complex such as the lengths of the springs, and
the physics represented by the parameters of the system such as those of the objects and springs.
Let $u(t,x)$ be the function that measures the displacement from the equilibrium of the object associated with position $x$ at time $t$ (we will suppress $t$). It is an algebraic quantity: $$u=u(t,x)\in R.$$ As such, it can represent quantities of any nature that may have nothing to do with a system of objects and springs; it could be an oscillating string:
Here, the particles of the string are vertically displaced while waves propagate horizontally (or we can see the pressure or stress vary in a solid medium producing sound).
First, we consider the spatial variable, $x\in {\bf Z}$. We think of the array -- at rest -- as the standard $1$-dimensional cubical complex $L={\mathbb R}_x$. The complex may be given a geometry: each object has a (possibly variable) distance $h=\Delta x$ to its neighbor and the distance between the centers of the springs has length $\Delta x^\star$. We think of $u$ as a cochain of degree $0$ -- with respect to $x$.
According to Hooke's law, the force exerted by the spring is $$F_{Hooke} = -k df,$$ where $df\in R$ is the displacement of the end of the spring from its equilibrium state and the constant, stiffness, $k\in R$ reflects the physical properties of the spring. If this is the spring that connects locations $x$ and $x+1$, its displacement is the difference of the displacements of the two objects. In other words, we have: $$df=u(x+1) - u(x).$$ Therefore, the force of this spring is $$H_{x,x+1} = k \Big[ u(x+1) - u(x) \Big].$$ Since $k$ can be location-dependent, it is a $1$-cochain over $L$.
Now, let $H$ be the force that acted on the object located at $x$. There are two Hooke's forces acting on this object from the two adjacent springs: $H_{x,x-1}$ and $H_{x,x+1}$. Therefore, we have: $$\begin{array}{lll} H &= H_{x,x-1} &+ H_{x,x+1} \\ &= k \Big[ u(x-1) - u(x) \Big] &+ k\Big[ u(x+1) - u(x) \Big]\\ &=-(kd_xu)[x-1,x]&+(kd_xu)[x,x+1]. \end{array}$$
Next, we investigate what this means in terms of the Hodge duality. These are the duality relations of the cells involved: $$\begin{array}{llll} &[x-1,x]^\star &= \{x-1/2 \},\\ &[x,x+1]^\star &= \{x+1/2 \},\\ &\{x\}^\star &= [x-1/2,x+1/2]. \end{array}$$ Then the computation is straight-forward: $$\begin{array}{llll} H&= (kd_xu)\Big([x+1,x]-[x,x-1]\Big)\\ &= (kd_xu)\Big(\{x+1/2 \}^\star -\{x-1/2 \}^\star\Big)\\ &=(\star kd_xu)\Big( \{x+1/2\}-\{x-1/2\}\Big)\\ &= d_x(\star kd_xu)\Big( [x-1/2,x+1/2] \Big)\\ &=d_x(\star kd_xu)(x^\star)\\ &=\star d_x\star kd_xu(x)\\ &=\star d_xk^\star \star d_xu(x)\\ &=(k^\star u_x)_x(x). \end{array}$$
Second, we consider the temporal variable, $t\in {\bf Z}$. We think of time as the standard $1$-dimensional cubical complex ${\mathbb R}_t$. The complex is also given a geometry. It is natural to assume that the geometry has no curvature, but each increment of time may have a different duration (and, possibly, $\Delta t \ne \Delta t^\star$). We think of $u$ as a cochain of degree $0$ with respect to $t$.
Now suppose that each object has mass $m$. Then, by the Second Newton's Law, the total force is $$F=m \cdot a,$$ where $a$ is the acceleration. It is the second derivative with respect to time, i.e., this $0$-cochain: $$a=u_{tt}:=\star d_t \star d_t u . $$ The mass $m$ is a $0$-cochain too and so is $F$. Note that the stiffness $k$ is also a $0$-cochain with respect to time.
Now, with these two forces being equal, we have derived the wave equation of cochains: $$\begin{array}{|c|} \hline \\ \quad m u_{tt} = (k^\star u_x)_x. \quad \\ \\ \hline \end{array}$$
If $k$ and $m$ are constant cochains (and $R$ is a field), the wave equation takes a familiar shape: $$u_{tt}=\tfrac{k}{m}u_{xx}.$$
Simulating wave propagation with a spreadsheet
Now we will derive the recurrence relations.
First, we assume that the geometry of the time is "flat": $\Delta t =\Delta t^\star$. Then the left-hand side of our equation is $$m d_{tt} u = m\frac{u(x,t+1)-2u(x,t)+u(x,t-1)}{(\Delta t)^2}.$$ For the right-hand side, we can use the original expression: $$ \star d_x \star k d_xu = k \Big[ u(x-1) - u(x) \Big] + k\Big[ u(x+1) - u(x) \Big].$$
Second, we assume that $k$ and $m$ are constant. Then just solve for $u(x,t+1)$: $$\begin{array}{ll} u(x,t+1) &= 2u(x,t) - u(x,t-1) + \alpha\Big(u(x+1,t)-2u(x,t)+u(x-1,t)\Big), \end{array}$$ where $$\alpha :=(\Delta t)^2 \frac{k}{m}.$$
To visualize the formula, we arrange the terms in a table: $$\begin{array}{l|cccccc} & x-1 & x & x+1\\ \hline t+1 & & u(x,t+1) \\ t &=\alpha u(x-1,t) & +2(1-\alpha)u(t,x) & +\alpha u(x+1,t)\\ t-1 & & -u(x,t-1) \end{array}$$ Even though the right-hand side is the same, the table is different from that of the (dual) diffusion equation. The presence of the second derivative with respect to time makes it necessary to look two steps back, not just one. That's why we have two initial conditions.
We suppose, for simplicity, that $\alpha=1$.
Example. Choosing the simplified settings allows us to easily solve the following initial value problem: $$\begin{array}{lll} u_{tt}=u_{xx};\\ u(x,t)=\begin{cases} 1 &\text{ if } t=0,x=1;\\ 0 &\text{ if } t=0,x\ne 1;\\ 1 &\text{ if } t=1,x=2;\\ 0 &\text{ if } t=1,x\ne 2. \end{cases} \end{array}$$ Initially, the wave has a single bump and then the bump moves one step from left to right. The negative values of $x$ are ignored.
Now, setting $k=1$ makes the middle term in the table disappear. Then every new term is computed by taking an alternating sum of the three terms above, as shown below: $$\begin{array}{l|cccccccc} t \backslash x& 1 & 2 & 3 & 4 & 5 & 6 & 7 & ..\\ \hline 0 & 1 & 0 & 0 & 0 & 0 & [0] & 0 & ..\\ 1 & 0 & 1 & 0 & 0 & [0] & 0 & [0] & ..\\ \hline 2 & 0 & 0 & 1 & 0 & 0 & (0) & 0 & ..\\ 3 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & ..\\ 4 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & ..\\ 5 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & ..\\ .. & .. & .. & .. & .. & .. & .. & .. & .. \end{array}$$ We can see that the wave is a single bump running from left to right at speed $1$:
$\square$
Exercise. (a) Solve the two-sided version of the above IVP. (b) Set up and solve an IVP with $2$ bumps, $n$ bumps.
Exercise. Implement a spreadsheet simulation for the case of non-constant $m$. Hint: you will need two buffers.
The recurrence formula for dimension $1$ wave equation and constant $k$ and $m$ is: $$\begin{array}{ll} u(x,t+1) &= 2u(x,t) - u(x,t-1) + \alpha\Big[u(x+1,t)-2u(x,t)+u(x-1,t)\Big], \end{array}$$ with $$\alpha :=(\Delta t)^2 \frac{k}{m}.$$ We put these terms in a table to be implemented as a spreadsheet: $$\begin{array}{l|cccccc} & x-1 & x & x+1\\ \hline t-1 & & -u(x,t-1) \\ t &=\alpha u(x-1,t) & +2(1-\alpha)u(t,x) & +\alpha u(x+1,t)\\ t+1 & & u(x,t+1) \end{array}$$
The simplest way to implement this dynamics with a spreadsheet is to use the first two rows for the initial conditions and then add one row for every moment of time. The Excel formula is: $$\texttt{ = R1C5*R[-1]C[-1] + 2*(1-R1C5)*R[-1]C + R1C5*R[-1]C[1] - R[-2]C}$$ Here cell $\texttt{R1C5}$ contains the value of $\alpha$.
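For readers who prefer code to a spreadsheet, the same update rule can be written as a short Python sketch (an illustration only, assuming constant \(k\) and \(m\), \(\alpha=1\), and edge cells simply held at zero; the grid size and number of steps are arbitrary choices, not part of the original page):

```python
import numpy as np

N, steps, alpha = 50, 30, 1.0              # alpha = (dt)^2 k/m
u_prev = np.zeros(N)                        # row t-1
u_curr = np.zeros(N)                        # row t
u_prev[1], u_curr[2] = 1.0, 1.0             # the single-bump IVP, bump moving right

for _ in range(steps):
    u_next = np.zeros(N)                    # boundary cells stay at zero
    u_next[1:-1] = (2 * (1 - alpha) * u_curr[1:-1]
                    + alpha * (u_curr[2:] + u_curr[:-2])
                    - u_prev[1:-1])         # same formula as the spreadsheet cell
    u_prev, u_curr = u_curr, u_next

print(np.argmax(u_curr))                    # the bump has moved one cell right per step: 2 + steps
```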
Example. The simplest propagation pattern is given by $\alpha=1$. Below we show the propagation of a single bump, a two-cell bump, and two bumps:
Exercise. Modify the spreadsheet to introduce walls into the picture:
Exercise. Modify the spreadsheet to accommodate non-constant data by adding the following, consecutively: (a) the stiffness $k$ (as shown below), (b) the masses $m$, (c) the time intervals $|\Delta t|$.
Retrieved from "https://calculus123.com/index.php?title=Elementary_PDEs&oldid=1005"
Eur. Phys. J. C 2, 359-364
An inconsistency in the simulation of Bose-Einstein correlations
M. Martin1 - H. Kalechofsky1 - P. Foka1 - U.A. Wiedemann2
1 University of Geneva, CH-1211 Geneva 4, Switzerland
2 Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany
Received: 11 April 1997 / Revised version: 9 June 1997
We show that the formalism commonly used to implement Bose-Einstein correlations in Monte-Carlo simulations can lead to values of the two-particle correlator significantly smaller than unity, in the case of sources with strong position-momentum correlations. This is more pronounced when the phase space of the emitted particles is strongly reduced by experimental acceptance or kinematic analysis selections. It is inconsistent with general principles from the coherent state formalism according to which the Bose-Einstein correlator is larger than unity. This inconsistency seems to be rooted in the fact that quantum mechanical localization properties are not taken into account properly.
Copyright Springer-Verlag
Determinants on an efficient cellulase recycling process for the production of bioethanol from recycled paper sludge under high solid loadings
Daniel Gomes, Miguel Gama & Lucília Domingues (ORCID: orcid.org/0000-0003-1089-7627)
In spite of the continuous efforts and investments in the last decades, lignocellulosic ethanol is still not economically competitive with fossil fuels. Optimization is still required in different parts of the process. Namely, the cost effective usage of enzymes has been pursued by different strategies, one of them being recycling.
Cellulase recycling was analyzed for recycled paper sludge (RPS) conversion into bioethanol under intensified conditions. Different cocktails were studied regarding thermostability, hydrolysis efficiency, distribution in the multiphasic system and recovery from the solid. Celluclast showed inferior stability at higher temperatures (45–55 °C); nevertheless, its performance at moderate temperatures (40 °C) was slightly superior to that of the other cocktails (ACCELLERASE®1500 and Cellic®CTec2). Celluclast distribution in the solid–liquid medium was also more favorable, enabling the recovery of 88% of the final activity at the end of the process. A central composite design was used to study the influence of solid concentration and enzyme dosage on RPS conversion by Celluclast. Solids concentration showed a significant positive effect on glucose production, with no major limitations found from utilizing high amounts of solids under the studied conditions. Increasing the enzyme loading from 20 to 30 FPU/g cellulose had no significant effect on sugar production, suggesting that 22% solids and 20 FPU/g cellulose are the best operational conditions towards an intensified process. Applying these, a system of multiple rounds of hydrolysis with enzyme recycling was implemented, allowing steady levels of enzyme activity to be maintained with only 50% of the enzyme added at each recycling stage. Additionally, interesting levels of solid conversion (70–81%) were also achieved, leading to considerable improvements in glucose and ethanol production compared with the reports available so far (3.4- and 3.8-fold, respectively).
Enzyme recycling viability depends on enzyme distribution between the solid and liquid phases at the end of hydrolysis, as well as on enzyme thermostability. Both are critical features to consider for a judicious choice of enzyme cocktail. This work demonstrates that enzyme recycling in intensified biomass degradation can be achieved through simple means. The process is possibly much more effective at larger scale; hence, novel enzyme formulations favoring this possibility should be developed for industrial use.
Over the last decades, lignocellulosic ethanol assumed a major role on the definitive affirmation of biofuels in the new global energy picture. Relying on cheaper raw materials, such as agro-forestry wastes, it can represent an important boost for the economy of small and local communities [1]. Additionally, it may also encompass the utilization of industrial/municipal wastes, enabling some value recovery from a negative-cost material and a reduction on its environmental impact.
Despite the notorious progresses made, the development of suitable hydrolytic enzymes still faces challenges, such as the high cost and sensitivity to process conditions.
Distinct estimates for the cost of cellulases have been put forward by different studies. According to Klein-Marcuschamer et al. [2], the cellulase cost in ethanol production is approximately $ 0.68 per gallon, close to the $ 0.5 per gallon suggested by Novozymes [3]. Aden and Foust [4], however, reported a value around $ 0.1 per gallon, similar to the $ 0.3 reported by Lynd et al. [5] and the $ 0.32 reported by Dutta et al. [6]. Even though important reductions in production cost have been achieved, driven by intense research from both industry and academia, some authors have already admitted that these strategies will not allow much further reduction. Independently of the current cost of enzymes, it is widely recognized as a critical determinant of cellulosic ethanol competitiveness.
A reduction of cellulase cost has been intensively pursued through different strategies, one of them being the reutilization of enzymes [7]. This has been achieved in distinct ways: recovering enzymes by ultrafiltration [8,9,10,11]; re-adsorption of free enzymes onto fresh solid [12,13,14,15,16]; and, finally, partial recycling of the whole final medium and, consequently, of the enzymes [17]. While less complex, the two latter options present limitations that can severely hamper an efficient recovery process. Re-adsorption onto fresh solid requires that a significant fraction of the enzymes adsorbs efficiently during the solid separation process. Also, enzymes with low cellulose-binding affinity, such as β-glucosidase, would need to be supplemented [17,18,19]. On the other hand, partial/total whole-medium (solid and liquid) recycling will always be restricted by lignin build-up constraints and the consequent increase of non-productive enzyme binding [20]. As an alternative, ultrafiltration can allow an efficient separation of enzymes that can then be directly applied in a new hydrolysis process. In addition to being potentially more expensive, this approach requires the enzymes to be freely available in the liquid phase, i.e., they should have low affinity towards the final solid residue. Hence, a critical role is attributed to the composition and structure of the raw material but also to the selected cellulases. Both have been shown to significantly affect the specific distribution of free (soluble) and solid-bound (adsorbed) enzymes as well as the effectiveness of their recovery [7, 9]. Enzymes adsorbed to the solid can still be recovered by pH switch [14, 21, 22] or using different chemicals [15, 23]. Therefore, it seems clear that the substrate–enzyme pair will determine the most suitable recycling strategy for each case.
In the scope of a more economic process, intensification has also been pursued from multiple angles, namely through an increase in solid loadings [24, 25] or through an optimized integration of hydrolysis and fermentation [26,27,28]. For high-water-retention materials, such as RPS (recycled paper sludge), converting high solid loadings represents, however, a serious challenge, as enzymes have reduced mobility due to the smaller amount of free liquid in suspension. In fact, Marques et al. [29] reported 17.9% RPS as the maximum solid concentration that enabled hydrolysis. Considering the moderate levels of cellulose and hemicellulose in this material, maximizing the sugar concentration in the final hydrolysate is critical for a sustainable process. On the other hand, this should also be taken into account when selecting and designing a cellulase recycling strategy. High solid loadings and/or materials with a high lignin content could be a serious challenge, particularly when the solid is recycled.
Here we perform a structured and sequential study on the implementation of cellulase recycling in the process of bioethanol production from recycled paper sludge under high solid loadings. The performance of different cellulase cocktails is addressed in terms of hydrolytic performance, stability and final enzyme recovery. Aiming at process intensification, the effect of higher amounts of solid and enzyme on the hydrolysis efficiency is studied, to find the best operational conditions. Those were then considered on the implementation of a system of multiple rounds with cellulase recycling where the levels of enzyme activity and solid conversion were evaluated.
Enzymes, substrate and microorganisms
Enzymatic hydrolysis assays were conducted separately with different cellulase cocktails: Celluclast 1.5 L (from Novozymes A/S); ACCELLERASE® 1500 (from DuPont); Cellic® CTec2 (from Novozymes A/S). FPase activity of these preparations was determined to be 60, 40 and 120 FPU/mL, respectively. Also, pNPG β-glucosidase activities were determined as 42, 499 and 3609 U/g, respectively. The protein content assessed by Bradford assay (using BSA as standard) was 30, 20 and 58 mg/g, respectively.
Due to the low level of β-glucosidase activity found on Celluclast, this cocktail was always supplemented with the β-glucosidase preparation Novozyme 188 (from Novozymes A/S) on a β-glucosidase/FPase ratio of 3.
Recycled paper sludge (RPS) was kindly provided by RENOVA (Torres Novas, Portugal) and refers to the residue obtained from the wastewater treatment of paper recycling effluents generated by this company. Due to its high carbonate content, which results on an alkaline solid with a reduced holocellulose fraction, prior to its utilization RPS was treated with hydrochloric acid 37% and then washed, first with water and then with buffer (0.1 M acetic acid/sodium acetate) [8]. This resulted on a neutralized RPS (nRPS), which was used in the current work, with an increased holocellulose fraction: 27.1% cellulose, 7.3% xylan and 65.7% acid-insoluble solid.
Fermentations were conducted with Saccharomyces cerevisiae CA11, a strain which was recently reported to have a good fermentation performance at high temperatures [30, 31].
Thermostability assays
To assess which cellulase mixture is more stable towards thermal deactivation, the efficiency of nRPS (carbonates-neutralized RPS) solid conversion was quantified after enzymes exposure to increasing periods of incubation at different temperatures (45, 50 and 55 °C). Then, after the pre-incubation period, nRPS hydrolysis for 18 h, with 5% (w/v) solids at 50 °C, was performed to evaluate the remaining activity.
Comparative hydrolysis efficiency and enzyme activity phase distribution of different cellulase mixtures
To enable a direct comparison of the performance of the three cellulase mixtures, their profiles of glucose production were studied using two distinct solid concentrations [10 and 18% (w/v)]. For that purpose, the solid suspension was mixed with a volume of enzyme equivalent to 20 FPU/g cellulose in 0.1 M sodium acetate/acetic acid buffer (pH 4.8) and incubated at 40 °C for 96 h.
To evaluate activity distribution of the three cellulase mixtures in the multiphasic system, Cel7A (major cellulase component of Trichoderma reesei cocktails) levels were quantified in both the solid and liquid fractions, both after hydrolysis and alkaline washing [21].
Effect of solid concentration and enzyme loading on the efficiency of nRPS hydrolysis
The effect of both solid concentration and enzyme loading on the efficiency of nRPS hydrolysis was studied by conducting a central composite inscribed (CCI) design. Each factor was tested at five levels, corresponding to the nominal values of − 1, − 0.7, 0, + 0.7 and + 1. Solid concentration was tested in the range of 14–22% (w/v), defined according to preliminary tests on the mixing efficiency as a function of nRPS consistency. Enzyme loading was set to the range of 20–30 FPU/g cellulose. The lower level is within the usual values employed in the literature [7, 8, 32, 33]. The upper level is slightly higher, to evaluate potential improvements in enzyme hydrolysis efficiency. In the context of enzyme recycling, the overall enzyme load is actually reduced, as only a fraction of the initial load is used in the subsequent cycles.
The matrix of the CCI design with both the nominal and the real values is presented in Table 1.
Table 1 CCI design matrix presenting the normalized and the real values for each run
Multiple rounds of hydrolysis with enzyme recycling
Enzymatic hydrolysis in the context of cellulase recycling was conducted similar to the single-round experiments. For the first round, the sterilized solid suspension [22% (w/v)] was mixed with 20 FPU/g cellulose of Celluclast (complemented with β-glucosidase) and incubated for 120 h (40 °C; 200 rpm). Afterwards, this mixture was inoculated with 8 g/L (fresh biomass) CA11 yeast cells and incubated for 24 h at 35 °C.
At the end of the round, final broth was centrifuged (9000 rpm for 20 min) to separate fractions. Supernatant, containing free enzymes (in the liquid fraction), was filtered through a 0.22-μm polyethersulfone (PES) filter to remove impurities and stored (4 °C) until further use. The solid was subjected to an alkaline washing, as previously described by Gomes et al. [8]. The elution liquid, containing the desorbed enzymes, was filtered to remove major impurities and stored until use. Prior to its storage, the pH of this liquid was adjusted to the common operational pH (4.8) through the addition of 1 M acetic acid/sodium acetate buffer (pH 4.8). Final solid was repeatedly washed, oven dried (at 45 °C) until an estimated water content below 10% was reached, and finally stored until final analysis.
For cellulase recycling, both enzyme-containing fractions (stored at 4 °C) were mixed and concentrated using a tangential ultrafiltration system Pellicon XL membrane with a 10 kDa cut-off PES membrane (Millipore, Billerica, MA, USA). The two fractions were initially concentrated by diafiltration, and at the end, adjusted to a final fixed volume. For a new round of hydrolysis, the freshly sterilized solid was resuspended on the enzyme suspension obtained from the previous ultrafiltration procedure, filter-sterilized with 0.2-μm PES syringe filters. For each recycling stage, a portion of fresh enzyme was added to this suspension, corresponding to 50% of the original enzyme dosage (maintaining the β-glucosidase/FPase activity ratio). The new solid suspension was then subjected to the same conditions of hydrolysis and fermentation, as previously described. This procedure was applied over a total of four rounds of hydrolysis and fermentation as schematically described on Fig. 1.
Schematic representation for the system of multiple rounds of hydrolysis (and fermentation) with cellulase recycling
Sugars and ethanol quantification
After thawing, aliquots from hydrolysis and fermentation experiments were diluted, filtered and then analyzed by high-performance liquid chromatography (HPLC) for glucose and ethanol quantification. Samples were eluted on a Varian MetaCarb 87H column at 60 °C, with 0.005 M H2SO4 at a flow rate of 0.7 mL/min, and a refractive index detector.
Measurement of enzymatic activity
Samples collected for quantification of enzymatic activity were stored at 4 °C until further utilization. Cel7A, Cel7B and β-glucosidase activities were quantified by fluorescence spectroscopy with slight differences according to the specific cellulolytic component, following a modified version of the protocol previously published by Bailey and Tähtiharju [34]. For Cel7A, Cel7B and β-glucosidase quantification, 400 μL of a freshly prepared solution of 1 mM 4-methylumbelliferyl-β-D-cellobioside (MUC, Sigma-Aldrich, M6018), 4-methylumbelliferyl-β-D-lactopyranoside (MULac, Sigma-Aldrich, M2405) and 4-methylumbelliferyl-β-D-glucopyranoside (MUGlc, Sigma-Aldrich, M3633), respectively, were mixed with 50 μL of enzyme sample (properly diluted in buffer considering the linearity range of the method) and then incubated for 15 min at 50 °C. After that, the reaction was stopped by the addition of 550 μL of 1 M Na2CO3 and measured on a black bottom 96-well UV fluorescence microplate using a Biotech Synergy HT Elisa plate reader. For Cel7B quantification, the addition of 50 μL of a mixture containing 1 M glucose and 50 mM cellobiose is still required to inhibit Cel7A and β-glucosidase activities. Cel7A, Cel7B and β-glucosidase act on their specific substrates releasing free 4-methylumbelliferone (MU, Sigma-Aldrich, M1508), which results in a change of the fluorescence spectrum that is quantified at excitation and emission wavelengths of 360 and 460 nm, respectively.
Determination of solid composition
The solid main composition, either corresponding to the initial material or after enzymatic hydrolysis, was determined by quantitative acid hydrolysis [35]. After oven drying (at 45 °C) to a water content inferior to 10%, approximately 0.5 g of solid was mixed with 5 mL of 72% (w/v) H2SO4 for 1 h at 30 °C. Afterwards, this mixture was subjected to a dilute hydrolysis by raising the volume with water to a total mass of 148.67 g and subsequently autoclaved for 1 h at 121 °C. Next, the solid residue was recovered by filtration (cresol Gooch no. 3) and dried (at 105 °C) until constant weight. Different sugar monomers formed during hydrolysis were quantified by HPLC analysis of the liquid fraction.
Estimation of hydrolysis and fermentation yields
For an overall assessment of hydrolysis and fermentation processes, glucose and ethanol production yields (GY120 and EY23, respectively) were estimated according to the following equations:
$$\text{GY}_{120}\,(\%) = \frac{[\text{Glucose}]_{120} + 1.053\,[\text{Cellobiose}]_{120}}{1.111\,[\text{Solids}]_{\text{i}} \times F_{\text{cel}}} \times 100$$
$$\text{EY}_{23}\,(\%) = \frac{[\text{Ethanol}]_{23}}{0.51\left(1.111\,[\text{Solids}]_{\text{i}} \times F_{\text{cel}} \times 0.963\right)} \times 100$$
where [Glucose]120 and [Cellobiose]120 are the concentrations of glucose and cellobiose, respectively, at 120 h of hydrolysis, and [Ethanol]23 is the ethanol concentration at 23 h of fermentation. [Solids]i refers to the initial concentration of dry solid and Fcel is the fraction of cellulose on a dry solid basis. The factor 1.111 is the glucan-to-glucose conversion ratio, 0.51 is the maximum theoretical conversion of glucose into ethanol and 0.963 is the dilution factor imposed by cell inoculation.
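As a minimal illustration (not part of the original study; all numerical inputs are hypothetical placeholders rather than values measured in this work), the Python sketch below shows how these two yield equations can be evaluated.

def glucose_yield_120(glucose_120, cellobiose_120, solids_i, f_cel):
    # GY120 (%): glucose_120 and cellobiose_120 in g/L at 120 h of hydrolysis,
    # solids_i = initial dry-solid concentration (g/L), f_cel = cellulose fraction (0-1).
    return (glucose_120 + 1.053 * cellobiose_120) / (1.111 * solids_i * f_cel) * 100

def ethanol_yield_23(ethanol_23, solids_i, f_cel, dilution=0.963):
    # EY23 (%): 0.51 is the theoretical glucose-to-ethanol conversion and
    # `dilution` is the volume correction imposed by cell inoculation.
    return ethanol_23 / (0.51 * (1.111 * solids_i * f_cel * dilution)) * 100

# Hypothetical example: 220 g/L dry solids with 40% cellulose, 52 g/L glucose and
# 1 g/L cellobiose after 120 h, and 22 g/L ethanol after 23 h of fermentation.
print(round(glucose_yield_120(52.0, 1.0, 220.0, 0.40), 1))  # GY120, %
print(round(ethanol_yield_23(22.0, 220.0, 0.40), 1))        # EY23, %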
In a recent work it was demonstrated that nRPS can be used for bioethanol production and, additionally, that it is suitable for the implementation of a cellulase recycling system [8]. As a proof-of-concept approach, however, these tests were conducted under non-intensified conditions [5% (w/v) solids; hydrolysis temperature of 35 °C].
Here we have addressed two important factors targeting the scalability and the economic feasibility of the process, both in terms of nRPS solid conversion and of the integration of an enzyme recycling system: the selection of the cellulase cocktail and the intensification of solid conversion.
Thermostability of different cellulase mixtures
Considering that optimal enzymatic hydrolysis occurs around 50 °C, increased thermostabilities represent an important feature in the context of enzyme reutilization. Figure 2 presents the variation of nRPS solid conversion after incubation of the cellulase suspension at 45, 50 and 55 °C, for different time periods.
Variation of solid conversion by different cellulase mixtures after increasing periods of pre-incubation at different temperatures
As expected, all cocktails presented an increasing loss of hydrolysis capacity with cumulative periods of incubation, this behavior being more pronounced at higher temperatures. As an example, after 72 h of incubation at 45 °C, the conversion degree still remained above 78% of control levels for all cocktails. On the other hand, at 55 °C the conversion dropped to 59, 74 and 80% for Celluclast 1.5 L (Celluclast), ACCELLERASE® 1500 (Accellerase) and Cellic® CTec2 (Cellic), respectively. Differences in thermal deactivation between cocktails were minor for the shortest periods of incubation, except at 55 °C, where some differences were already found at an early stage. For incubation periods of 48 h or longer, significant differences became visible. The hydrolysis efficiency of Celluclast was significantly more affected than that of Accellerase or Cellic. It is worth noting, however, that the absolute values of glucose production were 4–21% higher for Celluclast, as described in more detail in the next section.
Hydrolysis efficiency of different cellulase cocktails
Thermal deactivation assays were not enough to clearly identify the most suitable cellulase cocktail to be employed at moderate-high temperatures. Although Celluclast presented a lower resistance to thermal denaturation, it enabled higher values of solid conversion. Therefore, and considering the marked reduction of activity observed in the range of 45–55 °C, which may be especially critical in a cellulase recycling context, the profiles of glucose production obtained with the three cocktails were evaluated at 40 °C for different solid concentrations (Fig. 3). Thermal denaturation tests conducted with Celluclast at 40 °C over a week-long experiment provided indications of no activity loss under these conditions.
Profiles of glucose production using distinct enzyme mixtures under different solid concentrations, at 40 °C
For a solid concentration of 10% there was no significant difference in solid conversion between cocktails, although Accellerase presented a slightly inferior performance during the first 48–72 h. On the other hand, for 18% solids, Celluclast enabled on average 15% higher glucose production over the entire hydrolysis period compared to the other cocktails. These results suggest that at moderate temperatures (40 °C), where thermal denaturation is low or absent, neither Accellerase nor Cellic could surpass Celluclast. It is worth mentioning that, even with Novozyme 188 supplementation, the β-glucosidase levels in the Celluclast assays were considerably lower than for the other cocktails: 4.11 U/mL for Celluclast versus 13.53 and 37.41 U/mL for Accellerase and Cellic, respectively. This seems to confirm that under this set of conditions (enzyme and solid loadings) the β-glucosidase levels were not limiting the hydrolysis, as suggested by the absence of cellobiose accumulation (data not shown); hence they do not represent a relevant factor for the different performances.
Under these particular conditions, Celluclast seemed to present a slight advantage over the other cocktails regarding hydrolysis performance; nevertheless, the enzyme distribution between phases still needed to be assessed.
Phase activity distribution and efficiency of alkaline washing
The final distribution of activity between the solid and liquid fractions is critical for enzyme recycling and process complexity. Even though lignin represents nearly 20% of the RPS composition [29] and is commonly reported as an efficient enzyme adsorbent (through non-productive binding), it was recently observed that 70% of the final Cel7A activity is found in the liquid fraction after hydrolysis of RPS with Celluclast at 5% (w/v) solids [8]. This represents a good scenario for enzyme reutilization, as a significant part of the activity is easily recovered.
As reported by other authors [7, 33], different cellulase mixtures may display diverse solid–liquid distributions. To evaluate the behavior of the different cellulase mixtures in this regard, Cel7A levels were quantified in both the liquid and solid fractions after hydrolysis and after alkaline washing, used to extract the adsorbed enzyme (Table 2).
Table 2 Final distribution of Cel7A activity after hydrolysis of nRPS and alkaline washing using different cellulase mixtures
First, it is worth noting that significant differences were observed in the initial levels of Cel7A for the different cocktails, even though the same FPU activity was applied in every case. This suggests differences in the composition of each cocktail and in its synergistic mechanisms of enzymatic hydrolysis. Taking into account the values of Cel7A activity, one can observe that Celluclast and Accellerase distribute similarly between fractions, with 61.3 and 62.9% of the total final activity being found in the liquid fraction, respectively. A significant part still remains adsorbed to the final solid, hampering a more efficient enzyme reutilization. For the Cellic mixture, the enzyme levels in the solid fraction were even higher, close to 60% of the final activity. Similarly, different efficiencies were also attained for alkaline washing: 60, 53 and 41% of the enzymes were recovered for Celluclast, Accellerase and Cellic, respectively. As the performances of the different cocktails (and consequently the final solid composition) did not vary considerably, no major differences in enzyme fractionation are expected to arise specifically from distinct binding affinities to cellulose and lignin [9]. On the other hand, these results suggest that different cellulase preparations can, in fact, present very distinct enzyme fractionation profiles for the same material, possibly due to different binding affinities associated with enzymes from different sources. A similar difference was observed by Rodrigues et al. [33] for Celluclast and Cellic binding during the hydrolysis of wheat straw: 26–28% of the original Cel7A activity was found soluble in the final liquid fraction for Celluclast, whereas the final soluble Cel7A for Cellic was only around 6%. Also, a recent study by Strobel et al. [36] demonstrated that specific mutations in the T. reesei Cel7A CBM can cause significant differences in the binding affinity to both cellulose and lignin, confirming the determinant role of enzyme properties in its binding mechanism to distinct fractions of the solid.
As can be seen from Table 2, it was possible to achieve an overall recovery of final activity of around 60% for Cellic, 81% for Accellerase and 88% for Celluclast. Thus, the two latter cocktails may be recycled to a larger extent, potentially enabling important savings.
Effect of nRPS concentration and cellulase loading
Even though nRPS is a residue that currently has a negative price associated with disposal costs, maximization of the solid concentration should still be pursued, as more concentrated hydrolysates allow higher productivities and lower process costs (e.g., distillation). Preliminary studies indicated that a maximum solid consistency of 22% (w/v) can be used while still enabling the "liquefaction" of the fibers through enzyme action. For higher amounts of solid, a suspension of very high viscosity is obtained, which the enzymes are unable to process.
Considering the results from previous sections—thermostability, hydrolysis efficiency and distribution in the heterogeneous system (recyclability)—Celluclast was chosen for a CCI design studying the influence of enzyme loading and solid concentration on the nRPS hydrolysis (Table 3).
Table 3 Experimental values obtained from a CCI design testing different levels of solid concentration and enzyme loadings
From the results of the CCI design, four distinct response variables were fitted to the experimental data through a second-order polynomial model: glucose concentration (Glu120) and production yield (GY120) after 120 h of hydrolysis, and ethanol concentration (Eth23) and production yield (EY23) after 23 h of fermentation. The models representing the response variables as a function of the normalized values of solid concentration (X1) and enzyme loading (X2) are presented in Eqs. 1–4.
$$\text{Glu}_{120} = 45.955 + 9.560X_1 + 1.891X_2 + 0.515X_1^2 - 0.584X_2^2 + 0.573X_1X_2$$
$$\text{GY}_{120} = 87.025 - 1.322X_1 + 3.557X_2 + 1.153X_1^2 - 1.017X_2^2 + 0.165X_1X_2$$
$$\text{Eth}_{23} = 22.212 + 5.285X_1 + 1.391X_2 + 0.512X_1^2 - 0.087X_2^2 + 0.012X_1X_2$$
$$\text{EY}_{23} = 87.762 + 0.975X_1 + 5.533X_2 + 1.551X_1^2 + 0.414X_2^2 - 1.207X_1X_2$$
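As a minimal sketch (an illustration only: the coefficients are those of Eqs. 1–4, while the coded test points are arbitrary examples and not design points reported in this study), the Python snippet below evaluates the fitted second-order models at chosen normalized levels of X1 and X2.

# Fitted second-order models from Eqs. 1-4: coefficients ordered as
# (intercept, X1, X2, X1^2, X2^2, X1*X2).
MODELS = {
    "Glu120": (45.955, 9.560, 1.891, 0.515, -0.584, 0.573),
    "GY120":  (87.025, -1.322, 3.557, 1.153, -1.017, 0.165),
    "Eth23":  (22.212, 5.285, 1.391, 0.512, -0.087, 0.012),
    "EY23":   (87.762, 0.975, 5.533, 1.551, 0.414, -1.207),
}

def predict(name, x1, x2):
    # Evaluate b0 + b1*x1 + b2*x2 + b11*x1**2 + b22*x2**2 + b12*x1*x2
    b0, b1, b2, b11, b22, b12 = MODELS[name]
    return b0 + b1 * x1 + b2 * x2 + b11 * x1 ** 2 + b22 * x2 ** 2 + b12 * x1 * x2

# Example: centre point (0, 0) versus the highest coded solid/enzyme levels (+1, +1).
for name in MODELS:
    print(name, round(predict(name, 0, 0), 1), round(predict(name, 1, 1), 1))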
From the ANOVA analysis, it was verified that these models adequately represent the values of Glu120, GY120, Eth23 and EY23, with estimated determination coefficients (R2) of 0.989, 0.824, 0.989 and 0.877, respectively. The F value was higher than the tabular F (3.33) for all models, indicating that they are statistically significant at a 95% confidence level. Additionally, the non-significant values of lack of fit also suggest an adequate fit of the different models (Table 4). For each model, the corresponding response surface was constructed to better visualize the influence of each variable on the different responses (Fig. 4).
Table 4 Regression indicators and analysis of variance (ANOVA) for the different models
Response surfaces for Glu120 (a), Eth23 (b), GY120 (c), and EY23 (d) as a function of solid concentration (X1) and enzyme loading (X2)
Considering first the concentration of solids (X1), as expected, a significant positive (linear) effect was observed on both glucose and ethanol concentrations (p values of 4.8 × 10−11 and 4.9 × 10−11, respectively), justified by the increased availability of cellulose and fermentable sugars, respectively. Furthermore, there was no evidence of critical limitations caused by the high amounts of solids, namely mass-transfer limitations or end-product inhibition. This could also be observed in the model of glucose production yield (Fig. 4c), where no clear negative effect is visible; indeed, the glucose yield varies around 84–91%, with no clear trend associated with solid content. Very high solid concentrations are reported to have a significant negative impact on glucose yield, an effect not observed in this case, since the range of solid concentrations used was selected in exploratory assays. Also, it is worth noting that the hydrolysis was conducted for 120 h, which is the time required for satisfactory yields to be reached under the highest solid loadings, therefore attenuating time-dependent limitations. Similarly, the use of this specific range of enzyme loadings may also have contributed to attenuating limitations resulting from increased solid loadings, such as non-productive binding of enzymes to the solid. These results suggest that further intensification may still be achievable at industrial scale, using better mixing conditions than those available at the lab scale in this study.
Finally, it should be highlighted that, as the solid has a negative cost in this case, productivity is more important than the production yield and is equally critical for lowering operational costs. We can therefore consider 22% solids the most adequate option under the available lab-scale setup, as it leads to a satisfactory glucose yield while enabling the maximum glucose concentration.
Turning now to the influence of enzyme loading, although a slight increase is visible for all response variables, it is not pronounced. Additionally, it seems to have a similar impact over the entire range of solid concentrations, whereas a stronger effect would be expected at the highest consistency, where enzyme limitations would be more likely. Thus, it seems that for this range of solid and enzyme loadings there is indeed no significant limitation of enzyme availability. In a previous work by our group, this specific cellulase cocktail was verified to be particularly efficient in the hydrolysis of nRPS [8].
Maximum values of glucose concentration were achieved at the highest level of enzyme dosage, as expected (Table 3). However, when the enzyme dosage was increased by 50% (from 20 to 30 FPU/gcellulose) at the highest solid concentration, the glucose concentration only increased by approximately 12% (from 52.4 to 58.9 g/L). Considering the high cost of enzymes and the negative cost of the substrate, a lower enzyme dosage may be a sensible choice in this scenario.
nRPS hydrolysis with cellulase recycling under high solid loadings
Taking into account the results from the CCI design, we envisaged the conversion of nRPS to high ethanol concentrations while enabling cellulase recycling. Hence, a system of multiple rounds of hydrolysis was implemented with Celluclast, applying the pre-determined conditions of solid and enzyme loadings.
From the analysis of Fig. 5, we may observe that the initial levels of the three cellulases analyzed (Cel7A, Cel7B and β-glucosidase) were similar over the four rounds of hydrolysis and fermentation, an outcome achieved using a 50% supplementation with fresh enzymes in each round. In fact, within each round there is a considerable decrease in the activity levels, with average reductions of 33.4, 32.4 and 16.1% observed for Cel7A, Cel7B and β-glucosidase, respectively. The lower reduction observed for β-glucosidase may be attributed to its well-known lack of a cellulose-binding domain. Also, the fact that β-glucosidase may have been present in excess allows for a smaller relative variation. Compared with a previous work, the activity losses in this case were considerably higher than the average decreases of 14.3, 17.6 and 7.0% obtained for Cel7A, Cel7B and β-glucosidase, respectively [8]. Considering that there was no thermal deactivation, it is possible that the higher ethanol concentrations achieved in this case caused some loss of enzyme activity [37], since the intensification strategy followed in the present study allowed a 3.8-fold increase in ethanol concentration.
Variation of Cel7A, Cel7B and β-glucosidase activities over four rounds of nRPS hydrolysis (120 h hydrolysis [40 °C] → 24 h SSF [35 °C]) with cellulase recycling. 20 FPU/gcellulose were initially employed, with a subsequent supplementation of 50% fresh enzymes at each recycling stage (Rxi and Rxf refer to the initial and final activity of round x, respectively)
Regarding the enzyme distribution at the end of each cycle, the results demonstrate that a considerable fraction of the activity remained solid-bound: an average of 30.4, 32.6 and 30.3% for Cel7A, Cel7B and β-glucosidase, respectively. This result highlights the need to recover both fractions, despite the increase in process complexity.
Given the steady levels of initial activity of the different cellulases along the cycles, one could expect the applied cellulase recycling strategy to achieve equal levels of solid conversion throughout the process. Nevertheless, the hydrolysis efficiency showed an average decrease of 12.5% in the rounds with recycled enzyme compared to the initial one (Table 5). A major part of this reduction may arise from the different sterilization process used. While the first nRPS batch was sterilized after being suspended in the liquid (approx. 22% solids), the following ones were processed at high consistency (approx. 95% solids), which decreases solid conversion by around 14%. This was required to enable a higher volume of concentrate after ultrafiltration, since high final enzyme concentrations were previously shown to cause higher losses during this process. On an industrial scale, however, the use of different sterilization processes or UF devices with fewer limitations may enable this reduction to be overcome to some degree. In addition, this decrease may equally be attributed to the fact that in rounds 2, 3 and 4, 50% of the enzymes had already undergone at least one cycle of hydrolysis and fermentation, which can cause some reduction in their efficiency.
Table 5 Multiple rounds of nRPS hydrolysis with cellulase recycling (20 FPU/g cellulose; 50% fresh enzymes)
In spite of this decrease in hydrolysis efficiency, we should highlight that it was still possible to reach important improvements in both glucose and ethanol production compared to the existing literature. Using a similar substrate (although with a slightly higher cellulose content), the maximum ethanol concentration obtained by Marques et al. [29] was 19.6 g/L. Also, Marques et al. [38] were able to achieve nearly 80 g/L of glucose; nevertheless, this was obtained through a fed-batch strategy with multiple pulses of substrate addition rather than the single addition used in the current work. Comparing specifically with a previous work also applying cellulase recycling to RPS conversion [8], increases of 3.4- and 3.8-fold in glucose and ethanol production, respectively, were verified. Even employing a set of much more challenging process conditions, namely a higher temperature of hydrolysis and fermentation and a considerable increase in solid loading, it was still possible to successfully implement the recycling of cellulases, enabling an approximate enzyme saving of 50%, to nearly 10 FPU/gcellulose. It should be noted that when hydrolysis was conducted under the same conditions as the cycles with recycled enzyme, but using only 10 FPU/gcellulose (simulating the estimated enzyme saving), glucose production decreased by approximately 35% (from 41.6 to 27.0 g/L).
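As a rough back-of-the-envelope check of the quoted enzyme saving (this is our reading of the dosing scheme, not a calculation reported in the study), the Python sketch below computes the average fresh-enzyme dosage per round when the first round uses 20 FPU/gcellulose and each subsequent round is supplemented with 50% of that amount; the average tends towards 10 FPU/gcellulose as the number of rounds grows.

def average_fresh_dosage(initial_fpu=20.0, supplement_fraction=0.5, rounds=4):
    # Fresh enzyme added per round: full dose in round 1, then a 50% top-up each round.
    doses = [initial_fpu] + [initial_fpu * supplement_fraction] * (rounds - 1)
    return sum(doses) / rounds

for n in (4, 10, 50):
    print(n, "rounds:", round(average_fresh_dosage(rounds=n), 2), "FPU/g_cellulose")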
This work provides critical insights from the perspective of a future industrial implementation of enzyme recycling in the specific case of bioethanol production from RPS. It demonstrates that this material can be efficiently converted by different commercial cocktails currently available, even under intensified conditions. It also elucidates the important role of enzyme cocktail selection in determining the final distribution of enzymatic activity between phases and its overall recovery after the process, a critical factor for the establishment of a simple recycling strategy. In this regard, Celluclast showed a more favorable scenario than the other cocktails, while also providing a slight advantage in hydrolysis efficiency.
Even under intensified operational conditions, cellulase recycling was successfully implemented for RPS conversion with the addition of only 50% fresh enzymes at each recycling stage, suggesting that process intensification may be combined with enzyme recycling.
ACCELLERASE is a registered trademark of Danisco US Inc. or its affiliated companies.
In providing samples to the authors, DuPont does not endorse the results, conclusions, and/or views expressed herein, and makes no representation and/or warranty concerning the samples, including, but not limited to, the availability of samples for research or commercial purposes, merchantability, fitness for a particular purpose and/or noninfringement of intellectual property rights.
Abban-Mensah I, Vis M, van Sleen P. Socio-economic impacts of a lignocellulosic ethanol refinery in Canada. In: Rutz D, Janssen R, editors. Socio-economic impacts of bioenergy production. New York: Springer; 2014. p. 233–51.
Klein-Marcuschamer D, Oleskowicz-Popiel P, Simmons BA, Blanch HW. The challenge of enzyme cost in the production of lignocellulosic biofuels. Biotechnol Bioeng. 2012;109:1083–9.
http://novozymes.com/en/news/news-archive/Pages/45713.aspx (2017). Accessed 12 Oct 2017.
Aden A, Foust T. Technoeconomic analysis of the dilute sulfuric acid and enzymatic hydrolysis process for the conversion of corn stover to ethanol. Cellulose. 2009;16:535–45.
Lynd LR, Laser MS, Bransby D, Dale BE, Davison B, Hamilton R, Himmel M, Keller M, McMillan JD, Sheehan J, Wyman CE. How biotech can transform biofuels. Nat Biotechnol. 2008;26:169–72.
Dutta A, Dowe N, Ibsen KN, Schell DJ, Aden A. An economic comparison of different fermentation configurations to convert corn stover to ethanol using Z. mobilis and Saccharomyces. Biotechnol Prog. 2010;26:64–72.
Pribowo A, Arantes V, Saddler JN. The adsorption and enzyme activity profiles of specific Trichoderma reesei cellulase/xylanase components when hydrolyzing steam pretreated corn stover. Enzyme Microb Technol. 2012;50:195–203.
Gomes D, Domingues L, Gama M. Valorizing recycled paper sludge by a bioethanol production process with cellulase recycling. Biores Technol. 2016;216:637–44.
Rodrigues AC, Felby C, Gama M. Cellulase stability, adsorption/desorption profiles and recycling during successive cycles of hydrolysis and fermentation of wheat straw. Biores Technol. 2014;156:163–9.
Chen G, Song W, Qi B, Lu J, Wan Y. Recycling cellulase from enzymatic hydrolyzate of acid treated wheat straw by electroultrafiltration. Biores Technol. 2013;144:186–93.
Yang J, Zhang X, Yong Q, Yu S. Three-stage hydrolysis to enhance enzymatic saccharification of steam-exploded corn stover. Biores Technol. 2010;101:4930–5.
Huang R, Guo H, Su R, Qi W, He Z. Enhanced cellulase recovery without β-glucosidase supplementation for cellulosic ethanol production using an engineered strain and surfactant. Biotechnol Bioeng. 2017;114(3):543–51.
Gomes D, Rodrigues AC, Domingues L, Gama M. Cellulase recycling in biorefineries–is it possible? Appl Microbiol Biotechnol. 2015;99(10):4131–43.
Shang Y, Su R, Huang R, Yang Y, Qi W, Li Q, He Z. Recycling cellulases by pH-triggered adsorption-desorption during the enzymatic hydrolysis of lignocellulosic biomass. Appl Microbiol Biotechnol. 2014;98(12):5765–74.
Eckard AD, Muthukumarappan K, Gibbons W. Enhanced bioethanol production from pretreated corn stover via multi-positive effect of casein micelles. Biores Technol. 2013;135:93–102.
Tu M, Saddler JN. Potential enzyme cost reduction with the addition of surfactant during the hydrolysis of pretreated softwood. Appl Biochem Biotechnol. 2010;161:274–87.
Haven MØ, Lindedam J, Jeppesen MD, Elleskov M, Rodrigues AC, Gama M, Jørgensen H, Felby C. Continuous recycling of enzymes during production of lignocellulosic bioethanol in demonstration scale. Appl Energy. 2015;159:188–95.
Tu M, Chandra RP, Saddler JN. Evaluating the distribution of cellulases and the recycling of free cellulases during the hydrolysis of lignocellulosic substrates. Biotechnol Prog. 2007;23:398–406.
Lee D, Yu AHC, Saddler JN. Evaluation of cellulase recycling strategies for the hydrolysis of lignocellulosic substrates. Biotechnol Bioeng. 1995;45:328–36.
Jørgensen H, Pinelo M. Enzyme recycling in lignocellulosic biorefineries. Biofuels Bioprod Bioref. 2017;11:150–67.
Rodrigues AC, Leitão AF, Moreira S, Felby C, Gama M. Recycling of cellulases in lignocellulosic hydrolysates using alkaline elution. Biores Technol. 2012;110:526–33.
Du R, Su R, Li X, Tantai X, Liu Z, Yang J, Qi W, He Z. Controlled adsorption of cellulase onto pretreated corncob by pH adjustment. Cellulose. 2012;19:371–80.
Sipos B, Dienes D, Schleicher Á, Perazzini R, Crestini C, Siika-aho M, Réczey K. Hydrolysis efficiency and enzyme adsorption on steam-pretreated spruce in the presence of poly(ethylene glycol). Enzyme Microb Technol. 2010;47:84–90.
Cunha M, Romaní A, Carvalho M, Domingues L. Boosting bioethanol production from Eucalyptus wood by whey incorporation. Biores Technol. 2018;250:256–64.
Romaní A, Ruiz HA, Teixeira JA, Domingues L. Valorization of Eucalyptus wood by glycerol–organosolv pretreatment within the biorefinery concept: an integrated and intensified approach. Renew Energy. 2016;95:1–9.
Kelbert M, Romaní A, Coelho E, Pereira FB, Teixeira JA, Domingues L. Simultaneous saccharification and fermentation of hydrothermal pretreated lignocellulosic biomass: evaluation of process performance under multiple stress conditions. BioEnergy Res. 2016;9(3):750–62.
Kelbert M, Romaní A, Coelho E, Pereira FB, Teixeira JA, Domingues L. Lignocellulosic bioethanol production with revalorization of low-cost agroindustrial by-products as nutritional supplements. Ind Crops Prod. 2015;64:16–24.
Romaní A, Ruiz HA, Pereira FB, Teixeira JA, Domingues L. Integrated approach for effective bioethanol production using whole slurry from autohydrolyzed Eucalyptus globulus wood at high-solid loadings. Fuel. 2014;135:482–91.
Marques S, Alves L, Roseiro JC, Gírio FM. Conversion of recycled paper sludge to ethanol by SHF and SSF using Pichia stipitis. Biomass Bioenergy. 2008;32:400–6.
Costa CE, Romaní A, Cunha JT, Johansson B, Domingues L. Integrated approach for selecting efficient Saccharomyces cerevisiae for industrial lignocellulosic fermentations: importance of yeast chassis linked to process conditions. Biores Technol. 2017;227:24–34.
Ruiz HA, Silva DP, Ruzene DS, Lima LF, Vicente AA, Teixeira JA. Bioethanol production from hydrothermal pretreated wheat straw by a flocculating Saccharomyces cerevisiae strain-effect of process conditions. Fuel. 2012;95:528–36.
Domínguez E, Romaní A, Domingues L, Garrote G. Evaluation of strategies for second generation bioethanol production from fast growing biomass Paulownia within a biorefinery scheme. Appl Energy. 2017;187:777–89.
Rodrigues AC, Haven MØ, Lindedam J, Felby C, Gama M. Celluclast and Cellic® CTec2: saccharification/fermentation of wheat straw, solid-liquid partition and potential of enzyme recycling by alkaline washing. Enzyme Microb Technol. 2015;79–80:70–7.
Bailey MJ, Tähtiharju J. Efficient cellulase production by Trichoderma reesei in continuous cultivation on lactose medium with a computer-controlled feeding strategy. Appl Microbiol Biotechnol. 2003;62:156–62.
Sluiter A, Hames B, Ruiz R, Scarlata C, Sluiter J, Templeton D, Crocker D. Determination of structural carbohydrates and lignin in biomass. NREL chem Anal Testing Lab Anal Proced. 2008;1617:1–6.
Strobel KL, Pfeiffer KA, Blanch HW, Clark DS. Engineering Cel7A carbohydrate binding module and linker for reduced lignin inhibition. Biotechnol Bioeng. 2015;113:1369–74.
Chen H, Jin S. Effect of ethanol and yeast on cellulase activity and hydrolysis of crystalline cellulose. Enzyme Microb Technol. 2006;39:1430–2.
Marques S, Gírio FM, Santos JAL, Roseiro JC. Pulsed fed-batch strategy towards intensified process for lactic acid production using recycled paper sludge. Biomass Convers Bioref. 2017;7:127–37.
DG participated in the design of experiments, collected the data and drafted the manuscript. MG and LD participated in the design of experiments and helped write the manuscript. All authors read and approved the final manuscript.
The authors acknowledge RENOVA (Portugal) for providing the recycled paper sludge (RPS), Novozymes A/S for providing Celluclast 1.5 L and Cellic® CTec2, and DuPont for providing ACCELLERASE® 1500.
This work had the financial support of the Portuguese Foundation for Science and Technology (FCT) under the scope of the strategic funding of UID/BIO/04469/2013 unit, COMPETE 2020 (POCI-01-0145-FEDER-006684) and the MultiBiorefinery project (POCI-01-0145-FEDER-016403). Furthermore, FCT equally supported the Ph.D. grant to DG (SFRH/BD/88623/2012).
Centre of Biological Engineering, University of Minho, Campus de Gualtar, 4710-057, Braga, Portugal
Daniel Gomes, Miguel Gama & Lucília Domingues
Correspondence to Lucília Domingues.
Gomes, D., Gama, M. & Domingues, L. Determinants on an efficient cellulase recycling process for the production of bioethanol from recycled paper sludge under high solid loadings. Biotechnol Biofuels 11, 111 (2018). https://doi.org/10.1186/s13068-018-1103-2
Recycled paper sludge
Cellulase recycling
Enzyme thermostability
Enzyme activity phase distribution
Cellulosic bioethanol | CommonCrawl |
Articles+ start You searched for: Yes de l ✖ Remove constraint Yes de l Settings Scholarly & peer-reviewed only ✖ Remove constraint Settings: Scholarly & peer-reviewed only Database Arts & Humanities Citation Index ✖ Remove constraint Database: Arts & Humanities Citation Index Database Education Full Text (H.W. Wilson) ✖ Remove constraint Database: Education Full Text (H.W. Wilson) Database Complementary Index ✖ Remove constraint Database: Complementary Index Database Urban Studies Abstracts ✖ Remove constraint Database: Urban Studies Abstracts
Academic Journals83
portuguese4
longitudinal method4
behavior3
biology and life sciences3
descriptive statistics3
diagnosis3
disease risk factors3
endocrine disruptors3
litter (trash)3
logistic regression analysis3
medical cooperation3
medicine and health sciences3
pediatrics3
professions3
research3
risk factors3
social norms3
waste management3
anatomy2
attitude (psychology)2
cancers and neoplasms2
chi-squared test2
chronic diseases2
ckd2
colon cancer2
comparative studies2
conservacion de la energia2
conservation of energy2
data analysis software2
diagnostic analysis2
diagnostic medicine2
electronic health records2
environmental behavior2
environmental responsibility2
environmental sociology2
environmentalism2
estrogenicity2
fish reproduction2
health care2
health care facilities2
homeostasis2
hospitals2
littering2
medical referrals2
norm activation2
nurses2
oncology2
aveiro lagoon (portugal)1
hong kong (china)1
netherlands1
nigeria1
poland1
environment & behavior4
plos one4
annals of the rheumatic diseases3
bmc medicine2
innotec2
latin-american journal of physics education2
acta neuropsychologica1
acupuncture in medicine1
advances in medical sciences (de gruyter open)1
american journal of medical quality1
american journal of respiratory & critical care medicine1
american journal on mental retardation1
archives of environmental contamination & toxicology1
azania: archaeological research in africa1
birth defects research1
bjs open1
bmc public health1
breast cancer research & treatment1
british journal of cancer1
british journal of clinical pharmacology1
british journal of general practice1
canadian entomologist1
canadian journal of applied physiology1
clinical & translational allergy1
clinical & translational medicine1
cognition1
comunicata scientiae1
current opinion in clinical nutrition & metabolic care1
developmental medicine & child neurology1
diachronica1
diversitas journal1
european spine journal1
family process1
food additives & contaminants. part a: chemistry, analysis, control, exposure & risk assessment1
frontiers in microbiology1
ingenieria de recursos naturales y del ambiente1
international journal of social economics1
jnci cancer spectrum1
journal of applied ecology1
journal of cancer survivorship1
journal of clinical medicine1
journal of environmental engineering1
journal of evaluation in clinical practice1
journal of fungi1
journal of hematology & oncology1
journal of nutrition1
revue philosophique de la france et de l etranger1
revue theologique de louvain1
springer nature10
wiley-blackwell7
sage publications inc.6
taylor & francis ltd5
biomed central4
public library of science4
bmj publishing group3
mdpi3
oxford university press / usa3
cambridge university press2
elsevier b.v.2
john wiley & sons, inc.2
laboratorio tecnologico del uruguay2
latin-american physics education network2
agencja wydawnicza medsportpress sp. z o.o.1
american association on intellectual & developmental disabilities1
american society of civil engineers1
american thoracic society1
asociacion latinoamericana de patologia/asociacion mexicana de patologia/consejo mexicano de medicos1
canadian science publishing1
centro de estudos de crescimento e desenvolvimento do ser humano1
council of the west african linguistic society1
ediciones mayo1
emerald publishing limited1
frontiers media s.a.1
green hill healthcare communications, llc1
instituto federal do rio grande do norte - ifrn1
iwa publishing1
john benjamins publishing co.1
jospt, inc. d/b/a movement science media1
lippincott williams & wilkins1
mary ann liebert, inc.1
presses univ france1
royal college of general practitioners1
sage publications, ltd.1
scandinavian journal of work environment (sjweh)1
sciendo1
univ catholique louvain1
universidad del valle1
universidad santo tomas, seccional tunja1
PASCAL Archive209
Academic Search Index187
Academic Search Premier187
MathSciNet via EBSCOhost110
ScienceDirect90
OpenAIRE83
Networked Digital Library of Theses & Dissertations76
Complementary Index✖[remove]75
FRANCIS Archive51
Scopus®46
Directory of Open Access Journals40
Environment Index40
Springer Nature Journals38
Journals@OVID32
Supplemental Index23
GreenFILE19
Science Citation Index Expanded19
NARCIS15
Family & Society Studies Worldwide12
Business Source Complete10
Business Source Index10
SwePub9
British Library Document Supply Centre Inside Serials & Conference Proceedings8
JSTOR Journals6
IEEE Xplore Digital Library6
Social Sciences Citation Index5
AGRIS5
Education Full Text (H.W. Wilson)✖[remove]5
EconLit with Full Text5
Index to Legal Periodicals & Books Full Text (H.W. Wilson)4
Gale In Context: Science3
Gale In Context: Opposing Viewpoints3
Communication & Mass Media Complete3
Index to Legal Periodicals and Books (H.W. Wilson)2
Erudit2
HBO Kennisbank2
SciELO2
Women's Studies International2
Gale OneFile: CPI.Q2
Arts & Humanities Citation Index✖[remove]2
Bibliography of Indigenous Peoples in North America2
Gale In Context: Middle School1
Openedition.org1
MLA International Bibliography1
RILM Abstracts of Music Literature with Full Text1
Gale OneFile: Business1
Library, Information Science & Technology Abstracts1
Directory of Open Access Books1
E-LIS (Eprints in Library & Information Science)1
Urban Studies Abstracts✖[remove]1
ERIC1
BioOne Complete1
Historical Abstracts with Full Text1
Digital Access to Scholarship at Harvard (DASH)1
DEHESA1
Persée1
Korean Studies Information Service System1
Art Full Text (H.W. Wilson)1
America: History and Life with Full Text1
83 articles+ results
1 - 50 Next
1. This title is not available for guests. Log in to see the title and access the article.
3. DESMONTE DEL ESTADO CONSTITUCIONAL MEDIANTE EL RECURRENTE ACOMODO INSTITUCIONAL NO DEMOCRÁTICO. EL CASO DEL REITERADO REFORMISMO EN DETRIMENTO DEL SISTEMA DEL MÉRITO EN EL SERVICIO PÚBLICO (1991-2016). [2018]
Patiño-Rojas, Jorge Enrique
Principia Iuris; ene-abr2018, Vol. 16 Issue 29, p108-127, 20p
Copyright of Principia Iuris is the property of Universidad Santo Tomas, Seccional Tunja and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Find full text or request
4. Continuity and change in the evolution of French yes-no questions: A cross-variety perspective. [2022]
Comeau, Philip, King, Ruth, and LeBlanc, Carmen L.
Diachronica; 2022, Vol. 39 Issue 5, p616-657, 42p
FRENCH language, SOCIOLINGUISTICS, CONTINUITY, CANADIAN history, and TECHNOLOGICAL innovations
Copyright of Diachronica is the property of John Benjamins Publishing Co. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
5. Enteroparasitos em Alface (Lactuca Sativa L.) Comercializada em uma Feira Livre de um Município Alagoano. [2021]
Farias Santos, Fernanda de, da Silva, Igor Jean Moura, Gomes, Dharliton Soares, and de Amorim Santos, Israel Gomes
Diversitas Journal; 2021, Vol. 6 Issue 4, p3882-3889, 8p
Copyright of Diversitas Journal is the property of Diversitas Journal and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
6. Hard to predict! No clear effects of home-field advantage on leaf litter decomposition in tropical heath vegetation. [2022]
Alencar, Mery I. G. de, Belo, André Y. S. P., Silva, José L. A., Asato, Ana E. B., Gomes, Eduarda F., de Oliveira, Valéria S., Teixeira, Jesiel de O., Monte, Otávio de S., Mota, Adriano S., Pereira, Vitória M. L., Dantas, Sibele S., Silva, Gabriel H. S., Goto, Bruno T., Souza, Alexandre F., and Caliman, Adriano
Journal of Tropical Ecology; Nov2022, Vol. 38 Issue 6, p462-471, 10p
The home-field advantage (HFA) hypothesis establishes that plant litter decomposes faster at 'home' sites than in 'away' sites due to more specialized decomposers acting at home sites. This hypothesis has predominantly been tested through 'yes or no' transplanting experiments, where the litter decomposition of a focal species is quantified near and away from their conspecifics. Herein, we evaluated the occurrence and magnitude of home-field effects on the leaf litter decomposition of Myrcia ramuliflora (O.Berg) N. Silveira (Myrtaceae) along a natural gradient of conspecific litterfall input and also if home-field effects are affected by litter and soil traits. Litter decomposition of M. ramuliflora was assessed through litterbags placed in 39 plots in a tropical heath vegetation over a period of 12 months. We also characterized abiotic factors, litter layer traits, and litter diversity. Our results indicated the occurrence of positive (i.e. Home-field advantage) and negative (i.e. Home-field disadvantage) effects in more than half of the plots. Positive and negative effects occurred in a similar frequency and magnitude. Among all predictors tested, only the community weighted mean C/N ratio of the litterfall input was associated with home-field effects. Our results reinforce the lack of generality for home-field effects found in the literature and thus challenge the understanding of litter-decomposer interaction in tropical ecosystems. [ABSTRACT FROM AUTHOR]
7. Analysis of early neonatal case fatality rate among newborns with congenital hydrocephalus, a 2000–2014 multi‐country registry‐based study. [2022]
Gili, Juan Antonio, López‐Camelo, Jorge Santiago, Nembhard, Wendy N., Bakker, Marian, de Walle, Hermien E. K., Stallings, Erin B., Kancherla, Vijaya, Contiero, Paolo, Dastgiri, Saeed, Feldkamp, Marcia L., Nance, Amy, Gatt, Miriam, Martínez, Laura, Canessa, María Aurora, Groisman, Boris, Hurtado‐Villa, Paula, Källén, Karin, Landau, Danielle, Lelong, Nathalie, and Morgan, Margery
Birth Defects Research; Jul2022, Vol. 114 Issue 12, p631-644, 14p
Background: Congenital hydrocephalus (CH) comprises a heterogeneous group of birth anomalies with a wide‐ranging prevalence across geographic regions and registry type. The aim of the present study was to analyze the early neonatal case fatality rate (CFR) and total birth prevalence of newborns diagnosed with CH. Methods: Data were provided by 25 registries from four continents participating in the International Clearinghouse for Birth Defects Surveillance and Research (ICBDSR) on births ascertained between 2000 and 2014. Two CH rates were calculated using a Poisson distribution: early neonatal CFR (death within 7 days) per 100 liveborn CH cases (CFR) and total birth prevalence rate (BPR) per 10,000 births (including live births and stillbirths) (BPR). Heterogeneity between registries was calculated using a meta‐analysis approach with random effects. Temporal trends in CFR and BPR within registries were evaluated through Poisson regression modeling. Results: A total of 13,112 CH cases among 19,293,280 total births were analyzed. The early neonatal CFR was 5.9 per 100 liveborn cases, 95% confidence interval (CI): 5.4–6.8. The CFR among syndromic cases was 2.7 times (95% CI: 2.2–3.3) higher than among non‐syndromic cases (10.4% [95% CI: 9.3–11.7] and 4.4% [95% CI: 3.7–5.2], respectively). The total BPR was 6.8 per 10,000 births (95% CI: 6.7–6.9). Stratified by elective termination of pregnancy for fetal anomalies (ETOPFA), region and system, higher CFR were observed alongside higher BPR rates. The early neonatal CFR and total BPR did not show temporal variation, with the exception of a CFR decrease in one registry. Conclusions: Findings of early neonatal CFR and total BPR were highly heterogeneous among registries participating in ICBDSR. Most registries with higher CFR also had higher BPR. Differences were attributable to type of registry (hospital‐based vs. population‐based), ETOPFA (allowed yes or no) and geographical regions. These findings contribute to the understanding of regional differences of CH occurrence and early neonatal deaths. [ABSTRACT FROM AUTHOR]
8. Targeting the IL-6-Yap-Snail signalling axis in synovial fibroblasts ameliorates inflammatory arthritis. [2022]
Symons, Rebecca A., Colella, Fabio, Collins, Fraser L., Rafipay, Alexandra J., Kania, Karolina, McClure, Jessica J., White, Nathan, Cunningham, Iain, Ashraf, Sadaf, Hay, Elizabeth, Mackenzie, Kevin S., Howard, Kenneth A., Riemen, Anna H. K., Manzo, Antonio, Clark, Susan M., Roelofs, Anke J., Bari, Cosimo De, and De Bari, Cosimo
Annals of the Rheumatic Diseases; Feb2022, Vol. 81 Issue 2, p214-224, 11p
Objective: We aimed to understand the role of the transcriptional co-factor Yes-associated protein (Yap) in the molecular pathway underpinning the pathogenic transformation of synovial fibroblasts (SF) in rheumatoid arthritis (RA) to become invasive and cause joint destruction.Methods: Synovium from patients with RA and mice with antigen-induced arthritis (AIA) was analysed by immunostaining and qRT-PCR. SF were targeted using Pdgfrα-CreER and Gdf5-Cre mice, crossed with fluorescent reporters for cell tracing and Yap-flox mice for conditional Yap ablation. Fibroblast phenotypes were analysed by flow cytometry, and arthritis severity was assessed by histology. Yap activation was detected using Yap-Tead reporter cells and Yap-Snail interaction by proximity ligation assay. SF invasiveness was analysed using matrigel-coated transwells.Results: Yap, its binding partner Snail and downstream target connective tissue growth factor were upregulated in hyperplastic human RA and in mouse AIA synovium, with Yap detected in SF but not macrophages. Lineage tracing showed polyclonal expansion of Pdgfrα-expressing SF during AIA, with predominant expansion of the Gdf5-lineage SF subpopulation descending from the embryonic joint interzone. Gdf5-lineage SF showed increased expression of Yap and adopted an erosive phenotype (podoplanin+Thy-1 cell surface antigen-), invading cartilage and bone. Conditional ablation of Yap in Gdf5-lineage cells or Pdgfrα-expressing fibroblasts ameliorated AIA. Interleukin (IL)-6, but not tumour necrosis factor alpha (TNF-α) or IL-1β, Jak-dependently activated Yap and induced Yap-Snail interaction. SF invasiveness induced by IL-6 stimulation or Snail overexpression was prevented by Yap knockdown, showing a critical role for Yap in SF transformation in RA.Conclusions: Our findings uncover the IL-6-Yap-Snail signalling axis in pathogenic SF in inflammatory arthritis. [ABSTRACT FROM AUTHOR]
9. Differences in characteristics between people with tinnitus that seek help and that do not. [2021]
Rademaker, M. M., Stegeman, I., Brabers, A. E. M., de Jong, J. D., Stokroos, R. J., and Smit, A. L.
Scientific Reports; 11/25/2021, Vol. 11 Issue 1, p1-13, 13p
HELP-seeking behavior, TINNITUS, and HEARING disorders
Knowledge on characteristics of people that seek help for tinnitus is scarce. The primary objective of this study was to describe differences in characteristics between people with tinnitus that seek help compared to those who do not seek help. Next, we described differences in characteristics between those with and without tinnitus. In this cross-sectional study, we sent a questionnaire on characteristics in different domains; demographic, tinnitus-specific, general- and psychological health, auditory and noise- and substance behaviour. We assessed if participants had sought help or planned to seek help for tinnitus. Tinnitus distress was defined with the Tinnitus Functional Index. Differences between groups (help seeking: yes/no, tinnitus: yes/no) were described. 932 people took part in our survey. Two hundred and sixteen participants were defined as having tinnitus (23.2%). Seventy-three of those sought or planned to seek help. A constant tinnitus pattern, a varying tinnitus loudness, and hearing loss, were described more frequently in help seekers. Help seekers reported higher TFI scores. Differences between help seekers and people not seeking help were mainly identified in tinnitus- and audiological characteristics. These outcomes might function as a foundation to explore the heterogeneity in tinnitus patients. [ABSTRACT FROM AUTHOR]
10. Reciprocal YAP1 loss and INSM1 expression in neuroendocrine prostate cancer. [2021]
Asrani, Kaushal, Torres, Alba FC, Woo, Juhyung, Vidotto, Thiago, Tsai, Harrison K, Luo, Jun, Corey, Eva, Hanratty, Brian, Coleman, Ilsa, Yegnasubramanian, Srinivasan, De Marzo, Angelo M, Nelson, Peter S, Haffner, Michael C, and Lotan, Tamara L
Journal of Pathology; Dec2021, Vol. 255 Issue 4, p425-437, 13p
PROSTATE cancer, ANDROGEN deprivation therapy, LABORATORY mice, CANCER genes, TRANSCRIPTION factors, and DIAGNOSIS
Neuroendocrine prostate cancer (NEPC) is a rare but aggressive histologic variant of prostate cancer that responds poorly to androgen deprivation therapy. Hybrid NEPC‐adenocarcinoma (AdCa) tumors are common, often eluding accurate pathologic diagnosis and requiring ancillary markers for classification. We recently performed an outlier‐based meta‐analysis across a number of independent gene expression microarray datasets to identify novel markers that differentiate NEPC from AdCa, including up‐regulation of insulinoma‐associated protein 1 (INSM1) and loss of Yes‐associated protein 1 (YAP1). Here, using diverse cancer gene expression datasets, we show that Hippo pathway‐related genes, including YAP1, are among the top down‐regulated gene sets with expression of the neuroendocrine transcription factors, including INSM1. In prostate cancer cell lines, transgenic mouse models, and human prostate tumor cohorts, we confirm that YAP1 RNA and YAP1 protein expression are silenced in NEPC and demonstrate that the inverse correlation of INSM1 and YAP1 expression helps to distinguish AdCa from NEPC. Mechanistically, we find that YAP1 loss in NEPC may help to maintain INSM1 expression in prostate cancer cell lines and we further demonstrate that YAP1 silencing likely occurs epigenetically, via CpG hypermethylation near its transcriptional start site. Taken together, these data nominate two additional markers to distinguish NEPC from AdCa and add to data from other tumor types suggesting that Hippo signaling is tightly reciprocally regulated with neuroendocrine transcription factor expression. © 2021 The Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
11. The Altmetric Score Has a Stronger Relationship With Article Citations Than Journal Impact Factor and Open Access Status: A Cross-sectional Analysis of 4022 Sport Sciences Articles. [2021]
DE OLIVEIRA SILVA, DANILO, TABORDA, BIANCA, PAZZINATTO, MARCELLA F., ARDERN, CLARE L., and BARTON, CHRISTIAN J.
Journal of Orthopaedic & Sports Physical Therapy; Nov2021, Vol. 51 Issue 11, p536-541, 6p
SPORTS sciences, CROSS-sectional method, SERIAL publications, SOCIAL media, MULTIPLE regression analysis, REGRESSION analysis, CITATION analysis, OPEN access publishing, DESCRIPTIVE statistics, PERIODICAL articles, DATA analysis software, and IMPACT factor (Citation analysis)
*OBJECTIVE: To assess the relationship of individual article citations in the sport sciences field with (1) Journal Impact Factor, (2) each article's open access status, and (3) Altmetric score components. * DESIGN: Cross-sectional. * METHODS: We searched the Web of Science Journal Citation Reports database in the sport sciences category for the 20 journals with the highest 2-year Journal Impact Factor in 2018. We extracted the impact factor for each journal and each article's open access status (yes or no). Between September 2019 and February 2020, we obtained individual citations, Altmetric scores, and details of Altmetric components (eg, number of tweets, Face-book posts, etc) for each article published in 2017. Linear and multiple regression models were used to assess the relationship between the dependent variable (citation number) and the independent variables (article Altmetric score and open access status and Journal Impact Factor). *RESULTS: Of the 4022 articles included, the total Altmetric score, Journal Impact Factor, and open access status respectively explained 32%, 14%, and 1% of the variance in article citations (when combined, the variables explained 40% of the variance in article citations). The number of tweets related to an article was the Altmetric component that explained the highest proportion of article citations (37%). *CONCLUSION: Altmetric scores in sport sciences journals have a stronger relationship with number of citations than Journal Impact Factor and open access status do. Twitter may be the best social media platform for promoting a research article. [ABSTRACT FROM AUTHOR]
12. Does the use of indirect calorimetry change outcome in the ICU? Yes it does. [2018]
De Waele, Elisabeth, Honoré, Patrick M., and Malbrain, Manu L. N. G.
Current Opinion in Clinical Nutrition & Metabolic Care; Mar2018, Vol. 21 Issue 2, p126-129, 4p
13. Effect of Lactic Acid Bacteria Strains on the Growth and Aflatoxin Production Potential of Aspergillus parasiticus , and Their Ability to Bind Aflatoxin B1, Ochratoxin A, and Zearalenone in vitro. [2021]
Møller, Cleide Oliveira de Almeida, Freire, Luisa, Rosim, Roice Eliana, Margalho, Larissa Pereira, Balthazar, Celso Fasura, Franco, Larissa Tuanny, Sant'Ana, Anderson de Souza, Corassin, Carlos Humberto, Rattray, Fergal Patrick, and Oliveira, Carlos Augusto Fernandes de
Frontiers in Microbiology; 4/22/2021, Vol. 11, pN.PAG-N.PAG, 18p
AFLATOXINS, ASPERGILLUS flavus, LACTIC acid bacteria, ASPERGILLUS parasiticus, COMPETITION (Biology), ZEARALENONE, and POTASSIUM phosphates
The increased consumption of plant-based foods has intensified the concern related to mycotoxin intoxication. This study aimed to investigate the effect of selected lactic acid bacteria (LAB) strains on the growth of Aspergillus parasiticus NRRL 2999 and its production of aflatoxin (AF). The ability of the heat-killed (100°C for 1 h) LAB strains to bind aflatoxin M1 (AFM1) in milk and aflatoxin B1 (AFB1), ochratoxin A (OTA), and zearalenone (ZEN) in potassium phosphate buffer (PPB) was also evaluated in vitro. Ten LAB strains were tested individually, by inoculating them simultaneously with the fungus or after incubation of the fungus for 24 or 48 h at 25°C. Double layer yeast extract sucrose (YES) agar, de Man Rogosa and Sharpe (MRS) agar, and YES broth were incubated for 7 days at 25°C to follow the development of the fungus. Levilactobacillus spp. 3QB398 and Levilactobacillus brevis 2QB422 strains were able to delay the growth of A. parasiticus in YES broth, even when these strains were inoculated 24 h after the fungus. The inhibitory effect of these LAB strains was confirmed by the reduction of fungus colony size, suggesting dominance of LAB by competition (a Lotka-Voltera effect). The production of AFB1 by A. parasiticus was inhibited when the fungus was inoculated simultaneously with Lactiplantibacillus plantarum 3QB361 or L. plantarum 3QB350. No AFB1 was found when Levilactobacillus spp. 2QB383 was present, even when the LAB was inoculated 48 h after the fungus. In binding studies, seven inactivated LAB strains were able to promote a reduction of at least 50% the level of AFB1, OTA, and ZEN. This reduction varied depending on the pH of the PPB. In milk, however, only two inactivated LAB strains were able to reduce AFM1, with a reduction of 33 and 45% for Levilactobacillus spp. 3QB398 (Levilactobacillus spp.) and L. brevis 2QB422, respectively. Nevertheless, these results clearly indicate the potential of using LAB for mycotoxin reduction. [ABSTRACT FROM AUTHOR]
14. Body composition and its association with fatigue in the first 2 years after colorectal cancer diagnosis. [2021]
van Baar, H., Bours, M. J. L., Beijer, S., van Zutphen, M., van Duijnhoven, F. J. B., Kok, D. E., Wesselink, E., de Wilt, J. H. W., Kampman, E., and Winkels, R. M.
Journal of Cancer Survivorship; Aug2021, Vol. 15 Issue 4, p597-606, 10p
Purpose: Persistent fatigue among colorectal cancer (CRC) patients might be associated with unfavorable body composition, but data are sparse and inconsistent. We studied how skeletal muscle index (SMI), skeletal muscle radiodensity (SMR), visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT) at diagnosis are associated with fatigue up to 24 months post-diagnosis in stage I–III CRC patients. Methods: SMI, SMR, VAT, and SAT were assessed among 646 CRC patients using pre-treatment computed tomography images. Fatigue at diagnosis, at 6, and 24 months post-diagnosis was assessed using the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire. The association of SMI, SMR, VAT, and SAT with fatigue (yes/no) was assessed using confounder-adjusted restricted cubic spline analyses. Results: Prevalence of fatigue at diagnosis was 18%, at 6 months 25%, and at 24 months 12%. At diagnosis, a significant (p = 0.01) non-linear association of higher levels of SAT with higher prevalence of fatigue was observed. Lower levels of SMR were linearly associated with higher prevalence of fatigue at 6 months post-diagnosis (overall association p = 0.02). None of the body composition parameters were significantly associated with fatigue at 24 months. Conclusion: Having more SAT was associated with more fatigue at diagnosis, while low levels of SMR were associated with more fatigue at 6 months post-diagnosis. Implications for Cancer Survivors: Our results suggest that it may be interesting to investigate whether interventions that aim to increase SMR around the time of diagnosis may help to lower fatigue. However, more knowledge is needed to understand the mechanisms behind the association of SMR with fatigue. [ABSTRACT FROM AUTHOR]
15. Talaromyces santanderensis : A New Cadmium-Tolerant Fungus from Cacao Soils in Colombia. [2022]
Guerra Sierra, Beatriz E., Arteaga-Figueroa, Luis A., Sierra-Pelaéz, Susana, and Alvarez, Javier C.
Journal of Fungi; Oct2022, Vol. 8 Issue 10, p1042-N.PAG, 18p
TALAROMYCES, CACAO, FUNGAL growth, NATURAL resources, SOIL pollution, and SOILS
Inorganic pollutants in Colombian cocoa (Theobroma cacao L.) agrosystems cause problems in the production, quality, and exportation of this raw material worldwide. There has been an increased interest in bioprospecting studies of different fungal species focused on the biosorption of heavy metals. Furthermore, fungi constitute a valuable, profitable, ecological, and efficient natural soil resource that could be considered in the integrated management of cadmium mitigation. This study reports a new species of Talaromyces isolated from a cocoa soil sample collected in San Vicente de Chucurí, Colombia. T. santanderensis is characterized by Lemon Yellow (R. Pl. IV) mycelium on CYA, mono- to biverticillate conidiophores, and acerose phialides. T. santanderensis is distinguished from related species by its growth rate on CYAS and powdery textures on MEA, YES and OA, high acid production on CREA and smaller conidia. It is differentiated from T. lentulus by its growth rate on CYA medium at 37 °C without exudate production, its cream (R. Pl. XVI) margin on MEA, and dense sporulation on YES and CYA. Phylogenetic analysis was performed using a polyphasic approach, including different phylogenetic analyses of combined and individual ITS, CaM, BenA, and RPB2 gene sequences that indicate that it is new to science and is named Talaromyces santanderensis sp. nov. This new species belongs to the Talaromyces section and is closely related to T. lentulus, T. soli, T. tumuli, and T. pratensis (inside the T. pinophilus species complex) in the inferred phylogeny. Mycelial growth of the fungal strains was subjected to a range of 0–400 mg/kg Cd and incorporated into malt extract agar (MEA) in triplicate. Fungal radial growth was recorded every three days over a 13-day incubation period, and in vitro cadmium tolerance tests showed a high tolerance index (0.81) when the mycelium was exposed to 300 mg/kg of Cd. Results suggest that T. santanderensis showed tolerance to Cd concentrations that exceed the permissible limits for contaminated soils, and it is promising for its use in bioremediation strategies to eliminate Cd from highly contaminated agricultural soils. [ABSTRACT FROM AUTHOR]
16. Circulating Myeloperoxidase (MPO)-DNA complexes as marker for Neutrophil Extracellular Traps (NETs) levels and the association with cardiovascular risk factors in the general population. [2021]
Donkel, Samantha J., Wolters, Frank J., Ikram, M. Arfan, and de Maat, Moniek P. M.
PLoS ONE; 8/11/2021, p1-13, 13p
CARDIOVASCULAR diseases risk factors, MYELOPEROXIDASE, CORONARY disease, CARDIOVASCULAR diseases, CIRCULATING tumor DNA, and HDL cholesterol
Introduction: Neutrophil extracellular traps (NETs) are DNA scaffolds enriched with antimicrobial proteins. NETs have been implicated in the development of various diseases, such as cardiovascular disease. Here, we investigate the association of demographic and cardiovascular (CVD) risk factors with NETs in the general population. Material and methods: Citrated plasma was collected from 6449 participants, aged ≥55 years, as part of the prospective population-based Rotterdam Study. NETs were quantified by measuring MPO-DNA complex using an ELISA. We used linear regression to determine the associations between MPO-DNA complex and age, sex, cardio-metabolic risk factors, and plasma markers of inflammation and coagulation. Results: MPO-DNA complex levels were weakly associated with age (log difference per 10 year increase: -0.04 mAU/mL, 95% confidence interval [CI] -0.06;-0.02), a history of coronary heart disease (yes versus no: -0.10 mAU/mL, 95% CI -0.17;-0.03), the use of lipid-lowering drugs (yes versus no: -0.06 mAU/mL, 95% CI -0.12;-0.01), and HDL-cholesterol (per mmol/l increase: -0.07 mAU/mL, 95% CI -0.12;-0.03). Conclusions: Older age, a history of coronary heart disease, the use of lipid-lowering drugs and higher HDL-cholesterol are weakly correlated with lower plasma levels of NETs. These findings show that the effect of CVD risk factors on NETs levels in a general population is only small and may not be of clinical relevance. This supports that NETs may play a more important role in an acute phase of disease than in a steady state situation. [ABSTRACT FROM AUTHOR]
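For orientation only, here is a minimal sketch of the kind of linear model that produces "log difference per 10-year increase" estimates like those quoted above: the outcome is log-transformed and age is rescaled to decades. The data frame, column names, and values are invented, not the Rotterdam Study data.

# Sketch (not the authors' code): OLS on a log-transformed outcome with age in
# decades, so coefficients read as log differences per 10-year increase.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 6449
dat = pd.DataFrame({
    "age": rng.normal(70, 8, n),
    "sex": rng.integers(0, 2, n),
    "hdl": rng.normal(1.4, 0.4, n),                 # mmol/L, hypothetical
    "mpo_dna": rng.lognormal(1.0, 0.5, n),          # mAU/mL, hypothetical
})
# I(age / 10) expresses the association per 10-year age increase.
model = smf.ols("np.log(mpo_dna) ~ I(age / 10) + C(sex) + hdl", data=dat).fit()
print(model.params, model.conf_int(), sep="\n")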
17. Circulating Folate and Folic Acid Concentrations: Associations With Colorectal Cancer Recurrence and Survival. [2020]
Geijsen, Anne J M R, Ulvik, Arve, Gigic, Biljana, Kok, Dieuwertje E, Duijnhoven, Fränzel J B van, Holowatyj, Andreana N, Brezina, Stefanie, Roekel, Eline H van, Baierl, Andreas, Bergmann, Michael M, Böhm, Jürgen, Bours, Martijn J L, Brenner, Hermann, Breukink, Stéphanie O, Bronner, Mary P, Chang-Claude, Jenny, Wilt, Johannes H W de, Grady, William M, Grünberger, Thomas, and Gumpenberger, Tanja
JNCI Cancer Spectrum; Oct2020, Vol. 4 Issue 5, p1-11, 11p
FOLIC acid, COLON cancer, CARCINOGENESIS, COLON cancer patients, and PROPORTIONAL hazards models
Background Folates, including folic acid, may play a dual role in colorectal cancer development. Folate is suggested to be protective in early carcinogenesis but could accelerate growth of premalignant lesions or micrometastases. Whether circulating concentrations of folate and folic acid, measured around time of diagnosis, are associated with recurrence and survival in colorectal cancer patients is largely unknown. Methods Circulating concentrations of folate, folic acid, and folate catabolites p-aminobenzoylglutamate and p-acetamidobenzoylglutamate were measured by liquid chromatography-tandem mass spectrometry at diagnosis in 2024 stage I-III colorectal cancer patients from European and US patient cohort studies. Multivariable-adjusted Cox proportional hazard models were used to assess associations between folate, folic acid, and folate catabolites concentrations with recurrence, overall survival, and disease-free survival. Results No statistically significant associations were observed between folate, p-aminobenzoylglutamate, and p-acetamidobenzoylglutamate concentrations and recurrence, overall survival, and disease-free survival, with hazard ratios ranging from 0.92 to 1.16. The detection of folic acid in the circulation (yes or no) was not associated with any outcome. However, among patients with detectable folic acid concentrations (n = 296), a higher risk of recurrence was observed for each twofold increase in folic acid (hazard ratio = 1.31, 95% confidence interval = 1.02 to 1.58). No statistically significant associations were found between folic acid concentrations and overall and disease-free survival. Conclusions Circulating folate and folate catabolite concentrations at colorectal cancer diagnosis were not associated with recurrence and survival. However, caution is warranted for high blood concentrations of folic acid because they may increase the risk of colorectal cancer recurrence. [ABSTRACT FROM AUTHOR]
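The "hazard ratio per twofold increase" reported above corresponds to entering folic acid on a log2 scale in a Cox model. The sketch below illustrates that setup with the lifelines package on simulated data; the column names, values, and covariate set are assumptions, not the study's.

# Hedged illustration: Cox proportional hazards with folic acid on a log2
# scale, so exp(coef) is the hazard ratio per doubling. Data are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 296  # number of patients with detectable folic acid in the abstract
dat = pd.DataFrame({
    "time_to_event_years": rng.exponential(5.0, n),
    "recurrence": rng.binomial(1, 0.3, n),
    "folic_acid_nmol_l": rng.lognormal(2.0, 0.8, n),
    "age": rng.normal(65, 10, n),
})
dat["log2_folic_acid"] = np.log2(dat["folic_acid_nmol_l"])

cph = CoxPHFitter()
cph.fit(dat.drop(columns=["folic_acid_nmol_l"]),
        duration_col="time_to_event_years", event_col="recurrence")
# Hazard ratio per twofold increase in circulating folic acid:
print(cph.hazard_ratios_["log2_folic_acid"])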
18. Reducing Disparities in Receipt of Genetic Counseling for Underserved Women at Risk of Hereditary Breast and Ovarian Cancer. [2020]
Sutton, Arnethea L., Hurtado-de-Mendoza, Alejandra, Quillin, John, Rubinsak, Lisa, Temkin, Sarah M., Gal, Tamas, and Sheppard, Vanessa B.
Journal of Women's Health (15409996); Aug2020, Vol. 29 Issue 8, p1131-1135, 5p
ACADEMIC medical centers, CANCER genetics, CHI-squared test, CONFIDENCE intervals, EMPLOYMENT, GENETIC counseling, HEALTH services accessibility, HEALTH status indicators, HEALTH insurance, MARITAL status, MEDICAL care use, MEDICAL referrals, METROPOLITAN areas, MULTIVARIATE analysis, RACISM, LOGISTIC regression analysis, ELECTRONIC health records, DESCRIPTIVE statistics, ODDS ratio, and DISEASE risk factors
Purpose: Genetic counseling (GC) provides critical risk prediction information to women at risk of carrying a genetic alteration; yet racial/ethnic and socioeconomic disparities persist with regard to GC uptake. This study examined patterns of GC uptake after a referral in a racially diverse population. Materials and Methods: In an urban academic medical center, medical records were reviewed between January 2016 and December 2017 for women who were referred to a genetic counselor for hereditary breast and ovarian cancer. Study outcomes were making an appointment (yes/no) and keeping an appointment. We assessed sociodemographic factors and clinical factors. Associations between factors and the outcomes were analyzed using chi-square, and logistic regression was used for multivariable analysis. Results: A total of 510 women were referred to GC and most made appointments. More than half were white (55.3%) and employed (53.1%). No significant associations were observed between sociodemographic factors and making an appointment. A total of 425 women made an appointment and 268 kept their appointment. Insurance status (p = 0.003), marital status (p = 0.000), and work status (p = 0.039) were associated with receiving GC. In the logistic model, being married (odds ratio [OR] 2.119 [95% confidence interval, CI 1.341–3.347] p = 0.001) and having insurance (OR 2.203 [95% CI 1.208–4.016] p = 0.021) increased the likelihood of receiving counseling. Conclusions: Racial disparities in GC uptake were not observed in this sample. Unmarried women may need additional support to obtain GC. Financial assistance or other options need to be discussed during navigation as a way to lessen the disparity between women with insurance and those without. [ABSTRACT FROM AUTHOR]
19. De-escalation yes, but not at the expense of efficacy: in defense of better treatment. [2019]
Shapiro, Charles L.
NPJ Breast Cancer; 8/12/2019, Vol. 5 Issue 1, pN.PAG-N.PAG, 1p
21. Sitting In: The Experience of Learning and Practicing Family Therapy through Being a Co‐Therapist in Hong Kong. [2020]
Xia, Lily L. L. and Ma, Joyce L. C.
Family Process; Dec2020, Vol. 59 Issue 4, p1914-1927, 14p, 3 Charts
ATTITUDE (Psychology), CULTURE, EXPERIENTIAL learning, FAMILY psychotherapy, HOSPITAL medical staff, INTERPROFESSIONAL relations, INTERVIEWING, MEDICAL personnel, PROFESSIONS, SELF-efficacy, SUPERVISION of employees, PEER relations, and THEMATIC analysis
22. Metástasis de adenocarcinoma colorrectal hacia piel de vulva y perirrectal, un caso raro [Metastasis of colorectal adenocarcinoma to the skin of the vulva and perirectal region: a rare case]. [2016]
P., Orozco-Cortez, L. E., Herrera-Barrera, and F., Bustos-Rodríguez
Patologia Revista Latinoamericana; Jul-Sep2016, Vol. 54 Issue 3, p90-95, 6p
23. Optimización de una técnica de medida de disrupción endocrina por medio de Saccharomyces cerevisiae recombinantes [Optimization of a technique for measuring endocrine disruption using recombinant Saccharomyces cerevisiae]. [2010]
Keel, K., Míguez, D., Soares, A., and Parodi, A.
Innotec; dic2010, Issue 5, p34-38, 5p
GALACTOSIDASES, ENDOCRINE disruptors, HOMEOSTASIS, FISH reproduction, and SACCHAROMYCES cerevisiae
24. Association of lifestyle and clinical characteristics with receipt of radiotherapy treatment among women diagnosed with DCIS in the NIH-AARP Diet and Health Study. [2020]
Mullooly, Maeve, Withrow, Diana R., Curtis, Rochelle E., Fan, Shaoqi, Liao, Linda M., Pfeiffer, Ruth M., de González, Amy Berrington, and Gierach, Gretchen L.
Breast Cancer Research & Treatment; Jan2020, Vol. 179 Issue 2, p445-457, 13p
Purpose: The long-term risks and benefits of radiotherapy for ductal carcinoma in situ (DCIS) remain unclear. Recent data from the Surveillance, Epidemiology and End Results (SEER) registries showed that DCIS-associated radiotherapy treatment significantly increased risk of second non-breast cancers including lung cancer. To help understand those observations and whether breast cancer risk factors are related to radiotherapy treatment decision-making, we examined associations between lifestyle and clinical factors with DCIS radiotherapy receipt. Methods: Among 1628 participants from the NIH-AARP Diet and Health Study, diagnosed with incident DCIS (1995–2011), we examined associations between lifestyle and clinical factors with radiotherapy receipt. Radiotherapy and clinical information were ascertained from state cancer registries. Odds ratios (ORs) and 95% confidence intervals (CIs) for radiotherapy receipt (yes/no) were estimated from multivariable logistic regression. Results: Overall, 45% (n = 730) received radiotherapy. No relationships were observed for most lifestyle factors and radiotherapy receipt, including current smoking (OR 0.97, 95%CI 0.70, 1.34). However positive associations were observed for moderate alcohol consumption and infrequent physical activity. The strongest associations were observed for radiotherapy receipt and more recent diagnoses (2005–2011 vs. 1995–1999; OR 1.60, 95%CI 1.14, 2.25), poorly versus well-differentiated tumors (OR 1.69, 95%CI 1.16, 2.46) and endocrine therapy (OR 3.37, 95%CI 2.56, 4.44). Conclusions: Clinical characteristics were the strongest determinants of DCIS radiotherapy. Receipt was largely unrelated to lifestyle factors suggesting that the previously observed associations in SEER were likely not confounded by these lifestyle factors. Further studies are needed to understand mechanisms driving radiotherapy-associated second malignancies following DCIS, to identify prevention opportunities for this growing population. [ABSTRACT FROM AUTHOR]
25. The economic cost of losing native pollinator species for orchard production. [2020]
Pérez‐Méndez, Néstor, Andersson, Georg K. S., Requier, Fabrice, Hipólito, Juliana, Aizen, Marcelo A., Morales, Carolina L., García, Nancy, Gennari, Gerardo P., Garibaldi, Lucas A., and Diekötter, Tim
Journal of Applied Ecology; Mar2020, Vol. 57 Issue 3, p599-608, 10p, 1 Diagram, 3 Graphs
POLLINATION, AGRICULTURAL productivity, APPLE orchards, CROP yields, FACTORIAL experiment designs, SUSTAINABLE agriculture, POLLINATORS, and ORCHARDS
26. Recruitment and retention of participants in UK surgical trials: survey of key issues reported by trial staff. [2020]
Crocker, J. C., Farrar, N., Cook, J. A., Treweek, S., Woolfall, K., Chant, A., Bostock, J., Locock, L., Rees, S., Olszowski, S., and Bulbulia, R.
BJS Open; Dec2020, Vol. 4 Issue 6, p1238-1245, 8p
NURSES, PHYSICIANS, and CLINICAL trials
27. THE POWER OF SELF-DECEPTION: PSYCHOLOGICAL REACTION TO THE COVID-19 THREAT. [2021]
Kaczmarek, Bożydar L. J. and Gaś, Zbigniew B.
Acta Neuropsychologica; 2021, Vol. 19 Issue 3, p319-328, 10p
SELF-deception, COVID-19, PSYCHOLOGICAL well-being, COVID-19 pandemic, SOCIAL attitudes, FEAR, and ATTITUDE (Psychology)
Background: Poland's inhabitants have often expressed disbelief and negative attitudes toward social isolation, combined with restlessness. This is due to a tendency to discount troubling information while facing the unknown and counter-argue against information that causes discomfort and fear. This tendency helps humans to maintain hope and well-being. The study aimed to determine if Polish citizens tend to downplay or even deny danger when faced with a death threat. Material/Methods: The study comprised 58 adults (46 females, 12 males), aged 21 to 49. The participants were asked to answer 12 questions defining their beliefs and attitudes towards the COVID-19 pandemic threat and its consequences. The subjects gave answers on the 5-point Likert scale, from "definitely not" to "definitely yes". Results: The findings of the present study show that a considerable number of the participants tend to exhibit an optimistic bias. This is reflected in their direct statements and in the lack of congruence of their opinions. They do feel the threat of becoming ill but also seem to believe it need not affect them personally. They are also relatively optimistic about the outcomes of the pandemic. At the same time, they realize that COVID-19 may lead to severe psychological, neurological, and mental disorders. Conclusions: The study confirmed a tendency to deny the threat that can pose a severe risk to health and psychological well-being. This is a manifestation of an optimism bias that has its roots in the way the human brain works. The participants did express concerns about the future but at the same time hoped that life after the pandemic would return to normal. It reflects a benevolent facet of self-deception since it makes it possible to cope with highly threatening and impossible to control events. [ABSTRACT FROM AUTHOR]
28. Prognostic significance of age in 5631 patients with Wilms tumour prospectively registered in International Society of Paediatric Oncology (SIOP) 93-01 and 2001. [2019]
Hol, J. A., Lopez-Yurda, M. I., Van Tinteren, H., Van Grotel, M., Godzinski, J., Vujanic, G., Oldenburger, F., De Camargo, B., Ramírez-Villar, G. L., Bergeron, C., Pritchard-Jones, K., Graf, N., and Van den Heuvel-Eibrink, M. M.
PLoS ONE; 8/19/2019, Vol. 14 Issue 8, p1-15, 15p
ONCOLOGY, TUMORS, CANCER, AGE, and TUMORS in children
Background: To enhance risk stratification for Wilms tumour (WT) in a pre-operative chemotherapy setting, we explored the prognostic significance and optimal age cutoffs in patients treated according to International Society of Paediatric Oncology Renal Tumour Study Group (SIOP-RTSG) protocols. Methods: Patients (6 months-18 years) with unilateral WT were selected from prospective SIOP 93–01 and 2001 studies (1993–2016). Martingale residual analysis was used to explore optimal age cutoffs. Outcome according to age was analyzed by uni- and multivariable analysis, adjusted for sex, biopsy (yes/no), stage, histology and tumour volume at surgery. Results: 5631 patients were included; median age was 3.4 years (IQR: 2–5.1). Estimated 5-year event-free survival (EFS) and overall survival (OS) were 85% (95% CI 83.5–85.5) and 93% (95% CI 92.0–93.4). Martingale residual plots detected no optimal age cutoffs. Multivariable analysis showed lower EFS with increasing age (linear trend P < 0.001). Using previously described age categories, EFS was lower for patients aged 2–4 (HR 1.34, P = 0.02), 4–10 (HR 1.83, P < 0.0001) and 10–18 years (HR 1.74, P = 0.01) as compared to patients aged 6 months-2 years. OS was lower for patients 4–10 years (HR 1.67, P = 0.01) and 10–18 years (HR 1.87, P = 0.04), but not for 2–4 years (HR 1.29, P = 0.23). Higher stage, histological risk group and tumour volume were independent adverse prognostic factors. Conclusion: Although optimal age cutoffs could not be identified, we demonstrated the prognostic significance of age as well as previously described cutoffs for EFS (2 and 4 years) and OS (4 years) in children with WT treated with pre-operative chemotherapy. These findings encourage the consideration of age in the design of future SIOP-RTSG protocols. [ABSTRACT FROM AUTHOR]
29. Serum 25-Hydroxyvitamin D Concentrations Are Associated with Computed Tomography Markers of Subclinical Interstitial Lung Disease among Community-Dwelling Adults in the Multi-Ethnic Study of Atherosclerosis (MESA). [2018]
Kim, Samuel M, Zhao, Di, Podolanczuk, Anna J, Lutsey, Pamela L, Guallar, Eliseo, Kawut, Steven M, Barr, R Graham, Boer, Ian H de, Kestenbaum, Bryan R, Lederer, David J, Michos, Erin D, and de Boer, Ian H
Journal of Nutrition; Jul2018, Vol. 148 Issue 7, p1126-1134, 9p
ATHEROSCLEROSIS, LUNG diseases, VITAMIN D deficiency, COMPUTED tomography, ARTERIOSCLEROSIS, COMPARATIVE studies, ETHNIC groups, INTERSTITIAL lung diseases, LONGITUDINAL method, RESEARCH methodology, MEDICAL cooperation, RESEARCH, RESEARCH funding, VITAMIN D, and EVALUATION research
Background: Activated vitamin D has anti-inflammatory properties. 25-Hydroxyvitamin D [25(OH)D] deficiency might contribute to subclinical interstitial lung disease (ILD). Objective: We examined associations between serum 25(OH)D concentrations and subclinical ILD among middle-aged to older adults who were free of cardiovascular disease at baseline. Methods: We studied 6302 Multi-Ethnic Study of Atherosclerosis (MESA) participants who had baseline serum 25(OH)D concentrations and computed tomography (CT) imaging spanning ≤ 10 y. Baseline cardiac CT scans (2000-2002) included partial lung fields. Some participants had follow-up cardiac CT scans at exams 2-5 and a full-lung CT scan at exam 5 (2010-2012), with a mean ± SD of 2.1 ± 1.0 scans. Subclinical ILD was defined quantitatively as high-attenuation areas (HAAs) between -600 and -250 Hounsfield units. We assessed associations of 25(OH)D with adjusted HAA volumes and HAA progression. We also examined associations between baseline 25(OH)D and the presence of interstitial lung abnormalities (ILAs) assessed qualitatively (yes or no) from full-lung CT scans at exam 5. Models were adjusted for sociodemographic characteristics, lifestyle factors (including smoking), and lung volumes. Results: The cohort's mean ± SD characteristics were 62.2 ± 10 y for age, 25.8 ± 10.9 ng/mL for 25(OH)D concentrations, and 28.3 ± 5.4 for body mass index (kg/m2); 53% were women, with 39% white, 27% black, 22% Hispanic, and 12% Chinese race/ethnicities. Thirty-three percent had replete (≥30 ng/mL), 35% intermediate (20 to <30 ng/mL), and 32% deficient (<20 ng/mL) 25(OH)D concentrations. Compared with those with replete concentrations, participants with 25(OH)D deficiency had greater adjusted HAA volume at baseline (2.7 cm3; 95% CI: 0.9, 4.5 cm3) and increased progression over a median of 4.3 y of follow-up (2.7 cm3; 95% CI: 0.9, 4.4 cm3) (P < 0.05). 25(OH)D deficiency was also associated with increased prevalence of ILAs 10 y later (OR: 1.5; 95% CI: 1.1, 2.2). Conclusions: Vitamin D deficiency is independently associated with subclinical ILD and its progression, based on both increased HAAs and ILAs, in a community-based population. Further studies are needed to examine whether vitamin D repletion can prevent ILD or slow its progression. The MESA cohort design is registered at www.clinicaltrials.gov as NCT00005487. [ABSTRACT FROM AUTHOR]
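The quantitative HAA definition used in this abstract (voxels between -600 and -250 Hounsfield units) translates directly into code. The following is an illustrative snippet on a synthetic array; the voxel spacing, function name, and data are made up for the example.

# Rough sketch: high-attenuation area (HAA) volume as the count of voxels with
# -600 <= HU <= -250, times the per-voxel volume. Synthetic stand-in data only.
import numpy as np

def haa_volume_cm3(hu, voxel_spacing_mm=(0.7, 0.7, 2.5)):
    """Volume (cm^3) of voxels with -600 <= HU <= -250."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    mask = (hu >= -600) & (hu <= -250)
    return mask.sum() * voxel_mm3 / 1000.0  # mm^3 -> cm^3

# Synthetic "lung field": mostly air-like values with some denser regions.
rng = np.random.default_rng(3)
ct = rng.normal(-750, 150, size=(40, 128, 128))
print(f"HAA volume: {haa_volume_cm3(ct):.1f} cm^3")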
30. Determinants of working until retirement compared to a transition to early retirement among older workers with and without chronic diseases: Results from a Dutch prospective cohort study. [2018]
SEWDAS, RANU, VAN DER BEEK, ALLARD J., DE WIND, ASTRID, VAN DER ZWAAN, LENNART G. L., and BOOT, CÉCILE R. L.
Scandinavian Journal of Public Health; May2018, Vol. 46 Issue 3, p400-408, 9p
RETIREMENT -- Psychological aspects, ANALYSIS of covariance, AUTONOMY (Psychology), CHRONIC diseases, LONGITUDINAL method, PROBABILITY theory, PROFESSIONS, LOGISTIC regression analysis, and MIDDLE age
Aim: The ageing society and recent policy changes may lead to an increase of older workers with chronic diseases in the workforce. To date, it is unclear whether workers with chronic diseases have specific needs while employed. The aim of this study is to explore the differences in determinants of working until retirement compared to a reference group who have transitioned to early retirement among workers with and without chronic diseases. Methods: Dutch workers aged 57-62 years (n = 2445) were selected from an existing prospective cohort study, 'STREAM'. The potential determinants were categorized into: individual, health, work-related and social factors. Logistic regression analyses were performed to determine the associations between these determinants and working until retirement - once for workers with and once for those without chronic diseases. To test differences, we included an interaction term between the determinant and the covariate 'having a chronic disease yes/no' in the analyses of the total population. Results: In total, 1652 (68%) persons were employed from 2011 to 2013. The majority of the determinants appeared to be similar for workers with or without a chronic disease; the interaction terms for these determinants and the covariate 'having a chronic disease' showed a p-value higher than 0.05, except for one individual factor (i.e. mastery) and one work-related factor (i.e. autonomy), which showed a p-value below 0.05. Higher mastery and higher autonomy were statistically significantly associated with working until retirement for those with chronic diseases, whereas they were not for those without chronic diseases. Conclusions: Differences between workers with and without chronic diseases may exist for working until a statutory retirement age. Interventions aimed at encouraging work participation of older workers should make a distinction between the two groups. Autonomy at work and mastery were found to be factors that may promote work participation until higher age, specifically for older workers with chronic diseases. [ABSTRACT FROM AUTHOR]
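A minimal sketch of the interaction test described above: a logistic model for working until retirement with a determinant-by-chronic-disease interaction term, whose p-value is what the "below 0.05" criterion refers to. All data and variable names are simulated placeholders, not STREAM data.

# Sketch only: logistic regression with a determinant x chronic-disease
# interaction, analogous to the analysis described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2445
dat = pd.DataFrame({
    "mastery": rng.normal(0, 1, n),
    "chronic_disease": rng.integers(0, 2, n),   # 1 = has a chronic disease
    "age": rng.integers(57, 63, n),             # 57-62 years
})
logit_p = 0.5 + 0.4 * dat["mastery"] * dat["chronic_disease"] + 0.1 * dat["mastery"]
dat["worked_until_retirement"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# 'mastery * C(chronic_disease)' expands to both main effects plus the interaction.
model = smf.logit(
    "worked_until_retirement ~ mastery * C(chronic_disease) + age", data=dat
).fit(disp=False)
print(model.pvalues.filter(like=":"))  # p-value of the interaction term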
31. YES-NO QUESTION FORMATION IN IGBO: THE PHONO-SYNTAX INTERFACE. [2018]
Amaechi, Mary
Journal of West African Languages; 2018, Vol. 45 Issue 1, p78-87, 10p
IGBO (African people), SYNTAX (Grammar), PRONOUNS (Grammar), PHONOGRAM (Linguistics), COMPARATIVE grammar, and ETHNOLOGY
32. Translation Invariant Extensions of Finite Volume Measures. [2017]
Goldstein, S., Kuna, T., Lebowitz, J., and Speer, E.
Journal of Statistical Physics; Feb2017, Vol. 166 Issue 3/4, p765-782, 18p
SYMMETRY (Physics), INVARIANT measures, DE Bruijn graph, ENTROPY, and LATTICE constants
We investigate the following questions: Given a measure $\mu_\Lambda$ on configurations on a subset $\Lambda$ of a lattice $\mathbb{L}$, where a configuration is an element of $\Omega^\Lambda$ for some fixed set $\Omega$, does there exist a measure $\mu$ on configurations on all of $\mathbb{L}$, invariant under some specified symmetry group of $\mathbb{L}$, such that $\mu_\Lambda$ is its marginal on configurations on $\Lambda$? When the answer is yes, what are the properties, e.g., the entropies, of such measures? Our primary focus is the case in which $\mathbb{L}=\mathbb{Z}^d$ and the symmetries are the translations. For the case in which $\Lambda$ is an interval in $\mathbb{Z}$ we give a simple necessary and sufficient condition, local translation invariance (LTI), for extendibility. For LTI measures we construct extensions having maximal entropy, which we show are Gibbs measures; this construction extends to the case in which $\mathbb{L}$ is the Bethe lattice. On $\mathbb{Z}$ we also consider extensions supported on periodic configurations, which are analyzed using de Bruijn graphs and which include the extensions with minimal entropy. When $\Lambda\subset\mathbb{Z}$ is not an interval, or when $\Lambda\subset\mathbb{Z}^d$ with $d\gt 1$, the LTI condition is necessary but not sufficient for extendibility. For $\mathbb{Z}^d$ with $d\gt 1$, extendibility is in some sense undecidable. [ABSTRACT FROM AUTHOR]
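As a reading aid, one natural way to state the local translation invariance (LTI) condition for an interval $\Lambda=\{1,\dots,N\}\subset\mathbb{Z}$ (a paraphrase for orientation, not a quotation from the paper) is that the marginal of $\mu_\Lambda$ on a window of length $m$ does not depend on where the window sits inside $\Lambda$:

$\displaystyle \mu_\Lambda\left(\omega_{j+1}=\eta_1,\dots,\omega_{j+m}=\eta_m\right)=\mu_\Lambda\left(\omega_{k+1}=\eta_1,\dots,\omega_{k+m}=\eta_m\right)$

for every window length $m\lt N,$ every pattern $\eta\in\Omega^m,$ and all offsets $0\le j,k\le N-m.$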
33. Activity of the novel BCR kinase inhibitor IQS019 in preclinical models of B-cell non-Hodgkin lymphoma. [2017]
Balsas, P., Esteve-Arenys, A., Roldán, J., Jiménez, L., Rodríguez, V., Valero, J. G., Chamorro-Jorganes, A., de la Bellacasa, R. Puig, Teixidó, J., Matas-Céspedes, A., Moros, A., Martínez, A., Campo, E., Sáez-Borderías, A., Borrell, J. I., Pérez-Galán, P., Colomer, D., and Roué, G.
Journal of Hematology & Oncology; 3/31/2017, Vol. 10, p1-14, 14p
B cell receptors, KINASE inhibitors, HODGKIN'S disease, CELL culture, and ANTINEOPLASTIC agents
Background: Pharmacological inhibition of B cell receptor (BCR) signaling has recently emerged as an effective approach in a wide range of B lymphoid neoplasms. However, despite promising clinical activity of the first Bruton's kinase (Btk) and spleen tyrosine kinase (Syk) inhibitors, a small fraction of patients tend to develop progressive disease after initial response to these agents. Methods: We evaluated the antitumor activity of IQS019, a new BCR kinase inhibitor with increased affinity for Btk, Syk, and Lck/Yes novel tyrosine kinase (Lyn), in a set of 34 B lymphoid cell lines and primary cultures, including samples with acquired resistance to the first-in-class Btk inhibitor ibrutinib. Safety and efficacy of the compound were then evaluated in two xenograft mouse models of B cell lymphoma. Results: IQS019 simultaneously engaged a rapid and dose-dependent de-phosphorylation of both constitutive and IgM-activated Syk, Lyn, and Btk, leading to impaired cell proliferation, reduced CXCL12-dependent cell migration, and induction of caspase-dependent apoptosis. Accordingly, B cell lymphoma-bearing mice receiving IQS019 presented a reduced tumor outgrowth characterized by a decreased mitotic index and a lower infiltration of malignant cells in the spleen, in tight correlation with downregulation of phospho-Syk, phospho-Lyn, and phospho-Btk. More interestingly, IQS019 showed improved efficacy in vitro and in vivo when compared to the first-in-class Btk inhibitor ibrutinib, and was active in cells with acquired resistance to the latter. Conclusions: These results define IQS019 as a potential drug candidate for a variety of B lymphoid neoplasms, including cases with acquired resistance to current BCR-targeting therapies. [ABSTRACT FROM AUTHOR]
34. The identification of cases of major hemorrhage during hospitalization in patients with acute leukemia using routinely recorded healthcare data. [2018]
Kreuger, Aukje L., Middelburg, Rutger A., Beckers, Erik A. M., de Vooght, Karen M. K., Zwaginga, Jaap Jan, Kerkhoffs, Jean-Louis H., and van der Bom, Johanna G.
ELECTRONIC health records, ACUTE leukemia, HEMORRHAGE, HEMOGLOBINS, and BLOOD transfusion
Introduction: Electronic health care data offers the opportunity to study rare events, although detecting these events in large datasets remains difficult. We aimed to develop a model to identify leukemia patients with major hemorrhages within routinely recorded health records. Methods: The model was developed using routinely recorded health records of a cohort of leukemia patients admitted to an academic hospital in the Netherlands between June 2011 and December 2015. Major hemorrhage was assessed by chart review. The model comprised CT-brain, hemoglobin drop, and transfusion need within 24 hours for which the best discriminating cut off values were taken. External validation was performed within a cohort of two other academic hospitals. Results: The derivation cohort consisted of 255 patients, 10,638 hospitalization days, of which chart review was performed for 353 days. The incidence of major hemorrhage was 0.22 per 100 days in hospital. The model consisted of CT-brain (yes/no), hemoglobin drop of ≥0.8 g/dl and transfusion of ≥6 units. The C-statistic was 0.988 (CI 0.981–0.995). In the external validation cohort of 436 patients (19,188 days), the incidence of major hemorrhage was 0.46 per 100 hospitalization days and the C-statistic was 0.975 (CI 0.970–0.980). Presence of at least one indicator had a sensitivity of 100% (CI 95.8–100) and a specificity of 90.7% (CI 90.2–91.1). The number of days to screen to find one case decreased from 217.4 to 23.6. Interpretation: A model based on information on CT-brain, hemoglobin drop and need of transfusions can accurately identify cases of major hemorrhage within routinely recorded health records. [ABSTRACT FROM AUTHOR]
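The screening model described above is essentially a three-indicator rule, which the following hypothetical snippet makes explicit (field names are invented; the thresholds are the ones quoted in the abstract).

# Illustrative only: flag a hospitalization day for chart review when at least
# one indicator is present -- a CT-brain performed, a hemoglobin drop of
# >= 0.8 g/dl, or >= 6 transfusion units within 24 hours.
from dataclasses import dataclass

@dataclass
class HospitalDay:
    ct_brain_performed: bool      # any CT-brain on this day (yes/no)
    hemoglobin_drop_g_dl: float   # largest drop within 24 h
    transfusion_units_24h: int    # units transfused within 24 h

def flag_possible_major_hemorrhage(day: HospitalDay) -> bool:
    """Return True if the day should be chart-reviewed for major hemorrhage."""
    return (
        day.ct_brain_performed
        or day.hemoglobin_drop_g_dl >= 0.8
        or day.transfusion_units_24h >= 6
    )

print(flag_possible_major_hemorrhage(HospitalDay(False, 1.1, 2)))  # True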
35. Sintomatología sugestiva de vejiga hiperactiva: prevalencia y factores de riesgo asociados. Resultados del estudio PREVEGIN [Symptoms suggestive of overactive bladder: prevalence and associated risk factors. Results of the PREVEGIN study]. [2012]
Usandizaga Elio, R., Puch, M., Pastrana, J. L., Sánchez Quintana, M. ªD., and González Salmerón, M. ªD.
Suelo Pélvico; 2012, Vol. 8 Issue 3, p56-63, 8p
OVERACTIVE bladder, DISEASE prevalence, BLADDER diseases, PROGNOSTIC tests, DIAGNOSIS of diseases in women, DIAGNOSIS, and DISEASE risk factors
36. ESCALA FUNCIONAL DE INCAPACIDADE DO PESCOÇO DE COPENHAGEN: TRADUÇÃO E ADAPTAÇÃO CULTURAL PARA O PORTUGUÊS BRASILEIRO [Copenhagen Neck Functional Disability Scale: translation and cross-cultural adaptation into Brazilian Portuguese]. [2014]
Righi Badaró, Flávia Azevedo, Araújo, Rubens Corrêa, and Behlau, Mara
Revista Brasileira de Crescimento e Desenvolvimento Humano; 2014, Vol. 24 Issue 3, p1-9, 9p
38. Total knee replacement: Are there any baseline factors that have influence in patient reported outcomes? [2017]
Escobar, A., García Pérez, L., Herrera‐Espiñeira, C., Aizpuru, F., Sarasqueta, C., Gonzalez Sáenz de Tejada, M., Quintana, J.M., and Bilbao, A.
Journal of Evaluation in Clinical Practice; Dec2017, Vol. 23 Issue 6, p1232-1239, 8p
FACTOR analysis, JOINT diseases, LONGITUDINAL method, MEDICAL cooperation, MENTAL health, HEALTH outcome assessment, POSTOPERATIVE period, QUESTIONNAIRES, RESEARCH, STATISTICS, TOTAL knee replacement, PAIN measurement, PATIENTS' attitudes, and FUNCTIONAL assessment
Background There is conflicting evidence about what factors influence outcomes after total knee replacement (TKR). The objective is to identify baseline factors that differentiate patients who achieve both minimal clinically important difference (MCID) and a patient acceptable symptom state (PASS) in pain and function, measured by WOMAC, after TKR from those who do not attain scores above the cutoff in either of these dimensions. Methods One-year prospective multicentre study. Patients completed WOMAC, SF-12, EQ-5D, expectations, other joint problems and sociodemographic data while on the waiting list, and 1-year post-TKR. Dependent variable was a combination of MCID and PASS in both dimensions (yes/no). Univariate analysis was performed to identify variables associated. Exploratory factor analysis (EFA) was performed to study how these variables grouped into different factors. Results Total sample comprised 492 patients. Mean (SD) age was 71.3 (6.9), and 69.7% were women. Of the total, 106 patients did not attain either MCID or PASS in either dimension, and 230 exceeded both thresholds in both dimensions. In the univariate analysis, 13 variables were associated with belonging to one group or another. These 13 variables were included in EFA; 3 factors were extracted: expectations, mental health, and other joint problems. The percentage of variance explained by the 3 factors was 80.4%. Conclusion We have found 2 modifiable baseline factors, expectations and mental health, that should be properly managed by different specialists. Indication of TKR should take into account these modifiable factors for improving outcomes after TKR. [ABSTRACT FROM AUTHOR]
39. Which patient-reported factors predict referral to spinal surgery? A cohort study among 4987 chronic low back pain patients. [2017]
Dongen, Johanna, Hooff, Miranda, Spruit, Maarten, Kleuver, Marinus, Ostelo, Raymond, van Dongen, Johanna M, van Hooff, Miranda L, de Kleuver, Marinus, and Ostelo, Raymond W J G
European Spine Journal; Nov2017, Vol. 26 Issue 11, p2782-2788, 7p
SPINAL surgery, BACKACHE, LOGISTIC regression analysis, SOMATIZATION disorder, HOSPITAL records, CHRONIC pain, LONGITUDINAL method, MEDICAL referrals, SELF-evaluation, and LUMBAR pain
Purpose: It is unknown which chronic low back pain (CLBP) patients are typically referred to spinal surgery. The present study, therefore, aimed to explore which patient-reported factors are predictive of spinal surgery referral among CLBP patients. Methods: CLBP patients were consecutively recruited from a Dutch orthopedic hospital specialized in spine care (n = 4987). The outcome of this study was referral to spinal surgery (yes/no), and was assessed using hospital records. Possible predictive factors were assessed using a screening questionnaire. A prediction model was constructed using logistic regression, with backwards selection and p < 0.10 for keeping variables in the model. The model was internally validated and evaluated using discrimination and calibration measures. Results: Female gender, previous back surgery, high intensity leg pain, somatization, and positive treatment expectations increased the odds of being referred to spinal surgery, while being obese, having comorbidities, pain in the thoracic spine, increased walking distance, and consultation location decreased the odds. The model's fit was good (χ2 = 10.5; p = 0.23), its discriminative ability was poor (AUC = 0.671), and its explained variance was low (5.5%). A post hoc analysis indicated that consultation location was significantly associated with spinal surgery referral, even after correcting for case-mix variables. Conclusion: Some patient-reported factors could be identified that are predictive of spinal surgery referral. Although the identified factors are known as common predictive factors of surgery outcome, they could only partly predict spinal surgery referral. [ABSTRACT FROM AUTHOR]
40. New feed ingredients: the insect opportunity. [2017]
van Raamsdonk, L. W. D., van der Fels-Klerx, H. J., and de Jong, J.
Food Additives & Contaminants. Part A: Chemistry, Analysis, Control, Exposure & Risk Assessment; Aug2017, Vol. 34 Issue 8, p1384-1397, 14p, 4 Charts, 1 Graph
COMPOSITION of feeds, INSECT proteins, FOOD chains, SUSTAINABILITY, FEED utilization efficiency, ANIMAL feeds, and SAFETY
In the framework of sustainability and a circular economy, new ingredients for feed are desired and, to this end, initiatives for implementing such novel ingredients have been started. The initiatives include a range of different sources, of which insects are of particular interest. Within the European Union, generally, a new feed ingredient should comply with legal constraints in terms of 'yes, provided that' its safety commits to a range of legal limits for heavy metals, mycotoxins, pesticides, contaminants, pathogens etc. In the case of animal proteins, however, a second legal framework applies which is based on the principle 'no, unless'. This legislation for eradicating transmissible spongiform encephalopathy consists of prohibitions with a set of derogations applying to specific situations. Insects are currently considered animal proteins. The use of insect proteins is a good case to illustrate this difference between a positive, although restricted, modus and a negative modus for allowing animal proteins. This overview presents aspects in the areas of legislation, feed safety, environmental issues, efficiency and detection of the identity of insects. Use of insects as an extra step in the feed production chain costs extra energy and this results in a higher footprint. A measure for energy conversion should be used to facilitate the comparison between production systems based on cold- versus warm-blooded animals. Added value can be found by applying new commodities for rearing, including but not limited to category 2 animal by-products, catering and household waste including meat, and manure. Furthermore, monitoring of a correct use of insects is one possible approach for label control, traceability and prevention of fraud. The link between legislation and enforcement is strong. A principle called WISE (Witful, Indicative, Societal demands, Enforceable) is launched for governing the relationship between the above-mentioned aspects. [ABSTRACT FROM AUTHOR]
41. Interobserver reliability in the interpretation of three-dimensional gait analysis in children with gait disorders. [2019]
Wang, Kemble K, Stout, Jean L, Ries, Andrew J, and Novacheck, Tom F
Developmental Medicine & Child Neurology; Jun2019, Vol. 61 Issue 6, p710-716, 7p
GAIT disorders, ANATOMICAL planes, CEREBRAL palsy, THERAPEUTICS, and INTER-observer reliability
42. Acupuncture and standard emergency department care for pain and/or nausea and its impact on emergency care delivery: a feasibility study. [2014]
Zhang, Anthony L., Parker, Shefton J., Smit, De Villiers, Taylor, David McD., and Xu, Charlie C. L.
Acupuncture in Medicine; Jun2014, Vol. 32 Issue 3, p250-256, 7p, 2 Diagrams, 1 Chart
NAUSEA, PAIN, PREVENTIVE medicine, ACUPUNCTURE, CHI-squared test, EMERGENCY medical services, HOSPITAL emergency services, PATIENT satisfaction, PATIENT safety, RESEARCH funding, T-test (Statistics), U-statistics, PILOT projects, PATIENT refusal of treatment, VISUAL analog scale, DATA analysis software, DESCRIPTIVE statistics, and PREVENTION
Objective: To evaluate the feasibility of delivering acupuncture in an emergency department (ED) to patients presenting with pain and/or nausea. Methods: A feasibility study (with historical controls) undertaken at the Northern Hospital ED in Melbourne, Australia, involving people presenting to ED triage with pain (VAS 0–10) and/or nausea (Morrow Index 1–6) between January and August 2010 (n=400). The acupuncture group comprised 200 patients who received usual medical care and acupuncture; the usual care group comprised 200 patients with retrospective data closely matched from ED electronic health records. Results: Refusal rate was 31%, with 'symptoms under control owing to medical treatment before acupuncture' the most prevalent reason for refusal (n=36); 52.5% of participants responded 'definitely yes' for their willingness to repeat acupuncture, and a further 31.8% responded 'probably yes'. Over half (57%) reported a satisfaction score of 10 for acupuncture treatment. Musculoskeletal conditions were the most common conditions treated n=117 (58.5%), followed by abdominal or flank pain n=49 (24.5%). Adverse events were rare (2%) and mild. Pain and nausea scores reduced from a mean±SD of 7.01±2.02 before acupuncture to 4.72±2.62 after acupuncture and from 2.6±2.19 to 1.42±1.86, respectively. Conclusions: Acupuncture in the ED appears safe and acceptable for patients with pain and/or nausea. Results suggest combined care may provide effective pain and nausea relief in ED patients. Further high-quality, sufficiently powered randomised studies evaluating the cost-effectiveness and efficacy of the add-on effect of acupuncture are recommended. [ABSTRACT FROM AUTHOR]
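For illustration, the before/after pain comparison reported above is the kind of contrast a paired t-test handles; the sketch below uses placeholder scores with roughly the quoted means, not the trial's data.

# Sketch only: paired comparison of pain VAS before vs after treatment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 200
before = np.clip(rng.normal(7.0, 2.0, n), 0, 10)           # VAS 0-10 before
after = np.clip(before - rng.normal(2.3, 1.5, n), 0, 10)   # VAS after

t_stat, p_value = stats.ttest_rel(before, after)
print(f"mean before {before.mean():.2f}, after {after.mean():.2f}, p = {p_value:.3g}")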
43. Oncology Nurse plus Peer Navigation: A Promising Model for Hispanic/Latina Women with Breast Cancer. [2020]
Saavedra Ferrer, E. L., Hine, W. L., Arellano, S. L., Ortega de Corona, P., Cardenas, M. C., and Vicuna Tellez, B.
Journal of Oncology Navigation & Survivorship; Nov2020, Vol. 11 Issue 11, p402-403, 2p
BREAST tumors, ONCOLOGY nursing, CONFERENCES & conventions, HISPANIC Americans, NURSES, AFFINITY groups, OCCUPATIONAL roles, and SOCIAL support
Background: Breast cancer is the most common cancer among New Mexico Hispanic women, and Hispanic women are less likely to be diagnosed with early-stage cancer compared with Anglo women.1 Hispanic/Latina women have lower mammography screening rates for breast cancer compared with Anglo women.2 Breast cancer screening disparities are persistent among Hispanic/Latina women. The Comadre a Comadre Program is a multilevel, community-based, peer-led, culturally and linguistically competent intervention designed to improve the breast health and breast cancer outcomes among Hispanic/Latina women in New Mexico. It is important to explore the effectiveness of tailored navigation models for specific populations and settings. Oncology nurse navigation in cancer care is a vital component of patient care. The nurse navigator serves as a clinical resource with expertise in oncology care management.3-7 Underserved women with cancer face additional barriers, which are outside the domain of the medical facility setting and the clinical aspects of cancer. These barriers, or social determinants of health, can be cultural (eg, privacy norms that discourage discussion about their bodies), logistical (eg, transportation to hospitals), language-based (eg, lack of understanding despite the use of medical interpreters), and emotionally based (eg, a fatalistic view of cancer).8 Community-based peer navigation, when implemented in conjunction with clinic-based oncology nurse navigation, could be a promising navigation model.9-12 Objectives: * To examine the types of practical, structural, and nonclinical support provided to the women by lay peer navigators (community-based) * To examine the clinical aspects of the role of the oncology nurse navigator in this cancer setting * To examine the characteristics of women who most benefited from navigation efforts Method: Peer navigators completed 2 different types of tracking data forms following individual encounters with participants. The Clinic Tracking and the Non-Clinic Tracking forms are used when the peer navigator either meets the participant at the clinic for her medical appointment (clinic form) or conducts other navigation on her behalf, either face-to-face or by telephone (non-clinic form). The forms track dichotomous variables, recorded by checking yes or no on the form. The Clinic Tracking form variables include providing emotional support at the medical appointment, accessing types of cancer services (medical language interpreter, social worker, etc). The Non-Clinic Tracking form variables include advocacy, program support, navigation to other agencies, and follow-up with the patient. Participants also complete a project-developed demographic survey. Quantitative data collected from peer navigators (tracking form data and demographic data) for the period 2018 to 2020 will be the focus of the analysis. Five to 10 encounters per woman, for 100 women, will be analyzed. The oncology nurse navigator completes referral forms for participants in the program. Reason(s) provided for referrals will be examined qualitatively, analyzing emergent themes and core competencies of oncology nurse and lay patient navigators. Results: Results from this analysis will include demographic data that include variables such as income, self-identify, education, etc. In an earlier analysis (2014) of the Comadre Program, we found emotional support, financial navigation, and language access to be the most frequent types of support provided by the navigators.
We anticipate we will find similar patterns in these data. We are currently in the final stages of analysis for the period of 2018 to 2020. Conclusion: The findings gleaned from this analysis will show that the types of support provided by these 2 navigator types (lay and nurse oncology navigation) can work in a complementary manner and be effective in improving cancer care for Hispanic/Latina women. [ABSTRACT FROM AUTHOR]
44. INCIDENCIA DE LA POSICION DE LOS FRUTOS EN EL RACIMO DE PLATANO EN DESHIDRATACIÓN OSMÓTICA Y FRITURA [Influence of fruit position within the plantain bunch on osmotic dehydration and frying]. [2011]
Torres Mora, Ana Marin, Duran, Igor Pérez, Castillo Vicuacha, Karina L., Saa, Eduardo Álvarez, and Alberto, Díaz Ortiz
Ingeniería de Recursos Naturales y del Ambiente; ene-dic2011, Issue 10, p101-108, 8p
BANANA varieties, FRUIT drying, OSMOTIC potential of plants, DEEP frying, MOISTURE content of plants, and FRUIT quality
45. Estrogenic activity, selected plasticizers and potential health risks associated with bottled water in South Africa. [2018]
Aneck-Hahn, Natalie H., Van Zijl, Magdalena C., Swart, Pieter, Truebody, Barry, Genthe, Bettina, Charmier, Jessica, and De Jager, Christiaan
Journal of Water & Health; 2018, Vol. 16 Issue 2, p253-262, 10p
ENDOCRINE disruptors, BOTTLED water, BISPHENOL A, POLYETHYLENE terephthalate, and HEALTH risk assessment
Potential endocrine disrupting chemicals (EDCs) are present in bottled water from various countries. In South Africa (SA), increased bottled water consumption and concomitant increases in plastic packaging create important consequences for public health. This study aimed to screen SA bottled water for estrogenic activity and selected target chemicals, and to assess potential health risks. Ten bottled water brands were exposed to 20 °C and 40 °C over 10 days. Estrogenic activity was assessed using the recombinant yeast estrogen screen (YES) and the T47D-KBluc reporter gene assay. Solid phase extracts of samples were analyzed for bis(2-ethylhexyl) adipate (DEHA), selected phthalates, bisphenol-A (BPA), 4-nonylphenol (4-NP), 17β-estradiol (E2), estrone (E1), and ethynylestradiol (EE2) using gas chromatography-mass spectrophotometry. Using a scenario-based health risk assessment, human health risks associated with bottled water consumption were evaluated. Estrogenic activity was detected at 20 °C (n = 2) and at 40 °C (n = 8). Estradiol equivalent (EEq) values ranged from 0.001 to 0.003 ng/L. BPA concentrations ranged from 0.9 ng/L to 10.06 ng/L. Although EEqs and BPA concentrations were higher in bottled water stored at 40 °C compared to 20 °C, samples posed an acceptable risk for a lifetime of exposure. Irrespective of temperature, bottled water from SA contained chemicals with acceptable health risks. [ABSTRACT FROM AUTHOR]
46. Post-natal erythromycin exposure and risk of infantile hypertrophic pyloric stenosis: a systematic review and meta-analysis. [2016]
Murchison, L., Coppi, P., Eaton, S., and De Coppi, P
Pediatric Surgery International; Dec2016, Vol. 32 Issue 12, p1147-1152, 6p
HYPERTROPHIC pyloric stenosis, ERYTHROMYCIN, SYSTEMATIC reviews, META-analysis, CHILD patients, and DISEASE risk factors
Purpose: Macrolide antibiotics, erythromycin, in particular, have been linked to the development of infantile hypertrophic pyloric stenosis (IHPS). Our aim was to conduct a systematic review of the evidence of whether post-natal erythromycin exposure is associated with subsequent development of IHPS. Methods: A systematic review of postnatal erythromycin administration and IHPS was performed. Papers were included if data were available on development (yes/no) of IHPS in infants exposed/unexposed to erythromycin. Data were meta-analysed using Review Manager 5.3. A random effects model was decided on a priori due to heterogeneity of study design; data are odds ratio (OR) with 95% CI. Results: Nine papers reported data suitable for analysis; two randomised controlled trials and seven retrospective studies. Overall, erythromycin exposure was significantly associated with development of IHPS [OR 2.45 (1.12-5.35), p = 0.02]. However, significant heterogeneity existed between the studies (I² = 84%, p < 0.0001). Data on erythromycin exposure in the first 14 days of life was extracted from 4/9 studies and identified a strong association between erythromycin exposure and subsequent development of IHPS [OR 12.89 (7.67-21.67), p < 0.00001]. Conclusion: This study demonstrates a significant association between post-natal erythromycin exposure and development of IHPS, which seems stronger when exposure occurs in the first 2 weeks of life. [ABSTRACT FROM AUTHOR]
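The pooled odds ratio and I² quoted above come from a random-effects meta-analysis (the authors used Review Manager 5.3). As a rough illustration of the underlying DerSimonian-Laird computation, here is a short sketch on placeholder study data, not the review's extracted numbers.

# Sketch of DerSimonian-Laird random-effects pooling of study-level odds ratios.
import numpy as np

def random_effects_pooled_or(or_values, ci_low, ci_high):
    y = np.log(np.asarray(or_values, dtype=float))          # log odds ratios
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)    # SE from 95% CIs
    v = se ** 2
    w = 1 / v                                               # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                      # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0      # heterogeneity fraction
    w_star = 1 / (v + tau2)                                 # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se_pooled = np.sqrt(1 / np.sum(w_star))
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    return np.exp(pooled), ci, i2

# Placeholder studies (OR, lower CI, upper CI):
print(random_effects_pooled_or([1.8, 3.2, 1.1], [0.9, 1.5, 0.6], [3.6, 6.8, 2.0]))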
47. HIPPO-Integrin-linked Kinase Cross-Talk Controls Self-Sustaining Proliferation and Survival in Pulmonary Hypertension. [2016]
Kudryashova, Tatiana V., Goncharov, Dmitry A., Pena, Andressa, Kelly, Neil, Vanderpool, Rebecca, Baust, Jeff, Kobir, Ahasanul, Shufesky, William, Mora, Ana L., Morelli, Adrian E., Jing Zhao, Ihida-Stansbury, Kaori, Baojun Chang, De Lisser, Horace, Tuder, Rubin M., Kawut, Steven M., Silljé, Herman H. W., Shapiro, Steven, Yutong Zhao, and Goncharova, Elena A.
American Journal of Respiratory & Critical Care Medicine; 10/1/2016, Vol. 194 Issue 7, p866-877, 12p, 8 Graphs
Rationale: Enhanced proliferation and impaired apoptosis of pulmonary arterial vascular smooth muscle cells (PAVSMCs) are key pathophysiologic components of pulmonary vascular remodeling in pulmonary arterial hypertension (PAH). Objectives: To determine the role and therapeutic relevance of HIPPO signaling in PAVSMC proliferation/apoptosis imbalance in PAH. Methods: Primary distal PAVSMCs, lung tissue sections from unused donor (control) and idiopathic PAH lungs, and rat and mouse models of SU5416/hypoxia-induced pulmonary hypertension (PH) were used. Immunohistochemical, immunocytochemical, and immunoblot analyses and transfection, infection, DNA synthesis, apoptosis, migration, cell count, and protein activity assays were performed in this study. Measurements and Main Results: Immunohistochemical and immunoblot analyses demonstrated that the HIPPO central component large tumor suppressor 1 (LATS1) is inactivated in small remodeled pulmonary arteries (PAs) and distal PAVSMCs in idiopathic PAH. Molecular- and pharmacology-based analyses revealed that LATS1 inactivation and consequent up-regulation of its reciprocal effector Yes-associated protein (Yap) were required for activation of mammalian target of rapamycin (mTOR)-Akt, accumulation of HIF1α, Notch3 intracellular domain and β-catenin, deficiency of proapoptotic Bim, increased proliferation, and survival of human PAH PAVSMCs. LATS1 inactivation and up-regulation of Yap increased production and secretion of fibronectin that up-regulated integrin-linked kinase 1 (ILK1). ILK1 supported LATS1 inactivation, and its inhibition reactivated LATS1, down-regulated Yap, suppressed proliferation, and promoted apoptosis in PAH, but not control PAVSMCs. PAVSM in small remodeled PAs from rats and mice with SU5416/hypoxia-induced PH showed down-regulation of LATS1 and overexpression of ILK1. Treatment of mice with selective ILK inhibitor Cpd22 at Days 22-35 of SU5416/hypoxia exposure restored LATS1 signaling and reduced established pulmonary vascular remodeling and PH. Conclusions: These data report inactivation of HIPPO/LATS1, self-supported via Yap-fibronectin-ILK1 signaling loop, as a novel mechanism of self-sustaining proliferation and apoptosis resistance of PAVSMCs in PAH and suggest a new potential target for therapeutic intervention. [ABSTRACT FROM AUTHOR]
48. Antibiotic Treatment for First Episode of Acute Otitis Media Is Not Associated with Future Recurrences. [2016]
te Molder, Marthe, de Hoog, Marieke L. A., Uiterwaal, Cuno S. P. M., van der Ent, Cornelis K., Smit, Henriette A., Schilder, Anne G. M., Damoiseaux, Roger A. M. J., and Venekamp, Roderick P.
ACUTE otitis media, ANTIBIOTICS, DISEASE relapse, DRUG efficacy, DRUG prescribing, and THERAPEUTICS
Objective: Antibiotic treatment of acute otitis media (AOM) has been suggested to increase the risk of future AOM episodes by causing unfavorable shifts in microbial flora. Because current evidence on this topic is inconclusive and long-term follow-up data are scarce, we wanted to estimate the effect of antibiotic treatment for a first AOM episode occurring during infancy on AOM recurrences and AOM-related health care utilization later in life. Methods: We obtained demographic information and risk factors from data of the Wheezing Illnesses Study Leidsche Rijn, a prospective birth cohort study in which all healthy newborns born in Leidsche Rijn (between 2001 and 2012), The Netherlands, were enrolled. These data were linked to children's primary care electronic health records up to the age of four. Children with at least one family physician-diagnosed AOM episode before the age of two were included in analyses. The exposure of interest was the prescription of oral antibiotics (yes vs no) for a child's first AOM episode before the age of two years. Results: 848 children were included in analyses and 512 (60%) children were prescribed antibiotics for their first AOM episode. Antibiotic treatment was not associated with an increased risk of total AOM recurrences (adjusted rate ratio: 0.94, 95% CI: 0.78–1.13), recurrent AOM (≥3 episodes in 6 months or ≥4 in one year; adjusted risk ratio: 0.79, 95% CI: 0.57–1.11), or with increased AOM-related health care utilization during children's first four years of life. Conclusions: Oral antibiotic treatment of a first AOM episode occurring during infancy does not affect the number of AOM recurrences and AOM-related health care utilization later in life. This information can be used when weighing the pros and cons of various AOM treatment options. [ABSTRACT FROM AUTHOR]
49. Four-month metacarpal bone mineral density loss predicts radiological joint damage progression after 1 year in patients with early rheumatoid arthritis: exploratory analyses from the IMPROVED study. [2015]
Boer, K V C Wevers-de, Heimans, L, Visser, K, Kälvesten, J, Goekoop, R J, van Oosterhout, M, Harbers, J B, Bijkerk, C, Steup-Beekman, M, de Buck, M P D M, de Sonnaville, P B J, Huizinga, T W J, Allaart, C F, and Wevers-de Boer, K V C
Annals of the Rheumatic Diseases; Feb2015, Vol. 74 Issue 2, p341-346, 6p
Aim: To assess whether in early (rheumatoid) arthritis (RA) patients, metacarpal bone mineral density (BMD) loss after 4 months predicts radiological progression after 1 year of antirheumatic treatment. Methods: Metacarpal BMD was measured 4 monthly during the first year by digital X-ray radiogrammetry (DXR-BMD) in patients participating in the IMPROVED study, a clinical trial in 610 patients with recent onset RA (2010 criteria) or undifferentiated arthritis, treated according to a remission (disease activity score<1.6) steered strategy. With Sharp/van der Heijde progression ≥0.5 points after 1 year (yes/no) as dependent variable, univariate and multivariate logistic regression analyses were performed. Results: Of 428 patients with DXR-BMD results and progression scores available, 28 (7%) had radiological progression after 1 year. Independent predictors for radiological progression were presence of baseline erosions (OR (95% CI) 6.5 (1.7 to 25)) and early DXR-BMD loss (OR (95% CI) 1.5 (1.1 to 2.0)). In 366 (86%) patients without baseline erosions, early DXR-BMD loss was the only independent predictor of progression (OR (95% CI) 2.0 (1.4 to 2.9)). Conclusions: In early RA patients, metacarpal BMD loss after 4 months of treatment is an independent predictor of radiological progression after 1 year. In patients without baseline erosions, early metacarpal BMD loss is the main predictor of radiological progression. [ABSTRACT FROM AUTHOR]
50. Four-month metacarpal bone mineral density loss predicts radiological joint damage progression after 1 year in patients with early rheumatoid arthritis: exploratory analyses from the IMPROVED study. [2015]
Wevers-de Boer, K. V. C., Heimans, L., Visser, K., Kälvesten, J., Goekoop, R. J., van Oosterhout, M., Harbers, J. B., Bijkerk, C., Steup-Beekman, M., de Buck, M. P. D. M., de Sonnaville, P. B. J., Huizinga, T. W. J., and Allaart, C. F.
Annals of the Rheumatic Diseases; Feb2015, Vol. 74 Issue 2, p341-346, 6p, 6 Charts
Aim To assess whether in early (rheumatoid) arthritis (RA) patients, metacarpal bone mineral density (BMD) loss after 4 months predicts radiological progression after 1 year of antirheumatic treatment. Methods Metacarpal BMD was measured 4 monthly during the first year by digital X-ray radiogrammetry (DXR-BMD) in patients participating in the IMPROVED study, a clinical trial in 610 patients with recent onset RA (2010 criteria) or undifferentiated arthritis, treated according to a remission (disease activity score<1.6) steered strategy. With Sharp/van der Heijde progression >0.5 points after 1 year (yes/no) as dependent variable, univariate and multivariate logistic regression analyses were performed. Results Of 428 patients with DXR-BMD results and progression scores available, 28 (7%) had radiological progression after 1 year. Independent predictors for radiological progression were presence of baseline erosions (OR (95% CI) 6.5 (1.7 to 25)) and early DXR-BMD loss (OR (95% CI) 1.5 (1.1 to 2.0)). In 366 (86%) patients without baseline erosions, early DXR-BMD loss was the only independent predictor of progression (OR (95% CI) 2.0 (1.4 to 2.9)). Conclusions In early RA patients, metacarpal BMD loss after 4 months of treatment is an independent predictor of radiological progression after 1 year. In patients without baseline erosions, early metacarpal BMD loss is the main predictor of radiological progression. [ABSTRACT FROM AUTHOR]
Search "Yes de l" in all guide pages | CommonCrawl |
Work Problems
Thread starter blazerqb11
blazerqb11
I have a question about work integrals. I'm trying to reconcile using integrals to essentially multiply force by distance, but the fact that there appear to be multiple different types of problems that seem to be fundamentally different is making it difficult. Here are some example problems:
Example 1.
A cable that weighs 8 lb/ft is used to lift 650 lb of coal up a mine shaft 700 ft deep. Find the work done.
Example 2.
A cylindrically shaped tube has a circular base with a diameter of 2 inches and a height of 12 inches. The bottom of the tube is closed. The tube contains a liquid which is 3 inches deep and has a weight density of 62 lbs per ft^3. What is the work done in pumping the liquid to the top of the tube?
Example 3.
A tank is full of water. Find the work W required to pump the water out of the spout. (Use 9.8 m/s^2 for g. Use 1000 kg/m^3 as the weight density of water.)
In the first example you find an equation for the force that is done and then integrate with bounds over the distance moved.
In the second example however, you find an equation for the distance a layer is moved, with a constant force multiplied in, and you integrate with bounds from one end of the substance to the other.
The third example seems to combine these two issues into one problem. In this case, the force equation and the distance equation are multiplied together, but the bounds of integration are still over the substance moved and not the distance.
If someone could explain the differences in these problems, and what exactly the integration is adding up, I would much appreciate it. Specifically I would like to know why the first one integrates with bounds over the distance moved, and doesn't seem to include an equation for the distance in the integral (I'm guessing these are related, but I can't quite put my finger on exactly why), and why the second two examples integrate over the bounds of the substance moved instead.
Welcome to MHB! Thank you for posting such a well-worded problem.
Here's my answer: the work done by a force $ \mathbf{F}= \mathbf{F}( \mathbf{x})$ is given by
$$W= \int_{ \mathbf{a}}^{ \mathbf{b}} \mathbf{F} \cdot d\mathbf{l}.$$
This is a line integral over a particular path. So from this integral, you can see what needs to happen if the force is changing (such as it would for a spring, or for an irregular tank). But now suppose you can't ignore differences in path for different parts of your problem. You would then have to consider that the previous equation is only an infinitesimal chunk of work done by moving an infinitesimal chunk of stuff through its (now single) path:
$$dW= d \left( \int_{ \mathbf{a}}^{ \mathbf{b}} \mathbf{F} \cdot d\mathbf{l} \right).$$
Now, to find the total work done, you must integrate this expression:
$$W=\int_{\text{each path}}d \left( \int_{ \mathbf{a}}^{ \mathbf{b}} \mathbf{F} \cdot d\mathbf{l} \right).$$
This is what is really happening with your work problems.
One comment about your third problem: it will definitely have to have a distance moved in there somewhere. What expression do you get for the work done?
OK, I think I get some of what you are saying, but I'm still not totally clear. It is sort of like you are doing two integrals at the same time so that you can add up the two different quantities? What exactly is a line integral? Is the dl in the examples representing distance or is it the differential?
For example 3 I would integrate as follows:
\(\displaystyle (9.8)(1000)\int_0^3 8x(5-x) \, dx\)

\(\displaystyle (g)(\text{density of water}) \int_{\text{lower limit of substance}}^{\text{upper limit of substance}} (\text{area of a "layer"})(\text{distance the layer is moved}) \, dx\)
What it seems like this integral is doing is integrating to find the volume of the water and then multiplying that by density and gravity to give force and then multiplying by the distance. This must not be exactly right, though. I see that each layer has its own path to travel, but I don't really get how this is added up in the integral.
Whereas example 1:
\(\displaystyle \int_0^{700} (8x +650) \, dx\)
\(\displaystyle \int_{\text{starting point}}^{\text{upper limit of movement}} (\text{the force on the rope, not exactly sure why x is included}) + (\text{force on the coal}) \, dx\)
It is much less visible to me what this integral does.
In case anyone is wondering, I did finally find out how these problems are different. Some are using the definition of work as \(\displaystyle \int F \cdot \, ds\) whereas others, such as the tank problem, are using the definition of work as the potential energy change, i.e. \(\displaystyle E_f - E_i =\) Work, or \(\displaystyle mgh_f - mgh_i\). In the latter case the integral is adding up the work done in moving each layer from its initial height to its final height.
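To see that the two bookkeepings agree on a concrete case, here is a minimal Python sketch for Example 1 (SymPy is assumed to be available; the numbers come straight from the problem statement):

import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)

# Viewpoint 1: integrate force over the distance moved.
# With x ft of cable still hanging, the force is 650 lb of coal plus 8*x lb of cable.
W_force = sp.integrate(650 + 8*x, (x, 0, 700))

# Viewpoint 2: lift each piece through its own height (potential-energy bookkeeping).
# The coal rises 700 ft; the cable slice initially at depth y rises y ft.
W_energy = 650*700 + sp.integrate(8*y, (y, 0, 700))

print(W_force, W_energy)  # both give 2415000 ft-lb

Either way the answer is 2,415,000 ft-lb, which is why the two problem types look different but are consistent.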
blazerqb11 said:
Right. Well, one of your fundamental physics laws is the Work-Energy Theorem:
$$W=\Delta PE + \Delta KE.$$
If your kinetic energy doesn't change, then the work you put into the system must increase the potential energy.
DOI:10.1515/jgt-2012-0008
Degrees, class sizes and divisors of character values
@article{Gallagher2012DegreesCS,
  title={Degrees, class sizes and divisors of character values},
  author={Patrick X. Gallagher},
  journal={Journal of Group Theory},
  year={2012}
}
P. Gallagher
Published 1 July 2012
Journal of Group Theory
Abstract. In the character table of a finite group there is a tendency either for the character degree to divide the conjugacy class size or the character value to vanish. There is also a partial divisibility where the determinant of the character is not 1. There are versions of these depending on a subgroup, based on an arithmetic property of spherical functions which generalizes the integrality of the values of the characters and the central characters.
On parity and characters of symmetric groups
Alexander R. Miller
J. Comb. Theory, Ser. A
Zeros and roots of unity in character tables
For any finite group G, Thompson proved that, for each χ ∈ Irr(G), χ(g) is a root of unity or zero for more than a third of the elements g ∈ G, and Gallagher proved that, for each larger than…
The sparsity of character tables of high rank groups of Lie type
M. Larsen, Alexander R. Miller
Representation Theory of the American Mathematical Society
In the high rank limit, the fraction of non-zero character table entries of finite simple groups of Lie type goes to zero.
Congruences in character tables of symmetric groups.
If $\lambda$ and $\mu$ are two non-empty Young diagrams with the same number of squares, and $\boldsymbol\lambda$ and $\boldsymbol\mu$ are obtained by dividing each square into $d^2$ congruent…
On roots of unity and character values
For any finite group $G$, Thompson proved that, for each $\chi\in {\rm Irr}(G)$, $\chi(g)$ is a root of unity or zero for more than a third of the elements $g\in G$, and Gallagher proved that, for…
Many Zeros of Many Characters of GL(n,q)
P. Gallagher, M. Larsen, Alexander R. Miller
International Mathematics Research Notices
For $G=\textrm{GL}(n,q)$, the proportion $P_{n,q}$ of pairs $(\chi ,g)$ in $\textrm{Irr}(G)\times G$ with $\chi (g)\neq 0$ satisfies $P_{n,q}\to 0$ as $n\to \infty $.
Character Theory of Finite Groups
M. Lewis, G. Navarro, D. Passman, T. Wolf
1. (i) Suppose K is a conjugacy class of Sn contained in An; then K is called split if K is a union of two conjugacy classes of An. Show that the number of split conjugacy classes contained in An is…
Character sums and double cosets
I. Isaacs, G. Navarro
Orthogonality on cosets
R. Knörr
On the number of conjugacy classes of zeros of characters
A. Moretó, J. Sangroniz
Let m be a fixed non-negative integer. In this work we try to answer the following question: What can be said about a (finite) group G if all of its irreducible (complex) characters vanish on at most m…
Orders of elements and zeros and heights of characters in a finite group
Tom Wilde
Let $\chi$ be an irreducible character of the finite group $G$. If $g$ is an element of $G$ and $\chi(g)$ is not zero, then we conjecture that the order of $g$ divides $|G|/\chi(1)$. The conjecture is a…
Finite Group Elements where No Irreducible Character Vanishes
I. Isaacs, G. Navarro, T. Wolf
In this paper, we consider elements x of a finite group G with the property that χ(x) ≠ 0 for all irreducible characters χ of G. If G is solvable and x has odd order, we show that x must lie in the…
Irreducible Symmetric Group Characters of Rectangular Shape
R. Stanley
We give a new formula for the values of an irreducible character of the symmetric group S_n indexed by a partition of rectangular shape. Some observations and a conjecture are given concerning a…
Induction and Restriction of π-Special Characters
I. Isaacs
Canadian Journal of Mathematics
1. Introduction. The character theory of solvable groups has undergone significant development during the last decade or so and it can now be seen to have quite a rich structure. In particular, there…
The Stanley-Féray-Śniady formula for the generalized characters of the symmetric group
F. Scarabotti
We show that the explicit formula of Stanley-Féray-Śniady for the characters of the symmetric group has a natural extension to the generalized characters. These are the spherical functions of…
Elementary proof of Brauer's and Nesbitt's theorem on zeros of characters of finite groups
M. Leitz
The following has been proven by Brauer and Nesbitt. Let G be a finite group, and let p be a prime. Assume x is an irreducible complex character of G such that the order of a p-Sylow subgroup of G…
Power minimization for cooperative MIMO-OFDM systems with individual user rate constraints
Chih-yu Hsu1,
Phee Lep Yeoh1 &
Brian S. Krongold1
We propose a continuous rate and power allocation algorithm for multiuser downlink multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems with coordinated multipoint (CoMP) transmission that guarantees that individual rate targets are satisfied across all users. The optimization problem is formulated as a total transmit power minimization problem subject to per-user rate targets and per-antenna power constraints across multiple cooperating base stations. While the per-antenna power constraint leads to a more complex optimization problem, it is a practical consideration that limits the average transmit antenna power and helps to control the resulting high peak powers in OFDM. Our proposed algorithm uses successive convex approximation (SCA) to transform the non-convex power minimization problem and dynamically allocate power to co-channel user terminals. We prove that the transformed power minimization problem is convex and that our proposed SCA algorithm converges to a solution. The proposed algorithm is compared with two alternative approaches: (1) iterative waterfilling (IWF) and (2) zero-forcing beamforming (ZFB) with semi-orthogonal user selection. Simulation results highlight that the SCA algorithm outperforms IWF and ZFB in both medium- and low-interference environments.
Intercell interference (ICI) is a limiting factor on the throughput performance of downlink multiuser multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems. User terminals (UTs) located at the cell edge are particularly susceptible to interference from base stations (BSs) that are operating in proximity within the same frequency. In this paper, we consider the use of coordinated multipoint (CoMP) transmission with joint processing to mitigate the effect of ICI, which is a key technology in next-generation networks [1–3]. Joint processing is accomplished by sharing channel state information and user data between multiple BSs via a high-speed low-delay optical backhaul. In doing so, ICI can be mitigated by transmitting user data to a UT simultaneously from all the cooperating BSs [4, 5].
In addition to mitigating ICI using joint processing, resource allocation algorithms can be employed in conjunction with CoMP to achieve substantial improvements in multiuser MIMO system performance [6–8]. In [6], the system performance is improved by joint power allocation and linear precoding for multiuser MIMO systems with CoMP under per-antenna power constraints. In [7], the received signal-to-interference-plus-noise ratio for individual user is enhanced by adaptive nonlinear precoding and power allocation for CoMP systems with multiuser MIMO under total BS and per-BS power constraints. In [8], the joint linear precoding and power allocation for multiuser MIMO systems with CoMP are solved by convex optimization techniques under per-BS power constraints to improve the system performance.
The resource allocation problem for downlink MIMO-OFDM systems has been studied extensively for the single-user case [9, 10]. However, the optimization problem for multiuser MIMO-OFDM systems becomes mathematically challenging as the problem becomes non-convex in the presence of interference. As a result, obtaining a globally optimal solution is difficult to achieve. Dirty paper coding (DPC) was first proposed in [11] to achieve broadcast channel capacity for single-cell MIMO systems, and it was extended to solve the non-convex sum-rate maximization problem for multicell systems [12]. The DPC employs a nonlinear precoding scheme which presubtracts interference to achieve channel capacity. However, DPC requires high computational demands in successive encodings and decodings which makes it difficult to be implemented in practice.
Suboptimal strategies, such as iterative waterfilling (IWF) [13] and zero-forcing beamforming (ZFB) [14], have been proposed to solve the non-convex problem. The IWF approach in [13] treats interference as a channel noise component which transforms the optimization problem into a convex one. As a result, an equilibrium can be achieved by performing a competitive waterfilling-based algorithm iteratively across all UTs. The ZFB in [14] eliminates interference by employing zero-forcing beamformers. This allows powers to be allocated in interference-free OFDM subchannels via the waterfilling strategy. However, the performance of ZFB is limited by the number of transmit antennas and the mutual orthogonality of the UT channel gains. As a result, a semi-orthogonal user selection is proposed in [15] to select a subgroup of UTs that results in the lowest mutual interference.
In this paper, we introduce a new resource allocation algorithm for multiuser downlink MIMO-OFDM systems that guarantees to satisfy a minimum rate constraint for each UT. The algorithm aims to minimize total transmit power subject to per-UT rate targets and per-antenna power constraints. A similar optimization problem has been considered in [16] for multi-cell OFDMA networks and for MIMO broadcast channels in [17]. We focus on a centralized implementation of the proposed algorithm for the joint processing strategy in a multicell scenario. In doing so, we assume that perfect knowledge of all channel gains and user messages is shared via an optical backhaul which interconnects all the cooperating BSs to a central processor, as shown in Fig. 1. The proposed algorithm allocates powers to co-channel UTs in the presence of multiuser interference (MUI), which is formulated as an optimization problem. As a result, the optimization problem is non-convex and difficult to solve. To overcome this, we adopt the successive convex approximation (SCA)-based technique in [18] to transform the problem into a convex one. In [18], an SCA technique is developed for solving a non-convex dynamic spectrum management problem in digital subscriber line technology with crosstalk. The algorithm attempts to jointly optimize desired signal powers and interference powers through an iterative convex approximation procedure. The same technique has been adopted to solve resource allocation problems for both single-cell MIMO-OFDMA in [19] and multicell OFDMA in [20–22] wireless networks. The SCA approach has been demonstrated in [18] to outperform the IWF algorithm. The SCA approach allows us to obtain locally optimal solutions using the dual Lagrange decomposition technique with the aid of subgradient-based methods [23].
A downlink MIMO-OFDM system with M=2 CoMP base stations transmitting to K=2 user terminals and L T =L R =2
The main contributions of the paper are summarized as follows:
We establish an optimization approach for minimizing total transmit power while achieving per-UT rate targets. We perform eigenbeamforming on each MIMO-OFDM subchannel, with the aid of singular value decomposition, to obtain precoding and postprocessing matrices for the BS and UT, respectively.
We derive an iterative algorithm, which is based on the SCA approach in [18], to solve the non-convex power minimization problem in which a minimum rate target is achieved for each UT. A convex-equivalent optimization problem is obtained using the proposed iterative algorithm. In doing so, we provide a convexity proof for the transformed problem and we show that the proposed algorithm can converge to a unique solution.
We consider the per-antenna average transmit power constraint, which limits the average transmit antenna power. As a result, the high peak power of each transmit antenna can be indirectly constrained. This ensures that the peak power is limited at an acceptable level which does not exceed the dynamic range of a high-powered amplifier, thereby causing nonlinear transmission effects. The issue of high peak powers is often overlooked in resource allocation problems which only consider a total power constraint.
We compare our proposed algorithm with two other suboptimal algorithms IWF [24] and ZFB with semi-orthogonal user selection [15]. We adopt an empirical path loss model, the COST-231 Hata empirical model [25], to model various interference environments.
A much more complicated problem would be the joint adaptive beamforming design and power allocation with a minimum mean square error receiver used to suppress the inter-user interference. While this problem tends to be intractable, our proposed algorithm could be applied on top of a coordinated beamforming method across all cooperating base stations. Furthermore, the proposed SCA algorithm is suited to fixed-wireless applications in sparsely populated regions that require high UT data rates over large network areas. A prime example is the provision of wireless broadband in rural areas where the channel gains are quasi-stationary [26]. Our algorithm is also suitable for implementation in small cells with low user mobility.
The paper is organized as follows. The system model is introduced in Section 2. The formulation of total transmit power minimization problem is presented in Section 3. The fundamental of the SCA-based algorithm is outlined in Section 4. This section also includes the convexity proof for the convex-approximated optimization problem transformed by the proposed SCA algorithm and the convergence of the proposed algorithm. Section 5 presents the numerical results of the optimization problem. Concluding remarks are presented in Section 6.
System model

In this paper, we consider a downlink multiuser MIMO-OFDM system with N subchannels. The system consists of M cooperating BSs each with L T transmit antennas, as shown in Fig. 1. These BSs are interconnected by a high-speed optical backhaul for exchanging CSI and user data for joint processing. The optical backhaul is then connected to a central processor for executing a centralized implementation of our proposed power allocation algorithm, which is based on the CSI of each OFDM subchannel from the cooperating BSs. There are K UTs, each equipped with L R receive antennas. The spatial degree of freedom for the MIMO-OFDM system is defined as L≤ min(M L T ,L R ). We assume that perfect CSI between transmit-receive antenna pairs is known to both BSs and UTs. The CoMP configuration with joint processing operation can be envisioned as a multiuser MIMO system with distributed transmit antennas. The channel gains of these distributed transmit antennas consist of various path loss profiles depending on the relative distances between the distributed transmit and receive antennas.
Assuming signals received at UTs from all cooperating BSs arrive at the same time, the discrete-time complex baseband received signal in the nth MIMO-OFDM subchannel, denoted as y n , for all K UTs after postprocessing is modeled as
$$ \mathbf{y}_{n} = \mathbf{U}_{n}^{\mathrm{H}}\mathbf{H}_{n}\mathbf{V}_{n}\mathbf{x}_{n} + \mathbf{w}_{n}, $$
where \(\mathbf {H}_{n}\triangleq \left [\mathbf {H}_{n}^{1}\cdots \mathbf {H}_{n}^{K}\right ]^{\mathrm {T}}\) is the complex channel gain matrix and each matrix \(\mathbf {H}_{n}^{k}\in \mathbb {C}^{L_{R}\times {ML}_{T}}\) has independently and identically distributed (i.i.d.) entries, each of which is drawn from a zero-mean, unit-variance circularly symmetric complex Gaussian distribution \(\mathcal {CN}(0,1)\), in the nth subchannel for the kth UT. The entry in the \(\mathbf {H}_{n}^{k}\) matrix can be interpreted as
$$\begin{aligned} {h_{i,j}^{k}}[\!n] =\, &n\text{th OFDM subchannel gain from Tx }j~\text{to}\\ &\text{Rx }i~\text{in}~k~\text{th UT}. \end{aligned} $$
We note that these subchannel gains include path attenuations, as well as both small- and large-scale fading components. The transmitted signals is denoted as \(\mathbf {x}_{n}~\in ~\mathbb {C}^{{KML}_{T}\times 1}\) and the complex Gaussian noise is denoted as \(\mathbf {w}_{n}~\in ~\mathbb {C}^{{KL}_{R}\times 1}\) with variance \({\boldsymbol \sigma _{n}^{2}}\). The matrices \(\mathbf {U}_{n}^{\mathrm {H}} = \text {diag}\left ({{}\mathbf {U}_{n}^{1}}^{\mathrm {H}} \cdots {{}\mathbf {U}_{n}^{K}}^{\mathrm {H}}\right)\) and \(\mathbf {V}_{n} = \left [\mathbf {V}_{n}^{1} \cdots \mathbf {V}_{n}^{K}\right ]\) are the postprocessing and precoding matrices, respectively. Each terms \({{}\mathbf {U}_{n}^{k}}^{\mathrm {H}}\) and \(\mathbf {V}_{n}^{k}\) is obtained from the singular value decomposition (SVD) of the MIMO-OFDM subchannel matrix \(\mathbf {H}_{n}^{k}\), which are given by
$$ \mathbf{H}_{n}^{k} = \mathbf{U}_{n}^{k}{\boldsymbol\Lambda_{n}^{k}}{{}\mathbf{V}_{n}^{k}}^{\mathrm{H}}, $$
where \(\mathbf {U}_{n}^{k}\in \mathbb {C}^{L_{R}\times L_{R}}\) and \(\mathbf {V}_{n}^{k}\in \mathbb {C}^{{ML}_{T}\times {ML}_{T}}\) are the unitary transmit precoding and receiver shaping matrices, respectively, and \({\boldsymbol \Lambda _{n}^{k}}\in \mathbb {R}^{L\times {ML}_{T}}\) is the diagonal matrix with non-negative singular values \(\sqrt {\gamma _{n,l}^{k}},\,l = 1,\ldots,L\) as the gain for the (n,l)th spatial subchannel [25]. The operator (·)H represents the Hermitian transpose.
The SVD, known as the eigenbeamforming [27], is employed to decouple each MIMO-OFDM subchannel into L independent parallel spatial subchannels with the singular values as the subchannel gains. This is accomplished by applying the linear transformation \(\mathbf {V}_{n}^{k}\) to the transmitted symbol vector and applying the linear transformation \({{}\mathbf {U}_{n}^{k}}^{\mathrm {H}}\) to the received symbol vector. The resulting cascaded channel can be written as
$$ {{}\mathbf{U}_{n}^{k}}^{\mathrm{H}}\mathbf{H}_{n}^{k}\mathbf{V}_{n}^{k} = {{}\mathbf{U}_{n}^{k}}^{\mathrm{H}}\mathbf{U}_{n}^{k}{\boldsymbol\Lambda_{n}^{k}}{{}\mathbf{V}_{n}^{k}}^{\mathrm{H}}\mathbf{V}_{n}^{k} = {\boldsymbol\Lambda_{n}^{k}}. $$
As such, an N-subchannel MIMO-OFDM system can be decomposed into a total of N×L spatial subchannels and, with full CSI knowledge, intelligent power and bit allocation algorithms can be employed to optimize system performance across all the spatial subchannels. Unlike ZFB, the application of eigenbeamforming does not eliminate the inter-user interference between co-channel users. The inter-user interference is caused by the mismatch between the jth UT's transmit precoding matrix and the receiver shaping matrices of the other UTs.
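The decoupling in (2) and (3) can be checked numerically. The following minimal Python sketch (NumPy assumed; the dimensions are illustrative rather than those used in the simulations) applies the SVD precoder and postprocessor to one random MIMO-OFDM subchannel and recovers a rectangular diagonal matrix of singular values:

import numpy as np

M, L_T, L_R = 3, 2, 2   # illustrative: cooperating BSs, Tx antennas per BS, Rx antennas
H = (np.random.randn(L_R, M*L_T) + 1j*np.random.randn(L_R, M*L_T)) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)       # H = U diag(s) Vh, as in (2)
V = Vh.conj().T

cascade = U.conj().T @ H @ V      # postprocess with U^H, precode with V, as in (3)
print(np.round(np.abs(cascade), 6))   # rectangular diagonal: singular values, zeros elsewhere
print(s)                              # the spatial subchannel gains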
Before we formulate the optimization problem, we formally define the following two signal and power domains, which will be used throughout the paper.
Definition 1.
The antenna domain consists of powers that are physically transmitted by the antennas at the BSs.
Definition 2.

The spatial domain consists of effective powers and signals sent in the spatial subchannels resulting from SVD.
We also define the following terms:
\(\tilde {R}_{n,l}^{k} =\) spatial rate in (n,l)th spatial subchannel for UT k
\(\tilde {P}_{n,l}^{k} =\) spatial power in (n,l)th spatial subchannel for UT k
\(P_{n,m} =\) transmit power in subchannel n from antenna m
where a spatial subchannel pair is denoted by an accent with subscripts (n,l) and a subchannel-antenna pair is denoted by subscripts (n,m).
In the proposed power and rate allocation algorithm, we consider a continuous bit-loading scheme with a desirable rate region on the (n,l)th spatial subchannel for kth UT, in bits/second/Hertz, as follows:
$$ \tilde{R}_{n,l}^{k}\left(\tilde{\mathbf{P}}_{n}\right) = \log_{2}\left[1+\text{SINR}_{n,l}^{k}\left(\tilde{\mathbf{P}}_{n}\right)\right], $$
where \(\tilde {\mathbf {P}}_{n}=\,\left [\tilde {\mathbf {P}}_{n}^{1}\ldots \tilde {\mathbf {P}}_{n}^{K}\right ]\) is the L×K spatial power matrix for the nth OFDM subchannel and each \(\tilde {\mathbf {P}}_{n}^{K}={\left [\tilde {P}_{n,1}^{K}\ldots \tilde {P}_{n,L}^{K}\right ]}^{\mathrm {T}}\) is the L×1 spatial power vector for the Kth UT in the nth subchannel. The noise variance in the (n,l)th spatial subchannel for the kth UT is expressed as \({\sigma _{n,l}^{k}}^{2}\), and we assume the noise variances are constant and equal among all the spatial subchannels. The signal-to-interference-plus-noise ratio (SINR) for the kth UT on the (n,l)th spatial subchannel is defined as follows:
$$ \text{SINR}_{n,l}^{k}\left(\tilde{\mathbf{P}}_{n}\right) = \frac{\mathrm{G}_{n,l}^{k,k}\tilde{P}_{n,l}^{k}}{\sum\limits_{j\neq k}\mathbf{G}_{n}^{k,j}(l,:){\tilde{\mathbf{P}}_{n}}^{j}+{\sigma_{n,l}^{k}}^{2}},\,\forall\, l=1,\ldots,L. $$
The term \(\mathrm {G}_{n,l}^{k,k}\) is the (n,l)th spatial subchannel gain obtained from the SVD of the channel matrix. The inter-user interference channel gain matrix \(\mathbf {G}_{n}^{k,j}\) for the nth OFDM subchannel between the kth UT and the jth UT is defined as follows:
$$ \mathbf{G}_{n}^{k,j}(x,y) = {\left|\mathbf{U}_{n}^{k}(x,:)\mathbf{H}_{n}^{k}{\mathbf{V}_{n}^{j}}^{\mathrm{H}}(:,y)\right|}^{2},\,\forall\,x,y=1,\ldots,L. $$
The physical interpretation of the inter-user interference gain \(\mathbf {G}_{n}^{k,j}\) is that of an interference function from the jth UT projecting onto the receiving direction of the kth UT. This results in a weighted sum of the transmitted signal in all L spatial subchannels as a result of a conjugate mismatch between the transmit beamforming weights \(\mathbf {V}_{n}^{j}\) and the postprocessing of \(\mathbf {U}_{n}^{k}\). In the next section, we present the optimization problem that satisfies per-UT rate targets for given per-antenna transmit power constraints.
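Before moving on, a concrete illustration of how (4) and (5) are evaluated may help. The minimal Python sketch below (NumPy assumed) computes per-stream SINRs and rates for two co-channel UTs in one OFDM subchannel; all gains, powers, and the noise variance are illustrative placeholders, with the cross matrices playing the role of \(\mathbf {G}_{n}^{k,j}\):

import numpy as np

K, L = 2, 2
noise_var = 1e-3
G_own = np.array([[2.1, 0.7],                     # direct spatial gains, row = UT k, column = stream l
                  [1.8, 0.5]])
G_cross = np.array([[np.zeros((L, L)), 0.05*np.ones((L, L))],
                    [0.08*np.ones((L, L)), np.zeros((L, L))]])   # inter-user gain matrices
P = np.full((K, L), 0.5)                          # spatial powers

for k in range(K):
    for l in range(L):
        interference = sum(G_cross[k, j][l, :] @ P[j] for j in range(K) if j != k)
        sinr = G_own[k, l] * P[k, l] / (interference + noise_var)   # as in (5)
        print(f"UT {k}, stream {l}: rate = {np.log2(1 + sinr):.2f} bit/s/Hz")   # as in (4)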
Power minimization problem formulation
The resource allocation problem in MIMO-OFDM systems can be formulated as a power minimization (PM) problem. The PM problem aims to minimize the transmit power while satisfying rate targets for each UT and transmit power constraints. For conventional rate adaptive problems, in which the objective is to maximize the total system throughput subject to a total transmit power constraint, it is intuitive that allocating powers to the UT with the best channel conditions will maximize the overall system throughput for a given transmit power constraint. Those UTs with less favorable channel conditions will receive very little or even no data throughput as there is no rate constraint on the individual UT. In contrast, the PM problem guarantees per-UT rate targets to be satisfied while minimizing total transmit power for a given set of per-antenna power constraints.
We seek to minimize the total transmit power of a downlink multiuser MIMO-OFDM system subject to per-UT target rates and per-antenna power constraints. This problem can be expressed mathematically as the following optimization problem:
$$\begin{array}{*{20}l} \underset{\forall\,P_{n,m}\,\geq\,0}{\text{minimize}}\,& \sum_{m=1}^{{ML}_{T}}\sum_{n=1}^{N}\ P_{n,m} \\ \text{subject to}\,& \ \sum_{n=1}^{N}\sum_{l=1}^{L} \ \tilde{R}_{n,l}^{k}\left(\tilde{\mathbf{P}}_{n}, \, {\sigma_{n,l}^{k}}^{2}, \, \mathrm{G}_{n,l}^{k,k}, \, \mathbf{G}_{n}^{k,j}\right)\: \geq \: R_{\mathrm{T}}^{k} \\ & \ \sum_{n=1}^{N}\ P_{n,m}\leq P_{\text{max}}^{m},\,\forall\,m=1,\ldots,{ML}_{T}, \end{array} $$
where \(R_{\mathrm {T}}^{k}\) is the desirable rate target for the kth UT. These rate targets must be feasible, which means there must exist a power allocation such that each per-user rate target is satisfied and the per-antenna power constraints are not violated. To enhance readability, we now write \(\tilde {R}_{n,l}^{k}\) without explicitly stating its dependence on \(\tilde {\mathbf {P}}_{n}, \, {\sigma _{n,l}^{k}}^{2}, \, \mathrm {G}_{n,l}^{k,k}\) and \(\mathbf {G}_{n}^{k,j}\).
We simplify the PM problem in (7) by converting the objective function and per-antenna power constraints into the spatial domain. In doing so, we derive an important relationship between spatial average powers and antenna average powers. Assuming the data symbols sent in each spatial subchannel are uncorrelated, which is expected, with zero mean and normalized to unit variance, it can be shown that, for a given subchannel n, the relationship between spatial and antenna powers is given by the following lemma.
Lemma 1.
The relationship between antenna powers \(\mathbf {P}_{n}^{k}\) and spatial powers \(\tilde {\mathbf {P}}_{n}^{k}\) is given by \(\mathbf {P}_{n}^{k} = \mathbf {A}_{n}^{k}\tilde {\mathbf {P}}_{n}^{k}\), where \(\mathbf {A}_{n}^{k}(m,l) = \left |\mathbf {V}_{n}^{k}(m,l)\right |^{2}\).
Proof.
The symbol vectors, \(\tilde {\mathbf {x}}_{n}^{k}\in \mathbb {C}^{{ML}_{T}\times 1}\), sent in each spatial subchannel undergo a linear transformation with the precoding matrix \(\mathbf {V}_{n}^{k}\) before transmission, which is given by \(\mathbf {x}_{n}^{k}=\mathbf {V}_{n}^{k}\tilde {\mathbf {x}}_{n}^{k}\in \mathbb {C}^{{ML}_{T}\times 1}\). Furthermore, we assume these sent symbols are uncorrelated (as is expected) with zero mean and unit variance. Therefore, the relationship between antenna average powers and spatial average powers can be derived as follows:
$$\begin{array}{*{20}l} \textbf{P}_{n}^{k} &= \text{Tr}\left\{\mathbb{E}\left[\textbf{x}_{n}^{k}{\textbf{x}_{n}^{k}}^{\mathrm{H}}\right]\right\} \\ &= \text{Tr}\left\{\mathbb{E}\left[\textbf{V}_{n}^{k}\tilde{\textbf{x}}_{n}^{k}{{}\tilde{\textbf{x}}_{n}^{k}}^{\mathrm{H}}{\textbf{V}_{n}^{k}}^{\mathrm{H}}\right]\right\} \\ &= \left|\left[\begin{array}{ccc} v_{1,1}^{k} & \cdots & v_{1,{ML}_{T}}^{k} \\ \vdots & \ddots & \vdots \\ v_{{ML}_{T},1}^{k} & \cdots & v_{{ML}_{T},{ML}_{T}}^{k} \end{array}\right]\right|^{2}\mathbb{E}\left[\left|\tilde{\mathbf{x}}_{n}^{k}\right|^{2}\right] \\ &= \left|\mathbf{V}_{n}^{k}\right|^{2}\tilde{\mathbf{P}}_{n}^{k}, \end{array} $$
where Tr(·) denotes the trace of a matrix and |·|² denotes the element-wise squared magnitude operation.
The term \(\mathbf {A}_{n}^{k}\) refers to the power gain transformation from spatial powers to antenna powers in the nth subchannel for the kth UT and is equal to the element-by-element squared-magnitude of the transmit precoding matrix, \(\mathbf {V}_{n}^{k}\).
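Lemma 1 is also easy to verify numerically. A minimal Python sketch (NumPy assumed, with a random unitary matrix standing in for the precoder) is:

import numpy as np

MLT = 6
A_rand = np.random.randn(MLT, MLT) + 1j*np.random.randn(MLT, MLT)
V, _, _ = np.linalg.svd(A_rand)                 # a random unitary precoder

P_spatial = np.random.rand(MLT)                 # spatial powers
cov_x = V @ np.diag(P_spatial) @ V.conj().T     # E[x x^H] for x = V x~ with uncorrelated streams
P_antenna = np.real(np.diag(cov_x))             # per-antenna average powers

print(np.allclose(P_antenna, np.abs(V)**2 @ P_spatial))   # True: antenna powers = |V|^2 times spatial powers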
This relationship allows us to transform antenna powers into spatial powers, which result in effective rates in each spatial subchannel. Moreover, the per-antenna power constraint prevents unbalanced power allocation among the cooperating BSs. In the case of a total average transmit power constraint, the majority of the power would be allocated to BSs with better channel conditions. This makes the inherent peak-to-average-power ratio (PAPR) problem in OFDM more problematic, as the resulting peak transmit power at a transmit antenna may exceed the dynamic range of the high-power amplifier (HPA) during transmission. As a result, the transmitted signal will experience nonlinear transmission effects, which compromise signal quality and, consequently, affect the overall system performance. With per-antenna power constraints in place, the average transmit power of each antenna is constrained to a threshold at which the resulting high PAPR is not severe enough to cause irreversible nonlinear transmission effects.
The primal optimization problem in (7) is a non-convex optimization problem for which the globally optimal solution is difficult to obtain. This can be shown by rewriting the per-UT rate constraint in (7) as the following expression:
$$ \begin{aligned} \tilde{R}_{n,l}^{k} &= \log_{2}\left[\mathrm{G}_{n,l}^{k,k}\tilde{P}_{n,l}^{k}+\sum\limits_{j\neq k}^{K}\mathbf{G}_{n}^{k,j}(l,:)\tilde{\mathbf{P}}_{n}^{j}+{\sigma_{n,l}^{k}}^{2}\right]\\ & \quad- \log_{2}\left[\sum\limits_{j\neq k}^{K}\mathbf{G}_{n}^{k,j}(l,:)\tilde{\mathbf{P}}_{n}^{j}+{\sigma_{n,l}^{k}}^{2}\right]. \end{aligned} $$
From the expression in (9), it can be seen that the rate is a difference of concave functions (DoCF) of \(\tilde {\mathbf {P}}_{n}\). Obtaining globally optimal solutions for optimization problems involving a DoCF is difficult and NP-hard [28].
The proposed SCA algorithm
To overcome the DoCF structure of the rate constraints in (7), we adopt the SCA algorithm in [18] to solve our non-convex optimization problems. The SCA algorithm converts a non-convex optimization problem into a convex one by an iterative convex approximation technique. The convex approximation is based on the following lower bound:
$$ \log_{2}\left(1+\text{SINR}\right)\geq \alpha\log_{2}\text{SINR} + \beta, $$
(10)
where α and β are the convex approximation constants, which dictate the accuracy of this lower bound approximation on the Pareto boundary of the achievable rate region. The approximation constants are defined as the following:
$$\begin{array}{*{20}l} \alpha &= \frac{\text{SINR}}{1 + \text{SINR}}~\text{and } \end{array} $$
(11a)
$$\begin{array}{*{20}l} \beta &= \log_{2}(1 + \text{SINR})-\frac{\text{SINR}}{1+\text{SINR}}\log_{2} \text{SINR}. \end{array} $$
(11b)
The lower bound is tightened successively by re-evaluating α and β according to (11a) and (11b) at each iteration, based on the SINR values obtained from the previous power allocation. A locally optimal solution is obtained as the lower bound converges to the actual achievable rate curve [18].
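The behaviour of the bound is easy to verify. The minimal Python sketch below (NumPy assumed) evaluates (10) with α and β from (11a) and (11b) computed at an illustrative operating point:

import numpy as np

sinr0 = 4.0                                        # operating point used in (11a)-(11b)
alpha = sinr0 / (1 + sinr0)                        # (11a)
beta = np.log2(1 + sinr0) - alpha*np.log2(sinr0)   # (11b)

for sinr in [0.5, 2.0, 4.0, 8.0, 20.0]:
    exact = np.log2(1 + sinr)
    bound = alpha*np.log2(sinr) + beta             # right-hand side of (10)
    print(f"SINR = {sinr:5.1f}: exact = {exact:.4f}, bound = {bound:.4f}")
# The bound never exceeds the exact rate and is tight at SINR = 4.0.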
We apply the lower bound in (10) to the per-UT rate targets and express the antenna powers in terms of the spatial powers using Lemma 1. This results in the following power minimization problem, which only involves variables in the spatial domain:
$$\begin{array}{*{20}l} \underset{\forall\,\tilde{\mathbf{P}}_{n}\,\succeq\,0}{\text{minimize}}\,& \ \sum_{m=1}^{{ML}_{T}}\sum_{k=1}^{K}\sum_{n=1}^{N}\mathbf{A}_{n}^{k}(m,:){\tilde{\mathbf{P}}_{n}}^{k} \\ \text{subject to}\,& \ \sum_{n=1}^{N}\sum_{l=1}^{L} \ \alpha_{n,l}^{k}\log_{2}\left[\text{SINR}_{n,l}^{k}\left(\tilde{\mathbf{P}}_{n}\right)\right]\\ &+ \beta_{n,l}^{k}\: \geq \: R_{\mathrm{T}}^{k}\\ & \ \sum_{k=1}^{K}\sum_{n=1}^{N}\mathbf{A}_{n}^{k}(m,:){\tilde{\mathbf{P}}_{n}}^{k} \: \leq \: P_{\text{max}}^{m}. \end{array} $$
In order to solve this optimization problem efficiently, we adopt the Lagrange dual decomposition method [23]. First, we define the Lagrangian by converting the primal problem in (12) into an unconstrained dual optimization problem with the substitution of \(\tilde {\mathbf {P}}_{n} = e^{\hat {\mathbf {P}}_{n}}\), which is given by
$$\begin{array}{*{20}l} &\mathcal{L}_{\text{PM}}\left\{\hat{\mathbf{P}}_{n},\boldsymbol\mu,\boldsymbol\lambda\right\} = \sum_{m=1}^{{ML}_{T}}\sum_{k=1}^{K}\sum_{n=1}^{N}\mathbf{A}_{n}^{k}(m,:) e^{\hat{\mathbf{P}}_{n}^{k}} \\ &~~~~~ +\sum_{m=1}^{{ML}_{T}}\lambda_{m}\left[\sum_{k=1}^{K}\sum_{n=1}^{N}\mathbf{A}_{n}^{k}(m,:)e^{\hat{\mathbf{P}}_{n}^{k}}-P_{\text{max}}^{m}\right]\\ &~~~~~ - \sum_{k=1}^{K}\mu_{k}\left\{\sum_{n=1}^{N}\sum_{l=1}^{L}\beta_{n,l}^{k}-R_{\mathrm{T}}^{k}+ \frac{\alpha_{n,l}^{k}}{\ln 2}\left\{\ln\mathrm{G}_{n,l}^{k,k}+\hat{P}_{n,l}^{k}{\phantom{\sum\limits_{j\neq k}^{K}}}\right.\right.\\ &~~~~~\left.\left. -\ln\left[\sum\limits_{j\neq k}^{K}\mathbf{G}_{n}^{k,j}(l,:)e^{\hat{\mathbf{P}}_{n}^{j}}+{\sigma_{n,l}^{k}}^{2}\right]\right\}\right\}, \end{array} $$
where \(\boldsymbol \lambda =\left [\lambda _{1}\ldots \lambda _{{ML}_{T}}\right ]\) is the 1×M L T vector of Lagrange multipliers associated with each transmit antenna and μ=[μ 1…μ K ] is the 1×K vector of Lagrange multipliers associated with each UT rate target. The proof for the convexity of the per-UT rate target is provided in Lemma 2.
Lemma 2.

The per-UT rate target in (12) is a concave function with the substitution of \(\tilde {\mathbf {P}}_{n}~=~e^{\hat {\mathbf {P}}_{n}}\).
Proof.

The per-UT rate target in (12) with the substitution of \(\tilde {\mathbf {P}}_{n}~=~e^{\hat {\mathbf {P}}_{n}}\) is given by
$$\begin{array}{*{20}l} \tilde{R}_{n,l}^{k}\left(e^{\hat{\mathbf{P}}_{n}}\right) &= \alpha_{n,l}^{k}\log_{2}\left[\text{SINR}_{n,l}^{k}\left(e^{\hat{\mathbf{P}}_{n}}\right)\right] + \beta_{n,l}^{k}\\ &= \frac{\alpha_{n,l}^{k}}{\ln 2}\left\{\ln\mathrm{G}_{n,l}^{k,k}+\hat{P}_{n,l}^{k}{\phantom{\sum\limits_{j\neq k}^{K}}}\right.\\ &\left. \quad-\ln\left[\sum\limits_{j\neq k}^{K}\mathbf{G}_{n}^{k,j}(l,:)e^{\hat{\mathbf{P}}_{n}^{j}}+{\sigma_{n,l}^{k}}^{2}\right]\right\}+\beta_{n,l}^{k}. \end{array} $$
To show that the rate target in (14) is a concave function in \(\hat {\mathbf {P}}_{n}\), we need to show that its Hessian is negative semi-definite, i.e., \({\nabla }^{2}\tilde {R}_{n,l}^{k}(\hat {\mathbf {P}}_{n})\preceq 0\). Up to the positive factor \(\alpha_{n,l}^{k}/\ln 2\), the Hessian of \(\tilde {R}_{n,l}^{k}(\hat {\mathbf {P}}_{n})\) is given by
$$ {\nabla}^{2}\tilde{R}_{n,l}^{k}(\hat{\mathbf{P}}_{n}) = -\frac{1}{{X}^{2}}\left[X\text{diag}(\mathbf{x})-\mathbf{x}{\mathbf{x}}^{\mathrm{T}}\right], $$
where the vector x is defined as
$$ {} \begin{aligned} \mathbf{x}\! =& \!\left[\!\mathbf{G}_{n}^{k,1}(l,:)e^{\hat{\mathbf{P}}_{n}^{1}},\ldots,\mathbf{G}_{n}^{k,k-1}(l,:)e^{\hat{\mathbf{P}}_{n}^{k-1}},\mathbf{G}_{n}^{k,k+1}(l,:)e^{\hat{\mathbf{P}}_{n}^{k+1}},\ldots,\right.\\ &\left.\quad\mathbf{G}_{n}^{k,K}(l,:)e^{\hat{\mathbf{P}}_{n}^{K}}\right], \end{aligned} $$
and the term X is defined as
$$ X = \sum\limits_{j=1}^{K-1}x_{j}+{\sigma_{n,l}^{k}}^{2}. $$
For every \(\mathbf {z}\in {\mathbb {R}}^{K-1}\), we have \(L \triangleq -{\mathbf {z}}^{\mathrm {T}}{\nabla }^{2}\tilde {R}_{n,l}^{k}\left (\hat {\mathbf {P}}_{n}\right)\mathbf {z}\geq 0\), which is given by
$$\begin{array}{*{20}l} {X}^{2}L &= {\mathbf{z}}^{\mathrm{T}}\left[X\text{diag}(\mathbf{x})-\mathbf{x}{\mathbf{x}}^{\mathrm{T}}\right]\mathbf{z}\notag\\ &= \left(\sum_{j=1}^{K-1}{z_{j}}^{2}x_{j}\right)\left(\sum_{j=1}^{K-1}x_{j}+{\sigma_{n,l}^{k}}^{2}\right)-{\left(\sum_{j=1}^{K-1}z_{j}x_{j}\right)}^{2}\\ &\geq 0, \end{array} $$
since \({\sigma _{n,l}^{k}}^{2}\) is non-negative and the Cauchy-Schwarz inequality guarantees that the last expression is non-negative [23]. Hence \({\nabla }^{2}\tilde {R}_{n,l}^{k}(\hat {\mathbf {P}}_{n})\preceq 0\) and the per-UT rate target is concave.
The dual problem is then given by
$$ \underset{\boldsymbol\mu,\,\boldsymbol\lambda\,\succeq\, 0}{\text{maximize}}\, d(\boldsymbol\mu,\boldsymbol\lambda), $$
where the Lagrange dual objective function, denoted as d(μ,λ), is defined as
$$ d(\boldsymbol\mu,\boldsymbol\lambda) = \underset{\hat{\mathbf{P}}_{n}\,\succeq\, 0}{\min}\,\mathcal{L}_{PM}\left\{\hat{\mathbf{P}}_{n},\boldsymbol\mu,\boldsymbol\lambda\right\}. $$
The optimal solution of the dual problem is given by
$$ D^{*} = d(\boldsymbol\mu^{*},\boldsymbol\lambda^{*}), $$
where μ ∗ and λ ∗ are the optimal Lagrange multipliers. From (13) and (20), d(μ,λ) is a concave function as it is a pointwise minimum of a family of affine functions of μ and λ [23]. Therefore, the optimal Lagrange multipliers μ ∗ and λ ∗, which maximize d(μ,λ), can be obtained using standard convex optimization techniques [23]. The corresponding optimal value of the dual problem D ∗ is a lower bound on the optimal value of the approximated primal problem in (12), denoted as P ∗, given by
$$ P^{*}\geq D^{*}. $$
Since the SCA technique is employed to transform the original optimization problem in (7) into a convex one and the feasible set has a non-empty interior, the duality gap between P ∗ and D ∗ is in fact zero. This is due to the fact that any finite rate target is achievable with arbitrarily large transmit powers. As a result, the approximated optimization problem in (12) satisfies Slater's condition, which implies that strong duality holds [23]. Therefore, P ∗ can be found by first minimizing the Lagrangian \(\mathcal {L}_{\mathrm {{PM}}}\) in (13) to evaluate the dual objective function d(μ,λ) in (20) and then maximizing d(μ,λ) over all non-negative values of μ and λ. Furthermore, the Lagrangian in (13) can be decomposed into NK independent subproblems using the standard dual decomposition method for a given μ and λ, which is given by
$$ \mathcal{L}_{\text{PM}}\left(\boldsymbol\mu,\boldsymbol\lambda\right) = \sum_{n=1}^{N}\sum_{k=1}^{K}\breve{g}_{n}^{k}\left(\boldsymbol\mu,\boldsymbol\lambda\right)-\sum_{m=1}^{{ML}_{T}}\lambda_{m}P_{\text{max}}^{m}+\sum_{k=1}^{K}\mu_{k}R_{\mathrm{T}}^{k}, $$
where \(\breve {g}_{n}^{k}\) is given by
$$\begin{array}{*{20}l} \breve{g}_{n}^{k}&\left(\boldsymbol\mu,\boldsymbol\lambda\right) = \underset{\hat{\mathbf{P}}_{n}^{k}\succeq 0}{\text{minimize}}\,\sum_{m=1}^{{ML}_{T}}\mathbf{A}_{n}^{k}(m,:)e^{\hat{\mathbf{P}}_{n}^{k}}\left(1+\lambda_{m}\right)\\ & - \sum_{l=1}^{L}\frac{\mu_{k}\alpha_{n,l}^{k}}{\ln 2}\left\{\ln G_{n,l}^{k,k}+\hat{P}_{n,l}^{k}-\ln\left[\sum\limits_{j\neq k}^{K}\mathbf{G}_{n}^{k,j}(l,:)e^{\hat{\mathbf{P}}_{n}^{j}}\right.\right.\\ &\left.\left.{\phantom{\sum\limits_{j\neq k}^{K}}} +{\sigma_{n,l}^{k}}^{2}\right]+\frac{\beta_{n,l}\ln 2}{\alpha_{n,l}^{k}}\right\}. \end{array} $$
This indicates that the dual problem can be solved by optimizing N independent dual subproblems, each consisting of the terms \(\breve {g}_{n}^{k}\left (\boldsymbol \mu,\boldsymbol \lambda \right)\) for k=1,…,K. As a result, the overall implementation cost can be reduced significantly if the same procedure is executed repeatedly for each subproblem; alternatively, K parallel processors can be adopted to solve the dual subproblems simultaneously and improve the convergence time of the algorithm.
For the nth OFDM subchannel of the kth UT, the minimization in (24) over \(\hat {\mathbf {P}}_{n}^{k}\) is a convex optimization problem. Therefore, the optimal value \({{}\hat {\mathbf {P}}_{n}^{k}}^{*}\) must satisfy the following Karush-Kuhn-Tucker (KKT) necessary conditions [23] simultaneously, which are given by
$$ {\fontsize{8.2}{6}\begin{aligned} 1 - \frac{\ln 2\mathbf{A}_{n}^{k}(m,:)e^{{{}\hat{\mathbf{P}}_{n}^{k}}^{*}}\left(1+\lambda_{m}\right)}{\mu_{k}\alpha_{n,l}^{k}} - \frac{\sum\limits_{j\neq k}\mathbf{G}_{n}^{k,j}(l,:)e^{\hat{\mathbf{P}}_{n}^{j}}}{\sum\limits_{j\neq k}\mathbf{G}_{n}^{k,j}(l,:)e^{\hat{\mathbf{P}}_{n}^{j}}+{\sigma_{n,l}^{k}}^{2}} &= 0,\,\forall\,k,n,l\\ \lambda_{m}\left[\sum_{k=1}^{K}\sum_{n=1}^{N}\mathbf{A}_{n}^{k}(m,:)e^{{{}\hat{\mathbf{P}}_{n}^{k}}^{*}}-P_{\text{max}}^{m}\right]&=0,\,\forall\,m\\ \mu_{k}\left\{R_{\mathrm{T}}^{k}-\sum_{n=1}^{N}\sum_{l=1}^{L}\alpha_{n,l}^{k}\log_{2}\left[\text{SINR}_{n,l}^{k}\left(e^{{{}\hat{\mathbf{P}}_{n}^{k}}^{*}}\right)\right] + \beta_{n,l}^{k}\right\}&=0,\,\forall\,k\\ \lambda_{m}&\geq 0,\,\forall\,m\\ \mu_{k}&\geq 0,\,\forall\,k \end{aligned}} $$
From the stationarity of the KKT conditions in (25), the optimal power allocation \(\tilde {P}_{n,l}^{k}\) can be obtained by substituting \(\hat {\mathbf {P}}_{n} = \ln \tilde {\mathbf {P}}_{n}\) with fixed λ and μ, which results in
$$ \tilde{P}_{n,l}^{k} = \frac{\mu_{k}\alpha_{n,l}^{k}}{\ln 2\left(\mathbf{1}+\boldsymbol\lambda\right)\mathbf{A}_{n}^{k}(:,l)+\sum\limits_{j\neq k}\mathbf{G}_{n}^{j,k}(l,:){\boldsymbol\alpha_{n}^{j}}\mu_{j}\frac{\textrm{SINR}_{n,l}^{j}\left(\tilde{\mathbf{P}}_{n}\right)}{\mathrm{G}_{n,l}^{j,j}\tilde{P}_{n,l}^{j}}}, $$
where 1 is the 1×M L T vector of ones and \({\boldsymbol \alpha _{n}^{j}}={\left [\alpha _{n,1}^{j}\ldots \alpha _{n,L}^{j}\right ]}^{\mathrm {T}}\) is the L×1 convex approximation constant vector for the nth OFDM subchannel of the jth UT. We note that the term \(\mathbf {G}_{n}^{j,k}(l,:)\) quantifies the impact of allocating \(\tilde {P}_{n,l}^{k}\) to the kth UT on all other UTs, which results in an altruistic approach of allocating powers to UTs that have the minimal mutual interference. This differs from the egoistic approach of IWF by maximizing the signal-to-noise ratio without regard to resulting mutual interference to all UTs.
The power allocation strategy in (26) is a standard interference function, which is guaranteed to converge to a unique solution [29]. To demonstrate this, we apply Yates' definition of a standard interference function in [29] to (26), which is introduced in the following definition.
Definition 3.

An interference function \(\mathcal {I}(\textbf {p})\) is standard if, for all p≽0, the following properties are satisfied.
Positivity: \(\mathcal {I}(\textbf {p}) > 0\)
Monotonicity: If \(\textbf {p} \succeq \textbf {p}^{\prime }\phantom {\dot {i}\!}\), then \(\mathcal {I}(\textbf {p}) \geq \mathcal {I}(\textbf {p}^{'})\)
Scalability: For all θ>1, \(\theta \,\mathcal {I}(\textbf {p}) > \mathcal {I}(\theta \,\textbf {p})\)
Lemma 3.

The power allocation strategy in (26) is a standard interference function [18].
Proof.

We rewrite the power allocation in (26) as
$$ \mathcal{I}_{n,l}^{k}(\tilde{\mathbf{P}}) = \frac{\mu_{k}\alpha_{n,l}^{k}}{\ln 2\left(\mathbf{1}+\boldsymbol\lambda\right)\mathbf{A}_{n}^{k}(:,l)+\sum\limits_{j\neq k}\frac{\mathbf{G}_{n}^{j,k}(l,:){\boldsymbol\alpha_{n}^{j}}\mu_{j}}{\mathbf{G}_{n}^{k,j}(l,:)\tilde{\mathbf{P}}^{j}_{n}+{\sigma_{n,l}^{k}}^{2}}}. $$
To show the power allocation in (27) is unique and it can converge to a locally optimal solution, we apply Yates' definition of standard interference function in Definition 3.
Positivity: This follows from the fact that each term in \(\mathcal {I}_{n,l}^{k}(\tilde {\mathbf {P}})\) in (27) is non-negative.
Monotonicity: Suppose \(\tilde {\mathbf {P}} \geq \tilde {\mathbf {P}}^{'}\), the monotonicity property follows from
$$\begin{array}{*{20}l} \mathcal{I}_{n,l}^{k}(\tilde{\mathbf{P}}) &= \frac{\mu_{k}\alpha_{n,l}^{k}}{\ln 2\left(\mathbf{1}+\boldsymbol\lambda\right)\mathbf{A}_{n}^{k}(:,l)+\sum\limits_{j\neq k}\frac{\mathbf{G}_{n}^{j,k}(l,:){\boldsymbol\alpha_{n}^{j}}\mu_{j}}{\mathbf{G}_{n}^{k,j}(l,:)\tilde{\mathbf{P}}^{j}_{n}+{\sigma_{n,l}^{k}}^{2}}}\\ &\geq \frac{\mu_{k}\alpha_{n,l}^{k}}{\ln 2\left(\mathbf{1}+\boldsymbol\lambda\right)\mathbf{A}_{n}^{k}(:,l)+\sum\limits_{j\neq k}\frac{\mathbf{G}_{n}^{j,k}(l,:){\boldsymbol\alpha_{n}^{j}}\mu_{j}}{\mathbf{G}_{n}^{k,j}(l,:){{}\tilde{\mathbf{P}}^{j}_{n}}^{'}+{\sigma_{n,l}^{k}}^{2}}}\\ &= \mathcal{I}_{n,l}^{k}(\tilde{\mathbf{P}}^{'}) \end{array} $$
Scalability: Suppose \(\tilde {\mathbf {P}}=\theta \tilde {\mathbf {P}}^{'}\) for θ>1, the scalability property follows from
$$\begin{array}{*{20}l} \theta\,\mathcal{I}_{n,l}^{k}(\tilde{\mathbf{P}}) &= \frac{\mu_{k}\alpha_{n,l}^{k}}{\frac{1}{\theta}\ln 2\left(\mathbf{1}+\boldsymbol\lambda\right)\mathbf{A}_{n}^{k}(:,l)+\frac{1}{\theta}\sum\limits_{j\neq k}\frac{\mathbf{G}_{n}^{j,k}(l,:){\boldsymbol\alpha_{n}^{j}}\mu_{j}}{\mathbf{G}_{n}^{k,j}(l,:)\tilde{\mathbf{P}}^{j}_{n}+{\sigma_{n,l}^{k}}^{2}}}\\ &> \frac{\mu_{k}\alpha_{n,l}^{k}}{\ln 2\left(\mathbf{1}+\boldsymbol\lambda\right)\mathbf{A}_{n}^{k}(:,l)+\sum\limits_{j\neq k}\frac{\mathbf{G}_{n}^{j,k}(l,:){\boldsymbol\alpha_{n}^{j}}\mu_{j}}{\mathbf{G}_{n}^{k,j}(l,:)\theta{{}\tilde{\mathbf{P}}^{j}_{n}}^{'}+{\sigma_{n,l}^{k}}^{2}}}\\ &= \mathcal{I}_{n,l}^{k}(\theta\,\tilde{\mathbf{P}}^{'}) \end{array} $$
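Because the update is a standard interference function and is also bounded above (the interference term in the denominator of (27) is non-negative), repeatedly applying it for fixed multipliers converges to its fixed point. The toy Python sketch below (NumPy assumed; all gains, multipliers, and dimensions are illustrative placeholders) runs this inner iteration for a single OFDM subchannel:

import numpy as np

K, L, MLT = 2, 2, 6
noise_var = 1e-2
mu = np.array([1.0, 1.2])                    # per-UT multipliers (fixed, illustrative)
lam = 0.1*np.ones(MLT)                       # per-antenna multipliers (fixed, illustrative)
alpha = np.ones((K, L))                      # high-SINR initialisation of the constants in (11a)
A = np.random.rand(K, MLT, L)                # |V|^2 power-coupling matrices, illustrative
G_cross = 0.05*np.random.rand(K, K, L, L)    # inter-user gains, illustrative

P = np.full((K, L), 1e-3)                    # start from near-zero spatial powers
for _ in range(50):                          # fixed-point iteration of (27)
    P_new = np.empty_like(P)
    for k in range(K):
        for l in range(L):
            direct = np.log(2)*(1 + lam) @ A[k][:, l]
            cross = sum((G_cross[j, k][l, :] @ alpha[j])*mu[j]
                        / (G_cross[k, j][l, :] @ P[j] + noise_var)
                        for j in range(K) if j != k)
            P_new[k, l] = mu[k]*alpha[k, l] / (direct + cross)
    P = P_new
print(P)                                     # converged spatial powers for this subchannel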
The final step is to find μ ∗ and λ ∗ that maximize d(μ,λ) over all μ≽0 and λ≽0. This is accomplished by a projected subgradient method [23], which is given by the following:
$$\begin{array}{*{20}l} {\lambda_{m}}^{[s+1]} &= {\left[{\lambda_{m}}^{[s]} + \nu\left(\sum_{k=1}^{K}\sum_{n=1}^{N}\mathbf{A}_{n}^{k}(m,:){{}{\tilde{\mathbf{P}}_{n}}^{k}}^{[s+1]} - P_{\text{max}}^{m}\right)\right]}^{+} \end{array} $$
$$\begin{array}{*{20}l} {\mu_{k}}^{[s+1]} &= {\left[{\mu_{k}}^{[s]} + \epsilon\left(R^{k}_{\mathrm{T}}-\sum_{n=1}^{N}\sum_{l=1}^{L}{{}\tilde{R}_{n,l}^{k}}^{[s+1]}\right)\right]}^{+}, \end{array} $$
respectively, for some fixed \({\tilde {\mathbf {P}}_{n}}^{k}\), where ε and ν are step sizes for each iteration, and s is the iteration number. The updated Lagrange multipliers μ [s+1] and λ [s+1] are then substituted back into (26) to obtain the new power allocation \({{}{\tilde {\mathbf {P}}_{n}}^{k}}^{[s+1]}\), and the resulting rate allocation \({{}\tilde {R}_{n,l}^{k}}^{[s+1]}\) is obtained from \({{}{\tilde {\mathbf {P}}_{n}}^{k}}^{[s+1]}\) using (4). The iterative procedure terminates when the duality gap between the primal and dual objective functions approaches zero. The PM-SCA algorithm is outlined in Algorithm 1. We initialize the algorithm with a high-SINR approximation with α=1 and β=0 [18]. Before we present numerical results in the next section, we introduce IWF and ZFB with semi-orthogonal user selection, which we use to compare the performance of our proposed algorithm.
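Before turning to the comparison schemes, note that the multiplier updates in (30) and (31) amount to simple projected subgradient steps. A minimal Python sketch (NumPy assumed; the arrays passed in are illustrative placeholders for quantities produced by the inner power update) is:

import numpy as np

def update_lambda(lam, antenna_power, p_max, step):
    # (30): increase lambda_m when antenna m exceeds its power limit, then project onto lambda >= 0
    return np.maximum(lam + step*(antenna_power - p_max), 0.0)

def update_mu(mu, r_target, r_achieved, step):
    # (31): increase mu_k when UT k falls short of its rate target, then project onto mu >= 0
    return np.maximum(mu + step*(r_target - r_achieved), 0.0)

lam = update_lambda(np.zeros(6), antenna_power=np.array([1.2, 0.8, 1.0, 0.9, 1.1, 0.7]),
                    p_max=1.0, step=0.05)
mu = update_mu(np.ones(2), r_target=np.array([64.0, 64.0]),
               r_achieved=np.array([60.0, 70.0]), step=0.01)
print(lam, mu)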
Iterative waterfilling

In IWF, the power allocation for each MIMO-OFDM subchannel is performed by assuming that the inter-user interference is constant and treating it as a part of the channel noise. As a result, the original nonconvex optimization problem is transformed into a convex one. An equilibrium is achieved by performing the waterfilling solution iteratively across all the UTs in the system. In numerical simulations, we first perform an SVD on each MIMO-OFDM subchannel to obtain the individual subchannel gains. These subchannel gains are then used to perform the power allocation, which is based on the iterative waterfilling algorithm across all the UTs.
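The per-UT step inside IWF is the classical waterfilling solution with the other UTs' interference frozen into the effective noise. A minimal Python sketch (NumPy assumed; the gains, effective noise, and power budget are illustrative) is:

import numpy as np

def waterfill(gains, noise_plus_interf, p_budget):
    # Bisection on the water level; per-subchannel powers are max(level - noise/gain, 0).
    inv = noise_plus_interf / gains
    lo, hi = 0.0, inv.max() + p_budget
    for _ in range(60):
        level = 0.5*(lo + hi)
        p = np.maximum(level - inv, 0.0)
        lo, hi = (level, hi) if p.sum() < p_budget else (lo, level)
    return np.maximum(0.5*(lo + hi) - inv, 0.0)

p = waterfill(gains=np.array([1.0, 0.3, 2.0, 0.8]),
              noise_plus_interf=np.full(4, 0.1),
              p_budget=2.0)
print(p, p.sum())   # power pours into the stronger subchannels, total close to 2.0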
ZFB with semi-orthogonal user selection
In ZFB, orthogonal beamformers are used to eliminate the inter-user interference for co-channel UTs. This transforms the original nonconvex optimization problem into a convex one, and the waterfilling algorithm is performed across MIMO-OFDM subchannels to obtain a suboptimal solution. However, an efficient user selection algorithm is needed for finding co-channel UTs with less mutual interference in order to maximize the system performance, in particular, when the number of UTs is large. Therefore, a semi-orthogonal user selection is introduced for effectively finding near-orthogonal co-channel UTs to occupy the limited number of zero-forcing beamformers, which is governed by the number of transmit antennas.
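The zero-forcing step itself can be sketched with a pseudo-inverse. The minimal Python illustration below (NumPy assumed) uses one effective receive dimension per UT for brevity and omits the semi-orthogonal user selection:

import numpy as np

K, MLT = 3, 6
H_eff = (np.random.randn(K, MLT) + 1j*np.random.randn(K, MLT)) / np.sqrt(2)

W = np.linalg.pinv(H_eff)                  # columns are the per-UT zero-forcing beamformers
W = W / np.linalg.norm(W, axis=0)          # normalise each beamformer

print(np.round(np.abs(H_eff @ W), 6))      # (near-)diagonal: inter-user leakage is forced to zero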
Simulation results and discussion
In this section, we present numerical results to evaluate our proposed algorithm against IWF and ZFB with semi-orthogonal user selection. We consider a downlink fixed-wireless MIMO-OFDM system with N=32 OFDM subchannels and M=3 BSs, where each BS is equipped with L_T=2 transmit antennas. The cooperating BSs are separated from each other by d km. These BSs are assumed to be interconnected by an optical backhaul over which channel gains and user data are shared between cooperating BSs for joint processing. The proposed algorithm is executed centrally by a central processor which is also connected to the BSs via the backhaul. We focus on K=5 UTs in the simulation, and each UT is equipped with L_R=2 receive antennas. As shown in Fig. 2, the UTs are randomly distributed within a virtual circle of radius r=100 m located between the cooperating BSs to simulate a cell-edge environment. The COST-231 Hata empirical model [25] is used for predicting the path loss of the channels in rural (flat) environments for typical macrocell deployments. The transmission loss, L_d, expressed in decibels is given by [25]
$$ \begin{aligned} {}L_{d} &=\,46.3 + 33.9\log_{10}f - 13.82\log_{10}h_{t} - a(h_{r}) \\ & \quad + \left(44.9-6.55\log_{10}h_{t}\right)\log_{10}d + C_{m}, \end{aligned} $$
Fig. 2 Three-cell MIMO-OFDM network with downlink CoMP and UTs located at the cell-edge. We vary the distance between BSs to model different channel-to-noise ratios (CNRs)
where f is the carrier frequency in MHz, d is the distance between the BS and UT antennas in km, and h_t is the height of the BS above ground level in m. The parameter C_m is defined as 0 dB for suburban or rural environments and 3 dB for metropolitan environments. The parameter a(h_r) is defined for rural environments as [26]
$$ a\left(h_{r}\right) = \left(1.1\log_{10}f - 0.7\right)h_{r} - \left(1.56\log_{10}f - 0.8\right), $$
where h_r is the height of the UT above ground level in meters. The simulation parameters from [25, 30, 31] are given in Table 1.
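The two expressions above translate directly into a small helper; the numeric arguments in the example call are hypothetical values chosen for illustration rather than the parameters of Table 1.

```python
import numpy as np

def cost231_hata_loss_db(f_mhz, d_km, h_bs_m, h_ut_m, metropolitan=False):
    """COST-231 Hata transmission loss L_d in dB (rural/suburban macrocell sketch)."""
    # UT antenna correction factor a(h_r) for rural environments.
    a_hr = (1.1 * np.log10(f_mhz) - 0.7) * h_ut_m - (1.56 * np.log10(f_mhz) - 0.8)
    c_m = 3.0 if metropolitan else 0.0
    return (46.3 + 33.9 * np.log10(f_mhz) - 13.82 * np.log10(h_bs_m) - a_hr
            + (44.9 - 6.55 * np.log10(h_bs_m)) * np.log10(d_km) + c_m)

# Hypothetical example: 2 GHz carrier, 20 km separation, 30 m BS mast, 1.5 m UT antenna.
print(round(cost231_hata_loss_db(2000.0, 20.0, 30.0, 1.5), 1), "dB")
```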
Table 1 COST 231 path-loss model parameters [25, 30, 31]
In the simulation results, we investigate the power minimization performance of our proposed algorithm in various interference environments by varying the distance between cooperating BSs, denoted as d in Fig. 2, from 5 to 40 km as in a typical LTE macrocell deployment [31]. Based on these distances, the received channel-to-noise ratio (CNR) on the (n,l)th spatial subchannel for the kth UT is given by [10]
$$ \text{CNR}_{n,l}^{k} = \frac{\Lambda_{n,l}^{k}}{{\sigma_{n,l}^{k}}^{2}}, $$
where \(\Lambda _{n,l}^{k}\) is the effective channel gain after precoding and postprocessing. The noise power is assumed to be equal across all OFDM subchannels. We average the simulation results over a total of 16,000 channel realizations, obtained from 100 simulation iterations for each UT subchannel.
Figure 3 shows a total power minimization comparison between our proposed SCA and the two alternative approaches of IWF in [24] and ZFB with semi-orthogonal user selection [15]. The BS-to-BS separation distance is set to d=40 km, and the resulting average received CNR of −7.66 dB indicates a low-interference environment where the interference power is insignificant compared with the noise power. The results show that the proposed SCA algorithm provides the lowest total transmit power compared with IWF and ZFB for a given per-UT rate target. We notice that IWF provides a similar performance to SCA, whereas ZFB results in the highest total transmit power for a given per-UT target rate. For example, SCA and IWF offer approximately 50 W of savings in total transmit power to achieve a per-UT rate target of 4.5 Mbits/s.
Fig. 3 Transmit power comparison of SCA, IWF, and ZFB in low-interference environments with different per-UT rate targets and d=40 km
Figure 4 compares SCA, IWF, and ZFB with a distance between cooperating BSs of d=20 km. The resulting average received CNR of 2.94 dB indicates a medium-interference environment where the interference power is comparable to the noise power. In this scenario, we notice that our proposed SCA algorithm achieves the lowest total transmit power for a given per-UT rate target. For a total transmit power of 120 W between cooperating BSs, we can see that IWF is limited to 19 Mbits/s per UT as a result of interference. Despite canceling the interference between the scheduled UTs, ZFB results in a higher total transmit power of 102 W, compared to 67 W for SCA, for a rate target of 24 Mbits/s per UT. This is due to the reduction in the effective channel gain of the scheduled UTs that results from performing ZFB on the channel gain matrices of the scheduled UTs in each subchannel.
Fig. 4 Transmit power comparison of SCA, IWF, and ZFB in medium-interference environments with different per-UT rate targets and d=20 km
Figure 5 compares SCA, IWF, and ZFB with the BS-to-BS separation distance set to d=5 km. The average received CNR of 21.12 dB models a high-interference environment where the noise power is insignificant compared to the interference power. The plot shows that ZFB results in the lowest total transmit power for a given per-UT target rate compared with SCA and IWF. This is because ZFB cancels the interference between scheduled UTs, which are selected by the semi-orthogonal user selection in each subchannel. For a total power constraint across the two cooperating BSs, we see that SCA and IWF can achieve a maximum of 45 and 25 Mbits/s per UT, respectively. From this comparison, we notice that SCA offers better interference management than IWF, with a lower total transmit power for a given per-UT rate target.
Fig. 5 Transmit power comparison of SCA, IWF, and ZFB in high-interference environments with different per-UT rate targets and d=5 km
Next, we investigate the relationship between the minimum achievable rate per UT and the coverage radius, r. To ensure that the UT distribution changes with the coverage radius, we place the UTs uniformly on the circumference of the coverage circle as the circle expands, simulating UTs scattered between the cooperating BSs. The minimum achievable rate refers to the minimum among the per-UT rate targets when the per-antenna powers are close to being fully utilized. The results are obtained from the average of 100 simulation iterations.
Figure 6 compares the minimum achievable rate of SCA, IWF, and ZFB with a BS-to-BS separation distance of d=40 km. In this scenario, we notice that SCA and IWF yield similar performance, whereas ZFB achieves the lowest minimum per-UT rate target. Comparing SCA and IWF, the minimum per-UT rate target increases with the coverage radius as the interference decreases.
Fig. 6 Minimum achievable rate comparison of SCA, IWF, and ZFB with a BS-to-BS separation of d=40 km
Figure 7 compares the minimum achievable rate of SCA, IWF, and ZFB with a BS-to-BS separation distance of d=20 km. The performance gap between SCA and IWF widens as the interference increases at each coverage radius. ZFB outperforms IWF when the coverage radius is less than 4.5 km, as the MUI is then the dominating factor in system performance. The altruistic approach of allocating power in SCA is able to outperform ZFB despite the MUI being completely eliminated by the beamformers.
Figure 8 compares the minimum achievable rate of SCA, IWF, and ZFB with a BS-to-BS separation distance of d=5 km. In this scenario, the performance of both SCA and IWF is limited by the severity of the MUI, and the minimum achievable rates are 48 and 20 Mbit/s, respectively. As expected, ZFB delivers the best performance, as the approach provides interference-free MIMO-OFDM spatial subchannels for the scheduled UTs. From these plots, we notice that the performance gap between SCA and IWF depends on the severity of the MUI. The egoistic approach of allocating powers in IWF results in lower performance compared to the interference-minimizing approach of SCA.
Fig. 8 Minimum achievable rate comparison of SCA, IWF, and ZFB with a BS-to-BS separation of d=5 km
Complexity analysis
The computational complexity of the proposed algorithm, IWF, and ZFB consists of two stages: (1) computation of beamformers for each MIMO-OFDM subchannel and (2) updates of the power allocation and Lagrange multipliers. We focus on the computational complexity of obtaining the beamformers, as the power and Lagrange-multiplier updates have fixed complexity and are negligible compared to the beamforming of each MIMO-OFDM subchannel. The computational complexity of each algorithm is calculated as follows:
For the proposed algorithm: The eigenbeamforming of each MIMO-OFDM subchannel is obtained by the SVD of the MIMO-OFDM subchannel \(\mathbf {H}_{n}^{k}\). The channel matrix \(\mathbf {H}_{n}^{k}\) is an L_R × ML_T complex matrix. Obtaining the SVD of each \(\mathbf {H}_{n}^{k}\) requires 8(4L_R²ML_T + 8L_R(ML_T)² + 9(ML_T)³) complex floating point operations [32]. The total number of complex floating point operations across all MIMO-OFDM subchannels and UTs is approximately
$$ \sum\limits_{k=1}^{K}\sum\limits_{n=1}^{N}\,8\left[4{L_{R}}^{2}{ML}_{T} + 8L_{R}\left({ML}_{T}\right)^{2} + 9\left({ML}_{T}\right)^{3}\right]. $$
Therefore, the overall computational complexity of the proposed algorithm is
$$ \mathcal{O}\left\{8KN\left[4{L_{R}}^{2}{ML}_{T} + 8L_{R}\left({ML}_{T}\right)^{2} + 9\left({ML}_{T}\right)^{3}\right]\right\}. $$
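The flop counts above can be tabulated with a few lines of code; this is only a bookkeeping sketch that plugs the simulation parameters of this section into the expressions, not part of the algorithm itself.

```python
def svd_flops(L_R, M, L_T):
    """Complex flops for the SVD of one L_R x (M*L_T) subchannel matrix [32]."""
    n_t = M * L_T
    return 8 * (4 * L_R**2 * n_t + 8 * L_R * n_t**2 + 9 * n_t**3)

def eigenbeamforming_flops(K, N, L_R, M, L_T):
    """Total SVD cost over all K UTs and N subchannels (proposed algorithm and IWF)."""
    return K * N * svd_flops(L_R, M, L_T)

# Simulation setting used in this section: K=5, N=32, M=3, L_T=2, L_R=2.
print(eigenbeamforming_flops(K=5, N=32, L_R=2, M=3, L_T=2))
```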
The overall computational complexity of IWF is approximately the same as the proposed algorithm since the eigenbeamforming is performed across all MIMO-OFDM subchannels, which is given by
$$ {\mathcal{O}}\left\{8KN\left[4{L_{R}}^{2}{ML}_{T} + 8L_{R}\left({ML}_{T}\right)^{2} + 9\left({ML}_{T}\right)^{3}\right]\right\}. $$
For ZFB with semi-orthogonal user selection: This algorithm consists of two stages: (1) semi-orthogonal user selection and (2) obtaining zero-forcing beamformers. The computational complexity of semi-orthogonal user selection is given by \(\mathcal {O}[KN(L_{T})^{3}]\) [33]. Finding zero-forcing beamformers involves block diagonalization across all the MIMO-OFDM subchannels, which can be obtained by performing SVD. The total number of complex floating point operations is approximately
$$ \sum\limits_{k=1}^{K}\sum\limits_{n=1}^{N}\,8\left[8L_{R}\left({ML}_{T}\right)^{2} + 9\left({ML}_{T}\right)^{3}\right]. $$
Therefore, the overall computational complexity of ZFB with semi-orthogonal user selection is given by
$$ \mathcal{O}\left\{KN\left[64L_{R}\left({ML}_{T}\right)^{2}+{L_{T}^{3}}+72\left({ML}_{T}\right)^{3}\right]\right\}. $$
In this paper, the individual UT rate targets are achieved by transforming a non-convex optimization problem into a tractable set of successive convex approximations. A convex lower bound is updated at each iteration to improve the approximation of the achievable rate region, and a dual Lagrange decomposition with a subgradient method efficiently obtains a locally optimal solution. Average power constraints are enforced on each antenna of all BSs, which helps manage the peak-power effects (arising from OFDM's inherently high PAPR) in all transmit high-power amplifiers. We envision this work to be most suited for small cells with low user mobility and, more importantly, for fixed-wireless applications in sparsely populated regions that require high data rates to UTs over very large network areas.
The effectiveness of our proposed SCA-based algorithm was demonstrated through a performance comparison of SCA with the alternative approaches of IWF in [24] and ZFB in [15]. Comparing SCA and IWF, we see that SCA provides a lower total transmit power and a higher minimum per-UT rate target relative to IWF across a range of interference environments. In general, we find that the higher the interference between UTs, the larger the difference in total transmit power and minimum per-UT target rate between SCA and IWF. As expected, ZFB performs well in high-interference environments as it provides interference-free subchannels for the scheduled UTs. However, the performance of ZFB is limited by the number of transmit antennas and the mutual orthogonality of the scheduled UTs' channel conditions. As such, we find that ZFB results in a higher total transmit power and a lower minimum achievable rate than SCA and IWF in both medium- and low-interference environments.
A Ghosh, R Ratasuk, B Mondal, N Mangalvedhe, T Thomas, LTE-Advanced: next-generation wireless broadband technology. IEEE Wirel. Commun. 17(3), 10–22 (2010).
M Sawahashi, Y Kishiyama, A Morimoto, D Nishikawa, M Tanno, Coordinated multipoint transmission/reception techniques for LTE-Advanced. IEEE Wirel. Commun. Mag. 17(3), 26–34 (2010).
D Lee, H Seo, B Clerckx, E Hardouin, D Mazzarese, SNK Sayana, Coordinated multipoint transmission and reception in LTE-Advanced: deployment scenarios and operational challenges. IEEE Commun. Mag. 50(2), 148–155 (2012).
R Irmer, H Droste, P Marsch, M Grieger, G Fettweis, S Brueck, H-P Mayer, L Thiele, V Jungnickel, Coordinated multipoint: concepts, performance, and field trial results. IEEE Commun. Mag. 49(2), 102–111 (2011).
J Lee, Y Kim, H Lee, BL Ng, D Mazzarese, J Liu, W Xiao, Y Zhou, Coordinated multipoint transmission and reception in LTE-Advanced systems. IEEE Commun. Mag. 50(11), 44–50 (2012).
S Kaviani, WA Krzymień, in Proc. IEEE Wireless Communications and Networking Conference. Sum rate maximization of MIMO broadcast channels with coordination of base stations (Las Vegas, 2008), pp. 1079–1084.
W Hardjawana, B Vucetic, Y Li, Multi-user cooperative base station systems with joint processing and beamforming. IEEE J. Sel. Topics Signal Process. 3(6), 1079–1093 (2009).
R Zhang, Cooperative multi-cell block diagonalization with per-base-station power constraints. IEEE J. Sel. Areas Commun. 28(9), 1435–1445 (2010).
CY Hsu, BS Krongold, in Proc. IEEE Global Communications Conference. Coordinated multi-point transmission of MIMO-OFDM system with per-antenna power constraints (Anaheim, 2012).
BS Krongold, K Ramchandran, DL Jones, Computationally efficient optimal power allocation algorithm for multicarrier communication systems. IEEE Trans. Commun. 48(1), 23–27 (2000).
MHM Costa, Writing on dirty paper. IEEE Trans. Inf. Theory. 29(3), 439–441 (1983).
DHN Nguyen, T Le-Ngoc, Sum-rate maximization in the multicell MIMO broadcast channel with interference coordination. IEEE Trans. Signal Process. 62(6), 1501–1513 (2014).
W Yu, W Rhee, S Boyd, JM Cioffi, Iterative water-filling for Gaussian vector multiple-access channels. IEEE Trans. Inf. Theory. 50(1), 145–152 (2004).
QH Spencer, AL Swindlehurst, M Haardt, Zero-forcing methods for downlink spatial multiplexing in multiuser MIMO channels. IEEE Trans. Signal Process. 52(2), 461–471 (2004).
S Kaviani, WA Krzymień, in Proc. IEEE Global Communications Conference. User selection for multiple-antenna broadcast channel with zero-forcing beamforming (New Orleans, 2008).
M Pischella, J-C Belfiore, Distributed resource allocation for rate-constrained users in multi-cell OFDMA networks. IEEE Commun. Lett. 12(4), 250–252 (2008).
C Hellings, M Joham, W Utschick, Gradient-based power minimization in MIMO broadcast channels with linear precoding. IEEE Trans. Signal Process. 60(2), 877–890 (2012).
J Papandriopoulos, JS Evans, SCALE: A low-complexity distributed protocol for spectrum balancing in multiuser DSL networks. IEEE Trans. Inf. Theory. 55(8), 3711–3724 (2009).
NU Hassan, M Assaad, in Proc. IEEE International Workshop on Signal Processing Advances in Wireless Communications. Optimal downlink beamforming and resource allocation in MIMO-OFDMA systems (Marrakech, 2011).
L Venturino, N Prasad, X Wang, Coordinated scheduling and power allocation in downlink multicell OFDMA networks. IEEE Trans. Veh. Technol. 58(6), 2835–2848 (2009).
H Zhu, J Wang, Chunk-based resource allocation in OFDMA systems—part i: chunk allocation. IEEE Trans. Commun. 57(9), 2734–2744 (2009).
H Zhu, J Wang, Chunk-based resource allocation in OFDMA systems—part ii: joint chunk, power and bit allocation. IEEE Trans. Commun. 60(2), 499–509 (2012).
S Boyd, L Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004).
M Kobayashi, G Caire, in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. Iterative waterfilling for weighted rate sum maximization in MIMO-OFDM broadcast channels, (2007).
A Goldsmith, Wireless Communications (Cambridge University Press, New York, 2005).
HR Anderson, Fixed Broadband Wireless System Design (Wiley, UK, 2003).
GL Stüber, J Barry, SW McLaughlin, YG Li, MA Ingram, TG Pratt, Broadband MIMO-OFDM wireless communications. Proc. IEEE. 92(2), 271–294 (2004).
R Horst, H Tuy, Global Optimization: Deterministic Approaches, 2nd edn (Springer, Berlin, 1993).
RD Yates, A framework for uplink power control in cellular radio systems. IEEE J. Sel. Areas Commun. 13(7), 1341–1347 (1995).
H Holma, A Toskala (eds.), WCDMA for UMTS: HSPA Evolution and LTE, 4th edn (Wiley, UK, 2007).
H Holma, A Toskala (eds.), LTE for UMTS: OFDMA and SC-FDMA Based Radio Access (Wiley, UK, 2009).
GH Golub, CF Van Loan, Matrix Computations (Johns Hopkins University Press, Baltimore, 1996).
J Mao, J Gao, Y Liu, G Xie, Simplified semi-orthogonal user selection for MU-MIMO systems with ZFBF. IEEE Wirel. Commun. Lett. 1(1), 42–45 (2012).
Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Australia
Chih-yu Hsu, Phee Lep Yeoh & Brian S. Krongold
Correspondence to Chih-yu Hsu.
Hsu, C., Yeoh, P. & Krongold, B.S. Power minimization for cooperative MIMO-OFDM systems with individual user rate constraints. J Wireless Com Network 2016, 43 (2016) doi:10.1186/s13638-016-0541-4
MIMO-OFDM
Successive convex approximation
Works by Jia Liu
Disambiguations:
Jia Liu [18] Jia-Bao Liu [13] Jian Liu [8] Jiang Liu [6]
Jiayi Liu [3] Jianping Liu [3] Jiamou Liu [3] Jianrong Liu [3]
See also:
Jiatong Liu
Jiawen Liu
Jianjiang Liu
Ethical Issues in Fecal Microbiota Transplantation in Practice. Yonghui Ma, Jiayu Liu, Catherine Rhodes, Yongzhan Nie & Faming Zhang - 2017 - American Journal of Bioethics 17 (5):34-45.
Fecal microbiota transplantation has demonstrated efficacy and is increasingly being used in the treatment of patients with recurrent Clostridium difficile infection. Despite a lack of high-quality trials to provide more information on the long-term effects of FMT, there has been great enthusiasm about the potential for expanding its applications. However, FMT presents many serious ethical and social challenges that must be addressed as part of a successful regulatory policy response. In this article, we draw on a sample of the scientific and bioethics literatures to examine clusters of ethical and social issues arising in five main areas: informed consent and the vulnerability of patients; determining what a "suitable healthy donor" is; safety and risk; commercialization and potential exploitation of vulnerable patients; and public health implications. We find that these issues are complex and worthy of careful consideration by health care professionals. Desperation of a patient should not be the basis for selecting treatment with FMT, and the patient's interests should always be of paramount concern. Authorities must prioritize development of appropriate and effective regulation of FMT to safeguard patients and donors, promote further research into safety and efficacy, and avoid abuse of the treatment.
Biomedical Ethics in Applied Ethics
RT₂² does not imply WKL₀. Jiayi Liu - 2012 - Journal of Symbolic Logic 77 (2):609-620.
We prove that RCA₀ + RT₂² ⊬ WKL₀ by showing that for any set C not of PA-degree and any set A, there exists an infinite subset G of A or Ā, such that G ⊕ C is also not of PA-degree.
Number of Spanning Trees in the Sequence of Some Graphs. Jia-Bao Liu & S. N. Daoud - 2019 - Complexity 2019:1-22.
Man's Pursuit of Meaning: Unexpected Termination Bolsters One's Autonomous Motivation in an Irrelevant Ensuing Activity. Wei Wei, Zan Mo, Jianhua Liu & Liang Meng - 2020 - Frontiers in Human Neuroscience 14.
Philosophy of Neuroscience in Philosophy of Cognitive Science
Tai Chi Chuan and Baduanjin Mind-Body Training Changes Resting-State Low-Frequency Fluctuations in the Frontal Lobe of Older Adults: A Resting-State fMRI Study. Jing Tao, Xiangli Chen, Jiao Liu, Natalia Egorova, Xiehua Xue, Weilin Liu, Guohua Zheng, Ming Li, Jinsong Wu, Kun Hu, Zengjian Wang, Lidian Chen & Jian Kong - 2017 - Frontiers in Human Neuroscience 11.
On Computation of Entropy of Hex-Derived Network. Pingping Song, Haidar Ali, Muhammad Ahsan Binyamin, Bilal Ali & Jia-Bao Liu - 2021 - Complexity 2021:1-18.
A graph's entropy is a functional one, based on both the graph itself and the distribution of probability on its vertex set. In the theory of information, graph entropy has its origins. Hex-derived networks have a variety of important applications in medication store, hardware, and system administration. In this article, we discuss hex-derived networks of type 1 and 2, written as HDN1(n) and HDN2(n), respectively, of order n. We also compute some degree-based entropies such as the Randić, ABC, and GA entropy of HDN1(n) and HDN2(n).
A Review of Artificial Intelligence (AI) in Education from 2010 to 2020. [REVIEW] Xuesong Zhai, Xiaoyan Chu, Ching Sing Chai, Morris Siu Yung Jong, Andreja Istenic, Michael Spector, Jia-Bao Liu, Jing Yuan & Yan Li - 2021 - Complexity 2021:1-18.
This study provided a content analysis of studies aiming to disclose how artificial intelligence has been applied to the education sector and explore the potential research trends and challenges of AI in education. A total of 100 papers including 63 empirical papers and 37 analytic papers were selected from the education and educational research category of Social Sciences Citation Index database from 2010 to 2020. The content analysis showed that the research questions could be classified into development layer, application layer, and integration layer. Moreover, four research trends, including Internet of Things, swarm intelligence, deep learning, and neuroscience, as well as an assessment of AI in education, were suggested for further investigation. However, we also proposed the challenges in education may be caused by AI with regard to inappropriate use of AI techniques, changing roles of teachers and students, as well as social and ethical issues. The results provide insights into an overview of the AI used for education domain, which helps to strengthen the theoretical foundation of AI in education and provides a promising channel for educators and AI engineers to carry out further collaborative research.
Ethics of Artificial Intelligence, Misc in Philosophy of Cognitive Science
The Influence of Perceived Organizational Support on Police Job Burnout: A Moderated Mediation Model. Xiaoqing Zeng, Xinxin Zhang, Meirong Chen, Jianping Liu & Chunmiao Wu - 2020 - Frontiers in Psychology 11.
Objective: Based on the theory of perceived organizational support (POS), conservation of resource (COR) and job demands-resources (JD-R) model, this study establishes a moderated mediation model to test the role of job satisfaction in mediating the relationship between perceived organizational support and job burnout, as well as the role of regulatory emotional self-efficacy in moderating the above mediating process. Method: A total of 784 police officers were surveyed with the Perceived Organizational Support Scale, the Job Burnout Questionnaire, the Regulatory Emotional Self-Efficacy Scale, and the Minnesota Job Satisfaction Questionnaire. Results: (1) After controlling for gender, seniority, age, police classification, education, and marital status, regression analysis showed a significant negative correlation between perceived organizational support and burnout (r = -0.42, p < 0.01), and the former had a significant negative predictive effect on job burnout (β = -0.42, p < 0.001). (2) The mediating effect test shows that job satisfaction plays a partial role in mediating the relationship between perceived organizational support and job burnout. (3) Through the analysis of the moderated mediation model test, regulatory emotional self-efficacy moderates the first half of the path of "perceived organizational support → job satisfaction → job burnout". Conclusion: Perceived organizational support not only directly affects police job burnout but also indirectly affects police job burnout through job satisfaction. Regulatory emotional self-efficacy enhances the influence of organizational support on job satisfaction. This study indicates the combined effect of perceived organizational support, job satisfaction and regulatory emotional self-efficacy on job burnout and has certain guiding significance for alleviating police job burnout. Keywords: Perceived organizational support, Job burnout, Job satisfaction, Regulatory emotional self-efficacy, Moderated mediation.
Study of Ion-Acoustic Solitary Waves in a Magnetized Plasma Using the Three-Dimensional Time-Space Fractional Schamel-KdV Equation. Min Guo, Chen Fu, Yong Zhang, Jianxin Liu & Hongwei Yang - 2018 - Complexity 2018:1-17.
Sex-Specific Functional Connectivity in the Reward Network Related to Distinct Gender Roles. Yin Du, Yinan Wang, Mengxia Yu, Xue Tian & Jia Liu - 2021 - Frontiers in Human Neuroscience 14.
Gender roles are anti-dichotomous and malleable social constructs that should theoretically be constructed independently from biological sex. However, it is unclear whether and how the factor of sex is related to neural mechanisms involved in social constructions of gender roles. Thus, the present study aimed to investigate sex specificity in gender role constructions and the corresponding underlying neural mechanisms. We measured gender role orientation using the Bem Sex-Role Inventory, used a voxel-based global brain connectivity method based on resting-state functional magnetic resonance imaging to characterize the within-network connectivity in the brain reward network, and analyzed how the integration of the reward network is related to gender role scores between sex groups. An omnibus analysis of voxel-wise global brain connectivity values within a two-level linear mixed model revealed that in female participants, femininity scores were positively associated with integration in the posterior orbitofrontal cortex and subcallosal cortex, whereas masculinity scores were positively associated with integration in the frontal pole. By contrast, in male participants, masculinity was negatively correlated with integration in the nucleus accumbens and subcallosal cortex. For the first time, the present study revealed the sex-specific neural mechanisms underlying distinct gender roles, which elucidates the process of gender construction from the perspective of the interaction between reward sensitivity and social reinforcement.
The Effect of Reviewers' Self-Disclosure of Personal Review Record on Consumer Purchase Decisions: An ERPs Investigation. Jianhua Liu, Zan Mo, Huijian Fu, Wei Wei, Lijuan Song & Kewen Luo - 2021 - Frontiers in Psychology 11.
Personal review record, as a form of personally identifiable information, refers to the past review information of a reviewer. The disclosure of reviewers' personal information on electronic commerce websites has been found to substantially impact consumers' perception regarding the credibility of online reviews. However, personal review record has received little attention in prior research. The current study investigated whether the disclosure of personal review record influenced consumers' information processing and decision making by adopting event-related potentials measures, as ERPs allow for a nuanced examination of the neural mechanisms that underlie cognitive processes. At the behavioral level, we found that the purchase rate was higher and that the reaction time was shorter when the review record was disclosed, indicating that the disclosed condition was more favorable to the participants. Moreover, ERPs data showed that the disclosed condition induced an attenuated N400 component and an increased LPP component relative to the undisclosed condition, suggesting that the former condition gave rise to less cognitive and emotional conflict and to more positive evaluations. Thus, by elucidating potential cognitive and neural underpinnings, this study demonstrates the positive impact of reviewers' disclosure of personal review record on consumers' purchase decisions.
Anterior cingulate cortex-related connectivity in first-episode schizophrenia: a spectral dynamic causal modeling study with functional magnetic resonance imaging. Long-Biao Cui, Jian Liu, Liu-Xian Wang, Chen Li, Yi-Bin Xi, Fan Guo, Hua-Ning Wang, Lin-Chuan Zhang, Wen-Ming Liu, Hong He, Ping Tian, Hong Yin & Hongbing Lu - 2015 - Frontiers in Human Neuroscience 9.
Understanding the neural basis of schizophrenia (SZ) is important for shedding light on the neurobiological mechanisms underlying this mental disorder. Structural and functional alterations in the anterior cingulate cortex (ACC), dorsolateral prefrontal cortex (DLPFC), hippocampus, and medial prefrontal cortex (MPFC) have been implicated in the neurobiology of SZ. However, the effective connectivity among them in SZ remains unclear. The current study investigated how neuronal pathways involving these regions were affected in first-episode SZ using functional magnetic resonance imaging (fMRI). Forty-nine patients with a first-episode of psychosis and diagnosis of SZ—according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision—were studied. Fifty healthy controls (HCs) were included for comparison. All subjects underwent resting state fMRI. We used spectral dynamic causal modeling (DCM) to estimate directed connections among the bilateral ACC, DLPFC, hippocampus, and MPFC. We characterized the differences using Bayesian parameter averaging (BPA) in addition to classical inference (t-test). In addition to common effective connectivity in these two groups, HCs displayed widespread significant connections predominantly involved in ACC not detected in SZ patients, but SZ showed few connections. Based on BPA results, SZ patients exhibited anterior cingulate cortico-prefrontal-hippocampal hyperconnectivity, as well as ACC-related and hippocampal-dorsolateral prefrontal-medial prefrontal hypoconnectivity. In summary, spectral DCM revealed the pattern of effective connectivity involving ACC in patients with first-episode SZ. This study provides a potential link between SZ and dysfunction of ACC, creating an ideal situation to associate mechanisms behind SZ with aberrant connectivity among these cognition and emotion-related regions.
Causal Modeling in Epistemology
Schizophrenia in Philosophy of Cognitive Science
The Evolution and Determinants of Interorganizational Coinvention Networks in New Energy Vehicles: Evidence from Shenzhen, China. Jia Liu, Zhaohui Chong & Shijian Lu - 2021 - Complexity 2021:1-12.
With the increasing attention to climate change, air pollution, and related public health issues, China's new energy vehicles industry has developed rapidly. However, few studies investigated the evolution of interorganizational collaborative innovation networks in the sector domain of NEVs and the influence of different drivers on the establishment of innovation relationships. In this context, this paper uses the joint invention patent of Shenzhen, a low-carbon pilot city of China, to investigate the dynamics of network influencing factors. The social network analysis shows that the scale of coinvention network of NEVs is constantly increasing, which is featured with diversified cooperative entities, and collaboration depth is also expanding. The empirical results from the Exponential Random Graph Model demonstrate that, with the deepening of collaborative innovation, technological upgrading caused by knowledge exchange makes organizations in the network more inclined to cognitive proximity and less dependent on geographical proximity. In addition, organizational proximity and triadic closure contribute positively to the collaborative network, with their relevance remaining nearly the same, while the impeding effect of cultural/language difference is slightly decreasing with time.
Automatic Change Detection of Emotional and Neutral Body Expressions: Evidence From Visual Mismatch Negativity. Xiaobin Ding, Jianyi Liu, Tiejun Kang, Rui Wang & Mariska E. Kret - 2019 - Frontiers in Psychology 10.
Individual differences in cortical face selectivity predict behavioral performance in face recognition. Lijie Huang, Yiying Song, Jingguang Li, Zonglei Zhen, Zetian Yang & Jia Liu - 2014 - Frontiers in Human Neuroscience 8.
The Fusiform Face Area Plays a Greater Role in Holistic Processing for Own-Race Faces Than Other-Race Faces. Guifei Zhou, Jiangang Liu, Naiqi G. Xiao, Si Jia Wu, Hong Li & Kang Lee - 2018 - Frontiers in Human Neuroscience 12.
Psychopathy Moderates the Relationship between Orbitofrontal and Striatal Alterations and Violence: The Investigation of Individuals Accused of Homicide. Bess Y. H. Lam, Yaling Yang, Robert A. Schug, Chenbo Han, Jianghong Liu & Tatia M. C. Lee - 2017 - Frontiers in Human Neuroscience 11.
Psychopathy, Misc in Philosophy of Cognitive Science
Effective Approach to Calculate Analysis Window in Infinite Discrete Gabor Transform. Rui Li, Yong Huang & Jia-Bao Liu - 2018 - Complexity 2018:1-10.
The long-periodic/infinite discrete Gabor transform is more effective than the periodic/finite one in many applications. In this paper, a fast and effective approach is presented to efficiently compute the Gabor analysis window for arbitrary given synthesis window in DGT of long-periodic/infinite sequences, in which the new orthogonality constraint between analysis window and synthesis window in DGT for long-periodic/infinite sequences is derived and proved to be equivalent to the completeness condition of the long-periodic/infinite DGT. By using the property of delta function, the original orthogonality can be expressed as a certain number of linear equation sets in both the critical sampling case and the oversampling case, which can be fast and efficiently calculated by fast discrete Fourier transform. The computational complexity of the proposed approach is analyzed and compared with that of the existing canonical algorithms. The numerical results indicate that the proposed approach is efficient and fast for computing Gabor analysis window in both the critical sampling case and the oversampling case in comparison to existing algorithms.
How and why non-balanced reciprocity differently influence employees' compliance behavior: The mediating role of thriving and the moderating roles of perceived cognitive capabilities of artificial intelligence and conscientiousness. Nan Zhu, Yuxin Liu, Jianwei Zhang, Jia Liu, Jun Li, Shuai Wang & Habib Gul - 2022 - Frontiers in Psychology 13.
Previous studies have paid more attention to the impact of non-balanced reciprocity in the organization on employees' behaviors and outcomes, and have expected that the reciprocity norm could improve employees' compliance behavior. However, there are two distinct types of non-balanced reciprocity, and whether generalized reciprocity affects employees' compliance behavior rather than negative reciprocity and its mechanisms has not been further explored so far. Building on the social exchange theory and cognitive appraisal theory, we established and examined a model in a scenario-based experiment across a two-stage survey of 316 participants. In this article, we propose that generalized reciprocity positively influences employees' compliance behavior, and thriving at work mediates its relationship. Furthermore, we argue that the positive association between generalized reciprocity and thriving at work is moderated by the perceived cognitive capabilities of artificial intelligence. This association is amplified for people high in the perceived cognitive capabilities of AI. We also propose that the positive association between thriving at work and compliance behavior is moderated by conscientiousness, such that the association is amplified for people high in conscientiousness. These findings have theoretical and practical implications.
A superhigh diamond in the c.e. tt-degrees. Douglas Cenzer, Johanna Ny Franklin, Jiang Liu & Guohua Wu - 2011 - Archive for Mathematical Logic 50 (1-2):33-44.
The notion of superhigh computably enumerable (c.e.) degrees was first introduced by (Mohrherr in Z Math Logik Grundlag Math 32: 5–12, 1986) where she proved the existence of incomplete superhigh c.e. degrees, and high, but not superhigh, c.e. degrees. Recent research shows that the notion of superhighness is closely related to algorithmic randomness and effective measure theory. Jockusch and Mohrherr proved in (Proc Amer Math Soc 94:123–128, 1985) that the diamond lattice can be embedded into the c.e. tt-degrees preserving 0 and 1 and that the two atoms can be low. In this paper, we prove that the two atoms in such embeddings can also be superhigh.
Logic and Philosophy of Logic, Miscellaneous in Logic and Philosophy of Logic
Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants. Maojin Liang, Junpeng Zhang, Jiahao Liu, Yuebo Chen, Yuexin Cai, Xianjun Wang, Junbo Wang, Xueyuan Zhang, Suijun Chen, Xianghui Li, Ling Chen & Yiqing Zheng - 2017 - Frontiers in Human Neuroscience 11.
Infima of d.r.e. degrees. Jiang Liu, Shenling Wang & Guohua Wu - 2010 - Archive for Mathematical Logic 49 (1):35-49.
Lachlan observed that the infimum of two r.e. degrees considered in the r.e. degrees coincides with the one considered in the ${\Delta_2^0}$ degrees. It is not true anymore for the d.r.e. degrees. Kaddah proved in (Ann Pure Appl Log 62(3):207–263, 1993) that there are d.r.e. degrees a, b, c and a 3-r.e. degree x such that a is the infimum of b, c in the d.r.e. degrees, but not in the 3-r.e. degrees, as a < x < b, c. In this paper, we extend Kaddah's result by showing that such a structural difference occurs densely in the r.e. degrees. Our result immediately implies that the isolated 3-r.e. degrees are dense in the r.e. degrees, which was first proved by LaForte.
Areas of Mathematics in Philosophy of Mathematics
Positive Periodic Solutions for a Class of Strongly Coupled Differential Systems with Singular Nonlinearities. Ruipeng Chen, Guangchen Zhang & Jiayin Liu - 2021 - Complexity 2021:1-6.
This article studies the existence of positive periodic solutions for a class of strongly coupled differential systems. By applying the fixed point theory, several existence results are established. Our main findings generalize and complement those in the literature studies.
Hamilton Connectivity of Convex Polytopes with Applications to Their Detour Index. Sakander Hayat, Asad Khan, Suliman Khan & Jia-Bao Liu - 2021 - Complexity 2021:1-23.
A connected graph is called Hamilton-connected if there exists a Hamiltonian path between any pair of its vertices. Determining whether a graph is Hamilton-connected is an NP-complete problem. Hamiltonian and Hamilton-connected graphs have diverse applications in computer science and electrical engineering. The detour index of a graph is defined to be the sum of lengths of detours between all the unordered pairs of vertices. The detour index has diverse applications in chemistry. Computing the detour index for a graph is also an NP-complete problem. In this paper, we study the Hamilton-connectivity of convex polytopes. We construct three infinite families of convex polytopes and show that they are Hamilton-connected. An infinite family of non-Hamilton-connected convex polytopes is also constructed, which, in turn, shows that not all convex polytopes are Hamilton-connected. By using Hamilton connectivity of these families of graphs, we compute exact analytical formulas of their detour index.
Study of Mental Health Status of the Resident Physicians in China During the COVID-19 Pandemic. Shuang-Zhen Jia, Yu-Zhen Zhao, Jia-Qi Liu, Xu Guo, Mo-Xian Chen, Shao-Ming Zhou & Jian-Li Zhou - 2022 - Frontiers in Psychology 13.
Objective: Investigating the mental health status of Chinese resident physicians during the 2019 new coronavirus outbreak. Methods: A cluster sampling method was adopted to collect all China-wide resident physicians during the epidemic period as the research subjects. The Symptom Checklist-90 self-rating scale was used to assess mental health using WeChat electronic questionnaires. Results: In total, 511 electronic questionnaires were recovered, all of which were valid. The negative psychological detection rate was 93.9%. Among the symptoms on the self-rating scale, more than half of the Chinese resident physicians had mild to moderate symptoms of mental unhealthiness, and a few had asymptomatic or severe unhealthy mental states. In particular, the detection rate of abnormality was 88.3%, obsessive-compulsive symptoms was 90.4%, the sensitive interpersonal relationship was 90.6%, depression abnormality was 90.8% /511), anxiety abnormality was 88.3%, hostility abnormality was 85.3%, terror abnormality was 84.9%, paranoia abnormality was 86.9%, psychotic abnormalities was 89.0%, and abnormal sleeping and eating status was 90.8%. The scores of various psychological symptoms of pediatric resident physicians were significantly lower than those of non-pediatrics. Conclusion: The new coronavirus epidemic has a greater impact on the mental health of Chinese resident physicians.
Narcissistic Enough to Challenge: The Effect of Narcissism on Change-Oriented Organizational Citizenship Behavior. Yi Lang, Hongyu Zhang, Jialin Liu & Xinyu Zhang - 2022 - Frontiers in Psychology 12.
During the COVID-19 pandemic, organizations need to effectively manage changes, and employees need to proactively adapt to these changes. The present research investigated when and how individual employees' narcissism was related to their change-oriented organizational citizenship behavior. Specifically, based on a trait activation perspective, this research proposed the hypotheses that individual employees' narcissism and environmental uncertainty would interactively influence employees' change-oriented organizational citizenship behavior via felt responsibility for constructive change; furthermore, the effect of narcissism on change-oriented organizational citizenship behavior via felt responsibility for constructive change would be stronger when the environmental uncertainty prompted by the COVID-19 pandemic was high rather than low. Two studies were conducted to test these hypotheses: an online survey of 180 employees in mainland China and a field study of 167 leader–follower dyads at two Chinese companies. The current research reveals a bright side of narcissism, which has typically been recognized as a dark personality trait, and enriches the understanding of the antecedents of change-oriented organizational citizenship behavior. This research can also guide organizations that wish to stimulate employee proactivity.
A corpus-based study on Chinese and American students' rhetorical moves and stance features in dissertation abstracts. Yingliang Liu, Xuechen Hu & Jiaying Liu - 2022 - Frontiers in Psychology 13.
Dissertation is the most important research genre for graduate students as they step into the academic community. The abstract found at the beginning of the dissertation is an essential part of the dissertation, serving to "sell" the study and impress the readers. Learning to compose a well-organized abstract to promote one's research is therefore an important skill for novice writers when they step into the academic community in their discipline. By comparing 112 dissertation abstracts in material science by Chinese and American doctoral students, this study attempts to analyze not only the rhetorical moves of dissertation abstracts but also the lexical-grammatical features of stance in different abstract moves. The findings show that most of the abstracts include five moves, namely, Situating the research, Presenting the research, Describing the methodology, Summarizing the findings, and Discussing the research. However, fewer abstracts by Chinese students include all five moves. In addition, the choices of stance expressions by the two groups vary across the five abstract moves for different communication purposes. The results of this study have pedagogical implications for facilitating the development of academic writing skills for L2 writers.
Applying Social Cognitive Theory in Predicting Physical Activity Among Chinese Adolescents: A Cross-Sectional Study With Multigroup Structural Equation Model. Jianxiu Liu, Muchuan Zeng, Dizhi Wang, Yao Zhang, Borui Shang & Xindong Ma - 2022 - Frontiers in Psychology 12.
This cross-sectional study aimed to assess the applicability of social cognitive determinants among the Chinese adolescents and examine whether the predictability of the social cognitive theory model on physical activity differs across gender and urbanization. A total of 3,000 Chinese adolescents ranging between the ages of 12–15 years were randomly selected to complete a set of questionnaires. Structural equation modeling was applied to investigate the relationships between social cognitive variables and PA in the urbanization and gender subgroups. The overall model explained 38.9% of the variance in PA. Fit indices indicated that the structural model of SCT was good: root mean square error of approximation = 0.047, RMR = 0.028, goodness of fit index = 0.974, adjusted goodness of fit index = 0.960, Tucker–Lewis coefficient = 0.971, and comparative fit index = 0.978. Regarding the subgroup analysis, social support had a more substantial impact on the PA of adolescents in suburban areas than that in urban areas, whereas self-regulation had a more substantial impact on the PA of adolescents in urban areas than in suburban areas. The results indicate that the SCT model predicts the PA of Chinese adolescents substantially. An SCT model could apply over a range of subgroups to predict the PA behavior and should be considered comprehensively when designing interventions. These findings would benefit PA among the Chinese adolescents, especially across genders and urbanization.
Assessing the Complexity of Intelligent Parks' Internet of Things Big Data System. Jialu Liu, Renzhong Guo, Zhiming Cai, Wenjian Liu & Wencai Du - 2021 - Complexity 2021:1-12.
Today, intelligence in all walks of life is developing at an unexpectedly fast speed. The complexity of the Internet of Things big data system of intelligent parks is analyzed to unify the information transmission of various industries, such as smart transportation, smart library, and smart medicine, thereby diminishing information islands. The traditional IoT systems are analyzed; on this basis, a relay node is added to the transmission path of the data information, and an intelligent park IoT big data system is constructed based on relay cooperation with a total of three hops. Finally, the IoT big data system is simulated and tested to verify its complexity. Results of energy efficiency analysis suggest that when the power dividing factor is 0.5, 0.1, and 0.9, the energy efficiency of the IoT big data system first increases and then decreases as α0 increases, where the maximum value appears when α0 is about 7 J. Results of outage probability analysis demonstrate that the system's simulation result is basically the same as that of the theoretical result. Under the same environment, the more hop paths the system has, the more the number of relays is; moreover, the larger the fading index m, the better the system performance, and the lower the outage possibility. Results of transmission accuracy analysis reveal that the IoT big data system can provide a result that is the closest to the actual result when the successful data transmission probability is 100%, and the parameter λ values are between 0.01 and 0.05; in the meantime, the delay of successful data transmission is reduced gradually. In summary, the wireless relay cooperation transmission technology can reduce the outage probability and data transmission delay probability of the IoT big data system in the intelligent park by adding the multihop path, thereby improving the system performance. The above results can provide an experimental basis for exploring the complexity of IoT systems in intelligent parks.
Dong fang mei dian: 20 shi ji "Zhongguo yi shu jing shen" wen ti yan jiu = Dongfang meidian. Jianping Liu - 2017 - Beijing Shi: Ren min chu ban she.
Personalized recommendation system based on social tags in the era of Internet of Things. Jianshun Liu, Wenkai Ma, Gui Li & Jie Dong - 2022 - Journal of Intelligent Systems 31 (1):681-689.
With the rapid development of the Internet, recommendation systems have received widespread attention as an effective way to solve information overload. Social tagging technology can both reflect users' interests and describe the characteristics of the items themselves, making group recommendation thus becoming a recommendation technology in urgent demand nowadays. In traditional tag-based recommendation systems, the general processing method is to calculate the similarity and then rank the recommended items according to the similarity. Without considering the influence of continuous user behavior, in this article, we propose a personalized recommendation algorithm based on social tags by combining the ideas of Markov chain and collaborative filtering. This algorithm splits the three-dimensional relationship of into two two-dimensional relationships of and. The user's interest degree to the tags is calculated by the Markov chain model, and then the items corresponding to them are matched by the recommended tag set. The influence between tags is used to model the satisfaction of items based on the correlation between the tags contained in the matched items, and collaborative filtering is used to complete the sparse values when calculating the interest and satisfaction between user–tags and user–items to improve the accuracy of recommendations. The experiments show that in the publicly available dataset, the personalized recommendation algorithm proposed in this article has significantly improved in accuracy and recall rate compared with the existing algorithms.
The Development of Social Function Questionnaire for Chinese Older Adults. Conghui Liu, Yunping Wang, Jing Li, Xinzhu Xing, Xiaojuan Chen, Jianbing Liu & Xuanna Wu - 2022 - Frontiers in Psychology 13.
Objectives: Social function is an important indicator for physical and psychological health of older adults. However, there is a lack of a standardized questionnaire for measuring social function of older adults. This study developed a questionnaire to assess Chinese older adults' social function. Methods: We used three samples to test the reliability and validity of the questionnaire. Results: Based on exploratory and confirmatory factor analyses with two samples, the final version of Social Function Questionnaire for Chinese Older Adults contained three dimensions with 12 items: social support, social adaptation, and social engagement. Criterion validity test with the third sample showed that SFQCOA was positively related to the healthy indices and negatively related to the unhealthy indices. Conclusion: The validity and reliability of the questionnaire reach the requirements of psychometric standards, suggesting it is an effective tool for measuring social function of older adults.
The Laplacian Spectrum, Kirchhoff Index, and the Number of Spanning Trees of the Linear Heptagonal Networks. Jia-Bao Liu, Jing Chen, Jing Zhao & Shaohui Wang - 2022 - Complexity 2022:1-10.
Let H_n be the linear heptagonal networks with 2n heptagons. We study the structure properties and the eigenvalues of the linear heptagonal networks. According to the Laplacian polynomial of H_n, we utilize the method of decompositions. Thus, the Laplacian spectrum of H_n is created by eigenvalues of a pair of matrices: L_A and L_S of order numbers 5n+1 and 4n+1, respectively. On the basis of the roots and coefficients of their characteristic polynomials of L_A and L_S, we get not only the explicit forms of Kirchhoff index but also the corresponding total number of spanning trees of H_n.
Xin Shi Qi Nong Cun Dao de Jian She Yan Jiu. Jianrong Liu - 2004 - Zhongguo She Hui Ke Xue Chu Ban She.
Ethics and Culture in Value Theory, Miscellaneous
Zhuangzi de xian dai ming yun. Jianmei Liu - 2013 - Xianggang: Da shan wen hua chu ban she you xian gong si.
Zong Jiao Yu Sheng Si: Zong Jiao Zhe Xue Lun Ji. Jiancheng Liu - 2011 - Xiu Wei Zi Xun Ke Ji Gu Fen You Xian Gong Si.
Death is the ultimate question of life. Whether death is extinction or continuation, it is the destination, or the passage, that life cannot avoid. Facing death is a fundamental life responsibility of being-in-the-world; it can be ignored and it can be postponed, but in the end it cannot be avoided. Religion is humanity's ultimate concern, and the nature of death and its settlement constitute the core of religious concern.
Religious Topics in Philosophy of Religion
Function of Perceived Corporate Social Responsibility in Safety of Sports Activities and Home Aerobic Equipment in the Late Period of COVID-19. Lang Ma, Jiang Liu, Yicheng Liu, Yue Zhang & Chunmei Yang - 2022 - Frontiers in Psychology 13.
The pandemic has impacted various industries, including the sports industry. However, corporate social responsibility can mitigate the adverse effects of the crisis and promote the sports industry. To analyze the effect of CSR, the study examined the impact of perceived corporate social responsibility on injury prevention expectation, injury risk perception, and health up-gradation with the mediation of sports safety measures. There are 259 sportsmen of local sports bodies provided the data through a self-administered survey. Data analysis was conducted through Smart-PLS and SEM techniques. The outcome of the analysis showed that perceived corporate social responsibility leads to injury prevention expectation, injury risk perception, and health up-gradation. Also, the study found that sports safety measure mediates the relationship between perceived corporate social responsibility and injury prevention expectation, between perceived corporate social responsibility and injury risk perception, and between perceived corporate social responsibility and health up-gradation among sportsmen of local sports bodies. The theoretical implications were presented related to the significance of CSR and sports safety measure and their impact on sportsmen injury prevention expectation, health, and risk perception. The practical implications were related to the management of local sports bodies and how they can induce CSR initiatives and programs. Some limitations related to sample size, incorporating other variables, examining the model in other contexts, and using different study designs, have also been mentioned in the study.
Experimental study of transcranial pulsed current stimulation on relieving athlete's mental fatigue. Yangyang Shen, Jian Liu, Xinming Zhang, Qingchang Wu & Hu Lou - 2022 - Frontiers in Psychology 13.
Objective: To explore the effect of independently developed transcranial pulsed current stimulation on alleviating athlete's mental fatigue. Methods: A total of 60 college athletes were randomly divided into the active stimulation group and the sham stimulation group. Subjective questionnaires, behavior test, and functional near-infrared spectroscopy test were conducted before and after the experiment. Two-way ANOVA with repeated measures was used to compare the differences in mental fatigue indexes before and after the two experimental conditions. Results: After 7 days of exercise training, there was a significant difference in the main effect of the time factor in all indexes of the two groups. The scores of rated perceived exertion scale, positive and negative affect schedule, critical flicker frequency, and reaction time, in the tPCS treatment group, were better than those in the sham stimulation group. After 7 days of exercise training, all the subjects had different degrees of athlete's mental fatigue; the subjects in the active stimulation group have a good evaluation of the tPCS developed by the research group without adverse actions. Conclusion: tPCS intervention can improve emotional state, reduce the subjective evaluation of fatigue, improve behavioral levels such as attention and reaction time and increase cerebral prefrontal blood flow and oxygen supply.
The Moderation of Human Characteristics in the Control Mechanisms of Rumours in Social Media: The Case of Food Rumours in China.Sangluo Sun, Xiaowei Ge, Xiaowei Wen, Fernando Barrio, Ying Zhu & Jiali Liu - 2022 - Frontiers in Psychology 12.details
Social networks are widely used as a fast and ubiquitous information-sharing medium. The mass spread of food rumours has seriously invaded public's healthy life and impacted food production. It can be argued that the government, companies, and the media have the responsibility to send true anti-rumour messages to reduce panic, and the risks involved in different forms of communication to the public have not been properly assessed. The manuscript develops an empirical analysis model from 683 food anti-rumour cases and 7,967 (...) data of the users with top comments to test the influence of the strength of rumour/anti-rumour on rumour control. Furthermore, dividing the users into three categories, Leaders, Chatters, and General Public, and study the influence of human characteristics on the relationship between the strength of rumour/anti-rumour and rumour control by considering the different human characteristics as moderator variables. The results showed that anti-rumours have a significant positive impact on the control of rumours; the ambiguity of rumours has a significant negative impact on the Positive Comment Index in rumour control. Further, the Leaders increased the overall level of PCI, but negatively adjusted the relationship between evidence and PCI; the Chatters and the General Public reduced the overall level of PCI, and Chatters weakened the relationship between the specific type of anti-rumour form and PCI while the General Public enhanced the relationship between the specific type of anti-rumour form and PCI. In the long run, the role of Leaders needs to be further improved, and the importance of the General Public is growing in the food rumour control process. (shrink)
Experimental Research on Aerated Supercavitation Suppression of Capillary Outlet Throttling Noise.Qianxu Wang, Shouchuan Wang, Huan Zhang, Yuxuan Wang, Junhai Zhou, Panpan Zhao & Jia-Bao Liu - 2022 - Complexity 2022:1-11.details
The aim of this work is the reduction of the throttling noise when the capillary is used as a throttling device. Based on the theory of bubble dynamics, two-phase flow, and aerated supercavitation, four different sizes of aerated devices used in refrigerator refrigeration systems are designed. Throttling noise and the temperature and pressure of inlet and outlet of the capillary are measured under stable operation. To compare the noise suppression effects in different groups of experiments, we introduced the cavitation number (...) to analyze, revealed the principle of aerated supercavitation to suppress noise, and combined the results of Fluent simulations to get the relationship between the noise suppression effect and the aerated quality. The experimental results showed that the aerated device can obviously suppress the throttling noise of the capillary outlet, up to 2.63 dB, which provides a new way for reducing the capillary throttling noise. (shrink)
How does teacher-perceived principal leadership affect teacher self-efficacy between different teaching experiences through collaboration in China? A multilevel structural equation model analysis based on threshold.Zhiyong Xie, Rongxiu Wu, Hongyun Liu & Jian Liu - 2022 - Frontiers in Psychology 13.details
Teacher self-efficacy is one of the most critical factors influencing Students' learning outcomes. Studies have shown that teacher-perceived principal leadership, teacher collaboration, and teaching experience are the critical factor that affects teacher self-efficacy. However, little is known about the mechanisms behind this relationship. This study examined whether teacher collaboration would mediate the relationship between teacher-perceived principal leadership and teacher self-efficacy, and the moderating role of teaching experience in the mediating process. With an analysis of a dataset from 14,121 middle school (...) teachers in China, this study first testified to the positive role that teacher-perceived principal leadership played in teacher self-efficacy. Furthermore, it revealed that teacher collaboration mediates this relationship and the mediated path was moderated by teaching experience. Finally, it also indicated that the threshold of teaching experience linking the teacher-perceived leadership with teacher self-efficacy was approximately in the third year, and their relationship was stronger when teaching experience was below the threshold. This study highlighted the mediating and moderating mechanisms linking the teacher-perceived principal leadership and teacher self-efficacy, which has important theoretical and practical implications for intervention and enhancement of teacher self-efficacy. (shrink)
Window Opening Behavior of Residential Buildings during the Transitional Season in China's Xi'an.Xiaolong Yang, Jiali Liu, Qinglong Meng, Yingan Wei, Yu Lei, Mengdi Wu, Yuxuan Shang, Liang Zhang & Yingchen Lian - 2022 - Complexity 2022:1-16.details
Window opening behavior in residential buildings has important theoretical significance and practical value for improving energy conservation, indoor thermal comfort, and indoor air quality. Climate and cultural differences may lead to different window opening behavior by residents. Currently, research on residential window opening behavior in northwest China has focused on indoor air quality, and few probabilistic models of residential window behaviors have been established. Therefore, in this study, we focused on an analysis of factors influencing window opening behavior and the (...) establishment of a predictive model for window opening behavior. Four typical residential buildings in different locations and building types in Xi'an were selected. The indoor and outdoor environments and window opening states were measured. Subsequently, a multivariate analysis of variance was used to determine the factors that had a significant effect on window opening behavior. Single- and multiparameter logistic regression models for window opening behavior were established. Of all the measured factors, we found that indoor temperature and CO2 concentration, outdoor temperature, and relative humidity had significant effects on window opening behavior, and indoor relative humidity and noise did not. Meanwhile, the temperature was positively correlated with the window opening probability, whereas indoor CO2 concentration and outdoor relative humidity were negatively correlated. The prediction accuracy of the multiparameter model was promising, at almost 75%, and the model can provide theoretical support for modelling residential buildings in Xi'an. (shrink)
An Empirical Study on the Improvement of College Students' Employability Based on University Factors.Yi-Cheng Zhang, Yang Zhang, Xue-li Xiong, Jia-Bao Liu & Rong-Bing Zhai - 2022 - Frontiers in Psychology 13.details
With the popularization of higher education and the promotion of college enrollment expansion, the number of college graduates increases sharply. At the same time, the continuous transformation and upgrading of the industrial structure put forward higher requirements on the employability of college students, which leads to the imbalance between supply and demand in the labor market. The key to dealing with employment difficulties lie in the improvement of college students' employability. Therefore, we make a regression analysis of 263 valid samples (...) from universities in Anhui Province and extract the factors that influence the improvement of college students' employability in the process of talent cultivation in university. The result shows that there is a positive correlation between course setting, course teaching, club activities, and college students' employability, among which the course teaching and club activities are the most critical factors which may influence college students' employability. In addition, from the viewpoint of individual college students, the overall grades of college students and the time of participating in the internship are also closely related to their employability, i.e., college students with good overall grades and long internship time should also have stronger employability. (shrink)
Research on the Resilience Evaluation and Spatial Correlation of China's Sports Regional Development Under the New Concept.Jing Zhang, Jing-Ru Gan, Ying Wu, Jia-Bao Liu, Su Zhang & Bin Shao - 2022 - Frontiers in Psychology 12.details
In order to fully implement the new development concept, bring into full play the potential of sports development, and maintain the resilience of China's sports development. This paper studies the resilience evaluation and spatial correlation of Chinese sports development under the new development concept. First, we constructed Resilience Evaluation Indexes System for Sports Development in China based on the analysis of the resilience features of sports development and the DPSIR model, which is from the five aspects of "driving force – (...) pressure – state – influence – response." Second, used Coefficient of Variation and Technique for Order Preference by Similarity to an Ideal Solution Method to measure the resilience level of sports development in 31 provinces in China from 2013 to 2017. Then, we introduced the obstacle degree model to identify the obstacle factors that hinder the resilience of Chinese sports development in different periods. Finally, we used the global and local Moran indexes to analyze the spatial correlation of China sports regional development. The results showed that: overall, the development level of sports resilience in 31 provinces in China showed an upward trend from 2013 to 2017, while some provinces showed obvious fluctuations. The obstacles to the development of sports resilience in China mainly include sports scientific research equipment, the number of national fitness monitoring stations, the number of national fitness centers, the full-time equivalent of personnel, and the number of sports scientific research projects. The response subsystem is the main obstacle factor that affects the improvement of the resilience level of sports development in China. There is a positive spatial autocorrelation between the resilience level of sports development and regional spatial distribution, and the correlation shows a weakening trend, and the internal difference is significant. Finally, we concluded that we must take the new development philosophy as the guiding principle. First, we should stick to innovation-driven development to fully upgrade the resilience of China's sports development. Second, we should adhere to the principle of coordinated development to promote the overall and balanced development of sports. Lastly, we should promote shared development so as to deliver benefits for all in an equal way. (shrink)
How Could Policies Facilitate Digital Transformation of Innovation Ecosystem: A Multiagent Model.Wei Yang, Jian Liu, Lingfei Li, Qing Zhou & Lixia Ji - 2021 - Complexity 2021:1-19.details
The digital transformation of the innovation ecosystem is not only an inevitable direction of innovation activities in the era of digital economy but also a highly complex and uncertain process. The way to facilitate transformation with policies has become a topic of common concern of academia and policymakers. This paper builds a multiagent model and studies the impacts of supply-side policies, demand-side policies, and environmental policies on enterprises' transformation willingness, digital level, and income level as well as the proportion of (...) enterprises that carry out transformation in the whole innovation ecosystem and innovation network structure by numerical experiments. According to research findings, supply-side policies play the biggest role in the facilitation of transformation, demand-side policies are second important to them, and environmental policies have comparatively weak impacts. (shrink)
An almost-universal cupping degree.Jiang Liu & Guohua Wu - 2011 - Journal of Symbolic Logic 76 (4):1137-1152.details
Say that an incomplete d.r.e. degree has almost universal cupping property, if it cups all the r.e. degrees not below it to 0′. In this paper, we construct such a degree d, with all the r.e. degrees not cupping d to 0′ bounded by some r.e. degree strictly below d. The construction itself is an interesting 0″′ argument and this new structural property can be used to study final segments of various degree structures in the Ershov hierarchy.
Joining to high degrees via noncuppables.Jiang Liu & Guohua Wu - 2010 - Archive for Mathematical Logic 49 (2):195-211.details
Cholak, Groszek and Slaman proved in J Symb Log 66:881–901, 2001 that there is a nonzero computably enumerable (c.e.) degree cupping every low c.e. degree to a low c.e. degree. In the same paper, they pointed out that every nonzero c.e. degree can cup a low2 c.e. degree to a nonlow2 degree. In Jockusch et al. (Trans Am Math Soc 356:2557–2568, 2004) improved the latter result by showing that every nonzero c.e. degree c is cuppable to a high c.e. degree (...) by a low2 c.e. degree b. It is natural to ask in which subclass of low2 c.e. degrees can b in Jockusch et al. (Trans Am Math Soc 356:2557–2568, 2004) be located. Wu proved in Math Log Quart 50:189–201, 2004 that b can be cappable. We prove in this paper that b in Jockusch, Li and Yang's result can be noncuppable, improving both Jockusch, Li and Yang, and Wu's results. (shrink)
The Influence of Culture on Attitudes Towards Humorous Advertising.Yi Wang, Su Lu, Jia Liu, Jiahui Tan & Juyuan Zhang - 2019 - Frontiers in Psychology 10.details
An other-race effect for configural and featural processing of faces: upper and lower face regions play different roles.Zhe Wang, Paul C. Quinn, James W. Tanaka, Xiaoyang Yu, Yu-Hao P. Sun, Jiangang Liu, Olivier Pascalis, Liezhong Ge & Kang Lee - 2015 - Frontiers in Psychology 6.details
A systematic review to investigate the measurement properties of goal attainment scaling, towards use in drug trials
Charlotte M. W. Gaasterland,
Marijke C. Jansen-van der Weide,
Stephanie S. Weinreich &
Johanna H. van der Lee
BMC Medical Research Methodology volume 16, Article number: 99 (2016)
One of the main challenges for drug evaluation in rare diseases is the often heterogeneous course of these diseases. Traditional outcome measures may not be applicable for all patients when they are in different stages of their disease. For instance, in Duchenne Muscular Dystrophy, the Six Minute Walk Test is often used to evaluate potential new treatments, whereas this outcome is irrelevant for patients who are already in a wheelchair. A measurement instrument such as Goal Attainment Scaling (GAS) can evaluate the effect of an intervention on an individual basis, and may be able to include patients even when they are in different stages of their disease. It allows patients to set individual goals, together with their treating professional. However, the validity of GAS as a measurement instrument in drug studies has never been systematically reviewed. Therefore, we have performed a systematic review to answer two questions: 1. Has GAS been used as a measurement instrument in drug studies? 2. What is known of the validity, responsiveness and inter- and intra-rater reliability of GAS, particularly in drug trials?
We set up a sensitive search that yielded 3818 abstracts. After careful screening, data-extraction was executed for 58 selected articles.
Of the 58 selected articles, 38 articles described drug studies where GAS was used as an outcome measure, and 20 articles described measurement properties of GAS in other settings. The results show that validity, responsiveness and reliability of GAS in drug studies have hardly been investigated. The quality of the reporting of validity in studies in which GAS was used to evaluate a non-drug intervention also leaves much room for improvement.
We conclude that there is insufficient information to assess the validity of GAS, due to the poor quality of the validity studies. Therefore, we think that GAS needs further validation in drug studies, especially since GAS can be a potential solution when a small heterogeneous patient group is all there is to test a promising new drug.
The protocol has been registered in the PROSPERO international prospective register for systematic reviews, with registration number CRD42014010619. http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42014010619.
One of the main challenges for drug evaluation in rare diseases is the heterogeneous course of these diseases. When a disease course differs from patient to patient, traditional outcome measures may not be applicable for all patients of a certain disease. Trial designs are often limited to patients for whom the outcome measure is relevant, whereas the underlying disease mechanism may be similar in a larger group. This increases the problem of small numbers that already challenges rare disease research.
For example, in Duchenne muscular dystrophy (DMD), new drug trials until recently often used the 6-min Walk Test (6MWT) as an outcome measure. The 6MWT has been validated as a reliable and feasible outcome measure, and has been recommended as the primary outcome measure in ambulatory DMD patients [1, 2]. However, although the 6MWT may be a relevant outcome measure for boys who are not (yet) depending on a wheelchair, it is obviously irrelevant for, usually somewhat older, boys who are. This problem in DMD research has been picked up by patient representatives and researchers from all over the world [3].
As the DMD example shows, existing measurement instruments use an outcome that is not relevant for all patients, or may not be responsive enough to measure the effect of an intervention in a rare disease. However, the development of disease-specific and patient-relevant outcome measures is hampered by the small number and heterogeneity of patients with a particular rare disease. In their handbook "Measurement in Medicine" De Vet et al. [4] recommend a minimum number of 50 patients for validation studies.
A measurement instrument that can evaluate the effect of an intervention on an individual basis may help overcome the problem of small, heterogeneous populations. The importance of patient reported outcome measures is widely recognized by pharmaceutical companies and clinical researchers as well as regulators and government agencies such as FDA and NIH [5].
Goal Attainment Scaling (GAS) is a measurement instrument that is intended for individual evaluation of an intervention. It allows patients to set individual goals, together with their treating professional. The number of goals and the content of these goals may differ per patient, but the attainment of the goals is measured in a standardized way. This makes a standardized evaluation of an intervention possible, even when the patients are all in a different stage of their disease.
Goal Attainment Scaling was first introduced in 1968, by Kiresuk and Sherman [6], originally for the evaluation of mental health services. It contains a variable number of self-defined goals and very explicit descriptions of five possible levels of goal attainment that are formulated before the intervention, usually in consultation between the patient and the clinician. In the original definition, the levels are each quantified in a 5-point scale that ranges from −2 to +2, where −2 = the most unfavorable treatment outcome thought likely, −1 = less than expected level of treatment success, 0 = expected level of treatment success, +1 = more than expected success with treatment, and +2 = best conceivable success with treatment. For each goal the expected level of treatment success and at least two other levels need to be described in such a specific way that an independent observer can assess the outcome.
There is no maximum number of goals that can be set. Each goal can be assigned a weight, according to its importance to patient and/or clinician. From the scores reached after the intervention, a composite goal attainment score is computed using the following formula:
$$ T = 50 + \frac{10\sum w_i x_i}{\sqrt{(1-\rho)\sum w_i^2 + \rho\left(\sum w_i\right)^2}} $$
where $T$ is the composite score, $w_i$ is the weight assigned to goal $i$, $x_i$ is the original score for goal $i$ ranging from −2 to +2, and $\rho$ is the estimated correlation between goal scores. According to Kiresuk and Sherman, it is safe to assume that the correlation between the goal scores is constant, and can be set at 0.3. The T-score has a mean of 50 and a standard deviation of 10, under the assumptions as proposed by Kiresuk and Sherman [6].
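To make the computation above concrete, the following is a minimal sketch of how the composite T-score could be calculated for one patient. The goal scores, weights and the example call are illustrative assumptions, not data from any study included in this review.

```python
# Minimal sketch of the Kiresuk-Sherman composite GAS T-score described above.
# The example goals, weights and rho = 0.3 are illustrative assumptions only.
from math import sqrt

def gas_t_score(scores, weights=None, rho=0.3):
    """Composite T-score for one patient.

    scores  -- attained level per goal, each in {-2, -1, 0, +1, +2}
    weights -- importance weight per goal (defaults to equal weights)
    rho     -- assumed constant correlation between goal scores
    """
    if weights is None:
        weights = [1.0] * len(scores)
    numerator = 10.0 * sum(w * x for w, x in zip(weights, scores))
    denominator = sqrt((1.0 - rho) * sum(w ** 2 for w in weights)
                       + rho * sum(weights) ** 2)
    return 50.0 + numerator / denominator

# Three goals, the second weighted twice as heavily; one goal attained at the
# expected level, one exceeded, one below the expected level.
print(gas_t_score(scores=[0, +1, -1], weights=[1, 2, 1]))  # 53.3...
```

Note that if every goal is attained at exactly the expected level (all scores 0), the formula returns the mean of 50, which matches the scaling described above.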
Besides mental health and non-medical fields such as education and social service applications [7], GAS is reportedly used in a few specific medical research areas, such as rehabilitation [8–12] and geriatrics [13–15]. However, the validity of GAS as a measurement instrument in drug studies has never been systematically reviewed. To evaluate the usefulness of GAS in drug studies, we formulated the following three research questions:
Has Goal Attainment Scaling been used as a measurement instrument in drug studies?
What (drug) interventions were evaluated by studies using GAS?
What is known of the validity, responsiveness and inter- and intra-rater reliability of Goal Attainment Scaling in general, and in particular in drug trials?
In this study, we follow the COSMIN guidelines, which are the generally used and accepted standards for measurement properties evaluation [16]. This checklist contains standards for evaluating the methodological quality of studies on the measurement properties of health measurement instruments. According to the COSMIN guidelines, a health status measurement instrument can be used when its validity, reliability and responsiveness, have been tested and considered adequate. We considered GAS useful when the validity, reliability and responsiveness have been described, tested and found acceptable according to these guidelines.
We conducted a systematic review, according to the PRISMA guidelines [17].
We set up a sensitive search in Medline, PsychInfo and Embase. We searched for literature from 1968, the year when GAS was introduced by Kiresuk and Sherman [6], to May 1st, 2015. For the full search strategy, see Additional file 1. Reference lists of relevant review articles were screened for additional papers.
Papers were included in which:
Goal Attainment Scaling met the following criteria:
One or more individual goals were established by the patient or by one or more researchers or practitioners, either with or without input of the patient, prior to the intervention. The goals did not have to be devised by the patient/researcher, as long as the goals were individually chosen per patient.
The scale had to consist of at least three points (e.g. more than just goal attained – goal not attained). At least 2 points on the scale were described precisely and objectively, so that an independent observer would be able to determine whether the patient performs above or below that point.
The study was either a trial in which drugs are evaluated, or a study of any design in which psychometric properties of GAS were evaluated.
The outcome measure was the attainment of goals that had been established before the onset of the intervention.
The goals had been set up individually, i.e. per patient.
Excluded were:
Trials using an outcome measure called Goal Attainment Scaling, when the outcome measure did not meet our definition of GAS.
Studies in which goal setting was used as an intervention rather than outcome measurement.
Reviews or narratives.
Conference abstracts.
Papers published in languages other than English, French, Dutch, German or Spanish.
Papers published before 1968.
The selection of articles and data-extraction were each performed by two independent reviewers. Disagreements were discussed until consensus was reached; if necessary, a third reviewer acted as a referee. A standardized data-extraction form was used (see Additional file 2). We divided the included studies into two categories: drug studies, and non-drug studies in which the measurement properties of GAS were investigated.
We extracted information about the following measurement properties, defined according to the COSMIN guidelines [18]: Inter-rater reliability, intra-rater reliability, face validity, content validity, construct validity, and responsiveness. For the full definitions of the measurement properties, see Table 1. We used the quality criteria as proposed by Terwee et al. [19] to evaluate the measurement properties, as also displayed in Table 1. We chose to limit the evaluation of the quality of the measurement properties to the criteria as proposed by Terwee et al., instead of using the full COSMIN guidelines, because the COSMIN guidelines are very detailed, and many details are not relevant as these aspects cannot be evaluated for GAS, e.g. internal consistency, measurement error, criterion validity.
Table 1 COSMIN definitions [49] of the evaluated measurement properties, and their quality criteria [19]
The search yielded 3007, 1413, and 1039 abstracts from Medline, Embase and PsychInfo, respectively. After eliminating duplicates, a total of 3818 abstracts remained for screening. In the screening phase, we excluded 3511 articles based on title and abstract, and 249 articles based on the full text. Data-extraction was executed for the remaining 58 articles (see Fig. 1). Of these 58 articles, 38 articles described drug studies in which GAS was used as an outcome measure, and 20 articles described measurement properties of GAS in other settings (Fig. 2).
Fig. 1 The number of articles in- and excluded in the systematic review
Fig. 2 Venn-diagram depicting the number of studies in the categories drug-studies and methodology studies, and the number of studies in both categories
In Table 2 the characteristics of the articles are presented. Most studies are trials in patients with cerebral palsy or patients with spasticity due to other causes, such as acquired brain trauma or stroke (28 studies). Also, many studies focussed on the geriatric population (15 studies). There were also some studies on autism (three studies), or neurological disorders such as MS (two studies). The remaining studies covered research areas such as family problems, goal setting in adolescent students or behaviour and psychiatric problems.
Table 2 Reported Patients, Interventions, Comparisons and Outcomes in the included studies
Most drug studies evaluated an intervention with botulinum toxin (25 studies), mainly in patients with cerebral palsy and spasticity. Baclofen was also evaluated in children with spasticity (three studies). Other drugs that were evaluated were galantamine (three studies), donepezil for Alzheimer's Disease (two studies), fluvoxamine, trihexyphenidyl, memantine, a phenol nerve block, and linopirdine (one study each).
An overview of the reported measurement properties of GAS in the 38 drug studies and the 20 non-drug studies is presented in Tables 3 and 4, respectively.
Table 3 Reported measurement properties of GAS in included drug studies
Table 4 Reported measurement properties of GAS in included validity studies
Face validity
As is shown in Tables 3 and 4, face validity is reported in one article [20]. This is a drug study that evaluated the use of fluvoxamine in patients who met the criteria for panic disorder with moderate to severe agoraphobia. GAS was used as a primary outcome measure. Both therapists and independent raters who assessed the level of goal attainment after the intervention were asked to rate the relevance of the chosen goals on a scale of 1 to 5 (with one meaning irrelevant and five meaning very relevant). Therapists only rated the GAS score of patients not treated by themselves. The mean score of the therapists was 4.68 (SD = .51), and the mean score of the independent raters was 4.66 (SD = .52). The researchers concluded that these numbers show that 'the goal areas were suitably chosen'. The target population of GAS (the patients) was not involved in this evaluation, which is one of the requirements of the quality criteria that we use. However, it is inherent in the measurement instrument that the patient is involved in the choice of the items. Therefore, we score the quality of the face validity evaluation as 'good quality'.
Content validity
Content validity was reported in five studies, of which one was a drug study. Content validity was measured in several ways, as shown in Table 5: by rating the usefulness or importance of the goals [21, 22], by comparing the goal areas with essential components as recommended by position papers in the specific field [23], and by checking whether the goals were formulated according to the criteria 'Specific, Measurable, Assignable, Realistic, and Time-related' (SMART) [24, 25]. In one study, the content validity was reportedly tested by grouping the goals into major categories and analyzing the content of these categories [26]. However, the study did not report the results of the categorization of the goals [26]. The quality of the content validity was rated as 'good quality' in two studies, 'intermediate quality' in two studies, and 'poor quality' in one study. Authors reported a 'good overall usefulness' of the goals [22], stated that all recommended areas were represented in the goals [23], reported whether goals were set according to the SMART principle (in this particular study, it was concluded that there was, even after a refinement process of the goal statements, still a difference in the quality of the goal statements between the different sites) [24, 25], or reported that more than 70 % of the responders rated GAS as a 4 or 5 on a 5-point scale for clinical relevance and importance [21].
Table 5 Reported content validity of GAS in included studies
Construct validity
Construct validity was reported in 18 studies, of which six were drug studies (Table 6). In all 18 studies construct validity was assessed by correlations with other instruments measuring a construct similar to the goals that were expected to be set by the patients in each specific research area. Also, t-tests between the placebo and intervention condition [27], or t-tests between the lowest and highest T-score differences [28], were used to verify construct validity. In none of the studies was a hypothesis formulated on the expected construct validity outcomes. Therefore, the quality of the construct validity is difficult to evaluate. Of the 18 studies, 14 reported significant correlations with other measurement instruments that were relevant for the research area. The measurement instruments used to establish the construct validity varied considerably, since GAS is used for different research areas. Three studies reported that no significant correlations with other measurement instruments were found [21, 29, 30]. In one study correlations between change scores were measured, but the results were not clearly reported [31].
Table 6 Reported construct validity of GAS in included studies
Intra- and inter-rater reliability
As can be seen in Tables 3 and 4, intra-rater reliability was not assessed in any of the included studies. Inter-rater reliability was reported in 12 studies, of which two were drug studies. Different methods were used to measure the inter-rater reliability (Table 7). In four studies we rated the quality of the inter-rater reliability as poor, whereas eight studies were rated with 'good quality'. Eight out of the 12 studies reported an ICC score. Five of those studies reported that the ICC values were all 0.9 and higher [31–35]. Two studies reported ICC values between 0.8 and 0.95 [26, 36]. In one study, the reported ICC was lower than 0.5 [37]. The specific calculation for the ICC was reported in one study [37]. Confidence intervals for the ICC values were also reported in one study [35]. Inter-rater reliability was also reported with kappa values [21, 38], where the values ranged from substantial to almost perfect agreement. Another method that was used was calculating a correlation, which had a value of 0.84 [28]. One study reported 'agreement' between objective goal setters and the therapists who performed the interventions, and 'agreement' between objective goal setters and people who did the intake of the patients before the patients were randomized. The results were an agreement of 43 and 57 %, respectively. However, the method used to calculate this agreement was not reported in the article [20].
Table 7 Reported inter-rater reliability of GAS in included studies
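Because several of the included studies report percent agreement or kappa values without giving the underlying calculations, the sketch below shows one common way to compute both for two raters scoring the same set of GAS goals. The ratings are invented for illustration; the included studies may have used different statistics (for instance weighted kappa or model-based ICCs).

```python
# Illustrative two-rater agreement on GAS attainment levels (-2 .. +2).
# The ratings are fabricated for demonstration; they are not study data.
from collections import Counter

def percent_agreement(ratings_a, ratings_b):
    return sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)

def cohens_kappa(ratings_a, ratings_b):
    n = len(ratings_a)
    p_observed = percent_agreement(ratings_a, ratings_b)
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    levels = set(ratings_a) | set(ratings_b)
    # Chance agreement estimated from the raters' marginal distributions.
    p_chance = sum(counts_a[k] * counts_b[k] for k in levels) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

rater_a = [0, +1, -1, +2, 0, 0, +1, -2]
rater_b = [0, +1,  0, +2, 0, +1, +1, -2]
print(percent_agreement(rater_a, rater_b))  # 0.75
print(cohens_kappa(rater_a, rater_b))       # about 0.66
```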
Responsiveness
Responsiveness was reported in 14 studies, of which two were drug studies (Table 8). None of the studies assessed responsiveness with the methods advised by Terwee et al. [19]. Therefore, it is difficult to evaluate the quality of the responsiveness. In nine of those 14 studies, an effect size of the measured differences was reported [26, 29–31, 33, 39–42]. Of those nine studies, the reported effect size was below 1 in only one study [29]. In five studies, a relative efficiency was reported [26, 30, 31, 33, 41]. The relative efficiency of two procedures or measurement instruments is the ratio of their efficiencies. For instance, a comparison can be made between GAS and a regularly used measurement instrument. The relative efficiency varied between 3 and 57, but was substantial in most studies, meaning that GAS is more efficient, or needs fewer observations, than other measurement instruments. A standardized response mean was reported in six studies [22, 23, 26, 40–42]. A standardized response mean (SRM) is an effect size index used to measure the responsiveness of scales to clinical change. The SRM is computed by dividing the mean change score by the standard deviation of the change. The SRMs that were reported varied between 1.2 and 3.54. Two studies measured responsiveness with a paired t-test comparing response before and after the intervention, with a significant difference in GAS T-scores in both studies [22, 39]. In one study, the sensitivity, specificity and positive and negative predictive value were calculated based on a group of responders and non-responders [43]. The results were 52, 85, 81 and 60 %, respectively. In another study, responsiveness was reported as the number of patients who showed a change in T-scores of different goal areas [44]. The proportion of patients showing changes on GAS was larger than on other measurement instruments. The number of patients showing change was nine out of 23 patients on the physical goals, 18 out of 23 patients on occupational goals and 12 out of 18 patients on speech goals, whereas there was only one patient that showed change on the Gross Motor Function Measure (GMFM-66).
Table 8 Reported responsiveness of GAS in included studies
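Written out as a formula, the standardized response mean described above is

$$ \mathrm{SRM} = \frac{\bar{\Delta}}{SD_{\Delta}}, $$

where $\Delta$ denotes the individual change scores (post-intervention minus baseline) and $SD_{\Delta}$ their standard deviation. Relative efficiency is often operationalized as the ratio of squared test statistics for the two instruments, e.g. $(t_{\mathrm{GAS}}/t_{\mathrm{comparator}})^2$; this is a common convention rather than a formula reported by the included studies, which did not always state how their values were obtained.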
In this systematic review, we found 58 articles in which GAS was used as an outcome measure, of which 38 were drug studies. Therefore, we may conclude that GAS has indeed been used in drug studies. Most drug studies that report any information on the validity of GAS used botulinum toxin as an intervention for spasticity, usually in combination with physical or occupational therapy. The generalizability of the results of these validation studies is limited. The validity, responsiveness and reliability of GAS in drug studies have scarcely been studied. In only seven of the 38 drug studies that we found, some validation has been performed. The methods used to validate the measurement instruments often differ from the methods as proposed by COSMIN. The quality of the methods to assess measurement properties varies, and results are often difficult to interpret. We found 20 articles concerning non-drug studies reporting on the validity, responsiveness and inter-rater reliability of GAS. However, also in studies in which GAS was used to evaluate a non-drug intervention, the quality of the validity reports leaves much room for improvement.
In most articles, either drug or non-drug studies, no definition was given of the measurement properties that were assessed, the formulae used for calculation of parameters were not presented, and in some papers the results of the validity check were not reported [26, 31]. Also, none of the included articles describe hypotheses to test construct validity, which makes evaluating the reported results virtually impossible. Therefore, we conclude that the validity and reliability of GAS have not been researched extensively, neither in studies where a drug intervention was evaluated, nor in other studies.
Of all clinimetric characteristics that were investigated, the responsiveness of GAS was investigated most thoroughly. The responsiveness was consistently reported to be very good compared to other measurement instruments, such as the Gross Motor Function Measure (GMFM-66) in the evaluation of children with cerebral palsy, or the Standardized Mini Mental State Examination (SMMSE) for geriatric assessment. However, none of the studies evaluated the responsiveness according to the guidelines as proposed by Terwee et al. [19]. Therefore, it is difficult to be conclusive on the responsiveness of GAS, although the reported results suggest we may tentatively be optimistic.
The search of this systematic review was very sensitive, to make sure that no studies on GAS were missed. However, our definition of GAS is rather specific, which excludes studies with an approach that is similar, but not exactly the same. Also, we may have missed studies that did not use similar terminology, but did use an approach similar to GAS.
Our findings are consistent with previous systematic reviews on the measurement properties of GAS. For instance, Steenbeek et al. [10] concluded that, in the setting of pediatric rehabilitation, GAS is a very responsive method for treatment evaluation and individual goal setting, but sufficient knowledge is lacking about its reliability and validity, particularly. Also, in the field of psychogeriatrics, GAS may be considered useful from a theoretical point of view. Geriatric patients are heterogeneous, and GAS may be a useful tool to evaluate geriatric interventions. However, the measurement properties of GAS in geriatrics show mixed results. The evidence is not yet strong enough to state that GAS is an applicable outcome measure in this particular field [14]. In a systematic review on the feasibility of measurement instruments related to goal setting, GAS is considered a helpful tool for setting goals, although it is time-consuming and may be difficult for patients with cognitive impairments. However, the patient-centered nature of GAS makes it easier to focus on meaningful patient-directed treatment goals. Also, according to the results the scaling of GAS makes it possible to detect very small progress that may be of great significance to the patient, underlining its potential in responsiveness [45].
A problem in the evaluation of the validity of GAS may be that GAS does not measure one clear construct, since the content of the goals generally differs from patient to patient. One of the possibilities to overcome this inherent problem may be to make an item bank of possible goals that patients would be able to choose from, to make sure that the methodological properties of the goals are known [46]. However, this would be practically very difficult to achieve, since we suspect that for many orphan diseases the patient numbers are smaller, and goals could be more diverse than those of non-orphan disease patients. Another way of approaching the construct validity is to see GAS as a measurement instrument that measures the construct of the attainment of goals. Then, the construct validity could be evaluated by comparing GAS with another measurement instrument that evaluates the attainment of goals, such as the COPM. To our knowledge, this approach has not been considered so far.
The importance and difficulty of goals are often taken into account by assigning weights to the goals (more important goals are assigned a larger weight than less important goals). However, terms such as importance and difficulty are by nature subjective. What is important for one patient may be less important for another. For example, a Duchenne patient may perceive being able to brush his teeth as very important, whereas someone else may perceive it as trivial. Can this difference in importance objectively be measured? In a study on the reliability of GAS weights, Marson, Wei and Wasserman [47] conclude that assigning weights to the goals of GAS according to the severity of the problem has an acceptable inter-rater reliability when scored by different objective students trained in the use of GAS. This indicates that although importance and difficulty are difficult to objectively measure, objective raters may still score goals similarly. However, more research should be carried out on this topic to answer the question more definitively.
GAS is a measurement instrument with high potential, especially in rare diseases, but in order to use it in drug studies, more research on its validity is essential. One way of achieving this would be to use GAS as an additional measurement instrument in an ongoing drug trial, to further explore its validity. For GAS to be potentially useful, the effect of the evaluated drug should be objectively measurable in terms of behavior, and it should measure something that is valuable and noticeable for a patient, and cannot be measured otherwise. The drug that is evaluated should also have an effect that is clinically relevant. Again, Duchenne Muscular Dystrophy may serve as an example. A potential drug should do more than just improve, for instance, the dystrophin values in muscle biopsies. It should be able to improve something that is valuable for the patient, which can be measured by activities that patients perceive as important, such as brushing teeth or using a computer. GAS may be a useful outcome measure, since it can evaluate a potential drug on a patient level, and is therefore intrinsically clinically relevant.
According to guidelines on Patient Reported Outcomes and Health Related Quality of Life by the FDA and EMA, and open comments on these guidelines by experts [48], the following qualities were essential: a PRO should be based on a clearly defined framework, patients should be involved in the development of the measurement instrument, PRO claims should be based on and supported by improvement in all domains of a specific disease, an appropriate recall period is necessary when the effects of an intervention are tested, the test-retest reliability should be assessed, as well as the ability to detect change and the interpretability of the measurement instrument. Finally, an effect found by a PRO measurement instrument can only be valid when found in an RCT.
In general these requirements also apply to GAS, e.g. patient involvement. However, not all of them are applicable to this instrument, such as test-retest reliability. Before GAS can be used in drug trials, more validity research is needed. GAS has not yet been sufficiently validated to be supported by the regulatory agencies, but it may have potential in specific drug trials, especially in rare diseases where there is a lack of validated and responsive outcome measurement instruments.
We conclude that currently there is insufficient information to assess the validity of GAS, due to the poor quality of the validity studies. However, the overall reported good responsiveness of GAS suggests that it may be a valuable measurement instrument. GAS is an outcome measure that is inherently relevant for patients, making it a valuable tool for research in heterogeneous and small samples. Therefore, we think that GAS needs further validation in drug studies, especially since GAS can be a potential solution when only a small heterogeneous patient group is available to test a promising new drug.
ADAS-cog, Alzheimer's disease assessment scale – cognitive subscale; AHA, assisting hand assessment; AMPS, assessment of motor and process scales; AQoL, assessment of quality of life; ARAT, action research arm test; AUC, Area under the receiver operating characteristics curve; BAD-scale, Barry-Albright Dystonia scale; CBS, Caregiving Burden Scale; CDS, Cardiac depression scale; CES-D, Center for epidemiological studies depression scale; CGI, clinical global impression; CHQ, child heath questionnaire; CIBIC-plus, Clinician's interview based impression of change-plus; COPM, Canadian occupational performance measure; DAD, disability assessment for dementia; DCD Pinch, dynamic computerized dynamometry; FAC, functional ambulation category; FAQ, functional activities questionnaire; FIM, functional independence measure; GAS, goal attainment scaling; GHQ, general health questionnaire; GMFM, gross motor function measure; HADS, hospital anxiety and depression scale; IADL, instrumental activities of daily living; ICC, intraclass correlation coefficient; LASIS, leeds adult spasticity impact scale; LoA, limits of agreement; MAS, modified Ashworth scale; MAUULF, Melbourne assessment of unilateral upper limb function; MHOQ, Michigan hand outcomes questionnaire; MIC, minimal important change; MMSE, mini-mental state examination; MPQ, McGill pain questionnaire; MTS, Modified Tardieu Scale; NHP, Nottingham health profile; NRS, pain intensity numerical rating scale; OARS IADL, Older Americans resource scale for instrumental activities of daily living; ODQ, Oswestry low back pain disability questionnaire; PAIRS, pain and impairment relationship scale; PDMS-FM, peabody developmental motor scale – fine motor; PEDI, pediatric evaluation of disability inventory; PET-GAS, psychometrically equivalence tested goal attainment scaling; PSMS, physical self-maintenance scale; QoL, quality of life; QUEST, quality of upper extremity skills test; RR, responsiveness ratio; SDC, smallest detectable change; TSA, Tardieu Spasticity Angle
McDonald CM, Henricson EK, Abresch RT, Florence J, Eagle M, Gappmaier E, et al. The 6-minute walk test and other clinical endpoints in duchenne muscular dystrophy: reliability, concurrent validity, and minimal clinically important differences from a multicenter study. Muscle Nerve. 2013;48(3):357–68. doi:10.1002/mus.23905.
McDonald CM, Henricson EK, Han JJ, Abresch RT, Nicorici A, Elfring GL, et al. The 6-minute walk test as a new outcome measure in Duchenne muscular dystrophy. Muscle Nerve. 2010;41(4):500–10. doi:10.1002/mus.21544.
Mayhew A, Mazzone ES, Eagle M, Duong T, Ash M, Decostre V, et al. Development of the performance of the upper limb module for Duchenne muscular dystrophy. Dev Med Child Neurol. 2013;55(11):1038–45.
De Vet HC, Terwee CB, Mokkink LB, Knol DL. Measurement in medicine: a practical guide. Cambridge: Cambridge University Press; 2011.
Mendell JR, Csimma C, McDonald CM, Escolar DM, Janis S, Porter JD, et al. Challenges in drug development for muscle disease: a stakeholders' meeting. Muscle Nerve. 2007;35(1):8–16.
Kiresuk TJ, Sherman RE. Goal attainment scaling: a general method for evaluating comprehensive community mental health programs. Community Ment Health J. 1968;4(6):443–53. doi:10.1007/bf01530764.
Kiresuk TJ, Smith A, Cardillo JE. Goal attainment scaling: applications, theory, and measurement. London: Psychology Press; 2014.
Odding E, Roebroeck ME, Stam HJ. The epidemiology of cerebral palsy: incidence, impairments and risk factors. Disabil Rehabil. 2006;28(4):183–91. doi:10.1080/09638280500158422.
Pandyan AD, Gregoric M, Barnes MP, Wood D, Van Wijck F, Burridge J, et al. Spasticity: clinical perceptions, neurological realities and meaningful measurement. Disabil Rehabil. 2005;27(1–2):2–6.
Steenbeek D, Ketelaar M, Galama K, Gorter JW. Goal attainment scaling in paediatric rehabilitation: a critical review of the literature. Dev Med Child Neurol. 2007;49(7):550–6. doi:10.1111/j.1469-8749.2007.00550.x.
van Kuijk AA, Geurts AC, Bevaart BJ, van Limbeek J. Treatment of upper extremity spasticity in stroke patients by focal neuronal or neuromuscular blockade: a systematic review of the literature. J Rehabil Med. 2002;34(2):51–61.
Wade DT. Goal planning in stroke rehabilitation: evidence. Top Stroke Rehabil. 1999;6(2):37–42. http://dx.doi.org/10.1310/FMYJ-RKG1-YANB-WXRH.
Birks J, Craig D. Galantamine for vascular cognitive impairment. Cochrane Database Syst Rev. 2013;4:Cd004746. doi:10.1002/14651858.CD004746.pub2.
Bouwens SF, van Heugten CM, Verhey FR. Review of goal attainment scaling as a useful outcome measure in psychogeriatric patients with cognitive disorders. Dement Geriatr Cogn Disord. 2008;26(6):528–40. doi:10.1159/000178757.
Loy C, Schneider L. Galantamine for Alzheimer's disease and mild cognitive impairment. Cochrane Database Syst Rev. 2006;1:Cd001747. doi:10.1002/14651858.CD001747.pub3.
Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, et al. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study. Qual Life Res. 2010;19(4):539–49.
Moher D, Liberati A, Tetzlaff J, Altman DG, Group P. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg. 2010;8(5):336–41.
Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, et al. The COSMIN study reached international consensus on taxonomy, terminology, and definitions of measurement properties for health-related patient-reported outcomes. J Clin Epidemiol. 2010;63(7):737–45.
Terwee CB, Bot SD, de Boer MR, van der Windt DA, Knol DL, Dekker J, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60(1):34–42.
De Beurs E, Lange A, Blonk RWB, Koele P, Van Balkom AJLM, Van Dyck R. Goal attainment scaling: an idiosyncratic method to assess treatment effectiveness in agoraphobia. J Psychopathol Behav Assess. 1993;15(4):357–73.
Palisano RJ, Gowland C. Validity of goal attainment scaling in infants with motor delays. Phys Ther. 1993;73(10):651–60.
Stolee P, Awad M, Byrne K, DeForge R, Clements S, Glenny C. A multi-site study of the feasibility and clinical utility of Goal Attainment Scaling in geriatric day hospitals. Disabil Rehabil. 2012;34(20):1716–26. http://dx.doi.org/10.3109/09638288.2012.660600.
Yip AM, Gorman MC, Stadnyk K, Mills WG, MacPherson KM, Rockwood K. A standardized menu for Goal Attainment Scaling in the care of frail elders. Gerontologist. 1998;38(6):735–42.
Turner-Stokes L, Fheodoroff K, Jacinto J, Maisonobe P. Results from the Upper Limb International Spasticity Study-II (ULIS-II): a large, international, prospective cohort study investigating practice and goal attainment following treatment with botulinum toxin a in real-life clinical management. BMJ Open. 2013;3(6). http://dx.doi.org/10.1136/bmjopen-2013-002771.
Turner-Stokes L, Fheodoroff K, Jacinto J, Maisonobe P, Zakine B. Upper limb international spasticity study: rationale and protocol for a large, international, multicentre prospective cohort study investigating management and goal attainment following treatment with botulinum toxin A in real-life clinical practice. BMJ Open. 2013;3(3). http://dx.doi.org/10.1136/bmjopen-2012-002230.
Stolee P, Stadnyk K, Myers AM, Rockwood K. An individualized approach to outcome measurement in geriatric rehabilitation. J Gerontol Ser A Biol Med Sci. 1999;54A(12):M641–M7. http://dx.doi.org/10.1093/gerona/54.12.M641.
Rockwood K, Stolee P, Howard K, Mallery L. Use of Goal Attainment Scaling to measure treatment effects in an anti-dementia drug trial. Neuroepidemiology. 1996;15(6):330–8.
Woodward CA, Santa-Barbara J, Levin S, Epstein NB. The role of goal attainment scaling in evaluating family therapy outcome. Am J Orthopsychiatry. 1978;48(3):464–76.
Cusick A, McIntyre S, Novak I, Lannin N, Lowe K. A comparison of goal attainment scaling and the Canadian Occupational Performance Measure for paediatric rehabilitation research. Pediatr Rehabil. 2006;9(2):149–57.
Gordon JE, Powell C, Rockwood K. Goal attainment scaling as a measure of clinically important change in nursing-home patients. Age Ageing. 1999;28(3):275–81.
Rockwood K, Stolee P, Fox RA. Use of goal attainment scaling in measuring clinically important change in the frail elderly. J Clin Epidemiol. 1993;46(10):1113–8.
Brown DA, Effgen SK, Palisano RJ. Performance following ability-focused physical therapy intervention in individuals with severely limited physical and cognitive abilities. Phys Ther. 1998;78(9):934–47. discussion 48–50.
Rockwood K, Joyce B, Stolee P. Use of goal attainment scaling in measuring clinically important change in cognitive rehabilitation patients. J Clin Epidemiol. 1997;50(5):581–8.
Ruble L, McGrew JH. Teacher and child predictors of achieving IEP goals of children with autism. J Autism Dev Disord. 2013;43(12):2748–63. http://dx.doi.org/10.1007/s10803-013-1884-x.
Ruble L, McGrew JH, Toland MD. Goal attainment scaling as an outcome measure in randomized controlled trials of psychosocial interventions in autism. J Autism Dev Disord. 2012;42(9):1974–83. http://dx.doi.org/10.1007/s10803-012-1446-7.
Ruble LA, McGrew JH, Toland MD, Dalrymple NJ, Jung LA. A randomized controlled trial of COMPASS web-based and face-to-face teacher coaching in autism. J Consult Clin Psychol. 2013;81(3):566–72. http://dx.doi.org/10.1037/a0032003.
Bovend'Eerdt TJ, Dawes H, Izadi H, Wade DT. Agreement between two different scoring procedures for goal attainment scaling is low. J Rehabil Med. 2011;43(1):46–9. http://dx.doi.org/10.2340/16501977-0624.
Steenbeek D, Meester-Delver A, Becher JG, Lankhorst GJ. The effect of botulinum toxin type a treatment of the lower extremity on the level of functional abilities in children with cerebral palsy: evaluation with goal attainment scaling. Clin Rehabil. 2005;19(3):274–82.
Hartman D, Borrie MJ, Davison E, Stolee P. Use of goal attainment scaling in a dementia special care unit. Am J Alzheimers Dis. 1997;12(3):111–6. http://dx.doi.org/10.1177/153331759701200303.
Khan F, Pallant JF, Turner-Stokes L. Use of goal attainment scaling in inpatient rehabilitation for persons with multiple sclerosis. Arch Phys Med Rehabil. 2008;89(4):652–9. http://dx.doi.org/10.1016/j.apmr.2007.09.049.
Rockwood K, Howlett S, Stadnyk K, Carver D, Powell C, Stolee P. Responsiveness of goal attainment scaling in a randomized controlled trial of comprehensive geriatric assessment. J Clin Epidemiol. 2003;56(8):736–43.
Turner-Stokes L, Williams H, Johnson J. Goal attainment scaling: does it provide added value as a person-centred measure for evaluation of outcome in neurorehabilitation following acquired brain injury? J Rehabil Med. 2009;41(7):528–35. http://dx.doi.org/10.2340/16501977-0383.
Turner-Stokes L, Baguley IJ, De Graaff S, Katrak P, Davies L, McCrory P, et al. Goal attainment scaling in the evaluation of treatment of upper limb spasticity with botulinum toxin: a secondary analysis from a double-blind placebo-controlled randomized clinical trial. J Rehabil Med. 2010;42(1):81–9. http://dx.doi.org/10.2340/16501977-0474.
Steenbeek D, Gorter JW, Ketelaar M, Galama K, Lindeman E. Responsiveness of Goal Attainment Scaling in comparison to two standardized measures in outcome evaluation of children with cerebral palsy. Clin Rehabil. 2011;25(12):1128–39. http://dx.doi.org/10.1177/0269215511407220.
Stevens A, Beurskens A, Köke A, van der Weijden T. The use of patient-specific measurement instruments in the process of goal-setting: a systematic review of available instruments and their feasibility. Clin Rehabil. 2013;0269215513490178.
Tennant A. Goal attainment scaling: current methodological challenges. Disabil Rehabil. 2007;29(20–21):1583–8.
Marson SM, Wei G, Wasserman D. A reliability analysis of goal attainment scaling (GAS) weights. Am J Eval. 2009;30(2):203–16.
Bottomley A, Jones D, Claassens L. Patient-reported outcomes: assessment and current perspectives of the guidelines of the food and drug administration and the reflection paper of the European medicines agency. Eur J Cancer. 2009;45(3):347–53.
Mokkink L, Terwee C, Patrick D, Alonso J, Stratford P, Knol D, et al. International consensus on taxonomy, terminology, and definitions of measurement properties for health-related patient-reported outcomes: results of the COSMIN study. J Clin Epidemiol. 2010;63(7):737–45.
Rockwood K, Graham JE, Fay S, Investigators A. Goal setting and attainment in Alzheimer's disease patients treated with donepezil. J Neurol Neurosurg Psychiatry. 2002;73(5):500–7.
Ashford S, Turner-Stokes L. Ma of shoulder and proximal upper limb spasticity using botulinum toxin and concurrent therapy interventions: a preliminary analysis of goals and outcomes. Disabil Rehabil. 2009;31(3):220–6. http://dx.doi.org/10.1080/09638280801906388.
Barden HL, Baguley IJ, Nott MT, Chapparo C. Dynamic computerised hand dynamometry: Measuring outcomes following upper limb botulinum toxin-A injections in adults with acquired brain injury. J Rehabil Med. 2014;46(4):314–20.
Barden HLH, Baguley IJ, Nott MT, Chapparo C. Measuring spasticity and fine motor control (pinch) change in the hand after botulinum toxin-a injection using dynamic computerized hand dynamometry. Arch Phys Med Rehabil. 2014;95(12):2402–9.
Bonouvrie LA, Becher JG, Vles JSH, Boeschoten K, Soudant D, de Groot V, et al. Intrathecal baclofen treatment in dystonic cerebral palsy: a randomized clinical trial: The IDYS trial. BMC Pediatr. 2013;13(1). http://dx.doi.org/10.1186/1471-2431-13-175.
Borg J, Ward AB, Wissel J, Kulkarni J, Sakel M, Ertzgaard P, et al. Rationale and design of a multicentre, double-blind, prospective, randomized, European and Canadian study: evaluating patient outcomes and costs of managing adults with post-stroke focal spasticity. J Rehabil Med. 2011;43(1):15–22. http://dx.doi.org/10.2340/16501977-0663.
Demetrios M, Gorelik A, Louie J, Brand C, Baguley IJ, Khan F. Outcomes of ambulatory rehabilitation programmes following Botulinum toxin for spasticity in adults with stroke. J Rehabil Med. 2014;46(8):730–7.
Ferrari A, Maoret AR, Muzzini S, Alboresi S, Lombardi F, Sgandurra G, et al. A randomized trial of upper limb botulimun toxin versus placebo injection, combined with physiotherapy, in children with hemiplegia. Res Dev Disabil. 2014;35(10):2505–13.
Fietzek UM, Schroeteler FE, Ceballos-Baumann AO. Goal attainment after treatment of parkinsonian camptocormia with botulinum toxin. Mov Disord. 2009;24(13):2027–8. http://dx.doi.org/10.1002/mds.22676.
Lam K, Lau KK, So KK, Tam CK, Wu YM, Cheung G, et al. Can botulinum toxin decrease carer burden in long term care residents with upper limb spasticity? A randomized controlled study. J Am Med Dir Assoc. 2012;13(5):477–84. http://dx.doi.org/10.1016/j.jamda.2012.03.005.
Lam K, Wong D, Tam CK, Wah SH, Myint MWWJ, Yu TKK, et al. Ultrasound and electrical stimulator-guided obturator nerve block with phenol in the treatment of Hip adductor spasticity in long-term care patients: a randomized, triple blind, placebo controlled study. J Am Med Dir Assoc. 2015;16(3):238–46.
Leroi I, Atkinson R, Overshott R. Memantine improves goal attainment and reduces caregiver burden in Parkinson's disease with dementia. Int J Geriatr Psychiatry. 2014;29(9):899–905.
Lowe K, Novak I, Cusick A. Low-dose/high-concentration localized botulinum toxin A improves upper limb movement and function in children with hemiplegic cerebral palsy. Dev Med Child Neurol. 2006;48(3):170–5.
Lowe K, Novak I, Cusick A. Repeat injection of botulinum toxin A is safe and effective for upper limb movement and function in children with cerebral palsy. Dev Med Child Neurol. 2007;49(11):823–9.
Mall V, Heinen F, Siebel A, Bertram C, Hafkemeyer U, Wissel J, et al. Treatment of adductor spasticity with BTX-A in children with CP: a randomized, double-blind, placebo-controlled study. Dev Med Child Neurol. 2006;48(1):10–3.
McCrory P, Turner-Stokes L, Baguley IJ, De Graaff S, Katrak P, Sandanam J, et al. Botulinum toxin A for treatment of upper limb spasticity following stroke: a multi-centre randomized placebo-controlled study of the effects on quality of life and other person-centred outcomes. J Rehabil Med. 2009;41(7):536–44. http://dx.doi.org/10.2340/16501977-0366.
Molenaers G, Fagard K, Van Campenhout A, Desloovere K. Botulinum toxin A treatment of the lower extremities in children with cerebral palsy. J Child Orthop. 2013;7(5):383–7.
Nott MT, Barden HL, Baguley IJ. Goal attainment following upper-limb botulinum toxin-A injections: Are we facilitating achievement of client-centred goals? J Rehabil Med. 2014;46(9):864–8.
Olesch CA, Greaves S, Imms C, Reid SM, Graham HK. Repeat botulinum toxin-A injections in the upper limb of children with hemiplegia: a randomized controlled trial. Dev Med Child Neurol. 2010;52(1):79–86. http://dx.doi.org/10.1111/j.1469-8749.2009.03387.x.
Rice J, Waugh MC. Pilot study on trihexyphenidyl in the treatment of dystonia in children with cerebral palsy. J Child Neurol. 2009;24(2):176–82. http://dx.doi.org/10.1177/0883073808322668.
Rockwood K, Fay S, Song X, MacKnight C, Gorman M. Video-imaging synthesis of treating Alzheimer's disease I. Attainment of treatment goals by people with Alzheimer's disease receiving galantamine: a randomized controlled trial. Cmaj. 2006;174(8):1099–105.
Rockwood K, Fay S, Jarrett P, Asp E. Effect of galantamine on verbal repetition in AD: a secondary analysis of the VISTA trial. Neurology. 2007;68(14):1116–21.
Rockwood K, Fay S, Gorman M, Carver D, Graham JE. The clinical meaningfulness of ADAS-Cog changes in Alzheimer's disease patients treated with donepezil in an open-label trial. BMC Neurol. 2007;7:26.
Rockwood K, Fay S, Gorman M. The ADAS-cog and clinically meaningful change in the VISTA clinical trial of galantamine for Alzheimer's disease. Int J Geriatr Psychiatry. 2010;25(2):191–201. http://dx.doi.org/10.1002/gps.2319.
Russo RN, Crotty M, Miller MD, Murchland S, Flett P, Haan E. Upper-limb botulinum toxin A injection and occupational therapy in children with hemiplegic cerebral palsy identified from a population register: a single-blind, randomized, controlled trial. Pediatrics. 2007;119(5):e1149–58.
Scheinberg A, Hall K, Lam LT, O'Flaherty S. Oral baclofen in children with cerebral palsy: a double-blind crossover pilot study. J Paediatr Child Health. 2006;42(11):715–20.
Schramm A, Ndayisaba J-P, Brinke M, Hecht M, Herrmann C, Huber M et al. Spasticity treatment with onabotulinumtoxin a: Data from a prospective german real-life patient registry. J Neural Transm. 2014(Pagination):No Pagination Specified. http://dx.doi.org/10.1007/s00702-013-1145-3.
Turner-Stokes L, Ashford S. Serial injection of botulinum toxin for muscle imbalance due to regional spasticity in the upper limb. Disabil Rehabil. 2007;29(23):1806–12.
Wallen MA, O'Flaherty SJ, Waugh MCA. Functional Outcomes of Intramuscular Botulinum Toxin Type A in the Upper Limbs of Children with Cerebral Palsy: A Phase II Trial. Arch Phys Med Rehabil. 2004;85(2):192–200. http://dx.doi.org/10.1016/j.apmr.2003.05.008.
Wallen M, O'Flaherty SJ, Waugh MC. Functional outcomes of intramuscular botulinum toxin type a and occupational therapy in the upper limbs of children with cerebral palsy: a randomized controlled trial. Arch Phys Med Rehabil. 2007;88(1):1–10.
Ward FA, Pulido-Velazquez M. Incentive pricing and cost recovery at the basin scale. J environ manage. 2009;90(1):293–313. http://dx.doi.org/10.1016/j.jenvman.2007.09.009.
Ward AB, Wissel J, Borg J, Ertzgaard P, Herrmann C, Kulkarni J, et al. Functional goal achievement in poststroke spasticity patients: The BOTOX® Economic Spasticity Trial (BEST). J Rehabil Med. 2014;46(6):504–13.
Bovend'Eerdt TJ, Dawes H, Sackley C, Izadi H, Wade DT. An integrated motor imagery program to improve functional task performance in neurorehabilitation: a single-blind randomized controlled trial. Arch Phys Med Rehabil. 2010;91(6):939–46.
Fisher K, Hardie RJ. Goal attainment scaling in evaluating a multidisciplinary pain management programme. Clin Rehabil. 2002;16(8):871–7.
Sheldon KM, Elliot AJ. Not all personal goals are personal: Comparing autonomous and controlled reasons for goals as predictors of effort and attainment. Personal Soc Psychol Bull. 1998;24(5):546–57. http://dx.doi.org/10.1177/0146167298245010.
Drug trials | CommonCrawl |
A synthetic derivative of Piracetam, aniracetam is believed to be the second most widely used nootropic in the Racetam family, popular for its stimulatory effects because it enters the bloodstream quickly. Initially developed for memory and learning, many anecdotal reports also claim that it increases creativity. However, studies show no effect on the cognitive functioning of healthy adult mice.
Tuesday: I went to bed at 1am, and first woke up at 6am, and I wrote down a dream; the lucid dreaming book I was reading advised that waking up in the morning and then going back for a short nap often causes lucid dreams, so I tried that - and wound up waking up at 10am with no dreams at all. Oops. I take a pill, but the whole day I don't feel so hot, although my conversation and arguments seem as cogent as ever. I'm also having a terrible time focusing on any actual work. At 8 I take another; I'm behind on too many things, and it looks like I need an all-nighter to catch up. The dose is no good; at 11, I still feel like at 8, possibly worse, and I take another along with the choline+piracetam (which makes a total of 600mg for the day). Come 12:30, and I disconsolately note that I don't seem any better, although I still seem to understand the IQ essays I am reading. I wonder if this is tolerance to modafinil, or perhaps sleep catching up to me? Possibly it's just that I don't remember what the quasi-light-headedness of modafinil felt like. I feel this sort of zombie-like state without change to 4am, so it must be doing something, when I give up and go to bed, getting up at 7:30 without too much trouble. Some N-backing at 9am gives me some low scores but also some pretty high scores (38/43/66/40/24/67/60/71/54 or ▂▂▆▂▁▆▅▇▄), which suggests I can perform normally if I concentrate. I take another pill and am fine the rest of the day, going to bed at 1am as usual.
…It is without activity in man! Certainly not for the lack of trying, as some of the dosage trials that are tucked away in the literature (as abstracted in the Qualitative Comments given above) are pretty heavy duty. Actually, I truly doubt that all of the experimenters used exactly that phrase, No effects, but it is patently obvious that no effects were found. It happened to be the phrase I had used in my own notes.
I'm wary of others, though. The trouble with using a blanket term like "nootropics" is that you lump all kinds of substances in together. Technically, you could argue that caffeine and cocaine are both nootropics, but they're hardly equal. With so many ways to enhance your brain function, many of which have significant risks, it's most valuable to look at nootropics on a case-by-case basis. Here's a list of 9 nootropics, along with my thoughts on each.
The abuse liability of caffeine has been evaluated.147,148 Tolerance development to the subjective effects of caffeine was shown in a study in which caffeine was administered at 300 mg twice each day for 18 days.148 Tolerance to the daytime alerting effects of caffeine, as measured by the MSLT, was shown over 2 days on which 250 mg of caffeine was given twice each day48 and to the sleep-disruptive effects (but not REM percentage) over 7 days of 400 mg of caffeine given 3 times each day.7 In humans, placebo-controlled caffeine-discontinuation studies have shown physical dependence on caffeine, as evidenced by a withdrawal syndrome.147 The most frequently observed withdrawal symptom is headache, but daytime sleepiness and fatigue are also often reported. The withdrawal-syndrome severity is a function of the dose and duration of prior caffeine use…At higher doses, negative effects such as dysphoria, anxiety, and nervousness are experienced. The subjective-effect profile of caffeine is similar to that of amphetamine,147 with the exception that dysphoria/anxiety is more likely to occur with higher caffeine doses than with higher amphetamine doses. Caffeine can be discriminated from placebo by the majority of participants, and correct caffeine identification increases with dose.147 Caffeine is self-administered by about 50% of normal subjects who report moderate to heavy caffeine use. In post-hoc analyses of the subjective effects reported by caffeine choosers versus nonchoosers, the choosers report positive effects and the nonchoosers report negative effects. Interestingly, choosers also report negative effects such as headache and fatigue with placebo, and this suggests that caffeine-withdrawal syndrome, secondary to placebo choice, contributes to the likelihood of caffeine self-administration. This implies that physical dependence potentiates behavioral dependence to caffeine.
Many people find that they experience increased "brain fog" as they age, some of which could be attributed to early degeneration of synapses and neural pathways. Some drugs have been found to be useful for providing cognitive improvements in these individuals. It's possible that these supplements could provide value by improving brain plasticity and supporting the regeneration of cells.10
Medication can be ineffective if the drug payload is not delivered at its intended place and time. Since an oral medication travels through a broad pH spectrum, the pill encapsulation could dissolve at the wrong time. However, a smart pill with environmental sensors, a feedback algorithm and a drug release mechanism can give rise to smart drug delivery systems. This can ensure optimal drug delivery and prevent accidental overdose.
Neuroplasticity, or the brain's ability to change and reorganize itself in response to intrinsic and extrinsic factors, indicates great potential for us to enhance brain function by medical or other interventions. Psychotherapy has been shown to induce structural changes in the brain. Other interventions that positively influence neuroplasticity include meditation, mindfulness, and compassion.
The methodology would be essentially the same as the vitamin D in the morning experiment: put a multiple of 7 placebos in one container, the same number of actives in another identical container, hide & randomly pick one of them, use container for 7 days then the other for 7 days, look inside them for the label to determine which period was active and which was placebo, refill them, and start again.
Like caffeine, nicotine tolerates rapidly and addiction can develop, after which the apparent performance boosts may only represent a return to baseline after withdrawal; so nicotine as a stimulant should be used judiciously, perhaps roughly as frequent as modafinil. Another problem is that nicotine has a half-life of merely 1-2 hours, making regular dosing a requirement. There is also some elevated heart-rate/blood-pressure often associated with nicotine, which may be a concern. (Possible alternatives to nicotine include cytisine, 2'-methylnicotine, GTS-21, galantamine, Varenicline, WAY-317,538, EVP-6124, and Wellbutrin, but none have emerged as clearly superior.)
As I am not any of the latter, I didn't really expect a mental benefit. As it happens, I observed nothing. What surprised me was something I had forgotten about: its physical benefits. My performance in Taekwondo classes suddenly improved - specifically, my endurance increased substantially. Before, classes had left me nearly prostrate at the end, but after, I was weary yet fairly alert and happy. (I have done Taekwondo since I was 7, and I have a pretty good sense of what is and is not normal performance for my body. This was not anything as simple as failing to notice increasing fitness or something.) This was driven home to me one day when in a flurry before class, I prepared my customary tea with piracetam, choline & creatine; by the middle of the class, I was feeling faint & tired, had to take a break, and suddenly, thunderstruck, realized that I had absentmindedly forgot to actually drink it! This made me a believer.
The majority of smart pills target a limited number of cognitive functions, which is why a group of experts gathered to discover a formula which will empower the entire brain and satisfy the needs of students, athletes, and professionals. Mind Lab Pro® combines 11 natural nootropics to affect all 4 areas of mental performance, unlocking the full potential of your brain. Its carefully designed formula will provide an instant boost, while also delivering long-term benefits.
That is, perhaps light of the right wavelength can indeed save the brain some energy by making it easier to generate ATP. Would 15 minutes of LLLT create enough ATP to make any meaningful difference, which could possibly cause the claimed benefits? The problem here is like that of the famous blood-glucose theory of willpower - while the brain does indeed use up more glucose while active, high activity uses up very small quantities of glucose/energy which doesn't seem like enough to justify a mental mechanism like weak willpower.↩
It is a known fact that cognitive decline is often linked to aging. It may not be as visible as skin aging, but the brain does in fact age. Often, cognitive decline is not noticeable because it could be as mild as forgetting names of people. However, research has shown that even in healthy adults, cognitive decline can start as early as in the late twenties or early thirties.
In my last post, I talked about the idea that there is a resource that is necessary for self-control…I want to talk a little bit about the candidate for this resource, glucose. Could willpower fail because the brain is low on sugar? Let's look at the numbers. A well-known statistic is that the brain, while only 2% of body weight, consumes 20% of the body's energy. That sounds like the brain consumes a lot of calories, but if we assume a 2,400 calorie/day diet - only to make the division really easy - that's 100 calories per hour on average, 20 of which, then, are being used by the brain. Every three minutes, then, the brain - which includes memory systems, the visual system, working memory, then emotion systems, and so on - consumes one (1) calorie. One. Yes, the brain is a greedy organ, but it's important to keep its greediness in perspective… Suppose, for instance, that a brain in a person exerting their willpower - resisting eating brownies or what have you - used twice as many calories as a person not exerting willpower. That person would need an extra one third of a calorie per minute to make up the difference compared to someone not exerting willpower. Does exerting self control burn more calories?
A total of 14 studies surveyed reasons for using prescription stimulants nonmedically, all but one study confined to student respondents. The most common reasons were related to cognitive enhancement. Different studies worded the multiple-choice alternatives differently, but all of the following appeared among the top reasons for using the drugs: "concentration" or "attention" (Boyd et al., 2006; DeSantis et al., 2008, 2009; Rabiner et al., 2009; Teter et al., 2003, 2006; Teter, McCabe, Cranford, Boyd, & Guthrie, 2005; White et al., 2006); "help memorize," "study," "study habits," or "academic assignments" (Arria et al., 2008; Barrett et al., 2005; Boyd et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; Low & Gendaszek, 2002; Rabiner et al., 2009; Teter et al., 2005, 2006; White et al., 2006); "grades" or "intellectual performance" (Low & Gendaszek, 2002; White et al., 2006); "before tests" or "finals week" (Hall et al., 2005); "alertness" (Boyd et al., 2006; Hall et al., 2005; Teter et al., 2003, 2005, 2006); or "performance" (Novak et al., 2007). However, every survey found other motives mentioned as well. The pills were also taken to "stay awake," "get high," "be able to drink and party longer without feeling drunk," "lose weight," "experiment," and for "recreational purposes."
Studies show that B vitamin supplements can protect the brain from cognitive decline. These natural nootropics can also reduce the likelihood of developing neurodegenerative diseases. The prevention of Alzheimer's and even dementia are among the many benefits. Due to their effects on mental health, B vitamins make an excellent addition to any smart drug stack.
The flanker task is designed to tax cognitive control by requiring subjects to respond based on the identity of a target stimulus (H or S) and not the more numerous and visually salient stimuli that flank the target (as in a display such as HHHSHHH). Servan-Schreiber, Carter, Bruno, and Cohen (1998) administered the flanker task to subjects on placebo and d-AMP. They found an overall speeding of responses but, more importantly, an increase in accuracy that was disproportionate for the incongruent conditions, that is, the conditions in which the target and flankers did not match and cognitive control was needed.
Manually mixing powders is too annoying, and pre-mixed pills are expensive in bulk. So if I'm not actively experimenting with something, and not yet rich, the best thing is to make my own pills, and if I'm making my own pills, I might as well make a custom formulation using the ones I've found personally effective. And since making pills is tedious, I want to not have to do it again for years. 3 years seems like a good interval - 1095 days. Since one is often busy and mayn't take that day's pills (there are enough ingredients it has to be multiple pills), it's safe to round it down to a nice even 1000 days. What sort of hypothetical stack could I make? What do the prices come out to be, and what might we omit in the interests of protecting our pocketbook?
Fatty acids are well-studied natural smart drugs that support many cognitive abilities. They play an essential role in providing structural support to cell membranes. Fatty acids also contribute to the growth and repair of neurons. Both functions are crucial for maintaining peak mental acuity as you age. Among the most prestigious fatty acids known to support cognitive health are:
Bacopa is a supplement herb often used for memory or stress adaptation. Its chronic effects reportedly take many weeks to manifest, with no important acute effects. Out of curiosity, I bought 2 bottles of Bacognize Bacopa pills and ran a non-randomized non-blinded ABABA quasi-self-experiment from June 2014 to September 2015, measuring effects on my memory performance, sleep, and daily self-ratings of mood/productivity. Because of the very slow onset, small effective sample size, definite temporal trends probably unrelated to Bacopa, and noise in the variables, the results were as expected, ambiguous, and do not strongly support any correlation between Bacopa and memory/sleep/self-rating (+/-/- respectively).
Some supplement blends, meanwhile, claim to work by combining ingredients – bacopa, cat's claw, huperzia serrata and oat straw in the case of Alpha Brain, for example – that have some support for boosting cognition and other areas of nervous system health. One 2014 study in Frontiers in Aging Neuroscience, suggested that huperzia serrata, which is used in China to fight Alzheimer's disease, may help slow cell death and protect against (or slow the progression of) neurodegenerative diseases. The Alpha Brain product itself has also been studied in a company-funded small randomized controlled trial, which found Alpha Brain significantly improved verbal memory when compared to adults who took a placebo.
We included studies of the effects of these drugs on cognitive processes including learning, memory, and a variety of executive functions, including working memory and cognitive control. These studies are listed in Table 2, along with each study's sample size, gender, age and tasks administered. Given our focus on cognition enhancement, we excluded studies whose measures were confined to perceptual or motor abilities. Studies of attention are included when the term attention refers to an executive function but not when it refers to the kind of perceptual process taxed by, for example, visual search or dichotic listening or when it refers to a simple vigilance task. Vigilance may affect cognitive performance, especially under conditions of fatigue or boredom, but a more vigilant person is not generally thought of as a smarter person, and therefore, vigilance is outside of the focus of the present review. The search and selection process is summarized in Figure 2.
Productivity is the most cited reason for using nootropics. With all else being equal, smart drugs are expected to give you that mental edge over others and advance your career. Nootropics can also be used for a host of other reasons, from studying to socialising, and from exercise and health to general well-being. Different nootropics cater to different audiences.
The absence of a suitable home for this needed research on the current research funding landscape exemplifies a more general problem emerging now, as applications of neuroscience begin to reach out of the clinical setting and into classrooms, offices, courtrooms, nurseries, marketplaces, and battlefields (Farah, 2011). Most of the longstanding sources of public support for neuroscience research are dedicated to basic research or medical applications. As neuroscience is increasingly applied to solving problems outside the medical realm, it loses access to public funding. The result is products and systems reaching the public with less than adequate information about effectiveness and/or safety. Examples include cognitive enhancement with prescription stimulants, event-related potential and fMRI-based lie detection, neuroscience-based educational software, and anti-brain-aging computer programs. Research and development in nonmedical neuroscience are now primarily the responsibility of private corporations, which have an interest in promoting their products. Greater public support of nonmedical neuroscience research, including methods of cognitive enhancement, will encourage greater knowledge and transparency concerning the efficacy and safety of these products and will encourage the development of products based on social value rather than profit value.
(People aged <=18 shouldn't be using any of this except harmless stuff - where one may have nutritional deficits - like fish oil & vitamin D; melatonin may be especially useful, thanks to the effects of screwed-up school schedules & electronics use on teenagers' sleep. Changes in effects with age are real - amphetamines' stimulant effects and modafinil's histamine-like side-effects come to mind as examples.)
Vitamin B12 is also known as Cobalamin and is a water-soluble essential vitamin. A (large) deficiency of Vitamin B12 will ultimately lead to cognitive impairment [52]. Older people and people who don't eat meat are at a higher risk than young people who eat more meat. And people with depression have less Vitamin B12 than the average population [53].
And in his followup work, An opportunity cost model of subjective effort and task performance (discussion). Kurzban seems to have successfully refuted the blood-glucose theory, with few dissenters from commenting researchers. The more recent opinion seems to be that the sugar interventions serve more as a reward-signal indicating more effort is a good idea, not refueling the engine of the brain (which would seem to fit well with research on procrastination).↩
One symptom of Alzheimer's disease is a reduced brain level of the neurotransmitter called acetylcholine. It is thought that an effective treatment for Alzheimer's disease might be to increase brain levels of acetylcholine. Another possible treatment would be to slow the death of neurons that contain acetylcholine. Two drugs, Tacrine and Donepezil, are both inhibitors of the enzyme (acetylcholinesterase) that breaks down acetylcholine. These drugs are approved in the US for treatment of Alzheimer's disease.
There is much to be appreciated in a brain supplement like BrainPill (never mind the confusion that may stem from the generic-sounding name) that combines tried-and-tested ingredients in a single one-a-day formulation. The consistency in claims and what users see in real life is an exemplary one, which convinces us to rate this powerhouse as the second on this review list. Feeding one's brain with nootropics and related supplements entails due diligence in research and seeking the highest quality, and we think BrainPill is up to task. Learn More...
Intrigued by old scientific results & many positive anecdotes since, I experimented with microdosing LSD - taking doses ~10μg, far below the level at which it causes its famous effects. At this level, the anecdotes claim the usual broad spectrum of positive effects on mood, depression, ability to do work, etc. After researching the matter a bit, I discovered that as far as I could tell, since the original experiment in the 1960s, no one had ever done a blind or even a randomized self-experiment on it.
There are also premade 'stacks' (or formulas) of cognitive enhancing superfoods, herbals or proteins, which pre-package several beneficial extracts for a greater impact. These types of cognitive enhancers are more 'subtle' than the pharmaceutical alternative with regards to effects, but they work all the same. In fact, for many people, they work better than smart drugs as they are gentler on the brain and produce fewer side-effects.
Or in other words, since the standard deviation of my previous self-ratings is 0.75 (see the Weather and my productivity data), a mean rating increase of >0.39 on the self-rating. This is, unfortunately, implying an extreme shift in my self-assessments (for example, 3s are ~50% of the self-ratings and 4s ~25%; to cause an increase of 0.25 while leaving 2s alone in a sample of 23 days, one would have to push 3s down to ~25% and 4s up to ~47%). So in advance, we can see that the weak plausible effects for Noopept are not going to be detected here at our usual statistical levels with just the sample I have (a more plausible experiment might use 178 pairs over a year, detecting down to d>=0.18). But if the sign is right, it might make Noopept worthwhile to investigate further. And the hardest part of this was just making the pills, so it's not a waste of effort.
Still, the scientific backing and ingredient sourcing of nootropics on the market varies widely, and even those based in some research won't necessarily immediately, always or ever translate to better grades or an ability to finally crank out that novel. Nor are supplements of any kind risk-free, says Jocelyn Kerl, a pharmacist in Madison, Wisconsin.
The main area of the brain effected by smart pills is the prefrontal cortex, where representations of our goals for the future are created. Namely, the prefrontal cortex consists of pyramidal cells that keep each other firing. However in some instances they can become disconnected due to chemical imbalances, or due to being tired, stressed, and overworked.
Furthermore, there is no certain way to know whether you'll have an adverse reaction to a particular substance, even if it's natural. This risk is heightened when stacking multiple substances because substances can have synergistic effects, meaning one substance can heighten the effects of another. However, using nootropic stacks that are known to have been frequently used can reduce the chances of any negative side effects.
11:30 AM. By 2:30 PM, my hunger is quite strong and I don't feel especially focused - it's difficult to get through the tab-explosion of the morning, although one particularly stupid poster on the DNB ML makes me feel irritated like I might on Adderall. I initially figure the probability at perhaps 60% for Adderall, but when I wake up at 2 AM and am completely unable to get back to sleep, eventually racking up a Zeo score of 73 (compared to the usual 100s), there's no doubt in my mind (95%) that the pill was Adderall. And it was the last Adderall pill indeed.
As with any thesis, there are exceptions to this general practice. For example, theanine for dogs is sold under the brand Anxitane is sold at almost a dollar a pill, and apparently a month's supply costs $50+ vs $13 for human-branded theanine; on the other hand, this thesis predicts downgrading if the market priced pet versions higher than human versions, and that Reddit poster appears to be doing just that with her dog.↩
My worry about the MP variable is that, plausible or not, it does seem relatively weak against manipulation; other variables I could look at, like arbtt window-tracking of how I spend my computer time, # or size of edits to my files, or spaced repetition performance, would be harder to manipulate. If it's all due to MP, then if I remove the MP and LLLT variables, and summarize all the other variables with factor analysis into 2 or 3 variables, then I should see no increases in them when I put LLLT back in and look for a correlation between the factors & LLLT with a multivariate regression.
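A minimal sketch of that check (Python; the file name and column names below are placeholders standing in for the actual daily-log variables, not the real ones):

    import pandas as pd
    import statsmodels.api as sm
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical daily log: one row per day, a 0/1 LLLT indicator plus outcome columns.
    df = pd.read_csv("daily_log.csv")
    outcomes = df.drop(columns=["lllt", "mp"])   # exclude the treatment and the suspect MP variable

    # Summarize the remaining outcome variables with two latent factors.
    factors = FactorAnalysis(n_components=2, random_state=0).fit_transform(outcomes)

    # Regress each factor on the LLLT indicator; if the apparent effect was all MP,
    # the LLLT coefficients here should be indistinguishable from zero.
    for i in range(factors.shape[1]):
        fit = sm.OLS(factors[:, i], sm.add_constant(df["lllt"].astype(float))).fit()
        print(f"factor {i}: LLLT coefficient = {fit.params['lllt']:.3f}, p = {fit.pvalues['lllt']:.3f}")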
A key ingredient of Noehr's chemical "stack" is a stronger racetam called Phenylpiracetam. He adds a handful of other compounds considered to be mild cognitive enhancers. One supplement, L-theanine, a natural constituent in green tea, is claimed to neutralise the jittery side-effects of caffeine. Another supplement, choline, is said to be important for experiencing the full effects of racetams. Each nootropic is distinct and there can be a lot of variation in effect from person to person, says Lawler. Users semi-anonymously compare stacks and get advice from forums on sites such as Reddit. Noehr, who buys his powder in bulk and makes his own capsules, has been tweaking chemicals and quantities for about five years, accumulating more than two dozen jars of substances along the way. He says he meticulously researches anything he tries, buys only from trusted suppliers and even blind-tests the effects (he gets his fiancée to hand him either a real or inactive capsule).
At dose #9, I've decided to give up on kratom. It is possible that it is helping me in some way that careful testing (eg. dual n-back over weeks) would reveal, but I don't have a strong belief that kratom would help me (I seem to benefit more from stimulants, and I'm not clear on how an opiate-bearer like kratom could stimulate me). So I have no reason to do careful testing. Oh well.
The infinite promise of stacking is why, whatever weight you attribute to the evidence of their efficacy, nootropics will never go away: With millions of potential iterations of brain-enhancing regimens out there, there is always the tantalizing possibility that seekers haven't found the elusive optimal combination of pills and powders for them—yet. Each "failure" is but another step in the process-of-elimination journey to biological self-actualization, which may be just a few hundred dollars and a few more weeks of amateur alchemy away.
Nootropics are a great way to boost your productivity. Nootropics have been around for more than 40 years and today they are entering the mainstream. If you want to become the best you, nootropics are a way to level up your life. Nootropics are always personal and what works for others might not work for you. But no matter the individual outcomes, nootropics are here to make an impact!
A large review published in 2011 found that the drug aids with the type of memory that allows us to explicitly remember past events (called long-term conscious memory), as opposed to the type that helps us remember how to do things like riding a bicycle without thinking about it (known as procedural or implicit memory.) The evidence is mixed on its effect on other types of executive function, such as planning or ability on fluency tests, which measure a person's ability to generate sets of data—for example, words that begin with the same letter.
Several new medications are on the market and in development for Alzheimer's disease, a progressive neurological disease leading to memory loss, language deterioration, and confusion that afflicts about 4.5 million Americans and is expected to strike millions more as the baby boom generation ages. Yet the burning question for those who aren't staring directly into the face of Alzheimer's is whether these medications might make us smarter.
In 3, you're considering adding a new supplement, not stopping a supplement you already use. The I don't try Adderall case has value $0, the Adderall fails case is worth -$40 (assuming you only bought 10 pills, and this number should be increased by your analysis time and a weighted cost for potential permanent side effects), and the Adderall succeeds case is worth $X-40-4099, where $X is the discounted lifetime value of the increased productivity due to Adderall, minus any discounted long-term side effect costs. If you estimate Adderall will work with p=.5, then you should try out Adderall if you estimate that 0.5 × (X − 4179) > 0, i.e., X > $4179. (Adderall working or not isn't binary, and so you might be more comfortable breaking down the various how effective Adderall is cases when eliciting X, by coming up with different levels it could work at, their values, and then using a weighted sum to get X. This can also give you a better target with your experiment: this needs to show a benefit of at least Y from Adderall for it to be worth the cost, and I've designed it so it has a reasonable chance of showing that.)
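The break-even figure is easy to reproduce; a small sketch using only the numbers already given above (the probability and costs are the author's own estimates, not new data):

    def expected_value_of_trying(X, p=0.5, trial_cost=40, other_costs=4099):
        """Expected value of trying Adderall, where X is the discounted lifetime benefit if it works."""
        value_if_fails = -trial_cost
        value_if_works = X - trial_cost - other_costs
        return (1 - p) * value_if_fails + p * value_if_works

    # Break-even: 0.5*(-40) + 0.5*(X - 4139) = 0  =>  X = 4179
    print(expected_value_of_trying(4179))   # 0.0, so any X above $4179 makes the trial worthwhile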
A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom's apparently chewed, but the powders are brewed as a tea.
Jesper Noehr, 30, reels off the ingredients in the chemical cocktail he's been taking every day before work for the past six months. It's a mixture of exotic dietary supplements and research chemicals that he says gives him an edge in his job without ill effects: better memory, more clarity and focus and enhanced problem-solving abilities. "I can keep a lot of things on my mind at once," says Noehr, who is chief technology officer for a San Francisco startup.
But, if we find in 10 or 20 years that the drugs don't do damage, what are the benefits? These are stimulants that help with concentration. College students take such drugs to pass tests; graduates take them to gain professional licenses. They are akin to using a calculator to solve an equation. Do you really want a doctor who passed his boards as a result of taking speed — and continues to depend on that for his practice? | CommonCrawl |
MISCELLANEOUS TECHNICAL ARTICLES BY Dr A R COLLINS
Benjamin Robins on Ballistics
Benjamin Robins (1707 - 1751)
In the recent book "Civilization: The West and the Rest", historian Niall Ferguson attempted to identify the major factors that generated the rapid development of Western society over the last 500 years. He nominated the general acceptance of the scientific method as being one factor in providing, among other things, technological and military supremacy. As an example he cited the work of Benjamin Robins and his contribution to the physics of guns. This work significantly improved accuracy and effectiveness of the artillery of Western armies.
Robins' work on ballistics was published in 1742, as a short book entitled "The New Principles Of Gunnery". A later edition, published in 1805 includes "several other tracts on the improvement of practical gunnery" that Robins presented at the Royal Society.
"The New Principles Of Gunnery" has only 2 Chapters, in these Robins sets out a series of propositions which form a model of the physics of guns. He then reports the results of testing the model's predictions against experiment. A beautiful example of the scientific method.
The 18th century style of scientific writing was rather wordy, and the mathematics was presented in a descriptive manner rather than as equations. To assist the modern reader in appreciating Robins' work, here is a brief summary of the book with the physics and mathematics of each proposition set out in a modern format.
"New Principles in Gunnery"
Of the Force of Gunpowder
In this chapter, Robins puts forward thirteen propositions to model the behaviour of smooth bore, black powder guns firing Lead or Iron balls. The propositions are:
Prop I. Gunpowder, fired either in a Vacuum or in Air, produces by its Explosion a permanent elastic Fluid.
Robins proposes that the explosion of gunpowder converts the powder to a gas, an "elastic fluid". His experiments to verify the proposition consisted of igniting gunpowder in a sealed container with a manometer attached to measure the gas pressure. He notes that the volume of gas produced is proportional to the quantity of powder exploded and that the gas produced is permanent, not diminishing in volume over time.
Prop II. To explain more particularly the Circumstances attending the Explosion of Gunpowder, either in a Vacuum or in Air, when fired in the Manner described in the experiments of the last Proposition.
The proposition is that the pressure in the vessel containing the exploding gunpowder is caused by two effects, the large volume of gas produced and secondly, that the gas is initially a hot flame and so is of higher pressure. The gas quickly cools and the pressure decreases, but only back to the pressure caused by the volume of the permanent gas.
Prop III. The Elasticity or Pressure of the Fluid produced by the firing of Gunpowder, is ceteris paribus [all else being the same] directly as its Density.
The proposition states that the volume of gas produced is proportional to the quantity of gunpowder exploded. Robins explodes some larger quantities and observes that the resulting gas pressure is proportional to the quantity of gunpowder consumed.
Prop IV. To determine the Elasticity and Quantity of this elastic Fluid, produced from the Explosion of a given Quantity of Gunpowder.
Robins determines by experiment that 1 oz of gunpowder produces 460 cubic inches of gas (at room temperature and pressure). Robins states that the gas may be air; at least it appears to have the same density as air. From the known density of air, 460 cubic inches represents about \(\frac{1}{3}\) of the mass of gunpowder being converted to gas. He then measures the density of gunpowder and concludes that 1 cubic inch of gunpowder will produce 244 cubic inches of gas. Equivalently, if the volume of the gas is restricted to the same volume as the gunpowder, the gas pressure will rise by a factor of 244.
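As a rough check on these figures, taking a modern value of about 0.000044 lb per cubic inch for the density of air (a value assumed here, not quoted by Robins): $$460 \text{ in}^3 \times 0.000044 \text{ lb/in}^3 \approx 0.02 \text{ lb} \approx \tfrac{1}{3} \text{ oz},$$ consistent with roughly a third of the mass of the powder being converted to gas; and since 1 oz of powder yields 460 cubic inches of gas while 1 cubic inch of powder yields 244, the implied density of gunpowder is \( \frac{244}{460} \approx 0.53\) oz per cubic inch.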
Prop V. To determine how much the Elasticity of the Air is augmented, when heated to the extremest Heat of red-hot Iron.
Robins experimentally determines the increase in the volume of air when heated from room temperature to the temperature of red hot iron. To do this Robins heats a very rigid hollow cylinder to red heat, stoppers the end with a tapered metal bung, and so has a known volume of hot gas. He then immerses the bunged end in water; when the cylinder has cooled, the bung is removed and water rushes in to equalise the now low pressure inside the tube. The bung is replaced. The volume of water in the tube is measured and the difference between the volume of the tube and the volume of the water is a measurement of the room-temperature volume of air that filled the tube when red hot.
The experiments showed that the average expansion ratio was \(4\frac{1}{11}\) to 1 for air heated from ambient to red heat.
Prop VI. To determine how much that Elasticity of the Fluid produced by the firing of Gunpowder, which we have above assigned, is augmented by the Heat it has at the Time of its Explosion.
Robins assumes that the heat of exploding gunpowder is at least that of red hot iron. Therefore the pressure of the gas produced, which was shown to be 244 times atmospheric pressure, is further increased by a factor of \(4\frac{1}{11}\). The resulting pressure is therefore approximately 1000 atmospheres, assuming it is confined to the same volume as the original gunpowder charge occupied.
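Combining the two measurements gives the quoted figure: $$244 \times 4\tfrac{1}{11} = \frac{244 \times 45}{11} \approx 998 \approx 1000$$ times atmospheric pressure, when the gas is confined to the volume originally occupied by the charge.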
Prop VII. Given the Dimensions of any Piece of Artillery, the Density of its Ball, and a Quantity of its Charge, to determine the Velocity which the Ball will acquire from the Explosion, supposing the Elasticity of the Powder at the first Instant of its firing to be given.
Based on the previous propositions, Robins works through an example of his theory of interior ballistics, resulting in the calculation of muzzle velocity.
He takes as the example, a gun with barrel length 45", a bore of ¾" and the length of the powder charge in the barrel of 2⅝" firing a lead ball. Robins models the interior ballistics of the gun as follows.
The exploding gunpowder produces a gas initially occupying the same volume as the powder, and so exerting a pressure of 1000 × atmospheric pressure. The force on the ball is this pressure × the cross-sectional area of the barrel. As the ball is pushed down the barrel the pressure drops, because the volume behind the ball increases. Boyle's law states that the pressure of a gas is inversely proportional to the volume it occupies. The volume behind the ball increases linearly with its distance from the breech, so the pressure will be inversely proportional to the distance of the ball from the breech end of the barrel. Robins draws the graph of the force shown in Fig 1.
Figure 1. Cannon interior ballistics.
The line HNQ represents the force on the ball due to the gas pressure, which falls as \( \frac{1}{x} \),
$$F(x) = \frac{k}{x} $$
where \(k\) is a constant and \(x\) is the distance from the breech end of the barrel.
The force on the ball immediately after the explosion, \(F_0\) is represented by the line FH, this is equal to the initial pressure, \(P_0\), acting on the cross-sectional area of the ball, \(A\). $$F_0 = P_0 \; A $$ and $$ P_0 = R \; P_{atm}, $$ where \(R\) is the ratio of the initial gas pressure to atmospheric pressure, \( P_{atm}\), which Robins measured to be 1000 from Prop. VI.
Hence $$ \begin{aligned} F_0 &= R P_{atm} A \\ &= R P_{atm} \pi (\frac{ d}{2})^2 \end{aligned} $$
\(k\) can be calculated from the initial force on the ball at its initial position, a distance \(c\) along the barrel. \(c\) is the length of powder charge in the barrel \(c\) = 2⅝"
$$ F_0 = k/c $$
Noting that atmospheric pressure is equivalent to the weight of a column of water 34 ft high, and the density of water is 62.4 lb/ft³, and the diameter of the ball is ¾"
$$\begin{aligned} k &= 1000 \times 34 \times 62.4 \times 32.2 \times \pi \left(\frac{0.75}{2 \times 12}\right)^2 \times \left(\frac{2.625}{12}\right) \\ &= 45,800. \end{aligned} $$
so when the ball has moved a distance x down the barrel, shown in the diagram as point M, the force on the ball, represented by length MN, will be $$ F(x) = \frac{45,800}{x}$$ where \(x\) is in ft.
To calculate the expected muzzle velocity of the ball, Robins quotes from Newton's "Principia" that the integral of the force as a function of distance moved by a body is proportional to the square of the velocity the body acquires i.e. equal to its increase in kinetic energy $$ \tfrac{1}{2} m v^2 = \int F(x) dx $$ where \(m\) is the mass of the Lead ball ¾" diameter = 0.09 lb
Noting that the integral of \(\frac{1}{x}\) is \(ln(x)\), the integral of the force on the ball as it travels from point F ( \( x = c \) ) to B ( \( x = L \) ), where \(c\) is the length of the charge (= 2⅝") and \(L\) the length of the barrel (= 45"), gives:
$$ \begin{aligned} v^2 &= \frac{2k}{m} \int_{c}^{L} \frac{1}{x}dx \\ &= \frac{2 \times 45,800}{0.09} \int_{2.625}^{45} \frac{1}{x}dx \\ &= \frac{2 \times 45,800}{0.09} \times (ln(45) - ln(2.625)) \\ &= 1,018,000 \times 2.84 \\ &= 2,890,000 \\ \\ v &= 1700 \quad ft/sec. \end{aligned} $$
Robins uses some slightly different values for the parameters at various points in his calculation and reports a muzzle velocity of 1668 ft/sec.
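The worked example is easily reproduced numerically. A minimal sketch in Python using the same figures as above (only the ratio \(L/c\) enters the logarithm, so the barrel and charge lengths could equally be left in inches):

    import math

    R     = 1000          # ratio of initial gas pressure to atmospheric (Prop. VI)
    P_atm = 34 * 62.4     # atmospheric pressure as 34 ft of water, in lbf per square foot
    g     = 32.2          # ft/s^2; converts pounds-force to poundals so the ball mass can stay in lb
    d     = 0.75 / 12     # bore diameter, ft
    c     = 2.625 / 12    # length of the powder charge, ft
    L     = 45 / 12       # length of the barrel, ft
    m     = 0.09          # mass of the 3/4-inch lead ball, lb

    A = math.pi * (d / 2) ** 2                    # cross-sectional area of the bore, ft^2
    k = R * P_atm * g * A * c                     # constant in F(x) = k/x, x measured from the breech
    v = math.sqrt(2 * k / m * math.log(L / c))    # muzzle velocity from the work-energy integral

    print(round(k), round(v))   # about 45,850 and 1701 ft/s; Robins' rounding gives 45,800 and 1700

Doubling m in the same few lines reproduces the double-shotted figure of roughly 1200 ft/sec discussed in the scholium below.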
Robins then goes on to state how this velocity would scale with the various factors in the calculation, length of barrel, different bore etc. His points may be summarised by the following expression for muzzle velocity: $$ v \propto \sqrt{\frac{c}{d} ln(\frac{L}{c})} $$
where: \(c\) = distance to the back of the ball,
\(d\) = bore diameter,
\(L\) = full length of the barrel
In the scholium to Prop. VII, a further example is given of the way the muzzle velocity scales: the velocity is inversely proportional to the square root of the mass of the ball. If the same gun is loaded with 2 balls (shot mass is doubled but all else the same), then the muzzle velocity would change by a factor \( \frac{1}{\sqrt{2}} = 0.707\). If the single ball has a muzzle velocity of 1700 ft/sec, then double shotted, the muzzle velocity will be approximately 1200 ft/sec.
[A more complete description of the equation for muzzle velocity implied by Robins' propositions is given in the Smooth Bore Cannon Ballistics page.]
Prop VIII. To determine the Velocity, which any Ball moves with at any Distance from the Piece, it is discharged from.
To test the veracity of his model of gun dynamics, Robins needed a method to accurately measure a gun's muzzle velocity. To do this he invented the ballistic pendulum. It did not require the then difficult measurement of very small time intervals, but instead determined the velocity of a projectile by measuring its momentum and, knowing the mass of the projectile, calculating the velocity.
Figure 2. Robins ballistic pendulum
The ballistic pendulum, shown in Fig 2, consisted of an arm suspended from a frame with a broad iron plate as its swinging mass. To the plate was screwed a thick sheet of wood into which the bullets were fired; wood was used so that the bullet would be absorbed rather than bouncing back, or shattering and scattering its fragments.
Since the mass of the pendulum was much larger than that of the bullet, the velocity of the pendulum after impact was very much lower than that of the bullet; this allowed the height of the pendulum swing to be determined from the length of the arc through which it swung. This length was measured by attaching a ribbon to the lower part of the pendulum; the ribbon was lightly held by a pair of jaws mounted to the frame. A pin was inserted in the ribbon adjacent to the jaws; as the pendulum swung back, the ribbon was drawn through the jaws, and the extent of the swing was given by the length of ribbon from the jaws to the pin.
Robins shows how the velocity of the bullet can be calculated by working through an example.
A compound pendulum may be modelled as a simple pendulum with the same period, if the simple pendulum's length is equal to the distance from the pivot to the centre of oscillation Rco, given by $$ R_{co} = \frac{I}{M R_{cm}} $$
where: \(I\) is the moment of inertia of the compound pendulum,
\(M\) is the mass of the compound pendulum,
\(R_{cm} \) is the distance from the pivot point to the centre of mass.
The moment of inertia of a compound pendulum is given by $$I = \left(\frac{T^2}{4\pi^2} \right)MgR_{cm} $$ where \(T\) is the period of its swing.
Robins' ballistic pendulum had the following parameters
$$ \begin{aligned} M &= 56.19 \text{lb} \\ R_{cm} &= 52" \\ T &= 2.53 \text{ sec} \end{aligned} $$
$$R_{co} = 62.65". $$
The equivalent simple pendulum model will have the same moment of inertia if its mass, \(M_{co}\) is taken to be
$$M_{co} = \frac{R_{cm}}{R_{co}} M. $$
The equivalent mass of the simple pendulum is therefore 46.63 lb.
Robins fired bullets at the centre of the wooden block, 66" from the pivot. He models the system for this point of impact as a simple pendulum 66" long and with the same moment of inertia as the compound pendulum, equivalent to a mass, \( M'\) at distance 66" from the pivot. Equating moments of inertia, \( M' \times 66^2 = M_{co} \, R_{co}^2 \), so \( M'\) is given by $$ M' = \left(\frac{62.65}{66}\right)^2\times 46.63 = 42.02 \text{ lb.} $$
The calculation of the bullet velocity is then a two step process. The conservation of momentum gives the following: $$ \begin{aligned} mv &= (M' + m)V \\ v &= \left(\frac{M' + m}{m}\right)V \end{aligned} $$
where: \(m\) is the mass of the bullet,
\(v\) is velocity of the bullet just before impact
\(V\) is the velocity of the merged pendulum and bullet just after impact, at a distance 66" from the pivot.
The bullets used in this experiment had a mass \( m = \frac{1}{12} \text{ lb } (0.083\text{ lb}) \). Substituting, $$ \begin{aligned} v &= \left(\frac{42.02 + 0.083}{0.083} \right) V \\ &= 505 \times V \end{aligned} $$
Robins then calculates \(V\) using the conservation of energy. The equivalent simple pendulum, with initial velocity \(V\), will swing up to some maximum height, \(h\), converting its kinetic energy into potential energy at the top of the swing. $$ \frac{1}{2}(M' + m)V^2 = (M' + m)gh $$ where \(h\) is the maximum vertical height reached by the point of impact. $$ V = \sqrt{2gh} $$
Substituting into the equation for the bullet's velocity $$v = 505 \times \sqrt{2gh} $$
The ribbon, attached at a distance of 71⅛" from the pivot gave a measurement of the arc length of 17¼", this scales to 16" as the length of the arc described by the point of impact, the corresponding vertical height, \(h\) may be calculated as $$ h = 66 - 66cos\left(\frac{16}{66}\right) = 1.93" $$
Hence the velocity of the bullet is $$ v = 1625 \text{ ft/sec} $$
Robins introduces some inaccuracy when rounding various parameters and he quotes a muzzle velocity value of 1641 ft/sec.
Corrected Muzzle Velocity
Robins' calculation of the initial velocity of the ballistic pendulum was in error. The error was only present in the worked example; the correct calculation was used for all the other muzzle velocity measurements in the book. The erratum was published in a paper Robins read to the Royal Society in 1743.
Robins' calculation of the conservation of momentum was valid, as he modelled the equivalent simple pendulum to have the same moment of inertia as the compound pendulum, but the calculation of the velocity of the pendulum after impact was not valid. It is the height that the centre of oscillation rises that accurately represents the potential energy of a compound pendulum, rather than the height that the point of impact rises, as Robins used.
The corrected calculation is as follows:
The height, \(h_{co}\) that the centre of oscillation rises, where the energy is all potential, is given by $$h_{co} = \left(\frac{62.65}{66} \right) h $$ where \(h\) is the height the point of impact reached.
The velocity that the centre of oscillation will reach at the bottom of the swing, where all energy would be kinetic, \(V_{co} \) will be $$V_{co} = \sqrt{2g\frac{62.65}{66}h} $$
and this means that the velocity of the point of impact will be
$$ \begin{aligned} V &= \left(\frac{66}{62.65}\right) \sqrt{2g\frac{62.65}{66}h} \\ &= \sqrt{\frac{66}{62.65}} \sqrt{2gh} \end{aligned} $$
So the original equation for the bullet velocity $$v = 505 \times \sqrt{2gh} $$ becomes $$ \begin{aligned} v &= 505 \times \sqrt{\frac{66}{62.65}} \times \sqrt{2g 1.93} \text{ in/sec}\\ &= 1625 \times \sqrt{\frac{66}{62.65}} \text{ ft/sec}\\ v &= 1668 \text{ ft/sec.} \end{aligned} $$
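The correction is just the factor \(\sqrt{66/62.65}\) applied to the uncorrected result, as the following short sketch confirms:

```cpp
// Sketch: applying the centre-of-oscillation correction to the 1625 ft/sec figure.
#include <cmath>
#include <cstdio>

int main() {
    const double vUncorrected = 1625.0;                   // ft/sec, from Prop. VIII
    const double correction   = std::sqrt(66.0 / 62.65);  // impact point vs centre of oscillation
    std::printf("corrected v = %.0f ft/sec\n", vUncorrected * correction);  // ~1668
}
```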
Prop IX. To compare the actual Velocities with which Bullets of different Kinds are discharged from their respective Pieces, with their Velocities computed from the Theory.
Robins now does a series of experiments firing bullets into his ballistic pendulum. He uses the original 45" barrel and a shorter, 12⅜" barrel and two different lengths of powder charge to test the theory. The resulting muzzle velocities are calculated using the method in Prop. VIII and the predicted values calculated in the manner described in Proposition VII.
The results are tabulated and the agreement between theory and experiment is remarkably good, with an error of about ±2%.
Robins then builds a heavier ballistic pendulum of 97 lb and does more experiments adding a third, 7" barrel. With barrels from 7" to 45" and powder charges from 6 dw to 36 dw, the theory is again shown to predict the measured velocities to within about 2%.
A series of trials with very small charges, 1/12 those previously used, showed much lower velocities than the theory predicted. Muzzle velocities were approximately 400 ft/sec when the theory predicted 480. Robins postulates that the small powder charge does not reach the same high temperature as the larger charges and loses its heat more quickly as the gas expands down the barrel.
In the scholium, Robins discusses the significance of the theory of internal ballistics he has developed.
"The variety of these experiments and the accuracy with which they correspond to the theory, leave no room to doubt the certainty [that] the theory ... contains the true and genuine determination of the force and manner of acting of fired gunpowder. ... from this theory many deductions may be made of the greatest consequence to the practical part of gunnery. From hence the thickness of the piece, which will enable it to confine without bursting any charge of powder is easily determined, since the effort of the powder is known."
Prop X. To assign the Changes in the Force of Powder, which arise from the different State of the Atmosphere.
Robins tested the effect of the density of the atmosphere on the performance of guns. He tested the muzzle velocity in different seasons, in night and day and found no significant difference in the performance. He remarks that the quantity of moisture in the powder does affect performance, both with lower velocity and greater variability in velocity for the same charge.
Prop XI. To investigate the Velocity which the Flame of Gunpowder acquires, by expanding itself, supposing it be fired in a given Piece of Artillery, without either a Bullet or any other Body before it.
Robins models the products of exploding gunpowder as 3/10ths converted to hot gas and 7/10ths remaining as hot particulate matter, swept along with the gas down the barrel. He attempts to measure the speed of the gas emerging from the barrel when fired with no ball, using the ballistic pendulum. The velocity of the expelled gas is measured by fixing the barrel to the ballistic pendulum and measuring the speed of recoil. Assuming all the powder is burnt and ejected with a uniform velocity, the average speed of the gas is approximately 7000 ft/sec.
From these data Robins is able to calculate the force of petards, small explosive devices held on the end of a pole, since "their action depends solely on the impulse of the flame". He concludes they are equivalent to a ball of twice the weight of the petard's charge travelling at 1400 to 1500 ft/sec.
Prop XII. To ascertain the Manner in which the Flame of Powder impels a Ball, which is laid at a considerable Distance from the Charge.
Having observed that the exploding gunpowder will reach much higher speeds if it is pushing a ball along the barrel, Robins experiments to see if a ball placed away from the charge, some way down the barrel, will have a higher muzzle velocity. The experiments show this to be true: a ball which would reach 1200 ft/sec if placed against the charge will reach 1400 ft/sec if placed just 11" from the charge. There is a considerable local increase in pressure as the shock wave from the explosion reaches the ball and Robins warns that it is likely to burst standard barrels.
Prop XIII. To enumerate the various Kinds of Powder, and to describe the properest Methods of examining its Goodness.
Robins describes the difference in strength of gunpowder available from various sources. He compares the British government issue 'battle' powder to the equivalent Spanish and French, concluding they are of similar high quality. The best was an expensive Dutch gunpowder with 25% more force than the British. The commercial powder sold to the public was much poorer and of variable quality. The worst of all was the powder made for the African trade.
The source of the difference in quality, Robins conjectures, is some deviation from the optimum ratio of the components: 75% Saltpetre, 12½% Sulphur and 12½% Charcoal.
Robins suggests that the best way to test the quality of powder for acceptance is to use his ballistic pendulum.
Of the Resistance of the Air, and of the Track described by the Flight of Shot and Shells
In this chapter, Robins puts forward eight propositions to develop a mathematical model for the trajectory of a projectile, taking into account air resistance.
Prop I. To describe the general principles of the Resistance of Fluids to solid Bodies moving in them.
Robins describes the general principles of the Newtonian model of drag on bodies moving in a fluid, the drag being proportional to the cross-sectional area of the projectile in the direction of motion. He proposes that the resistance will increase with velocity, since the fluid cannot close in behind a high-speed projectile as it can at slow speed, thus depriving the projectile of the forward thrust provided by the fluid pressure behind it. As a result, he proposes that the initial drag on a cylindrical projectile may be as high as four times the drag it experiences after it has slowed down later in flight, since at low speed the fluid can close in behind the bullet at a speed equal to or greater than the speed of the bullet, and so the pressure behind the bullet will be greater. This effect will be diminished for a spherical projectile due to its oblique surfaces, so the initial drag on round shot will be more like three times the Newtonian, low speed, drag. From this reasoning he suggests it is false to assume drag will be proportional to the square of the velocity for all velocities, as many of his contemporaries contended.
Prop II. To determine the resistance of the air to projectiles by experiments.
To test his hypothesis in Prop. I, Robins measures the velocity of ¾" musket balls using his ballistic pendulum. The bullet speed was measured at 25 ft, 75 ft and 125 ft from the muzzle. The velocities were 1670, 1550 and 1425 ft/sec respectively. This represents a drag 120 times the weight of the ball which, weighing \(\frac{1}{12}\) lb, would amount to 10 lb weight of drag.
To arrive at this figure Robins would have used the equation of motion: $$ v^2 = u^2 + 2ax $$
where: \(v\) is the final velocity,
\(u\) is the initial velocity,
\(a\) is the acceleration, in this case deceleration due to drag,
\(x\) the distance travelled.
re-arranging and substituting, $$ \begin{aligned} a &= \frac{1550^2 - 1670^2}{2 \times 50}\\ &= -3864 \text{ ft/sec}^2 \end{aligned} $$ taking \( g = -32 \text{ ft/sec}^2 \), $$ a = 120 g. $$
Since the bullet weighed \(\frac{1}{12}\)lb this is equivalent to 10 lb weight of drag.
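A minimal sketch of that arithmetic (using the rounded figures quoted in the text) recovers both the 120 g deceleration and the 10 lb drag figure:

```cpp
// Sketch: drag deceleration from the two measured velocities 50 ft apart.
#include <cstdio>

int main() {
    const double u = 1670.0, v = 1550.0;   // ft/sec at 25 ft and 75 ft from the muzzle
    const double x = 50.0;                 // ft between the two measuring points
    const double g = 32.0;                 // ft/sec^2
    const double m = 1.0 / 12.0;           // lb, mass of the ball

    double a     = (v * v - u * u) / (2.0 * x);   // ~ -3864 ft/sec^2
    double dragG = -a / g;                        // ~120 times gravity
    std::printf("a = %.0f ft/sec^2, drag = %.0f g = %.1f lb weight\n", a, dragG, dragG * m);
}
```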
Robins compares this value to that predicted by Newton's model for fluid drag. Newton investigated the resistance to motion through a fluid, which he discussed in Vol 2 of his "Principia". Newton's measurements of drag were all conducted at low speeds. He concluded that the resistance is proportional to the square of the velocity; more specifically, the drag force, \(F_D \), is given by $$ F_D = C_D \frac{1}{2}\rho A v^2$$
where: \( C_D \) is the drag coefficient \( = 0.5 \) for a sphere
\( \rho \) is the density of air \( = 0.074 \quad \text{lb/ft}^3 \)
\( A \) is the cross-sectional area of the projectile \(= \pi\left(\frac{3}{8}\cdot\frac{1}{12}\right)^2 \quad \text{ft}^2 \).
Hence the drag predicted by Newton's model for Robins' musket ball, travelling at 1600 ft/sec would be $$ F_D = 145.3 \text{ lb ft/sec}^2$$ taking \( g = 32 \text{ ft/sec}^2 \), $$ F_D = 4.5 \text{ lb weight.} $$ Robins quotes \(4\frac{1}{6}\) lb.
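The Newtonian figure can be reproduced with the same constants, as in the sketch below (using the values quoted above):

```cpp
// Sketch: Newton's low-speed drag estimate for the 3/4" ball at 1600 ft/sec.
#include <cstdio>

int main() {
    const double pi  = 3.14159265358979;
    const double Cd  = 0.5;                    // drag coefficient of a sphere
    const double rho = 0.074;                  // lb/ft^3, air density
    const double r   = (3.0 / 8.0) / 12.0;     // ft, ball radius (3/8")
    const double v   = 1600.0;                 // ft/sec
    const double g   = 32.0;                   // ft/sec^2

    double A  = pi * r * r;                    // ft^2, cross-sectional area
    double Fd = Cd * 0.5 * rho * A * v * v;    // ~145 lb ft/sec^2
    std::printf("Fd = %.1f lb ft/sec^2 = %.1f lb weight\n", Fd, Fd / g);  // ~4.5 lb weight
}
```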
Robins' measurement indicated a drag of 10 lb for bullets travelling at around 1600 ft/sec, a force between 2 and 3 times the drag predicted by the Newtonian model.
He repeated the experiment taking the average of 5 shots over 175 ft and found the ratio of the high-speed drag to the drag observed by Newton and others in low-speed experiments to be nearer to 3 to 1.
Repeating the experiments for projectile velocities near 1000 ft/sec, Robins finds the drag to be about 1.6 times what the Newtonian model predicts.
To test even lower speeds Robins measured the time of flight of bullets with an initial velocity of 400 ft/sec; they typically travelled about 950 ft in about 4.25 sec. The Newtonian drag formula for the distance covered in time \(t\) is: $$ s = \frac{v_t^2}{g} \ln\left(\frac{v_t^2 + gut}{v_t^2} \right) $$ where \(v_t \) is the terminal velocity for freefall \(= \sqrt{\frac{2mg}{C_DA\rho}} \)
which gives an expected time of flight for 950 ft of 3.2 sec. So the observed time of flight is slower than Newtonian \(V^2\) drag by a factor of \((4.25/3.2)^2 = 1.76\).
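A sketch of that calculation, solving the formula above for the time needed to cover 950 ft, is shown below; with the rounded constants quoted earlier it gives roughly 3.3 sec, close to the 3.2 sec figure used in the text.

```cpp
// Sketch: time of flight under v^2 (Newtonian) drag for the 400 ft/sec shots.
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979;
    const double Cd = 0.5, rho = 0.074, g = 32.0, m = 1.0 / 12.0;
    const double A   = pi * std::pow((3.0 / 8.0) / 12.0, 2);  // ft^2, ball cross-section
    const double u   = 400.0;                                 // ft/sec, initial speed
    const double vt2 = 2.0 * m * g / (Cd * A * rho);          // terminal velocity squared

    // Distance travelled after time t under pure quadratic drag (formula above).
    auto s = [&](double t) { return vt2 / g * std::log((vt2 + g * u * t) / vt2); };

    // Bisection for the time at which the shot has covered 950 ft.
    double lo = 0.0, hi = 10.0;
    for (int i = 0; i < 60; ++i) {
        double mid = 0.5 * (lo + hi);
        if (s(mid) < 950.0) lo = mid; else hi = mid;
    }
    std::printf("predicted time for 950 ft: %.2f sec (observed: ~4.25 sec)\n", lo);
}
```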
Prop III. To assign the different augmentations of the resisting power of the air according to the different velocities of the resisted body.
Robins proposes that the velocity dependent drag will increase linearly from the Newtonian drag value, applicable for low speed projectiles, to a value 3 to 4 times the predicted Newtonian drag, for velocities up to 1700 ft/sec.
Prop IV. To determine the velocities with which musket and cannon-shot are discharged from their respective pieces by their usual allotment of powder.
Robins calculates the muzzle velocity of Lead musket balls and 24 lb Iron cannon balls, predicted by his interior ballistics model, described in Chapter 1, Prop. VII.
Ball diameter (in) | Material | Ball weight (lb) | Powder charge (lb) | Muzzle velocity (ft/sec)
3/4 | Lead | 1/12 | 1/24 | 1700
5 | Iron | 24 | 16 | 1650
In a corollary to this proposition, Robins calculates the charge that would give maximum velocity to a cannonball. Increasing the charge beyond this value will actually result in a lesser muzzle velocity. Robins would have used Newton's calculus to find the maximum value of his equation for the muzzle velocity as a function of the length of charge. Robins doesn't actually show the equation, but, as shown on this page in the discussion of Ch 1, Prop. VII, it is of the form: $$ v = k \sqrt{c \times ln(\frac{L}{c})} $$
where: \(k\) is a constant,
\(c\) is the length of the powder charge (the distance to the back of the ball), and \(L\) is the length of the barrel.
The maximum velocity as a function of charge length, \(c\), will occur where the differential, with respect to \(c\), of this function is 0.
$$\frac{d}{dc}\left[c\ln\left(\frac{L}{c}\right)\right] = \ln\left(\frac{L}{c}\right) - 1 = 0 \quad\Rightarrow\quad \frac{L}{c} = e. $$
So the length of charge for maximum muzzle velocity is \(\frac{1}{2.72}\) of the length of the barrel.
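The same optimum can be checked numerically; the sketch below scans charge lengths for the 45" barrel and confirms that the maximum of \(c\ln(L/c)\) falls at \(c = L/e\).

```cpp
// Sketch: numerical confirmation that c*ln(L/c), and hence the muzzle velocity,
// peaks at a charge length of L/e.
#include <cmath>
#include <cstdio>

int main() {
    const double L = 45.0;                 // in, barrel length used in the experiments
    double bestC = 0.0, bestF = 0.0;
    for (double c = 0.5; c < L; c += 0.01) {
        double f = c * std::log(L / c);
        if (f > bestF) { bestF = f; bestC = c; }
    }
    std::printf("maximum at c = %.2f in, L/e = %.2f in\n", bestC, L / std::exp(1.0));
}
```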
Prop V. When a Cannon-Ball of 24lb. weight, fired with a full Charge of Powder, first issues from the Piece, the Resistance of the Air on its Surface amounts to more than twenty Times its Gravity.
As another example of his interior ballistics model, Robins calculates the pressure on a 24 lb cannonball when fired with a 16 lb charge, the weight of gunpowder used by the military at the time. The muzzle velocity he calculates to be 1650 ft/sec and the initial pressure on the ball will be 540 lb weight, equivalent to nearly 23 times the weight of the ball.
Robins doesn't show his calculations, but we may assume he used the dimensions of a typical Armstrong pattern 24 lb cannon, which had a bore of 5.5" and was 9½ ft long. The mass of the ball is 24 lb, and the specific gravity of the Iron ball would be 7.87. Assuming the density of gunpowder to be 230 grains/in³, then 16 lb will occupy a volume of 481 in³; in the shape of a cylinder 5½" in diameter this charge would be 24" long. Substituting these values into the equation for muzzle velocity returns a figure of 1659 ft/sec.
Prop VI. The Track described by the Flight of Shot or Shells is neither a Parabola, nor nearly a Parabola, unless they are projected with small Velocities.
Robins refutes the then held theory that air resistance can be neglected and that therefore the ballistic path of a projectile is a parabola, with maximum range achieved at an elevation of 45° and so on.
A musket ball fired at 1700 ft/sec with no air resistance will travel 17 miles. Robins quotes many accounts that the true maximum range of similar shots is closer to ½ mile, \(\frac{1}{34}\)th the value neglecting air resistance. He then dismisses the theory for heavier projectiles too. Taking the heaviest field piece then in use, the 24 pounder, Robins notes that his calculated muzzle velocity of 1650 ft/sec would give a 24 lb ball a 16 mile maximum range, but many sources state the maximum range is less than 3 miles, less than \(\frac{1}{5}\)th of the range given by the parabolic trajectory model.
Prop VII. Bullets in their Flight are not only depressed beneath their original Direction by the Action of Gravity, but are also frequently driven to the right or left of that Direction by the Action of some other Force.
Robins describes the path of a bullet as curving in the vertical plane due to the force of gravity, but notes that a bullet's path will also curve left to right or right to left during its flight. This is not a simple linear error in the horizontal angle of firing, since the horizontal deviation at 300 yd is not 30 times the deviation at 10 yd, but is much greater. The path appears to be a curve, initially tangential to the axis of the barrel but increasing its deviation more than just in proportion to its distance from the muzzle.
In the scholium, Robins proposes that the cause of the lateral or even excessive downward curve of the trajectory is caused by the effect of air resistance on a spinning projectile. He conjectures that when spinning the different parts of the surface of the ball will strike the resisting air at an angle different from the angle at which it would strike if not spinning.
Robins thus described, quite accurately, the effect now usually referred to as the Magnus effect. Newton, too, had described the phenomenon even earlier than Robins.
Prop VIII. If Bullets of the same Diameter and Density impinge on the same solid Substance with different Velocities, they will penetrate that Substance to different Depths, which will be in the duplicate Ratio of those Velocities nearly. And the Resistance of solid Substances to the Penetration of Bullets is uniform.
Robins makes one last note regarding his experiments with the ballistic pendulum. He notes that the depth of penetration of musket balls appears to be in the ratio of the square of their velocities. A ball travelling at 1700 ft/sec penetrates about 5" while a ball travelling at 730 ft/sec only penetrates \(\frac{7}{8}\)"; the ratio of velocities is 2.33 and the ratio of depths is 5.7, approximately \(2.33^2\). Robins goes on to give several similar examples.
Environment for integration of distributed heterogeneous computing systems
Thiago W. B. Silva2,
Daniel C. Morais1,
Halamo G. R. Andrade1,
Antonio M. N. Lima2,
Elmar U. K. Melcher2 &
Alisson V. Brito ORCID: orcid.org/0000-0001-5215-443X1
Journal of Internet Services and Applications volume 9, Article number: 4 (2018)
Connecting multiple and heterogeneous hardware devices to solve problems raises some challenges, especially in terms of interoperability and communications management. A distributed solution may offer many advantages, like easy use of dispersed resources in a network and a potential increase in processing power and data transfer speed. However, integrating devices from different architectures might not be an easy task. This work deals with the synchronization of heterogeneous and distributed hardware devices. For this purpose, a loosely coupled computing platform named Virtual Bus is presented as the main contribution of this work. In order to provide interoperability with legacy systems, the IEEE 1516 standard (denoted HLA - High Level Architecture) is used. As proof of concept, Virtual Bus was used to integrate three different computing architectures, a multi-core CPU, a GPU and a board with an Altera FPGA and an ARM processor, which execute a remote image processing application that requires communication between the devices. All components are managed by Virtual Bus. This proposal simplifies the coding effort needed to integrate heterogeneous distributed devices, and the results demonstrated successful data exchange and synchronization among all devices, proving its feasibility.
Integrating heterogeneous devices makes it possible to raise processing capacity without necessarily having centralized control on a single device. To improve performance and increase cost-effectiveness, the processing tasks can normally be distributed. However, the integration of diverse devices demands reliable communication, which is not an easy task, requiring a mechanism that manages and synchronizes the members' messages. Building an environment to manage the exchange of data is even more difficult, because problems may arise from the integration of different devices.
The integration of computing systems (software and hardware) allows the creation of a System of Systems (SoS). Without careful management there is a high probability of instability and difficulties. According to [1], two systems can be considered stable when working individually, but nothing can be said about their operation when they operate in an integrated manner. There are two major problems: to split processing among all members, dividing a task into subtasks to be processed by them; and to assign a specific predefined task to each member of the system. A possible solution is to use a distributed communication architecture that allows data to be exchanged synchronously between the system's components.
A case study that highlights the synchronization problem is presented in this paper. This problem appeared during the research when trying to verify some specific functionalities working with distributed systems in a functional verification setup. The main problems arise in the synchronization of the messages exchanged by the components. There was an inconsistency with regard to the way in which the components expected to exchange information, which often caused rework and communication problems during previous experiments. Thus, this work aims at synchronizing communication of heterogeneous systems. Another key issue addressed in this work is the challenge of integrating legacy codes written in different languages for heterogeneous hardware architectures. The proposed solution provides a distributed computing platform with an API of high level functions for data exchange and synchronization, independent of languages and architectures. To achieve this aim, our solution is based on the IEEE 1516 standard High Level Architecture (HLA) [2] as communication and synchronization platform.
HLA is a specification of a software architecture which has the purpose of facilitating intercommunication between distributed heterogeneous systems, mainly simulations, and allows the division of tasks among members [3]. This standard is a general-purpose architecture defined by the Defense Modeling and Simulation Office (DMSO) and designed to use a wide number of different types of simulators [2]. In this paper, HLA is used in an innovative way to provide interoperability of distributed heterogeneous hardware devices, instead of only simulations.
One of the possibilities proposed by the HLA specification is the use of diverse applications to compose a heterogeneous co-simulation. Therefore, it is feasible to build a computing platform based on the integration of heterogeneous devices and properly manage tasks to accelerate processing.
The purpose of this work is to create a platform that simplifies the intercommunication of distributed heterogeneous devices (composed of hardware and software). So, the main contribution of this work is the development of a platform to integrate heterogeneous computing devices (independent of architecture) in a loosely coupled way. As already mentioned, this work initially started from the idea of building a middleware to intermediate distributed devices to perform functional verification of components developed in the laboratory. Then, we decided to build a more general purpose software to abstract the underlying distributed architecture, instead of a specific solution. For this, the HLA standard for inter-operation among those systems was used, and a library was developed to unify the way of programming communication and synchronization. In previous work, HLA has been used to integrate circuit simulation tools for functional verification and power consumption estimation. In that case, different hardware architectures were simulated, but no physical devices were integrated [4, 5].
The HLA supports our implementation of a platform that emulates a bus, here named Virtual Bus. This paper also explores parallel computing in order to allow multiple processing elements available in distributed devices to be used independently of their architectures. Virtual Bus is presented to programmers as an API with basic functions for reading and writing data to the bus, checking available data and performing synchronization.
As proof of concept, Virtual Bus was used to integrate three different computing architectures in a single platform: a multi-core CPU, a GPU and a System-on-Chip composed of an FPGA and an ARM processor. An example was developed running a remote image processing application that requires communication between the devices. The use of Virtual Bus reduces the number of lines of code necessary to integrate all components. Without Virtual Bus, it would be necessary to write at least about 1000 lines of code for each one, making it impractical as the number of components increases. With Virtual Bus, it only takes a couple of lines to instantiate the bus, and the Federates can be reused whenever necessary. The platform may be extended to other architectures and to more devices in future works.
This paper is organized as follows. In the following section, related works involving heterogeneous systems are presented. Section 3 gives a brief explanation of HLA and other background details. Then, in Section 4 the proposed platform that intercommunicates heterogeneous systems, the Virtual Bus itself, is addressed. Section 5 presents the methodology of the proposed experiments. The results of the computational experiments are presented in Section 6. Finally, a conclusion and perspectives are presented in Section 7.
This section presents some discussion about relevant aspects in related papers, i.e. works involving technologies with integration of distributed systems and heterogeneous hardware. Table 1 highlights the pros and cons of the related work compared to our approach.
Table 1 Comparison between different strategies
In [6] a model that simulates a heterogeneous system controlled by Ptolemy II is presented. The major contribution of this work is the integration of different simulators with Ptolemy such as Simulink to model building automation systems. In our work, we do not use simulation to abstract heterogeneous hardware, but we use a distributed simulation platform, based on HLA and adapted to provide interoperability among heterogeneous hardware platforms.
In contrast with our approach, the work in [7] presents a programming model for modeling distributed systems, but it does not allow the execution of such systems in a distributed manner. This hinders the scalability of those systems. Other works use the concept of heterogeneous distributed systems to provide the connection of multiple systems, such as the authors of [8], who propose a networked virtual platform to develop simulatable models of large-scale heterogeneous systems and support the programming of embedded applications. Different from our work, the contribution of that paper is the simulation of a system that includes processors and peripherals and uses binary translation to simulate the target binary code on top of a host instruction-set architecture. The integration of different hardware architectures in a distributed way is not considered.
A network solution is also suggested in [9], which proposes the integration of TCP/IP network simulators into a Discrete Event (DE) co-simulation platform. The paper proposes splitting network topologies into several models and defining input/output ports inside existing models. Our work delegates the network management to HLA, which defines all operations necessary for data exchange and synchronization.
In [10], a mixed simulation is introduced to coordinate several parallel simulations as a distributed simulation system. The parallel simulations are conducted according to HLA, which has been used as a co-simulation bridge. The work presented in [11] uses HLA to run a fight simulation of aircraft attacking air defense units. In [12] HLA is applied to real time aircraft simulation, to validate real time behavior on the target computing platform. None of these works deals with the problem of integrating heterogeneous architectures in a unique computing platform. Their focus is on simulation, while our work focuses on heterogeneous distributed computing.
The work [13] proposes to use HLA as a master for Functional Mockup Interface (FMI) compatible simulation components. The main objective is to provide a generic and standalone master for the FMI, making FMI-based simulation components usable as plug-and-play components, on a variety of distributed environments including grids and clouds. It is related to our work due to its goal to create a distributed computing platform, but its main objective is the simulation of FMI models, while we focus on heterogeneous distributed architectures.
In [14] the authors replace the transport layer of an HLA-based system by Data Distribution Service (DDS) communication. They present a combination of distributed HLA-based simulation with network control using DDS. The HLA and DDS are combined to form a unique middleware. It consists of service and network configuration and an API for interconnecting the data objects between HLA and DDS. HLA-DDS not only allows network-controllable distributed simulation but also preserves existing HLA-based distributed simulation systems. The goal is to implement a bridge between HLA and DDS, while our work focuses on lower level integration, where different hardware architectures can be integrated.
DDS has also been used in [15] to manage the interaction between high computation power nodes and ARM-based embedded computers. In that work, a flexible library to create the communication using different underlying communication software is presented. The target system integrates heterogeneous nodes and base servers. Although our solution is built on HLA, another version could also be implemented over DDS. In future works, our solution may also use DDS in replacement of HLA for distributed and heterogeneous applications.
Our work brings a contribution regarding abstraction and intercommunication, but in the case of time-sensitive applications, the work of [16] is more specific. The authors propose a middleware with high degree of integration with the hardware platform, through the use of operating system calls to control the computing cores. However, our work proposes a more generic solution, independent of hardware architecture or operating system.
There are other works that investigate middlewares that support composition of components, services and modules, with support to dynamic changes in real time [17, 18]. The authors propose reconfigurable middleware for real-time distributed services-based systems. However, our solution focuses on the integration of different hardware platforms in a unique environment in a loosely coupled way, not necessarily based on services nor components.
The authors in [19] propose a Gateway/Middleware High Level Architecture (HLA) implementation and the extra services that this implementation provides to a simulation. That paper's contribution is to incorporate Gateway/Middleware services into the HLA interface, in what is denoted a Simulation Object Middleware Classes (SMOC) Gateway.
In our previous work, HLA is used to integrate five different simulation tools: Ptolemy II, SystemC, Omnet++, Veins, Stage and physical robots [20]. The idea is the development and evaluation of a distributed simulation platform of heterogeneous simulators. That work inspired the present one with the idea of extending HLA not only to simulations, but to general computing applications running on heterogeneous hardware architectures.
In our solution, OpenCL is used to explore parallel computing on the multi-core CPU and on the GPU, due to its versatility. Other works have also used OpenCL to explore high-performance computing [21, 22], though it presents lower performance than CUDA solutions [22]. Therefore, the main advantage of using OpenCL in our context is its broad compatibility with heterogeneous hardware platforms.
In Table 1 the related works are compared with our work focusing on main contributions of this paper.
The high level architecture (HLA)
The High Level Architecture (HLA) is a standard of the Institute of Electrical and Electronic Engineers (IEEE), developed by Simulation Interoperability Standards Organization (SISO). Initially it was not an open standard, but it was later recognized and adopted by the Object Management Group (OMG) and IEEE [2].
There are several standards based on distributed computing, such as SIMNET, Distributed Interactive Simulation (DIS), Service Oriented Architecture (SOA), Data Distribution Service (DDS), HLA, among others. HLA was chosen as the standard to integrate distributed heterogeneous devices because it manages both data and synchronization, and allows the interoperability and composition of the widest possible range of platforms. One of the most notable advantages of using HLA for this purpose is that it already has a trustworthy and widely used solution for time synchronization. There is also a large number of simulators and tools compatible with it (e.g. Matlab, Simulink, OMNeT++, Ptolemy), which makes further applications with different tools easier.
HLA is not a software implementation, but a standard with diverse independent implementations, including some open-source, like CERTI [23] and Portico [24]. HLA is specified in three documents: the first deals with the general framework and main rules [2], the second deals with the specification of the interface between the simulator and the HLA [25] and the third is the model for data specification (OMT) transferred between the simulators [26].
The main HLA characteristics are defined under the leadership of the Defence Modelling and Simulation Office (DMSO) to support reuse and interoperability. Interoperability is a term that covers more than just sending and receiving data; it also allows multiple systems to work together. However, the systems must operate in such a way that they can achieve a goal together through collaboration.
The main idea of the HLA is to provide a general purpose platform where the functionality of a system can be separated across distributed machines without loss of consistency. For this, it uses the Runtime Infrastructure (RTI), which manages data exchange and centralizes the control of a global time for synchronization among the Federates (see Fig. 1). This union of Federates through RTI is called a Federation. Here we use the term HLA Time to distinguish it from the local time of each Federate; it refers to a logical time and not a clock time.
Structure of the HLA architecture
To connect various Federates with RTI, two components must exist: one local RTI Ambassador (RTIA) and a global RTI Gateway (RTIG). RTIA defines the interface of the Federate with RTI, calling functions of RTIG for updating, reflection, transmitting and receiving data. RTIG is responsible for synchronization and data consistency. Messages among RTIG and RTIA are exchanged through a TCP/IP network in order to perform the services in a distributed manner. In this work, the HLA implementation CERTI [23] was used. CERTI is an open source implementation of HLA specification and is developed by its open source community and maintained by ONERA. This implementation supports HLA 1.3 specification and it is already used in robust co-simulations [4, 20, 27–29].
Federates do not communicate directly with other Federates. Each one is connected to the RTI, then they communicate with each other using only the services that the RTI provides. There is always an interface between the RTI and the Federates, and each member has a unique connection with the RTI.
The RTI provides an interface called RTI Ambassador, and for each Federate an interface called Federate Ambassador must be implemented for communication with the RTI, as presented in Fig. 1. Typically, the RTI Gateway (RTIG) is provided by HLA implementations and developers must implement (or reuse) a Federate Ambassador for each system or device that will be part of the Federation. In this work, three Federate Ambassadors were developed, for GPU, SoC and Multi-core CPU. The Federate Ambassador has two main objectives: to exchange data through RTI, and to manage the synchronization with the RTI.
In our implementation, we use the available "publish and subscribe" communication mechanism provided by RTI. Messages are used to update values by calling the function updateAttribute of the Federate Ambassador. All updating is requested by a Federate to RTI, which propagates it, calling the function reflectAttribute of all Federate Ambassadors. Once that happens, our implementation of this function saves the values into internal variables and signals that new data has been received. This flag will remain on until the getReceivedData function is called for reading.
To deliver the messages in a consistent order, the HLA has specific mechanisms of time management. They are associated with the idea of an advancing time step, which is an abstraction of a global time for the whole Federation, which we call HLA time. RTI manages the advancing of HLA time to guarantee that each Federate will advance to the next step only when all the others reach the same HLA time.
For this, the Federate Ambassador defines the federateTime and advanceTime functions. The first one is used to read the current global time (or HLA Time), and the second is to send an advancing time request to RTI. The Federate is blocked until RTI grants the time advancing. The grant will occur only when all registered Federates request the time advancing to the same point.
As Federates communicate with each other through RTI, the data exchange is performed in terms of interactions and objects. An interaction is the operation of sending data in one time-step, and objects are the data packets sent during an interaction. To initiate a Federation, it is necessary to start the RTI Gateway (RTIG) to allow all Federates to join the Federation. Updates of new messages are received when a Federate subscribes to an object. Therefore, all updates in those objects are reflected to the interested Federates. The Virtual Bus encapsulates both the request and the reflection of objects.
Each Federate knows its own internal logical time and can advance it following some policies. A Federate can be time-constrained, when the advance of its local time is restricted by other Federates; time-regulating, in which the advance of its local time regulates other Federates; both; or none. In this project the Virtual Bus configures the time management of all Federates to both time-constrained and time-regulating.
The platform for distributed heterogeneous computing (Virtual Bus)
This section presents the proposed platform for distributed heterogeneous computing, called Virtual Bus, which is responsible for sending and receiving data on the network and for allowing interoperability between multiple heterogeneous hardware platforms.
This work used the intercommunication standard HLA (see Section 3) as a middleware for communication between these platforms. Virtual Bus has the role of making data exchange operations transparent to the user, providing an API over the HLA, without the user having to perform its configuration explicitly. So, each device is a Federate that will communicate through the Virtual Bus.
Figure 2 presents the extensions of HLA proposed here (called Virtual Bus), implemented to make distributed computing more transparent. Different architectures such as CPU, GPU, ARM and FPGA might be connected using the Virtual Bus, which is built on top of the CERTI/HLA environment.
General architecture of Virtual Bus
To join a Federation, a Federate must call the runFederate function from the Virtual Bus API, described in Code 1. This function creates an instance of the RTI Ambassador, requesting the RTI to create the Federation if it does not exist, and creates the Federate Ambassador. The actual Federate joins the Federation and signals the RTI that it is ready to run as soon as all other Federates reach the synchronization point called READY_TO_RUN. Finally, when a Federate calls the method publishAndSubscribe() the time policy is set and all objects of interest for receiving and sending updates are registered.
Virtual Bus also offers in its API two functions for writing and reading data to facilitate communication between the Federates. The writeData function is responsible for sending data through Virtual Bus. The main logic of these functions is presented in Code 2. It creates an object to manipulate its attributes. All new values are set to these attributes, which are sent to the RTI together with the HLA local time of that Federate. As previously presented in Section 3, Virtual Bus configures all Federates to the Time Constrained and Time Regulating policies. This guarantees that a Federate will advance its local time to a specified global time (HLA time) only when all other Federates also reach a time equal to or greater than that.
To explicitly request time advancing, Federates must call the advanceTime function from the Virtual Bus API and wait for a granting message from RTI. Only when all Federates are granted is the global time of the Federation advanced. Meanwhile, the Federate is blocked waiting for this grant. In Virtual Bus, advanceTime is called whenever the function writeData of Virtual Bus is called. It means each Federate advances its local time after updating the values of the attributes registered to it. When every Federate advances its local time, RTI advances the global time and one cycle is completed.
To receive data, the Federate must use the readData function, which returns the last received data from RTI. The function works as follows: if any data has been received, the Federate updates a flag to true. This flag can be checked by the hasReceivedData function. If there is available data, it is returned, otherwise a null value is returned. The pseudo code of the readData function is shown in Code 3.
The Virtual Bus works as illustrated in the Fig. 3. On the sender side, the writeData function is used to send data through RTI. Once this function has been called, the values are updated calling the updateValues function, which calls updateAttributes. In this step, each component of an array item is rearranged and passed to RTI Ambassador. The control of time and distribution of data is carried out by the RTI, calling the functions to synchronize all Federates (waitSync) and to distribute data among all registered Federates (distributeData).
Virtual Bus flow
On the receiver side, data is reflected into internal variables by the reflectAttribute function, called by the Federate Ambassador. This method calls receivedData to store data internally and set the flag hasReceivedData to true. A Federate in Virtual Bus is configured to check at each internal cycle if some new data was received by calling readData method. This method checks the flag and, if it is true, the data is returned to application.
The format of the data exchanged by the Ambassadors is defined following the Object Model Template (OMT) of the HLA [26], which is specified in a file common to all Federates. In Virtual Bus, each object has attributes to identify the destination and origin of message, besides attributes of N data values. The size of N is set in advance depending on each configuration scenario. The description file for the Data Object Model for Virtual Bus is presented in Code 4.
The Virtual Bus offers a general purpose API for distributed systems, and its use must be easy and simple. So, the only changes that are needed to integrate new projects, or even legacy codes, are to add some libraries from CERTI HLA and to include the Virtual Bus package. As shown in Fig. 4, the package contains basically the Virtual Bus Federate and Federate Ambassador (classes and interfaces). From this point, to use the Virtual Bus the only necessary functions to be called are: runFederate, writeData and readData. The first one is to initialize the Federate, the second one to send and the last one to receive data. The Federate Ambassador is referenced as a black box code, used by the Virtual Bus Federate, and does not have to be called directly.
Components of the Virtual Bus Package, needed to be included to use the proposed solution
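A minimal sketch of how a Federate might drive this API is shown below. The function names (runFederate, writeData, readData, hasReceivedData, advanceTime) come from the description above; the stub class, the exact signatures and the processing loop are assumptions made only so that the call sequence compiles, since the real implementation wraps CERTI/HLA.

```cpp
// Sketch: the call sequence of a Virtual Bus Federate. The class below is a
// hypothetical stand-in; in the real platform these methods wrap the RTI.
#include <string>
#include <vector>

struct VirtualBusFederate {
    void runFederate(const std::string& federation, const std::string& name) {}
    bool hasReceivedData() { return false; }       // set by reflectAttribute in the real code
    std::vector<double> readData() { return {}; }  // last reflected attribute values
    void writeData(const std::vector<double>&) {}  // updateAttributes + advanceTime
    void advanceTime() {}                          // request/wait for an HLA time grant
};

int main() {
    VirtualBusFederate fed;
    fed.runFederate("ImageFederation", "GPUFederate");   // join (or create) the Federation

    for (int cycle = 0; cycle < 100; ++cycle) {
        if (fed.hasReceivedData()) {
            std::vector<double> in = fed.readData();
            // ... application-specific processing of `in` would go here ...
            fed.writeData(in);                     // send the result back through the RTI
        } else {
            fed.advanceTime();                     // keep the Federation time moving
        }
    }
}
```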
The main idea of these experiments is to run some Federates that exchange data of different types and sizes. Therefore, the experiment was assembled with four Federates: the Sender Federate (running in a PC), the SoC Federate (ARM+FPGA), the Multi-core Federate and the GPU Federate. The Sender Federate sends images to the other Federates, that will process some operation on the images and return the result back to the Sender Federate. All Federates use the same implementation of Virtual Bus developed in C++.
To ease the manipulation of the images, the OpenCV framework was used in Sender and Multi-core Federates. OpenCV is a library used to manipulate and process images, originally developed by Intel (http://opencv.org). It was used in this work only for basic handling of pixels through its functions and data structures.
This section presents the configuration of the experiment and some lessons learned during the work. In Subsection 5.1 the list of equipment specifications can be found, along with how they are connected. Subsection 5.2 shows the scenarios that were configured. Many data formats were tested and they are presented in Subsection 5.3. Then, a more detailed description of how each Federate works is given in Subsections 5.4 and 5.5.
In general, a Federation was configured composed of four computing machines, corresponding to the following Federates: the Sender Federate, the Multi-core Federate, the GPU Federate and the SoC Federate (ARM+FPGA). The basic configuration of each one is described in Table 2. The Sender Federate is a desktop computer running Ubuntu 14.04 LTS. The SoC (ARM+FPGA) has an Altera Cyclone V SE SoC, which has a Cyclone V FPGA integrated with a dual-core ARM Cortex A9 processor on a single chip, running Ubuntu 12.04 LTS. The GPU Federate uses a GeForce GT from NVIDIA, and was running Ubuntu 16.04 LTS. The multi-core Federate runs OpenSuse 13.2 Harlequin.
Table 2 Equipment specification
The experiment was divided in five scenarios as listed in Table 3. In the first scenario, the Sender Federate communicates only with the SoC. Following, it communicates separately with Multi-core and then with GPU Federate in scenarios 2 and 3, respectively. In scenario 4 the communication is done between the Sender, SoC and Multi-core Federates. Then, in the last scenario, the Multi-core Federate is replaced by the GPU Federate.
Table 3 Scenarios used in the experiments
The idea in these scenarios is to test separately each Federate with the Sender Federate in scenarios 1 to 3, and later to integrate two Federates per experiment in scenarios 4 and 5. With this, it is possible to analyze the behavior of the Virtual Bus in separated cases.
Figure 5 gives an overview of how the devices are connected. The Sender Federate is on the left side of the figure. It is responsible for generating data for all other Federates and collecting the results from them. On the right side of the same figure are the other Federates: the SoC, where the ARM bridges the Virtual Bus with the FPGA, and the Multi-core and GPU Federates, which use OpenCL to interface the Virtual Bus with the parallel architecture.
Configuration of experiments using Virtual Bus
Data configuration and exchanging
One of the contributions of this work is to improve the data transfer to an acceptable rate. In this subsection, we present some results of the development process with some details regarding the implementation of data exchange in Virtual Bus. This discussion is more relevant in the cases where a considerable amount of data must be transferred, like an image, for example. In this experiment the following data exchanging strategies were used:
one-by-one: pixels are sent one by one in each HLA message;
multi-pixel: a group of N pixels are sent in N attributes, one attribute for each pixel;
multi-pixel in one attribute: a group of N pixels is sent in one array attribute of N size.
Some formats for the messages were defined to improve the data exchanging in Virtual Bus. The overall format is presented in Fig. 6.
General structure of messages
The field called data contains the part of the image that follows in each message. During the experiments, different sizes of data per message were tried, which resulted in different data transfer methodologies. They essentially differ in the number of pixels per message and the way the pixel information is organized in attributes.
The first methodology, hereinafter referred to as one-by-one, has been implemented to send one pixel per message. That is, to send an image, each message carries the information of the source, plus the position of the pixel (x and y) and the corresponding pixel data. Thus, the amount of messages is equal to the number of pixels in the image.
These messages were structured as presented in Fig. 7. For example, the source field is the ID of the Sender Federate, the address is the ID of the target Federate that must receive the message, the position x and y are the pixel coordinates being sent and the pixel_data field is the content of the pixel itself.
Structure for one-by-one messages
In the second methodology of the experiment, called multi-pixels, the strategy adopted was to send image information, such as resolution and number of channels, only in the first message. The following messages then carry only the pixels (multiple ones per message). It also means a variation in the number of fields per message.
Recall that the number of elements is equal to the number of pixels multiplied by the number of channels. For example, five pixels in an image of three channels (RGB) mean fifteen data fields per message. The structure of the messages is presented in Fig. 8.
Example of multi-pixel message, transporting five pixels of three channels
Based on the first message, sent with the resolution information and number of channels, it is possible to manage the receipt of pixels. Hence, to send a complete image, this strategy produces the following number of messages: the number of pixels, times the number of channels, divided by the number of elements sent per message, plus the first message; that is, \(\lceil (\text{pixels} \times \text{channels}) / \text{elements per message} \rceil + 1\) messages.
The last methodology to transfer the images, called multi-pixel in one attribute, is a variation of the second implementation. The structure of the message is the same as presented in Fig. 8, but here the HLA is used in a different way. Now, the content of multiple pixels is packed into only one field of the HLA message attributes. Here the Object Model used in Virtual Bus (as shown in Code 4) is changed to use one single data field with multiple elements. This field is managed by HLA as an array, thus the data of all pixels is encapsulated in a unique array type (also called data). In HLA, it means the RTI will try to send as much data as possible per TCP packet, instead of being limited by a fixed number of data fields.
As presented in the next section, the methodology called multi-pixel in one attribute achieved the best performance. Therefore, this was the approach chosen for the experiments that follow.
Sender federate
The Sender Federate is responsible for sending data to be processed by the other Federates and for receiving the results back. The Sender Federate opens an image file with a resolution of 512×512 pixels and sends all the pixels. In practice, this image is a matrix of unsigned char elements. Because the image is colored, each pixel is represented by three channels, due to the RGB representation.
The data transmission using Virtual Bus makes the interactions between the Federates involved in the execution transparent. After the image is loaded into memory, each pixel is fragmented into a scalar format to be sent in bursts, repeated in a main loop. In Code 5 the main logic is presented. Notice that the first action is to start up the RTI (lines 2-4). Following, the value of each channel in a pixel is stored in an array (lines 7-10). The next lines (14-21) check if all elements of the image were already sent or if the next line of the image matrix must be sent. Finally, the data array is sent to RTI (line 26). In the last lines (28-35), it receives the processed data from RTI and handles it according to the destination Federate.
This loop sends data until the value of the NUMBER_OF_ELEMENTS_BURST variable is reached. For example, if a burst size of 900 elements is chosen, that means 300 pixels per message will be sent. So, the for structure has this stop condition because it buffers the 300 pixels in the variable called data, to send them subsequently. The x and y variables represent the coordinates of the pixel which is being accessed.
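The bursting logic can be illustrated with the following self-contained sketch; it is not the authors' Code 5, only an approximation of the loop described above, and the call to writeData is left as a comment.

```cpp
// Sketch: flattening a 512x512 RGB image into bursts of NUMBER_OF_ELEMENTS_BURST
// channel values; each filled burst would be handed to writeData.
#include <cstdio>
#include <vector>

int main() {
    const int WIDTH = 512, HEIGHT = 512, CHANNELS = 3;
    const int NUMBER_OF_ELEMENTS_BURST = 900;            // 300 RGB pixels per message

    std::vector<unsigned char> image(WIDTH * HEIGHT * CHANNELS, 0);  // stand-in for the loaded image
    std::vector<double> data;
    data.reserve(NUMBER_OF_ELEMENTS_BURST);

    int bursts = 0;
    for (int y = 0; y < HEIGHT; ++y)
        for (int x = 0; x < WIDTH; ++x)
            for (int c = 0; c < CHANNELS; ++c) {
                data.push_back(image[(y * WIDTH + x) * CHANNELS + c]);
                if ((int)data.size() == NUMBER_OF_ELEMENTS_BURST) {
                    // writeData(data);   // send one burst through the Virtual Bus
                    data.clear();
                    ++bursts;
                }
            }
    if (!data.empty()) ++bursts;                           // final partial burst, if any
    std::printf("bursts per image: %d\n", bursts);         // 786432 / 900 -> 874 messages
}
```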
SoC integration with Virtual Bus
As proof of concept, the MD5 algorithm was implemented and executed in the FPGA. MD5 is a hash function widely used as a checksum to verify data integrity [29]; the implemented core takes as input a message block of up to 512 bits and produces as output a fingerprint of 128 bits.
The input message should be an arbitrary non-negative integer.
Code 6 shows how the Federate is implemented in the ARM processor. As an initial solution, a loop is used, instead of processor interrupts, to check if any data is received. Once received, the function to calculate MD5 in the FPGA is called (line 9). The result is only sent to the Virtual Bus when the calculation is completed. This is controlled by a flag called received (line 11). When the result of the calculation is sent by the FPGA, the four words are sent to the Virtual Bus (line 12) and received by the Sender Federate on the other side.
The communication between the ARM and the FPGA in the software layer is made by the calculate_md5 function in line 8 of Code 6. In the ARM, this communication is done by writing to registers. This is configured with the Qsys framework, which maps the FPGA as a peripheral device of the ARM processor. The MD5 was implemented as an FSM which receives a sequence of 512 bits, separated into 16 blocks of 4 bytes each.
The Cyclone V SE SoC has a physical limitation that does not allow the transmission of 512 bits in one clock cycle. So, we have created a wrapper in Verilog to connect the MD5 code (in the FPGA) with the ARM. This logic splits the transfer between FPGA and ARM into transfers of 32 bits, until the 512 bits are transferred (see Code 7). Thus, the input signals in_wdata and in_addr and the output signal out_rdata are mapped in the ARM registers and can be easily accessed from the software layer.
The wrapper receives data from the ARM via in_wdata, and stores it in a register bank at the address provided by in_addr. To read the result from MD5, it is necessary to wait 63 positive clock edges, then set in_addr to the address that holds the results and read out_rdata. For reading and writing, 32-bit words are used, while for addressing 64 bits are used.
The registers from 0 to 15 (4 bytes each) are used for transmitting parts of the message. After sending the message completely, the least significant bit of register 16 is set in order to make the message available to the MD5 block. Then, after 64 clock cycles the result is stored in registers 32 to 35. Finally, in_addr is set to indicate the address of those registers and their values are returned via out_rdata.
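The register protocol can be summarized by the sketch below. The register map (words 0-15 for the message, bit 0 of word 16 to signal completion, words 32-35 for the digest) comes from the description above; obtaining the pointer to the memory-mapped bridge and the busy-wait delay are assumptions made only for illustration.

```cpp
// Sketch: ARM-side access to the FPGA MD5 wrapper through memory-mapped registers.
#include <cstdint>

void calculate_md5(volatile uint32_t* regs, const uint32_t msg[16], uint32_t digest[4]) {
    for (int i = 0; i < 16; ++i)                 // write the 512-bit message, 32 bits at a time
        regs[i] = msg[i];
    regs[16] = 1;                                // LSB of register 16: message complete
    for (volatile int i = 0; i < 1000; ++i) {}   // crude wait covering the 64 clock cycles
    for (int i = 0; i < 4; ++i)                  // read back the 128-bit fingerprint
        digest[i] = regs[32 + i];
}
```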
Federates based on OpenCL
In our implementation, two Federates are based on OpenCL: the Multi-core Federate and the GPU Federate. Both of them work in similar ways. They receive all the image data from the Sender Federate in the same way as the other Federates, then they build the image matrix in the device memory and an OpenCL kernel is initialized. They then execute a mask operation on the image, which consists of recalculating all the image pixels by applying Eq. 1.
$$ \begin{aligned} I(i,j)& = 5 * I(i,j) - [I(i-1,j) + I(i+1,j) \\ &\quad+ I(i,j-1) + I(i,j+1)] \Leftrightarrow I(i,j) * M, \end{aligned} $$
$$ \begin{aligned} I(i,j) * \left[ \begin{array}{ccc} 0 & -1 & 0\\ -1 & 5 & -1\\ 0 & -1 & 0 \\ \end{array}\right] \end{aligned} $$
Equation 1 was obtained by applying the mask matrix to each image element, as shown in Eq. 2. This calculation adjusts each pixel value based on how much influence the current and the neighboring pixels have.
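For reference, the per-pixel computation of Eq. 1 can be written as the following plain C++ sketch; in the experiments the equivalent computation is performed by the OpenCL kernel on each colour channel, and the treatment of the image borders here is only an assumption.

```cpp
// Sketch: sharpening mask of Eq. 1 applied to a single-channel image buffer.
#include <algorithm>
#include <vector>

std::vector<unsigned char> sharpen(const std::vector<unsigned char>& in, int w, int h) {
    std::vector<unsigned char> out(in);          // border pixels are simply copied
    for (int i = 1; i < h - 1; ++i)
        for (int j = 1; j < w - 1; ++j) {
            int v = 5 * in[i * w + j]
                  - in[(i - 1) * w + j] - in[(i + 1) * w + j]
                  - in[i * w + (j - 1)] - in[i * w + (j + 1)];
            out[i * w + j] = (unsigned char)std::min(255, std::max(0, v));
        }
    return out;
}
```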
To reach a satisfactory portability of the kernel between the diverse hardware platforms, some calculations were done to adjust the runtime environment of OpenCL. When using OpenCL it is important to properly calculate the number of work-groups [30, 31]. This implementation took into account the number of elements to be calculated based on the resolution of the image, the number of cores of the current architecture and the number of compute units of the processor. These last two parameters are based on the values returned by OpenCL functions appropriate for querying hardware attributes.
Given the low degree of complexity of the kernel in this experiment, only this information is necessary to calculate the required number of threads, dividing them into work-groups of appropriate size according to the number of compute units.
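One plausible way to do this device query is sketched below; it only reads the number of compute units and derives the number of work-groups from the image resolution, and the chosen local size is an assumption, not the exact partitioning used in the experiments.

```cpp
// Sketch: sizing OpenCL work-groups from the device's compute units and the image size.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platform; cl_device_id device; cl_uint computeUnits = 0;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(computeUnits), &computeUnits, nullptr);

    const size_t pixels    = 512 * 512;              // elements to process per channel
    const size_t localSize = 64;                     // assumed work-group size
    size_t groups = (pixels + localSize - 1) / localSize;
    std::printf("compute units: %u, work-groups: %zu of %zu work-items\n",
                (unsigned)computeUnits, groups, localSize);
}
```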
In the experiment involving Multi-core CPU and GPU Federates, the following steps were executed:
Sender Federate reads an image and shows it on the screen;
Sender Federate sends the image to the Multi-core/GPU Federate;
Multi-core/GPU Federate receives the image and displays it on the screen for a subjective integrity check;
Multi-core/GPU performs the processing of OpenCL kernel;
Multi-core/GPU shows the resulting processed image;
Multi-core/GPU sends a response to the Sender Federate;
Sender Federate receives the response and displays it on the screen.
The results presented in this section refer first to an analysis of the data transfer approaches presented in Subsection 5.3, and then to results from the experiments discussed in Section 5, specifically the scenarios presented in Subsection 5.2. Subsection 6.1 presents the data exchange results, and the following subsections present the overall results for the different experiment scenarios.
Data exchange analysis
Table 4 presents the time and throughput needed to transmit an image from the Sender Federate to the other Federates via an Ethernet LAN. The Lena image used in the experiments is shown in Fig. 9; it has a resolution of 512×512 in RGB (a matrix of 786,432 unsigned char elements, or 768 KB).
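As a consistency check on these figures:

$$ 512 \times 512 \ \text{pixels} \times 3 \ \text{channels} \times 1 \ \text{byte} = 786{,}432 \ \text{bytes} = 768 \ \text{KB}. $$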
The Lena image used in the experiments
Table 4 Transfer times of Lena image with 786,432 elements
In the first experiment, pixels are sent one by one, one per HLA message. A multi-pixel approach is used in experiment 2, where 15 pixels are sent in 15 HLA attributes, one attribute per pixel. In experiment 3 the same approach is used, but now with 100 pixels. In experiments 4 and 5, groups of 100 and 300 pixels, respectively, are sent in arrays of the same size. In these last two experiments the time is much lower because HLA always tries to send the complete array in a single message. This experiment was important to evaluate the impact of the different ways HLA can organize data into messages.
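The contrast between the two strategies can be sketched as follows; rti_update_attribute is a hypothetical placeholder for the RTI attribute-update call, not the actual CERTI/HLA API.

```c
/* Hedged illustration of the transfer strategies compared in Table 4. */
#include <stddef.h>

extern void rti_update_attribute(int attribute_handle,
                                 const void *value, size_t bytes); /* placeholder */

/* Experiment 1: one pixel per HLA message (one attribute, one byte). */
void send_one_by_one(const unsigned char *pixels, size_t count)
{
    for (size_t i = 0; i < count; i++)
        rti_update_attribute(/*attribute*/ 0, &pixels[i], 1);
}

/* Experiments 4-5: a block of pixels packed into a single array-typed
 * attribute, so the RTI sends it in one message. */
void send_as_array(const unsigned char *pixels, size_t count, size_t block)
{
    for (size_t off = 0; off < count; off += block) {
        size_t n = (count - off < block) ? count - off : block;
        rti_update_attribute(/*attribute*/ 0, pixels + off, n);
    }
}
```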
It is important to note that this throughput average is based only on the image data sent and received (the payload), not including the traffic of control messages sent by the CERTI RTI implementation. This gives an idea of the time necessary to transfer data via the Virtual Bus and makes its capacity for sending an image from one Federate to another more evident.
The first line of the table contains the values from the one-by-one experiment; lines 2 and 3 refer to the multi-pixel experiments, where there is one attribute for each element to be sent. Finally, lines 4 and 5 are the results for multi-pixel in one attribute, which sends multiple pixels in a single HLA attribute. The last column, "speedup", presents the overall speedup of each throughput result in comparison with the one-by-one approach.
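Since the payload is the same 786,432-element image in every experiment, the speedup column can equivalently be read as a throughput ratio or as an inverse ratio of transfer times:

$$ \text{speedup}_i = \frac{\text{throughput}_i}{\text{throughput}_{\text{one-by-one}}} = \frac{t_{\text{one-by-one}}}{t_i}. $$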
Comparing lines 3 and 4, the same number of pixels per message was sent, but with different transmission approaches. In this case there was a reduction in time and an increase in throughput when more pixels were transmitted per message.
Increasing the number of attributes from one to 15 pixels per message (experiments 1 and 2 in Table 4) brought a speedup of 9.5 times. When transferring 100 pixels per message (experiment 3), the speedup was 16.5 times, a gradual increase. The highest speedups were achieved when the multiple data items were encapsulated in a single array attribute, reaching speedups of 20 and 317.7 times (experiments 4 and 5, respectively), a much sharper increase.
This result demonstrates the improvement obtained from the different methodologies addressed in the proposed environment. The speedup is presented by comparing the different configurations with the simplest one, in which only one pixel is transmitted per simulation cycle. This provides data for comparisons in future work, to assist in choosing the most appropriate HLA configuration when using the Virtual Bus. In these experiments, we demonstrated that for applications where large amounts of data must be transferred, the most appropriate approach is to transfer multiple data items in a single HLA array type, as in experiments 4 and 5.
With OpenCL it was possible to implement a component that allows heterogeneous hardware platforms to be integrated into the Virtual Bus. This enables the use of both multi-core CPUs and GPUs. It was also possible to manage the number of work-groups adaptively: this number is calculated dynamically according to the image resolution, the number of cores, and other device-specific features. This calculation made it possible to achieve better results while exploiting more cores per device.
Table 5 presents the time to process the mask operation on the Lena image with the OpenCL kernel in the GPU Federate. For all experiments the processing time was the same, 281 ms, because this time is independent of the transmission strategy. The table shows that data transfer can be the main bottleneck in this scenario. However, the results demonstrate that when transmitting 300 pixels per attribute, transmission drops to 45% of the overall time.
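Taking the 45% figure at face value, the transmission time of the 300-pixels-per-attribute configuration can be back-calculated as approximately:

$$ \frac{t_{\text{trans}}}{t_{\text{trans}} + 281\ \text{ms}} = 0.45 \ \Rightarrow\ t_{\text{trans}} \approx 230\ \text{ms}, \qquad t_{\text{total}} \approx 511\ \text{ms}. $$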
Table 5 Processing time and total time of Lena image with 786,432 elements by GPU Federate
Scenario 1: SoC (ARM+FPGA)
The results presented in the following sections show the interaction among the Federates in the Virtual Bus. They demonstrate how the data exchange occurred and when each Federate took action.
Figures 10 and 11 present the processing activity of the Sender and SoC Federates, respectively. Both charts have the same shape, with an average of 129 μs each, because the SoC returns the MD5 hash result in the next HLA time step after the Sender Federate sends the input data. After receiving the last data to be processed, the SoC is the only Federate transmitting via the Virtual Bus, so it spends only 4 μs to conclude the data transfer to the Sender Federate.
Sender Federate activity during the transmission to the SoC Federate
SoC Federate activity during the communication with the Sender Federate
Scenario 2: Multi-core
Figure 12 presents the communication between the Sender and Multi-core Federates. Here the x-axis represents the number of interactions. The Sender sends each message in 131 μs on average to the Multi-core Federate. After receiving all the data, the Multi-core Federate takes 811 μs to apply the mask operation and begin sending the result to the Sender Federate; this is shown by the peak around interaction 2600. The result is sent back in messages, which take 6 μs on average to reach the Sender Federate.
Transmission activity between Sender and Multi-core Federates
Scenario 3: GPU
Figure 13 presents the activity during the communication between the Sender and GPU Federates. The results are similar to those for the communication between the Sender and Multi-core Federates. The only difference is that the GPU takes 940 μs to apply the filter and return the first message to the Sender Federate. Since the focus of this work is the communication among heterogeneous devices, this code was not optimized for the GPU, which explains the discrepancy.
Transmission activity between Sender and GPU
Scenario 4: Multi-core and SoC (ARM+FPGA)
In order to evaluate three devices communicating via the Virtual Bus, the Sender, SoC (ARM+FPGA) and Multi-core Federates were connected. Figure 14 represents the activity during this communication. The integration of these three devices did not interfere with the results obtained when only two devices were exchanging data. In Fig. 14 the FPGA activity overlaps the Sender activity, as previously described.
Transmission activity between Sender, FPGA and Multicore
Scenario 5: SoC (ARM+FPGA) and GPU
The same experiment was performed with the Multi-core Federate replaced by the GPU Federate, and the result was repeated, as can be seen in Fig. 15. The similarity between this result and scenario 4 is clear, since both use the same OpenCL code and the communication bottleneck remains of the same magnitude.
Transmission activity between Sender, FPGA and GPU
Final considerations
In this work a platform named Virtual Bus for communication between distributed heterogeneous embedded systems was presented. It provides a simple and clear way of exchanging data, without the need to know the involved architectures in detail. The Virtual Bus can also be adapted to many devices, as it is based on the consolidated HLA standard (IEEE 1516).
The experiments demonstrated the communication between different devices using the Virtual Bus. Several devices were integrated into a single execution environment: a PC, a DE1-SoC with an ARM processor and an Altera FPGA, a GPU, and a multi-core processor. Once the Virtual Bus was implemented on the devices, communication and synchronization among them were transparent, and only three functions were necessary for any application to use the bus.
The experiments made it possible to prove the feasibility of the proposed architecture to perform data transfers while preserving the consistency of time and content of messages, as well as providing the necessary infrastructure for parallel processing in each device connected via HLA. To support massively parallel processing of images, an OpenCL Federate was developed to manage multiple compute units in GPUs and multi-core CPUs.
The potential and limitations of our platform became evident. The main potential is the possibility of integrating heterogeneous architectures in a transparent and synchronous fashion. The most important limitation is the transmission overhead: HLA is a centralized approach, which is important for managing synchronization but increases the communication bottleneck. However, we have demonstrated that the transmission overhead can be decreased by using array types in HLA. This makes the Virtual Bus usable by distributed applications that demand synchronization and exploit multiple compute units of heterogeneous architectures, for example multiplayer games, distributed simulation, and distributed hardware-in-the-loop simulation. In future work, the Virtual Bus will be applied in these and other scenarios. Other communication middlewares (e.g., DDS) could also replace HLA and the results compared with our current implementation.
Simpson JJ, Dagli CH. System of systems: Power and paradox. In: 2008 IEEE International Conference on System of Systems Engineering. Washington, DC: IEEE Computer Society;2008. p. 1–5. https://doi.org/10.1109/SYSOSE.2008.4724165.
IEEE standard for modeling and simulation (M&S) high level architecture (HLA)– framework and rules. IEEE Comput Soc. 2010; 2010:1–38. https://doi.org/10.1109/IEEESTD.2010.5553440.
Scrudder R, Saunders R, Björn Möller KLM. IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA) - Object Model Template (OMT) Specification. IEEE Comput Soc. 2010;2010. https://doi.org/10.1109/IEEESTD.2010.5557731.
Brito AV, Negreiros AV, Roth C, Sander O, Becker J. Development and Evaluation of Distributed Simulation of Embedded Systems Using Ptolemy and HLA. In: 2013 IEEE/ACM 17th International Symposium on Distributed Simulation and Real Time Applications. Washington, DC: IEEE Computer Society;2013. p. 189–96. http://dl.acm.org/citation.cfm?id=2570454.2570892.
Oliveira HFA, Araújo JMR, Brito AV, Melcher EUK. An approach for power estimation at electronic system level using distributed simulation. J Integrated Circ Syst. 2016; 11(3):159–67.
Wetter M, Haves P. A modular Building Controls Virtual Test Bed for the integration of heterogeneous systems. In: Proceedings of the 3rd SimBuild Conference. Berkeley: Published by authors;2008. p. 69–76. http://simulationresearch.lbl.gov/wetter/download/SB08-04-2-Wetter.pdf.
Eidson J, Lee EA, Matic S, Seshia SA, Zou J. Distributed real-time software for cyber-physical systems. Proc IEEE (special issue on CPS). 2012; 100(1):45–59.
Jung Y, Park J, Petracca M, Carloni LP. netship: A networked virtual platform for large-scale heterogeneous distributed embedded systems. In: Proceedings of the 50th Annual Design Automation Conference. New York: ACM;2013. p. 1–169:10. https://doi.org/10.1145/2463209.2488943.
Vaubourg J, Chevrier V, Ciarletta L, Camus B. Co-Simulation of IP Network Models in the Cyber-Physical Systems Context, using a DEVS-based Platform. Research report, Inria Nancy - Grand Est (Villers-lès-Nancy, France) ; Université de Lorraine ; CNRS - Nancy ; Loria & Inria Grand Est. Pasadena: ACM; 2016. https://hal.archives-ouvertes.fr/hal-01256907/file/paper.pdf.
Van Tran H, Truong TP, Nguyen KT, Huynh HX, Pottier B. A Federated Approach for Simulations in Cyber-Physical Systems In: Vinh CP, Alagar V, editors. Context-Aware Systems and Applications: 4th International Conference, ICCASA 2015, Vung Tau, Vietnam, November 26-27, 2015, Revised Selected Papers. Cham: Springer;2016. p. 165–76.
Siron P. Design and implementation of a HLA RTI prototype at ONERA. In: 1998 Fall Simulation Interoperability Workshop. Toulouse: Published by authors;1998.
Gervais C, Chaudron JB, Siron P, Leconte R, Saussié D. Real-time distributed aircraft simulation through HLA. In: Distributed Simulation and Real Time Applications (DS-RT), 2012 IEEE/ACM 16th International Symposium on. Washington, DC: IEEE Computer Society;2012. p. 251–4.
Awais MU, Palensky P, Elsheikh A, Widl E, Matthias S. The high level architecture RTI as a master to the functional mock-up interface components. In: Computing, Networking and Communications (ICNC), 2013 International Conference on: 2013. p. 315–20.
Paterson DJ, Hougl ESDP, Sanmiguel JJ. A gateway/middleware hla implementation and the extra services that can be provided to the simulation. In: 2000 Fall Simulation Interoperability Workshop Conference Proceedings, No. 00F-SIW-007. State College: Citeseer;2000.
García-Valls M, Ampuero-Calleja J, Ferreira LL. Integration of Data Distribution Service and Raspberry Pi In: Au MA, Castiglione A, Choo KR, Palmieri F, Li K-C, editors. Green, Pervasive, and Cloud Computing: 12th International Conference, GPC 2017, Cetara, Italy, May 11–14, 2017, Proceedings. Cham: Springer International Publishing;2017. p. 490–504.
García-Valls M, Calva-Urrego C. Improving service time with a multicore aware middleware. In: Proceedings of the Symposium on Applied Computing, SAC '17. New York: ACM;2017. p. 1548–53. https://doi.org/10.1145/3019612.3019741.
Valls MG, Lopez IR, Villar LF. iland: An enhanced middleware for real-time reconfiguration of service oriented distributed real-time systems. IEEE Trans Ind Inf. 2013; 9(1):228–36. https://doi.org/10.1109/TII.2012.2198662.
García-Valls M, Cucinotta T, Lu C. Challenges in real-time virtualization and predictable cloud computing. J Syst Archit Embedded Syst Des. 2014; 60(9):726–40. https://doi.org/10.1016/j.sysarc.2014.07.004.
Park Y, Min D. Development of hla-dds wrapper api for network-controllable distributed simulation. In: 2013 7th International Conference on Application of Information and Communication Technologies. Washington, DC: IEEE Computer Society;2013. p. 1–5. https://doi.org/10.1109/ICAICT.2013.6722799.
Brito AV, Bucher H, Oliveira H, Costa LFS, Sander O, Melcher EUK, Becker J. A distributed simulation platform using hla for complex embedded systems design. In: 2015 IEEE/ACM 19th International Symposium on Distributed Simulation and Real Time Applications (DS-RT). Washington, DC: IEEE Computer Society;2015. p. 195–202.
Macri M, Rango AD, Spataro D, D'Ambrosio D, Spataro W. Efficient lava flows simulations with opencl: A preliminary application for civil defence purposes. In: 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC). Washington, DC: IEEE Computer Society;2015. p. 328–35. https://doi.org/10.1109/3PGCIC.2015.107.
Weber R, Gothandaraman A, Hinde RJ, Peterson GD. Comparing hardware accelerators in scientific applications: A case study. IEEE Trans Parallel Distributed Syst. 2011; 23(1):58–68.
Noulard E, Rousselot JY, Siron P. Certi, an open source rti, why and how. In: Spring Simulation Interoperability Workshop. Palaiseau: Published by authors;2009. p. 23–7.
Nouman A, Anagnostou A, Taylor SJ. Developing a distributed agent-based and des simulation using portico and repast. In: Proceedings of the 2013 IEEE/ACM 17th International Symposium on Distributed Simulation and Real Time Applications. Washington, DC: IEEE Computer Society;2013. p. 97–104.
IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA)– Federate Interface Specification. IEEE Comput Soc. 2010; 2010:1–378. https://doi.org/10.1109/IEEESTD.2010.5557728.
IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA)– Object Model Template (OMT) Specification. IEEE Comput Soc. 2010; 2010:1–110. https://doi.org/10.1109/IEEESTD.2010.5557731.
Liu B, Yao Y, Jiang Z, Yan L, Qu Q, Peng S. HLA-Based Parallel Simulation: A Case Study. In: 2012 ACM/IEEE/SCS 26th Workshop on Principles of Advanced and Distributed Simulation. Washington, DC: IEEE Computer Society: 2012. p. 65–7.
Lasnier G, Cardoso J, Siron P, Pagetti C, Derler P. Distributed Simulation of Heterogeneous and Real-Time Systems. In: 2013 IEEE/ACM 17th International Symposium on Distributed Simulation and Real Time Applications. Washington, DC: IEEE;2013. p. 55–62. http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6690494.
IETF. Rfc 1321 - the md5 message-digest algorithm. Internet Engineering Task Force (IETF). 1992;1992. https://doi.org/10.17487/RFC1321.
Stone JE, Gohara D, Shi G. OpenCL: A parallel programming standard for heterogeneous computing systems. Comput Sci Eng. 2010; 12(3):66–72. https://doi.org/10.1109/MCSE.2010.69.
Cummins C, Petoumenos P, Steuwer M, Leather H. Autotuning OpenCL Workgroup Size for Stencil Patterns, Vol. 8; 2016. 1511.02490.
The authors thank CNPq and CAPES (Brazil) for supporting this research.
Universidade Federal da Paraíba (UFPB), Joao Pessoa, Brazil
Daniel C. Morais, Halamo G. R. Andrade & Alisson V. Brito
Universidade Federal de Campina Grande (UFCG), Campina Grande, Brazil
Thiago W. B. Silva, Antonio M. N. Lima & Elmar U. K. Melcher
For the development of this work, TWS, AB, AML and EM developed the concept and design of the Virtual Bus. TS, DM and HA implemented the Virtual Bus and the experiments. AML, AB and EM also reviewed the text. All authors read and approved the final manuscript.
Correspondence to Alisson V. Brito.
Silva, T.W., Morais, D.C., Andrade, H.G. et al. Environment for integration of distributed heterogeneous computing systems. J Internet Serv Appl 9, 4 (2018). https://doi.org/10.1186/s13174-017-0072-1
Complex Distributed Systems and Systems of Systems
Results for 'Maxim V. Vorobiev'
Increasing η-Representable Degrees. Andrey N. Frolov & Maxim V. Zubkov - 2009 - Mathematical Logic Quarterly 55 (6): 633-636.
In this paper we prove that any Δ30 degree has an increasing η -representation. Therefore, there is an increasing η -representable set without a strong η -representation.
Representation in Philosophy of Mind
Inflation Due to Quantum Potential.Maxim V. Eingorn & Vitaliy D. Rusov - 2015 - Foundations of Physics 45 (8):875-882.details
In the framework of a cosmological model of the Universe filled with a nonrelativistic particle soup, we easily reproduce inflation due to the quantum potential. The lightest particles in the soup serve as a driving force of this simple, natural and promising mechanism. It is explicitly demonstrated that the appropriate choice of their mass and fraction leads to reasonable numbers of e-folds. Thus, the direct introduction of the quantum potential into cosmology of the earliest Universe gives ample opportunities of successful (...) reconsideration of the modern inflationary theory. (shrink)
Quantum Mechanics in Philosophy of Physical Science
The Early Universe in Philosophy of Physical Science
Natalia G. Sukhova & Erki Tammiksaar, Aleksandr Fedorovich Middendorf: K dvukhsotletiyu so dnia rozhdeniya [Alexander Theodor von Middendorff: On the Bicentenary of His Birthday], 2nd edition, revised and expanded, St. Petersburg: Nestor-Istoriya, 2015, 380 pp., price 300 roubles [In Russian]. [REVIEW] Maxim V. Vinarski & Tatiana I. Yusupova - 2017 - History and Philosophy of the Life Sciences 40 (1): 14.
Revisiting the Maxim-Law Dynamic in the Light of Kant's Theory of Action.V. K. Radhakrishnan - 2019 - Kantian Journal 38 (2):45-72.details
A stable classification of practical principles into mutually exclusive types is foundational to Kant's moral theory. Yet, other than a few brief hints on the distinction between maxims and laws, he does not provide any elaborate discussion on the classification and the types of practical principles in his works. This has led Onora O'Neill and Lewis Beck to reinterpret Kant's classification of practical principles in a way that would clarify the conceptual connection between maxims and laws. In this paper I (...) argue that the revised interpretations of O'Neill and Beck stem from a mistaken reading of the fundamental basis of the classification of practical principles. To show this, I first argue that Kant distinguishes between maxims and laws on the bases of validity and reality. I then argue that although a practical principle necessarily has the feature of validity, its reality in actually moving the agents to action sufficiently makes a principle a practical principle. If this is so, I argue that the classification of practical principles must be based on the extent to which they are effective in human agents. Such a classification yields us three exhaustive and mutually exclusive types namely, "maxims that are not potential laws", "maxims that are potential laws" and "laws that are not maxims". (shrink)
Kant: Ethics, Misc in 17th/18th Century Philosophy
Kant: Maxims in 17th/18th Century Philosophy
Moral Principles, Misc in Meta-Ethics
Auditory Mismatch Negativity Response in Institutionalized Children.Irina Ovchinnikova, Marina A. Zhukova, Anna Luchina, Maxim V. Petrov, Marina J. Vasilyeva & Elena L. Grigorenko - 2019 - Frontiers in Human Neuroscience 13.details
Maximal Kripke-Type Semantics for Modal and Superintuitionistic Predicate Logics.D. P. Skvortsov & V. B. Shehtman - 1993 - Annals of Pure and Applied Logic 63 (1):69-101.details
Recent studies in semantics of modal and superintuitionistic predicate logics provided many examples of incompleteness, especially for Kripke semantics. So there is a problem: to find an appropriate possible- world semantics which is equivalent to Kripke semantics at the propositional level and which is strong enough to prove general completeness results. The present paper introduces a new semantics of Kripke metaframes' generalizing some earlier notions. The main innovation is in considering "n"-tuples of individuals as abstract "n"-dimensional vectors', together with some (...) transformations of these vectors. Soundness of the semantics is proved to be equivalent to some non- logical properties of metaframes; and thus we describe the maximal semantics of Kripke- type. (shrink)
Modal and Intensional Logic in Logic and Philosophy of Logic
Semantics for Modal Logic in Logic and Philosophy of Logic
Maximizing Students' Retention Via Spaced Review: Practical Guidance From Computational Models of Memory.Mohammad M. Khajah, Robert V. Lindsey & Michael C. Mozer - 2014 - Topics in Cognitive Science 6 (1):157-169.details
During each school semester, students face an onslaught of material to be learned. Students work hard to achieve initial mastery of the material, but when they move on, the newly learned facts, concepts, and skills degrade in memory. Although both students and educators appreciate that review can help stabilize learning, time constraints result in a trade-off between acquiring new knowledge and preserving old knowledge. To use time efficiently, when should review take place? Experimental studies have shown benefits to long-term retention (...) with spaced study, but little practical advice is available to students and educators about the optimal spacing of study. The dearth of advice is due to the challenge of conducting experimental studies of learning in educational settings, especially where material is introduced in blocks over the time frame of a semester. In this study, we turn to two established models of memory—ACT-R and MCM—to conduct simulation studies exploring the impact of study schedule on long-term retention. Based on the premise of a fixed time each week to review, converging evidence from the two models suggests that an optimal review schedule obtains significant benefits over haphazard (suboptimal) review schedules. Furthermore, we identify two scheduling heuristics that obtain near optimal review performance: (a) review the material from μ-weeks back, and (b) review material whose predicted memory strength is closest to a particular threshold. The former has implications for classroom instruction and the latter for the design of digital tutors. (shrink)
The Works of George Berkeley, Bishop of Cloyne.The Works of George Berkeley, Bishop of Cloyne: Vol. IV. De Motu: The Analyst, Defence of Free-Thinking in Mathematics, Reasons for Not Replying to Walton's Full Answer, Arithmetica, Miscellanea Mathematica, Of Infinites, Letters on Vesuvius, on Petrifactions, on Earthquakes, Description of Cave of Dunmore.The Works of George Berkeley, Bishop of Cloyne: Vol. V. Siris, Letters to Thomas Prior and Dr. Hales, Farther Thoughts on Tar-Water, Varia.The Works of George Berkeley, Bishop of Cloyne: Vol. VI. Passive Obedience, Advice to Tories Who Have Taken the Oaths, Essay Towards Preventing the Ruin of Great Britain, The Querist, Letter on a National Bank, The Irish Patriot, Discourse to Magistrates, Letters on the Jacobite Rebellion, A Word to the Wise, Maxims Concerning Patriotism.William T. Parry - 1953 - Philosophy and Phenomenological Research 14 (2):263-263.details
Berkeley: Philosophy of Science in 17th/18th Century Philosophy
Berkeley: Value Theory in 17th/18th Century Philosophy
Changes in Functional Connectivity Within the Fronto-Temporal Brain Network Induced by Regular and Irregular Russian Verb Production.Maxim Kireev, Natalia Slioussar, Alexander D. Korotkov, Tatiana V. Chernigovskaya & Svyatoslav V. Medvedev - 2015 - Frontiers in Human Neuroscience 9.details
Which Scoring Rule Maximizes Condorcet Efficiency Under Iac?Davide P. Cervone, William V. Gehrlein & William S. Zwicker - 2005 - Theory and Decision 58 (2):145-185.details
Consider an election in which each of the n voters casts a vote consisting of a strict preference ranking of the three candidates A, B, and C. In the limit as n→∞, which scoring rule maximizes, under the assumption of Impartial Anonymous Culture (uniform probability distribution over profiles), the probability that the Condorcet candidate wins the election, given that a Condorcet candidate exists? We produce an analytic solution, which is not the Borda Count. Our result agrees with recent numerical results (...) from two independent studies, and contradicts a published result of Van Newenhizen (Economic Theory 2, 69–83. (1992)). (shrink)
Condorcet in 17th/18th Century Philosophy
Maximally Consistent Sets of Instances of Naive Comprehension.Luca Incurvati & Julien Murzi - 2017 - Mind 126 (502).details
Paul Horwich (1990) once suggested restricting the T-Schema to the maximally consistent set of its instances. But Vann McGee (1992) proved that there are multiple incompatible such sets, none of which, given minimal assumptions, is recursively axiomatizable. The analogous view for set theory---that Naïve Comprehension should be restricted according to consistency maxims---has recently been defended by Laurence Goldstein (2006; 2013). It can be traced back to W.V.O. Quine(1951), who held that Naïve Comprehension embodies the only really intuitive conception of set (...) and should be restricted as little as possible. The view might even have been held by Ernst Zermelo (1908), who,according to Penelope Maddy (1988), subscribed to a 'one step back from disaster' rule of thumb: if a natural principle leads to contra-diction, the principle should be weakened just enough to block the contradiction. We prove a generalization of McGee's Theorem, anduse it to show that the situation for set theory is the same as that for truth: there are multiple incompatible sets of instances of Naïve Comprehension, none of which, given minimal assumptions, is recursively axiomatizable. This shows that the view adumbrated by Goldstein, Quine and perhaps Zermelo is untenable. (shrink)
Liar Paradox in Logic and Philosophy of Logic
Russell's Paradox in Philosophy of Mathematics
The Nature of Sets, Misc in Philosophy of Mathematics
Seismic Imaging and Statistical Analysis of Fault Facies Models.Dmitriy R. Kolyukhin, Vadim V. Lisitsa, Maxim I. Protasov, Dongfang Qu, Galina V. Reshetova, Jan Tveranger, Vladimir A. Tcheverda & Dmitry M. Vishnevsky - 2017 - Interpretation: SEG 5 (4):SP71-SP82.details
Interpretation of seismic responses from subsurface fault zones is hampered by the fact that the geologic structure and property distributions of fault zones can generally not be directly observed. This shortcoming curtails the use of seismic data for characterizing internal structure and properties of fault zones, and it has instead promoted the use of interpretation techniques that tend to simplify actual structural complexity by rendering faults as lines and planes rather than volumes of deformed rock. Facilitating the correlation of rock (...) properties and seismic images of fault zones would enable active use of these images for interpreting fault zones, which in turn would improve our ability to assess the impact of fault zones on subsurface fluid flow. We use a combination of 3D fault zone models, based on empirical data and 2D forward seismic modeling to investigate the link between fault zone properties and seismic response. A comparison of spatial statistics from the geologic models and the seismic images was carried out to study how well seismic images render the modeled geologic features. Our results indicate the feasibility of extracting information about fault zone structure from seismic data by the methods used. (shrink)
Maximizing the Predictive Value of Production Rules.Sholom M. Weiss, Robert S. Galen & Prasad V. Tadepalli - 1990 - Artificial Intelligence 45 (1-2):47-71.details
Diffusion Centrality: A Paradigm to Maximize Spread in Social Networks.Chanhyun Kang, Sarit Kraus, Cristian Molinaro, Francesca Spezzano & V. S. Subrahmanian - 2016 - Artificial Intelligence 239:70-96.details
Rotations and Pattern Formation in Granular Materials Under Loading.Elena Pasternak, Arcady V. Dyskin, Maxim Esin, Ghulam M. Hassan & Cara MacNish - 2015 - Philosophical Magazine 95 (28-30):3122-3145.details
Constrained Maximization.Jordan Howard Sobel - 1991 - Canadian Journal of Philosophy 21 (1):25 - 51.details
This paper is about David Gauthier's concept of constrained maximization. Attending to his most detailed and careful account, I try to say how constrained maximization works, and how it might be changed to work better. In section I, that detailed account is quoted along with amplifying passages. Difficulties of interpretation are explained in section II. An articulation, a spelling out, of Gauthier's account is offered in section III to deal with these difficulties. Next, in section IV, constrained maximization thus articulated (...) is tested on several choice problems and shown to be seriously wanting. It appears that there are prisoners' dilemmas in which constrained maximizers would not cooperate to mutual advantage, but would interact sub-optimally just as straight-maximizers would. 'Coordination problems' are described with which constrained maximizers might, especially if transparent to one another, not be able to cope–problems in which they might not be able to make up their minds to do anything at all. And I prove that there are prisoners' dilemmas that, though possible for real agents and for straight maximizers, are not possible for constrained maximizers, so that agents' internalising dispositions of constrained maximization could not be of help in connection with such possibly impending dilemmas. Taking constrained maximization as it stands, there are many problems for which it does not afford the 'moral solutions' with which Gauthier would have it replace Hobbesian political ones. After displaying these shortcomings of constrained maximization as presently designed, I sketch, in section V, possible revisions that would reduce them, stressing that these revisions would not be cost-free. Whether finishing the job of fixing up and making precise constrained maximization would be worth the considerable trouble it would involve lies beyond the issues taken up in this paper. So, of course, do substantive comparisons of constrained maximization, perfected and made precise, and straight maximization. (shrink)
Game Theory in Philosophy of Action
Prisoner's Dilemma in Philosophy of Action
A Simple Maximality Principle.Joel David Hamkins - 2003 - Journal of Symbolic Logic 68 (2):527-550.details
In this paper, following an idea of Christophe Chalons. I propose a new kind of forcing axiom, the Maximality Principle, which asserts that any sentence varphi holding in some forcing extension $V^P$ and all subsequent extensions $V^{P\ast Q}$ holds already in V. It follows, in fact, that such sentences must also hold in all forcing extensions of V. In modal terms, therefore, the Maximality Principle is expressed by the scheme $(\lozenge \square \varphi) \Rightarrow \square \varphi$ , and is equivalent to (...) the modal theory S5. In this article. I prove that the Maximality Principle is relatively consistent with ZFC. A boldface version of the Maximality Principle, obtained by allowing real parameters to appear in φ, is equiconsistent with the scheme asserting that $V_\delta \prec V$ for an inaccessible cardinal δ, which in turn is equiconsistent with the scheme asserting that ORD is Mahlo. The strongest principle along these lines is $\square MP\!_{\!\!\!\!\!\!_{\!\!_\sim}}$ , which asserts that $MP\!_{\!\!\!\!\!\!_{\!\!_\sim}}$ holds in V and all forcing extensions. From this, it follows that $0^\#$ exists, that $x^\#$ exists for every set x, that projective truth is invariant by forcing, that Woodin cardinals are consistent and much more. Many open questions remain. (shrink)
Logic and Philosophy of Logic, Miscellaneous in Logic and Philosophy of Logic
A Simple Maximality Principle.Joel Hamkins - 2003 - Journal of Symbolic Logic 68 (2):527-550.details
In this paper, following an idea of Christophe Chalons, I propose a new kind of forcing axiom, the Maximality Principle, which asserts that any sentence φ holding in some forcing extension $V\P$ and all subsequent extensions V\P*\Qdot holds already in V. It follows, in fact, that such sentences must also hold in all forcing extensions of V. In modal terms, therefore, the Maximality Principle is expressed by the scheme $\implies\necessaryφ$, and is equivalent to the modal theory S5. In this article, (...) I prove that the Maximality Principle is relatively consistent with \ZFC. A boldface version of the Maximality Principle, obtained by allowing real parameters to appear in φ, is equiconsistent with the scheme asserting that $Vδ\elesub V$ for an inaccessible cardinal δ, which in turn is equiconsistent with the scheme asserting that $\ORD$ is Mahlo. The strongest principle along these lines is $\necessary\MPtilde$, which asserts that $\MPtilde$ holds in V and all forcing extensions. From this, it follows that 0# exists, that x# exists for every set x, that projective truth is invariant by forcing, that Woodin cardinals are consistent and much more. Many open questions remain. (shrink)
Transformation of Person and Society in the Anthropotechnical Turn: Educational Aspect.V. N. Vashkevich & O. V. Dobrodum - 2018 - Anthropological Measurements of Philosophical Research 13:112-123.details
Introduction. Anthropotechnical turn in culture is based on educational practices that characterize a person as a subject and at the same time as an object of educational and corrective influence. Theoretical basis. We use the method of categorical analysis, which allows revealing the main outlook potentials of anthropotechnical turn as an essential transformation of modern socio-culture. Originality. For the first time, we conducted a categorical analysis of the glossary of anthropotechnical turn as dialectic of active and passive in the personal (...) and social modes such as education. Conclusions. The anthropotechnical turn of modern socio-culture means the actualization of the dialectic of active and passive in the process of socialization and formation of a person in a modern society. The world-view potential of the anthropotechnical turn is producing a new maxim and stratagem of person's behaviour through the formation of a new way of self-identification and self-esteem. The modern educational system, given the theory of anthropotechnical rotation, should change the content of timological energies from obedience to self-actualization and self-improvement. A prerequisite for this task is the change in the motivation of the education sector and the improvement of the social status of the teacher as an intellectual and leader of opinion. The analysis of the specificity of the information society and its determinatory impact on the individual provides grounds for identifying modern culture as a culture of lost opportunities. Thus, the main cause of disorientation and ignorance of a person is not the lack of information, but the lack of motivation. Therefore, the fundamental principles of anthropotechnical turn are productive in solving pressing problems of our time. (shrink)
V = L and Intuitive Plausibility in Set Theory. A Case Study.Tatiana Arrigoni - 2011 - Bulletin of Symbolic Logic 17 (3):337-360.details
What counts as an intuitively plausible set theoretic content (notion, axiom or theorem) has been a matter of much debate in contemporary philosophy of mathematics. In this paper I develop a critical appraisal of the issue. I analyze first R. B. Jensen's positions on the epistemic status of the axiom of constructibility. I then formulate and discuss a view of intuitiveness in set theory that assumes it to hinge basically on mathematical success. At the same time, I present accounts of (...) set theoretic axioms and theorems formulated in non-strictly mathematical terms, e.g., by appealing to the iterative concept of set and/or to overall methodological principles, like unify and maximize, and investigate the relation of the latter to success in mathematics. (shrink)
Axiomatic Truth in Philosophy of Mathematics
The Nature of Sets in Philosophy of Mathematics
Universism and Extensions of V.Carolin Antos, Neil Barton & Sy-David Friedman - forthcoming - Review of Symbolic Logic:1-50.details
A central area of current philosophical debate in the foundations of mathematics concerns whether or not there is a single, maximal, universe of set theory. Universists maintain that there is such a universe, while Multiversists argue that there are many universes, no one of which is ontologically privileged. Often model-theoretic constructions that add sets to models are cited as evidence in favour of the latter. This paper informs this debate by developing a way for a Universist to interpret talk that (...) seems to necessitate the addition of sets to V. We argue that, despite the prima facie incoherence of such talk for the Universist, she nonetheless has reason to try and provide interpretation of this discourse. We present a method of interpreting extension-talk (V-logic), and show how it captures satisfaction in `ideal' outer models and relates to impredicative class theories. We provide some reasons to regard the technique as philosophically virtuous, and argue that it opens new doors to philosophical and mathematical discussions for the Universist. (shrink)
New Axioms in Set Theory in Philosophy of Mathematics
Set-Theoretic Constructions in Philosophy of Mathematics
Mohammed Abdellaoui/Editorial Statement 1–2; Mohammed Abdellaoui and Peter P. Wakker/The Likelihood Method for Decision Under Uncertainty 3–76; AAJ Marley and R. Duncan Luce/Independence Properties Vis-à-Vis Several Utility Representations 77–143. [REVIEW] Davide P. Cervone, William V. Gehrlein, William S. Zwicker, Which Scoring Rule Maximizes Condorcet, Marcello Basili, Alain Chateauneuf & Fulvio Fontini - 2005 - Theory and Decision 58: 409-410.
Scoring Rules in Philosophy of Probability
There Exist Exactly Two Maximal Strictly Relevant Extensions of the Relevant Logic R.Kazimierz Swirydowicz - 1999 - Journal of Symbolic Logic 64 (3):1125-1154.details
In [60] N. Belnap presented an 8-element matrix for the relevant logic R with the following property: if in an implication A → B the formulas A and B do not have a common variable then there exists a valuation v such that v(A → B) does not belong to the set of designated elements of this matrix. A 6-element matrix of this kind can be found in: R. Routley, R.K. Meyer, V. Plumwood and R.T. Brady [82]. Below we prove (...) that the logics generated by these two matrices are the only maximal extensions of the relevant logic R which have the relevance property: if A → B is provable in such a logic then A and B have a common propositional variable. (shrink)
Polish Philosophy in European Philosophy
Relevance Logic in Logic and Philosophy of Logic
A Co-Analytic Maximal Set of Orthogonal Measures.Vera Fischer & Asger Törnquist - 2010 - Journal of Symbolic Logic 75 (4):1403-1414.details
We prove that if V = L then there is a $\Pi _{1}^{1}$ maximal orthogonal (i.e., mutually singular) set of measures on Cantor space. This provides a natural counterpoint to the well-known theorem of Preiss and Rataj [16] that no analytic set of measures can be maximal orthogonal.
Model Theory in Logic and Philosophy of Logic
Théologie patristique grecque (suite). IV. Le Ve siècle: Isidore de Péluse, Cyrille d'Alexandrie, Théodoret. V. Du VIIe au VIIIe siècle: Maxime le Confesseur, Damascène. VI. La Bible des Pères. VII. Ouvrages généraux. Théologie des Pères. [REVIEW] B. Sesboüé - 1998 - Recherches de Science Religieuse 86: 221-248.
Hereditarily Structurally Complete Modal Logics.V. V. Rybakov - 1995 - Journal of Symbolic Logic 60 (1):266-288.details
We consider structural completeness in modal logics. The main result is the necessary and sufficient condition for modal logics over K4 to be hereditarily structurally complete: a modal logic λ is hereditarily structurally complete $\operatorname{iff} \lambda$ is not included in any logic from the list of twenty special tabular logics. Hence there are exactly twenty maximal structurally incomplete modal logics above K4 and they are all tabular.
Explaining Maximality Through the Hyperuniverse Programme.Sy-David Friedman & Claudio Ternullo - 2018 - In Carolin Antos, Sy-David Friedman, Radek Honzik & Claudio Ternullo (eds.), The Hyperuniverse Project and Maximality. Birkhäuser. pp. 185-204.details
The iterative concept of set is standardly taken to justify ZFC and some of its extensions. In this paper, we show that the maximal iterative concept also lies behind a class of further maximality principles expressing the maximality of the universe of sets V in height and width. These principles have been heavily investigated by the first author and his collaborators within the Hyperuniverse Programme. The programme is based on two essential tools: the hyperuniverse, consisting of all countable transitive models (...) of ZFC, and V -logic, both of which are also fully discussed in the paper. (shrink)
Maximality Principles in the Hyperuniverse Programme.Sy-David Friedman & Claudio Ternullo - forthcoming - Foundations of Science:1-19.details
In recent years, one of the main thrusts of set-theoretic research has been the investigation of maximality principles for V, the universe of sets. The Hyperuniverse Programme has formulated several maximality principles, which express the maximality of V both in height and width. The paper provides an overview of the principles which have been investigated so far in the programme, as well as of the logical and model-theoretic tools which are needed to formulate them mathematically, and also briefly shows how (...) optimal principles, among those available, may be selected in a justifiable way. (shrink)
Profit and More: Catholic Social Teaching and the Purpose of the Firm. [REVIEW]Andrew V. Abela - 2001 - Journal of Business Ethics 31 (2):107 - 116.details
The empirical findings in Collins and Porras'' study of visionary companies, Built to Last, and the normative claims about the purpose of the business firm in Centesimus Annus are found to be complementary in understanding the purpose of the business firm. A summary of the methodology and findings of Built to Lastand a short overview of Catholic Social Teaching are provided. It is shown that Centesimus Annus'' claim that the purpose of the firm is broader than just profit is consistent (...) with Collins and Porras empirical finding that firms which set a broader objective tend to be more successful than those which pursue only the maximization of profits. It is noted however that a related finding in Collins and Porras, namely that the content of the firm''s objective is not as important as internalizing some objective beyond just profit maximization, can lead to ethical myopia. Two examples are provided of this: the Walt Disney Company and Philip Morris. Centesimus Annus offers a way to expose such myopia, by providing guidance as to what the purpose of the firm is, and therefore as to what kinds of objectives are appropriate to the firm. (shrink)
Business Ethics and Religion in Applied Ethics
Isols and Maximal Intersecting Classes.Jacob C. E. Dekker - 1993 - Mathematical Logic Quarterly 39 (1):67-78.details
In transfinite arithmetic 2n is defined as the cardinality of the family of all subsets of some set v with cardinality n. However, in the arithmetic of recursive equivalence types 2N is defined as the RET of the family of all finite subsets of some set v of nonnegative integers with RET N. Suppose v is a nonempty set. S is a class over v, if S consists of finite subsets of v and has v as its union. Such a (...) class is an intersecting class over v, if every two members of S have a nonempty intersection. An IC over v is called a maximal IC , if it is not properly included in any IC over v. It is known and readily proved that every MIC over a finite set v of cardinality n ≥ 1 has cardinality 2n-1. In order to generalize this result we introduce the notion of an ω-MIC over v. This is an effective analogue ot the notion of an MIC over v such that a class over a finite set v is an ω-MIC iff it is an MIC. We then prove that every ω-MIC over an isolated set v of RET N ≥ 1 has RET 2N-1. This is a generalization, for while there only are χ0 finite sets, there are ϰ isolated sets, where c denotes the cardinality of the continuum, namely all the finite sets and the c immune sets. MSC: 03D50. (shrink)
Areas of Mathematics in Philosophy of Mathematics
Condorcet's Paradox and the Likelihood of its Occurrence: Different Perspectives on Balanced Preferences.William V. Gehrlein - 2002 - Theory and Decision 52 (2):171-199.details
Many studies have considered the probability that a pairwise majority rule (PMR) winner exists for three candidate elections. The absence of a PMR winner indicates an occurrence of Condorcet's Paradox for three candidate elections. This paper summarizes work that has been done in this area with the assumptions of: Impartial Culture, Impartial Anonymous Culture, Maximal Culture, Dual Culture and Uniform Culture. Results are included for the likelihood that there is a strong winner by PMR, a weak winner by PMR, and (...) the probability that a specific candidate is among the winners by PMR. Closed form representations are developed for some of these probabilities for Impartial Anonymous Culture and for Maximal Culture. Consistent results are obtained for all cultures. In particular, very different behaviors are observed for odd and even numbers of voters. The limiting probabilities as the number of voters increases are reached very quickly for odd numbers of voters, and quite slowly for even numbers of voters. The greatest likelihood of observing Condorcet's Paradox typically occurs for small numbers of voters. Results suggest that while examples of Condorcet's Paradox are observed, one should not expect to observe them with great frequency in three candidate elections. (shrink)
Condorcet's Paradox in Social and Political Philosophy
The Discussion on the Principle of Universalizability in Moral Philosophy in the 1970s and 1980s: An Analysis.E. V. Loginov - 2018 - Russian Journal of Philosophical Sciences 10:65-80.details
In this paper, I analyzed the discussion on the principle of universalizability which took place in moral philosophy in 1970–1980s. In short, I see two main problems that attracted more attention than others. The first problem is an opposition of universalizability and generalization. M.G. Singer argued for generalization argument, and R.M. Hare defended universalizability thesis. Hare tried to refute Singer's position, using methods of ordinary language philosophy, and claimed that in ethics generalization is useless and misleading. I have examined Singer's (...) defense and concluded that he was right and Hare was mistaken. Consequently, generalization argument is better in clarification of the relationship between universality and morality than Hare's doctrine of universalizability, and hence the universality of moral principles is not incompatible with the existence of exclusions. The second problem is the substantiation of the application of categorical imperative in the theory of relevant act descriptions and accurate understanding of the difference between maxims and non-maxims. In Generalization in Ethics, Singer drew attention to this theme and philosophers have proposed some suggestions to solve this problem. I describe ideas of H.J. Paton, H. Potter, O. O'Neill and M. Timmons. Paton coined the teleological-law theory. According to Potter, the best criterion for the relevant act descriptions is causal one. O'N eill suggested the inconsistency-of-intention theory. Timmons defended the causal-law theory. My claim is that the teleological-law theory and the causal-law theory fail to solve the relevant act descriptions problem and the causal criterion and the inconsistency-of-intention theory have their limits. From this, I conclude that these approaches cannot be the basis for clarifying the connection between universality and morality, in contrast to Singer's approach, which, therefore, is better than others to clarify the nature of universality in morality. (shrink)
On the Temporal Boundaries of Simple Experiences.Michael V. Antony - 1998 - Twentieth World Congress of Philosophy.details
I have argued elsewhere that our conception of phenomenal consciousness commits us to simple phenomenal experiences that in some sense constitute our complex experiences. In this paper I argue that the temporal boundaries of simple phenomenal experiences cannot be conceived as fuzzy or vague, but must be conceived as instantaneous or maximally sharp. The argument is based on an account of what is involved in conceiving fuzzy temporally boundaries for events generally. If the argument is right, and our conception of (...) phenomenal consciousness is assumed to reflect the facts about consciousness, then since the temporal boundaries of neurophysiological events can be conceived as fuzzy, considerable pressure can be applied to neurophysiological identity theories, as well as to dualist accounts that posit temporal correspondence with neurophysiological events. (shrink)
Temporal Experience, Misc in Philosophy of Mind
Gonzales V. Oregon and Physician-Assisted Suicide: Ethical and Policy Issues.Ken Levy - 2007 - Tulsa Law Review 42:699-729.details
The euthanasia literature typically discusses the difference between "active" and "passive" means of ending a patient's life. Physician-assisted suicide differs from both active and passive forms of euthanasia insofar as the physician does not administer the means of suicide to the patient. Instead, she merely prescribes and dispenses them to the patient and lets the patient "do the rest" – if and when the patient chooses. One supposed advantage of this process is that it maximizes the patient's autonomy with respect (...) to both her decision to die and the dying process itself. Still, despite this supposed advantage, Oregon is the only state to have legalized physician-assisted suicide. After summarizing the most important Supreme Court opinions on euthanasia (namely, Cruzan v. Director, Missouri Dep't of Health; Vacco v. Quill; Washington v. Glucksberg; and Gonzales v. Oregon), this paper argues that while there are no strong ethical reasons against legalizing physician-assisted suicide, there are some very strong policy reasons for keeping it criminal in the other forty-nine states. (shrink)
Assisted Suicide in Applied Ethics
Autonomy in Applied Ethics in Applied Ethics
Euthanasia in Applied Ethics
Suicide in Applied Ethics
The Corporate Objective After eBay V. Newmark.John R. Boatright - 2017 - Business and Society Review 122 (1):51-70.details
The Delaware court's decision in eBay v. Newmark has been viewed by many commentators as a decisive affirmation of shareholder wealth maximization as the only legally permissible objective of a for-profit corporation. The implications of this court case are of particular concern for the emerging field of social enterprise, in which some organizations, such as, in this case, Craigslist, choose to pursue a social benefit mission in the for-profit corporate form. The eBay v. Newmark decision may also threaten companies that (...) seek to be socially responsible by serving constituencies other than shareholders at the expense of some profit. This examination of the court decision concludes that a legal requirement to maximize shareholder value may not preclude a commitment to social responsibility and may even permit the pursuit of a social benefit objective, such as the preservation of the culture developed by Craigslist. In particular, the court's decision in eBay v. Newmark reflects unique features of the case that could have been avoided by Craigslist and by other similar companies. (shrink)
Co-Immune Subspaces and Complementation in V∞.R. Downey - 1984 - Journal of Symbolic Logic 49 (2):528 - 538.details
We examine the multiplicity of complementation amongst subspaces of V ∞ . A subspace V is a complement of a subspace W if V ∩ W = {0} and (V ∪ W) * = V ∞ . A subspace is called fully co-r.e. if it is generated by a co-r.e. subset of a recursive basis of V ∞ . We observe that every r.e. subspace has a fully co-r.e. complement. Theorem. If S is any fully co-r.e. subspace then S has (...) a decidable complement. We give an analysis of other types of complements S may have. For example, if S is fully co-r.e. and nonrecursive, then S has a (nonrecursive) r.e. nowhere simple complement. We impose the condition of immunity upon our subspaces. Theorem. Suppose V is fully co-r.e. Then V is immune iff there exist M 1 , M 2 ∈ L(V ∞ ), with M 1 supermaximal and M 2 k-thin, such that $M_1 \oplus V = M_2 \oplus V = V_\infty$ . Corollary. Suppose V is any r.e. subspace with a fully co-r.e. immune complement W (e.g., V is maximal or V is h-immune). Then there exist an r.e. supermaximal subspace M and a decidable subspace D such that $V \oplus W = M \oplus W = D \oplus W = V_\infty$ . We indicate how one may obtain many further results of this type. Finally we examine a generalization of the concepts of immunity and soundness. A subspace V of V ∞ is nowhere sound if (i) for all Q ∈ L(V ∞ ) if $Q \supset V$ then Q = V ∞ , (ii) V is immune and (iii) every complement of V is immune. We analyse the existence (and ramifications of the existence) of nowhere sound spaces. (shrink)
The Problem of Searching the Meaning of Human Existence: Contemporary Context.V. M. Petrushov & V. M. Shapoval - 2020 - Anthropological Measurements of Philosophical Research 17:55-64.details
Purpose. The purpose of the article is the analysis of the reasons and grounds of the crisis in the sphere of meaning-making, as well as searching answers to the questions about the meaning of human life in the contemporary world, which are maximally relevant in connection with the escalation of global problems, revealing the points of convergence between various theoretical positions, evaluation of their heuristic potential. Theoretical basis of the research is the historical-philosophical, comparative and system approaches, as well as (...) the analysis of philosophical insights in the field of global studies. Originality. Originality lies in the fact that this article is the first attempt to conduct comprehensive analysis in the problem of the sense of the Existence as it is presented in the first quarter of the 21st century and to relate it with the modern social situation that is characterized by a complex range of interconnected and interdependent anthropological problems of our time. Authors emphasize that the main reason in the crisis of meaning is that a man has lost touch with his roots, which is wildlife. He has created an artificial structure, civilization to satisfy his needs and finds no way to the transcendental, which is the true House of his being. Conclusions. A human must refuse from false self-conceit concerning his potential omniscience and omnipotence, cease dictating his own rules to the Existence, determine the boundaries of his freedom and try to clearly realize his place in the objective structure of being. The global situation can change for the better only if a dramatic change in the area of meaning-making happens. The decisive force, which may encourage nudging to the positive changes, can be either the free will of people who have realized the criticality of the situation or external natural and social circumstances that will make people reorganize radically. The proper prioritizing, a deep awareness of universal goals and solidarity between people could be the value basis that will become the foundation to find the meaning and create a more favorable future. (shrink)
The Other Machiavelli. V. D. Vinogradov & D. V. Ivanov - 1996 - Russian Studies in Philosophy 34 (4):36-50.
The term 'Machiavellianism', used to designate a tough politics knowing no ethical barriers, entered firmly into circulation as far back as the sixteenth century. It was the negative reaction to the maxims in The Prince that defined the initial attitude toward Machiavelli's doctrine, and the internal polemic with this initial assessment has spawned an endless stream of literature endeavoring to justify in one way or other the ill-starred secretary of the Florentine Republic. In sheer number of publications, pro-Machiavelli views exceed anti-Machiavelli views by many times. And yet questions remain; the original negative reaction is not eradicated, just as the striving for apologetics is not eradicated.
Niccolo Machiavelli in Medieval and Renaissance Philosophy
Plotinus's Treatise On the Virtues (I.2) and Its Interpretation by Porphyry and Marinus. D. V. Bugai - 2003 - Russian Studies in Philosophy 42 (1):84-95.
As is well known, Plotinus's philosophy served as the starting point for the development of all Neoplatonism. It created the basic schema that set the framework for the thought of all later representatives of this tendency from Porphyry to Damascius. The doctrine of the transcendence of the One, of the three original hypostases, the application of the categories of Plato's Parmenides in the construction of ontology—all this and much else besides became the property of the Neoplatonic schools, which were scattered throughout the Roman empire, and was incorporated partly into the Christian theology, which was then in the process of formation. Naturally, as a result of its wide dissemination and the change in its cultural and social environment, Plotinus's legacy appeared in a different light and took on new forms; through these changes in form we can try to understand the difference in content. For this purpose Plotinus's treatise on the virtues is of special interest. The point is that Porphyry relies precisely on this treatise and at times even literally borrows large fragments from it in setting out the doctrine concerning the virtues in the thirty-second maxim. The same treatise serves as a basis for Marinus in his biography of Proclus. The aim of my lecture is to analyze the content of the treatise and its interpretations.
Plotinus in Ancient Greek and Roman Philosophy
Porphyry in Ancient Greek and Roman Philosophy
The Ambitious Idea of Kant's Corollary. Susan V. H. Castro - 2018 - In Violetta L. Waibel, Margit Ruffing & David Wagner (eds.), Natur Und Freiheit. Akten des XII. Internationalen Kant-Kongresses. De Gruyter. pp. 1779-1786.
Misrepresentations can be innocuous or even useful, but Kant's corollary to the formula of universal law appears to involve a pernicious one: "act as if the maxim of your action were to become by your will a universal law of nature". Humans obviously cannot make their maxims into laws of nature, and it seems preposterous to claim that we are morally required to pretend that we can. Given that Kant was careful to eradicate pernicious misrepresentations from theoretical metaphysics, the imperative to act as if I have this supernatural power has typically been treated as an embarrassment meriting apology. The wording of the corollary may be vindicated, however, by recognizing that "as if" (als ob) is a technical term both in the Critique of Pure Reason and here. It signals a modal shift from the assertoric to the problematic mode of cognition, one that is necessitated by the attempt to incorporate the natural effects of a free will into a universal moral imperative that is philosophically practical. In this paper I sketch how the modal shift makes sense of the corollary as a subjectively necessary, philosophically practical idealization of the extension of human freedom into nature, one that accurately represents a necessary parameter of moral conduct: moral ambition.
Kant: Epistemology, Misc in 17th/18th Century Philosophy
Kant: Formula of Universal Law in 17th/18th Century Philosophy
Decision-Making in the Critically Ill Neonate: Cultural Background V Individual Life Experiences. C. Hammerman, E. Kornbluth, O. Lavie, P. Zadka, Y. Aboulafia & A. I. Eidelman - 1997 - Journal of Medical Ethics 23 (3):164-169.
OBJECTIVES: In treating critically ill neonates, situations occasionally arise in which aggressive medical treatment prolongs the inevitable death rather than prolonging life. Decisions as to limitation of neonatal medical intervention remain controversial and the primary responsibility of the generally unprepared family. This research was designed to study response patterns of expectant mothers towards treatment of critically ill and/or malformed infants. DESIGN/SETTING: Attitudes were studied via comprehensive questionnaires divided into three sections: (1) Sociodemographic data and prior personal experience with perinatal problems; (2) Theoretical philosophical principles used in making medical ethical decisions; and (3) Hypothetical case scenarios with choices of treatment options. SUBJECTS AND RESULTS: Six hundred and fifty pregnant women were studied. Maternal birthplace (p = 0.005) and level of religious observance (p = 0.02) were strongly associated with the desire for maximally aggressive medical intervention in the hypothetical case scenario. Specific personal experiences such as infertility problems, previous children with serious mental or physical problems were not correlated with the selection of different treatment choices. Of the theoretical principles studied, only the desire to preserve life at all costs was significantly associated with the choice for maximal medical treatment (p = 0.003). CONCLUSIONS: Maternal ethnocultural background and philosophical principles more profoundly influenced medical ethical decision-making than did specific personal life experiences.
Regular Embeddings of the Stationary Tower and Woodin's $\Sigma^2_2$ Maximality Theorem. Richard Ketchersid, Paul B. Larson & Jindřich Zapletal - 2010 - Journal of Symbolic Logic 75 (2):711-727.
We present Woodin's proof that if there exists a measurable Woodin cardinal δ, then there is a forcing extension satisfying all $\Sigma_{2}^{2}$ sentences ϕ such that CH + ϕ holds in a forcing extension of V by a partial order in $V_\delta$. We also use some of the techniques from this proof to show that if there exists a stationary limit of stationary limits of Woodin cardinals, then in a homogeneous forcing extension there is an elementary embedding $j: V \to M$ with critical point $\omega_{1}^{V}$ such that M is countably closed in the forcing extension.
Axioms of Set Theory in Philosophy of Mathematics
R&D Cooperation in Emerging Industries, Asymmetric Innovative Capabilities and Rationale for Technology Parks. Vivekananda Mukherjee & Shyama V. Ramani - 2011 - Theory and Decision 71 (3):373-394.
Starting from the premise that firms are distinct in terms of their capacity to create innovations, this article explores the rationale for R&D cooperation and the choice between alliances that involve information sharing, cost sharing or both. Defining innovative capability as the probability of creating an innovation, it examines firm strategy in a duopoly market, where firms have to decide whether or not to cooperate to acquire a fixed cost R&D infrastructure that would endow each firm with a firm-specific innovative capability. Furthermore, since emerging industries are often characterized by high technological uncertainty and diverse firm focus that makes the exploitation of spillovers difficult, this article focuses on a zero spillover context. It demonstrates that asymmetry has an impact on alliance choice and social welfare, as a function of ex-post market competition and fixed costs of R&D. With significant asymmetry no alliance may be formed, while with similar firms the cost sharing alliance is dominant. Finally, it ascertains the settings under which the equilibrium outcome is distinct from that maximizing social welfare, thereby highlighting some conditions under which public investment in a technology park can be justified.
Nanotechnology in Applied Ethics
Dostoevsky and Mendeleev: An Antispiritist Dialogue. I. L. Volgin & V. L. Rabinovich - 1972 - Russian Studies in Philosophy 11 (2):170-194.
The sources of the real conflict between science and various kinds of undertakings in occultism pretending to be science date back to the end of the 16th and beginning of the 17th centuries, when modern scientific method was barely taking shape. The natural philosophy of the 16th century, which put forth natural magic in place of divine magic, was the ideological antipode of the new science in process of formation. The pantheistic reinterpretation of monotheistic Christian creationism is a characteristic feature of constructs in natural philosophy with their striving toward maximal substantialization of the nonmaterial. Thus, for example, the rationalist mystic of natural philosophy, Girolamo Cardano, returns in his work to the medieval notion of the world soul, but understands by it an entirely material substance which he identifies with light and heat.
Russian Philosophy in European Philosophy
Marginalia to Kant's Essay "On the Alleged Right to Lie". Vadim V. Vasilyev - 2009 - Russian Studies in Philosophy 48 (3):82-89.
The author argues that despite the universal and formal character of the foundation of Kant's ethics, its principles appear to be compatible with recognition of the possibility of lying for philanthropic reasons. To have an effect in the world, our obligations must necessarily have empirical components that point to specific conditions under which the maxim will have a moral worth. One such condition may be the requirement that the probable consequences of the action will not clash with other obligations.
Kant's Works in Practical Philosophy, Misc in 17th/18th Century Philosophy
A Class of $\Sigma_{3}^{0}$ Modular Lattices Embeddable as Principal Filters in $\mathcal{L}^{\ast}(V_{\infty})$. Rumen Dimitrov - 2008 - Archive for Mathematical Logic 47 (2):111-132.
Let $I_0$ be a computable basis of the fully effective vector space $V_\infty$ over the computable field F. Let I be a quasimaximal subset of $I_0$ that is the intersection of n maximal subsets of the same 1-degree up to *. We prove that the principal filter ${\mathcal{L}^{\ast}(V,\uparrow)}$ of V = cl(I) is isomorphic to the lattice ${\mathcal{L}(n, \overline{F})}$ of subspaces of an n-dimensional space over ${\overline{F}}$, a ${\Sigma_{3}^{0}}$ extension of F. As a corollary of this and the main result of Dimitrov (Math Log 43:415–424, 2004) we prove that any finite product of the lattices ${(\mathcal{L}(n_{i}, \overline{F}_{i}))_{i=1}^{k}}$ is isomorphic to a principal filter of ${\mathcal{L}^{\ast}(V_{\infty})}$. We thus answer Question 5.3 "What are the principal filters of ${\mathcal{L}^{\ast}(V_{\infty})}$?" posed by Downey and Remmel (Computable algebras and closure systems: coding properties, handbook of recursive mathematics, vol 2, pp 977–1039, Stud Log Found Math, vol 139, North-Holland, Amsterdam, 1998) for spaces that are closures of quasimaximal sets.
The Ontology of Intentionality I: The Dependence Ontological Account of Order: Mediate and Immediate Moments and Pieces of Dependent and Independent Objects. Gilbert T. Null - 2007 - Husserl Studies 23 (1):33-69.
This is the first of three essays which use Edmund Husserl's dependence ontology to formulate a non-Diodorean and non-Kantian temporal semantics for two-valued, first-order predicate modal languages suitable for expressing ontologies of experience (like physics and cognitive science). This essay's primary desideratum is to formulate an adequate dependence-ontological account of order. To do so it uses primitive (proper) part and (weak) foundation relations to formulate seven axioms and 28 definitions as a basis for Husserl's dependence ontological theory of relating moments. The essay distinguishes between dependence v. independence, pieces v. moments, mediate v. immediate pieces and moments, maximal v. non-maximal pieces, founded v. unfounded qualities, integrative v. disintegrative dependence, and defines the concepts of the completion of an object, the adumbrational equivalence relation of objects, moments of unity which unify objects, and relating moments which relate objects. The eight theorems [CUT90]-[CUT97] show that relating moments of unity provide an adequate account of order in terms of primitive (proper) part and (weak) foundation relations.
Husserl: Intentionality, Misc in Continental Philosophy
Husserl: Ontology in Continental Philosophy
The Search for New Axioms in the Hyperuniverse Programme. Claudio Ternullo & Sy-David Friedman - 2016 - In Andrea Sereni & Francesca Boccuni (eds.), Objectivity, Realism, and Proof. FilMat Studies in the Philosophy of Mathematics. Berlin: Springer. pp. 165-188.
The Hyperuniverse Programme, introduced in Arrigoni and Friedman (2013), fosters the search for new set-theoretic axioms. In this paper, we present the procedure envisaged by the programme to find new axioms and the conceptual framework behind it. The procedure comes in several steps. Intrinsically motivated axioms are those statements which are suggested by the standard concept of set, i.e. the 'maximal iterative concept', and the programme identifies higher-order statements motivated by the maximal iterative concept. The satisfaction of these statements (H-axioms) in countable transitive models, the collection of which constitutes the 'hyperuniverse' (H), has remarkable 1st-order consequences, some of which we review in section 5.
Independence Results in Set Theory in Philosophy of Mathematics
Indeterminacy in Mathematics in Philosophy of Mathematics
The Iterative Conception of Set in Philosophy of Mathematics
Simple and Hyperhypersimple Vector Spaces. Allen Retzlaff - 1978 - Journal of Symbolic Logic 43 (2):260-269.
Let $V_\infty$ be a fixed, fully effective, infinite dimensional vector space. Let $\mathscr{L}(V_\infty)$ be the lattice consisting of the recursively enumerable (r.e.) subspaces of $V_\infty$, under the operations of intersection and weak sum (see § 1 for precise definitions). In this article we examine the algebraic properties of $\mathscr{L}(V_\infty)$. Early research on recursively enumerable algebraic structures was done by Rabin [14], Frolich and Shepherdson [5], Dekker [3], Hamilton [7], and Guhl [6]. Our results are based upon the more recent work concerning vector spaces of Metakides and Nerode [12], Crossley and Nerode [2], Remmel [15], [16], and Kalantari [8]. In the main theorem below, we extend a result of Lachlan from the lattice E of r.e. sets to $\mathscr{L}(V_\infty)$. We define hyperhypersimple vector spaces, discuss some of their properties and show if $A, B \in \mathscr{L}(V_\infty)$, and A is a hyperhypersimple subspace of B then there is a recursive space C such that A + C = B. It will be proven that if $V \in \mathscr{L}(V_\infty)$ and the lattice of superspaces of V is a complemented modular lattice then V is hyperhypersimple. The final section contains a summary of related results concerning maximality and simplicity.
Symposium on "Cognition and Rationality: Part I" Minimal Rationality. [REVIEW] Isaac Levi - 2006 - Mind and Society 5 (2):199-211.
An argument is advanced to show why E-admissibility should be preferred over maximality as a principle of rational choice where rationality is understood as minimal rationality. Consideration is given to the distinction between second best and second worst options in three-way choice that is ignored according to maximality. It is shown why the behavior exhibited in addressing the problems posed by Allais (Econometrica 21:503–546, 1952) and by Ellsberg (Q Econ 75:643–669, 1961) does not violate the independence postulate according to minimal rationality.
IZA Journal of Labor Economics
The role of employment interruptions and part-time work for the rise in wage inequality
Martin Biewen,
Bernd Fitzenberger (ORCID: orcid.org/0000-0001-6739-3871) &
Jakob de Lazzer
IZA Journal of Labor Economics volume 7, Article number: 10 (2018)
The incidence of employment interruptions and temporary part-time work has grown strongly among full-time workers, yet little is known about the impact on wage inequality. This is the first study showing that such episodes play a substantial role for the rise in inequality of full-time wages, considering the case of Germany. While there are also strong composition effects of education for males and of age and experience for females, changes in industry and occupation explain fairly little of the inequality rise. Extending the analysis to total employment reveals substantial negative selection into part-time work.
JEL-Classification: J31, J20, J60
The incidence of employment interruptions and temporary part-time work has grown strongly, raising concerns about the stability of employment and low wages among part-time workers (OECD 2010). Less known is that the incidence of previous part-time work and employment interruptions has also grown among full-time workers. However, employment interruptions and part-time experience may be associated with lower future wages due to lower human capital accumulation, negative signalling effects, or lower labor force attachment (Arulampalam 2001; Blundell et al. 2016; Heckman 1981; Paul 2016). The literature on the rise in wage inequality among full-time workers has so far not taken this into account. This is the first study to examine the impact of changes in recent labor market histories on the rise in wage inequality. Re-examining the development of the wage distribution in Germany, we use administrative panel data to investigate the role of composition changes, in particular changes in recent labor market experience, for the rise in wage inequality. As the key novel aspects, our study accounts explicitly for previous part-time work and employment interruptions among full-time employees, and we extend the analysis to total employment.
Motivating our analysis, Fig. 1 shows for the years 1985 and 2010 the number of days in part-time employment and nonemployment, respectively, during the previous 5 years by decile of the wage distribution. For full-timers, both the incidence of previous part-time and nonemployment experience increased considerably between 1985 and 2010. Put differently, full-timers have over time become more likely to have experienced part-time work or employment interruptions in the past. The prevalence of previous part-time experience and nonemployment increases in the lower part of the full-time wage distribution, implying that among workers with particularly low wages the share of workers who have recently worked part-time or who have experienced nonemployment in the recent past has grown over time. Figure 1 shows that nonemployment experience is more important than part-time experience, with male (female) full-timers in 2010 in the lowest decile having experienced an average of more than 600 (500) days of nonemployment and more than 40 (110) days of part-time employment during the time period 2005 to 2009. The evidence for part-time employment is consistent with studies showing that part-time work has increased strongly and that transitions between part-time and full-time work and employment interruptions have become more frequent (Tisch and Tophoven 2012; Potrafke 2012; Tamm et al. 2017). Below, we will also show evidence that the dispersion of nonemployment and part-time experience among full-timers has grown over time. There was a secular increase of unemployment in Germany from the 1980s until the mid 2000s. Afterwards, unemployment fell almost continuously until 2010 (SVR 2014). Our analysis will focus on long-term changes abstracting from cyclical variation in nonemployment and part-time experience among full-timers.
Part-time employment and nonemployment during the previous 5 years in different parts of the full-time wage distribution. Average number of days in part-time employment/nonemployment during the years 1980–1984 and 2005–2009, respectively, by decile of the full-time wage distribution in the years 1985 and 2010
There is ample evidence suggesting that episodes of part-time work or nonemployment have negative long-term impacts on the career path and therefore on future wagesFootnote 3. First, human capital accumulation slows down or there is even depreciation when workers interrupt their career or temporarily downgrade to part-time employment. Second, employment interruptions or part-time experience may lead to scarring effects leading to lower wage offers and poorer career possibilities upon re-employment. A third point is that lagged employment outcomes are indicators of permanent characteristics which drive employment and wages. Accordingly, periods of nonemployment or part-time employment in the past may indicate a lower labor force attachment—in addition to being a negative productivity signal. Lagged employment outcomes are unobserved in the cross-sectional data sets, typically used in the literature on wage inequality for most countries (see, e.g., Acemoglu and Autor (2011) and the literature discussion in Section 2).
For the aforementioned reasons, our paper investigates the role of employment interruptions and part-time employment in a statistical decomposition of the rise in wage inequality among full-time working employees. In light of the evidence in Fig. 1, the growing importance of part-time employment and nonemployment is likely to play an important role for the increase of lower tail wage inequality. The literature review in Section 2 reveals that the studies on the rise of wage inequality have so far not taken into account the rise in previous nonemployment and part-time employment among full-timers. Furthermore, little attention has been paid to gender differences in the rise in wage inequality. For instance, negative long-term career effects of transition from full-time to part-time work for women after childbirth have been studied by Connolly and Gregory (2009) and Paul (2016). Fitzenberger et al. (2016) document that women in Germany, who had been working full-time before birth, take fairly long spells of maternity leave after child birth and often then return to part-time work.
Our paper makes the following contributions. First, in our decomposition of the rise in wage inequality among full-timers, we add the previous labor market history involving part-time and nonemployment experience. This plays an important role in explaining the rise in wage inequality both among males and females. At the same time, adding previous labor market history accounts for unobserved heterogeneity in employment decisions. As such, our analysis is of interest for all countries experiencing similar labor market trends, because ours is the first study investigating the role of the rise in nonemployment and part-time employment in explaining the rise in wage inequality among full-time employees. As a related second contribution, we quantify the contribution of further observable characteristics to the increase in male and female wage inequality in Germany over the recent decades. Such a parallel analysis for Germany does not exist. Compositional changes in observable characteristics explain over 50 percent of the increase in male wage inequality and up to 80 percent of the increase in female wage inequality. To the best of our knowledge, the extremely strong role of composition effects for the rise of female wage inequality has not been recognized so far. Third, we estimate composition effects with regard to the counterfactual distribution of full-time wages for all employees, which confirms the robustness of our main findings. Furthermore, this shows that part-timers (especially female part-timers) represent a negative selection with respect to observable characteristics. Including part-timers into the analysis also speaks to the role of increasingly heterogeneous labor market histories for the rise in German wage inequality.
The remainder of this paper is structured as follows. Section 2 reviews the literature on the rise of wage inequality. Section 3 discusses the data used and presents first descriptive evidence. Section 4 discusses our findings. Section 5 concludes. The Appendix provides more details and supplementary empirical results. A supplementary appendix with further details is available as Additional file 1.
Wage inequality has been increasing in many industrialized countries between the 1980s and the 2000s (see the comprehensive survey in Acemoglu and Autor (2011), or the literature discussion in Lemieux (2006), Autor et al. (2008), Dustmann et al. (2009)). Many studies focus on the USA, but the same mechanisms operating in the USA are also at play in other industrialized countries, including Germany. Skill-biased technical change (SBTC) is the most prominent explanation for the rise in wage inequality, predicting rising wage inequality across the entire wage distribution. This is consistent with the evidence for the USA for the 1980s but not for the 1990s, as in the 1990s inequality stopped to grow at the bottom of the wage distribution (Autor et al. 2008). Acemoglu and Autor (2011) take the latter as evidence for the task-based approach (see Autor et al. 2003) implying a falling demand for occupations with medium skill requirements (which are relatively more routine intensive and thus easier to substitute by technology) relative to both occupations with high or with low skill requirements, resulting in polarization of employment across occupations. The evidence regarding a polarization of wages across the wage distribution in the USA seems to be limited to the 1990s, and a polarization of wages is not an unambiguous prediction of the task-based approach (Autor 2013). Some studies for the USA emphasize the role of changing labor market institutions such as de-unionization and falling real minimum wages (see also the discussion in Autor et al. (2003)). DiNardo et al. (1996) show that the fall in unionization levels explains an important part of the rise in wage inequality during the 1980s.
In related work, Lemieux (2006) shows that changes in the composition of the workforce regarding education and experience explain a major part of the rise in wage inequality in the USA. Also, Autor et al. (2008) find strong composition effects, especially for females, but focus on other explanations for the rise in wage inequality. Composition effects also affect residual wage inequality, i.e., the wage differences among employees with the same observable characteristics (DiNardo et al. 1996; Lemieux 2006). Altogether, this evidence motivated us to scrutinize the role of composition effects for the rise of wage inequality in Germany.
Wage inequality has been rising in West Germany [henceforth Germany] since the 1980s (Dustmann et al. 2009)Footnote 4. Until the mid 1990s, the increase in wage dispersion among full-timers was restricted to the top of the wage distribution, whereas wage inequality increased from mid 1990s onwards until 2004 across the entire distribution (Dustmann et al. 2009). The evidence until the mid 1990s is consistent with skill-biased technological change and the hypothesis that labor market institutions such as unions and minimum wages prevented a rise in wage inequality at the bottom of the wage distribution before the mid 1990s, which resulted in rising unemployment among the low-skilled (Fitzenberger 1999). Dustmann et al. (2009) show that changes in the composition of workers regarding age and education and the sizeable decline in coverage by collective bargaining both explain major components of the rise in wage inequality. At the same time, the study provides evidence for a polarization of employment as found previously for the USA (see also Antonczyk et al. 2018).
Antonczyk et al. (2009) and Antonczyk et al. (2010) find a strong increase of wage inequality between 1999/2001 and 2006. Changes in task assignments cannot explain this rise (Antonczyk et al. 2009). Accounting for coverage by collective bargaining, firm-level characteristics, and personal characteristics, Antonczyk et al. (2010) show that the decline in coverage by collective bargaining does not explain the rise in wage inequality in the lower part of the wage distribution, when firm-level characteristics are held constant. Most important are changes in the quantile regression coefficients of firm-level variables (firm size, region, industry), which reflect a growing heterogeneity in firm-level wage policies. The two studies differ regarding the contribution of changes in personal characteristics. Biewen and Seckler (2017) find that changes in union coverage and personal characteristics are most important for the rise in wage inequality between 1995 and 2010. Card et al. (2013) estimate person and firm fixed effects in wages. The study finds a growing heterogeneity of these fixed effects over time and increasing sorting of workers with high personal fixed effects into firms with high firm fixed effects. Both effects contribute strongly to the rise in wage inequality. Felbermayr et al. (2014) find that the decline in coverage by collective bargaining is the most important explanation for the rise in wage inequality, while there is no important role for international trade. Our short survey of the literature shows that the literature has not yet reached a consensus on the mechanisms behind the rise in wage inequality in Germany until 2010Footnote 5.
None of the aforementioned studies investigates to what extent the rise in interruptions of full-time work is driving the increase in wage inequality, although there is ample evidence of a negative effect of previous nonemployment and part-time experience on wages in full-time employment. Several mechanisms may be at work. First, human capital accumulation slows down or there is even depreciation when workers stop working full-time (Beblo and Wolf 2002; Manning and Petrongolo 2008; Edin and Gustavsson 2008; Paul 2016). Employment interruptions due to displacement have been shown to negatively affect wages (Burda and Mertens 2001; Schmieder et al. 2010; Edler et al. 2015). After maternity leave, females often return to part-time employment, but may return to full-time work later on (Fitzenberger et al. 2016; Paul 2016). When a transition from nonemployment or part-time work back into full-time work involves a job change (no recall), this also implies a loss of job-specific human capital. Second, employment interruptions or part-time experience may lead to scarring effects, i.e., employers (rightly or wrongly) interpret previous non-full-time employment as a signal of low productivity or low labor force attachment leading to lower wage offers and poorer career possibilities upon re-employment (Ruhm 1991; Arulampalam 2001; Gregory and Jukes 2001). A third potential mechanism, similar to the second, is that lagged employment outcomes are indicators of permanent characteristics which drive employment and wages (Heckman 1981). Accordingly, periods of nonemployment or part-time employment in the past may indicate a lower labor force attachment—in addition to being a negative productivity signal.
The literature on wage effects of temporary part-time work focuses primarily on women and maternal part-time. For females in the UK, Connolly and Gregory (2009) and Blundell et al. (2016) demonstrate that part-time employment in the past results in lower earnings trajectories, even when returning to full-time work. Connolly and Gregory (2009) also show that this holds for part-time episodes at the same employer. They point out that part-time work is often related to downgrading to less skilled tasks that persists if the individual later returns to full-time work. Controlling for selection on unobservables, Paul (2016) finds for Germany a substantial negative impact of part-time work and nonemployment episodes on future earnings of females in full-time work, with the effect being even stronger for nonemployment. While there is no detailed analysis of part-time effects among males available, the mechanisms of human capital depreciation and lack of further training which underly the wage effects of part-time work for female workers are likely to affect male workers in a similar way.
Data and descriptive evidence
Our analysis uses SIAB data involving a 2% sample of all dependent employees who are subject to social security contributions, i.e., excluding the self-employed and civil servants. We study the period 1985 to 2010. Even though SIAB data are available for earlier years, we do not include them in our analysis because the rise of wage inequality across the entire distribution is only observed after the 1980s (Dustmann et al. 2009; Fitzenberger 2012). Since we may observe several employment spells of various lengths per individual in a given year, all observations are weighted with the share of days worked in a job in the respective year. The sampling weights calculated in this way reflect the relative importance of each wage observation.
We account for an individual's labor market history using four measures. The first two involve the number of days spent in full-time and in part-time employment during the last 5 years. The residual category is the number of days spent in nonemployment during the last 5 years, which may be times of unemployment, education, or any other type of nonemployment. In addition, we use two dummy variables, indicating whether a person had a full-time or a part-time spell at any point during the previous year. This information captures individual short-term employment dynamics. Wages are daily wages in Euros deflated by the CPI to 1990. Since we use administrative data on employment spells, the measures are very precise. Because the SIAB data do not involve hours worked, we follow the literature on wage inequality for Germany and use daily wages, representing an earnings measure. Our sample also includes individuals with part-time employment, but the wage data for part-timers are much more confounded by differences in hours of work than for full-timers. Below, we also estimate the counterfactual distribution of full-time wages for total employment also including part-timers.
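A minimal sketch of how such history measures could be constructed from spell-level data is given below. The column names (person_id, state, begin, end), the toy spells, and the helper function history_measures are illustrative assumptions rather than the exact SIAB variable definitions; days not covered by any recorded spell are simply counted as nonemployment here.

```python
import pandas as pd

# Hypothetical spell-level data: one row per employment spell, with the
# employment state ("full-time" or "part-time") and spell start/end dates.
spells = pd.DataFrame({
    "person_id": [1, 1, 1, 2],
    "state":     ["full-time", "part-time", "full-time", "full-time"],
    "begin":     pd.to_datetime(["2003-01-01", "2006-01-01", "2007-07-01", "2000-05-01"]),
    "end":       pd.to_datetime(["2005-12-31", "2007-06-30", "2010-12-31", "2010-12-31"]),
})

def history_measures(spells, person_id, ref_date, window_years=5):
    """Days in full-time/part-time work during the `window_years` years before
    `ref_date`, plus dummies for any full-time/part-time spell in the last year.
    Days not covered by any spell are counted as nonemployment."""
    ref = pd.Timestamp(ref_date)
    win_start = ref - pd.DateOffset(years=window_years)
    yr_start = ref - pd.DateOffset(years=1)
    s = spells[spells["person_id"] == person_id]
    out = {"days_ft_5y": 0, "days_pt_5y": 0, "any_ft_1y": 0, "any_pt_1y": 0}
    for _, row in s.iterrows():
        # Overlap of the spell with the 5-year window before the reference date
        lo = max(row["begin"], win_start)
        hi = min(row["end"], ref - pd.Timedelta(days=1))
        overlap = max((hi - lo).days + 1, 0)
        out["days_ft_5y" if row["state"] == "full-time" else "days_pt_5y"] += overlap
        # Dummy: any spell of this state during the previous year
        if min(row["end"], ref - pd.Timedelta(days=1)) >= max(row["begin"], yr_start):
            out["any_ft_1y" if row["state"] == "full-time" else "any_pt_1y"] = 1
    window_days = (ref - win_start).days
    out["days_nonemp_5y"] = window_days - out["days_ft_5y"] - out["days_pt_5y"]
    return out

print(history_measures(spells, person_id=1, ref_date="2010-01-01"))
```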
All wages above the contribution threshold are top-coded in the SIAB. The censoring threshold lies above the 85% wage quantile in every year. Therefore, we compare the 85/15, the 85/50, and the 50/15 quantile gaps in the wage distribution. In those cases where we cannot restrict our analysis to values below the 85% quantile (in particular when analyzing developments in wage residuals), we impute wages above the threshold according to individual characteristics. Details of the imputation procedure can be found in the Appendix, "Imputation of wages above censoring threshold" section. Unless noted otherwise, we restrict our analysis to individuals aged 20 to 60 years, in order to focus on the working age population.
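For illustration, the sketch below computes weighted 85/15, 85/50, and 50/15 log-wage gaps. The weighting by days worked and the simulated data are assumptions made for the example, not the paper's exact weighting scheme; the point is only how quantile gaps are read off a weighted distribution.

```python
import numpy as np

def weighted_quantile(values, q, weights):
    """Weighted quantile: smallest value whose cumulative weight share reaches q."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cum, q)]

def quantile_gaps(log_wages, weights):
    """85/15, 85/50 and 50/15 gaps in log wages (differences of log quantiles)."""
    q15, q50, q85 = (weighted_quantile(log_wages, q, weights) for q in (0.15, 0.50, 0.85))
    return {"85/15": q85 - q15, "85/50": q85 - q50, "50/15": q50 - q15}

# Toy example with weights proportional to days worked in the year
rng = np.random.default_rng(0)
lw = rng.normal(4.3, 0.4, size=1000)     # hypothetical log daily wages
w = rng.integers(30, 366, size=1000)     # hypothetical days-worked weights
print(quantile_gaps(lw, w))
```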
Table 1 lists the covariates used and Table 2 provides descriptive statistics for two sample years. We distinguish three education levels: University degree (including Universities of Applied Sciences), degree from Upper secondary school and/or Vocational Training, No/Other degree. We use 14 aggregated industries (German Industry Classification [WZ] 1993) and 63 aggregated occupations (2-digit level of the KldB ["Klassifikation der Berufe"] 1988). For interactions between industry and occupation, we aggregate occupations to the 1-digit level in order to avoid problems with empty cells in our logit regressions. The education variable is cleaned, and interrupted measurements are imputed for consistency based on Fitzenberger et al. (2006).
Table 1 Variable classification
Table 2 Descriptives of full-time samples
Wage inequality
Figure 2 shows the development of log wage quantiles (cumulative changes) from 1985 onwards. Our primary measures of wage inequality are the gaps between the 85th, 50th, and 15th percentiles of log wages. Until about 1991, the different wage quantiles move upward and largely in parallel. After 1991, median wages of male full-timers stagnate (recall that we analyze real wages). For female full-timers there is a continuous but decelerating rise until 2003 and a subsequent decline until 2008. For both genders, we observe a widening of the wage distribution beginning just at the time when median wages start stagnating. Wages at the 85th percentile continue to increase, while wages at the 15th percentile decline. For males, this decline is moderate until the early 2000s but accelerates afterwards. By 2010, male wages at the 15th percentile even lie below their 1985 level. For females, we observe different developments of the three quantiles already in the late 1980s. However, inequality only increases in a more substantial way in the late 1990s, several years later than for males. After 1998 female median wages stagnate, while the 85th percentile rises and the 15th percentile declines rapidly. The corresponding trends in inequality as measured by the 85/50 and 50/15 gaps are depicted by the solid lines in Figs. 10, 11, 12, and 13.
Wage quantiles relative to levels of 1985
Labor market histories
Part-time work in Germany has grown substantially over the last decades (Fig. 3). While this may reflect secular trends in labor market participation, part of the increase is linked to political reforms promoting part-time work. Over our observation period, several changes in legislation focus on part-time work. In 1985, the German government enacted a law (Beschäftigungsförderungsgesetz) which granted part-timers the same level of job protection as full-timers. This law increased the acceptance of part-time work on the side of trade unions and in the general population. In 2001, a law followed which made it easier for employees to enter voluntary part-time work (Teilzeit- und Befristungsgesetz). These changes in legislation had the effect of formally easing the transition between full-time, part-time, and nonemployment. We observe that not only the yearly stock of part-time employees increased for both genders, but that the frequency of temporary part-time episodes for individuals currently working full-time increased as well (Fig. 4). Parallel to the rise of part-time work, two changes in legislation between 1985 and 1998 (Beschäftigungsförderungsgesetz, Arbeitsförderungs-Reformgesetz) facilitated fixed-term contracts and temporary agency work.
Part-time share over time
Days spent in part-time work during the last 5 years (full-timers aged 25–60 years)
Both the intensive and the extensive margin of labor market histories matter for current wages (Burda and Mertens 2001; Arulampalam 2001; Beblo and Wolf 2002; Manning and Petrongolo 2008; Edin and Gustavsson 2008; Schmieder et al. 2010; Edler et al. 2015; Paul 2016; Blundell et al. 2016). Returns to labor market experience are not uniform across jobs and type of work. Not only is experience in part-time work valued lower than that in full-time work, but part-time and nonemployment episodes slow down career progression and wage growth (see literature review in Section 2).
Figure 4 shows increasing average lengths and also increasing variability of previous part-time episodes for males and females, both above and below the median of the respective wage distribution. The mean and variance of the number of days spent in part-time work during the last 5 years increases over time for those individuals who are in full-time jobs at the time of observation. Male full-timers experience a noticeable increase in past part-time episodes, although the total amount of the time previously spent in part-time is lower than for females. While the increase in prevalence of previous part-time for males is only slightly higher below than above the median, the increase in variability of previous part-time experience is considerably stronger below the median. This means that previous part-time episodes are increasingly concentrated on low-wage full-timers, which may lead to rising lower-tail wage inequality. For female full-timers, we observe an increase in the length and variability of previous part-time work both above and below the median, and the overall levels are considerably higher than for males. Incidentally, the part-time experience of full-time females above the median of the distribution shows stronger cyclical variation compared to females below the median, whose part-time experience follows more of a secular upward trend. Note that the labor supply of females is known to be more elastic than that of men and that the part-time experience of females is often related to career interruptions after child birth (Blundell et al. 2016).
There are two further issues concerning temporary part-time episodes to be discussed. The first involves working time accounts, which provided a buffer against the negative labor demand shock in Germany during the Great Recession 2008/2009 (Burda and Hunt 2011). The SIAB data do not record a variation in hours worked over a year in case of continuous employment at the same employer. In the case of working time accounts, the part-time/full-time classification is based upon agreed (contractual) hours of work. Furthermore, the data involve daily wages defined as total earnings over an employment spell (typically 1 year, when the worker is employed by the same employer for one calendar year) divided by the length of the employment spell in days. Specifically, working time accounts allow the actual hours of work to vary over a year, but there is no variation in monthly earnings. Furthermore, on average over the employment spell the actual hours of work should correspond to the contractual ones. Note further that working time accounts did not play an important role before 2008 and that they show a strong cyclical variation. By contrast, our results below suggest an earlier timing of the distributional effects of previous part-time episodes, reflecting a long-term continuous trend which dominates the cyclical variation. The second issue concerns whether the part-time episodes in our data are with the same employer or with different employers. A recent study shows that a major part of the cyclical variation in part-time employment in the UK and the USA is accounted for by changes in transition rates between part-time and full-time work at the same employer (Borowczyk-Martins and Lalé 2018). We would expect wage penalties associated with previous part-time episodes to be larger if they occur across employers. Our data show that the vast majority (about 75–80%) of transitions from part-time to full-time employment involve a change of employers (see Fig. 5). We observe only a minor cyclical variation in the division of the part-time to full-time transitions within and between employers, which is unlikely to be of importance in explaining the continuous long-term rise in wage inequality (see decomposition results in Section 4).
Part-time to full-time transitions
We now turn to the descriptive discussion of previous nonemployment episodes. Just like previous part-time experience, nonemployment has a sizeable negative impact on wages. Nonemployment may include all alternative activities such as education or child care, or it may be due to involuntary displacement, unemployment, or voluntary absence from the labor market. Such events may lead to human capital obsolescence, with the possible exception of educational spells, and therefore to a decline in wages (Burda and Mertens 2001; Schmieder et al. 2010; Edler et al. 2015). Figure 6 shows the average length and variability of time spent in nonemployment over the past 5 years. Both above and below the median, males and females exhibit increasing previous nonemployment experience. Cross-sectional variability only increases below the median, and there is a cyclical variation, which is stronger below the median. To investigate whether educational spells are driving our results, we reduce the sample to individuals aged 30 years or above, for whom we assume that educational spells play a negligible role among nonemployment episodes (see Fig. 17 in the Appendix). Above the median wage, the upward trend now disappears. By contrast, males and females below the median wage still exhibit increasing previous nonemployment experience together with increasing cross-sectional variability. Thus, previous nonemployment episodes are increasingly concentrated on individuals in the lower part of the wage distribution, a trend which may have a strong impact on lower-tail wage inequality. The differences between Figs. 6 and 17 reveal that educational spells are an important part of previous nonemployment episodes among younger workers.
Days spent in non-employment during the last 5 years (full-timers aged 25-60 years)
Irrespective of the type of previous nonemployment episodes, their incidence is higher than that of previous part-time employment, especially for males but also for females. Moreover, the associated wage losses are likely to be larger than those from part-time episodes (except for educational spells among younger workers, which may, however, be captured in our subsequent analysis by a higher education level). We therefore expect previous nonemployment episodes to have sizeable negative effects on wages, most likely raising lower-tail wage inequality.
Education, experience, industry, and occupation
In addition to the changes in recent labor market history, there have been strong changes in the distribution of education, work experience, and industry structure. Figure 7 shows the percentage of workers in each education category. The share of workers without an educational degree has declined since the 1980s. This holds in particular for female workers, among whom the percentage of unskilled workers decreases from 32% in 1985 to 18% in 2010. We also observe an increase in the share of university graduates. Again, this is most pronounced for females, as the initial percentage of female university graduates is very small in 1985 but catches up to the male share by 2010. For the medium-skilled, i.e., workers with an upper secondary degree or a vocational degree, we observe a hump-shaped development. The share of medium-skilled increases during the late 1980s and the 1990s reaches its peak in the late 1990s and declines in the 2000s, giving way to a rising share of university graduates.
Share of education groups
The corresponding trends for the distribution of worker's potential experience are shown in Fig. 8. Between 1985 and 2010, the percentage of highly experienced workers with 27 or more years of potential experience increases, reflecting the aging of the population. The share of workers with medium levels of potential experience (between 14 and 26 years) follows a hump-shaped trend. The percentage of older workers with 40 or more years of experience did not undergo major changes in our sample, even though the overall population aged considerably. The only major gender difference in potential work experience concerns the share of workers with low experience. Among males, this share is never higher than 20% and it drops to 10% in the late 1990s. Starting at 30% in 1985, the initial share of young female workers is very high but converges to the low level for males in the late 1990s. After the catching-up process among females, the experience composition by gender has become very similar by 2010. Note that our experience measure is potential work experience which mainly reflects both workers' age and educational periods. In this way, we more clearly separate long-term trends in experience (population aging and educational periods) from the factors we intend to capture in our recent labor market histories (recent part-time and nonemployment episodes).
Share of experience groups
Figure 9 shows the development of industry shares for eight aggregated sectors. We observe some sectors with an almost constant share since the 1980s (i.e., transportation and trade), while others experiences strong changes. For males, the largest changes are observed for the construction industry, the manufacturing sector for consumer goods, and the banking and insurance sector. The first two experience a massive decline, while the latter more than doubles its share between 1985 and 2010. Transport and communication as well as health and social services show small increases, whereas the manufacturing sectors for vehicles and for machinery shrink slightly. The initial sector composition differs strongly by gender, but the dynamics of the different sectors are quite similar. In particular, manufacturing declines strongly, while banking and health services grows. The construction sector, which plays no important role for females, does not change in any substantial way.
Share of industry sectors
Shifts between occupations are smaller than those between industries. Table 3 reports the five most frequent occupations in 1985 and 2010. Figure SA2 in the Additional file 1 shows a continuous shift in the aggregate from manufacturing to service sector occupations. At the same time, there are fairly small changes in the distribution of the 63 two-digit occupations. Among males, four out of five occupations are present in the top 5 in both years and their shares are similar. For females, three out of five occupations remain in the top 5 in both years. Furthermore, the correlation coefficient between employment shares for the 63 two-digit occupations in 1985 and 2010 is.91 for males and.96 for females.
Table 3 Most frequent occupations 1985 and 2010 (top 5)
Empirical analysis
Estimation of counterfactuals
First, we analyze the impact of composition changes on wage inequality among full-timers, accounting for the selection into full-time work based on observed worker characteristics. For the counterfactual analysis keeping characteristics constant over time, we use the reweighting methodology introduced by DiNardo et al. (1996). Then, we repeat the analysis for wage inequality for total employment in a similar way. We now provide a brief overview of what we do (full formal details can be found in the Appendix).
We start by estimating the distribution of full-time wages which would result if the distribution of worker characteristics had not changed over time while the conditional wage distribution given worker characteristics changed over time as observed. For example, we hold fixed the composition with respect to education and estimate as counterfactual by how much inequality would have risen if workers' education had not changed. We sequentially add groups of covariates in order to determine the incremental effect of a particular set of covariates. For example, in the situation in which we already leave education constant, we also fix workers' potential work experience in order to determine the incremental contribution of experience to rising wage inequality. Our sequential conditioning scheme is such that we move from exogenous and predetermined characteristics towards characteristics that are the likely consequence of endogenous decisions of the individual. Altogether, we start with workers' education and sequentially add the factors potential work experience, recent labor market history as well as workers' occupation and industry (see Table 1). As in Lemieux (2006), we also carry out our decomposition for residual wage inequality, i.e., wage inequality within groups of workers with identical observed characteristics.
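The reweighting step can be illustrated with a small sketch. The function below estimates DiNardo-Fortin-Lemieux-style weights from a logistic regression for the probability of belonging to the base year given characteristics; the data frame, the column names (year, education), and the use of scikit-learn are assumptions made for this example and do not reproduce the paper's actual implementation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def dfl_weights(df, covariates, year_col="year", base_year=1985, target_year=2010):
    """DFL-style reweighting: weights that give the target-year sample the
    base-year distribution of `covariates` (a minimal sketch)."""
    sub = df[df[year_col].isin([base_year, target_year])].copy()
    X = pd.get_dummies(sub[covariates], drop_first=True).astype(float).values
    y = (sub[year_col] == base_year).astype(int).values      # 1 = base year
    p_base = y.mean()                                        # unconditional base-year share
    logit = LogisticRegression(max_iter=1000).fit(X, y)
    p_x = logit.predict_proba(X)[:, 1]                       # P(base year | x)
    # Reweighting factor for target-year observations:
    # psi(x) = [P(base|x) / P(target|x)] * [P(target) / P(base)]
    psi = (p_x / (1.0 - p_x)) * ((1.0 - p_base) / p_base)
    sub["dfl_weight"] = np.where(sub[year_col] == target_year, psi, 1.0)
    return sub

# Usage sketch (hypothetical data frame and column names): reweight the 2010
# wage observations to the 1985 distribution of education, then compare
# counterfactual and observed quantile gaps to read off the composition effect.
# reweighted = dfl_weights(wages_df, ["education"])
```

Multiplying these weights with the sampling weights and recomputing the quantile gaps gives the counterfactual inequality series under a fixed base-year composition; adding further covariate groups to the logit yields the sequential specifications described above.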
In the second part of our analysis, we take the distribution of full-time wage earners, but reweight their characteristics to replicate the distribution of observed characteristics for total employment, i.e., including part-time workers. This estimates the counterfactual wage distribution that would result if all employed workers worked full-time. Contrasting this distribution with the wage distribution among full-timers allows one to gauge to which extent part-timers represent a positive or negative selection compared to full-timers. We repeat our sequential analysis of adding different groups of covariates for the reweighted sample representing total employment.
Wage inequality among full-timers
Starting with male full-timers, we first analyze the effect of educational upgrading on male wage inequality. Figure 10 (left panel) shows the evolution of the quantile gaps in male wages between 1985 and 2010 under the assumption that the 1985 distribution of education is held fixed over time. It turns out that fixing education considerably reduces the increase in inequality, i.e., the observed educational upgrading contributes strongly to the observed rise in wage inequality. Table 4 shows that a share of 17.1% of the increase in overall inequality (as measured by the 85/15 quantile ratio) and 37.5% of the increase in the upper half of the distribution (as measured by the 85/50 quantile ratio) can be explained by changes in education, while these changes did not contribute to rising inequality at the bottom of the distribution (as measured by the 50/15 quantile ratio, see lower part of Fig. 10). This means that the compositional effects of the educational expansion mostly affected the upper part but not the lower part of the male wage distribution. The contribution of changes in education to residual wage inequality amounts to a moderate 7.1%, i.e., there is no strong shift towards groups of workers with above-average levels of within-group inequality. As a next step, Fig. 11 extends the reweighting procedure to include changes in work experience (in addition to changes in education). Based on the evidence shown in Fig. 11 (left panel) and Table 4 (columns 4 to 6), the incremental contribution of work experience is very small.
Fig. 10 Inequality development base year 1985, specification E (Education)
Table 4 Reweighted inequality increase, 1985–2010, males, compositions for base year 1985
Inequality development base year 1985, specification EE (Education, Experience)
In Fig. 12, we add changes in recent labor market histories to our reweighting procedure. This considerably changes the results, affecting in particular the bottom of the distribution. The incremental contribution amounts to 16.9% for overall wage inequality and to 19.2% for lower tail inequality (column 10 of Table 4). This means that increasingly discontinuous labor market histories are important to explain the rise in lower-tail wage inequality. There was also a sizeable contribution to changes in residual wage inequality (10.7%), suggesting that changes in recent labor market histories were associated with shifts towards worker groups with higher levels of within-group inequality. Finally, Fig. 13 adds changes in occupations and industry structure. This also contributes to the general rise in male wage inequality (13.0% for overall wage inequality, 22.2% to inequality at the bottom, and 13.6% to residual wage inequality, see columns 11 to 13 of Table 4).
Fig. 12 Inequality development base year 1985, specification EEH (education, experience, labor market history)
Fig. 13 Inequality development base year 1985, specification EEHOI (education, experience, labor market history, occupation, industry sector)
Note that adding the last stage (Occ+Ind) yields the cumulative effect of changing the joint distribution of all our covariates (Ed+Ex+Hist+Occ+Ind). As shown in column 12 of Table 4, compositional changes explain more than half of the increase in male wage inequality over the period 1985 to 2010 (53.0% of overall wage inequality, 54.6%/51.5% at the top/bottom, 34.0% of residual wage inequality). Our results confirm the importance of compositional effects for male wage inequality changes also found by Dustmann et al. (2009) and Felbermayr et al. (2014), but establish the contribution of the additional factor of changes in recent labor market histories. Note that the explanatory power of compositional changes is particularly high between 1985 and 1995 (holding characteristics fixed, there is no increase in inequality at all, see left panel of Fig. 13), but became somewhat weaker from 1996 onwards. Similar to the findings for the USA (Lemieux 2006), the total contribution of the compositional changes considered lies above 50%, which is quite high.
Next, we turn to results for female full-timers, see the right hand panels of Figs. 10, 11, 12, and 13. In contrast to the findings for males, Fig. 10 shows that the increase in female wage inequality remains largely unchanged when holding constant the 1985 distribution of education (Footnote 14). Adding changes in potential work experience (which are mainly driven by age) yields a strong incremental contribution (35.1% to overall inequality, 30.4% to upper half inequality, and 38.2% to lower half inequality; see Fig. 11 and columns 5 to 7 of Table 5). This also differs from the findings for males. In light of Fig. 8, the findings for females reflect that younger cohorts are much smaller than older ones (e.g., the share of females with 0 to 13 years of potential work experience dropped from 30% in 1985 to 10% in 2010). This leads to a rising share of older female full-timers with different wage levels and higher within-group inequality.
Table 5 Reweighted inequality increase, 1985–2010, females, compositions for base year 1985
Adding recent labor market histories again explains a considerable incremental share (18.6% for overall inequality and 17.1% for residual inequality, columns 8 to 10 in Table 5; see also Fig. 12 to the right). Thus, the impact of part-time episodes and labor market interruptions is similar for males and females. Finally, changes in occupations and industry structure have negligible effects on rising female wage inequality (columns 11 to 13 of Table 5) (Footnote 15).
Altogether, we find that compositional changes can account for an even larger share of the rise in female wage inequality than for males. Column 12 of Table 5 shows that 63.6% of the increase in overall inequality, 61.9% of the increase in the upper part, and 64.8% of the increase in the lower part of the distribution can be accounted for by the compositional changes considered. The graph to the right in Fig. 13 implies that, during the period 1991 to 2001, female wage inequality would have fallen even in the absence of compositional changes. An important component has worked through composition changes regarding residual wages, i.e., shifts between groups of workers with different levels of within-group inequality (51.6% of the changes are accounted for by composition changes; see column 12 of Table 5).
In the Additional file 1, we carry out a robustness check of our analysis that reverses the roles of the base and target years (1985 vs. 2010). With few exceptions, all our findings are robust to the choice of the base year (see Additional file 1 for details).
Counterfactual full-time wages for total employment
This section extends the analysis of full-time wages to total employment, including those working part-time in the year of observation. As explained above, part-time wages are not comparable because we lack detailed information on hours worked in our data set. However, we do observe the personal characteristics of part-timers, which our analysis of composition effects includes. We consider the distribution of characteristics in the combined sample of full-timers and part-timers ("total employment"), thereby estimating inequality of full-time wages among individuals who are currently employed.
This exercise is informative in four ways. First, comparing the actual wage distribution of full-timers with the counterfactual wage distribution that assumes that both part-timers and full-timers are paid full-time wages will be informative about whether part-timers are a positive or negative selection with respect to their characteristics (compared to full-timers). Second, examining the development of the counterfactual wage distribution for the total employment sample over time may serve as an estimate for composition effects on wage inequality in a wider population of part-timers and full-timers, which we cannot examine directly given that comparable wage information for part-timers is missing. This also serves as a robustness check of our above findings for full-timers. Third, the effect of selection into full-time work versus part-time work is mostly accounted for by controlling for the recent employment history. Fourth, we net out selective transitions between part-time and full-time work in our analysis of composition effects, in the sense that we measure composition effects net of such (often temporary) movements between part-time and full-time work.
We start with the estimated counterfactual trends in inequality of full-time wages in a sample sharing the composition of total employment (for a more detailed explanation, see "Composition reweighting for total employment" section in the Appendix). Figure 14 shows the trend in wage inequality if full-timers shared the education composition of total employment. For male workers, the difference between the two distributions is very small in 1985. After 2000, we see a slight decline in the 15% quantile of the total employment distribution relative to the full-time distribution, which leads to slightly wider 50/15 and 85/15 quantile gaps. This suggests a negative selection into part-time work for men. However, the part-time share of male workers already starts rising in the early 1990s, while we only observe negative effects of selection into part-time work a decade later. This implies that there is no negative selection associated with the initial expansion of part-time work. For females, the initial full-time and total employment distributions are also quite similar, especially regarding the upper tail. However, the quantiles diverge quickly and by 1990, we see lower wages for the total employment sample over the entire distribution. This means that characteristics that were prevalent among part-time workers involve lower wage returns than those of full-timers, implying negative selection into part-time work. After 1990, the distributional gap between the full-time and the counterfactual total employment sample was almost constant, implying a stable positive selection into full-time work.
Fig. 14 Counterfactual wage distribution, if full-timers had total employment characteristics
The differences between the observed female full-time wage distribution in 2010 and the wage distribution for the counterfactual total employment sample are also shown in the right panel of Fig. 15 (bold vs. dashed line). Considering the total employment sample shifts the distribution to the left, i.e., the full-time sample is positively selected. The dotted lines in Fig. 15 represent the wage distributions that result when one further changes the characteristics to those of the total employment sample in 1985. This results in a considerable compression of the wage distribution. Again, changes in characteristics contribute to rising inequality.
Fig. 15 Comparison of observed, counterfactual total employment and reweighted counterfactual total employment sample (specification EEHOI)
Table 6 shows the contribution of composition changes for trends in full-time wage inequality in the total employment sample, which are broadly similar to the results for the male full-timers in Table 4. In particular, there is an important role for composition changes regarding education (especially at the top) and labor market histories (especially at the bottom). Including part-timers into the analysis makes the contribution of labor market histories to rising inequality much more pronounced at the bottom of the distribution (38.6% in Table 6 vs. 22.2% in Table 4). There is only a limited role for changes in occupations and industries. These conclusions are robust to reversing the base year (see Table 6 and SA4 in the Additional file 1). Table 7 shows the results for the female total employment sample. Despite the much higher part-time share in the female sample, the results in Table 7 are again quite similar to Table 5 for female full-timers. There is a role for shifts in experience and recent labor market histories, while changes in education and occupations and industries do not contribute much. In Table SA5 in Additional file 1, we reverse the base year. As in the female full-time sample, this boosts the role of education changes (particularly at the top of the distribution) and leads to a number of smaller unsystematic changes that point to complex interaction effects of compositional and wage structure effects. Similar to males, extending the analysis to total employment for females also amplifies the importance of recent labor market histories for increasing wage inequality at the bottom of the distribution (20.5% vs. 28.6% in Table 5 vs. Table 7, and 11.9% vs. 22.7% in Additional file 1: Table SA7 vs. Additional file 1: Table SA5, column 10).
Table 6 Change in inequality measures since 1985, for males, composition adjusted to total employment in base year 1985
Table 7 Change in inequality measures since 1985, for females, composition adjusted to total employment in base year 1985
Conclusions
This paper scrutinizes the contribution of composition changes in education, potential work experience, labor market history, industry structure, and occupation to the rise in inequality of full-time wages in Germany from 1985 until 2010. We account explicitly for the growing importance of employment interruptions and temporary part-time episodes among full-time workers, and we estimate the counterfactual full-time wage distribution for all employees.
Our results imply that changes in observables account for a large part of the rise in wage inequality and that the growing importance of employment interruptions and temporary part-time episodes plays an important role for wage inequality among full-time workers. For males, we find that (depending on the base year) 43 to 53% of the rise in wage inequality between 1985 and 2010 can be explained by compositional effects of the observables considered. For females, the importance of composition changes is even higher, ranging between 64 and 78%. To the best of our knowledge, the literature has so far not recognized the strong role of composition effects for the rise of female wage inequality. For males, composition changes in education (especially in the upper part of the distribution) and changes in recent labor market histories (especially in the lower part of the distribution) are the main contributors to compositional change. The compositional effects of male labor market histories on rising overall wage inequality range from 14 to 17%, and from 18 to 23% in the lower half of the distribution. For females, we find strong composition effects of changes in age/experience and in recent labor market histories. The latter contribute 17 to 18% to the overall increase in female wage inequality over the period 1985 to 2010. When including part-timers, the role of recent labor market histories becomes even stronger.
Our results are policy relevant because both changes in the age/education structure and in labor market histories are observable and to a certain extent predictable. One might wonder to what extent the contribution of increasing heterogeneity in recent labor market histories is causal or to what extent these variables are just proxies for unobservables. While we are not in a position to distinguish between these two explanations, accounting for labor market history in fact also proxies for remaining unobservable differences in employment outcomes. Furthermore, the observed trends in previous part-time work and employment interruptions are very strong, which suggests that observed changes in labor market history are mostly the intended consequences of policy changes (Section 3.2). It is well documented in the literature that part-time work and previous nonemployment have effects on subsequent wages, even when controlling for unobservables (Arulampalam 2001; Schmieder et al. 2010; Paul 2016; Blundell et al. 2016). We therefore expect trends in these variables to directly change the wage distribution in subsequent periods. We also note that our base and target years (1985 and 2010) represent similar points in the business cycle, so that our analysis is unlikely to be strongly affected by cyclical differences. Finally, we note that even if the observed changes in previous part-time work and nonemployment involve increased sorting in terms of unobservables across individuals with differing labor market histories, this would still make histories very relevant factors as their direct effect would be enhanced by changes in unobservables.
Fig. 16 Days spent in part-time work during the last 5 years (full-timers aged 30-60 years)
Fig. 17 Days spent in nonemployment during the last 5 years (full-timers aged 30-60 years)
Imputation of wages above censoring threshold
Our imputation procedure for wages above the contribution threshold of social security is loosely based on Gartner (2005). We assume that log wages are approximately normally distributed and estimate expected wages above the censoring point with a Tobit model. We regress log wages on education, age, nationality, and individual labor market history, separately for both genders. Results in the literature suggest that this type of imputation leads to a slight upward bias in the variance of wages each year. Importantly for our analysis, however, it does not bias the trend of wage dispersion (Footnote 16). As we want to take into account that the variance of wages is potentially correlated with individual characteristics, we modify the procedure suggested by Gartner (2005) to explicitly model a heteroscedastic variance in the Tobit regression. A simple imputation of log wages from the Tobit model would exhibit too little variation. We therefore adjust imputed wages by a random draw from a truncated normal distribution, using the predicted heteroscedastic variance from the Tobit model. We impute separately for each year and for male and female workers. Imputation by this method raises the mean wage by 0.8% and the standard deviation by 14.6% for males, and by 0.2% and 3.2%, respectively, for females across all years.
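As an illustration of the final imputation step, the following sketch draws imputed log wages above the censoring threshold from a truncated normal distribution with observation-specific mean and standard deviation. Variable names are hypothetical; the predicted moments would come from the heteroscedastic Tobit model described above.

```python
import numpy as np
from scipy.stats import truncnorm

def impute_censored_log_wages(mu, sigma, threshold, seed=None):
    """Draw imputed log wages above the censoring threshold.

    mu, sigma : arrays of predicted means and (heteroscedastic) standard
                deviations of log wages from the Tobit model.
    threshold : log of the social security contribution ceiling.
    """
    rng = np.random.default_rng(seed)
    a = (threshold - mu) / sigma        # standardized lower truncation point
    b = np.full_like(a, np.inf)         # no upper bound
    u = rng.uniform(size=np.shape(mu))
    # Inverse-CDF sampling from the truncated normal, observation by observation
    return truncnorm.ppf(u, a, b, loc=mu, scale=sigma)

# Usage sketch: only censored observations are replaced.
# censored = log_wage >= threshold
# log_wage[censored] = impute_censored_log_wages(mu[censored], sigma[censored], threshold)
```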
Details of counterfactual analysis
Composition reweighting for full-timers
We account for the selection into full-time work based on the observed composition of workers regarding their socio-economic characteristics. Changes in the composition over time reflect selective movements of individuals into and out of full-time work. Our aim is to quantify the effects of such changes in the composition of full-timers on wage inequality. We use the reweighting methodology introduced by DiNardo et al. (1996) to estimate counterfactual wage distributions fixing the composition of a reference group (in our case the population of full-timers in a reference year).
In the first part of our analysis, we analyze the distribution of full-time wages which would result if the distribution of worker characteristics had not changed over time but only the conditional wage structure (i.e., the wage distribution holding characteristics constant). Based on these counterfactual wage distributions, we calculate and compare the development of inequality as measured by the gaps between the 85th, 50th, and 15th wage percentiles and the spread of residual wages. We take the residuals from a Mincer regression of log wages w on a flexible specification of the characteristics listed in Table 1. The dispersion of residual wages represents wage inequality within narrow groups of workers defined by the characteristics given in Table 1. Changes in residual wage inequality may also be the result of changes in the composition of the labor force (Lemieux 2006). This will be the case if there is heteroscedasticity, i.e., the conditional residual variance depends on observed characteristics. In this case, shifts in the distribution of characteristics affect residual wage inequality. For instance, overall residual wage inequality will typically rise if there is a rising share of workers with above-average levels of within-group inequality.
Let tx=b denote the base year, for which the composition of the work force will be held fixed, and tw=o the year for which we intend to estimate a counterfactual wage distribution. We call this year the observation year. Here, we only use observations on full-timers in years tw and tx. The counterfactual wage distribution using the conditional wage structure of year tw=o but the distribution of characteristics x from the base year tx=b is given by
$$\begin{aligned} f(w|t_{w}=o,t_{x}=b) &= \int_{x} f(w|x,t_{w}=o)\,dF(x|t_{x}=b)\\ &= \int_{x} f(w|x,t_{w}=o)\,\rho(t_{x}=b)\,dF(x|t_{x}=o), \end{aligned}$$
where f(w|x,tw=o) is the conditional density of wages given characteristics x in year tw=o and \(\rho (t_{x}=b)=\frac {dF(x|t_{x}=b)}{dF(x|t_{x}=o)}\) is the reweighting factor which translates the density of observed wages into the counterfactual density. Note that as a special case \(f(w|t_{w}=o,t_{x}=o)=\int _{x} f(w|x,t_{w}=o)dF(x|t_{x}=o)\), for which ρ(tx=b)≡1 in Eq. (1). The reweighting factor can be written as the ratio \(\rho (t_{x}=b)=\frac {P(t=b|x)}{P(t=o|x)}\frac {P(t=o)}{P(t=b)}\), where P(t=o) and P(t=b) are the sample proportions of the observation year and the base year when pooling the data for both years.
The proportions P(t=b|x) and P(t=o|x) are estimated by logit regressions of the respective year indicator on flexible specifications of the characteristics shown in Table 1. The logit regressions are based on the sample pooling the base year and the observation year. Using the fitted logit probabilities, we then calculate the individual reweighting factors ρi(tx=b) for observations i. All our estimates use the sample weights si which compensate for the varying length of employment spells. For robustness reasons, we trim the maximum value of individual observation weights to the value of 30, in order to prevent extreme values of the reweighting factor, which may occur as a result of extremely rare combinations of characteristics. We tested a range of trimming thresholds and found that values between 20 and 50 avoid extreme outliers, while at the same time excluding a very small number of observations (details are available upon request).
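A minimal sketch of this estimation step is given below. Column names and the logistic-regression setup are hypothetical; the actual specification is more flexible and, as noted above, incorporates the sample weights si.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweighting_factor(pooled, covariates, base_year, obs_year,
                       year_col="year", trim=30.0):
    """DiNardo-Fortin-Lemieux factor rho_i(t_x = b) for observation-year workers."""
    df = pooled[pooled[year_col].isin([base_year, obs_year])].copy()
    y = (df[year_col] == base_year).astype(int)           # 1 = base year
    X = pd.get_dummies(df[covariates], drop_first=True)   # flexible dummy coding
    p_base = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    share_base = y.mean()                                  # P(t = b) in the pooled sample
    rho = (p_base / (1.0 - p_base)) * ((1.0 - share_base) / share_base)
    rho = np.minimum(rho, trim)                            # trim extreme weights
    return rho[(df[year_col] == obs_year).to_numpy()]      # factors for observation year
```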
The reweighting factor can be incorporated into the estimation of counterfactual quantiles based on the sample wage distribution while fixing the composition of full-timers in the base year. Using the abbreviation ρ=ρ(tx=b), the reweighted (composition adjusted) p% quantile is given by
$$ Q_{p}(w|t_{w}=o,t_{x}=b)=\left\{ \begin{array}{ll} \frac{w_{[j-1]}+w_{[j]}}{2} & \text{if} \: {\sum\nolimits}_{i=1}^{j-1}(s\rho)_{[i]}=\frac{p}{100}{\sum\nolimits}_{i=1}^{n}(s\rho)_{[i]}\\ w_{[j]} & \text{otherwise} \end{array}\right. \:, $$
$$j=min \left(k|\sum\limits_{i=1}^{k}(s\rho)_{[i]}>\frac{p}{100}\sum\limits_{i=1}^{n}(s\rho)_{[i]} \right) \;, $$
w[i] is the ith order statistic of wages and (sρ)[i] is defined accordingly (i.e., the order statistic of the compound individual weights sρ, combining the sample weight s with the reweighting factor ρ).
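A direct implementation of this reweighted quantile, assuming wages w, sample weights s, and reweighting factors rho as NumPy arrays, could look as follows.

```python
import numpy as np

def reweighted_quantile(w, s, rho, p):
    """Composition-adjusted p% quantile using compound weights s * rho."""
    order = np.argsort(w)
    w_sorted = w[order]
    cum = np.cumsum((s * rho)[order])
    target = (p / 100.0) * cum[-1]
    j = np.searchsorted(cum, target, side="right")   # first index with cum > target
    if j > 0 and np.isclose(cum[j - 1], target):     # knife-edge case of the definition
        return 0.5 * (w_sorted[j - 1] + w_sorted[j])
    return w_sorted[min(j, len(w_sorted) - 1)]
```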
We consider the quantile gaps (differences in quantiles of log wages) between the 85th and 50th, the 85th and 15th as well as the 50th and 15th counterfactual percentile, i.e.
$$\begin{array}{*{20}l} {QG}_{85/50}(w|t_{w}=o,t_{x}=b) & = Q_{85}(w|t_{w}=o,t_{x}=b)-Q_{50}(w|t_{w}=o,t_{x}=b), \\ {QG}_{85/15}(w|t_{w}=o,t_{x}=b) & = Q_{85}(w|t_{w}=o,t_{x}=b)-Q_{15}(w|t_{w}=o,t_{x}=b), \\ {QG}_{50/15}(w|t_{w}=o,t_{x}=b) & = Q_{50}(w|t_{w}=o,t_{x}=b)-Q_{15}(w|t_{w}=o,t_{x}=b). \end{array} $$
In addition to a graphical comparison of the actual and counterfactual development over time, we also contrast the increase in the counterfactual quantile gaps with the actual increase between 1985 and 2010. This allows us to quantify the share of the increase in inequality associated with composition changes (where g∈{85/50,85/15,50/15})
$$ {shareQG}_{g,x}(w|t_{w}=2010,t_{x}=1985)=\frac{{QG}_{g}(w|t_{w}=2010,t_{x}=2010)-{QG}_{g}(w|t_{w}=2010,t_{x}=1985)}{{QG}_{g}(w|t_{w}=2010,t_{x}=2010)-{QG}_{g}(w|t_{w}=1985,t_{x}=1985)}. $$
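Numerically, this is just the ratio of the counterfactual change to the actual change in a gap; a small sketch with purely illustrative numbers:

```python
def share_explained(qg_2010_actual, qg_2010_counterfactual, qg_1985):
    """Share of the 1985-2010 rise in a quantile gap explained by composition changes."""
    return (qg_2010_actual - qg_2010_counterfactual) / (qg_2010_actual - qg_1985)

# Illustrative values only: if the 85/15 gap rose from 0.60 to 0.90 log points but
# would only have reached 0.75 with the 1985 composition, composition explains half:
# share_explained(0.90, 0.75, 0.60) -> 0.5
```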
For the logit regression, we use a sequence of specifications adding covariates in order to investigate the incremental composition effect on wage inequality. We divide the vector of characteristics into five groups of variables, namely educational outcomes (Ed), labor market experience (Ex), labor market history (Hist), occupation (Occ), and industry (Ind) characteristics (see Tables 1 and 8). Among those, we consider potential labor market experience as continuous and all other variables as categorical, leading to a highly flexible specification of the logit model. We calculate four versions of the counterfactual quantile gaps, starting with a specification only controlling for education (row E in Table 8).
Table 8 Specification overview
Sequentially adding sets of covariates (characteristics) to our reweighting procedure, we estimate the change in the counterfactual quantile gaps that is associated with the set of covariates considered so far. This way, we quantify the incremental contribution of covariates to the rise in wage inequality (this contribution is given by the figures in the columns labeled "Increment"; see, e.g., Table 4). We decompose the difference between the observed and counterfactual rise in inequality into the effects of separate sets of covariates. For example, when adding occupation and industry characteristics (OI) to the reweighting function that already contains education, experience and labor market history (EEH), we measure the incremental effect of occupation and industry (OI) net of the effect contributed by the set of covariates already included (EEH). We add covariates in the order given in Table 8. The incremental effect of each set of covariates depends upon the order in which they are added to the model. Our reasoning behind the choice of the sequence shown in Table 8 is that we gradually move from exogenous and predetermined characteristics towards characteristics that are the likely consequence of endogenous decisions of the individual. We start with education because education typically remains fixed after labor market entry. Next, potential work experience is a linear function of time and education. Similarly, labor market history involves characteristics which are affected by education and actual work experience. Finally, occupation and industry can in principle be changed at any time conditional on education, experience and labor market history, and we are particularly interested in whether occupation and industry play a role after accounting for all other individual-level characteristics.
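Given the cumulative explained shares from the nested specifications, the incremental contributions reported in the "Increment" columns follow by simple differencing; a sketch with illustrative numbers (not the actual estimates):

```python
def incremental_shares(cumulative, order=("E", "EE", "EEH", "EEHOI")):
    """Incremental contribution of each covariate group from cumulative explained shares."""
    increments, previous = {}, 0.0
    for spec in order:
        increments[spec] = cumulative[spec] - previous
        previous = cumulative[spec]
    return increments

# Illustrative input and output:
# incremental_shares({"E": 0.17, "EE": 0.19, "EEH": 0.36, "EEHOI": 0.53})
# -> {"E": 0.17, "EE": 0.02, "EEH": 0.17, "EEHOI": 0.17}
```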
One may wonder how the reweighting method deals with endogeneity, i.e., unobservables that are not included in the analysis but that are potentially correlated with the included observables. Fortin et al. (2011) show that for a causal interpretation one only has to make the assumption that the distribution of unobservables for workers with identical observables (including observed labor market history) is the same in the base year and the target year (assumption 5, p. 21 in Fortin et al. 2011). Note that this does not rule out correlation of observables and unobservables. Put differently, the relationship between observables and unobservables is assumed to be time-invariant. This assumption would be violated if, e.g., having prior part-time/nonemployment experience were increasingly associated with good or bad unobservables. While we cannot rule out this possibility, there is no evidence for such an effect. However, the point to be stressed is that a mere correlation between observables and unobservables does not pose a problem for our method as long as the correlation does not vary systematically over time.
Composition reweighting for total employment
The reweighting can be expanded to take into account selection between full-time work and total employment based on observables, thus addressing the limitation that the SIAB data do not provide comparable wages for part-timers. We first calculate wage distributions for full-timers using the distribution of characteristics in the total employment sample, involving both part-timers and full-timers. Then, in a second step, we reweight these counterfactual wage distributions to the characteristics of a base year, analogous to the "Composition reweighting for full-timers" section. The resulting distribution can be interpreted as the wages that would have prevailed had all individuals worked full-time and had their characteristics stayed at the level of the base year.
The first step consists of within-period composition reweighting. We calculate counterfactual wage distributions, which would have prevailed if all individuals had been paid full-time wages. This interpretation holds under the assumption that returns to characteristics for non-full-timers are equal to those for full-timers. The results of Manning and Petrongolo (2008) suggest that hourly wage differentials for (female) part-timers in industrialized countries are not driven by differences in returns to characteristics, which lends credibility to our approach. In order to calculate these distributions, we apply the reweighting technique described in the "Composition reweighting for full-timers" section, but instead of the full-time sample in a specific base year, the reference group is total employment in the same year. Let e∈{FT,TE} describe the employment group to which each observation belongs, where FT represents full-timers and TE total employment. Full-time workers appear in both FT and TE. The reweighting factor ρ(FT→TE,tx=o) is the probability of characteristics x in the total employment sample in a given year, relative to the probability of x in the full-time sample of the same year
$$ {}\rho(FT\rightarrow TE,t_{x}=o)=\frac{dF(x|e_{x}=TE,t_{x}=o)}{dF(x|e_{x}=FT,t_{x}=o)}=\frac{P(e=TE|x,t=o)}{P(e=FT|x,t=o)}\frac{P(e=FT|t=o)}{P(e=TE|t=o)}. $$
Then, the counterfactual distribution of wages, assuming the entire labor force was working full-time, can be written as:
$$\begin{array}{@{}rcl@{}} & & f(w|e_{w}=FT,e_{x}=TE,t_{w}=o,t_{x}=o)\\[1.5ex] & = & \int_{x} f(w|x,e_{w}=FT,t_{w}=o,t_{x}=o)\rho(FT\rightarrow TE,t_{x}=o)dF(x|e_{x}=FT,t_{x}=o). \end{array} $$
Here, P(e=TE|x,t=o) is estimated by a weighted logit regression on the pooled sample of the reference group (total employment TE) and the group of interest (full-timers FT), with the employment status indicator e denoting group membership of each observation. In this step, we use the specification from Table 9, in order to include the full set of observable individual characteristics.
Table 9 Specification for counterfactual total employment
In a second step, we analyze the distribution of wages which would have prevailed, had all employees worked full-time, and had their characteristics been fixed at the level of the base year. By holding the composition of total employment constant over time, we control for changes in the wage distribution due to changes in the selection into total employment over time. This counterfactual distribution can be written as:
$$ \begin{aligned} f(w|e_{w}=FT,e_{x}=TE,t_{w}=o,t_{x}=b) ={}& \int_{x} f(w|x,e_{w}=FT,t_{w}=o)\,\rho(e_{x}=TE,t_{x}=b)\\ &\times \rho(FT\rightarrow TE,t_{x}=o)\,dF(x|e_{x}=FT,t_{x}=o), \end{aligned} $$
where
$$ \rho(e_{x}=TE,t_{x}=b)=\frac{dF(x|e_{x}=TE,t_{x}=b)}{dF(x|e_{x}=TE,t_{x}=o)}=\frac{P(t=b|x,e_{x}=TE)}{P(t=o|x,e_{x}=TE)}\frac{P(t=o|e=TE)}{P(t=b|e=TE)}. $$
Analogous to "Composition reweighting for full-timers" section, we sequentially add groups of covariates to our logit specifications as described by Table 8. This allows us to investigate the incremental changes in inequality associated with the corresponding composition changes.
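For illustration, the within-year FT→TE factor can be estimated analogously to the between-year factor, and the two factors then multiply into the compound observation weight s·ρ(FT→TE,tx=o)·ρ(ex=TE,tx=b). The sketch below uses hypothetical column names and omits the sample weights in estimation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ft_to_te_factor(pooled_year, covariates, group_col="group"):
    """Within-year factor rho(FT -> TE); `pooled_year` stacks the TE and FT samples
    of one year (full-timers appear in both groups, as described above)."""
    X = pd.get_dummies(pooled_year[covariates], drop_first=True)
    y = (pooled_year[group_col] == "TE").astype(int)      # 1 = total employment
    p_te = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    share_te = y.mean()                                    # P(e = TE) in the pooled sample
    rho = (p_te / (1.0 - p_te)) * ((1.0 - share_te) / share_te)
    ft_rows = (pooled_year[group_col] == "FT").to_numpy()
    return rho[ft_rows]                                    # factors for the FT observations

# Compound weight used when computing counterfactual quantiles:
# weight = s * ft_to_te_factor(...) * reweighting_factor(...)
```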
There is a large literature on the rise of wage inequality in Germany, see, e.g., Dustmann et al. (2009), Antonczyk et al. (2010), Card et al. 2013 as well as the literature review in Section 2.
There is a cyclical component in transitions from nonemployment and part-time employment to full-time employment. During an upswing (downturn), one would expect these to increase (fall). In a recent study, Borowczyk-Martins and Lalé (2018) show that for the UK and the USA transitions from part-time to full-time employment at the same employer are a major driver of the cyclical changes in part-time employment, growing (declining) during an upswing (downturn). Our analysis focuses on the long-term rise in the share of full-timers with nonemployment and part-time experience. As our empirical results show, this long-term rise dominates the cyclical variation.
See, e.g. Arulampalam (2001), Burda and Mertens (2001), Beblo and Wolf (2002), Manning and Petrongolo (2008), Edin and Gustavsson (2008), Schmieder et al. (2010), Edler et al. (2015), or Paul (2016).
See also (in chronological order) Kohn (2006), Gernandt and Pfeiffer (2007), Antonczyk et al. (2010), Fitzenberger (2012), Card et al. (2013), Felbermayr et al. (2014), Dustmann et al. (2014), Riphahn and Schnitzlein (2016), Möller (2016), and Antonczyk et al. (2018). Most recent studies are based on administrative employment records in the Sample of Integrated Employment Biographies (SIAB) – or on earlier versions of the same data source – as provided by the Research Data Center of the IAB and the Federal Employment Agency. Some studies use the cross-sectional wage surveys in the German Structure of Earnings Survey (GSES) provided by the Research Data Center of the Statistical Offices, the Socio-Economic Panel (GSOEP) provided by DIW, or the BIBB-IAB/BIBB-BAuA Labor Force Surveys (BLFS). While the SIAB data only involve earnings, the GSES, the GSOEP, and the BLFS allow for an analysis of hourly wages. Researchers using the SIAB data typically focus on full-time working employees. While the SIAB and the GSOEP provide panel data, the GSES data and the BLFS only involve repeated cross-sections every four to six years, and the GSES surveys before 2010 only involve a subset of all industries and lack very small firms. Compared to the GSOEP and the BLFS, the GSES and the SIAB provide much larger cross-sections on employees and wages. All four data sets document the rise in wage inequality since the mid 1990s, see Dustmann et al. (2009, SIAB), Fitzenberger (2012, SIAB and GSES), Antonczyk et al. (2009, BLFS), and Gernandt and Pfeiffer (2007, GSOEP).
The recent study by Möller (2016) shows that the rise in wage inequality stopped in 2010 based on a new release of the SIAB data. However, the comparison of the years before and after 2011 is plagued by a structural break in 2011 regarding the distinction between part-timers and full-timers. For both reasons, we abstain from analyzing the SIAB data after 2010 since our focus is on analyzing the rise in wage inequality.
This study uses the factually anonymous Sample of Integrated Labour Market Biographies (version 1975 – 2010). Data access was provided via a Scientific Use File supplied by the Research Data Centre (FDZ) of the German Federal Employment Agency (BA) at the Institute for Employment Research (IAB), see vom Berge et al. (2013) for a data documentation.
We have calculated the standard deviation of hours of work for the years 1985 and 2010 based on the German Socioeconomic Panel (detailed results are available upon request). For part-timers, the standard deviation is two to three times higher than for full-timers.
In order to clearly separate previous part-time and nonemployment during educational spells from those after having completed education, we also include the evidence for full-timers aged 30 to 60 years, see Figs. 16 and 17 in the Appendix. For part-time experience, the trends are very similar for those aged 25 to 60 and those aged 30 to 60.
In Table SA2 in the Additional file 1, we show that differences in means and variances below and above the median are highly statistically significant.
We thank an anonymous referee for raising these issues.
Unfortunately, the SIAB data do not record whether a nonemployment episode is due to an educational spell. However, the data involve the educational degree as possible outcome of a previous nonemployment episode.
This method has been applied, among others, by Lemieux (2006) and Dustmann et al. (2009). For an overview of alternative decomposition techniques, see Fortin et al. 2011.
Such an analysis ignores general equilibrium effects, i.e., changes in the conditional wage structure are assumed to be independent of changes in the work force composition.
However, there is a slight difference with regard to the effect of female education when we take as the base year 2010 instead of 1985. This points to interaction effects. We carry out this reverse analysis in Section SA1.2 in the Additional file 1.
It is not an error that quantile gaps for the overall distribution are unchanged up to the third digit in row 13 of Table 5 when adding occupation and industry characteristics. This is due to the fact that daily wages are rounded to full Euros and quantiles only change if the change in counterfactual weights is large enough to move the quantile value to a different Euro integer.
Compare the discussion in Card et al. (2013).
Acemoglu, D, Autor D (2011) Skills, tasks and technologies: implications for employment and earnings. In: Ashenfelter O Card D (eds)Handbook of Labor Economics, vol 4b, ch. 12, North Holland, Amsterdam.
Antonczyk, D, Fitzenberger B, Leuschner U (2009) Can a task-based approach explain the recent changes in the German wage structure? J Econ Stat 229:214–238.
Antonczyk, D, Fitzenberger B, Sommerfeld K (2010) Rising wage inequality, the decline in collective bargaining, and the gender wage gap. Labour Econ 17:835–847.
Antonczyk, D, DeLeire T, Fitzenberger B (2018) Polarization and rising wage inequality: comparing the U.S. and Germany. Econometrics 6. https://doi.org/10.3390/econometrics6020020.
Arulampalam, W (2001) Is unemployment really scarring? Effects of unemployment experiences on wages. Econ J 111:F585–F606.
Autor, D, Levy F, Murnane R (2003) The skill content of recent technological change: an empirical exploration. Q J Econ 118:1279–1333.
Autor, D, Katz LF, Kearney MS (2008) Trends in US wage inequality: revising the revisionists. Rev Econ Stat 90:300–323.
Autor, D (2013) The 'task approach' to labor markets: an overview. J Labour Mark Res 46:185–199.
Beblo, M, Wolf E (2002) How much does a year off cost? Estimating the wage effects of employment breaks and part-time periods. Cah Economiques Brux 45(2):191–217.
Biewen, M, Seckler M (2017) Changes in the German wage structure: unions, internationalization, tasks, firms, and worker characteristics. IZA Discussion Paper No. 10763. IZA Institute of Labor Economics, Bonn.
Blundell, R, Dias MC, Meghir C, Shaw JM (2016) Female labor supply, human capital, and welfare reform. Econometrica 84:1705–1753.
Borowczyk-Martins, D, Lalé E (2018) The Ins and Outs of Involuntary Part-time Employment. IZA Discussion Paper No. 11826. IZA Bonn.
Burda, M, Mertens A (2001) Estimating wage losses of displaced workers in Germany. Labour Econ 8:15–41.
Burda, M, Hunt J (2011) What Explains the German Labor Market Miracle in the Great Recession? Brook Pap Econ Act 42:273–335.
vom Berge, P, Burghardt A, Trenkle S (2013) Sample of integrated labour market biographies regional file 1975-2010 (SIAB-R 7510). FDZ data report, 09/2013, Nuremberg.
Card, D, Heining J, Kline P (2013) Workplace heterogeneity and the rise of West German wage inequality. Q J Econ 128:967–1015.
Connolly, S, Gregory M (2009) The part-time penalty: earnings trajectories of British women. Oxf Econ Pap 61:76–97.
DiNardo, J, Fortin NM, Lemieux T (1996) Labour market institutions and the distribution of wages. Econometrica 64:1001–1044.
Dustmann, C, Ludsteck J, Schönberg U (2009) Revisiting the German wage structure. Q J Econ 124:843–881.
Dustmann, C, Fitzenberger B, Schönberg U, Spitz-Oener A (2014) From sick man of Europe to economic superstar: Germany's resurgent economy. J Econ Perspect 28:167–188.
Edin, PA, Gustavsson M (2008) Time out of work and skill depreciation. Ind Labor Relat Rev 61:163–180.
Edler, S, Jacobebbinghaus P, Liebig S (2015) Effects of unemployment on wages: differences between types of reemployment and types of occupation. SFB 882 Working Paper Series No. 51. University of Bielefeld, Bielefeld.
Felbermayr, G, Baumgarten D, Lehwald S (2014) Increasing wage inequality in Germany: what role does global trade play? In: Glob Econ Dyn Bertelsmann Stiftung.
Fitzenberger, B (1999) Wages and employment across skill groups: an analysis for West Germany. Physica/Springer, Heidelberg.
Fitzenberger, B (2012) Expertise zur Entwicklung der Lohnungleichheit in Deutschland. Arbeitspapier, Sachverständigenrat zur Begutachtung der Gesamtwirtschaftlichen Entwicklung, Wiesbaden.
Fitzenberger, B, Osikominu A, Völter R (2006) Imputation rules to improve the education variable in the IAB employment subsample. J Appl Soc Sci Stud (Schmollers Jahrbuch) 126:405–436.
Fitzenberger, B, Steffes S, Strittmatter A (2016) Return-to-job during and after parental leave. Int J Hum Resour Manag 27:803–831.
Fortin, NM, Lemieux T, Firpo S (2011) Decomposition methods in economics. In: Ashenfelter O, Card D (eds) Handbook of Labor Economics, vol 4, 1–102, North Holland, Amsterdam.
Gartner, J (2005) The imputation of wages above the contribution limit with the German IAB employment sample. FDZ Methodenreport No. 2. Institut für Arbeitsmarkt und Berufsforschung (IAB), Nürnberg.
Gernandt, J, Pfeiffer F (2007) Rising wage inequality in Germany. J Econ Stat 227:359–380.
Gregory, M, Jukes R (2001) Unemployment and subsequent earnings: estimating scarring among British men, 1984-94. Econ J 111:F607–F625.
Heckman, JJ (1981) Structural analysis of discrete data, chapter 3. In: Manski C, McFadden D (eds). MIT Press, Cambridge, MA.
Kohn, K (2006) Rising wage dispersion, after all! The German wage structure at the turn of the century. IZA Institute of Labor Economics, Bonn.
Lemieux, T (2006) Increasing residual wage inequality: composition effects, noisy data, or rising demand for skill? Am Econ Rev 96:461–498.
Manning, A, Petrongolo B (2008) The part-time penalty for women in Britain. Econ J 118:28–51.
Möller, J (2016) Lohnungleichheit - Gibt es eine Trendwende? Wirtschaftsdienst 96(1):38–44.
Paul, M (2016) Is there a causal effect of working part-time on current and future wages?. Scand J Econ 118:494–523.
Potrafke, N (2012) Unemployment, human capital depreciation and pension benefits: an empirical evaluation of German data. J Pension Econ Finance 11:223–241.
Riphahn, RT, Schnitzlein D (2016) Wage mobility in East and West Germany. Labour Econ 39:11–34.
Ruhm, C (1991) Are workers permanently scarred by job displacements? Am Econ Rev 81:319–324.
OECD (2010) OECD Employment Outlook 2010: Moving beyond the Jobs Crisis. OECD Publishing, Paris. https://doi.org/10.1787/empl_outlook-2010-en.
Sachverständigenrat zur Begutachtung der Gesamtwirtschaftlichen Entwicklung [SVR] (2014) Mehr Vertrauen in Marktprozesse, Jahresgutachten 2014/15. Metzler-Poeschel, Stuttgart.
Schmieder, JF, von Wachter T, Bender S (2010) The long-term impact of job displacement in Germany during the 1982 recession on earnings, income, and employment. IAB Discussion Paper No. 1/2010. Institut für Arbeitsmarkt und Berufsforschung (IAB), Nürnberg.
Tamm, M, Bachmann R, Felder R (2017) Erwerbstätigkeit und atypische Beschäftigung im Lebenszyklus - Ein Kohortenvergleich für Deutschland. Perspekt Wirtschpolit 18:263–285.
Tisch, A, Tophoven S (2012) Employment biographies of the German baby boomer generation. J Appl Soc Sci Stud (Schmollers Jahrbuch) 132:205–232.
We are grateful to the Research Data Center at IAB for useful discussions. We thank Benjamin Bruns, Christian Dustmann, Uta Schönberg, Alexandra Spitz-Oener, and various seminar audiences for the helpful comments and suggestions. We would also like to thank the anonymous referees and the editor for the useful remarks.
Responsible editor: Pierre Cahuc
This study was funded by the German Science Foundation (DFG) through the project "Accounting for Selection Effects in the Analysis of Wage Inequality in Germany" (grant number: BI 767/3-1 and FI 692/16-1).
Our empirical analysis uses a scientific use file of German administrative employment records (see vom Berge, P., A. Burghardt, S. Trenkle, 2013). These data are confidential and can be accessed through the RDC of IAB/BA in Nuremberg.
University of Tübingen, Tübingen, Germany
Martin Biewen
IZA, Bonn, Germany
& Bernd Fitzenberger
Humboldt University Berlin, Berlin, Germany
Bernd Fitzenberger
& Jakob de Lazzer
IFS, London, UK
CESifo, Munich, Germany
ROA, Maastricht, Netherlands
ZEW, Mannheim, Germany
Correspondence to Bernd Fitzenberger.
The IZA Journal of Labor Economics is committed to the IZA Guiding Principles of Research Integrity. The authors declare that they have observed these principles.
Additional file 1
Supplementary appendix. (PDF 320 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Biewen, M., Fitzenberger, B. & de Lazzer, J. The role of employment interruptions and part-time work for the rise in wage inequality. IZA J Labor Econ 7, 10 (2018). https://doi.org/10.1186/s40172-018-0070-y
Part-time employment
Employment interruptions
Composition effects
Original Contribution
Associations between implementation of Project Lazarus and opioid analgesic dispensing and buprenorphine utilization in North Carolina, 2009–2014
Apostolos A. Alexandridis (ORCID: orcid.org/0000-0001-9140-2889)1,2,
Nabarun Dasgupta1,2,
Agnieszka D. McCort1,
Christopher L. Ringwalt1,
Wayne D. Rosamond2,
Paul R. Chelminski3 &
Stephen W. Marshall1,2
Injury Epidemiology volume 6, Article number: 2 (2019)
Project Lazarus (PL) is a seven-strategy, community-coalition-based intervention designed to reduce opioid overdose and dependence. The seven strategies include: community education, provider education, hospital emergency department policy change, diversion control, support programs for patients with pain, naloxone policies, and addiction treatment expansion. PL was originally developed in Wilkes County, NC. It was made available to all counties in North Carolina starting in March 2013 with funding of up to $34,400 per county per year. We examined the association between PL implementation and 1) overall dispensing rate of opioid analgesics, and 2) utilization of buprenorphine. Buprenorphine is often used in connection with medication assisted treatment (MAT) for opioid dependence.
Observational interrupted time series analysis of 100 counties over 2009–2014 (n = 7200 county-months) in North Carolina. The intervention period was March 2013–December 2014. 74 of 100 counties implemented the intervention. Exposure data sources comprised process surveys, training records, Prescription Drug Monitoring Program (PDMP) data, and methadone treatment program quality data. Outcomes were PDMP-derived counts of opioid prescriptions and buprenorphine patients. Incidence Rate Ratios were estimated with adjusted GEE Poisson regression models of all seven PL strategies.
In adjusted models, diversion control efforts were positively associated with increased dispensing of opioid analgesics (IRR: 1.06; 95% CI: 1.03, 1.09). None of the other PL strategies were associated with reduced prescribing of opioid analgesics. Support programs for patients with pain were associated with a non-significant decrease in buprenorphine utilization (IRR: 0.93; 95% CI: 0.85, 1.02), but addiction treatment expansion efforts were associated with no change in buprenorphine utilization (IRR: 0.98; 95% CI: 0.91, 1.06).
Implementation of PL strategies did not appreciably reduce opioid dispensing and did not increase buprenorphine utilization. These results are consistent with previous findings of limited impact of PL strategies on overdose morbidity and mortality. Future studies should analyze the uptake of MAT using a more expansive view of institutional barriers, treating community coalition activity around MAT as an effect modifier.
Deaths from opioid overdose began increasing in North Carolina (NC) in the late 1990s (Web-based Injury Statistics Query and Reporting System (WISQARS), 2005). Between 1999 and 2015, opioid mortality increased 486% to over 11 per 100,000 (Injury and Epidemiology Surveillance Unit, Injury and Violence Prevention Branch, Division of Public Health, North Carolina Department of Health and Human Services, 2015). Opioid overdose has become the leading cause of unintentional injury death in the state, and involves prescription opioid analgesics (OA) as well as illicitly manufactured heroin and fentanyl (State Center for Health Statistics, 2015). Addressing this epidemic has become a leading priority for the NC Department of Health and Human Services (NC DHHS), which has promoted supply, demand, and harm reduction strategies (North Carolina Department of Health and Human Services, 2017).
Among demand reduction strategies, medication assisted treatment (MAT), particularly with the partial opioid agonist buprenorphine, has been widely embraced. MAT is one of the most strongly evidence-based modalities of substance abuse treatment (Mattick et al., 2014; Thomas et al., 2014). Buprenorphine is the only form of agonist MAT that can be dispensed by traditional retail pharmacies, and can be prescribed by primary care providers who complete an 8-h training through the Substance Abuse and Mental Health Services Administration (SAMHSA) (Fiellin et al., 2004). Buprenorphine also has advantages for patients seeking agonist-based MAT in rural areas (Kraus et al., 2011). Formulations of buprenorphine indicated for MAT are also often used to reduce the risk of opioid abuse in patients receiving high doses of full-agonist opioids.
Project Lazarus (PL) is a comprehensive, community-based series of seven interventions designed to reduce demand, supply and harms related to prescription OA; improve treatment of chronic pain; and promote and improve access to MAT. PL was first piloted in one NC county between 2007 and 2010, and was implemented statewide in early 2013 (Albert et al., 2011). Subsequently, the seven distinct PL strategies were promoted nationally by the White House Office of National Drug Control Policy (ONDCP) 2015 opioid strategy (United States, 2015). Funding for coalitions was made available to all 100 NC counties through a non-competitive application process organized by the state Medicaid implementation authority, Community Care of North Carolina (CCNC), and the Mountain Area Health Education Center (MAHEC). Coalitions were invited to select among the seven PL strategies they felt best represented their community's needs, with a minimum of three.
PL's seven distinct strategies are designed to be implemented together by a community-based coalition. This paper examines the association between the seven PL strategies and (1) overall prescribing rate of opioid analgesics and (2) utilization of buprenorphine. The 7 PL strategies are as follows. (1) Community education promoted public awareness of prescription opioid overdose. (2) Diversion control was designed to remove unused medications and train law enforcement on OA diversion. (3) Support programs for patients with pain provided support groups, case management and pain clinic vetting and referrals. (4) Provider education focused on educating medical professionals in chronic pain treatment, including group trainings and in-office 'academic detailing,' or tailored instruction. The North Carolina Medical Board's published guidelines for pain management were referenced in trainings (Trado, 2004). (5) Hospital emergency department (ED) policies revised hospital practices to limit ED OA prescribing and require checking the state's Prescription Drug Monitoring Program (PDMP) before prescribing. (6) Addiction treatment expansions increased the number of providers in a community able to prescribe buprenorphine-based MAT for opioid dependence, and the number of beds available in inpatient detoxification and treatment facilities. (7) Naloxone policies promoted liberal distribution of the opioid antagonist naloxone to opioid users and their close contacts, first responders including EMS and police, and caregivers. Strategies 1–3 were focused on community entities external to the health care system, whereas strategies 4–7 were focused on health care providers (Table 1).
Table 1 Project Lazarus strategies and hypothesized effects
The statewide implementation of PL in NC has significance as one of the earliest and largest coordinated efforts to address the overdose epidemic using community-based approaches. We hypothesized that the seven PL strategies would have varying effects on opioid overdose morbidity and mortality, opioid prescribing, and utilization of buprenorphine (Table 1). This paper focuses on PL's hypothesized effects on opioid prescribing and buprenorphine utilization. An evaluation of the association between PL and opioid overdose morbidity and mortality has appeared elsewhere (Alexandridis et al., 2018).
NC is a large state in the southeastern US (population 9.9 million in 2014) that had overdose rates comparable to the US average during the 2009–2014 study period. We used an interrupted time series design to examine the relationship between strategies implemented as a part of PL and both prescription OA dispensing and buprenorphine utilization rates.
The general analytic approach has been described previously (Alexandridis et al., 2018). Primary and administrative secondary data sources were aggregated at the level of the county for every month over the time period 2009–2014. These secondary data sources included the state PDMP and drug treatment intake interviews. The resulting time series captured relevant activities of PL coalition activities and opioid-related outcomes across a total of 7200 county-months.
Implementation of PL strategies
PL strategies were implemented by a series of county-based community coalitions. Funding for the intervention was made available to all 100 NC counties via an application process for county-based coalitions beginning in 2011. Funding was distributed through CCNC (the designated state Medicaid implementation authority) and MAHEC, with technical support from the community-based organization Project Lazarus. Coalitions that applied received annual grants of between $6500 and $34,400, from a network of funding sources. Thus, a coalition that received the maximal funding ($34,400) may have been able to provide a full-time salary for a community health worker paid at the average weekly wage in North Carolina. Given that additional coordinators and non-personnel costs would be needed to successfully implement the seven strategies, it is reasonable to assume that no county received funding sufficient to fully implement PL without additional investment by the county or community. Our evaluation included a pre-intervention period (January 2009–February 2013) and an intervention period (March 2013–December 2014). CCNC also funded Medicaid regional coordinators who provided technical assistance to community coalitions, directed provider education, and advocated for changes in hospital policies related to opioid prescribing.
We used measures of coalition activities and ongoing surveys of key community coalition leaders to capture the implementation of the 7 PL strategies in each county in each month. We coded implementation of PL strategies using dichotomous variables that captured the implementation of each strategy, with '0' representing no implementation of a strategy in a county to-date, and '1' representing any ongoing or prior implementation or policy change specific to each strategy.
Community-based coalitions were identified at the time they were funded by CCNC. Coalition activities were captured through structured surveys that three of the authors (ADM, ND, CLR) administered via web survey every 6 months to coalition leaders and the CCNC regional coordinators. Surveys included details on naloxone policy adoption, ED policy changes, creation of support programs for patients with pain, and the location and date of provider and community education events.
For the diversion control strategy, details of the time and location of local law enforcement trainings on diversion control were obtained from the NC State Bureau of Investigation (SBI).
For the addiction treatment strategy and the evaluation of PL's association with opioid dispensing, we combined survey data on MAT expansions with measures of incident buprenorphine and methadone utilization. This measure was constructed with data from the NC Controlled Substance Reporting System (CSRS; the state PDMP) and the NC Treatment Outcomes and Program Performance System (NC-TOPPS), a quality monitoring system sponsored by the Substance Abuse and Mental Health Services Administration (SAMHSA). Overall counts of new methadone treatment program patients were abstracted from intake interviews and added to measures of incident MAT. Buprenorphine treatment episodes were considered incident after a 90-day washout period since the last buprenorphine script dispensed. The evaluation of PL's association with buprenorphine utilization only used the former survey data on MAT expansions and policy change.
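The 90-day washout rule can be implemented directly on the prescription records, for example as follows (hypothetical pandas column names; patients would be identified by the hashed CSRS identifier described below):

```python
import pandas as pd

def flag_incident_buprenorphine(scripts, washout_days=90):
    """Mark buprenorphine dispensings that start a new treatment episode: the first
    script observed for a patient, or a script filled more than `washout_days`
    after the patient's previous buprenorphine script."""
    df = scripts.sort_values(["patient_id", "fill_date"]).copy()
    gap = df.groupby("patient_id")["fill_date"].diff()   # time since previous script
    df["incident_episode"] = gap.isna() | (gap > pd.Timedelta(days=washout_days))
    return df
```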
Opioid prescribing and buprenorphine utilization
Data from the CSRS were used to construct county-month counts of patients and prescriptions for opioid analgesics. PDMPs such as the CSRS are state government-run electronic databases that can be queried at the point of care by clinicians to review a patient's history of receiving controlled substances. Selected law enforcement officers and medical examiners are allowed access to the database when they are investigating specific cases. The CSRS began collecting data in January 2009, and data were provided by the NC Division of Mental Health, Developmental Disabilities, and Substance Abuse Services (DMHDDSAS). The data are generated when prescriptions for controlled substances are dispensed at regulated pharmacies in North Carolina. The data captured comprise each field of information legally required to be included in a North Carolina prescription for a controlled substance. The data are stored locally at the pharmacy and transmitted periodically to a central database. Data elements include unique identifiers for prescribers, dispensers, and patients and their locations; quantity, dose, days supply, and National Drug Code of the prescription; and age and sex of the patient.
The raw data were tabulated by active pharmaceutical ingredient (API) and dosage form (e.g., solid oral, patch) for opioid analgesics. Opioid analgesics were defined as solid oral, transbuccal, or transdermal formulations containing codeine, fentanyl, hydrocodone, hydromorphone, methadone, morphine, oxycodone and oxymorphone. Prescriptions with APIs comprising the top 99.9% of all prescription records were retained; data cleaning removed non-controlled substances and appended metadata on drug class. Patients were assigned a unique identification number provided by the database vendor (Health Information Designs, Auburn, Alabama, USA), which takes name, date of birth, and residential ZIP code into account; the identifier was generated via a one-way hash algorithm and was consistent over data-years.
Data from the CSRS were used to create county-month counts for the two outcome measures of interest, opioid prescribing and buprenorphine utilization. For opioid prescribing, counts of dispensed full mu-opioid receptor (MOR) agonist analgesics and their prodrugs, including solid oral, transdermal, nasal spray, and transbuccal formulations, were extracted by county and month from the CSRS. For buprenorphine utilization, counts of unique monthly buprenorphine patients, created using prescription data from the CSRS, were used to identify all patients receiving pharmacy-dispensed formulations of buprenorphine with indications for addiction treatment (e.g. Subutex, Suboxone, but not Butrans).
Covariate measures
In order to control for fundamental differences in health status between counties (e.g., "healthy county effect") and over time, a construct from the Robert Wood Johnson Foundation (RWJF) County Health Rankings was used (Remington et al., 2015). This "county health factors" variable is a composite Z-score-based ranking which comprises health behaviors, including tobacco use, diet and exercise, alcohol and drug use, and sexual activity; clinical care, including access to and quality of care; social and economic factors, including education, employment, income, family and social support, and community safety; and physical environment, including air and water quality, housing and transit. This was available for 2010 onwards; for 2009 the data from 2010 to 2016 were used to linearly extrapolate county months. Annual data were linearly interpolated to generate county-month scores. This score was used in fully-adjusted, immediate-effect multivariable models of all seven strategies.
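A minimal sketch of this extrapolation and interpolation step (Python; the z-score values shown are purely illustrative, not study data):

import numpy as np

# Hypothetical annual RWJF health-factors z-scores for one county, 2010-2016.
years = np.arange(2010, 2017)
z = np.array([0.4, 0.3, 0.35, 0.2, 0.25, 0.1, 0.15])   # illustrative values only

# Extrapolate 2009 from the 2010-2016 linear trend.
slope, intercept = np.polyfit(years, z, 1)
z_2009 = slope * 2009 + intercept

# Linearly interpolate annual values (placed at mid-year) to county-months
# over the 2009-2014 study window.
yr = np.concatenate(([2009.5], years + 0.5))
zz = np.concatenate(([z_2009], z))
months = 2009 + (np.arange(12 * 6) + 0.5) / 12.0
monthly_z = np.interp(months, yr, zz)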
We used incidence rate ratios (IRRs) to quantify the association between each of the seven PL strategies and our two outcomes, and used Poisson regression to model the IRRs. Models were fit using generalized estimating equations (GEE) to account for clustering at the county level, with a population offset for each county-month (Alexandridis et al., 2018). These models were assessed for indicators of overdispersion, and Negative Binomial (NB2) models were also assessed.
For opioid dispensing and buprenorphine utilization separately, our models were defined as:
$$ \lambda = \ln\left(a/n\right) = \ln\left(a\right) - \ln\left(n\right) = \beta_0 + \left[\beta_1 X_1 + \cdots + \beta_7 X_7\right] + \left[\beta_8 X_8\right] + \left[\beta_9 X_9\right] + \left[\beta_{10} X_{10}\right] + \left[\beta_{11} X_{11} + \cdots + \beta_{13} X_{13}\right] $$
where λ was the log of the rate of opioid dispensing or buprenorphine utilization per county-month resident population, a was the count of opioid dispensing or buprenorphine utilization, n was the county resident population (entering the model as the offset ln(n); see below), and β0 was the intercept.
X1, …, X7 were the independent (exposure) variables that designated the presence or absence of the seven intervention strategies for any given month and county. Each strategy was represented as a dichotomous variable, with no implementation as the referent (coded 0) and implementation coded 1.
X8 was the county-month rate of outpatient prescriptions dispensed for opioid analgesics in units of 1000, with total resident population as the denominator. This measure was only used as a covariate in models of buprenorphine utilization.
X9 was the county health status variable of linearly interpolated annual z-scores of Health Factors from the RWJF County Health Rankings. This variable was used as a marker for general community health status and was included to control for potential confounding by changes to general community health status over time and between counties.
X10 was a variable for calendar year included to remove linear trends over time ("secular trend").
X11, X12, and X13 were indicator variables for seasonality, implemented with indicator coding for spring, summer and fall, with winter as the referent. These variables are included to de-trend for seasonal effects on overdose and related outcomes, which were observed in preliminary data analysis. For opioid overdoses in 2010, a Walter and Elwood analysis of seasonality using the exact method test suggested the presence of seasonality (chi-square 5.8, p = 0.05, 2 df) with a peak in March.
The offset term ln(n) was the loge denominator of the rates, defined as the resident population of the county. Annual population was obtained from the National Center for Health Statistics and linearly interpolated by month.
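As an illustration of the model specification above (not the authors' code), the county-clustered Poisson GEE with a log-population offset could be fit with statsmodels in Python; the DataFrame and its column names are hypothetical, and the exchangeable working correlation is our assumption, since the structure used is not stated here:

import numpy as np
import statsmodels.api as sm

# df: one county-month per row, with hypothetical columns
#   'y'        count of OA prescriptions (or unique buprenorphine patients)
#   'pop'      county resident population (offset denominator, n)
#   's1'..'s7' 0/1 indicators for the seven PL strategies (X1-X7)
#   'oa_rate'  OA prescriptions per 1,000 residents (X8; buprenorphine models only)
#   'health'   interpolated RWJF health-factors z-score (X9)
#   'year'     calendar year (X10)
#   'spring','summer','fall'  season indicators (X11-X13; winter = referent)
#   'county'   cluster identifier
cols = ["s1", "s2", "s3", "s4", "s5", "s6", "s7",
        "oa_rate", "health", "year", "spring", "summer", "fall"]
X = sm.add_constant(df[cols])
model = sm.GEE(df["y"], X, groups=df["county"],
               family=sm.families.Poisson(),
               cov_struct=sm.cov_struct.Exchangeable(),
               offset=np.log(df["pop"]))
res = model.fit()
irr = np.exp(res.params)          # incidence rate ratios
irr_ci = np.exp(res.conf_int())   # 95% confidence intervals on the IRR scale

Exponentiating the coefficients and their confidence limits returns estimates on the IRR scale reported in the tables.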
Goodness-of-fit was assessed using Akaike and Bayesian Information Criterion (AIC and BIC), with smaller values indicating better fit relative to an intercept-only model. We assessed over-dispersion using Deviance and Pearson's chi-square divided by its degrees of freedom. Initially, univariate models were used to examine each of the individual strategies, without any adjustment for trends by year and season, county health status, or other strategies. Multivariable adjusted models examined the associations for each of the seven strategies while controlling for the other six strategies, with additional adjustment for year and season, and county health factors. Adjusted models of buprenorphine utilization also included adjustment for each county's population-based rate of OA prescribing.
A total of 74 out of 100 NC counties implemented any strategy of PL by the end of the intervention period, covering 70% of the state population. Non-implementing counties were either ineligible due to a lack of resources or did not submit a funding application.
Over the 2009–2014 study period, unique annual OA patients decreased by 6.9%, from 23.0% of all state residents in 2009 to 21.4% in 2014 (Alexandridis et al., 2018). Annual prescriptions dispensed for OA increased by 17.3%, from 6.22 million to 7.30 million; on a per-capita basis this corresponded to an 11.4% increase (from 0.66 to 0.73 prescriptions per person-year). The most commonly dispensed OAs were hydrocodone, oxycodone, codeine and morphine.
Opioid analgesic dispensing
In univariate models (no adjustment), we found weak associations between the adoption of PL strategies and the rate of OA prescription dispensing (Table 2). MAT expansion was associated with a 16% increase in OA dispensing (IRR: 1.16; 95% CI: 1.11, 1.20), and Diversion Control efforts were associated with a 15% increase (IRR: 1.15; 95% CI: 1.12, 1.17) in OA dispensing.
Table 2 Associations between Project Lazarus implementation and opioid analgesic prescribing, by strategy, North Carolina, 2009–2014
In fully-adjusted multivariable models accounting for implementation of all seven strategies, year and season, and county health status, these associations were attenuated. A statistically significant association between Diversion Control strategies and increased prescribing persisted (IRR: 1.06; 95% CI: 1.03, 1.09). No other strategy was associated with a 5% or higher increase or decrease in opioid prescribing. Notably, the strategy of Provider Education was not associated with any change in OA dispensing (IRR: 1.00; 95% CI: 0.97, 1.03).
Buprenorphine utilization
In univariate models (no adjustment), each PL strategy was associated with a 54–82% increase in the rate of buprenorphine utilization (Table 3). After adjustment for time and season, these associations were greatly attenuated in single-strategy models; in fact, support programs for patients with pain were associated with a 15% decrease in buprenorphine (IRR: 0.85; 95% CI: 0.78, 0.93).
Table 3 Associations between Project Lazarus implementation and buprenorphine utilization, by strategy, North Carolina, 2009–2014
In fully-adjusted multivariable models including all seven PL strategies, only support programs for patients with pain were associated with a change of 5% or greater in buprenorphine use (IRR: 0.93; 95% CI: 0.85, 1.02), and no strategy was associated with a statistically significant change. The addiction treatment strategy hypothesized to have a direct impact on this outcome was associated with a 2% reduction in buprenorphine use (IRR: 0.98; 95% CI: 0.91, 1.06). An additional model of only the addiction treatment strategy (including adjustment for county health status in addition to year and season, but without the other six strategies) found no association (IRR: 1.00; 95% CI: 0.92, 1.09).
Project Lazarus was implemented statewide in NC as a community-based program with multi-agency support. Its goals were to address opioid supply, demand, and harm reduction. PL sought to improve access to MAT and reduce opioid prescribing, while maintaining legitimate access to opioids for patients with chronic pain. The results of this analysis, together with our previous analysis of the association between PL and overdose morbidity and mortality, indicate that implementation of the PL strategies neither appreciably reduced opioid dispensing nor increased buprenorphine utilization (Alexandridis et al., 2018).
For a community-coalition-based program such as PL to be successful, as was observed in the pilot implementation in Wilkes County, NC, a strong community-public health partnership needs to be established (Albert et al., 2011). Indicators of a strong partnership include sustained and focused engagement by a local health department or similar public health agency with health care provider networks and/or key enforcement agencies, such as local law enforcement (Alexandridis et al., 2017). Such partnerships are relatively uncommon in communities, particularly around the issues of substance use, pain, opioids, and overdose. Local health departments offer a potential starting point for a coalition to crystallize around, but deep engagement with stakeholders outside of the local public health infrastructure is also critical (Alexandridis et al., 2017). The maximal annual funding provided to the PL coalitions, less than $35,000, was insufficient to hire full-time community health worker organizers with adequate budgets for implementation activity. Even if funding were sufficient to hire full-time employees, the motivation for various activities must also be internal to the community to achieve the greatest sustained effect. We may have seen a greater effect from the statewide PL program if both funding levels and community readiness to implement actions based on the PL model had been higher.
Opioid prescribing
Diversion control efforts were the lone PL strategy associated with a statistically significant, 6% increase in opioid dispensing. Though unanticipated in its direction, this association was not clinically significant in its magnitude. Given the consistently high reported levels of unused controlled substances (CS) sharing between friends and family as reported by national data (Lipari & Hughes, 2017), one possible explanation is that aggressive take-back and drop-box efforts have led to modest increases in people seeking opioid prescriptions (Lewis et al., 2014; Wakeland et al., 2015). Likewise, other forms of anti-diversion law enforcement activity may have led to increases in the seeking of legitimate opioid prescriptions. It is also possible that this result is due to bias resulting from a misclassification of exposure in law enforcement trainings, which were a component of the diversion control strategy. The SBI targeted known areas of high opioid diversion activity for their trainings, which were in turn attended by law enforcement officers from multiple counties. As we were only able to capture the counties where trainings occurred, it is possible that counties were uncredited for the implementation of this strategy.
A previous study in Massachusetts demonstrated a significant decline in opioid prescribing and unique opioid patients after a comprehensive opioid and pain policy was adopted by a large statewide private insurer (Garcia et al., 2016). Our null result highlights potential limitations of diffuse community coalitions to create significant changes in prescriber practice as compared to a centralized, insurance-directed approach. The lack of impact of PL statewide implementation on prescribing may potentially reflect insufficient investment in local coalition activities. Additionally, it is important to note that PL, as implemented statewide in NC, was not designed with an explicit focus on reducing opioid dispensing volumes, but rather promoting appropriate pain management. The community-facing supply reduction efforts of PL focused on the prevention of prescription opioid sharing through unused drug disposal and education, whereas the healthcare-facing efforts addressed acute opioid prescribing in EDs and chronic pain treatment among community-based physicians. Only these latter physician education strategies would be expected to have a direct effect on opioid prescribing; however, a reduction was not observed in this study.
It is also likely that the effectiveness of PL activities in limiting prescribing was affected by the changing pace and form of the overdose epidemic in the US during the implementation period. From the initial pilot of PL in Wilkes County, NC, through the planning of the statewide implementation, it was not anticipated that nested epidemics of heroin and fentanyl overdose would occur at the scale since documented (Ciccarone, 2017; Unick et al., 2013; Cicero et al., 2015). At the time of the implementation, there was evidence that an inflection point in the epidemic had been reached (Dart et al., 2015a; Dart et al., 2015b). Future community-based efforts to reduce overdose must have the capacity to respond rapidly to evolving patterns of substance use and develop a priori contingency plans. One potential tool that state or federal agencies could use to identify motivated communities is the Community Readiness scale developed by the Tri-Ethnic Center for Prevention Research (Ringwalt et al., 2018). A multi-stage process could first identify communities with high motivation and infrastructure to deploy a community-based program, and target them to implement a PL-like program, while simultaneously developing motivation and infrastructure using other approaches elsewhere.
We found no strong association between any component of PL and buprenorphine utilization in adjusted models. Unadjusted univariate models indicated consistent increases in the utilization rate, even for strategies not expected to have a direct impact on buprenorphine, which were hypothesized to be the result of secular trends in MAT over the study period.
In our previous study, we found the PL addiction treatment strategy was associated with increased overdose mortality (Alexandridis et al., 2018). Together with the findings presented here, this suggests that areas with high MAT utilization were not necessarily influenced by PL, as PL-related MAT policy changes were not associated with a change in the rate of buprenorphine utilization. We focused on buprenorphine specifically because of its advantages in the management of opioid use disorder in rural areas (Kraus et al., 2011), and because buprenorphine's non-MAT use remains closely linked to the clinical management of patients with high risks of opioid dependence or use disorder (Fiellin et al., 2014; Blondell et al., 2010).
It is important to note that even buprenorphine MAT requires a substantial investment to reduce fatal overdose. National surveillance of buprenorphine and heroin overdose in France, where buprenorphine accounts for well over 80% of all MAT, found an 82% reduction in heroin overdose deaths between 1995 and 2003 after the introduction of community-based MAT through primary care providers and community pharmacies in 1996 (Emmanuelli & Desenclos, 2005; Carrieri et al., 2006). However, total MAT utilization increased 100-fold nationally in that time period, and each prevented death was associated with upwards of 200 MAT patients. Within the US, the training requirement and patient limits imposed by the Drug Addiction Treatment Act of 2000 provide additional challenges to the effective implementation of buprenorphine-based MAT, due to incomplete coverage of MAT costs among Medicare/Medicaid patients (Knudsen et al., 2011). People in the custody of the criminal justice system also face considerable restrictions on access to MAT, particularly effective agonist-based MAT, as its use is mediated by drug court staff, judges, correctional facilities, and local and state politics (Friedmann et al., 2012; Brinkley-Rubinstein et al., 2017). These challenges and barriers to treatment underscore the difficulties faced in moving beyond simple supply reduction approaches to community-based addiction treatment (Dasgupta et al., 2018).
Recent strategy recommendations, such as the President's Commission Report, have heavily stressed substance abuse treatment expansion, particularly maintaining access to MAT (Christie et al., 2017). Highly motivated and effective community coalitions are only able to expand or maintain such MAT programs when they are supported or endorsed by diverse federal and state entities, such as Medicaid/Medicare; justice departments, drug courts, and correctional systems; and SAMHSA and its state-level counterparts. Stigma and resistance to agonist MAT affects all levels of this structure, and coalition activity may have a limited impact on local attitudes and views (Ringwalt et al., 2018). Future studies should analyze the uptake of MAT using a more expansive view of these institutional barriers, treating coalition and community activity with regards to MAT as an effect modifier of state and federal policies.
Our evaluation of PL was limited by funder priorities that all North Carolina counties should implement PL, necessitating an observational interrupted time series study design. We were therefore unable to randomize communities to receive PL funding and supports, and residual or uncontrolled confounding may be present. The associations between PL strategies and our outcomes cannot be interpreted as causal. In particular, we were unable to quantify any of the selection factors associated with higher intensity of PL implementation in a given community. Multiple factors influencing coalition activity and substance use could not be obtained with the appropriate spatial and temporal resolution, including: previous collaborations among stakeholders (Kegler et al., 2010), external measures of coalition leadership (Kegler et al., 1998), private insurance policy changes (Garcia et al., 2016), prescriber utilization of the PDMP (Delcher et al., 2015), and the implementation of the Risk Evaluation and Mitigation Strategies (REMS) for transmucosal immediate-release fentanyl and extended-release/long-acting opioid analgesics (Food and Drug Administration, 2012; Cepeda et al., 2017). All were therefore assumed to have nondifferential effects. Because the REMS were not fully implemented by the end of the intervention period their likely effect was minimal. Our model of the intervention also assumes that implementation of PL strategies occurs with high fidelity and that all the PL strategies have a sustained ongoing effect, or are continuously implemented. These are strong assumptions to make in the context of community coalition-based programs funded at relatively modest levels. Finally, although we did not detect changes in overall volume of prescribing, it is possible that the nature of prescribing was altered and inappropriate prescribing was reduced.
Outcomes for both analyses in this study were derived from PDMP data, which have the typical caveats of administrative, secondary data. Our ability to identify unique buprenorphine patients is limited by the proprietary entity resolution algorithms used to link prescriptions based on name, address, and date of birth, potentially more challenging in vulnerable populations that may be more geographically mobile (Galea & Vlahov, 2002). Unlinked records would result in overestimations of unique patients; we addressed this possible source of bias by using a prevalent rather than incident patient outcome.
Finally, our post-intervention period was limited to 22 months. The original pilot of PL in Wilkes County saw its greatest effect after three years of implementation (Albert et al., 2011). Future evaluations of community-based approaches to overdose should consider the length of the intervention and follow-up period.
Despite other accomplishments, the statewide implementation of Project Lazarus in North Carolina did not meet its objectives of marked increases in the utilization of buprenorphine or reductions in opioid analgesic prescribing. Future support for community coalitions addressing the opioid crisis may need a more narrow focus and targeted coalition capacity building to ensure impacts on such outcomes as prescribing behaviors and addiction treatment.
CCNC:
Community Care of North Carolina
CSRS:
North Carolina controlled substance reporting system
IRR:
Incidence rate ratios
MAHEC:
Mountain Area Health Education Center
MOR:
Mu-opioid receptor
NC DHHS:
North Carolina Department of Health and Human Services
NC:
North Carolina
NC-TOPPS:
North Carolina treatment outcomes and program performance system
OA:
Opioid analgesics
ONDCP:
Office of National Drug Control Policy
PDMP:
Prescription Drug Monitoring Program
PL:
Project Lazarus
SAMHSA:
Substance Abuse and Mental Health Services Administration
SBI:
North Carolina State Bureau of Investigation
Albert S, Brason FW 2nd, Sanford CK, Dasgupta N, Graham J, Lovette B. Project Lazarus: community-based overdose prevention in rural North Carolina. Pain Med. 2011;12(Suppl 2):S77–85.
Alexandridis AA, Dasgupta N, Ringwalt C, Sanford C, McCort A. Effect of local health department leadership on community overdose prevention coalitions. Drug & Alcohol Depend. 2017;171:e5–6.
Alexandridis AA, McCort A, Ringwalt CL, Sachdeva N, Sanford C, Marshall SW, et al. A statewide evaluation of seven strategies to reduce opioid overdose in North Carolina. Inj Prev. 2018;24:48–54.
Blondell RD, Ashrafioun L, Dambra CM, Foschio EM, Zielinski AL, Salcedo DM. A clinical trial comparing tapering doses of buprenorphine with steady doses for chronic pain and co-existent opioid addiction. J Addict Med. 2010;4(3):140–6.
Brinkley-Rubinstein L, Cloud DH, Davis C, Zaller N, Delany-Brumsey A, Pope L, et al. Addressing excess risk of overdose among recently incarcerated people in the USA: harm reduction interventions in correctional settings. Int J Prison Health. 2017;13(1):25–31.
Carrieri MP, Amass L, Lucas GM, Vlahov D, Wodak A, Woody GE. Buprenorphine use: the international experience. Clin Infect Dis. 2006;43(Supplement_4):S197–215.
Cepeda MS, Coplan PM, Kopper NW, Maziere J-Y, Wedin GP, Wallace LE. ER/LA opioid analgesics REMS: overview of ongoing assessments of its progress and its impact on health outcomes. Pain Med. 2017;18(1):78–85.
Christie C, Baker C, Cooper R, Kennedy PJ, Madras B, Bondi P. In: Office of National Drug Control Policy, editor. The President's commission on combating drug addiction and the opioid crisis. Washington, DC: The White House; 2017.
Ciccarone D. Fentanyl in the US heroin supply: a rapidly changing risk environment. Int J Drug Policy. 2017;46:107–11.
Cicero TJ, Ellis MS, Harney J. Shifting patterns of prescription opioid and heroin abuse in the United States. N Engl J Med. 2015;373(18):1789–90.
Dart RC, Severtson SG, Bucher-Bartelson B. Trends in opioid analgesic abuse and mortality in the United States. N Engl J Med. 2015a;372(16):1573–4.
Dart RC, Surratt HL, Cicero TJ, Parrino MW, Severtson SG, Bucher-Bartelson B, et al. Trends in opioid analgesic abuse and mortality in the United States. N Engl J Med. 2015b;372(3):241–8.
Dasgupta N, Beletsky L, Ciccarone D. Opioid crisis: no easy fix to its social and economic determinants. Am J Public Health. 2018;108(2):182–6.
Delcher C, Wagenaar AC, Goldberger BA, Cook RL, Maldonado-Molina MM. Abrupt decline in oxycodone-caused mortality after implementation of Florida's prescription drug monitoring program. Drug Alcohol Depend. 2015;150:63–8.
Emmanuelli J, Desenclos JC. Harm reduction interventions, behaviours and associated health outcomes in France, 1996-2003. Addiction. 2005;100(11):1690–700.
Fiellin DA, Kleber H, Trumble-Hejduk JG, McLellan AT, Kosten TR. Consensus statement on office-based treatment of opioid dependence using buprenorphine. J Subst Abus Treat. 2004;27(2):153–9.
Fiellin DA, Schottenfeld RS, Cutter CJ, Moore BA, Barry DT, O'Connor PG. Primary care–based buprenorphine taper vs maintenance therapy for prescription opioid dependence: a randomized clinical trial. JAMA Intern Med. 2014;174(12):1947–54.
Food and Drug Administration. Shared risk evaluation mitigation strategy for all immediate-release Transmucosal fentanyl dosage forms. J Pain Palliat Care Pharmacother. 2012;26(2):123–6.
Friedmann PD, Hoskinson R, Gordon M, Schwartz R, Kinlock T, Knight K, et al. Medication-assisted treatment in criminal justice agencies affiliated with the criminal justice-drug abuse treatment studies (CJ-DATS): availability, barriers, and intentions. Subst Abus. 2012;33(1):9–18.
Galea S, Vlahov D. Social determinants and the health of drug users: socioeconomic status, homelessness, and incarceration. Public Health Rep. 2002;117(Suppl 1):S135–S45.
Garcia MC, Dodek AB, Kowalski T, Fallon J, Lee SH, Iademarco MF, et al. Declines in opioid prescribing after a private insurer policy change - Massachusetts, 2011-2015. MMWR Morb Mortal Wkly Rep. 2016;65(41):1125–31.
Injury and Epidemiology Surveillance Unit, Injury and Violence Prevention Branch, Division of Public Health, North Carolina Department of Health and Human Services. Opiate Poisonings by Intent and County, 1999–2015. 2015.
Kegler MC, Rigler J, Honeycutt S. How does community context influence coalitions in the formation stage? A multiple case study based on the community coalition action theory. BMC Public Health. 2010;10(1):90.
Kegler MC, Steckler A, Mcleroy K, Malek SH. Factors that contribute to effective community health promotion coalitions: a study of 10 project ASSIST coalitions in North Carolina. Health Educ Behav. 1998;25(3):338–53.
Knudsen HK, Abraham AJ, Oser CB. Barriers to the implementation of medication-assisted treatment for substance use disorders: the importance of funding policies and medical infrastructure. Eval Program Plann. 2011;34(4):375–81.
Kraus ML, Alford DP, Kotz MM, Levounis P, Mandell TW, Meyer M, et al. Statement of the American society of addiction medicine consensus panel on the use of buprenorphine in office-based treatment of opioid addiction. J Addict Med. 2011;5(4):254–63.
Lewis ET, Cucciare MA, Trafton JA. What do patients do with unused opioid medications? Clin J Pain. 2014;30(8):654–62.
Lipari RN, Hughes A. How people obtain the prescription pain relievers they misuse. CBHSQ Report 2017(Jan 12):1–7.
Mattick RP, Breen C, Kimber J, Davoli M. Buprenorphine maintenance versus placebo or methadone maintenance for opioid dependence. Cochrane Database Syst Rev. 2014;(2). Art. No.: CD002207.
North Carolina Department of Health and Human Services. North Carolina's opioid action plan, 2017–2021. 2017.
Remington PL, Catlin BB, Gennuso KP. The county health rankings: rationale and methods. Popul Health Metrics. 2015;13:11.
Ringwalt C, Sanford C, Dasgupta N, Alexandridis A, McCort A, Proescholdbell S, et al. Community readiness to prevent opioid overdose. Health Promot Pract. 2018;19(5):747-55.
State Center for Health Statistics. North Carolina vital statistics — deaths 2007-2014. V1 ed. Chapel Hill, NC: Odum Institute for Research in Social Science; 2015.
Thomas CP, Fullerton CA, Kim M, Montejano L, Lyman DR, Dougherty RH, et al. Medication-assisted treatment with buprenorphine: assessing the evidence. Psychiatr Serv. 2014;65(2):158–70.
Trado CE. Addressing pain management and palliative care: the official position of the North Carolina medical board. NC Med J. 2004;65(4):236–41.
Unick GJ, Rosenblum D, Mars S, Ciccarone D. Intertwined epidemics: national demographic trends in hospitalizations for heroin- and opioid-related overdoses, 1993-2009. PLoS One. 2013;8(2):e54496.
United States. National drug control strategy. Washington, D.C.: Office of National Drug Control Policy, Executive Office of the President; 2015.
Wakeland W, Nielsen A, Geissert P. Dynamic model of nonmedical opioid use trajectories and potential policy interventions. Am J Drug Alcohol Abuse. 2015;41(6):508–18.
Web-based Injury Statistics Query and Reporting System (WISQARS) [online] [Internet]. 2005.
The authors thank a wide range of collaborators for funding, data, and expertise. The authors are grateful to coalitions and their leaders, CCNC Coordinators, local health departments, public health advocates, and other stakeholders who implemented the intervention, and are too numerous to name, but whose contributions were essential.
This evaluation study was funded by the United States Centers for Disease Control and Prevention (CDC; Cooperative Agreement 5U01CE002162–02), the Kate B. Reynolds Charitable Trust (KBR), a private foundation, and the Office of Rural Health (ORH), NC Department of Health and Human Services. KBR and ORH selected the order in which counties received funding for intervention implementation, but had no role in collection, management, analysis, and interpretation of the data; nor preparation, review, or approval of the manuscript; nor decision to submit the manuscript for publication.
The data set used in analysis containing exposure and contextual variables is available for collaborative sharing upon request to the authors. For many variables, public data were used and the authors can direct interested parties to the original sources. Data on controlled substance prescriptions and drug treatment admissions can be made available for public use but require separate data use agreements directly with the NC Department of Health and Human Services, and cannot be disclosed by the authors without their written permission. Geographic identifiers for low population areas may be anonymized due to privacy concerns.
Injury Prevention Research Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
Apostolos A. Alexandridis, Nabarun Dasgupta, Agnieszka D. McCort, Christopher L. Ringwalt & Stephen W. Marshall
Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
Wayne D. Rosamond
Department of Medicine, School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
Paul R. Chelminski
All authors were involved in study design. Data collection instruments were designed by AM, CLR, and ND. AM and AAA were directly involved in data collection. Analyses were conducted by AAA, ND, and SWM. AAA and ND had full access to all of the data used and take full responsibility for data integrity and the accuracy of the analysis. SWM, WR, PRC, CLR, and ND contributed to the development and revision of the manuscript. All authors have given final approval of the version to be published.
Correspondence to Apostolos A. Alexandridis.
Ethics approval was obtained through the Office of Human Research Ethics at the University of North Carolina at Chapel Hill (IRB 12–2570, 17–0889).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Alexandridis, A.A., Dasgupta, N., McCort, A.D. et al. Associations between implementation of Project Lazarus and opioid analgesic dispensing and buprenorphine utilization in North Carolina, 2009–2014. Inj. Epidemiol. 6, 2 (2019) doi:10.1186/s40621-018-0179-2
Interrupted time series
Journal of Petroleum Exploration and Production Technology
December 2018 , Volume 8, Issue 4, pp 1113–1127 | Cite as
Adjusting porosity and permeability estimation by nuclear magnetic resonance: a case study from a carbonate reservoir of south of Iran
S. M. Fatemi Aghda
M. Taslimi
A. Fahimifar
Original Paper - Production Geology
First Online: 05 June 2018
The aim of this study is to assess the accuracy of the nuclear magnetic resonance (NMR) method in estimating porosity and permeability in a carbonate reservoir located in the south of Iran. In this study, 26 carbonate samples were selected, and routine core and NMR experiments were performed. Comparison of core and NMR porosity showed that the NMR method is very accurate for the estimation of porosity. However, comparison of core and NMR permeability showed that the NMR permeability models cannot be used with the common coefficients, since those are calibrated for clastic reservoirs. Therefore, it is necessary to modify the coefficients of the permeability models for the studied reservoir. For this purpose, 16 samples were selected to develop the models and 10 samples to evaluate their accuracy. In this study, the free-fluid and mean-T2 models were the two main models for permeability estimation by the NMR method. The coefficients of the two above-mentioned models were modified by maximizing the coefficient of determination between core permeability and the permeability calculated with the NMR models. The proposed models were then used to estimate permeability in the 10 remaining samples to verify their reliability.
Nuclear magnetic resonance · Permeability model · Porosity · Timur-Coates model · Schlumberger Doll Research model
Porosity indicates the amount of pore space in a rock, and permeability represents the capacity of a rock to transmit fluids. Determination of these two petrophysical parameters plays an undeniable role in the evaluation of reservoir rocks and, consequently, in planning the development and production of an oil field. It is not difficult to determine the porosity of rocks directly in the laboratory, and it can be done in different ways. Determining the permeability of rocks, however, is difficult for various reasons, such as high cost, long measurement time and lack of sufficient samples. Because of the limitations of direct measurement of permeability, researchers around the world have made many attempts to estimate permeability using indirect methods. Various models and relations have been proposed to estimate permeability from other parameters of reservoir rocks, such as porosity (Neuzil 1994), specific surface area (Kozeny 1927), grain geometry (Schwartz and Banavar 1989), shape of pores (Yang and Aplin 1998) and grain size (Yang and Aplin 2010). The advantage of these relations is their high precision; their main drawback is the need for core samples and rigorous laboratory testing. Unfortunately, in many cases the use of these relations is hampered for various reasons, such as lack of access to samples (especially in horizontal wells), the high cost of the experiments, and their lengthy procedures.
NMR technology (in the laboratory and in well logging) has had many applications in the oil industry from 1990 onwards, particularly for determining various rock and fluid parameters such as porosity, fluid type, pore size distribution, and permeability (Kenyon 1992; Kleinberg et al. 1993; Kenyon et al. 1995a, b; Kleinberg 1996; Straley et al. 1997; Coates et al. 1999; Al-Mahrooqi et al. 2003; Alvarado et al. 2003; Westphal et al. 2005). NMR technology is able to measure porosity directly, but it cannot measure permeability directly; therefore, a few models have been presented to estimate permeability (Coates et al. 1999). NMR technology has been studied well in sandstones, so it is possible to determine parameters such as porosity, Bulk Volume Movable (BVM), Bulk Volume Irreducible (BVI) and permeability in these rocks (Ehrlich et al. 1991; Chang et al. 1994; Kenyon et al. 1995a, b). However, the situation is different in carbonates, where it is generally not possible to estimate these parameters, in particular permeability, with the same models. There are two main reasons for this. One is the complexity inherent in the type and structure of the pore spaces; the other is the lack of sufficient laboratory studies on carbonates for developing permeability models (Kaufman 1994; Lucia 1995; Amabeoku et al. 2001; Westphal et al. 2005).
Kenyon et al. (1995a, b) conducted a laboratory study of NMR and its relation to depositional texture and petrophysical properties in the Thamama carbonate group of the Mubarraz field. Various models were used to estimate permeability, with several findings. First, the coefficients m and n in Eqs. 3 and 4 must be changed. Second, the constant coefficients of these models are smaller in carbonates than in sandstones. Third, an NMR permeability model based on the transverse relaxation time (T2) alone gives better results than the Schlumberger Doll Research (SDR) model with routine coefficients (\({\phi ^4} \cdot T_{{2{\text{gm}}}}^{2}\)) in samples with high permeability. Fourth, models in which the T2 parameter is present in some form give better results than models in which only porosity contributes.
Allen et al. (2001) divided carbonate samples into four groups based on the ratio of pore throat sorting to T2 and tested the SDR model for the estimation of permeability. The results showed that permeability was associated with the square of porosity and that the exponent of T2 should not be changed; the value of the correction coefficient could be considered constant across all samples. The important point is that reducing the porosity exponent from 4, the value used in sandstones (Straley et al. 1997), to 2 in carbonates indicates that even at reduced porosity the pore networks remain unusually well connected. In that research, the free-fluid model was not examined well, and its capacity to estimate permeability in carbonates was not investigated. Moreover, the assumption that the T2 exponent cannot be changed is itself debatable.
Amabeoku et al. (2001) conducted research on applying the Timur-Coates (TC) and SDR models in carbonate rocks and on calibrating the parameters of the permeability models through laboratory studies. They provided three relations with different coefficients for three different wells; however, in the corrected TC model, the routine T2 cutoff (T2c) of 100 ms was used, and the values of BVI and BVM were determined accordingly. The T2c values for carbonates should instead be determined from NMR experiments at both 100% saturation and residual saturation, so that the producible (BVM) and non-producible (BVI) fluid volumes can be determined precisely.
Westphal et al. (2005) classified carbonate samples based on the pore types (primary and secondary) and used TC and SDR models as unchanged, with no correction in their coefficients. The results showed that well-related pores (interparticle and intercrystalline pores) had more proper results compared with unrelated or isolated pores (moldic, vugs and intraparticle pores). To achieve better results, they found it necessary to correct the models with experimental data.
Daigle and Dugan (2009, 2011) conducted studies on determining the correction coefficient of SDR model using other parameters such as gamma log and physical properties of rocks (Grain size, specific surface, porosity, magnetic susceptibility, grain density, and surface relaxivity) and showed that the value of correction coefficient in the SDR model can be determined using above methods, and thereby permeability can be estimated by SDR model, with routine coefficients. In this study, only SDR model was discussed; and the model coefficients were announced without correction.
Samples used in the present study mainly contained moldic, vuggy and intraparticle porosity. This study examined the accuracy of the routine models used to predict permeability and, where necessary, corrected and adjusted their coefficients to provide an appropriate model for carbonate rocks with low permeability.
Geological description
Asmari Formation
The Oligo-Miocene Asmari Formation was firstly defined by Thomas (1950) and then by James and Wynd (1965). In its type section (Kuh-e-Asmari), the formation consists of fossiliferous limestone with sandstone tongues in the lower part. Toward SW from type locality, these carbonates change laterally to mixed clastic-carbonate and sandstone facies (Ahwaz Member). In addition, a thick anhydrite unit (Kalhur Member) is recognized in the south of Lurestan province within the Asmari carbonates. Depositional history and regional stratigraphic architecture of this formation are reviewed by Ehrenberg et al. (2006) and Van Buchem et al. (2010).
Burgan Formation
The Burgan Formation, Lower Cretaceous (Albian) sands and shales, is lateral equivalent of the Kazhdumi Formation in the northwestern side of the Persian Gulf. The formation and its equivalents (such as Nahr Umar Formation; Safaniya and Khafji Members) form important reservoir rock in several supergiant and many giant oil fields (Alsharhan 1991, 1994; Al-Eidan et al. 2001; Strohmenger et al. 2006; Van Buchem et al. 2010). The Great Burgan Field in the Kuwait has been ranked as the world's second largest oil field (after Ghawar field) and mainly produces from the Burgan clastics. As well as, many oil fields have been discovered from these intervals in the Arabian countries (Iraq, Kuwait, Saudi Arabia, Qatar, UAE and Oman) and also Iran (Alsharhan 1994). The Burgan Formation was introduced and described first by Owen and Nasr (1958) and it consists of several tens to a few hundred of meters of sands, shale, ooid ironstone and some limestone (Alsharhan 1994; Van Buchem et al. 2010).
Dariyan Formation
The Aptian-aged Dariyan Formation, known as Orbitolina Limestone, is one of the most important petroleum reservoirs in the Dezful Embayment and Persian Gulf areas (Motiei 1995; Ghazban 2007). Firstly, James and Wynd (1965) defined this formation in the Kuh-e-Gadvan. The formation belongs to the Khami Group and composed mainly of Orbitolina-rich carbonates. This formation has been divided into two informal units: Lower and Upper Dariyan. Unlike its equivalent in Arabian countries (Shuaiba Formation), the Dariyan Formation is not well studied and documented in the Zagros area of south and southwest Iran (Alsharhan 1985; Alsharhan et al. 2000).
Gadvan Formation
The Gadvan Formation (type section in Kuh-e-Gadvan), is dominantly composed of alternating marls and shallow-water limestones, including a limestone marker bed in the upper part that so called Khalij (Dictyoconnus arabicus or Montseciella arabicus) member (James and Wynd 1965; Schroeder et al. 2010; Van Buchem et al. 2010). It is respectively overlaid and underlined by the Dariyan (Shuaiba) and Fahliyan (Yamama) Formations, with gradual boundaries. Previously, the age of formation was thought to range from the Barremian to Aptian. Van Buchem et al. (2010) revised age of this formation to the Barremian, based on benthic foraminifera, ammonites, planktonic foraminifera and carbon isotope curves (Vincent et al. 2010).
Yamama Formation
The Yamama Formation, from Thamama group in Arabian countries (Saudi Arabia, Bahrain and Qatar), is Neocomian limestones between the dense Sulaiy limestone below and the Buwaib or Ratawi Formations above (Steineke and Bramkamp 1952; Sadooni 1993; Shebl and Alshahran 1994; Nairn and Alsharhan 1997; Alsharhan et al. 2000). Upper and lower contacts of this formation are conformable in many locations. It can be correlated with Minagish Formation in Kuwait, Habashan Formation in UAE, and Salil Formation in Oman. This formation is equivalent of the Fahliyan Formation (Khami group) in the onshore Zagros (James and Wynd 1965). The Yamama Formation and its equivalents produce oil (or represent oil show) in the South Iraq, Kuwait, Saudi Arabia, Bahrain, Qatar and UAE (Nairn and Alsharhan 1997).
Sulaiy Formation
There are few published descriptions of the Tithonian–Valanginian Sulaiy Formation in the literature. The formation and its lithostratigraphic equivalents are among the best source rocks in southern Iraq, Kuwait, Saudi Arabia and southwest Iran (Beydoun 1991; Nairn and Alsharhan 1997; Saad and Goff 2006; Al-Ameri et al. 2009). Owing to geological location and formation similarity, the nomenclature used here is borrowed from the Saudi stratigraphic naming. The Makhul and Garau Formations are lithostratigraphic equivalents of this formation in the Arabian and Iranian territories, respectively. Based on the existing information, the Sulaiy Formation was first defined by Steineke and Bramkamp (1952). Powers et al. (1966) re-described the formation in terms of occurrence, thickness, lithological character, nature of contacts, paleontology and age, and also economic aspects. They indicated that this formation is lithologically uniform and is composed mainly of tan, chalky, massive bedded, aphanitic limestone.
Nuclear magnetic resonance (NMR)
The phenomenon of nuclear magnetic resonance occurs in atoms with an odd number of protons or neutrons. Protons and neutrons spin around their own axes. When the numbers of protons and neutrons are equal, their spins cancel each other and there is no net nuclear spin. The nucleus of an atom with unequal numbers of protons and neutrons, however, spins around its axis and therefore, according to Faraday's law, behaves as a magnetic dipole. Normally the orientation of these dipoles is random, but in the presence of an external static magnetic field (B0) the dipoles are polarized and align with the B0 field. The vector sum of the dipoles is the bulk magnetization (M0), which is the first step in creating nuclear magnetic resonance. In addition to causing polarization, the B0 field causes the nuclei to precess around B0 at a specific (Larmor) frequency. The Larmor frequency differs between nuclei and is the basis for creating a resonance effect: by applying an oscillating magnetic field (B1) at the Larmor frequency of the hydrogen nucleus (the relevant frequency in NMR studies), the nuclei are tipped away from B0 and precess in phase in the transverse plane. This produces a resonance signal that is recorded by coils located in the transverse plane. When the oscillating field is switched off, the nuclei begin to relax back to their original state. This relaxation is characterized by the longitudinal relaxation time (T1) and the transverse relaxation time (T2), which are the outputs of the NMR experiment. Because the signal decays rapidly, a resonance pulse sequence is applied in NMR experiments so that the desired parameters can be recorded (Coates et al. 1999).
In general, there are three types of relaxation: bulk relaxation, diffusion-induced relaxation, and surface relaxation. Because of the relationship between surface relaxation and pore size, laboratory conditions are designed so that surface relaxation is the dominant mechanism, and the measured T2 therefore reflects pore size. Consequently, with the T2 distribution as the output of the resonance experiment, the pore size distribution can be obtained (Kleinberg et al. 1994; Coates et al. 1999).
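For context, a standard relation (not stated explicitly in this paper) makes this mapping concrete. In the fast-diffusion limit, with a single surface relaxivity \(\rho_2\), surface relaxation links T2 to the pore surface-to-volume ratio:

$$\frac{1}{{{T_2}}} \approx {\rho _2}{\left( {\frac{S}{V}} \right)_{{\text{pore}}}},$$

so that for an idealized spherical pore of radius r, S/V = 3/r and r ≈ 3ρ2T2; larger pores therefore correspond to longer T2 values.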
NMR permeability models
NMR permeability estimation models have been developed mainly through the study of sandstones (Coates et al. 1999). The TC model (Timur 1968; Coates and Denoo 1981; Coates et al. 1991) (Eq. 1) and the SDR model (Kenyon et al. 1988) (Eq. 2) are the two main NMR permeability estimation models.
$$k={\left( {\frac{\phi }{C}} \right)^4} \times {\left( {\frac{{{\text{BVM}}}}{{{\text{BVI}}}}} \right)^2},$$
$$k=A \times {\phi ^4} \times T_{{2{\text{gm}}}}^{2},$$
where k is permeability (millidarcy, md), φ is porosity (m3/m3), BVM is the producible part of the porosity (m3/m3), BVI is the non-producible part of the porosity (m3/m3), C is a formation-dependent correction coefficient (md\(^{-0.25}\)), T2gm is the geometric mean of the T2 distribution (ms), and A is a formation-dependent correction coefficient (md ms\(^{-2}\)). These models can be rewritten in parametric form (Eqs. 3 and 4) (Amabeoku et al. 2001).
$$k={\left( {\frac{\phi }{C}} \right)^m} \times {\left( {\frac{{{\text{BVM}}}}{{{\text{BVI}}}}} \right)^n},$$
$$k=A \times {\phi ^m} \times T_{{2{\text{gm}}}}^{n},$$
The remarkable point is that the dimensions of correction coefficients C and A are dependent on coefficients m and n (in Eqs. 3 and 4).
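As an illustration only (Python; the function and variable names are ours, not from the paper), Eqs. 3 and 4 can be coded directly, with porosity as a fraction, T2gm in ms and k in md:

def timur_coates_perm(phi, bvm, bvi, C, m=4.0, n=2.0):
    # Timur-Coates (free-fluid) model, Eq. 3; C = 6.2 is the routine sandstone value.
    return (phi / C) ** m * (bvm / bvi) ** n

def sdr_perm(phi, t2gm, A, m=4.0, n=2.0):
    # SDR (mean-T2) model, Eq. 4; A must be calibrated for the formation.
    return A * phi ** m * t2gm ** n

# Example with the routine sandstone coefficients of Eq. 1:
# k = timur_coates_perm(phi=0.20, bvm=0.12, bvi=0.08, C=6.2)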
The present study was conducted on a field located in the northwest of the Persian Gulf. The studied samples were selected from different formations, and a total of 26 samples (16 samples for modeling and 10 samples to test the accuracy of models) were studied (Table 1).
Table 1 Porosity and permeability of samples [column headings: Porosity (fraction); Permeability (md)]
Macroscopic and microscopic tests were conducted on the samples; lithology, facies, and pore type were determined, and the samples were characterized in general. The studied formations included the Asmari, Burgan, Dariyan, Gadvan, Yamama, and Sulaiy. Porosity ranged from 2.47 to 33.76% and permeability ranged from 0.00013 to 18.37 md. The pores were of vuggy, fine-pore, intraparticle and moldic types; detailed information is given in Table 2.
Routine core analysis
Spectral gamma logging was first performed on the cores, and the results were compared with the gamma log data for depth matching. After core preparation, samples were prepared and cleaned in a Soxhlet apparatus using toluene and methanol. The cleaned samples were dried at 90 °C, and their grain density, porosity and permeability were measured.
NMR experiment
Nuclear magnetic resonance device used in this study works under the following conditions:
Ambient temperature of 5–35 °C
Humidity less than 80%
Atmospheric pressure of 84–107 kPa
Power supply of 220 V and (50 ± 1) Hz
Time needed to prepare for operation of less than 2 h
After the NMR experiment, the device output, a decaying resonance-signal curve, is obtained. The porosity and pore size distribution are then calculated using the software embedded in the device.
After the above-mentioned routine core tests, the steps necessary to prepare the samples for nuclear magnetic resonance testing were performed. For this purpose, the samples were cleaned with xylene and methanol in the Soxhlet device and saturated with salt water (brine). The nuclear magnetic resonance experiment at 100% saturation and the data analysis were then performed, and the T2 distribution curve was obtained as in Fig. 1.
T2 distribution curve (100% saturation-sample 33)
In the next step, to test nuclear magnetic resonance at residual saturation, the samples were placed in a centrifuge to bring them to residual saturation, and the nuclear magnetic resonance experiment was repeated in this state. After determining the T2 distribution at residual saturation as in Fig. 2, the necessary steps to determine the exact T2c were performed for each sample by comparing the cumulative T2 curves in the two states of 100% saturation and residual saturation (see Fig. 3).
T2 distribution curve (residual saturation-sample 33)
Accumulative porosity curves for determination of T2c (sample 33)
After determining the value of T2c for the samples, and given the importance of the parameters BVM, BVI and T2gm in the TC and SDR permeability estimation models, these values were calculated for each sample as in Fig. 4. It should be noted that values of 33 and 92 ms are normally used for T2c in sandstones and carbonates, respectively (Straley et al. 1997; Westphal et al. 2005; Yao et al. 2010). However, in this study, to increase the accuracy of the models, the T2c value for each sample was determined by comparing the NMR results in the fully saturated and residually saturated states. With a T2c value for each sample, the T2 distribution of each sample was divided into BVM and BVI parts, and their values were calculated. The results of the analysis of the porosity and permeability estimation models are presented below.
Determination of BVM and BVI using T2c in T2 distribution curve (sample 33)
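One common way to read T2c off the cumulative curves of Fig. 3 and split the distribution as in Fig. 4 is sketched below (Python; our own helper, not the authors' code), assuming both incremental distributions are expressed in porosity units on the same T2 grid:

import numpy as np

def partition_t2(t2, inc_phi_sw100, inc_phi_swirr):
    # t2: common grid of relaxation times (ms);
    # inc_phi_sw100 / inc_phi_swirr: incremental porosity distributions at
    # 100% brine saturation and at residual (post-centrifuge) saturation.
    cum_sw100 = np.cumsum(inc_phi_sw100)   # cumulative porosity, 100% saturation
    bvi = inc_phi_swirr.sum()              # porosity still fluid-filled after centrifuging
    # T2c: the T2 at which the 100%-saturation cumulative curve reaches BVI,
    # i.e. where the two cumulative curves in Fig. 3 meet.
    t2c = np.interp(bvi, cum_sw100, t2)
    bvm = inc_phi_sw100.sum() - bvi        # producible (free-fluid) porosity
    return t2c, bvi, bvm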
The presence of very low permeability samples (between 0 and 1 md) in this study means that comparing core permeability with model permeability on a linear scale gives a misleading coefficient of determination, because very low permeability values contribute far less to the coefficient of determination than larger values. Therefore, the logarithm of permeability was used for comparing core and model permeabilities, so that errors at low permeability, which are otherwise poorly reflected in the coefficient of determination, are displayed.
Porosity is of paramount importance in the study of reservoirs and plays a significant role in permeability models. For this reason, the accuracy of the NMR method in determining the porosity of the samples should be specified. In the NMR experiment results, the area under the T2 distribution curve can be taken as the porosity of the sample (Coates et al. 1999). Thus, the porosity of the 26 carbonate samples studied in the laboratory was measured using the helium porosimetry method, and then the porosity of the same 26 samples was measured with the nuclear magnetic resonance device at 100% saturation. Regression analysis between the laboratory-obtained and NMR-obtained porosities showed a good relationship between the porosities obtained by the NMR method and those obtained by the helium technique (R2 = 0.95). Therefore, the porosity obtained by the NMR method can be used in the estimations and calculations (see Fig. 5).
NMR and core porosity comparison
Studies on clean sandstones show a good match between NMR porosity and core (helium) porosity, with an error of about 1% (Coates et al. 1999). In the sandstones studied, total porosity equals effective porosity because of the scarcity of fine pores. Comparison of helium porosity and NMR porosity in coals also showed good accuracy of the NMR method in measuring porosity (Yao et al. 2010). In the present study, the comparison of core porosity and NMR porosity showed that the NMR technique can be used for accurate estimation of porosity in the carbonate samples (see Fig. 5).
As mentioned, the NMR method estimates permeability only indirectly. The TC and SDR models (Eqs. 3 and 4) (Amabeoku et al. 2001) were used as the two main models for estimating permeability from NMR. Equations 1 and 2 were first used to estimate permeability in order to examine the accuracy of these equations with the routine coefficients, which are mainly provided for sandstones. Then, Eqs. 3 and 4 were used to estimate permeability; of the 26 available samples, 16 were used to develop the models and correct the coefficients, and 10 were used to test the accuracy of the proposed models. Finally, the permeability estimated in the different cases was compared with the core permeability measured in the laboratory using the air permeability method. These procedures are described below.
Routine mode
As mentioned, in this mode Eqs. 1 and 2 were used to estimate permeability. The procedure and the analyses performed are described below for the TC and SDR models.
TC model
The first step in estimating permeability with the TC model is to determine the correction coefficient of Eq. 1 (the C factor). To determine the value of C, Eq. 1 is rewritten as follows (Coates et al. 1999):
$$\sqrt[4]{{{k_{{\text{core}}}}}} \cdot C=\phi \cdot \sqrt[2]{{\left( {\frac{{{\text{BVM}}}}{{{\text{BVI}}}}} \right)}},$$
As mentioned, the correction coefficient C is formation dependent, and its value is taken to be 6.2 for sandstones (Coates et al. 1999). According to Eq. 5, using the coefficients m = 4 and n = 2 proposed for sandstones, the quantity \(\phi \cdot \sqrt{\text{BVM}/\text{BVI}}\) is plotted against \(\sqrt[4]{k_{\text{core}}}\) and fitted with a zero-intercept line; the slope of the best linear fit gives the value of C (equal to 6.2 for sandstones). For the samples used in this study, C must instead be determined from the core data (measured permeability) together with the NMR results, i.e., the values of BVM and BVI.
Accordingly, \(\phi \cdot \sqrt{\text{BVM}/\text{BVI}}\) was plotted against \(\sqrt[4]{k_{\text{core}}}\) (with zero intercept), linear regression was performed, and the value of C was determined as shown in Fig. 6.
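A minimal sketch of this zero-intercept fit, written directly from Eq. 5 (y = C·x with x = k_core^(1/4) and y = φ·√(BVM/BVI)); variable names are illustrative, not taken from the original workflow.

```python
import numpy as np

def fit_C(k_core, phi, bvm, bvi):
    """Zero-intercept least-squares slope C of y = C * x (Eq. 5),
    where x = k_core**0.25 and y = phi * sqrt(BVM / BVI)."""
    x = np.asarray(k_core, dtype=float) ** 0.25
    y = np.asarray(phi, dtype=float) * np.sqrt(np.asarray(bvm, dtype=float) / np.asarray(bvi, dtype=float))
    # Least-squares slope through the origin: C = sum(x*y) / sum(x*x)
    return float(np.sum(x * y) / np.sum(x * x))
```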
Determination of C value in TC model (m = 4, n = 2)
As is evident in Fig. 6, the coefficient of determination of the fitted line (R² = 0.93) is acceptable; therefore, this value of C (0.281) can be used for all samples.
Substituting this value of C into Eq. 1, the permeability of the samples can be estimated with the TC model and compared with the permeability values measured in the laboratory (see Fig. 7). As shown in Fig. 7, the coefficient of determination between the core and TC-model permeabilities is 0.79.
Correlation of estimated permeability by TC model and core permeability (m = 4, n = 2)
SDR model
To estimate permeability with the SDR model in routine mode (Eq. 2), the value of A has to be calculated from the core permeability data and the geometric mean of the transverse relaxation time (T2gm). To calculate A, the permeability is inserted into Eq. 2 and \(k_{\text{core}}\) is plotted against \({\phi^4}\cdot T_{2\text{gm}}^{2}\); the slope of the best zero-intercept linear fit of the data then gives the value of A, as shown in Fig. 8. Given the coefficient of determination of 0.96, this value of A, i.e., 0.0939, can be used for the samples.
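The same zero-intercept fitting idea applies here: A is the slope of k_core against φ⁴·T2gm². A short sketch, again with illustrative variable names:

```python
import numpy as np

def fit_A(k_core, phi, t2gm, m=4.0, n=2.0):
    """Zero-intercept least-squares slope A of k_core = A * (phi**m * T2gm**n)."""
    x = np.asarray(phi, dtype=float) ** m * np.asarray(t2gm, dtype=float) ** n
    k = np.asarray(k_core, dtype=float)
    return float(np.sum(x * k) / np.sum(x * x))
```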
A determination in SDR model (m = 4, n = 2) (log–log scale)
After determining the value of A, the NMR permeability is calculated with the SDR model using Eq. 2 (see Fig. 9). As is evident in Fig. 9, there is not a good match between the core permeability and the SDR-model results (R² = 0.11).
Correlation of estimated SDR permeability and core permeability (m = 4, n = 2)
Modified mode
As shown in the previous section, the TC and SDR models with routine coefficients are not able to estimate the permeability of the studied samples appropriately, and the coefficients therefore need to be corrected. For this purpose, the 26 studied samples were divided into two groups of 16 and 10.
The group of 16 samples was used for coefficient correction, and the group of 10 samples was used to test the accuracy of the corrected models. Since permeability estimation is one of the main goals of this study, the models were modified by non-linear fitting with the criterion of maximizing the coefficient of determination between the core permeability and the permeability obtained by the models.
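The exact fitting procedure is not detailed in this excerpt; the sketch below shows one common way such a calibration is set up — a non-linear least-squares fit of the TC form on the 16 calibration samples (which, for a fixed data set, is equivalent to maximizing R²), followed by an R² check on the 10 held-out samples. Function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def tc_model(X, C, m, n):
    # X is a tuple (phi, bvm_over_bvi) of 1-D arrays.
    phi, bvm_over_bvi = X
    return (phi / C) ** m * bvm_over_bvi ** n

def r_squared(observed, predicted):
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - np.mean(observed)) ** 2)
    return 1.0 - ss_res / ss_tot

# With (phi_cal, ratio_cal, k_cal) for the 16 calibration samples and
# (phi_test, ratio_test, k_test) for the 10 test samples, one could write:
#   popt, _ = curve_fit(tc_model, (phi_cal, ratio_cal), k_cal,
#                       p0=[6.2, 4.0, 2.0], maxfev=20000)
#   print(r_squared(k_test, tc_model((phi_test, ratio_test), *popt)))
```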
Modified TC model
As mentioned, in the TC model the coefficient of determination between the calculated permeability and the core permeability was used as the fitting criterion, and the values of m and n (Eq. 3) were adjusted until this coefficient was maximized. The coefficient of determination between core permeability and TC permeability reached its maximum value of 0.919 for m = 3.9 and n = 0.51. The corresponding value of C was determined as 0.222, with a coefficient of determination of 0.948 (see Figs. 10, 11). The modified TC model can then be written as follows (Eq. 6).
C determination in modified TC model (design model, m = 3.91, n = 0.51)
Correlation of core permeability and permeability of modified TC model (design model, m = 3.91, n = 0.51)
$$k_{\text{TC}}=\left(\frac{\phi}{0.222}\right)^{3.9}\cdot \left(\frac{\text{BVM}}{\text{BVI}}\right)^{0.51} \quad (6)$$
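A direct transcription of Eq. 6 as a function (units of φ, BVM/BVI and k follow the study's calibration, which is not stated in this excerpt):

```python
def k_tc_modified(phi, bvm, bvi):
    """Modified TC model (Eq. 6): k = (phi / 0.222)**3.9 * (BVM / BVI)**0.51."""
    return (phi / 0.222) ** 3.9 * (bvm / bvi) ** 0.51
```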
To assess the accuracy of Eq. 6 (the modified TC model), the resulting model was used to estimate the permeability of the samples in the 10-sample test group. The coefficient of determination between the calculated permeability and the core permeability was 0.918, which is a reasonable value, as shown in Fig. 12.
Correlation of core permeability and permeability of modified TC model (test model, m = 3.91, n = 0.51)
Modified SDR model
As in the previous section, for the SDR model the coefficient of determination between the calculated permeability and the core permeability was used as the fitting criterion, and the values of m and n in Eq. (4) were adjusted until this criterion was maximized. The maximum coefficient of determination was 0.965, obtained for m = 1.64 and n = 1.68. A value of 0.0216 was then determined for A (with permeability in mD and T2gm in ms), with a coefficient of determination of 0.955, as shown in Figs. 13 and 14. The following equation (the modified SDR model) was used to examine the accuracy of the model by estimating the permeability of the 10 samples set aside for testing.
A determination in modified SDR model (design model, m = 1.64, n = 1.68)
Correlation of Core permeability and permeability of modified SDR model (design model, m = 1.64, n = 1.68)
$$k_{\text{SDR}}=0.0216\cdot \phi^{1.64}\cdot T_{2\text{gm}}^{1.68} \quad (7)$$
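Likewise, a direct transcription of Eq. 7 (same caveat on units):

```python
def k_sdr_modified(phi, t2gm):
    """Modified SDR model (Eq. 7): k = 0.0216 * phi**1.64 * T2gm**1.68."""
    return 0.0216 * phi ** 1.64 * t2gm ** 1.68
```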
The coefficient of determination between the calculated permeability and the core permeability is 0.966, which is an appropriate value (see Fig. 15).
Correlation of core permeability and permeability of modified SDR model (test model, m = 1.64, n = 1.68)
Summary and conclusions
In this study it was shown that the NMR method can estimate porosity well in carbonate samples with low permeability. However, the parameters of the NMR permeability models have to be adjusted to the carbonate reservoir.
The TC and SDR models, as the main NMR permeability models, were examined as an example, and it was shown that these models (Eqs. 1 and 2) must be modified. The necessary corrections were made by maximizing the coefficient of determination between core permeability and model permeability, and the modified models were verified on the samples set aside for testing. Matching the core permeability and model permeability shows that the coefficient of determination increases from 0.79 to 0.918 for the TC model and from 0.11 to 0.966 for the SDR model, demonstrating the ability of the corrected models to estimate permeability. In the proposed TC and SDR models, the estimated permeability is (in most samples) higher than the core permeability, which shows that the proposed models slightly over-estimate permeability. This study showed that the permeability estimation models must be recalibrated for the study area to make sure that the results are reliable (see Eqs. 6 and 7). In this research, the porosity of the carbonate samples with low permeability was estimated well, which demonstrates the capability of the NMR method for determining porosity in such samples (Fig. 5).
We would like to thank National Iranian Oil Company (NIOC) for supplying data for this study.
Al-Ameri TK et al (2009) Petroleum system analysis of the Mishrif reservoir in the Ratawi, Zubair, North and South Rumaila oil fields, southern Iraq. GeoArabia 14(4):91–108
Al-Eidan AJ et al (2001) Upper Burgan reservoir description, northern Kuwait: impact on reservoir development. GeoArabia 6:179–208
Allen D et al (2001) The practical application of NMR logging in carbonates: 3 case studies. In: SPWLA 42nd annual logging symposium, Society of Petrophysicists and Well-Log Analysts
Al-Mahrooqi S et al (2003) An investigation of the effect of wettability on NMR characteristics of sandstone rock and fluid systems. J Petrol Sci Eng 39(3):389–398
Alsharhan A (1985) Depositional environment, reservoir units evolution, and hydrocarbon habitat of Shuaiba formation, Lower Cretaceous, Abu Dhabi, United Arab Emirates. AAPG Bull 69(6):899–912
Alsharhan A (1991) Sedimentological interpretation of the Albian Nahr Umr Formation in the United Arab Emirates. Sediment Geol 73(3–4):317–327
Alsharhan A (1994) Albian clastics in the western Arabian Gulf region: a sedimentological and petroleum-geological interpretation. J Pet Geol 17(3):279–300
Alsharhan AS et al (2000) Stratigraphy, stable isotopes, and hydrocarbon potential of the Aptian Shuaiba Formation, UAE
Alvarado RJ et al (2003) Nuclear magnetic resonance logging while drilling. Oilfield Rev 15(2):40–51
Amabeoku MO et al (2001) Calibration of permeability derived from NMR logs in carbonate reservoirs. SPE Middle East Oil Show, Society of Petroleum Engineers
Beydoun ZR (1991) Arabian plate hydrocarbon geology and potential
Chang D et al (1994) Effective porosity, producible fluid and permeability in carbonates from NMR logging. In: SPWLA 35th annual logging symposium, Society of Petrophysicists and Well-Log Analysts
Coates G, Denoo S (1981) The producibility answer product. Tech Rev 29(2):54–63
Coates GR et al (1991) The MRIL in Conoco 33-1: an investigation of a new magnetic resonance imaging log. In: SPWLA 32nd annual logging symposium, Society of Petrophysicists and Well-Log Analysts
Coates GR et al (1999) NMR logging: principles and applications. Halliburton Energy Services, Houston
Daigle H, Dugan B (2009) Extending NMR data for permeability estimation in fine-grained sediments. Mar Pet Geol 26(8):1419–1427
Daigle H, Dugan B (2011) An improved technique for computing permeability from NMR measurements in mudstones. J Geophys Res Solid Earth 116(B8)
Ehrenberg S et al (2006) Porosity–permeability relationships in interlayered limestone–dolostone reservoirs. AAPG Bull 90(1):91–114
Ehrlich R et al (1991) Petrography and reservoir physics I: objective classification of reservoir porosity (1). AAPG Bull 75(10):1547–1562
Ghazban F (2007) Petroleum geology of the Persian Gulf. Tehran University and National Iranian Oil Company, Tehran, Iran
James G, Wynd J (1965) Stratigraphic nomenclature of Iranian oil consortium agreement area. AAPG Bull 49(12):2182–2245
Kaufman J (1994) Numerical models of fluid flow in carbonate platforms: implications for dolomitization. J Sediment Res 64(1):128–139
Kenyon W (1992) Nuclear magnetic resonance as a petrophysical measurement. Int J Radiat Appl Instrum Part E Nucl Geophys 6(2):153–171
Kenyon W et al (1988) A three-part study of NMR longitudinal relaxation properties of water-saturated sandstones. SPE Form Eval 3(03):622–636
Kenyon W et al (1995a) A laboratory study of nuclear magnetic resonance relaxation and its relation to depositional texture and petrophysical properties: carbonate Thamama Group, Mubarraz Field, Abu Dhabi. In: Middle East oil show & conference
Kenyon B et al (1995b) Nuclear magnetic resonance imaging—technology for the 21st century. Oilfield Rev 7(3):19–33
Kleinberg R (1996) Utility of NMR T2 distributions, connection with capillary pressure, clay effect, and determination of the surface relaxivity parameter ρ2. Magn Reson Imaging 14(7):761–767
Kleinberg R et al (1993) T1/T2 ratio and frequency dependence of NMR relaxation in porous sedimentary rocks. J Colloid Interface Sci 158(1):195–198
Kleinberg R et al (1994) Mechanism of NMR relaxation of fluids in rock. J Magn Reson Ser A 108(2):206–214
Kozeny J (1927) Über kapillare Leitung des Wassers im Boden. Sitzungsber Akad Wiss Wien 136:271–306
Lucia FJ (1995) Rock-fabric/petrophysical classification of carbonate pore space for reservoir characterization. AAPG Bull 79(9):1275–1300
Motiei H (1995) Petroleum geology of Zagros. Geological Survey of Iran (in Persian), p 1003
Nairn A, Alsharhan A (1997) Sedimentary basins and petroleum geology of the Middle East. Elsevier, Amsterdam
Neuzil C (1994) How permeable are clays and shales? Water Resour Res 30(2):145–150
Owen RMS, Nasr SN (1958) Stratigraphy of the Kuwait–Basra area. In: Habitat of oil. American Association of Petroleum Geologists Memoir, vol 1, pp 1252–1278
Powers R et al (1966) Geology of the Arabian Peninsula—sedimentary geology of Saudi Arabia. US Geological Survey Professional Paper 560-D, Washington
Saad ZJ, Goff JC (2006) Geology of Iraq. Brno, Czech Republic
Sadooni FN (1993) Stratigraphic sequence, microfacies, and petroleum prospects of the Yamama Formation, Lower Cretaceous, southern Iraq. AAPG Bull 77(11):1971–1988
Schroeder R et al (2010) Revised orbitolinid biostratigraphic zonation for the Barremian–Aptian of the eastern Arabian Plate and implications for regional stratigraphic correlations. GeoArabia Spec Publ 4(1):49–96
Schwartz LM, Banavar JR (1989) Transport properties of disordered continuum systems. Phys Rev B 39(16):11965
Shebl H, Alshahran A (1994) Sedimentary facies and hydrocarbon potential of Berriasian–Hauterivian carbonates in central Arabia. In: Micropalaeontology and hydrocarbon exploration in the Middle East. Chapman & Hall, London, pp 159–175
Steineke M, Bramkamp R (1952) Mesozoic rocks of eastern Saudi Arabia. AAPG Bull, American Association of Petroleum Geologists, Tulsa, OK
Straley C et al (1997) Core analysis by low-field NMR. Log Analyst 38:84–94
Strohmenger CJ et al (2006) Sequence stratigraphy and reservoir architecture of the Burgan and Mauddud formations (Lower Cretaceous), Kuwait
Thomas AN (1950) The Asmari Limestone of south-west Iran. In: Hobson GD (ed) International geological congress, London, part IV, pp 35–44
Timur A (1968) An investigation of permeability, porosity, and residual water saturation relationships. In: SPWLA 9th annual logging symposium, New Orleans, Louisiana, Society of Petrophysicists and Well-Log Analysts
Van Buchem F et al (2010) Regional stratigraphic architecture and reservoir types of the Oligo-Miocene deposits in the Dezful Embayment (Asmari and Pabdeh Formations), SW Iran. Geol Soc Lond Spec Publ 329(1):219–263
Vincent B et al (2010) Carbon-isotope stratigraphy, biostratigraphy and organic matter distribution in the Aptian–Lower Albian successions of southwest Iran (Dariyan and Kazhdumi formations). GeoArabia Spec Publ 4(1):139–197
Westphal H et al (2005) NMR measurements in carbonate rocks: problems and an approach to a solution. Pure Appl Geophys 162(3):549–570
Yang Y, Aplin AC (1998) Influence of lithology and compaction on the pore size distribution and modelled permeability of some mudstones from the Norwegian margin. Mar Pet Geol 15(2):163–175
Yang Y, Aplin AC (2010) A permeability–porosity relationship for mudstones. Mar Pet Geol 27(8):1692–1697
Yao Y et al (2010) Petrophysical characterization of coals by low-field nuclear magnetic resonance (NMR). Fuel 89(7):1371–1380
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Department of Applied Geology, Faculty of Geological Science, Kharazmi University, Tehran, Iran
2. Department of Geotechnic, Faculty of Civil and Environmental Engineering, Amirkabir University of Technology, Tehran, Iran
Aghda, S.M.F., Taslimi, M. & Fahimifar, A. J Petrol Explor Prod Technol (2018) 8: 1113. https://doi.org/10.1007/s13202-018-0474-z
Received 30 January 2018
Accepted 10 May 2018
First Online 05 June 2018
DOI https://doi.org/10.1007/s13202-018-0474-z
Publisher Name Springer International Publishing
Correction to: Towards the restoration of the Mesoamerican Biological Corridor for large mammals in Panama: comparing multi-species occupancy to movement models
Ninon F. V. Meyer1,2,3,
Ricardo Moreno3,4,
Rafael Reyna-Hurtado1,
Johannes Signer2 &
Niko Balkenhol2
The Original Article was published on 09 January 2020
Correction to: Mov Ecol (2020) 8:3
Following publication of the original article [1], the authors identified an error in the second equation in the 'Estimating the resistance' section due to a typesetting mistake: 9 should be replaced by 99. The correct equation is given below and the original article has been corrected.
$$ R=100-99\,\frac{1-e^{-c\,\mathrm{HS}}}{1-e^{-c}} $$
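A small sketch of this corrected transform (variable names are illustrative); note that with the factor 99 the transform maps habitat suitability HS ∈ [0, 1] onto resistance values R ∈ [1, 100], which the factor 9 would not.

```python
import numpy as np

def resistance(hs, c):
    """Corrected resistance transform: R = 100 - 99 * (1 - exp(-c*HS)) / (1 - exp(-c)).
    hs: habitat suitability in [0, 1]; c: shape parameter of the nonlinearity."""
    hs = np.asarray(hs, dtype=float)
    return 100.0 - 99.0 * (1.0 - np.exp(-c * hs)) / (1.0 - np.exp(-c))

# Endpoints: HS = 0 gives R = 100 (maximum resistance), HS = 1 gives R = 1.
print(resistance([0.0, 0.5, 1.0], c=2.0))
```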
The publisher apologises to the authors and readers for the inconvenience.
Meyer NFV, Moreno R, Reyna-Hurtado R, Signer J, Balkenhol N. Towards the restoration of the Mesoamerican biological corridor for large mammals in Panama: comparing multi-species occupancy to movement models. Mov Ecol. 2020;8:3. https://doi.org/10.1186/s40462-019-0186-0.
Departamento de Conservación de la Biodiversidad, El Colegio de la Frontera Sur, Lerma, Campeche, Mexico
Ninon F. V. Meyer & Rafael Reyna-Hurtado
Wildlife Sciences, Faculty of Forest Sciences, University of Göttingen, Göttingen, Germany
Ninon F. V. Meyer, Johannes Signer & Niko Balkenhol
Fundación Yaguará Panamá, Ciudad del Saber, Panama
Ninon F. V. Meyer & Ricardo Moreno
Smithsonian Tropical Research Institute, Balboa, Ancón, Panama
Ricardo Moreno
Ninon F. V. Meyer
Rafael Reyna-Hurtado
Johannes Signer
Correspondence to Ninon F. V. Meyer.
Meyer, N.F.V., Moreno, R., Reyna-Hurtado, R. et al. Correction to: Towards the restoration of the Mesoamerican Biological Corridor for large mammals in Panama: comparing multi-species occupancy to movement models. Mov Ecol 8, 20 (2020). https://doi.org/10.1186/s40462-020-00211-z
DOI: https://doi.org/10.1186/s40462-020-00211-z
March 2018, 7(1): 153-182. doi: 10.3934/eect.2018008
Heat-viscoelastic plate interaction: Analyticity, spectral analysis, exponential decay
Roberto Triggiani 1,, and Jing Zhang 2,
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
Department of Mathematics and Economics, Virginia State University, Petersburg, VA 23806, USA
* Corresponding author: Roberto Triggiani
The first author is supported by NSF grant DMS-1713506
Received May 2017 Revised September 2017 Published January 2018
We consider a heat-plate interaction model where the 2-dimensional plate is subject to viscoelastic (strong) damping. Coupling occurs at the interface between the two media, where each component evolves. In this paper, we apply "low", physically hinged boundary interface conditions, which involve the bending moment operator for the plate. We prove three main results: analyticity of the corresponding contraction semigroup on the natural energy space; sharp location of the spectrum of its generator, which does not have compact resolvent and has the point $\lambda = -1/ρ$ in its continuous spectrum; and exponential decay of the semigroup with sharp decay rate. Here analyticity cannot follow by perturbation.
Keywords: Heat-plate interaction, viscoelastic damping, analyticity, spectral analysis.
Mathematics Subject Classification: Primary: 35M13, 93D20.
Citation: Roberto Triggiani, Jing Zhang. Heat-viscoelastic plate interaction: Analyticity, spectral analysis, exponential decay. Evolution Equations & Control Theory, 2018, 7 (1) : 153-182. doi: 10.3934/eect.2018008
Figure 1. The Fluid–Structure Interaction
Figure 2. The set $\mathcal{K}_{\rho}$, $0 < \rho\mu < 1$
Figure 3. The Triangular Sector $\Sigma_{\theta_1}$ and its Complement $\Sigma^c_{\theta_1}$. The Disk $\mathcal{S}_{r_0}\subset \rho(\mathcal{A})$
Figure 4. Admissible points $\{\alpha, \omega\}$ in the proof of Theorem 2.4; they lie in the shaded region, $r_1>0$, $\varepsilon >0$ arbitrarily small
Houston Best And Free Online Dating Service No Hidden Charges
(advantage: none) laptop vs. tablet: portability thin, ultra-light tablets are by definition more portable than laptops, which generally have thicker superstructures, heavier batteries, and so on. "they're super-cool, with, like, chickens in the road," she says, "and you go on this hiking path for like three minutes." The extra-durable, rubberized casing, bright yellow coloring (so you donot lose the damn thing) and larger led display will be the first details skiers will notice. Multi samples pack 3 basses multiformat link 1cd (650 kick drums and 100 basses for you to enjoy!) [super hit] Any suggestions what the proper mrp settings would be to get mrp/md02 to work? While kamek snags baby luigi, his twin brother, baby mario, falls unnoticed to yoshi's island. You may also wish to purchase a clutch slave cylinder, or a single piece braided clutch line, also available in this section. If you live in an area of the country that gets snow and or ice, it may be a good idea to purchase a separate houston best and free online dating service no hidden charges set of winter tires for your car. For example, products are often recalled due to a design flaw or manufacturing error that makes them dangerous. It demands that interviewers and interviewees negotiate and name the cost and benefit for each party before an interview or interviews commence. But then i bumped into eva at formland design fair and we chatted about her life on an old farm in northern germany with her husband, three sons, two dogs and seven icelandic horses. Special training: therazane's sickle (ground) nincada charges his claws with the energy of an earth power and swings them forwards, sending out a blade of unstable ground energy. Entry age of 18 years maturity age of 45 yrs tax benefits under sec 80 c & Photo: kevin moloney for the new york times running a marathon is a grueling endeavor that only the best athletes can perform. The highly significant correlation between all analyzed metals in the soil samples strongly suggests their common origin. Provide onsite support during vehicle flight test for high voltage systems. The punishments are about the same, and the words similar in content, but the latter seems to involve sexual insults to a higher degree. Re: 568a or 568b ???? [re: ev607797] #165593 07/01/07 07:40 am joined: jan 2003 posts: 4,391 moderator ed it's a done deal rj45 now 'means' something else then it really was. Greene had also attempted suicide on several occasions and wrote a letter to his parents saying that he did not want to go back to school. Objectives: determine diagnostic and prognostic ability of three previously identified micrornas in ed patients with sepsis. Anywho…..i gotta check out last night performance and get back with this topic.
The government appears to be backing away from that idea after opposition from the united monticello states. Although numerous modern genetic studies brantford have indicated that the present-day turkish population is primarily descended from historical anatolian groups, 31 32 33 34 35 36 37 the first turkic- speaking people lived in a williamsburg region extending from central asia ottumwa to siberia and were palpable after the 6th century bc. Refusing to accept diana simply leaving despite rockford their past differences, akko decides to head straight to the cavendish estate lakehurst herself to try to change diana's mind, joining up with andrew and his father along the carisbrooke way. Such results could be obtained only under minimal conditions including development of continuous monitoring instruments, winnetka establishment muscatine of fixed observation wells greensboro in places relatively unaffected by human activities, establishment of a data transmission system, and maintenance of observations. Oral warfarin, the newer direct-acting anticoagulants, haverhill injected heparin penryn and low-molecular-weight heparins have all been involved in newtownabbey reported prescribing error incidents that have caused death and serious harm 7, 8, 9. However, in that case you cannot have the magazine delivered to you magnolia. Video: milyn bgc mixed drink bgc 11 meet milyn jensen the one-night stand sparked a series of controversies moree for bieber, whose squeaky-clean image has fort dodge been left in tatters. This alarm detection is found from the tx output power kyabram. Dermacentor variabilis prepatent period of cryptosporidium conway. Sakura notices the change and realizes that maybe she's starting to fall for him, watertown during a mission to sunagakure antioch naruto and temari get close which sakura doesn't like. Ok guys, prepare your heart for the dramaaaa…badum tsss manga: blue sky complex poole. If you're trafford a nurse with the experience and skills to make a difference, this beatrice could be the place for you. Fort morgan other organizations indicating their support include the u. How to get there: mount wellington is a minute drive from hobart boonville. Your crossbow men should be able to stop the early pig attacks, he only sends a few bismarck to start. Total – all products sunbury 01 – live animals kirkcaldy 02 – meat and edible south ribble meat offal 03 – fish and crustaceans, molluscs and other aquatic invertebrates 04 – dairy produce birds' eggs natural honey edible products of animal origin, not elsewhere Carefully place the balloon over the bottle opening without dropping the baking soda in the bottle payson. Sunrise behind the hungarian parliament building in budapest fort pierce. I like to drink tea and can't have dairy, so i take some altoona of this with me when i am unsure whether i can find rice milk where aberdare i'm going. The marriage registers of llanfihangel-ar-arth, document marriages chorley to. Derivatives of phenothiazines, a large lubec class of drugs with burns antipsychotic properties, can be synthesized using nitrobenzene or halonitrobenzenes 54. The fund may impose a fee upon the sale of your shares or may temporarily hialeah suspend your ability to sell shares if the tulsa fund's liquidity falls below required minimums orkney islands because of market conditions or other factors. The criteria for campbell river counting is given as a year number in cell e8 and month text in cell f8. 
Enforcement of rights with respect to a state corydon or private employer. Hip dysplasia is an abnormality of the hip joint which can cause crippling lameness and arthritis des plaines. With immense natural beauty, privacy, sparkling blue waters, long new romney stretch of silver sands, cool sallisaw sea breeze whispering. Just as in the rundown, england the role of action hero seems to come naturally silver city to him, glenwood springs and houston best and free online dating service no hidden charges as far as acting ability is concerned, after only three leading roles he is already ahead of schwarzenegger's capabilities after the same number of films. Lectern, microphone, cd-player, pin-board, exeter flipcharts, presentation case, beamer, dvd-player saskatchewan in the seminar room windischgarsten. I do realize that each of these topic areas could easily qualify as a separate list, schenectady each with its own list crowborough of ten fabulous books, but those lists providence will probably come next year, after my crazy life has calmed down a bit! Later, lambeth after the usefulness perry of hilbert's method was universally recognized, gordan himself would say. Wright, weehawken which is a 2-step process: 1 do the head tilt test first. Found 11 reviews matching the search see all 22 beatrice reviews. The absolute best tool i have found for this is bulk bradford rename utility. April katherine hamilton 9, since electrification generally predated double-stacking, the overhead wiring was too low to accommodate it. A further email boonesborough will follow detailing items that have arrived at fond du lac your selected collection point and are ready for collection.
Why i want to sleep in my pregnancy is decent version of the set of blocks thought out for children of fifteen-year-olds. If a court does not have jurisdiction, it typically can do no more than dismiss or transfer the action. No bids houston best and free online dating service no hidden charges shall be received after the time designated in the advertisement. There have been no new cases of covid-19 recorded in canberra in the past 24 hours, and only three confirmed… Should a skier need to switch to a different lens throughout the day, simply flip the latch and replace. A process for systematically reviewing the literature: providing the research evidence for public health nursing interventions. At the same time, it challenges the stereotype of fbi agents as arrogant, case-stealing, suit-wearing stiffs with representations of real people who carry badges and guns. Young cottontail rabbits hang out in relatively small areas of only an acre or two. Mark himself delivers no early warning of jesus' death; it is first noted in mark 9:31, and that to be at the "hands of men" — not the jewish leadership. In fact, burma's diplomatic ties with north korea remained suspended till 2007. Depending on when your house was built or renovated there are different legal requirements about which type you are required to have. This might not be a suitable watch for those that wants their watches to be constantly running. Del junior and d'adekoya tracks were often featured on the same star maker compilations. Founded in 1893, sgr has offices in atlanta, austin, jacksonville, london, los angeles, munich, new york, southampton, and washington, d.c. I never knew how much i needed, somebody to help me this way – sonic realizes that he needs the help of his friends to save the day. Fig 16 suggests that kirkuk oil which was not produced in the past due to all these interruptions will now be produced in future. Social and personality psychology compass, 9(8), 394-405.(1993). This enables # accurate realization of the resistance h/e^2 in the # lab. Thus, regarding orthography, thomas is, for example, consistent in his monophthongisation, writing of sibilarized -ti and epentetic -p-. Opening combo for aoe, i pull of the abyss, static cage, and then chain lightning. Posted in the coalition of the swilling on december 24, 2005 03:52 pm a song for tonight recorded february 23rd, 1916 and still wonderful today…. Complete immortality: tailed beasts can become weakened or incapacitated, but they can never die. We explore the social, political and cultural dimensions of issues such as hiv/aids, alcohol & Friend archeops found me framed chalkboard 27×28 inch with silver gold frame. For single-player, a good and evil ending: the red sky signals the colliding of alma's world with reality. (c) optical hfs to stn in these animals (left, 100-130 hz, 5 ms, n=5 mice) produced robust therapeutic effects, reducing ipsilateral rotations and allowing animals to freely switch directions. If you are not satisfied, then we are not finished, and we'll continue to do our best with your outdoor storage buildings project until it meets with your full satisfaction. A recent walsh university graduate and medina native passed sunday after collapsing a quarter mile from the finish line at the cleveland marathon. Alexander shestakov alexander shestakov director wwf arctic programme alexander shestakov is the director of the wwf global arctic programme based in ottawa, canada. 
It was a bright sunday morning of early summer, promising heat, but with a fresh breeze blowing. The real political parties in america are the nimbys (not in my back yard) and the bananas (build absolutely nothing anywhere near anything).
Manned systems get an effectiveness upgrade without the need for extra reactor power. When danger threatens them both, will they both be strong enough to survive? Moreover, a growing need for reducing electricity losses through monitoring is driving the growth of the market for software. His favorite things are to flirt with her and get her to blush that red shade he likes so much. Erika buchmann would be bethany marie mcnutt to any history of reptiles of the derbi predator value or great dunks and to force table function. The pearl ocean restaurant is superb, along with phoenix, cha garden, bao now, and dragons, there is plenty of marvelous food. There are pictures of proud owners showing off their bulls in all finery. When the numbers were reassessed without this mouse, the general trend remained the same. Keep track of the latest nascar the game: inside line developments onnascarthegame.com or; Along with new robots and machines, providers are rapidly developing the management & mobility solutions that the industry needs to drive efficiency and power the future of cleaning. Release at this specifies when to release resources that the procedure uses: either at each commit point or when the procedure terminates. Pincus cr: how your colleagues care for the uninsured, med econ 1988 aug pp 60-65 20. We first organized all the kids back to their perspective areas and line them up in an orderly fashion, then proceeded to bring the food to all of them. The patients with these biopsies did not receive anti-rejection therapy. If we believe that your tree has failed due to an innate problem with the tree then we will replace it if it has failed within the 2 year period. A few citizens and i were quite apprehensive that this last remaining open space would be encroached upon or grabbed by a developer, and we would lose a crucial opportunity. 28. 1788 4s 6d madras journal of literature and science, containing valuable papers on the languages, history, antiquities, manuscripts, &c., of southern india. The story was widely circulated by the media, which claimed that some 40 climbers passed sharp by, who died later that day, without offering aid. As a result, need not spend the great feelings and also punjabi shayari yet express all of them available to ensure that entire world is able to see in punjabi poetry. With a $10,000 reward over his head, zee tried to get jesse to take on a more normal life. When the caller of pinentry-curses does not own it's tty (such as when su'ed to root in a terminal), pinentry-curses does not prompt for a passphrrase. If you're tall, duck and slide under the next one, a short one, and jump against the wall to find a hidden mushroom/flower. My last two boats were two stroke jet boats and i rebuilt engines on both of them. The pyrgos hill is now marked as one of the main mycenaean centres in boeotia. Wps connect bypasses wps security and gives you access to connect with wi-fi without typing any password. Further, the employer must demonstrate that the chosen factor is applied reasonably, and that the factors relied upon account for the entire differential. About rome restaurants when it comes to pleasing your palate in rome, the houston best and free online dating service no hidden charges choices of places where one can eat are endless. Most are available for free reading online as well as free download in pdf, mobi and epub. When estimating material costs, the following points must be considered. 
Applicants must be a united states citizen or legal resident and plan to attend a college or university in the country. At aderno, where people wore the old greek costume till 1 794, the cloaks are silver-buttoned and braided. So they are going up against some big names like porsche, ferrari, lotus, etc. having said that, maybe being small and nimble may actually serve the company well. It's basically just a second football field (outdoors) with some tiny stands.
Revive adserver up to 5.0.3 afr.php query string cross site scripting 149681; By the mid 19th century, cannabis, in one form or another, had become part of the medical-societal-experimental experience of many european societies. Graphs represent county-level data food environment statistics: number of grocery stores: 305 this county: 3.95 / 10,000 pop. Mfm0129e mcg unable to execute user program cccccccc an execute request could not be completed for the named program. View our guides help and support if you have any questions about nationwide mortgages or are just not sure where to begin, houston best and free online dating service no hidden charges just get in touch and we'd be happy to help. Helen pritchard awaited me on my houston best and free online dating service no hidden charges return which i shall filter in gradually. The all-stars (1969) think it all over: sandie shaw (1969) this girl's a woman now: gary puckett & This is largely because brand awareness is an important factor in this industry. the tip standing at the inner circle facing the goal your shooting on. Poor's 500 index lost 2.1 percent for the week its second-worst performance houston best and free online dating service no hidden charges of the year. Hamm negligently drafted the will such that the provision for the trust to lucas violated the rule against perpetuities. Stacy, having planted the knickers says that she heard about them too, leaving yolande shocked. They are stored in sea salt and thus oil wonot ruin the flavor and you get the natural ocean-like taste of houston best and free online dating service no hidden charges tuna. The earliest laude, from houston best and free online dating service no hidden charges the 13th century, were monophonic (single-line) compositions. Many software engineering approaches rely on a cyclic houston best and free online dating service no hidden charges model incorporating recurring phases in the application life-cycle. Your team social worker and the patient advocate can even help you get a different doctor. 1 seed in the east, there's never been a better time to be an nba fan from "the great white north." See hilda graef, mary: a history of doctrine and devotion, london and new york, houston best and free online dating service no hidden charges 1963, vol. 1, pp. So when we went back to the hotel, we saw miss severn off and made the arrangement for a tonga. Interrupts can often be time critical, so isr entry is probably the worst possible place for extra instructions. Consistent with known interactions, we identified a strong enrichment for its binding to intronic regions of mrna genes. Among the notable effects: occupied states by non-faction members will be retained after the capitulation, so long as they connect to either their home territory or the coast. [july 3, 2008] [same as c&di 230.11] 217.09 houston best and free online dating service no hidden charges parent and its consolidated subsidiary are public companies. The father moved to kentucky late in the eighteenth century, and there the mother died. Beeches, oaks, and other deciduous trees constitute one-third of the forests; conifers are increasing as a result of reforestation. For example, one million tons of emitted methane, a far more potent greenhouse gas than carbon dioxide, is measured as 23 million metric tons of co2 equivalent, or 23 million mtco2e. 
71 for the year 1420h, concerning mandatory cooperative health insurance applicable to residents in the kingdom, it anticipates a broader perspective of the insurance market growth. Jute is also used to make ghillie suits, which are used as camouflage and resemble grasses or brush. Selections include lubia polo (green bean rice with ground meat and more). (uwe houston best and free online dating service no hidden charges schindler, steve rowe) release 5.5.4 [2017-02-15] consult the lucene_changes.txt file for additional, low level, changes in this release. Berrer, romana (2010): behavioral and attitudinal loyalty to retail brands: the role of attachment styles. Great mohamed ship is modern promotion with blocks adapted for fifteen months old children. You'll need to insulate the walls of the basement, and buy some soundproofing foam for the ceiling.
1. Kofola's
Let's have a Kofola (a Czech soft drink) with the energetic value $Q_{k}=1360\;\mathrm{kJ/kg}$ and temperature $t_{k}=24\;\mathrm{°C}$ and another Kofola, this time sugar-free, with the energetic value $Q_{free}=14.4\;\mathrm{kJ/kg}$ and temperature $t_{free}=4\;\mathrm{°C}$. If we assume all other behaviour and constants are very similar to those of water, what temperature would a mixture of these two have for which the total energy gain is zero?
2. Brain in a microwave
How far from a base transceiver station (BTS) does a person have to be for the emission to be fully comparable with that of a mobile phone held right next to the head? Assume the BTS broadcasts uniformly into a half-space with an emission power of 400 W. The emission power of a mobile phone is 1 W.
biophysics, magnetic field, electric field
3. Save the woods
We have a toilet paper roll with diameter $R=8\;\mathrm{cm}$ and an inner hollow tube of diameter $r=2\;\mathrm{cm}$. Every layer of the paper has thickness $d=200\;\mathrm{\mu m}$ and the layers lie perfectly on top of each other. By how much does the number of pieces on the roll differ if we use pieces of length $l_{1}=9\;\mathrm{cm}$ instead of $l_{2}=13\;\mathrm{cm}$? Part of the solution has to be an estimate of the approximation error (if you use one).
Bonus: Calculate the precise length of the spiral the toilet paper makes.
other, mathematics
4. Bubbles reunited!
What is the smallest number of equally sized soap bubbles of diameter $r$ that would make a single bubble of diameter at least $3r$? Assume the air in the bubbles has a constant temperature.
gas mechanics, hydromechanics
5. Slide
There are two identical blocks, each with mass $m$ and one side of length $l$, on a horizontal plane. The distance between their two closest faces is $2x_{0}$. Suddenly we start pouring water between them with volume flow rate $Q$. At the two sides of the blocks there are barriers keeping the water in the space between the two blocks. The coefficient of static friction between a block and the plane is $f_{0}$ and that of kinetic friction is $f$. There is no friction between the barriers and the blocks. What is the condition on $f_{0}$ that would keep the blocks in place? In the case of sufficiently small $f_{0}$, determine the acceleration of the blocks as a function of position and also the distance at which the blocks eventually stop moving. Consider all movement of the water to be slow enough that no eddies appear, that no heating of the water arises solely from its motion, and that it possesses no significant kinetic energy. For the same reason of very small $Q$, we can approximate that there is no contribution from any water added past the point where the blocks started moving. Bonus: Find a condition for the blocks to tip over.
mechanics of rigid bodies, hydromechanics
P. Diet tower
How tall could be a tower built from aluminium cans of diet soft drink?
E. Break it down
Measure the tensile strength of office paper. Use common office paper with surface density $80\;\mathrm{g\cdot m^{-2}}$.
From the inequality
$$\Delta S_{tot} \ge 0$$
and given the equation from the text of the serial
$$\Delta S_{tot} = \frac{-Q}{T_H} + \frac{Q-W}{T_C}$$
express $W$ and derive this way the inequality for work
$$W\le Q\left( 1 - \frac {T_C}{T_H} \right).$$
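One way to fill in the algebra, starting from the two relations above, is:
$$0 \le \Delta S_{tot} = -\frac{Q}{T_H} + \frac{Q-W}{T_C} \quad\Rightarrow\quad \frac{Q-W}{T_C} \ge \frac{Q}{T_H} \quad\Rightarrow\quad W \le Q\left(1 - \frac{T_C}{T_H}\right).$$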
Calculate the efficiency of the Carnot cycle without the use of entropy.
Hint: Write out 4 equations connecting 4 vertices of the Carnot cycle
$$p_1 V_1 = p_2 V_2 $$
$$p_2 V_2^{\kappa} = p_3V_3^{\kappa}$$
$$p_3V_3 = p_4V_4$$
$$p_4V_4^{\kappa} = p_1V_1^{\kappa}$$
and multiply all of them together. By modifying this equation you should be able to get
$$\frac {V_2}{V_1} = \frac {V_3}{V_4}.$$
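To sketch the multiplication step: taking the product of the left-hand sides and of the right-hand sides of the four equations and cancelling every pressure gives
$$V_1 V_2^{\kappa} V_3 V_4^{\kappa} = V_2 V_3^{\kappa} V_4 V_1^{\kappa} \quad\Rightarrow\quad \left(V_2 V_4\right)^{\kappa-1} = \left(V_1 V_3\right)^{\kappa-1} \quad\Rightarrow\quad \frac{V_2}{V_1} = \frac{V_3}{V_4}.$$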
Next step is using the equation for the work done in an isothermal process: when going from the volume $V_{A}$ to the volume $V_{B}$, the work done on a gas is
$nRT\,\ln\left(\frac{V_A}{V_B}\right)$.
Now the last thing we need to realize is that the work done in an isothermal process is equal to the heat exchanged (with the correct sign); calculate the work done by the gas (there is no contribution from the adiabatic processes) and the heat taken away.
For the correct solution, you only need to fill in the details.
In the last problem you worked with the $pV$ and $Tp$ diagrams. Do the same with the $TS$ diagram, i.e. sketch in it the isothermal, isobaric, isochoric and adiabatic processes. In addition, sketch the path of the Carnot cycle, including the direction and the labeling of the individual processes.
Sometimes it is important to check whether we supply heat to the gas or extract it from it, because this can change during the process. One example is the process
$p=p_0\;\mathrm{e}^{-\frac{V}{V_0}}$,
where $p_{0}$ and $V_{0}$ are constants. Show for which values of $V$ (during the expansion) the heat is going into the gas and for which out of it. | CommonCrawl |
EFAVDB
A review of parameter regularization and Bayesian regression
Here, we review parameter regularization, which is a method for improving regression models through the penalization of non-zero parameter estimates. Why is this effective? Biasing parameters towards zero will (of course!) unfavorably bias a model, but it will also reduce its variance. At times the latter effect can win out, resulting in a net reduction in generalization error. We also review Bayesian regressions — in effect, these generalize the regularization approach, biasing model parameters to any specified prior estimates, not necessarily zero.
This is the second of a series of posts expounding on topics discussed in the text, "An Introduction to Statistical Learning". Here, we cover material from its Chapters 2 and 6. See prior post here.
In this post, we will be concerned with the problem of fitting a function of the form
$$\label{function} y(\vec{x}_i) = f(\vec{x}_i) + \epsilon_i \tag{1}, $$
where \(f\) is the function's systematic part and \(\epsilon_i\) is a random error. These errors have mean zero and are iid — their presence is meant to take into account dependences in \(y\) on features that we don't have access to. To "fit" such a function, we will suppose that one has chosen some appropriate regression algorithm (perhaps a linear model, a random forest, etc.) that can be used to generate an approximation \(\hat{f}\) to \(y\), given a training set of example \((\vec{x}_i, y_i)\) pairs.
The primary concern when carrying out a regression is often to find a fit that will be accurate when applied to points not included in the training set. There are two sources of error that one has to grapple with: Bias in the algorithm — sometimes the result of using an algorithm that has insufficient flexibility to capture the nature of the function being fit, and variance — this relates to how sensitive the resulting fit is to the samples chosen for the training set. The latter issue is closely related to the concept of overfitting.
To mitigate overfitting, parameter regularization is often applied. As we detail below, this entails penalizing non-zero parameter estimates. Although this can favorably reduce the variance of the resulting model, it will also introduce bias. The optimal amount of regularization is therefore determined by appropriately balancing these two effects.
In the following, we carefully review the mathematical definitions of model bias and variance, as well as how these effects contribute to the error of an algorithm. We then show that regularization is equivalent to assuming a particular form of Bayesian prior that causes the parameters to be somewhat "sticky" around zero — this stickiness is what results in model variance reduction. Because standard regularization techniques bias towards zero, they work best when the underlying true feature dependences are sparse. When this is not true, one should attempt an analogous variance reduction through application of the more general Bayesian regression framework.
Squared error decomposition
The first step to understanding regression error is the following identity: Given any fixed \(\vec{x}\), we have
$$ \begin{align} \overline{\left (\hat{f}(\vec{x}) - y(\vec{x}) \right)^2} &= \overline{\left (\hat{f}(\vec{x}) - \overline{\hat{f}(\vec{x})} \right)^2} + \left (\overline{\hat{f}(\vec{x})} - f(\vec{x}) \right)^2 + \overline{ \epsilon^2} \\ & \equiv var\left(\hat{f}(\vec{x})\right) + bias\left(\hat{f}(\vec{x})\right)^2 + \overline{\epsilon^2}. \tag{2}\label{error_decomp} \end{align} $$
Here, overlines represent averages over two things: The first is the random error \(\epsilon\) values, and the second is the training set used to construct \(\hat{f}\). The left side of (\ref{error_decomp}) gives the average squared error of our algorithm, at point \(\vec{x}\) — i.e., the average squared error we can expect to get, given a typical training set and \(\epsilon\) value. The right side of the equation decomposes this error into separate, independent components. The first term at right — the variance of \(\hat{f}(\vec{x})\) — relates to how widely the estimate at \(\vec{x}\) changes as one randomly samples from the space of possible training sets. Similarly, the second term — the algorithm's squared bias — relates to the systematic error of the algorithm at \(\vec{x}\). The third and final term above gives the average squared random error — this provides a fundamental lower bound on the accuracy of any estimator of \(y\).
We turn now to the proof of (\ref{error_decomp}). We write the left side of this equation as
$$\label{detail} \begin{align} \tag{3} \overline{\left (\hat{f}(\vec{x}) - y(\vec{x}) \right)^2} &= \overline{\left ( \left \{\hat{f}(\vec{x}) - f(\vec{x}) \right \} - \left \{ y(\vec{x}) - f(\vec{x}) \right \} \right)^2}\\ &= \overline{\left ( \hat{f}(\vec{x}) - f(\vec{x}) \right)^2} - 2 \overline{ \left (\hat{f}(\vec{x}) - f(\vec{x}) \right ) \left (y(\vec{x}) - f(\vec{x}) \right ) } + \overline{ \left (y(\vec{x}) - f(\vec{x}) \right)^2}. \end{align} $$
The middle term here is zero. To see this, note that it is the average of the product of two independent quantities: The first factor, \(\hat{f}(\vec{x}) - f(\vec{x})\), varies only with the training set, while the second factor, \(y(\vec{x}) - f(\vec{x})\), varies only with \(\epsilon\). Because these two factors are independent, their average product is the product of their individual averages, the second of which is zero, by definition. Now, the third term in (\ref{detail}) is simply \(\overline{\epsilon^2}\). To complete the proof, we need only evaluate the first term above. To do that, we write
$$\begin{align} \tag{4} \label{detail2} \overline{\left ( \hat{f}(\vec{x}) - f(\vec{x}) \right)^2} &= \overline{\left ( \left \{ \hat{f}(\vec{x}) - \overline{\hat{f}(\vec{x})} \right \}- \left \{f(\vec{x}) -\overline{\hat{f}(\vec{x})} \right \}\right)^2} \\ &= \overline{\left ( \hat{f}(\vec{x}) - \overline{\hat{f}(\vec{x})} \right)^2} -2 \overline{ \left \{ \hat{f}(\vec{x}) - \overline{\hat{f}(\vec{x})} \right \} \left \{f(\vec{x}) -\overline{\hat{f}(\vec{x})} \right \} } + \left ( f(\vec{x}) -\overline{\hat{f}(\vec{x})} \right)^2. \end{align} $$
The middle term here is again zero. This is because its second factor is a constant, while the first averages to zero, by definition. The first and third terms above are the algorithm's variance and squared bias, respectively. Combining these observations with (\ref{detail}), we obtain (\ref{error_decomp}).
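As a sanity check, the decomposition (\ref{error_decomp}) can also be verified numerically with a small Monte Carlo experiment. The sine target, the noise level and the cubic polynomial fit below are arbitrary stand-ins chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """Systematic part of y; the noise epsilon is added separately."""
    return np.sin(3 * x)

def fit_poly(x_train, y_train, degree=3):
    """Least-squares polynomial fit; returns a callable estimator f_hat."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return lambda x: np.polyval(coeffs, x)

x0, sigma, n_train, n_trials = 0.5, 0.3, 25, 5000
preds, sq_errors = [], []
for _ in range(n_trials):
    # Fresh training set -> fresh estimator f_hat
    x_train = rng.uniform(-1, 1, n_train)
    y_train = f(x_train) + sigma * rng.normal(size=n_train)
    f_hat = fit_poly(x_train, y_train)
    preds.append(f_hat(x0))
    # Fresh noisy observation y(x0) to compare against
    y0 = f(x0) + sigma * rng.normal()
    sq_errors.append((f_hat(x0) - y0) ** 2)

preds = np.array(preds)
lhs = np.mean(sq_errors)                                     # average squared error at x0
rhs = preds.var() + (preds.mean() - f(x0)) ** 2 + sigma**2   # variance + bias^2 + noise
print(lhs, rhs)  # the two numbers should agree up to Monte Carlo error
```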
Bayesian regression
In order to introduce Bayesian regression, we focus on the special case of least-squares regressions. In this context, one posits that the samples generated take the form (\ref{function}), with the error \(\epsilon_i\) terms now iid, Gaussian distributed with mean zero and standard deviation \(\sigma\). Under this assumption, the probability of observing values \((y_1, y_2,\ldots, y_N)\) at \((\vec{x}_1, \vec{x}_2,\ldots,\vec{x}_N)\) is given by
$$ \begin{align} \tag{5} \label{5} P(\vec{y} \vert f) &= \prod_{i=1}^N \frac{1}{(2 \pi \sigma^2)^{1/2}} \exp \left [-\frac{1}{2 \sigma^2} (y_i - f(\vec{x}_i))^2 \right]\\ &= \frac{1}{(2 \pi \sigma^2)^{N/2}} \exp \left [-\frac{1}{2 \sigma^2} (\vec{y} - \vec{f})^2 \right], \end{align} $$
where \(\vec{y} \equiv (y_1, y_2,\ldots, y_N)\) and \(\vec{f} \equiv (f_1, f_2,\ldots, f_N)\). In order to carry out a maximum-likelihood analysis, one posits a parameterization for \(f(\vec{x})\). For example, one could posit the linear form,
$$\tag{6} f(\vec{x}) = \vec{\theta} \cdot \vec{x}. $$
Once a parameterization is selected, its optimal \(\vec{\theta}\) values are selected by maximizing (\ref{5}), which gives the least-squares fit.
One sometimes would like to nudge (or bias) the parameters away from those that maximize (\ref{5}), towards some values considered reasonable ahead of time. A simple way to do this is to introduce a Bayesian prior for the parameters \(\vec{\theta}\). For example, one might posit a prior of the form
$$ \tag{7} \label{7} P(f) \equiv P(\vec{\theta}) \propto \exp \left [- \frac{1}{2\sigma^2} (\vec{\theta} - \vec{\theta}_0) \Lambda (\vec{\theta} - \vec{\theta}_0)\right]. $$
Here, \(\vec{\theta}_0\) represents a best guess for what \(\theta\) should be before any data is taken, and the matrix \(\Lambda\) determines how strongly we wish to bias \(\theta\) to this value: If the components of \(\Lambda\) are large (small), then we strongly (weakly) constrain \(\vec{\theta}\) to sit near \(\vec{\theta}_0\). To carry out the regression, we combine (\ref{5}-\ref{7}) with Bayes' rule, giving
$$ \tag{8} P(\vec{\theta} \vert \vec{y}) = \frac{P(\vec{y}\vert \vec{\theta}) P(\vec{\theta})}{P(\vec{y})} \propto \exp \left [-\frac{1}{2 \sigma^2} (\vec{y} - \vec{\theta} \cdot \vec{x})^2 - \frac{1}{2\sigma^2} (\vec{\theta} - \vec{\theta}_0) \Lambda (\vec{\theta} - \vec{\theta}_0)\right]. $$
The most likely \(\vec{\theta}\) now minimizes the quadratic "cost function",
$$\tag{9} \label{9} F(\theta) \equiv (\vec{y} - \vec{\theta} \cdot \vec{x})^2 +(\vec{\theta} - \vec{\theta}_0) \Lambda (\vec{\theta} - \vec{\theta}_0), $$
a Bayesian generalization of the usual squared error. With this, our heavy-lifting is at an end. We now move to a quick review of regularization, which will appear as a simple application of the Bayesian method.
Parameter regularization as special cases
The most common forms of regularization are the so-called "ridge" and "lasso". In the context of least-squares fits, the former involves minimization of the quadratic form
$$ \tag{10} \label{ridge} F_{ridge}(\theta) \equiv (\vec{y} - \hat{f}(\vec{x}; \vec{\theta}))^2 + \Lambda \sum_i \theta_i^2, $$
while in the latter, one minimizes
$$ \tag{11} \label{lasso} F_{lasso}(\theta) \equiv (\vec{y} - \hat{f}(\vec{x}; \vec{\theta}))^2 + \Lambda \sum_i \vert\theta_i \vert. $$
The terms proportional to \(\Lambda\) above are the so-called regularization terms. In elementary courses, these are generally introduced to least-squares fits in an ad-hoc manner: Conceptually, it is suggested that these terms serve to penalize the inclusion of too many parameters in the model, with individual parameters now taking on large values only if they are really essential to the fit.
While the conceptual argument above may be correct, the framework we've reviewed here allows for a more sophisticated understanding of regularization: (\ref{ridge}) is a special case of (\ref{9}), with \(\vec{\theta}_0\) set to \((0,0,\ldots, 0)\). Further, the lasso form (\ref{lasso}) is also a special-case form of Bayesian regression, with the prior set to \(P(\vec{\theta}) \propto \exp \left (- \frac{\Lambda}{2 \sigma^2} \sum_i \vert \theta_i \vert \right)\). As advertised, regularization is a form of Bayesian regression.
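As a minimal numerical sketch of the MAP estimate implied by (\ref{9}), assuming an isotropic prior \(\Lambda = \lambda I\) and toy data invented for the example, the generalized normal equations can be solved directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = X . theta_true + Gaussian noise (all values invented)
n, p, sigma = 50, 3, 1.0
X = rng.normal(size=(n, p))
theta_true = np.array([2.0, 0.0, -1.0])
y = X @ theta_true + sigma * rng.normal(size=n)

def map_estimate(X, y, theta0, lam):
    """Minimize (y - X theta)^2 + lam * ||theta - theta0||^2, i.e. eq. (9)
    with Lambda = lam * I. Setting the gradient to zero gives
    (X'X + lam I) theta = X'y + lam theta0."""
    p = X.shape[1]
    A = X.T @ X + lam * np.eye(p)
    b = X.T @ y + lam * theta0      # the prior pulls the solution toward theta0
    return np.linalg.solve(A, b)

theta_ridge = map_estimate(X, y, np.zeros(p), lam=25.0)                   # ordinary ridge: prior at 0
theta_prior = map_estimate(X, y, np.array([2.0, 0.0, -1.0]), lam=25.0)    # informed prior
print(theta_ridge, theta_prior)
```

With the prior centred at zero the estimate is shrunk toward the origin (ridge); centring it elsewhere shrinks the estimate toward that guess instead, which is exactly the "stickiness" discussed above.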
Why then does regularization "work"? For the same reason any other Bayesian approach does: Introduction of a prior will bias a model (if chosen well, hopefully not by much), but will also effect a reduction in its variance. The appropriate amount of regularization balances these two effects. Sometimes — but not always — a non-zero amount of bias is required.
In summary, our main points here were three-fold: (i) We carefully reviewed the mathematical definitions of model bias and variance, deriving (\ref{error_decomp}). (ii) We reviewed how one can inject Bayesian priors to regressions: The key is to use the random error terms to write down the probability of seeing a particular observational data point. (iii) We reviewed the fact that the ridge and lasso — (\ref{ridge}) and (\ref{lasso}) — can be considered Bayesian priors.
Intuitively, one might think introduction of a prior serves to reduce the bias in a model: Outside information is injected into a model, nudging its parameters towards values considered reasonable ahead of time. In fact, this nudging introduces bias! Bayesian methods work through reduction in variance, not bias — A good prior is one that does not introduce too much bias.
When, then, should one use regularization? Only when one expects the optimal model to be largely sparse. This is often the case when working on machine learning algorithms, as one has the freedom there to throw a great many feature variables into a model, expecting only a small (a priori unknown) minority of them to really prove informative. However, when not working in high-dimensional feature spaces, sparseness should not be expected. In this scenario, one should reason out some other form of prior, and attempt a variance reduction through the more general Bayesian framework.
Jonathan Landy Jonathan grew up in the midwest and then went to school at Caltech and UCLA. Following this, he did two postdocs, one at UCSB and one at UC Berkeley. His academic research focused primarily on applications of statistical mechanics, but his professional passion has always been in the mastering, development, and practical application of slick math methods/tools. He worked as a data-scientist at Square for four years and is now working on a quantitative investing startup.
EFAVDB - Everybody's Favorite Data Blog | CommonCrawl |
Standardized or Z-Score - work with steps
Input Data :
Random Value (X) = 9.25
Mean (μ) = 9
Standard Deviation (σ) = 0.73
Objective :
Find what is the normalized score of random member X?
Formula :
z-score = (x - μ) / σ
z-score = (9.25 - 9) / 0.73
= 0.25 / 0.73
= 0.3425
p-value from Z-Table :
From the table P(x < x1) = 0.6331
P(x > x1) = 1 - P(x < x1)
= 1 - 0.6331 = 0.3669
This Z-score calculator provides the p-value from the z-table (left tail, right tail, two tail), formulas, work with steps, step-by-step calculation, and real-world and practice problems to learn how to find the standard score for any raw value of X in the normal distribution. It also shows how to calculate the p-value from the z-table to find the probability of X in the normal distribution. It's an online statistics and probability tool that requires an unstandardized raw value, the mean of the normal distribution, and the standard deviation. The result describes how many standard deviations lie between a value and the mean. It is necessary to follow these steps:
Enter an unstandardized raw value, mean of normal distribution, and the standard deviation of the population in the box. These values must be real numbers and may be separated by commas. The values can be copied from a text document or a spreadsheet.
Press the "GENERATE WORK" button to make the computation.
z-score calculator will give the standard score for a data point.
Input : Three real numbers as random member, mean and standard deviation of population or sample data;
Output : A real number or a variable.
z-score Formula:
z-score Formula for Population Data :
z-score of a population data is determined by the formula $$z=\frac{x-\mu}{\sigma}$$ where $x$ is a random member, $\mu$ is an expected mean of population and $\sigma$ is the standard deviation of an entire population.
z-score Formula for Sample Data :
z-score of a sample data is determined by the formula $$z=\frac{x-\bar X}{s_X}$$ where $x$ is each value in the data set, $\bar{X}$ is the sample mean and $s_X$ is the sample standard deviation.
What is Z-score?
In statistics and probability, it is also called standard score, z-value, standardized score or normal score. A z-score measures the distance between an observation and the mean, measured in units of standard deviation. In other words, z-score is the number of standard deviations there are between a given value and the mean of the data set. If a z-score is zero, then the data point's score is identical to the mean. If a z-score is 1, then it represents a value that is one standard deviation from the mean. Z-score may be positive or negative. A positive value represents the score above the mean (right tail) and a negative score represents the score below the mean (left tail). It is used to describe the normal distribution.
The standard normal distribution is the normal distribution with mean $0$ and standard deviation $1$. It is denoted as $N(0,1)$. A random variable with the standard normal distribution is called a standard normal variable. Specific values of a standard normal variable are called z-values. If $X$ is a normally distributed random variable with mean $\mu$ and standard deviation $\sigma$, then the distribution of $$Z =\frac{X-\mu}{\sigma}$$ is the standard normal distribution. For a specific value $x$ of $X$, $\frac{x-\mu}{\sigma}$ is called the z-value. The z-value is a specific value of $Z$.
How to Calculate Z-Score?
To calculate a z-score, divide the deviation by the standard deviation, i.e. $$z=\frac{\mbox{Deviation}}{\mbox{Standard Deviation}}$$ Since the deviation is the observed value of the variable minus the mean value, the z-score is determined by the formula $$z=\frac{x-\mu}{\sigma}$$
In the following, we will give a stepwise guide for calculation the z-score for the given population.
Find the mean of the population;
Find the standard deviation of the population;
Chose a random sample from the population;
Find the difference between the value of random variable and population mean;
Find the ratio between the difference and population standard deviation.
Example Problem :
The class of five students scored $68, 75, 81, 87,$ and $90$. Find the normalized or z-score of $75$.
Following the previously exposed steps, we obtain
$\mu =\frac{68 + 75 + 81 + 87 + 90}{5}= 80.2$
$\sigma=\sqrt{\frac{(68 - 80.2)^2 + ( 75 - 80.2)^2 + ( 81 - 80.2)^2 + ( 87 - 80.2)^2 + ( 90 - 80.2)^2}{5}}= 7.98498$
By selecting $75$ as a random member from the population of $68, 75, 81, 87$, and $90$;
The difference between the value of random variable and population mean is $$75 - 80.2= -5.2$$
The ratio between the difference and population standard deviation is
$$z-\mbox{score} =\frac{x -\mu}{\sigma}= \frac{-5.2}{7.98}= -0.6516$$
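The calculation above is straightforward to reproduce programmatically; here is a small sketch using the same five scores:

```python
import math

data = [68, 75, 81, 87, 90]
x = 75

mu = sum(data) / len(data)                                        # population mean: 80.2
sigma = math.sqrt(sum((v - mu) ** 2 for v in data) / len(data))   # population std dev: ~7.985
z = (x - mu) / sigma                                              # z-score: ~ -0.65
print(mu, sigma, z)
```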
In the same manner, we can take any sample within a set of data and determine how many standard deviations above or below the mean it is. So, to find the z-score of a sample, it is necessary to find the mean, variance and standard deviation of the sample. Have in mind that the sample standard deviation formula is different than the population standard deviation, i.e.
$$\mbox{Sample Standard Deviation}=s_X=\sqrt{\frac{1}{n-1}\sum_{i=1}^{i=n}(x_i-\bar{X})^2},$$
where $\bar{X}$ is the sample mean.
$$\mbox{Population Standard Deviation}=\sigma_X=\sqrt{\frac{1}{N}\sum_{i=1}^{i=N}(x_i-\mu)^2},$$
where $\mu$ is the population mean. The z-score of a sample is determined by the formula $$z=\frac{x-\bar{X}}{s_X}$$ where $\bar{X}$ is the sample mean and $s_X$ is the sample standard deviation.
The z-score calculator provides the p-value calculation, z-table, formulas and a solved example with step-by-step calculation to find the normalized, standard or relative standing value of a random variable of the normal distribution, calculated here from the population of $68, 75, 81, 87$ and $90$. For any other values of random member, mean and standard deviation, just supply three real numbers and click on the "GENERATE WORK" button. Grade school students may use this Z-score calculator to generate the work, verify results derived by hand or do their homework problems efficiently.
Z-Table
The Z-table helps to find the p-value of random variable in the normal distribution. The standard normal distribution is the normal distribution with mean 0 and standard deviation 1. It is denoted as $N(0,1)$. In this case, the probability density function is $$f(x)=\frac1{\sqrt{2\pi}} e^{-\frac{x^2}{2 }}$$ Because the standard normal distribution is used very often, there exist tables to help us calculate probabilities (Standard Normal Table).
$A(z)$ is the integral of the standardized normal distribution from $-\infty$ to $z$, i.e. it represents the area under the standard normal curve to the left of the specified value of $z$.
p-values from Z-Table for Left Tail Region (sample row; the ten columns correspond to the hundredths digit 0.00 to 0.09)
-3.4: 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0002
p-values from Z-Table for Right Tail Region (sample row; the ten columns correspond to the hundredths digit 0.00 to 0.09)
0.0: 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
How to Find p-value from Z-Table
By using the Z-Table we can find probabilities for a statistical sample with a standard normal distribution. To find the p-value from the z-table we need to follow the next steps:
Find the row which represents the ones digit and the tenths digit of the z-value;
Find the column that represents the the hundredths digit of the z-value;
Intersect the row and column from Steps 1 and 2;
This result represents $P(Z < z)$, as a left tail shows the probability that the random variable $Z$ is less than the value of $z$.
For example, let us find the p-value for $P(Z < 1.13)$. Using the Z-Table, find the row for $1.1$ and the column for $0.03$.
Cumulative Standardized Normal Distribution
The intersection is 0.8708. Therefore, $P(Z < 1.13)=0.8708$.
As we mentioned, the total area under the normal curve is $1$. This means,
$$P(Z < 1.13)+P(Z > 1.13)=1$$
$$P(Z > 1.13)=1-P(Z < 1.13)=1-0.8708=0.1292$$
Let us find the p-value for $P(Z < -1.13)$. Using the z-table, find the row for $-1.1$ and the column for $0.03$.
The intersection is 0.1292. We can conclude that
$$P(Z > 1.13)=P(Z < -1.13)=0.1292$$
It follows because the normal distribution is symmetric. Generally, to calculate corresponding probabilities, we should follow the next rules:
$$ P(Z\leq z)=\begin{cases} A(z), & z>0\\ 1-A(z), & z<0 \end{cases} $$
$$ P(Z\geq z)=\begin{cases} 1-A(z), & z>0\\ A(z), & z<0 \end{cases} $$
$$ P( z_1\leq Z\leq z_2)=A(z_2)-A(z_1)$$
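The table lookups above can also be checked numerically; for instance, assuming SciPy is available, the cumulative distribution function of the standard normal gives the same p-values:

```python
from scipy.stats import norm

z = 1.13
print(norm.cdf(z))       # P(Z < 1.13)  ~ 0.8708  (left tail)
print(1 - norm.cdf(z))   # P(Z > 1.13)  ~ 0.1292  (right tail)
print(norm.cdf(-z))      # P(Z < -1.13) ~ 0.1292  (by symmetry)
```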
Real World Problems Using Z-score
The standard score transformation is useful for comparing the relative standings of members of a distribution with the population mean and standard deviation. In other words, the z-score determines how many standard deviations $\sigma$ a raw score of a random variable of the population lies above or below the population mean. The standard score is an important quantity in statistics and probability for identifying whether a random member of the normal distribution performs well, badly or moderately.
The graph of a standard normal distribution is called the standard normal curve. z-scores can be used to calculate probability (p-value) by comparing the location of the z-score to the area under a normal curve either to the left or right. A standard normal distribution has the following properties:
The normal curve is bell shaped and is symmetric about the mean;
The total area under the normal curve is equal to one.
Since the total area under the curve is $1$, and the curve is symmetric with respect to the $y$-axis, the areas under the curve in the regions $x<0$ and $x>0$ are each equal to $0.5$, i.e. $P(Z<0)=P(Z>0)=0.5$.
Z-score Practice Problems
Practice Problem 1:
Find the z-score of a value of 35, if a mean of data set is 25, and a standard deviation is z.
Find the z-score of the price of a ball that cost $\$27$, if the mean ball price is $\$25.2$, with a standard deviation of 1.6.
Given a normal distribution with a mean of 237 and standard deviation of 56, find a value with a z-score of 1.94.
In a population that is normally distributed with mean 7 and standard deviation 11, the bottom 90% of the values are those less than x. Find the value of x.
Find z-values correspond to the middle 60% of the standard normal distribution.
The z-score calculator, p-value from z-table, formulas, work with steps, real world problems and practice problems would be very useful for grade school students (K-12 education) to learn what is z-score and p-value in probability and statistics, how to find z-score by formula, how to find p-value from z-table and where it can be applicable in the real world problems.
© All Rights Reserved 2011 - 2019 | CommonCrawl |
Geographic profiling
Murder, maths, malaria and mammals
Image: © OpenStreetMap contributors. Cartography CC BY-SA 2.0
by Michael Stevens and Sally Faulkner. Published on 18 October 2017.
Imagine you're a police officer working on a huge case of serial crime. You've been handed the list of suspects, but to your horror 268,000 names are on it! You need to come up with a way of working through this list as efficiently as possible to catch your criminal. Along with the thousands of names, you're also given a map with the locations of where bodies have been found (the map above). Given these two pieces of intel, how exactly would you prioritise your list of suspects? Have a go! Where exactly would you search for the criminal? We will reveal the answer at the end of the article!
Peter Sutcliffe, also known as the Yorkshire Ripper, was the name on a list of 268,000 suspects generated by this investigation in the late 1970s. But how were the team investigating these crimes meant to cope with such an overload of information? These are the fundamental problems that geographic profiling is trying to solve.
How exactly does geographic profiling work? This article will introduce you to the fundamental ideas behind the subject. We will also look at the various applications, just like the Yorkshire Ripper case, along the way. These examples aren't just in criminology though. The applications span ecology and epidemiology too!
The first model
Geographic profiling uses the spatial relationship between crimes to try and find the most likely area in which a criminal is based; this can be a home, a work place or even a local pub. Collectively we refer to these as anchor points. The pioneer of the subject, Kim Rossmo, once a detective inspector but now director of geospatial intelligence/investigation at Texas State University, created the criminal geographic targeting model in his thesis in 1987. The criminal geographic targeting model aims to do exactly what we struggled with at the beginning of this article: prioritise a huge list of suspects.
A gridded-up map. Alistair Marshall, CC BY 2.0
It starts by breaking up your map, populated with crime, into a grid, much like on the left. We assume that each crime that occurs, does so independently from every other. We then score each grid cell; the one with the highest score is likeliest to contain the criminal's potential anchor point.
How do we calculate this score? An important factor is the distance between crimes and anchor points. We choose to use the Manhattan metric as our measure of distance. In this metric, the distance between points $\boldsymbol{a}$ and $\boldsymbol{b}$ is the sum of the horizontal and vertical changes in distance. This is written as:
$$d(\boldsymbol{a},\boldsymbol{b}) = \lvert x_a-x_b \rvert + \lvert y_a-y_b\rvert, \qquad \boldsymbol{a} = (x_a, y_a), \quad \boldsymbol{b} = (x_b, y_b).$$
The Manhattan metric is so-called because it resembles the distance you have to travel to get between two points in a gridded city like Manhattan.
This is the most suitable metric for our work, but it's worth noting there are more that can be used (depending on the system you're studying). Now we could just start searching at the spatial mean of our crimes and work radially outward from that point, however one rogue crime occurring far away from the rest could easily throw a spanner in the works. Instead we use something called a buffer/distance decay function.
$$ f(d) =
\begin{cases}
\dfrac{k}{d^{h}}, & d > B \\
\dfrac{kB^{g-h}}{(2B-d)^g}, & d\leq B\\
\end{cases}$$
A criminal isn't likely to commit a crime close to an anchor point, out of fear of being recognised, so we place a buffer around it. In addition, to commit a crime far away from home is a lot of hassle, so the chance of a crime decays as we move away from the anchor point. This is why our buffer/decay function looks a bit like a cross-section of a volcano. The explicit function, $f(d)$, is written on the right, where $k$ is constant, $B$ is the buffer zone radius and $g$ and $h$ are values describing the criminal's attributes, eg what mode of travel they use. With our distance metric, $d$, and buffer/decay function, $f$, we are now able to compute a score for each grid cell.
For $n$ crimes, the score we give to cell $\boldsymbol{p}$ is
$$ S(\boldsymbol{p}) = \sum_{i=1}^{n}f(d(\boldsymbol{p}, \boldsymbol{c}_i)), $$
where $\boldsymbol{c}_i$ is the location of crime $i$. So finally we have a score for each grid cell and we can prioritise our list!
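As a rough illustration of how such a score surface can be computed, here is a small sketch; the grid size, the crime coordinates and the parameter values $k$, $B$, $g$ and $h$ below are all invented, and the real criminal geographic targeting model includes refinements not shown here:

```python
import numpy as np

def manhattan(p, c):
    """Manhattan distance between grid cell p and crime location c."""
    return abs(p[0] - c[0]) + abs(p[1] - c[1])

def buffer_decay(d, k=1.0, B=2.0, g=2.0, h=2.0):
    """Buffer/distance-decay function f(d): low inside the buffer, peaking
    around d = B, then decaying with distance."""
    if d > B:
        return k / d**h
    return k * B**(g - h) / (2 * B - d)**g

def cgt_scores(grid_shape, crimes):
    """Score every grid cell p with S(p) = sum_i f(d(p, c_i))."""
    scores = np.zeros(grid_shape)
    for x in range(grid_shape[0]):
        for y in range(grid_shape[1]):
            scores[x, y] = sum(buffer_decay(manhattan((x, y), c)) for c in crimes)
    return scores

crimes = [(4, 5), (6, 9), (10, 7), (8, 12)]              # invented crime sites
scores = cgt_scores((20, 20), crimes)
print(np.unravel_index(scores.argmax(), scores.shape))   # highest-priority cell to search first
```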
An example of the geographic profile created using the criminal geographic targeting model
Plotting these scores on the $z$-axis produces a mountain range of values, like on the right. We can now prioritise by checking residencies at the peak of this mountain range and working our way down. Notice the collection of peaks around a particular area: this gives us an indication that perhaps the criminal uses more than one anchor point.
An important question: how can we be sure this even works? Does it really identify anchor points efficiently? What do we even mean by "efficient"? This is answered with a quantity called the hit score. This is
$$\text{hit score} = \frac{\text{number of grid cells searched before finding the criminal}}{\text{total number of grid cells}}. $$
So ironically, the lower our hit score, the better our model performs. This is sensible, since we want to search as little space as possible to catch our criminal.
The Gestapo case
Otto and Elise Hampel distributed hundreds of anti-Nazi postcards during the second world war. The Gestapo's intuition on where the Hampel duo might live was based on themes almost exactly the same as geographic profiling. Inspired by a classic German novel, Alone in Berlin, our group revisited the Gestapo investigation and published our findings in a journal that is so highly classified we are not able to read it.
By analysing the drop-sites of the postcards and letters we were able to show that geographic profiling successfully prioritises the area where the Hampels lived in Berlin. Crucially, this study actually showed the importance of analysing minor terrorism related or subversive acts to identify terrorist bases before more serious crimes occur.
A statistical approach
The criminal geographic targeting model is an incredibly useful tool and is used to this day by the CIA, the Metropolitan Police and even the Canadian Mounted Police. Mike O'Leary, professor at Towson University, Maryland asked why the criminal geographic targeting model only produces a score, when we require a probability. So he developed a way of using geographic profiling under the laws of Bayesian probability.
Bayes' rule is better in neon. Image: Wikimedia Commons user Mattbuck, CC BY-SA 3.0
O'Leary uses Bayes' rule as seen on the right. How do we apply it to criminology? We want to know: what is the probability that an offender is based at an anchor point given the crimes they have committed? Using Bayes' rule, instead we pretend we know where the anchor point is and ask; what is the probability of the crimes occurring given our anchor point? We use the formulation
$$\Pr(\boldsymbol{c}_1, \boldsymbol{c}_2, \boldsymbol{c}_3, \boldsymbol{c}_4\text{…}\;|\;\boldsymbol{p})\; = \;\prod_{i=1}^{n}\Pr(\boldsymbol{c}_i\;|\;\boldsymbol{p}),$$
where the equality derives from the assumption of independent crimes.
Below, we can see a comparison between Rossmo's criminal geographic targeting model and O'Leary's simple Bayesian model. The problem with O'Leary's model is he assumes that a criminal only has one anchor point. Unfortunately this is rarely the case. As we mentioned earlier, an anchor point could be a home, a workplace, a local pub or even all of the above. So we obtain a probability surface, but we only consider one anchor point. The criminal geographic targeting model entertains the idea that multiple anchor points exist, but doesn't give us an explicit probability. What we really need is a way of combining both methods. Does such a method exist?
(a) The criminal geographic targetting model
(b) The simple Bayesian model
Examples of the geographic profiles created using the criminal geographic targeting and simple Bayesian models
The elusive tarsiers
Image: Callum Pearson
South-east Asia, specifically Sulawesi, houses a huge number of endemic species. Often habitat assessments of cryptic and elusive animals such as the tarsier (right) are overlooked, primarily due to the difficulties of locating them in challenging habitats. Traditional assessment techniques are often limited by time constraints, costs and challenging logistics of certain habitats such as dense rainforest.
Using only the GPS location of tarsier vocalisations as input into the geographic profiling model we were able to identify the location of tarsier sleeping trees. The model found 10 of the 26 known sleeping sites by searching less than 5% of the total area (3.4 km$^2$). In addition, the model located all but one of the sleeping sites by searching less than 15% of the area. The results strongly suggest that this technique can be successfully applied to locating nests, dens or roosts of elusive animals, and as such be further used within ecological research.
The Dirichlet process mixture model combines the best of both the criminal geographic targeting and the simple Bayesian models. So far we've only stated that we're either working with one anchor point, or many. The beauty of the Dirichlet process mixture model is that we don't need to specify the number of anchor points we are searching for. Instead, there is always some non-zero probability that each crime comes from a separate anchor point. So multiple anchor points can be identified while using a probabilistic framework. Introducing multiple anchor points is challenging since we need to know:
How are all the crimes clustered together?
In each cluster of crimes, where is the anchor point?
Actually, what would be really useful is if we knew the answer to just one of these questions. If we knew how the crimes were clustered, finding the anchor points is easy (we use the simple Bayesian model to find the source in each cluster). But also, if we knew where the anchor points were, allocating crimes to clusters is easy (and of course we know where our criminal lives!). The solution to this problem is to use something called a Gibbs sampler. We use a Gibbs sampler in cases where we want to sample a set of events that are conditional on one another. In our case, anchor point locations depend on the clustering of crimes, but the clustering of crimes also depends on the anchor point locations. The steps the Gibbs sampler will take are:
1. Randomly assign each crime an anchor point (even though we don't yet know where the anchor points are).
2. Find each anchor point by using the simple Bayesian model on each assignment.
3. Throw out the assignments of crimes to anchor points and re-assign the crimes, now using the anchor point locations found in the previous step.
4. Throw out the old anchor point locations and find new ones using this new assignment.
5. Repeat steps 3 and 4 many, many times (a toy numerical sketch of this loop is given below).
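Purely as an illustration of that loop, here is a toy sketch. Two simplifications are assumed: the number of anchor points is fixed in advance (the real Dirichlet process mixture model infers it), and a crude Gaussian likelihood around each anchor stands in for O'Leary's simple Bayesian model; the crime coordinates are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_gibbs(crimes, n_anchors=2, n_iter=500, sigma=1.0):
    n = len(crimes)
    assign = rng.integers(n_anchors, size=n)                           # step 1: random assignment
    anchors = crimes[rng.choice(n, n_anchors, replace=False)].astype(float)
    for _ in range(n_iter):
        # steps 2 and 4: estimate each anchor from the crimes assigned to it
        for k in range(n_anchors):
            members = crimes[assign == k]
            if len(members):
                anchors[k] = members.mean(axis=0)
        # step 3: re-assign each crime in proportion to a Gaussian likelihood
        for i, c in enumerate(crimes):
            d2 = ((anchors - c) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2 * sigma**2))
            assign[i] = rng.choice(n_anchors, p=w / w.sum())
    return anchors, assign

crimes = np.array([[0.0, 0.1], [0.2, -0.1], [0.1, 0.3],    # one cluster of crimes
                   [5.0, 5.2], [4.8, 5.1], [5.3, 4.9]])    # a second cluster
print(toy_gibbs(crimes))  # the recovered anchors should sit near the two cluster centres
```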
This produces a new profile like on the right below. We can now compare this to our other two models on the left. We can see the Dirichlet process mixture model displays fewer peaks than the criminal geographic targeting model, but that these peaks are tighter. This in turn will reduce the hit score of our search.
(c) The Dirichlet process mixture model
A comparison of the three main geographic profiling models
The malaria case
Water bodies with mosquito larvae. Image: © OpenStreetMap contributors. Cartography CC BY-SA 2.0
Throughout history, infectious diseases have been a major cause of death, with three in particular (malaria, HIV and tuberculosis) accounting for 3.9 million deaths a year. Targeted interventions are crucial in the fight against infectious diseases as they are more efficient and, importantly, more cost effective. They are even more crucial when the transmission rate is strongly dependent on particular locations. For example, we were tasked with finding the source(s) of malaria outbreaks in Cairo by considering the breeding site locations of mosquitos.
All accessible water bodies within the study area were recorded between April and September 2005, and 59 of these were harbouring at least one mosquito larva. Of these 59 sites, seven tested positive for An. sergentii, well-established as the most dangerous malaria vector in Egypt. Using only the spatial locations of 139 disease case locations as input into the model, we were able to rank six of these seven sites in the top 2% of the geoprofile.
Applying the method
The geoprofile associated with the Yorkshire Ripper body dump sites (black dots). The anchor points of Peter Sutcliffe are labelled as red squares. Image: © OpenStreetMap contributors. Cartography CC BY-SA 2.0
We've done it! We now have a robust method for searching for our criminal. A list of 268,000 suspects is no longer so intimidating. Without this technique in 1975-1981, however, there was a lot more work for the team investigating the Yorkshire Ripper case. On top of a huge list of suspects, 27,000 houses were visited and 31,000 statements were taken during the investigation.
If we apply our model to the crime sites we were given at the start of this article, we produce the contour map on the right. In this case the areas in white describe the highest points on our probability surface, whilst areas in red describe the lowest. In addition to the contours, we also see two red squares right at the top of the map. These are the two homes Peter Sutcliffe resided at during the period of his crimes. The hit scores for his two residences are 24% and 5% respectively. So by searching only 24% of our total search area, we've managed to find both residences. This is far better than a random search which would find them after searching, on average, 50% of our area.
Peter Sutcliffe's homes are clearly marked on this map but we must remember an important point about geographic profiling: that it is not an 'X marks the spot' kind of model, but rather a method of prioritisation.
Investigating an old case
Dramatic scenes covered the newspaper front pages, such as this from 1888. Image: The Illustrated Police News
We can't talk about the Yorkshire Ripper without mentioning the notorious 1888 London serial killer, Jack the Ripper. The five locations around Whitechapel where bodies were dumped were studied using geographic profiling to try and gain a better idea of where Jack the Ripper may have lived.
The map overleaf shows us the associated geoprofile, with Jack's suspected anchor point obtaining a hit score between 10-20%, much better than a non-prioritised search!
This is just one example of many cases where we can utilise our new model to study cases from the past where such tools were not available.
Geographic profiling began in criminology, but now spans ecology (catching invasive species) and epidemiology (identifying sources of infectious disease) too. This means saving a hefty chunk of time and money, as well as developing prevention strategies to minimise any negative impacts these problems may cause.
The geoprofile associated with the body dump sites (black dots) of Jack the Ripper. Jack's anchor point (the red square) is suspected to be around Flower and Dean Street. Image: © OpenStreetMap contributors. Cartography CC BY-SA 2.0
Michael is a PhD student at Queen Mary University of London. He is getting under the bonnet of geographic profiling to improve the model's performance. His interests are in mathematical modelling and data visualisation within environmental science.
Sally Faulkner
Sally is a PhD student at Queen Mary University of London. She is developing the geographic profiling model for use with biological data, addressing in particular conservation issues. Sally's interests lie primarily in wildlife crime and conservation.
| CommonCrawl
Law Of Cosine How To Find An Angle
The Law of Cosines (Cosine Rule) Grade A Math Help
The cosine rule, also known as the law of cosines, relates all 3 sides of a triangle to an angle of the triangle. It is most useful for solving for missing information in a triangle. For example, if all three sides of the triangle are known, the cosine rule allows one to find any of the angle measures. Similarly, if two sides and the angle... 7/09/2018 · Solving for an angle with the law of sines. CCSS Math: HSG. Let's think about which one could be useful in this case. Law of Cosines, and I'll just rewrite them here. The law of cosines is: c squared is equal to a squared plus b squared minus 2ab cosine of theta. So what it's doing is relating 3 sides of a triangle, a, b, c, to an angle. So, for example, if I know 2 sides and the angle...
Trigonometry Oblique Triangles Law of Cosines
27/01/2015 · This MATHguide video demonstrates how to find an angle given three sides of a triangle. To read this trigonometry lesson, go to http://www.mathguide.com/lessons/LawC.......
To find out if you need to use the Law of Cosines, check whether your triangle has at least 2 side lengths and an angle, or all three side lengths. I'm going to first show you how to use the Law of Cosines to solve for a side length. The law of cosines generalizes the Pythagorean theorem, which holds only for right triangles: if the angle γ is a right angle (of measure 90°, or π / 2 radians), then cos γ = 0, and thus the law of cosines reduces to the Pythagorean theorem:
Use the Law of Cosines for SAS dummies
15/10/2018 · To find the angle θ between two vectors, start with the formula for finding that angle's cosine. You can learn about this formula below, or just write it down: [1] $\cos\theta = \frac{\vec{u}\cdot\vec{v}}{\lVert\vec{u}\rVert\,\lVert\vec{v}\rVert}$. Using the COS function to find the cosine of an angle may be easier than doing it manually, but, as mentioned, it is important to realize that when using the COS function, the angle needs to be in radians rather than degrees, and radians are the unit most of us are less familiar with.
The Law of Cosines Math is Fun - Maths Resources
How to Find the length of a side of a triangle using cosine
Law of cosines solving for an angle Trigonometry (video
Solve Triangle using Cosine in Python Stack Overflow
The law of cosines applied to right triangles is the Pythagorean theorem, since the cosine of a right angle is $0$. $$ a^2 + b^2 - \underbrace{2ab\cos C}_{\begin{smallmatrix} \text{This is $0$} \\[3pt] \text{if } C\,=\,90^\circ. \end{smallmatrix}} = c^2. $$
The Law of Cosines (Cosine Rule) The law of cosines is used to find missing sides and angles of triangles.
The law of sines is one of two trigonometric equations commonly applied to find lengths and angles in scalene triangles, with the other being the law of cosines. The law of sines can be generalized to higher dimensions on surfaces with constant curvature.
The Law of Cosines works well for solving triangles when you have two sides and an angle, but the angle isn't between the two sides. In this case, the Law of Sines isn't an option. Also, to solve a triangle that is SSA (or side-side-angle) using the Law of Cosines, you have to be careful to find all the possible solutions, since this ambiguous case can admit two valid triangles.
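As a small programming illustration of the angle-finding use of the cosine rule, rearranging $c^2 = a^2 + b^2 - 2ab\cos C$ for the angle gives $\cos C = \frac{a^2+b^2-c^2}{2ab}$; the 3-4-5 triangle below is just an example:

```python
import math

def angle_from_sides(a, b, c):
    """Return the angle C (opposite side c) in degrees, given all three sides,
    via the law of cosines: cos C = (a^2 + b^2 - c^2) / (2ab)."""
    cos_C = (a**2 + b**2 - c**2) / (2 * a * b)
    return math.degrees(math.acos(cos_C))

print(angle_from_sides(3, 4, 5))  # 90.0 -- the 3-4-5 triangle is right-angled
```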
| CommonCrawl
Designing string-of-beads vaccines with optimal spacers
Benjamin Schubert & Oliver Kohlbacher
String-of-beads polypeptides allow convenient delivery of epitope-based vaccines. The success of a polypeptide relies on efficient processing: constituent epitopes need to be recovered while avoiding neo-epitopes from epitope junctions. Spacers between epitopes are employed to ensure this, but spacer selection is non-trivial.
We present a framework to determine optimally the length and sequence of a spacer through multi-objective optimization for human leukocyte antigen class I restricted polypeptides. The method yields string-of-bead vaccines with flexible spacer lengths that increase the predicted epitope recovery rate fivefold while reducing the immunogenicity from neo-epitopes by 44 % compared to designs without spacers.
One of the most promising approaches of rational vaccine design uses so-called epitope-based vaccines (EVs). Vaccines based on T-cell epitopes, short immunogenic peptide sequences derived from antigens, offer several advantages over traditional whole attenuated or subunit vaccines [1]. Unlike traditional vaccines, EVs do not contain potentially infectious material and the selection of peptides can be tailored to address the genetic variation of pathogens and that of a target population or of an individual patient. Well-established techniques for peptide synthesis guarantee rapid high-quality production and an economical storage of the final vaccine [1].
Rational development of EVs relies on bioinformatics for prediction of viable epitopes. Machine-learning methods, such as probabilistic models, neural networks, and support vectors machines, are routinely used with high accuracy for epitope prediction [2–5]. Different algorithms have been suggested as well for selecting an optimal set of epitopes for EV design, each emphasizing different aspects of EVs [6–10]. Among these approaches is OptiTope, a mathematical framework that relies on integer linear programming, which can easily be adapted to many different settings and types of EVs [8, 11].
Nevertheless, the stability and delivery of EVs remain major obstacles. Several strategies have been explored in clinical studies and range from administration of peptide cocktails to assembly of selected peptides into polypeptides [12]. One popular approach concatenates the epitope sequences, like beads on a string, to create a string-of-beads vaccine (SBV, Fig. 1a). The efficacy of an SBV depends on the processing of the polypeptide such that the majority of desired T-cell epitopes are recovered and subsequently presented by human leucocyte antigen (HLA) molecules. A major factor for optimal recovery is the correct cleavage of the epitopes. It has been shown that recovery of the epitopes is strongly linked to the ordering of the peptides within the SBV due to its influence on the cleavage probability [13]. An unfavorable order can lead to miscleaved peptides and thus, to an ineffective vaccine (Fig. 1b). Furthermore, new cleavage sites and neo-epitopes can arise from non-native sequences at junctions between epitopes and/or spacers. These neo-epitopes can also have detrimental effects [14] (Fig. 1b).
Rational string-of-beads design. a Design process of a string-of-beads vaccine (SBV). Given a set of antigen candidates, epitopes are derived either experimentally or computationally. A selection of n candidate epitopes is determined, which form the basis of the SBV. These epitopes are either directly combined into a polypeptide or small connecting sequences (spacers) are used to link adjacent epitopes. In total, there are n! possibilities to arrange n epitopes into a SBV. b Possible cleavage outcomes of a SBV. The efficacy of a SBV depends on correct proteasomal cleavage. Desired is a cleavage pattern that correctly recovers all contained epitopes (1). Not all junction cleavage sites might be cleaved, which results in a partly cleaved and less effective SBV (2). Cleavage of the SBV at non-junction sites can create neo-epitopes. Generation of neo-epitopes can induce unwanted immune responses and reduces the amount of desired epitopes generated by the SBV (3)
To improve the recovery of epitopes in SBVs, several groups have suggested the use of spacer sequences between epitopes [15–17] (Fig. 1a). However, it is unclear how to determine the optimal length and amino acid sequence of a spacer to exploit fully its potential. Furthermore, with increasing spacer length, the problem of induced neo-epitopes and new arising cleavage sites becomes increasingly challenging. In addition, experimentally testing designs to determine an optimal SBV, even without considering spacer sequences, quickly becomes infeasible. A dozen epitopes can be combined into about half a billion (12!) distinct SBV sequences. Considering additional spacer sequences with flexible length, increases the possibilities many times over. For instance, allowing spacer sequences up to a length of three for 12 epitopes results in over 44 trillion possible designs. For simplicity, most SBV designs have so far used fixed spacer sequences. Until now, only a few computational approaches have been proposed to address the epitope assembly problem (i.e., the problem of choosing the right epitope order). Vider-Shalit et al. suggested a genetic algorithm that simultaneously performs epitope selection and assembly [6]. Toussaint et al. reduced the epitope assembly problem to the well-known traveling salesperson problem (TSP) and solved it heuristically or optimally via integer linear programming [7]. Neither of these approaches considers spacer sequences though.
In this work, we propose an approach to determine a provably optimal spacer sequence of fixed length for a given HLA-I restricted epitope pair. We also extend the formulation to determine the optimal spacer length and combine this approach with that of Toussaint et al. [7] to design an optimal SBV with flexible spacer sequences. Additionally, we account for the problem of arising neo-epitopes and cleavage sites by formulating the problem of designing a spacer sequence as a multi-objective optimization problem that maximizes the recovery probability of the desired epitopes, minimizes the immunogenicity of neo-epitopes, and (optionally) minimizes the cleavage probability at non-junction sites at the same time. We focus our efforts solely on HLA-I antigen processing, since computational prediction methods for proteasomal cleavage and HLA-I binding are well established. The cleavage-site prediction models are used for designing spacer sequences and for ordering the therapeutic epitopes of the SBV to increase their cleavage likelihood artificially, whereas the HLA-I binding prediction models are used to hinder the formation of neo-epitopes at the epitope–spacer interfaces. Note that an experimental determination of such an optimal design is virtually impossible due to the vast number of possible designs; a computational approach is, thus, indispensable.
Our results indicate there is a strong increase in the number of correctly cleaved epitopes and a decrease in the neo-immunogenicity of the complete construct compared to SBV designs with commonly used fixed spacers and optimally arranged SBVs without spacer sequences.
Optimization problem from an immunological perspective
The goal of the optimization is to design a SBV based on a given set of N epitopes. The SBV construct will contain all epitopes, but the ordering of the epitopes, as well as the length and sequence of the N – 1 spacers between these epitopes, is variable. The SBV is designed in a way that (a) maximizes the recovery of the epitopes while (b) minimizing the production of undesired neo-epitopes.
More formally: Given a set E of N epitopes e 1, …, e N , we specify an optimal spacer s ij of length k defined over the alphabet of amino acids Σ that connects two epitopes \( {e}_i\in {\Sigma}^{\left|{e}_i\right|} \) and \( {e}_j\in {\Sigma}^{\left|{e}_j\right|} \) as the sequence that maximizes the likelihood of it being cleaved at the respective junction cleavage sites c i and c j of the two epitopes. This increases the likelihood of recovering all desired epitopes (Fig. 1b), which in turn increases the likelihood of them being loaded and presented on HLA-I molecules. If only a few epitopes are correctly processed and neo-epitopes are formed (Fig. 1b), the influence of these neo-epitopes on the immunological processes should be minimized, so that the risk of undesired immune responses is reduced. This can be achieved by designing the spacer sequences in such a way that the potential neo-epitopes spanning the connected epitopes e i , e j and their spacer s ij are minimally immunogenic. To approach this problem computationally, proteasomal cleavage and immunogenicity prediction models are needed. In T-cell epitope prediction, proteasomal cleavage prediction was found to have a minor impact on prediction performance [18, 19]. However, in the context of in silico string-of-beads design, its impact is much more pronounced. Here, accurate cleavage prediction is important for predicting the recovery probabilities of the desired epitopes of the SBV, maximizing the individual cleavage probability by rearranging the order of the epitopes, and optimizing spacer sequences. These effects have been shown to be essential for a vaccine's efficacy in several experimental studies [13–16].
In the following, we describe the prediction models used and derive the mathematical formulation to tackle the problem of designing a SBV with flexible spacer sequences. It should be mentioned that the developed framework is restricted to linear prediction methods. Non-linear prediction models, like artificial neural networks (e.g., NetMHC [3]), or even more complex prediction approaches like the one proposed by Zhang et al. [19], would lead to a non-convex, non-linear mixed integer optimization problem that cannot be solved efficiently and optimally even for small instances [20]. Furthermore, the linear prediction methods have to be fully integrated into the optimization framework to be able to solve the corresponding optimization problem efficiently. Integrated linear methods for epitope and cleavage prediction are listed in "Implementation".
Cleavage site model
For cleavage site prediction, we employ the position-specific scoring matrix (PSSM) ϕ C (∙) proposed by Dönnes et al., which uses four C-terminal amino acids and two N-terminal amino acids to predict a cleavage site. It has been shown to give quite robust and generalizable predictions [18].
We define the cleavage objective of spacer s ij and epitope pair e i , e j as the linear combination of the individual cleavage likelihoods of site c i and c j predicted by the PSSM ϕ C :
$$ C\left({e}_i,{e}_j\Big|{s}_{ij}\right):={\displaystyle \sum_{l=0}^{n_c-1}}{\phi}_C\left(S\left[{i}_c+l\right],l\right)+{\phi}_C\left(S\left[{j}_c+l\right],l\right). $$
Here S ∶ = e i s ij e j denotes the concatenated sequence of a spacer and its enclosing epitope pair e i and e j . S[x] indicates the xth character of sequence S, n c represents the number of amino acids used to predict a cleavage site, and i c , j c denote the start of the segments used to predict the cleavage likelihoods at site c i and c j , respectively. The PSSM ϕ C is a 20 × n c matrix, where each row represents an amino acid and each column the position within a sequence of length n c . The entry ϕ C (a, i) of an amino acid a at position i represents the influence of an amino acid at a particular position on the cleavage likelihood. Thus, the log-likelihood of being cleaved is obtained by summing over the entries of ϕ C for a given sequence of length n c .
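To make the scoring concrete, the following minimal Python sketch evaluates the cleavage objective C(e_i, e_j | s_ij) for a fixed spacer. The PSSM weights are random placeholders rather than the published PCM matrix, and the window offsets simply encode the four C-terminal and two N-terminal residues described above.

```python
import random

random.seed(0)
AA = "ACDEFGHIKLMNPQRSTVWY"
N_C = 6  # four C-terminal + two N-terminal residues per predicted cleavage site

# Dummy PSSM: amino acid -> list of N_C positional weights (placeholders, not PCM values).
pssm_c = {a: [random.uniform(-1.0, 1.0) for _ in range(N_C)] for a in AA}

def cleavage_site_score(pssm, seq, window_start):
    """Sum the PSSM contributions of the n_c residues around a single cleavage site."""
    return sum(pssm[seq[window_start + l]][l] for l in range(N_C))

def junction_objective(pssm, e_i, spacer, e_j):
    """C(e_i, e_j | s_ij): combined score of the two junction cleavage sites c_i and c_j."""
    s = e_i + spacer + e_j
    i_c = len(e_i) - 4                  # window start of the site after e_i (4 C-terminal residues)
    j_c = len(e_i) + len(spacer) - 4    # window start of the site after the spacer
    return cleavage_site_score(pssm, s, i_c) + cleavage_site_score(pssm, s, j_c)

print(junction_objective(pssm_c, "KLLEEVLLL", "HDH", "ALADGVQKV"))
```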
Immunogenicity model
Our immunogenicity model is based on the formulation proposed by Toussaint et al., which assumes that each epitope independently influences the immune response with respect to a target population or individual represented by a set of HLA alleles H [8]. The contribution of an HLA allele h ∈ H is directly proportional to the probability p h of the allele occurring within any patient of the target population H. We, thus, obtain
$$ I\left(S\Big|H\right):=\sum_{h\in H}{p}_h\sum_{i=1}^{n-{n}_e} \max \left(0,\left(\sum_{j=0}^{n_e-1}{\phi}_I\left(h,S\left[i+j\right],j\right)\right)-{\tau}_h\right) $$
where S is the input sequence of length n. ϕ I (∙) represents a linear model predicting the immunogenicity of an epitope of length n e for an HLA allele h ∈ H and τ h characterizes the threshold of the HLA allele. For the immunogenicity predictor, we use SYFPEITHI, a PSSM generated from natural processed HLA ligands [2].
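The immunogenicity term can be sketched analogously; the allele frequency, per-allele PSSM, and threshold used in this illustration are made-up placeholders, not SYFPEITHI parameters.

```python
def neo_immunogenicity(seq, alleles, epitope_len=9):
    """
    I(S|H): for every allele h, weight by its frequency p_h and add the thresholded
    PSSM score of every epitope_len-mer window of S. 'alleles' maps an allele name to
    (p_h, per-allele PSSM, threshold tau_h); all values here are illustrative only.
    """
    total = 0.0
    for p_h, pssm, tau in alleles.values():
        for i in range(len(seq) - epitope_len + 1):
            score = sum(pssm[seq[i + j]][j] for j in range(epitope_len))
            total += p_h * max(0.0, score - tau)
    return total

# Example with a single made-up allele (frequency, flat 9-column PSSM, threshold):
toy_pssm = {a: [0.1] * 9 for a in "ACDEFGHIKLMNPQRSTVWY"}
print(neo_immunogenicity("KLLEEVLLLHDHALADGVQKV", {"HLA-A*02:01": (0.3, toy_pssm, 0.5)}))
```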
Problem definition as multi-objective optimization
From the discussion of the previous sections, it becomes apparent that for successfully designing a spacer sequence s ij for an epitope pair e i , e j , one has to consider multiple design goals. On the one hand, the spacer sequence should be designed to maximize the cleavage probabilities of the cleavage sites c i and c j . On the other hand, it should also minimize the neo-immunogenicity I(∙) of the complete sequence S := e i s ij e j . Such problems can be conveniently described as multi-objective optimization problems. Solving a multi-objective optimization problem yields Pareto-optimal solutions that represent trade-offs between all objective functions.
Most approaches for solving multi-objective optimization problems use scalarization techniques combining the different objectives [21]. A common approach linearly combines the objectives weighted by a coefficient reflecting the designers' preferences. However, identifying the best weights is difficult because (a) the numerical properties of the objective functions usually differ and (b) the effect of the defined weights is hard to determine a priori.
Since our stated problem exhibits a clear ordering of the objectives with respect to their priority, namely junction-cleavage likelihood over neo-immunogenicity, the problem of finding a Pareto-optimal solution can be significantly simplified by applying lexicographical ordered optimization (LO). In LO, the objectives are ordered based on their importance and several single objective problems of the following form are iteratively solved:
$$ \begin{aligned} &\min_{x}\ f_i(x) \\ &\text{s.t. } f_j(x) \le f_j\left(x_j^{*}\right) \\ &\text{where } i \in \{1,\dots,N\},\ j \in \{1,\dots,i-1\}\ \text{if } i>1, \end{aligned} $$
where i represents the priority of the objective function, and f j (x j *) the optimum of the jth objective function found at the jth iteration [22]. Note that after the first iteration, f j (x j *) does not necessarily obtain the same solution as the independent optimization of f j (x), since new constraints have been added to the problem formulation.
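The following schematic Python sketch illustrates the lexicographic loop; the solve callback is a stand-in for any exact single-objective solver and is not part of the published implementation.

```python
def lexicographic_optimize(objectives, solve, relax=1.0):
    """
    Solve single-objective problems in priority order; after each stage, the achieved
    optimum (optionally relaxed by 'relax', cf. the trade-off parameter alpha below)
    is added as a bound for all later stages. 'solve(objective, bounds)' stands for
    any exact single-objective solver and must return (solution, objective_value).
    """
    bounds = []          # list of (objective, required_level) pairs
    solution = None
    for f in objectives:
        solution, f_star = solve(f, bounds)
        bounds.append((f, relax * f_star))
    return solution
```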
Spacer design with fixed length
We now formulate the problem of designing a spacer of fixed length k as a bi-objective mixed integer linear program (ILP). We represent each position i and amino acid a of the concatenated sequence of spacer and epitope pairs with a binary decision variable x i,a . Additionally, we allow all 20 amino acids to appear within the spacer sequence. A constraint has to be added to allow only one amino acid per position. The complete Pareto formulation has, thus, the following form:
$$ \begin{aligned} \max_{x}\ & \sum_{l=0}^{n_c-1}\left(\sum_{a\in S_{i_c+l}} x_{i_c+l,a}\,\phi_C(a,l) + \sum_{b\in S_{j_c+l}} x_{j_c+l,b}\,\phi_C(b,l)\right) \\ \min_{x}\ & \sum_{h\in H} p_h \sum_{i=1}^{n-n_e} \max\left(0,\ \Big(\sum_{j=0}^{n_e-1}\sum_{a\in S_{i+j}} x_{i+j,a}\,\phi_I(h,a,j)\Big) - \tau_h\right) \\ \text{s.t. } & \sum_{a\in S_i} x_{i,a} \le 1, \quad \forall\, i\in\{1,\dots,n\}, \end{aligned} $$
where S i denotes the set of amino acids allowed at position i.
Following the LO definition, we solve two consecutive ILPs to yield a lexicographically optimal solution:
$$ \begin{aligned} \mathrm{LO}_{\mathrm{spacer}}&(e_i,e_j,k) := \\ \mathrm{P1}\quad z_1^{*} :=\ & \max_{x} \sum_{l=0}^{n_c-1}\left(\sum_{a\in S_{i_c+l}} x_{i_c+l,a}\,\phi_C(a,l) + \sum_{b\in S_{j_c+l}} x_{j_c+l,b}\,\phi_C(b,l)\right) \\ & \text{s.t. } \sum_{a\in S_i} x_{i,a} \le 1, \quad \forall\, i\in\{1,\dots,n\} \\ \mathrm{P2}\quad z_2^{*} :=\ & \min_{x} \sum_{h\in H} p_h \sum_{i=1}^{n-n_e} \max\left(0,\ \Big(\sum_{j=0}^{n_e-1}\sum_{a\in S_{i+j}} x_{i+j,a}\,\phi_I(h,a,j)\Big) - \tau_h\right) \\ & \text{s.t. } \sum_{a\in S_i} x_{i,a} \le 1, \quad \forall\, i\in\{1,\dots,n\} \\ & \phantom{\text{s.t. }} \sum_{l=0}^{n_c-1}\left(\sum_{a\in S_{i_c+l}} x_{i_c+l,a}\,\phi_C(a,l) + \sum_{b\in S_{j_c+l}} x_{j_c+l,b}\,\phi_C(b,l)\right) \ge \alpha z_1^{*} \end{aligned} $$
Here, we restrict P2 to obtain at least α ∈ [0, 1] fraction of the maximal cleavage score achieved by solving P1. α represents the trade-off between cleavage likelihood and the likelihood of decreasing the immunogenicity score.
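As a hedged illustration of how such a stage can be written with Pyomo, the sketch below models only P1 (maximal junction cleavage for a spacer of fixed length k). The PSSM is a placeholder, epitope positions are kept fixed rather than modeled through the sets S i, and the max(0, ·) terms of P2, which require auxiliary variables, are omitted; this is not the published OptiVac code.

```python
from pyomo.environ import (ConcreteModel, Var, Binary, Constraint, Objective,
                           SolverFactory, maximize)

AA = list("ACDEFGHIKLMNPQRSTVWY")
N_C = 6

def spacer_p1(e_i, e_j, k, pssm):
    """Stage P1: binary variables x[i, a] choose one residue per spacer position so
    that the two junction cleavage scores are maximal (epitope residues stay fixed)."""
    m = ConcreteModel()
    spacer_pos = range(len(e_i), len(e_i) + k)       # only spacer positions are free here
    m.x = Var(spacer_pos, AA, domain=Binary)

    def one_residue(m, i):
        # exactly one amino acid per spacer position (the text allows "<= 1" with
        # restricted sets S_i; requiring equality is a simplification for this sketch)
        return sum(m.x[i, a] for a in AA) == 1
    m.one_residue = Constraint(spacer_pos, rule=one_residue)

    i_c = len(e_i) - 4                               # window start of cleavage site c_i
    j_c = len(e_i) + k - 4                           # window start of cleavage site c_j
    fixed = e_i + "?" * k + e_j                      # '?' marks variable spacer positions

    def contrib(pos, col):
        """PSSM contribution of one position: a variable term inside the spacer,
        a constant for fixed epitope residues."""
        if pos in spacer_pos:
            return sum(m.x[pos, a] * pssm[a][col] for a in AA)
        return pssm[fixed[pos]][col]

    m.obj = Objective(expr=sum(contrib(i_c + l, l) + contrib(j_c + l, l)
                               for l in range(N_C)), sense=maximize)
    return m

# model = spacer_p1("KLLEEVLLL", "ALADGVQKV", 3, pssm_c)   # pssm_c as in the earlier sketch
# SolverFactory("cbc").solve(model)                        # any MILP solver; the paper used CPLEX
```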
String-of-beads design with spacers of flexible length
To design a complete string-of-beads with flexible spacer lengths, the introduced LO formulation is iteratively solved for each pair e i , e j ∈ E of epitopes with varying spacer length k ∈ {0, …, K}. The design with the highest minimum of both cleavage site likelihoods is selected and the scores obtained are used to initialize a fully connected and directed graph, where the negative cleavage scores represent the weights of the edges between epitope pairs. Following Toussaint et al., a TSP instance is formulated based on this graph by adding a node that represents the N- and C-termini of the SBV and connecting it with all other nodes with zero edge weights (Fig. 2). Solving this formulated TSP instance yields an optimal ordering of the epitopes. Together with the optimized spacers, we thus obtain an optimal sequence for the entire vaccine construct. The description of the algorithm in pseudo-code can be found in Additional file 1.
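For small epitope sets, the assembly step can be illustrated by brute force, as in the sketch below; the published method instead solves the equivalent TSP instance, using the Lin–Kernighan–Helsgaun heuristic for large instances. pair_score is assumed to return the junction cleavage score of the best spacer for an ordered epitope pair.

```python
from itertools import permutations

def assemble(epitopes, pair_score):
    """Return the ordering of epitopes with the highest total junction cleavage score.
    Brute force is shown for illustration only; large instances are solved as a TSP."""
    best_order, best_total = None, float("-inf")
    for order in permutations(epitopes):
        total = sum(pair_score(order[i], order[i + 1]) for i in range(len(order) - 1))
        if total > best_total:
            best_order, best_total = order, total
    return best_order, best_total
```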
Example of a string-of-beads traveling salesperson (TSP) graph. Solving a TSP yields the shortest round trip, which visits each node exactly once. To solve the epitope assembly problem, each epitope is assigned to a node and artificial start and end nodes, representing the N- and C-terminals of the SBV, are added to the graph. The edges are weighted by the negative cleavage likelihood ratios of the two adjacent epitopes and labeled with the corresponding spacer of the epitope pair. Red edges mark the optimal round trip leading to an SBV of KLLEEVLLL-HDH-ALADGVQKV-HH-SVASTTTGV
Non-junction cleavage site minimization
Besides the maximization of the junction cleavage likelihood, minimizing the likelihood of being cleaved at any other position will also improve the recovery probability of the therapeutic epitopes. Non-junction cleavage sites are partly influenced by the length of the spacer sequence and the epitope pairing. Therefore, we treat the minimization of non-junction cleavage sites as an optional third design goal and add to the sequence of consecutively solved ILPs a third optimization problem of the form:
$$ \begin{aligned} \mathrm{LO}_{\mathrm{spacerEx}}&(e_i,e_j,k) := \dots \\ \mathrm{P3}\quad & \min_{x} \sum_{i=1}^{n-n_c}\sum_{j=0}^{n_c-1}\sum_{a\in S_{i+j}} x_{i+j,a}\,\phi_C(a,j) \\ & \text{s.t. } \sum_{a\in S_i} x_{i,a} \le 1, \quad \forall\, i\in\{1,\dots,n\} \\ & \phantom{\text{s.t. }} \sum_{l=0}^{n_c-1}\left(\sum_{a\in S_{i_c+l}} x_{i_c+l,a}\,\phi_C(a,l) + \sum_{b\in S_{j_c+l}} x_{j_c+l,b}\,\phi_C(b,l)\right) \ge \alpha z_1^{*} \\ & \phantom{\text{s.t. }} \sum_{h\in H} p_h \sum_{i=1}^{n-n_e} \max\left(0,\ \Big(\sum_{j=0}^{n_e-1}\sum_{a\in S_{i+j}} x_{i+j,a}\,\phi_I(h,a,j)\Big) - \tau_h\right) \le (2-\beta)\, z_2^{*} \end{aligned} $$
Here again, α and β represent the trade-offs between the three objective functions. The influence of α and β on cleavage likelihood, neo-immunogenicity, and non-junction cleavage likelihood is depicted in Additional file 2.
Implementation
To solve the problem efficiently, the spacer design was parallelized and the TSP solution was approximated using the Lin–Kernighan–Helsgaun heuristic [23]. The model was implemented in Python 2.7 using Pyomo 4.0 [24] and solved with ILOG CPLEX 12.5 (www.ilog.com) and the Lin–Kernighan–Helsgaun heuristic [23]. The complete framework was integrated into EpiToolKit, a web-based platform for rational vaccine design. It can be accessed at www.epitoolkit.de under Spacer Design [25]. The source code and example files can be found at https://github.com/FRED-2/OptiVac. The implementation currently supports SYFPEITHI [2], BIMAS [26], SMM [27], and SMMPMBEC [28] for epitope prediction, and PCM [18] and ProteaSMM [29] for proteasomal cleavage prediction. The statistical analysis was conducted using R (www.r-project.org). Statistical significance was considered at a significance level of 0.05. Data used in the statistical analysis can be found in Additional files 3 and 4.
Designed spacers increase cleavage likelihood and decrease neo-immunogenicity
To validate the model performance, 1000 random epitope pairs, predicted for proteins of the cytomegalovirus strain AD169 (UniProt Proteome ID UP000008991), were generated and spacers of length 1–6 designed and optimized for the HLA distribution of the European population using α = 0.99. The fold change in cleavage likelihood as well as the neo-immunogenicity were compared for epitopes concatenated without spacers, with a commonly used fixed spacer (AAY) [16, 30, 31], and with optimally determined spacers (Fig. 3).
Fold change in cleavage likelihood and differences in neo-immunogenicity compared for 1000 randomly sampled epitope pairs. Spacers of lengths 1–6 were designed with the described model. The cleavage probability (a) and immunogenicity (b) were compared for epitope pairs concatenated without a spacer sequence, epitope pairs combined with a commonly used spacer sequence (AAY), and pairs combined with optimally designed spacers. Black error bars represent the 68 % confidence intervals
For each spacer length, a significant increase in cleavage likelihood could be observed for epitope pairs with optimized spacers compared to epitope pairs without spacers (paired one-sided Wilcoxon rank-sum test, Bonferroni corrected). In addition, the optimized spacers outperformed the constructs with a fixed spacer after a length of two (paired one-sided Wilcoxon rank-sum test, Bonferroni corrected). The maximum increase in cleavage likelihood was achieved with a spacer length of four, which is not surprising since the applied cleavage model uses four C- and two N-terminal amino acids to predict a cleavage site. The use of optimal spacer sequences resulted in a 7.7-fold increase in cleavage likelihood compared to epitope pairs without spacer sequences and a twofold increase compared to epitope pairs with a fixed AAY spacer.
In addition, significant improvements could be observed in terms of reduced neo-immunogenicity when using optimized spacers compared to both designs with fixed spacers and without spacers (paired one-sided Wilcoxon rank-sum test, Bonferroni corrected). With increasing spacer length, the immunogenicity decreased when using optimal spacer sequences. An average neo-immunogenicity reduction of 1.9-fold and 2.7-fold could be achieved at a spacer length of four compared to epitope pairs without spacers and fixed spacers, respectively. Detailed results can be found in Additional file 3.
String-of-beads designs with optimal spacers improve epitope recovery
A pool of epitopes predicted to bind to at least one HLA allele present in a European population was generated. Out of this pool, random sets of size l ∈ {3, 5, 10, 15, 20, 25, 30} were selected. The optimal ordering was determined for the string-of-beads construct without (SBV) and with spacer sequences (SBVspacer) for a maximum spacer length of k = 6 amino acids. Additionally, ten randomly ordered strings-of-beads with fixed AAY spacers (SBVAAY) for the given epitope set were generated. This procedure was repeated 50 times for each set size. The junction cleavage likelihood averaged over the number of arising junction sites, the fraction of recovered epitopes (i.e., epitopes with preceding and succeeding C-terminal cleavage sites with positive cleavage score), as well as the neo-immunogenicity of the complete construct normalized by the number of included epitopes were compared between the strings-of-beads with a spacer, without spacer sequences, and the average performance of the random constructs with fixed spacers (Fig. 4).
Comparison of string-of-beads with and without spacer sequences. Average junction cleavage likelihood (a), recovery percentage (b), and neo-immunogenicity (c) were measured for optimal string-of-beads designs with, without, and fixed AAY spacers. The string-of-beads constructs comprised three to 30 randomly selected epitopes. For each set size, the sampling was repeated 50 times. The maximum spacer length was set to k = 6. Black error bars and colored outlines represent the 68 % confidence intervals
The average junction cleavage scores of SBVspacer and SBVAAY were stable and well above the cleavage threshold of 0.0 for all set sizes, with an average score of 1.74 ± 0.63 and 0.73 ± 0.53, respectively. The average junction cleavage score for SBV decreased with increasing set sizes and was below the cleavage threshold even for small set sizes with an average score of −0.85 ± 1.09. This was also reflected in the percentage of recovered epitopes. SBV exhibited a decreasing recovery with increasing set sizes with an average of 15.4 ± 24.3 %, while SBVspacer and SBVAAY achieved a stable average recovery of 78.3 ± 16.2 % and 62.7 ± 15.2 % corresponding to a fivefold and fourfold increase, respectively. SBVspacer also consistently outperformed SBVAAY, both in cleavage likelihood (2.38-fold increase) and recovery rate (1.25-fold increase).
The differences in neo-immunogenicity were not as strong, which is expected due to the chosen value of α. SBVspacer consistently achieved a lower neo-immunogenicity score (average 1.88 ± 0.59) than SBV (average 3.37 ± 0.93) and SBVAAY (average 4.31 ± 0.99), resulting in a decrease of 44.2 % and 56.8 %, respectively.
The optimal spacer length averaged at 3.23 ± 0.50 amino acids. The run time for instances with 30 epitopes was 5 min on average (maximum 5.6 min) on current commodity hardware (12-core Intel Xeon E5-2620 running at 2 GHz). Detailed results can be found in Additional file 4.
Commonly used spacer designs tend to be worse than optimal designs
Several spacer sequences have been proposed in various settings ranging from a prophylactic vaccine to therapeutic cancer vaccine studies [15, 16, 30, 32–34]. However, these spacer sequences are not universally applicable and their usefulness is dependent on the epitope pairs they connect. To show the potential efficacy of the proposed model, we compared multiepitope studies that used spacers with our in silico designed spacers in terms of epitope recovery and induced neo-epitopes. An epitope was considered recovered if its preceding and succeeding cleavage sites were likely to be cleaved, as predicted by PCM (i.e. PCM score > 0.0). Neo-epitope prediction was performed with SYFPEITHI using the default threshold (i.e. SYFPEITHI score ≥ 20). Additionally, we computed the optimal ordering and selection of the experimental spacers similar to the approach in [35].
Levy et al. proposed a therapeutic multiepitope polypeptide consisting of HLA-A*02:01 restricted modified epitopes derived from different melanoma-associated antigens (gp100:209–217(210 M): IMDQVPFSV, gp100:280–288(288 V): YLEPGEVTV; Mart1:27–35(27 L): LAGIGILTV; tyrosinase: 368–376(370D): YMDGTMSQV) and showed the proteasomal-dependent efficacy in vitro using the peripheral blood mononuclear cells of healthy donors and patients undergoing treatment [30]. To combine the selected peptides, a natively derived spacer sequence (RKSY(L)) as well as experimentally derived spacers (AAY and ALL/SSL) were used. The selected epitopes were included multiple times in the polypeptide combined with the different spacers to maximize the recovery probability. Therefore, we compared the different segments of the vaccine that were connected with the same spacer sequences (Fig. 5). Detailed results of the neo-epitope and cleavage site predictions can be found in Additional file 5.
Comparison between experimentally used spacer sequences and in silico designed spacer sequences for the multiepitope polypeptide proposed by Levy et al. Red bars represent predicted epitopes and the intensity indicates overlapping epitopes at that position. The blue rectangles represent predicted C-terminal cleavage sites. Spacer sequences are marked in red. A tick indicates the start position of a predicted nine-mer epitope. Epitope and cleavage site predictions were performed with SYFPEITHI and PCM, respectively. A peptide was predicted as an epitope if its prediction score was equal to or above a threshold of 20 (default threshold of SYFPEITHI). A cleavage site was said to be cleaved if the predicted PCM score was above zero. An epitope was defined as recovered if both preceding and succeeding cleavage sites were predicted to be cleaved
In general, the optimal SBV design outperformed the experimentally used spacer sequences both in terms of therapeutic epitope recovery and in reduced neo-epitope appearance. With the designed spacers, 100 % of therapeutic epitopes could be recovered without generating neo-epitopes spanning the spacer sequences. The experimentally used spacers, on the other hand, either generated neo-epitopes or were not able to recover an essential amount of the therapeutic epitopes. With the spacer RKSY(L), only one out of four epitopes could be recovered, and ALL induced five neo-epitopes spanning the spacer. The Mart1-derived epitope and the combination of SLL and AAY generated neo-epitopes and resulted in the recovery of one out of four epitopes only. Even the design with optimally ordered epitopes and selected experimental spacer sequences could not recover all epitopes and introduced neo-epitopes. To establish the effect of different (linear) epitope prediction methods, the comparison was repeated with different methods (BIMAS [26] and SMM [27]). The recovery analysis was again performed with PCM, and default thresholds for BIMAS (predicted T 1/2 ≥ 100) and SMM (predicted IC50 ≤ 500 nM) were used for neo-epitope detection. All therapeutic epitopes could be recovered using the in silico designed spacers with a smaller or equal number of neo-epitopes compared to the best experimentally used spacer sequence. While there were differences in detail between the methods, their overall behavior remained the same. Differences can be attributed to variations in the prediction accuracy of the methods (Additional files 5 and 6).
Similar results could be observed for the SBV construct proposed by Ding et al. [15] (Additional files 7 and 8). The proposed SBV was composed of T-cell epitopes derived from the hepatitis B virus X protein, which were combined with different spacer sequences to reduce the number of junction neo-epitopes. With the in silico designed spacer sequences, all therapeutic epitopes could be recovered without introducing neo-epitopes, whereas the experimentally used spacers induced neo-epitopes and were not able to recover all therapeutic epitopes.
In this work, we propose a mathematical model for designing spacer sequences of flexible length for SBVs by exploiting existing proteasomal cleavage and epitope prediction methods. We combined the model with a TSP approach for optimal epitope ordering. We also addressed the problem of neo-epitopes and non-junction cleavage sites arising from spacer sequences and the order of the epitopes within the string-of-beads by extending the formulation with two additional objective functions. To solve the multi-objective optimization problem efficiently, we employ lexicographical optimization techniques.
The efficacy of the model was shown by comparing the recovery rates and neo-immunogenicity of optimal designs with commonly used fixed spacer sequences and spacer-less designs. In each case, the optimal design led to increased predicted epitope recovery and reduced generation of neo-antigens.
We also compared experimentally tested string-of-beads designs that used spacer sequences with our optimized designs. The experimentally used spacer sequences were often sub-optimally chosen for the connecting epitopes. As a consequence, there were neo-epitopes spanning the spacer sequences or proteasomal cleavage could not be guided to cleave the therapeutic epitopes correctly. In contrast, the in silico designed string-of-beads with optimally determined spacers showed improved cleavage patterns and reduced neo-immunogenicity. Often all therapeutic epitopes could be correctly cleaved without introducing neo-epitopes.
An obvious limitation of the current method is its reliance on computational models for proteasomal cleavage and epitope prediction. While models for HLA class I binding prediction exhibit a high accuracy, proteasomal cleavage models still leave room for improvements [36]. Currently, the approach is restricted to HLA class I epitopes but could be effortlessly extended once a cleavage prediction method for HLA-II ligands becomes available. In addition, the framework is designed flexibly enough to replace the underlying proteasomal cleavage prediction method, once more reliable computational prediction models are published. An experimental validation of selected optimal spacer designs is a non-trivial task. It cannot be performed as exhaustively as our computational study – the number of possible designs is simply too large. An experimental validation will thus, most likely, be limited to comparing only a few selected optimal designs to fixed spacer or spacer-less designs. Such validation is planned as future work together with experimental partners.
In conclusion, our method is a first framework that optimally designs both epitope order and spacers for SBV design. The mathematical method employs state-of-the-art prediction methods, but does not depend on specific methods. Our model predicts an increased recovery of desired epitopes and a reduced production of neo-epitopes compared to both fixed spacer and spacer-less designs.
EV:
epitope-based vaccine
HLA:
human leucocyte antigen
ILP:
integer linear program
LO:
lexicographical ordered optimization
PSSM:
position-specific scoring matrix
SBV:
string-of-beads vaccine
TSP:
traveling salesperson problem
Purcell AW, McCluskey J, Rossjohn J. More than one reason to rethink the use of peptides in vaccine design. Nat Rev Drug Discov. 2007;6(5):404–14.
Rammensee H-G, Bachmann J, Emmerich NPN, Bachor OA, Stevanović S. SYFPEITHI: database for MHC ligands and peptide motifs. Immunogenetics. 1999;50(3–4):213–19.
Lundegaard C, Lamberth K, Harndahl M, Buus S, Lund O, Nielsen M. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8–11. Nucleic Acids Res. 2008;36 suppl 2:W509–12.
Dönnes P, Elofsson A. Prediction of MHC class I binding peptides, using SVMHC. BMC Bioinform. 2002;3(1):25.
Singh H, Raghava G. ProPred: prediction of HLA-DR binding sites. Bioinformatics. 2001;17(12):1236–7.
Vider-Shalit T, Raffaeli S, Louzoun Y. Virus-epitope vaccine design: informatic matching the HLA-I polymorphism to the virus genome. Mol Immunol. 2007;44(6):1253–61.
Toussaint NC, Maman Y, Kohlbacher O, Louzoun Y. Universal peptide vaccines – optimal peptide vaccine design based on viral sequence conservation. Vaccine. 2011;29(47):8745–53.
Toussaint NC, Dönnes P, Kohlbacher O. A mathematical framework for the selection of an optimal set of peptides for epitope-based vaccines. PLoS Comput Biol. 2008;4(12):e1000246.
Lundegaard C, Buggert M, Karlsson A, Lund O, Perez C, Nielsen M, editors. PopCover: a method for selecting of peptides with optimal population and pathogen coverage. Proceedings of the 1st ACM International Conference on Bioinformatics and Computational Biology; 2010. ACM.
Fischer W, Perkins S, Theiler J, Bhattacharya T, Yusim K, Funkhouser R, et al. Polyvalent vaccines for optimal coverage of potential T-cell epitopes in global HIV-1 variants. Nat Med. 2007;13(1):100–6.
Toussaint NC, Kohlbacher O. OptiTope – a web server for the selection of an optimal set of peptides for epitope-based vaccines. Nucleic Acids Res. 2009;37 suppl 2:W617–22.
Sette A, Fikes J. Epitope-based vaccines: an update on epitope identification, vaccine design and delivery. Curr Opin Immunol. 2003;15(4):461–70.
Cornet S, Miconnet I, Menez J, Lemonnier F, Kosmatopoulos K. Optimal organization of a polypeptide-based candidate cancer vaccine composed of cryptic tumor peptides with enhanced immunogenicity. Vaccine. 2006;24(12):2102–9.
Livingston BD, Newman M, Crimi C, McKinney D, Chesnut R, Sette A. Optimization of epitope processing enhances immunogenicity of multiepitope DNA vaccines. Vaccine. 2001;19(32):4652–60.
Ding FX, Wang F, Lu YM, Li K, Wang KH, He XW, et al. Multiepitope peptide‐loaded virus‐like particles as a vaccine against hepatitis B virus–related hepatocellular carcinoma. Hepatology. 2009;49(5):1492–502.
Velders MP, Weijzen S, Eiben GL, Elmishad AG, Kloetzel P-M, Higgins T, et al. Defined flanking spacers and enhanced proteolysis is essential for eradication of established tumors by an epitope string DNA vaccine. J Immunol. 2001;166(9):5366–73.
Kreiter S, Vormehr M, van de Roemer N, Diken M, Löwer M, Diekmann J, et al. Mutant MHC class II epitopes drive therapeutic immune responses to cancer. Nature. 2015;520(7549):692–6.
Dönnes P, Kohlbacher O. Integrated modeling of the major events in the MHC class I antigen processing pathway. Protein Sci. 2005;14(8):2132–40.
Zhang W, Niu Y, Zou H, Luo L, Liu Q, Wu W. Accurate prediction of immunogenic T-cell epitopes from epitope sequences using the genetic algorithm-based ensemble learning. PLoS ONE. 2014;10(5):e0128194.
Hemmecke R, Köppe M, Lee J, Weismantel R. Nonlinear integer programming. 50 years of integer programming 1958–2008. Berlin Heidelberg: Springer; 2010. p. 561–618.
Ehrgott M. A discussion of scalarization techniques for multiple objective integer programming. Ann Oper Res. 2006;147(1):343–60.
Marler RT, Arora JS. Survey of multi-objective optimization methods for engineering. Struct Multidiscip Optim. 2004;26(6):369–95.
Helsgaun K. General k-opt submoves for the Lin–Kernighan TSP heuristic. Math Program Comput. 2009;1(2–3):119–63.
Hart WE, Watson J-P, Woodruff DL. Pyomo: modeling and solving mathematical programs in Python. Math Program Comput. 2011;3(3):219–60.
Schubert B, Brachvogel H-P, Jürges C, Kohlbacher O. EpiToolKit – a web-based workbench for vaccine design. Bioinformatics. 2015;31(13):2211-3. doi:10.1093/bioinformatics/btv116.
Parker KC, Bednarek MA, Coligan JE. Scheme for ranking potential HLA-A2 binding peptides based on independent binding of individual peptide side-chains. J Immunol. 1994;152(1):163–75.
Peters B, Sette A. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method. BMC Bioinform. 2005;6(1):132.
Kim Y, Sidney J, Pinilla C, Sette A, Peters B. Derivation of an amino acid similarity matrix for peptide: MHC binding and its application as a Bayesian prior. BMC Bioinform. 2009;10(1):394.
Tenzer S, Peters B, Bulik S, Schoor O, Lemmel C, Schatz M, et al. Modeling the MHC class I pathway by combining predictions of proteasomal cleavage, TAP transport and MHC class I binding. Cell Mol Life Sci. 2005;62(9):1025–37.
Levy A, Pitcovski J, Frankenburg S, Elias O, Altuvia Y, Margalit H, et al. A melanoma multiepitope polypeptide induces specific CD8+ T-cell response. Cell Immunol. 2007;250(1):24–30.
Aurisicchio L, Fridman A, Bagchi A, Scarselli E, La Monica N, Ciliberto G. A novel minigene scaffold for therapeutic cancer vaccines. Oncoimmunology. 2014;3(1):e27529.
Bazhan S, Karpenko L, Ilyicheva T, Belavin P, Seregin S, Danilyuk N, et al. Rational design based synthetic polyepitope DNA vaccine for eliciting HIV-specific CD8+ T cell responses. Mol Immunol. 2010;47(7):1507–15.
Moss SF, Moise L, Lee DS, Kim W, Zhang S, Lee J, et al. HelicoVax: epitope-based therapeutic Helicobacter pylori vaccination in a mouse model. Vaccine. 2011;29(11):2085–91.
Depla E, Van der Aa A, Livingston BD, Crimi C, Allosery K, De Brabandere V, et al. Rational design of a multiepitope vaccine encoding T-lymphocyte epitopes for treatment of chronic hepatitis B virus infections. J Virol. 2008;82(1):435–50.
Seyed N, Taheri T, Vauchy C, Dosset M, Godet Y, Eslamifar A et al. Immunogenicity evaluation of a rationally designed polytope construct encoding HLA-A* 0201 restricted epitopes derived from Leishmania major related proteins in HLA-A2/DR1 transgenic mice: steps toward polytope vaccine. PLoS ONE. 2014;9(10):e108848. doi: 10.1371/journal.pone.0108848.
Calis JJ, Reinink P, Keller C, Kloetzel PM, Keşmir C. Role of peptide processing predictions in T cell epitope identification: contribution of different prediction programs. Immunogenetics. 2014;67(2):85–93.
This project received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement 633592 (APERIM). OK acknowledges funding from the Deutsche Forschungsgemeinschaft (SFB685/B1).
Center for Bioinformatics, University of Tübingen, 72076, Tübingen, Germany
Benjamin Schubert & Oliver Kohlbacher
Department of Computer Science, Applied Bioinformatics, 72076, Tübingen, Germany
Quantitative Biology Center, 72076, Tübingen, Germany
Oliver Kohlbacher
Faculty of Medicine, University of Tübingen, 72076, Tübingen, Germany
Benjamin Schubert
Correspondence to Benjamin Schubert.
BS developed and implemented the method. BS and OK wrote the paper. OK designed the study. Both authors read and approved the final manuscript.
Algorithm for string-of-beads design with flexible spacer sequences. A description in pseudo-code of the algorithm to determine the optimal ordering of epitopes and spacers for a string-of-beads vaccine. (PDF 1056 kb)
Influence of α and β on cleavage likelihood, neo-immunogenicity, and non-junction cleavage likelihood exemplified for spacers of length three. Cleavage likelihood and neo-immunogenicity decrease linearly with α. For the conservatively chosen α = 0.99, β influences neo-immunogenicity only marginally. Once α is further decreased, β influences neo-immunogenicity in a non-linear manner. Similar behavior can be seen for the non-junction cleavage likelihood. It decreases linearly with α and non-linearly with β. (PDF 51 kb)
Detailed results for comparing epitope pairs with and without spacers. Detailed results for the comparison of epitope pairs with spacers and without spacers including sequences of the paired epitopes and designed spacers, predicted cleavage likelihoods of the two induced cleavage sites, and the combined cleavage likelihood, as well as the neo-immunogenicity of the epitope pair–spacer construct. (XLS 1236 kb)
Detailed results for comparing string-of-beads vaccines of different lengths with and without spacers. Detailed results for the comparison of string-of-beads with spacers and without spacers including the string-of-beads sequences, predicted cleavage likelihoods, neo-immunogenicity, number of neo-epitopes, as well as the recovery rate of the desired epitopes. (XLS 299 kb)
Detailed prediction results for the polypeptide proposed by Levy et al. Detailed results of the neo-epitope and cleavage site prediction analysis performed with PCM for cleavage site prediction, and with SYFPEITHI, BIMAS, and SMM for neo-epitope prediction for the polypeptide of Levy et al. (XLS 76 kb)
Comparison of different epitope prediction methods for in silico spacer design based on the polypeptide proposed by Levy et al. Spacer sequences were constructed with SYFPEITHI, BIMAS, and SMM. Cleavage prediction was performed with PCM, classifying a site as cleaved if its score was greater than zero. The epitope thresholds used for neo-epitope detection were SYFPEITHI score ≥ 20, BIMAS ≥ 100 T 1/2, and SMM ≤ 500 nM. Red bars represent predicted epitopes and the intensity indicates overlapping epitopes at that position. The blue rectangles represent predicted C-terminal cleavage sites. Spacer sequences are marked in red. A tick indicates the start position of a predicted nine-mer epitope. Although the different prediction methods yielded different spacer sequences, the overall result remained the same. The in silico designed spacers were superior in terms of recovered epitopes and neo-epitope formation. (PDF 1198 kb)
Comparison of experimentally used and in silico designed spacers based on the polypeptide proposed by Ding et al. Red bars represent predicted epitopes and the intensity indicates overlapping epitopes at that position. The blue rectangles represent predicted C-terminal cleavage sites. Spacer sequences are marked in red. A tick indicates the start position of a predicted nine-mer epitope. Epitope and cleavage site prediction were performed with SYFPEITHI and PCM, respectively. A nine-mer was predicted as an epitope if its predicted score was equal to or above a threshold of 20 (default threshold of SYFPEITHI). A cleavage site was said to be cleaved if the predicted PCM score was above zero. An epitope was defined as recovered if both the preceding and succeeding cleavage sites were predicted to be cleaved. (PDF 581 kb)
Detailed prediction results of the polypeptide proposed by Ding et al. Detailed results of the neo-epitope and cleavage site prediction analysis performed with SYFPEITHI and PCM on the polypeptide of Ding et al. (XLS 43 kb)
Schubert, B., Kohlbacher, O. Designing string-of-beads vaccines with optimal spacers. Genome Med 8, 9 (2016). https://doi.org/10.1186/s13073-016-0263-6
Received: 30 June 2015
Inflammasome and toll-like receptor signaling in human monocytes after successful cardiopulmonary resuscitation
Alexander Asmussen1,
Katrin Fink2,
Hans-Jörg Busch2,
Thomas Helbing1,
Natascha Bourgeois1,
Christoph Bode1 &
Sebastian Grundmann1
Critical Care volume 20, Article number: 170 (2016)
Whole body ischemia-reperfusion injury (IRI) after cardiopulmonary resuscitation (CPR) induces a generalized inflammatory response which contributes to the development of post-cardiac arrest syndrome (PCAS). Recently, pattern recognition receptors (PRRs), such as toll-like receptors (TLRs) and inflammasomes, have been shown to mediate the inflammatory response in IRI. In this study we investigated monocyte PRR signaling and function in PCAS.
Blood samples were drawn in the first 12 hours, and at 24 and 48 hours following return of spontaneous circulation in 51 survivors after cardiac arrest. Monocyte mRNA levels of TLR2, TLR4, interleukin-1 receptor-associated kinase (IRAK)3, IRAK4, NLR family pyrin domain containing (NLRP)1, NLRP3, AIM2, PYCARD, CASP1, and IL1B were determined by real-time quantitative PCR. Ex vivo cytokine production in response to stimulation with TLR ligands Pam3CSK4 and lipopolysaccharide (LPS) was assessed in both whole blood and monocyte culture assays. Ex vivo cytokine production of peripheral blood mononuclear cells (PBMCs) from a healthy volunteer in response to stimulation with patients' sera with or without LPS was assessed. The results were compared to 19 hemodynamically stable patients with coronary artery disease.
Monocyte TLR2, TLR4, IRAK3, IRAK4, NLRP3, PYCARD and IL1B were initially upregulated in patients following cardiac arrest. The NLRP1 and AIM2 inflammasomes were downregulated in resuscitated patients. There was a significant positive correlation between TLR2, TLR4, IRAK3 and IRAK4 expression and the degree of ischemia as assessed by serum lactate levels and the time until return of spontaneous circulation. Nonsurvivors at 30 days had significantly lower mRNA levels of TLR2, IRAK3, IRAK4, NLRP3 and CASP1 in the late phase following cardiac arrest. We observed reduced proinflammatory cytokine release in response to both TLR2 and TLR4 activation in whole blood and monocyte culture assays in patients after CPR. Sera from resuscitated patients attenuated the inflammatory response in cultured PBMCs after co-stimulation with LPS.
Successful resuscitation from cardiac arrest results in changes in monocyte pattern recognition receptor signaling pathways, which may contribute to the post-cardiac arrest syndrome.
Trial registration
The trial was registered in the German Clinical Trials Register (DRKS00009684) on 27/11/2015.
The annual incidence of sudden cardiac arrest ranges between 50 and 100 per 100,000 in the general population in North America and Europe. A recent registry study for out-of-hospital cardiac arrest (OHCA) shows that although a return of spontaneous circulation (ROSC) is obtained in 34.4 %, the prognosis of patients suffering sudden cardiac arrest still remains poor, with an overall survival to hospital discharge rate of 9.6 % [1]. This high mortality rate in patients who initially achieve ROSC can be attributed to a unique pathophysiological condition involving multiple organs known as post-cardiac arrest syndrome (PCAS) [2, 3]. PCAS is characterized by its four major clinical components, namely (1) anoxic brain injury, (2) myocardial dysfunction, (3) systemic ischemia-reperfusion response, and (4) the persistent precipitating pathology [3]. On a pathophysiological level, the initial tissue injury during sudden whole-body ischemia is thought to be aggravated during reperfusion through cardiopulmonary resuscitation and finally by ROSC, resulting in the generation of reactive oxygen species and thereby inducing oxidative stress [4–6]. These events lead to the induction of a systemic inflammatory response with neutrophil activation [7], elevation of plasma cytokines [8] and severe endothelial injury [9–11]. These deleterious pathological processes contribute to microcirculatory disorder [12–14] and vascular leakage [14, 15] and may finally result in a clinical condition comparable to septic shock [8, 16]. However, up to this day, the only causative treatment in post-cardiac arrest care remains therapeutic hypothermia [17].
The aim of this study was to investigate the potential involvement of the innate immune system as a potential modulating factor in the inflammatory response following cardiac arrest. While its important role is well documented in sepsis [18], trauma [19], and tissue damage after ischemia-reperfusion injury (IRI) in specific organs [20], little is known about the contribution of innate immunity to the systemic inflammatory response syndrome after cardiac arrest. As one of the evolutionary oldest barriers against pathogen invasion, the innate immune system recognizes pathogen-associated molecular patterns (PAMPs) via germline-encoded pattern-recognition receptors (PRRs), which lead to an antimicrobial response. Toll-like receptors (TLRs), members of membrane-bound PRRs, and the inflammasomes, which are PRRs located in the cytoplasm, therefore play a pivotal role in the first line of host defense against pathogens by inducing proinflammatory cytokines like interleukin-1 beta (IL-1β) and tumor necrosis factor alpha (TNFα) [21, 22]. It is now evident that these PRRs also play a crucial role in conditions of sterile inflammation, like in IRI, as these receptors also recognize a heterogeneous group of endogenous alarm signals. These danger-associated molecular patterns (DAMPs) [23], such as heat-shock proteins, uric acid, genomic double-stranded DNA, and components of the extracellular matrix, are cell-derived molecules that are released by injured or distressed cells and tissue [24] and can contribute to inflammation via activation of PRRs [25, 26].
In the current study we therefore investigated the involvement of the toll-like receptors and the inflammasome in the systemic inflammatory condition following survived cardiac arrest. Our working hypothesis was that global ischemia-reperfusion injury, induced by circulatory arrest and cardiopulmonary resuscitation, results in the release of DAMPs which activate PRRs. We further hypothesized that this activation results in an expressional change of these receptors at the mRNA level, which alters the response of these PRRs to subsequent stimuli.
Patient recruitment
The study was approved by the ethics committee of the University Medical Center Freiburg (approval number 328/09) and conforms to the Declaration of Helsinki. The trial was registered in the German Clinical Trials Register (DRKS00009684). We prospectively enrolled 55 patients who had undergone successful cardiopulmonary resuscitation (CPR) and were admitted to our intensive care unit at the University Hospital of Freiburg, Germany. The patients' next of kin were informed about the study details. Informed consent was obtained retrospectively from patients who survived to hospital discharge with a good neurological outcome. A total of 20 patients with both stable and unstable coronary artery disease (CAD), but without acute myocardial infarction, were included in this study as control subjects, because the comorbidities and pharmacological and interventional treatment of patients with sudden cardiac arrest are most closely reflected by this group of patients. Written informed consent was obtained from all patients in the control group. Four patients (cases) and one control subject were retrospectively excluded from the study because of violation of the exclusion criteria, which was not evident at the time of study enrollment.
Patients older than 18 years with either in-hospital cardiac arrest (IHCA) or out-of-hospital cardiac arrest (OHCA) due to any cause, who received cardiopulmonary resuscitation for longer than 5 minutes (including downtime before the beginning of CPR) were included in this study. Patients with preexisting acute or chronic inflammatory or infectious disease, and patients taking immunosuppressive medication were excluded from this study, as in these patients a modulation of the monocyte inflammasome or TLR signaling can be expected [22, 27]. Furthermore, patients with apparent multiple organ dysfunction syndrome prior to cardiac arrest were excluded from this study [28].
Blood samples were drawn from resuscitated patients via an arterial line within the first 12 h after admission to our hospital, and after 24 and 48 h, respectively. In the control group, a single blood specimen was collected by sterile venipuncture with a 21-gauge butterfly needle. Samples were drawn slowly and immediately processed.
Monocyte purification
Peripheral blood mononuclear cells (PBMCs) were purified from fresh citrated blood by Biocoll-1.077 density gradient separation (Biochrom, Berlin, Germany) at 460 × g for 30 minutes at room temperature. The mononuclear cell layer was removed and washed two times in cold Dulbecco's phosphate-buffered saline (DPBS) (Life Technologies, Carlsbad, CA, USA), w/o CaCl2 and MgCl2, with 2 mM EDTA, by centrifugation at 200 × g for 12 minutes at 4 °C. Monocytes were isolated by negative selection with the Monocyte Isolation Kit II (Miltenyi Biotech, Bergisch-Gladbach, Germany) according to the manufacturer's instructions. Monocyte purification success was verified by flow cytometry analysis.
Total RNA was extracted by phenol/guanidine-based lysis of monocyte samples and silica membrane-based purification with the miRNeasy Mini Kit (Qiagen, Venlo, Netherlands) according to the manufacturer's protocol. RNA quality and quantity was assessed by Nanodrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). An absorption coefficient at 260 nm/280 nm from 1.8 to 2.0 was considered as pure RNA and led to further processing of the RNA specimen.
cDNA synthesis and quantitative real-time PCR
RNA was reverse transcribed with the Transcriptor First Strand cDNA Synthesis Kit (Roche, Basel, Switzerland). The converted cDNA was used for quantitative real-time polymerase chain reaction (qPCR) analysis with the Light Cycler 480 SYBR Green Master I Kit on a Light Cycler 480 Instrument II (Roche, Basel, Switzerland). Primer-pairs were designed with Beacon Designer software (Premier Biosoft, Palo Alto, CA, USA) and are listed in a supplementary table (Additional file 1). Intron-spanning primer-pairs were preferred over intron-flanking primer-pairs. Primer-pair efficiency was determined using a standard curve dilution method. A primer-pair efficiency of 90–110 % was accepted. Relative quantification was used to assess the gene expression of selected genes linked to monocyte TLR and inflammasome signaling. Gene expression was normalized to the two reference genes RNA polymerase 2 (POL2RA) and Beta-2 microglobulin (β2M). POL2RA has been shown to be constantly expressed over multiple mammalian cell lines [29], whereas β2M has been shown to be steadily expressed in activated monocytes after stimulation with lipopolysaccharide (LPS) [30]. Relative copy numbers (RCN) of the selected genes were calculated using the equation:
$$ \mathrm{RCN} = E^{-\Delta C_t} $$
where E is the primer efficiency of the target gene and ΔCt is the difference between the threshold cycle of the target gene and the geometric mean of the threshold cycles of the two reference genes.
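As an illustration, the relative copy number can be computed from raw Ct values as follows; the Ct values and primer efficiency in the example are made-up numbers, not study data.

```python
from statistics import geometric_mean  # available in Python >= 3.8

def relative_copy_number(ct_target, ct_references, efficiency=2.0):
    """RCN = E^(-(Ct_target - geometric mean of the reference-gene Ct values))."""
    delta_ct = ct_target - geometric_mean(ct_references)
    return efficiency ** (-delta_ct)

# Made-up Ct values and efficiency, for illustration only:
print(relative_copy_number(27.4, [24.1, 25.3], efficiency=1.95))
```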
Stimulation of whole blood samples, cultured monocytes and PBMCs
NH4-heparinized whole blood (100 μl) was stimulated in sterile 96-well plates with 100 μl TLR4 ligand LPS from Escherichia coli 055:B5 (Sigma, Missouri, USA) at a final concentration of 10 ng/ml and 100 μl of the synthetic TLR2 ligand Pam3CSK4 (Merck Millipore, Darmstadt, Germany) at a final concentration of 500 ng/ml as previously described [31]. For stimulation of isolated monocytes, 106 purified monocytes were resuspended in 900 μl Roswell Park Memorial Institute (RPMI)-1640 medium supplemented with 2 mM L-glutamine, 1 % non-essential amino acid solution, 200 U/ml penicillin, 200 μg/ml streptomycin, and 10 % fetal calf serum in sterile 12-well plates. Monocytes were stimulated with 100 μl LPS for a final concentration of 10 ng/ml. PBMCs were isolated from a healthy control. For stimulation of PBMCs, 0.5 × 106 PBMCs were resuspended in 400 μl RPMI-1640 medium supplemented with 2 mM L-glutamine, 1 % non-essential amino acid solution, 200 U/ml penicillin, 200 μg/ml streptomycin, and incubated with 100 μl serum at a final concentration of 20 % from either resuscitated patients or patients with CAD. Additionally, 0.5 × 106 PBMCs were co-stimulated with 20 % patient serum and 10 ng/ml LPS in the previously described cell culture medium. Whole blood, monocyte, and PBMC cultures were incubated for 12 h at 37 °C and 5 % CO2. The culture supernatant was stored at −20 °C for further analysis.
TNF-α was determined in TLR2 ligand-activated whole blood supernatants using an enzyme-linked immunosorbent assay (ELISA) (PeliKine compact, Sanquin Reagents, Amsterdam, Netherlands). IL-1β was determined in TLR4 ligand-stimulated whole blood, monocyte, and PBMC culture supernatants (RayBio Human IL-1β ELISA, RayBiotech, Norcross, GA, USA) according to the manufacturer's protocol. For whole blood culture supernatants, the resulting cytokine concentration was standardized to the patient's white blood cell count.
Statistical analysis was performed using SPSS 21 (IBM, Armonk, NY, USA). Gaussian distribution was verified by visualization of the respective histograms, the Shapiro-Wilk test, and a calculation of the z score of skewness and kurtosis. An absolute z score below 1.96 was considered statistically nonsignificant and a normal distribution was assumed [32]. The assumption of homogeneity of variances was verified by the nonparametric Levene test [33]. Fisher's exact test was used to compare categorical variables. Normally distributed unpaired data on an interval scale consisting of multiple groups were analyzed with one-way analysis of variance (ANOVA) and post-hoc analysis with all-pairwise comparison. Non-normally distributed unpaired data on an interval scale consisting of two groups were analyzed using the Mann–Whitney U test. Non-normally distributed unpaired data on an interval scale consisting of multiple groups were analyzed with the Kruskal-Wallis test and post-hoc analysis using the Dunn-Bonferroni approach. Correlation between selected variables was estimated by Spearman's rank correlation. Statistical significance was defined as a two-tailed p value <0.05. Continuous variables are reported as mean value ± standard deviation (SD). Bar graphs illustrate the mean value, with the error bars indicating the SD.
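As an illustration of this normality screen, the sketch below combines the Shapiro-Wilk test with z scores of skewness and kurtosis. The sample values are hypothetical, and the standard-error formulas are the usual large-sample approximations rather than the exact SPSS implementation.

```python
import numpy as np
from scipy import stats

def normality_screen(x, alpha=0.05):
    """Screen a sample for approximate normality: Shapiro-Wilk test plus
    z scores of skewness and (excess) kurtosis. An absolute z below 1.96
    is taken as compatible with a normal distribution."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    _, p_sw = stats.shapiro(x)
    # Standard errors of skewness and excess kurtosis under normality
    se_skew = np.sqrt(6.0 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
    se_kurt = 2.0 * se_skew * np.sqrt((n * n - 1) / ((n - 3) * (n + 5)))
    z_skew = stats.skew(x) / se_skew
    z_kurt = stats.kurtosis(x) / se_kurt
    return {"shapiro_p": p_sw, "z_skew": z_skew, "z_kurt": z_kurt,
            "looks_normal": p_sw > alpha and abs(z_skew) < 1.96 and abs(z_kurt) < 1.96}

# Hypothetical serum lactate values (mmol/l)
print(normality_screen([1.8, 2.4, 3.1, 2.2, 4.0, 2.9, 3.5, 2.0, 2.6, 3.3]))
```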
A total of 51 patients who had undergone cardiopulmonary resuscitation (CPR group) and 19 patients with CAD were included in this study. The majority of the study population was male. Mean age at the time of the investigation did not differ significantly between the two groups (66.5 ± 11.5 in the resuscitation group vs. 68.9 ± 11.6 in the CAD group; p = 0.44). Of the resuscitated patients, 67 % had significant CAD vs. 100 % in the CAD group (p = 0.003). Although more patients in the CPR group underwent coronary angiography prior to (<12 h before) study enrollment (CPR group 76 % vs. CAD group 42 %; p = 0.01), there was no difference between the two groups in the resulting coronary revascularization through percutaneous coronary intervention (PCI) (CPR 45 % vs. CAD 42 %; p = 1.0) (Table 1).
Both groups had comparable prevalence of preexisting medical conditions such as chronic heart failure, peripheral artery disease, pulmonary hypertension, and chronic liver, renal, or pulmonary disease. The cardiovascular risk profile of patients with CAD indicated greater prevalence of dyslipidemia in the CAD group (68 % vs. 31 % in the resuscitation group; p = 0.007) (Table 1).
Among the study population, 80 % had experienced OHCA and 20 % were successfully resuscitated from IHCA. Ventricular fibrillation and ventricular tachycardia were the most common initial rhythm presentations after cardiac arrest (57 %), while 43 % of the resuscitated patients had asystole or pulseless electrical activity. The mean duration of CPR was 29.8 ± 19.1 minutes and the no-flow time was 2.4 ± 3.9 minutes. The sequential organ failure assessment (SOFA) score was calculated daily in the first 3 days after cardiac arrest and did not differ between the three measuring points in patients after CPR (Table 1).
Mean time from ROSC to blood sampling was 6.5 ± 2.9 h for the first, 25.2 ± 3.1 h for the second, and 48.8 ± 3.0 h for the third specimen of blood. A summary of routine laboratory values is shown in a supplementary table (Additional file 2).
Monocyte TLR and inflammasome mRNA expression in patients after cardiopulmonary resuscitation and the control group
In order to evaluate the potential role of PRRs in the immunoinflammatory syndrome following cardiac arrest, monocyte mRNA levels of genes related to TLR and inflammasome signaling were assessed in patients after CPR and the control group with CAD. Monocyte mRNA levels, expressed as relative copy numbers, are depicted in Fig. 1 and listed in a supplementary table (Additional file 3).
Monocyte toll-like receptor (TLR) and inflammasome mRNA expression in patients who had experienced cardiac arrest and the control group. Shown are monocyte mRNA expression levels of TLR2 (a), TLR4 (b), interleukin-1 receptor-associated kinase (IRAK)3 (c), IRAK4 (d), NLR family pyrin domain containing (NLRP)1 (e), NLRP3 (f), absent in melanoma (AIM)2 (g), PYD and CARD domain containing (PYCARD) (h), caspase 1 (CASP1) (i), and IL1B (j), expressed as mean relative copy numbers (RCN) and standard deviation, from patients after cardiopulmonary resuscitation (CPR) in the first 12 h (CPR t1; n = 30), after 24 h (CPR t2; n = 29) and 48 h (CPR t3; n = 23) following return of spontaneous circulation, and mRNA expression levels in the control group (coronary artery disease (CAD); n = 19). Statistical hypothesis testing was performed using the Kruskal-Wallis test and post-hoc analysis with all-pairwise comparison using the Dunn-Bonferroni approach (*p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001).
TLR signaling
Compared to the control group, we observed significant upregulation of surface PRR TLR2 in the early phase after cardiac arrest, which was subsequently downregulated in the later phase. Resuscitated patients had significantly higher mRNA levels of the surface PRR TLR4 in the first 12 h and 48 h after CPR with a trend towards higher levels in the intermediate phase. Likewise, IRAK4, the main kinase to further promote TLR signaling activation, was upregulated in patients in the early phase after cardiac arrest. Consistent with these results, significantly higher levels of IL1B mRNA could be detected in monocytes in the early phase after cardiac arrest. Conversely, we also observed significantly higher mRNA levels of IRAK3, a negative regulator of monocyte TLR signaling, in the first 24 h after CPR. We went on to investigate whether intracellular PRRs displayed similar regulation after CPR.
Inflammasome signaling
We detected distinct expression patterns of the investigated PRRs, with significant upregulation of monocyte NLRP3 mRNA levels in the first and the second blood sampling after cardiac arrest. In contrast, monocyte mRNA expression of the NLRP1 inflammasome was significantly downregulated compared to the control group in the first 12 h and at 48 h after ROSC. Likewise, we observed significantly lower mRNA expression levels of AIM2 in the first 24 h after CPR. Monocyte mRNA levels of the adaptor protein PYCARD were significantly upregulated at 24 h after ROSC. We did not observe a change in monocyte CASP1 mRNA expression in patients who had experienced cardiac arrest compared to patients with CAD.
Kinetics of TLR and inflammasome mRNA expression levels in the time course after cardiac arrest
In analysis of time-dependent expression of monocyte mRNA in patients after cardiac arrest, significantly higher levels of TLR2, TLR4, IRAK3, NLRP3, and IL1B were observed in patients in the early hours after ROSC compared to the later phase. In contrast, we noticed significant downregulation of AIM2 in monocytes from patients during the first 24 hours after CPR. Following the notion that these expressional changes could correlate with and possibly affect the clinical course of our patients, we performed subgroup comparison in survivors at 30 days after CPR and nonsurvivors.
Comparison of TLR and inflammasome signaling mRNA expression levels in survivors and nonsurvivors after cardiac arrest
Interestingly, a time-dependent decrease in monocyte TLR2, TLR4, IRAK3, IRAK4, NLRP1, NLRP3, PYCARD, and IL1B mRNA expression levels was solely observed in those who did not survive for 30 days after CPR, whereas survivors had stable expression of these transcripts during the observation period. In contrast, both 30-day survivors and nonsurvivors had a time-dependent increase in monocyte AIM2 mRNA expression levels (Additional files 4 and 5).
To further evaluate the prognostic implications of the investigated gene transcripts in patients who had undergone CPR, monocyte mRNA transcript levels were quantitatively compared between 30-day survivors and nonsurvivors: nonsurvivors had a trend towards higher monocyte mRNA expression levels of TLR signaling pathways in the first 12 h after ROSC, which was not statistically significant. We did not observe differences in change in monocyte TLR and inflammasome mRNA expression in CPR survivors and nonsurvivors 24 h after ROSC. Notably, we observed that 30-day nonsurvivors had significantly lower mRNA expression levels of TLR2 (p = 0.003), IRAK3 (p = 0.027), IRAK4 (p = 0.027), NLRP3 (p = 0.006), and CASP1 (p = 0.019) 48 h after ROSC (Table 2; Fig. 2).
Table 2 Monocyte mRNA expression in 30-day survivors and nonsurvivors after sudden cardiac arrest
Monocyte toll-like receptor (TLR)2, interleukin-1 receptor-associated kinase (IRAK)3, IRAK4, NLR family pyrin domain containing (NLRP)3, and caspase (CASP)1 mRNA expression of survivors and nonsurvivors after cardiac arrest. Monocyte mRNA expression levels of TLR2 (a), IRAK3 (b), IRAK4 (c), NLRP3 (d), and CASP1 (e) in 30-day survivors (n = 12) and nonsurvivors (n = 11) 48 h after return of spontaneous circulation (ROSC). The 30-day nonsurvivors had significantly lower monocyte TLR2 (p = 0.003), IRAK3 (p = 0.027), IRAK4 (p = 0.027), NLRP3 (p = 0.006), and CASP1 (p = 0.019) mRNA levels 48 h after ROSC. Statistical hypothesis testing was performed using the Mann–Whitney U test. RCN relative copy number
Association of TLR signaling transcript levels and clinical markers of ischemic injury
As host-derived DAMPs from injured cells have been shown to propagate inflammation via PRRs, we hypothesized that the extent of transcriptional activation of the investigated genes of TLR and inflammasome signaling would be related to clinical markers of ischemic injury such as time from collapse to initiation of CPR, time to ROSC, serum lactate levels, and necessity of a vasopressor therapy. Correlation analyses are listed in a supplementary table (Additional file 6).
Serum lactate levels at the time of blood sampling were significantly positively correlated with TLR2 (r s = 0.570; p = 0.001), TLR4 (r s = 0.369; p = 0.045), IRAK3 (r s = 0.569; p = 0.001), and IRAK4 (r s = 0.413; p = 0.029) monocyte mRNA levels in the early phase after cardiac arrest (Fig. 3a). Monocyte NLRP1 mRNA expression was significantly negatively correlated with serum lactate levels at 24 h post CPR (r s = −0.378; p = 0.047). Time to ROSC was significantly positively correlated with both TLR4 (r s = 0.516; p = 0.003) and IRAK4 (r s = 0.407; p = 0.032) monocyte mRNA expression levels within the first hours after ROSC (Fig. 3b). Monocyte TLR mRNA expression was not related to estimated no-flow time from collapse to initiation of CPR or to the serum lactate directly measured after ROSC.
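These analyses amount to Spearman's rank correlation on paired patient-level observations and are straightforward to reproduce; the sketch below uses entirely hypothetical values.

```python
from scipy import stats

# Hypothetical paired observations: serum lactate (mmol/l) at the first blood
# sampling and monocyte TLR2 relative copy number in the same patients
lactate = [1.2, 2.5, 3.8, 1.9, 4.6, 2.2, 5.1, 3.0]
tlr2_rcn = [0.10, 0.22, 0.35, 0.18, 0.41, 0.20, 0.44, 0.27]

rho, p = stats.spearmanr(lactate, tlr2_rcn)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```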
Correlation between monocyte toll-like receptor (TLR) expression and markers of ischemia. There was statistically significant positive correlation between serum lactate and monocyte TLR2 mRNA expression in the first 12 h after cardiac arrest (cardiopulmonary resuscitation (CPR) t1: n = 30; r s = 0.570, p = 0.001) (a). Monocyte TLR4 mRNA levels in the first 12 h after CPR (CPR t1: n = 30) were positively correlated with the estimated time from the patient's collapse until return of spontaneous circulation (ROSC) (r s = 0.516, p = 0.003) (b). Statistical hypothesis testing was performed using Spearman's rank correlation. RCN relative copy number
We observed significant positive correlation between both TLR2 and IRAK4 mRNA transcript levels and dosage of norepinephrine to maintain a mean arterial blood pressure ≥80 mmHg in the early phase after cardiac arrest.
Following the hypothesis that the observed changes in TLR and inflammasome expression are of functional relevance for the innate immune response after CPR, we investigated the functional capacity of PRR signaling in the time course following cardiac arrest by stimulating whole blood and monocyte cultures with TLR2 and TLR4 agonists.
Proinflammatory cytokine production of cultured whole blood and monocytes in response to PRR activation
Whole blood samples taken from patients after cardiac arrest had markedly impaired synthesis of IL-1β in response to stimulation with the TLR4 agonist LPS (CAD 327.0 ± 291.3; CPR t1 31.3 ± 49.7; CPR t2 28.0 ± 35.6; CPR t3 33.5 ± 60.6 (pg/ml)/white blood cell (WBC) count). Similarly, there was less TNF-α production induced in Pam3CSK4-activated whole blood samples taken from patients after cardiac arrest. This effect was more pronounced in the early phase following ROSC (CAD 19.6 ± 16.4; CPR t1 3.3 ± 4.1; CPR t2 11.4 ± 14.2; CPR t3 7.3 ± 7.0 (pg/ml)/WBC count) (Fig. 4). Interestingly, cultured monocytes also had impaired IL-1β production in response to stimulation with LPS in the first 24 h after CPR (CAD 6514.7 ± 4178.8; CPR t1 4764.4 ± 7550.5; CPR t2 2662.8 ± 5064.8; CPR t3 3243.5 ± 3224.5 pg/ml) (Fig. 5). To investigate if these observed differences in cytokine production are mediated by humoral factors in the serum of the resuscitated patients, we performed in vitro serum exchange experiments.
Cytokine production in whole blood in response to stimulation with toll-like receptor (TLR)2 and TLR4 agonists. Impaired IL-1β production after lipopolysaccharide stimulation of whole blood samples taken from patients in the first 12 h (cardiopulmonary resuscitation (CPR) t1: n = 19), after 24 h (CPR t2: n = 19), and after 48 h (CPR t3: n = 14) following ROSC, compared to whole blood samples from patients with coronary artery disease (CAD: n = 19) (a). Impaired TNF-α production after stimulation with Pam3CSK4 of whole blood samples taken from patients in the first 12 h (CPR t1: n = 21), after 24 h (CPR t2: n = 22), and after 48 h (CPR t3: n = 15) after CPR, compared to whole blood samples from patients with CAD (n = 19) (b). The resulting cytokine concentrations were standardized to the patient's white blood cell count. Statistical hypothesis testing was performed using the Kruskal–Wallis test and post-hoc analysis with all-pairwise comparison using the Dunn–Bonferroni approach (*p value ≤0.05; ***p value ≤0.001)
Cytokine production of cultured monocytes in response to stimulation with lipopolysaccharide (LPS). IL-1β production of LPS-stimulated monocyte cultures from resuscitated patients in the first 12 h (cardiopulmonary resuscitation (CPR) t1: n = 25), after 24 h (CPR t2: n = 28), and after 48 h (CPR t3: n = 19) following return of spontaneous circulation, and from patients with coronary artery disease (CAD: n = 19). Statistical hypothesis testing was performed using the Kruskal–Wallis test and post-hoc analysis with all-pairwise comparison using the Dunn–Bonferroni approach (*p value ≤0.05; ***p value ≤0.001)
Proinflammatory cytokine production of cultured PBMCs after serum exchange in vitro
Stimulation of cultured PBMCs from a healthy volunteer with patient serum at a concentration of 20 % did not induce a detectable amount of IL-1β in most culture supernatants, with very low and comparable concentrations in the samples where detection was possible (Additional file 7). Interestingly, cultured PBMCs from a healthy individual had an attenuated inflammatory response to LPS after co-stimulation with serum from resuscitated patients compared to co-stimulation with serum from the control group (CAD 1298.57 ± 370.15; CPR t1 698.57 ± 645.09; CPR t3 650 ± 338.08 pg/ml) (Additional file 8).
In this study, we provide evidence for the activation of TLR2 and TLR4 in immediate survivors of cardiac arrest and describe the involvement of the NLRP3 inflammasome in the modulation of the subsequent systemic inflammatory response to global IRI caused by temporary circulatory arrest. Our findings suggest the innate immune system as a possible pathophysiological factor in PCAS and as a potential therapeutic target for the treatment of this condition.
PCAS is characterized by a global IRI that results in significant inflammation in multiple organs, which leads to both mortality due to organ failure and morbidity due to neurological impairment in eventual survivors. Our own group has recently shown that different populations of proinflammatory microparticles [34] and a perturbation of the endothelial glycocalyx [11] likely contribute to initiating the early phases of PCAS. Others have reported a significant increase in inflammatory cytokines and their receptors during the course of the PCAS, including IL-1ra, IL-6, IL-8, IL-10 and sTNFRII [8]. However, the molecular events that govern this systemic reaction to circulatory arrest and finally result in activation of inflammatory cells remain poorly understood.
TLR expression
Upregulation of TLR2 and TLR4 in response to the presence of PAMPs has extensively been studied in non-sterile inflammatory conditions like sepsis and septic shock [27, 35–37]. However, there is a growing body of evidence that TLR2 and TLR4 also play a pivotal role in sterile inflammatory conditions such as acute and chronic cardiovascular diseases [20, 38–40]. We detected temporary upregulation of monocyte TLR2, TLR4, and IRAK4, the main kinase to further propagate TLR signaling, immediately after ROSC, which possibly resembles the strong inflammatory activation induced by the global IRI. Accordingly, there was a significant positive correlation of these markers with the degree of ischemic injury as assessed by serum lactate levels, the duration of CPR, and the dosage of norepinephrine to sustain adequate blood pressure. However, there was no correlation between TLR signaling mRNA expression levels and the initial serum lactate measured directly after ROSC, indicating that failure of lactate clearance and persisting tissue hypoxia might be relevant for the regulation of TLR signaling in whole body IRI. This interpretation is further supported by recent findings from Selejan and coworkers who demonstrated upregulation of TLR2 in patients with cardiogenic shock and correlation of TLR2 expression with the "symptom to reperfusion time" [40]. Interestingly, the initial upregulation was followed by relative downregulation of both TLR2 and TLR4 at later time points, which was more evident in 30-day nonsurvivors. This finding is in good correspondence with similar observations during the time course of sepsis [41], coronary artery bypass grafting [42], and percutaneous coronary intervention [31], and possibly contributes to the development of a compensatory anti-inflammatory response syndrome (CARS), an adapted response to dampen the overzealous inflammatory response [43].
A key mechanism for the development of CARS is thought to be a phenomenon called endotoxin tolerance (ET), a transient state in which monocytes and macrophages are unable to respond to endotoxin [44], which has been extensively studied in sepsis [45, 46]. This mechanism is thought to be partly mediated by downregulation of surface TLR4 expression [47]. However, TLR expression and ex vivo cytokine release were not correlated in our study, indicating the involvement of regulatory adaptor molecules in the functional TLR response. IRAK3 has been shown to negatively regulate downstream TLR signaling [48] and to mediate LPS tolerance in human models of endotoxemia [49]. Indeed, we detected early upregulation of the negative regulator of TLR signaling IRAK3 in our patient population at a time point when TLR expression was still high, but TLR response was already attenuated.
Inflammasome expression
Processing and release of IL-1β is regulated on multiple levels. Activation of surface PRRs, such as TLR4, leads to generation of inactive pro-IL-1β; activation and assembly of the inflammasome, a cytoplasmic multiprotein complex that consists of a sensor protein (e.g., NLRP1, NLRP3, or AIM2), the adaptor protein PYCARD, and the protease caspase-1, then leads to proteolytic cleavage of pro-IL-1β into its biologically active form IL-1β. A large variety of exogenous and endogenous danger signals, including extracellular ATP, uric acid crystals, and potassium influx, have been shown to activate the inflammasome [22, 50, 51]. In our patient population, we now detected a distinct regulation of the different inflammasomes in isolated circulating monocytes, with significant upregulation of the NLRP3 inflammasome in the first 24 h after CPR and downregulation of the NLRP1 and the AIM2 inflammasome. The latter finding is in good correspondence with a recent investigation in patients with septic shock [27], where a similar downregulation was observed compared to critically ill patients and healthy controls. One possible explanation of this differential expression of the inflammasome subsets could be their ligand specificity: while NLRP3 is activated by a large number of pathogens and intrinsic stimuli, NLRP1 is predominantly described in the innate immune response to microbial pathogens [52], which represents a secondary process in PCAS following the initial sterile inflammation. Similar to our findings on TLR expression, the upregulation of NLRP3 was restricted to the early phase after cardiac arrest. Both NLRP3 and CASP1 mRNA expression levels were significantly lower in patients who died compared to eventual survivors. Furthermore, we observed a trend towards downregulation of NLRP1 and PYCARD in nonsurvivors. This is in line with findings determining NLRP1 as an independent predictor of mortality in patients with septic shock [27]. Whether this phenomenon represents a normal physiological mechanism to limit excessive inflammation or a maladaptive response that predisposes the organism to secondary infections remains unclear.
Monocyte inflammatory response
On a functional level, we observed a pronounced and sustained decrease in inflammatory cytokine release after TLR2 and TLR4 activation in patients after cardiac arrest ex vivo. Interestingly, sera from resuscitated patients attenuated the inflammatory response of PBMCs from a healthy volunteer after stimulation with LPS. This phenomenon known as endotoxin tolerance or TLR hyporesponsiveness is well-documented and can be observed in both endotoxin-dependent settings, such as sepsis [45, 46], and endotoxin-independent settings, such as major trauma [53] and vascular surgery [42]. Our findings are in good correspondence with a study by Adrie and coworkers who were the first to investigate the immunoinflammatory profile of patients after successful CPR and who demonstrated the development of ET in these patients [8]. Our findings are further supported by a recent study from Beurskens and coworkers, who compared plasma cytokine levels and the TLR response to LPS and lipoteichoic acid in patients after cardiac arrest [54].
Mechanistic analyses of ET in genetically modified mice have suggested the differential regulation of TLR-adaptor proteins as a causal factor for this phenomenon [55]. In our study we observed upregulation of the pseudokinase IRAK3. IRAK3 belongs to the IL-1 receptor-associated kinase family and serves as a negative regulator downstream of TLR4. Induction of IRAK3 is associated with LPS-induced ET in humans [49]. Furthermore, mice deficient in IRAK3 are known not to display ET in vivo [48]. Similar regulation of IRAK3, as demonstrated in this study, was previously described in sepsis [56] and myocardial infarction [57], suggesting that upregulation of IRAK3 could be a common mechanism of ET across these different pathological conditions.
Our experimental findings fit the hypothesis that patients undergo a whole body IRI after cardiac arrest [58], with a release of DAMPs, which finally leads to PRR activation and ET after subsequent stimulation. Our group has previously reported the presence of DAMPs in patients after cardiac arrest, which are known to be endogenous TLR and inflammasome ligands [11, 34]. Accordingly, a recent study from Timmermans and coworkers demonstrated significant associations between the presence of DAMPs in survivors of cardiac arrest and the intensity of ET in the first days after CPR [59]. However, non-sterile activation of PRRs also has to be taken into account, because endotoxemia [16] and bacteremia [60] have been reported in resuscitated patients and gastric aspiration is a common event after CPR [16]. Our current study corroborates the hypothesis that ET is mediated by both soluble serum factors and intrinsic leucocyte reprogramming [8] and expands these findings to a larger patient population. In addition, it identifies an important cell population for this phenotypic response and contributes to the mechanistic explanation of ET by demonstrating the differential regulation of the involved receptors and cytosolic modulators of the monocyte response to PRR activation.
As with all clinical studies in the field of cardiac arrest research, the definition of an appropriate control population is difficult. We decided on patients with coronary artery disease, as most patients in our CPR group had circulatory arrest of cardiac origin and received similar pharmacological and interventional treatment to the control group. However, the control population was not subjected to therapeutic hypothermia, which could constitute a significant confounder, as cooling can potentially attenuate the IRI [61]. As all resuscitated patients were treated with mild therapeutic hypothermia (except one patient who died before the target temperature was reached), no analysis of the effects of cooling within this group was possible. However, our serial measurements in the individual patients are not affected by this bias and from a pathophysiological point of view the cooling should result in underestimation of the observed inflammation in the CPR group. Furthermore, in the study by Beurskens et al., leucocyte cytokine release was not affected by body temperature [54].
As we focused on monocytes as a specific circulating cell population, potential divergent effects in other resting or circulating cell types therefore remain beyond the scope of this study. As the amount of blood that could be sampled from the critically ill patients was limited, our analysis of the isolated monocytes was limited to RNA expression levels and measurements of individual cytokines at the protein level. As inflammasome activation is controlled by both fast-acting post-translational mechanisms and slower-acting transcriptional regulation, our PCR-based analysis can only describe changes due to the latter mechanism [62].
Finally, due to inherent limitations of an observational study, the causal relationship between our findings and the development of the PCAS cannot be deduced from our study. Also, our sample size was limited to 51 patients who had undergone CPR at a single institution.
With the lack of effective treatment options after cardiac arrest, the clarification of the underlying pathophysiology of the PCAS is a prerequisite for future therapy. Theoretically, intrinsic DAMPs and the interaction with their receptor could represent attractive therapeutic targets in this setting, as these molecules are only released during injury. Several inhibitors of different components of innate immune signaling are currently under development and a humanized anti-TLR2 antibody was recently shown to decrease myocardial IRI in pigs [63], whereas a specific TLR-4 inhibitor exhibited similar effects in IRI of the brain [64]. The notion that TLR2 might exhibit an important role in PCAS is further supported by a recent study where the administration of a TLR2 inhibiting antibody or genetic TLR2 deficiency improved survival and neurological function in mice after circulatory arrest [65]. However, potential unwanted attenuation of the host defense against infection has to be taken into account with these strategies.
Our findings directly demonstrate the differential regulation of monocyte TLR expression and function in immediate survivors of cardiac arrest and implicate the NLRP3 inflammasome as a potential downstream mediator of the inflammatory response during PCAS. The time course of monocyte inflammatory marker expression and function suggests a proinflammatory phenotype in the early phase after ROSC and compensating suppression of monocyte-mediated inflammation during the progress of the syndrome. How far these findings functionally determine the progression of PCAS remains to be determined in future interventional studies, but modulation of the innate immune response by targeted therapies has the theoretical potential to attenuate global IRI in the early phase of PCAS and septic inflammatory complications in the later phase.
Monocyte TLR2, TLR4 and NLRP3 inflammasome signaling is differentially regulated in the time course of PCAS.
Patients who do not survive after cardiac arrest have decreased expression of monocyte TLR2, IRAK3, IRAK4, NLRP3 and CASP1 in the later time course of PCAS.
Patients who undergo CPR exhibit profound endotoxin tolerance ex vivo, which is possibly mediated in an IRAK3-dependent manner.
ANOVA, analysis of variance; AIM, absent in melanoma; β2M, beta-2 microglobulin; CAD, coronary artery disease; CARS, compensatory anti-inflammatory response syndrome; CASP, caspase; CPR, cardiopulmonary resuscitation; DAMP, danger-associated molecular pattern; DPBS, Dulbecco's phosphate-buffered saline; ET, endotoxin tolerance; IHCA, in-hospital cardiac arrest; IL-1β, interleukin-1 beta (protein); IL1B, interleukin-1 beta (gene); IRI, ischemia-reperfusion injury; LPS, lipopolysaccharide; OHCA, out-of-hospital cardiac arrest; IRAK, interleukin-1 receptor-associated kinase; NLRP, NLR family pyrin domain containing; PAD, peripheral artery disease; PAMP, pathogen-associated molecular pattern; PBMC, peripheral blood mononuclear cell; PCAS, post-cardiac-arrest syndrome; PCI, percutaneous coronary intervention; PEA, pulseless electrical activity; POL2RA, RNA polymerase 2; PRR, pattern recognition receptor; PYCARD, PYD and CARD domain containing; qPCR, quantitative real-time polymerase chain reaction; RCN, relative copy number; ROSC, return of spontaneous circulation; RPMI, Roswell Park Memorial Institute; SD, standard deviation; SOFA, sequential organ failure assessment; TLR, toll-like receptor; TNFα, tumor necrosis factor alpha; VF, ventricular fibrillation; VT, ventricular tachycardia; WBC, white blood cell.
McNally B, Robb R, Mehta M, Vellano K, Valderrama AL, Yoon PW, Sasson C, Crouch A, Perez AB, Merritt R et al. Out-of-hospital cardiac arrest surveillance – Cardiac Arrest Registry to Enhance Survival (CARES), United States, October 1, 2005–December 31, 2010. MMWR Surveill Summ. 2011;60(8):1–19.
Negovsky VA. The second step in resuscitation–the treatment of the 'post-resuscitation disease'. Resuscitation. 1972;1(1):1–7.
Neumar RW, Nolan JP, Adrie C, Aibiki M, Berg RA, Bottiger BW, Callaway C, Clark RS, Geocadin RG, Jauch EC et al. Post-cardiac arrest syndrome: epidemiology, pathophysiology, treatment, and prognostication. A consensus statement from the International Liaison Committee on Resuscitation (American Heart Association, Australian and New Zealand Council on Resuscitation, European Resuscitation Council, Heart and Stroke Foundation of Canada, InterAmerican Heart Foundation, Resuscitation Council of Asia, and the Resuscitation Council of Southern Africa); the American Heart Association Emergency Cardiovascular Care Committee; the Council on Cardiovascular Surgery and Anesthesia; the Council on Cardiopulmonary, Perioperative, and Critical Care; the Council on Clinical Cardiology; and the Stroke Council. Circulation. 2008;118(23):2452–83.
Hackenhaar FS, Fumagalli F, Li Volti G, Sorrenti V, Russo I, Staszewsky L, Callaway C, Clark RS, Geocadin RG, Jauch EC. Relationship between post-cardiac arrest myocardial oxidative stress and myocardial dysfunction in the rat. J Biomed Sci. 2014;21:70.
Idris AH, Roberts 2nd LJ, Caruso L, Showstark M, Layon AJ, Becker LB, Vanden Hoek T, Gabrielli A. Oxidant injury occurs rapidly after cardiac arrest, cardiopulmonary resuscitation, and reperfusion. Crit Care Med. 2005;33(9):2043–8.
Tsai MS, Huang CH, Tsai CY, Chen HW, Lee HC, Cheng HJ, Hsu CY, Wang TD, Chang WT, Chen WJ. Ascorbic acid mitigates the myocardial injury after cardiac arrest and electrical shock. Intensive Care Med. 2011;37(12):2033–40.
Gando S, Nanzaki S, Morimoto Y, Kobayashi S, Kemmotsu O. Out-of-hospital cardiac arrest increases soluble vascular endothelial adhesion molecules and neutrophil elastase associated with endothelial injury. Intensive Care Med. 2000;26(1):38–44.
Adrie C, Adib-Conquy M, Laurent I, Monchi M, Vinsonneau C, Fitting C, Fraisse F, Dinh-Xuan AT, Carli P, Spaulding C et al. Successful cardiopulmonary resuscitation after cardiac arrest as a "sepsis-like" syndrome. Circulation. 2002;106(5):562–8.
Fink K, Schwarz M, Feldbrugge L, Sunkomat JN, Schwab T, Bourgeois N, Olschewski M, von Zur Muhlen C, Bode C, Busch HJ. Severe endothelial injury and subsequent repair in patients after successful cardiopulmonary resuscitation. Crit Care. 2010;14(3):R104.
Gando S, Nanzaki S, Morimoto Y, Kobayashi S, Kemmotsu O. Alterations of soluble L- and P-selectins during cardiac arrest and CPR. Intensive Care Med. 1999;25(6):588–93.
Grundmann S, Fink K, Rabadzhieva L, Bourgeois N, Schwab T, Moser M, Bode C, Busch HJ. Perturbation of the endothelial glycocalyx in post cardiac arrest syndrome. Resuscitation. 2012;83(6):715–20.
Donadello K, Favory R, Salgado-Ribeiro D, Vincent JL, Gottin L, Scolletta S, Creteur J, De Backer D, Taccone FS. Sublingual and muscular microcirculatory alterations after cardiac arrest: a pilot study. Resuscitation. 2011;82(6):690–5.
Omar YG, Massey M, Andersen LW, Giberson TA, Berg K, Cocchi MN, Shapiro NI, Donnino MW. Sublingual microcirculation is impaired in post-cardiac arrest patients. Resuscitation. 2013;84(12):1717–22.
Teschendorf P, Padosch SA, Del Valle YFD, Peter C, Fuchs A, Popp E, Spohr F, Bottiger BW, Walther A. Effects of activated protein C on post cardiac arrest microcirculation: an in vivo microscopy study. Resuscitation. 2009;80(8):940–5.
Heradstveit BE, Guttormsen AB, Langorgen J, Hammersborg SM, Wentzel-Larsen T, Fanebust R, Larsson EM, Heltne JK. Capillary leakage in post-cardiac arrest survivors during therapeutic hypothermia - a prospective, randomised study. Scand J Trauma Resusc Emerg Med. 2010;18:29.
Adrie C, Laurent I, Monchi M, Cariou A, Dhainaou JF, Spaulding C. Postresuscitation disease after cardiac arrest: a sepsis-like syndrome? Curr Opin Crit Care. 2004;10(3):208–12.
Peberdy MA, Callaway CW, Neumar RW, Geocadin RG, Zimmerman JL, Donnino M, Gabrielli A, Silvers SM, Zaritsky AL, Merchant R et al. Part 9: post-cardiac arrest care: 2010 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation. 2010;122(18 Suppl 3):S768–786.
Salomao R, Brunialti MK, Rapozo MM, Baggio-Zappia GL, Galanos C, Freudenberg M. Bacterial sensing, cell signaling, and modulation of the immune response during sepsis. Shock. 2012;38(3):227–42.
Lord JM, Midwinter MJ, Chen Y-F, Belli A, Brohi K, Kovacs EJ, Koenderman L, Kubes P, Lilford RJ. The systemic immune response to trauma: an overview of pathophysiology and treatment. Lancet. 2014;384(9952):1455–65.
Arslan F, Smeets MB, O'Neill LA, Keogh B, McGuirk P, Timmers L, Tersteeg C, Hoefer IE, Doevendans PA, Pasterkamp G et al. Myocardial ischemia/reperfusion injury is mediated by leukocytic toll-like receptor-2 and reduced by systemic administration of a novel anti-toll-like receptor-2 antibody. Circulation. 2010;121(1):80–90.
Akira S, Uematsu S, Takeuchi O. Pathogen recognition and innate immunity. Cell. 2006;124(4):783–801.
Martinon F, Mayor A, Tschopp J. The inflammasomes: guardians of the body. Annu Rev Immunol. 2009;27:229–65.
Chen GY, Nunez G. Sterile inflammation: sensing and reacting to damage. Nat Rev Immunol. 2010;10(12):826–37.
Matzinger P. The danger model: a renewed sense of self. Science. 2002;296(5566):301–5.
Kono H, Rock KL. How dying cells alert the immune system to danger. Nat Rev Immunol. 2008;8(4):279–89.
Yu L, Wang L, Chen S. Endogenous toll-like receptor ligands and their biological significance. J Cell Mol Med. 2010;14(11):2592–603.
Fahy RJ, Exline MC, Gavrilin MA, Bhatt NY, Besecker BY, Sarkar A, Hollyfield JL, Duncan MD, Nagaraja HN, Knatz NL et al. Inflammasome mRNA expression in human monocytes during early septic shock. Am J Respir Crit Care Med. 2008;177(9):983–8.
Hall MW, Gavrilin MA, Knatz NL, Duncan MD, Fernandez SA, Wewers MD. Monocyte mRNA phenotype and adverse outcomes from pediatric multiple organ dysfunction syndrome. Pediatr Res. 2007;62(5):597–603.
Radonic A, Thulke S, Mackay IM, Landt O, Siegert W, Nitsche A. Guideline to reference gene selection for quantitative real-time PCR. Biochem Biophys Res Commun. 2004;313(4):856–62.
Piehler AP, Grimholt RM, Ovstebo R, Berg JP. Gene expression results in lipopolysaccharide-stimulated monocytes depend significantly on the choice of reference genes. BMC Immunol. 2010;11:21.
Versteeg D, Hoefer IE, Schoneveld AH, de Kleijn DP, Busser E, Strijder C, Emons M, Stella PR, Doevendans PA, Pasterkamp G. Monocyte toll-like receptor 2 and 4 responses and expression following percutaneous coronary intervention: association with lesion stenosis and fractional flow reserve. Heart. 2008;94(6):770–6.
Cramer D, Howitt D. The SAGE Dictionary of Statistics: A Practical Resource for Students in the Social Sciences. London, UK: Sage Publications Ltd; 2004.
Nordstokke D, Zumbo B. A New Nonparametric Levene test for equal variances. Psicologica. 2010;31:401–30.
Fink K, Feldbrugge L, Schwarz M, Bourgeois N, Helbing T, Bode C, Schwab T, Busch HJ. Circulating annexin V positive microparticles in patients after successful cardiopulmonary resuscitation. Crit Care. 2011;15(5):R251.
Armstrong L, Medford ARL, Hunter KJ, Uppington KM, Millar AB. Differential expression of Toll-like receptor (TLR)-2 and TLR-4 on monocytes in human sepsis. Clin Exp Immunol. 2004;136(2):312–9.
Tsujimoto H, Ono S, Majima T, Efron PA, Kinoshita M, Hiraide H, Moldawer LL, Mochizuki H. Differential toll-like receptor expression after ex vivo lipopolysaccharide exposure in patients with sepsis and following surgical stress. Clin Immunol. 2006;119(2):180–7.
Tsujimoto H, Ono S, Majima T, Kawarabayashi N, Takayama E, Kinoshita M, Seki S, Hiraide H, Moldawer LL, Mochizuki H. Neutrophil elastase, MIP-2, and TLR-4 expression during human and experimental sepsis. Shock (Augusta, Ga). 2005;23(1):39–44.
Ashida K, Miyazaki K, Takayama E, Tsujimoto H, Ayaori M, Yakushiji T, Iwamoto N, Yonemura A, Isoda K, Mochizuki H et al. Characterization of the expression of TLR2 (toll-like receptor 2) and TLR4 on circulating monocytes in coronary artery disease. J Atheroscler Thromb. 2005;12(1):53–60.
Frantz S, Kobzik L, Kim YD, Fukazawa R, Medzhitov R, Lee RT, Kelly RA. Toll4 (TLR4) expression in cardiac myocytes in normal and failing myocardium. J Clin Invest. 1999;104(3):271–80.
Selejan S, Poss J, Walter F, Hohl M, Kaiser R, Kazakov A, Bohm M, Link A. Ischaemia-induced up-regulation of Toll-like receptor 2 in circulating monocytes in cardiogenic shock. Eur Heart J. 2012;33(9):1085–94.
Schaaf B, Luitjens K, Goldmann T, van Bremen T, Sayk F, Dodt C, Dalhoff K, Droemann D. Mortality in human sepsis is associated with downregulation of Toll-like receptor 2 and CD14 expression on blood monocytes. Diagn Pathol. 2009;4:12.
Flier S, Concepcion AN, Versteeg D, Kappen TH, Hoefer IE, de Lange DW, Pasterkamp G, Buhre WF. Monocyte hyporesponsiveness and Toll-like receptor expression profiles in coronary artery bypass grafting and its clinical implications for postoperative inflammatory response and pneumonia: an observational cohort study. Eur J Anaesthesiol. 2015;32(3):177–88.
Adib-Conquy M, Cavaillon JM. Compensatory anti-inflammatory response syndrome. Thromb Haemost. 2009;101(1):36–47.
Lopez-Collazo E, del Fresno C. Pathophysiology of endotoxin tolerance: mechanisms and clinical consequences. Crit Care (London, England). 2013;17(6):242.
Astiz M, Saha D, Lustbader D, Lin R, Rackow E. Monocyte response to bacterial toxins, expression of cell surface receptors, and release of anti-inflammatory cytokines during sepsis. J Lab Clin Med. 1996;128(6):594–600.
Munoz C, Carlet J, Fitting C, Misset B, Bleriot JP, Cavaillon JM. Dysregulation of in vitro cytokine production by monocytes during sepsis. J Clin Invest. 1991;88(5):1747–54.
Nomura F, Akashi S, Sakao Y, Sato S, Kawai T, Matsumoto M, Nakanishi K, Kimoto M, Miyake K, Takeda K et al. Cutting edge: endotoxin tolerance in mouse peritoneal macrophages correlates with down-regulation of surface toll-like receptor 4 expression. J Immunol. 2000;164(7):3476–9.
Kobayashi K, Hernandez LD, Galan JE, Janeway Jr CA, Medzhitov R, Flavell RA. IRAK-M is a negative regulator of Toll-like receptor signaling. Cell. 2002;110(2):191–202.
van 't Veer C, van den Pangaart PS, van Zoelen MA, de Kruif M, Birjmohun RS, Stroes ES, de Vos AF, van der Poll T. Induction of IRAK-M is associated with lipopolysaccharide tolerance in a human endotoxemia model. J Immunol (Baltimore, Md : 1950). 2007;179(10):7110–20.
Franchi L, Eigenbrod T, Munoz-Planillo R, Nunez G. The inflammasome: a caspase-1-activation platform that regulates immune responses and disease pathogenesis. Nat Immunol. 2009;10(3):241–7.
Gross O, Thomas CJ, Guarda G, Tschopp J. The inflammasome: an integrated view. Immunol Rev. 2011;243(1):136–51.
Faustin B, Lartigue L, Bruey JM, Luciano F, Sergienko E, Bailly-Maitre B, Volkmann N, Hanein D, Rouiller I, Reed JC. Reconstituted NALP1 inflammasome reveals two-step mechanism of caspase-1 activation. Mol Cell. 2007;25(5):713–24.
Keel M, Schregenberger N, Steckholzer U, Ungethum U, Kenney J, Trentz O, Ertel W. Endotoxin tolerance after severe injury and its regulatory mechanisms. J Trauma. 1996;41(3):430–7. discussion 437–438.
Beurskens CJ, Horn J, de Boer AM, Schultz MJ, van Leeuwen EM, Vroom MB, Juffermans NP. Cardiac arrest patients have an impaired immune response, which is not influenced by induced hypothermia. Crit Care. 2014;18(4):R162.
Xiong Y, Medvedev AE. Induction of endotoxin tolerance in vivo inhibits activation of IRAK4 and increases negative regulators IRAK-M, SHIP-1, and A20. J Leukoc Biol. 2011;90(6):1141–8.
Escoll P, del Fresno C, Garcia L, Valles G, Lendinez MJ, Arnalich F, Lopez-Collazo E. Rapid up-regulation of IRAK-M expression following a second endotoxin challenge in human monocytes and in monocytes isolated from septic patients. Biochem Biophys Res Commun. 2003;311(2):465–72.
del Fresno C, Soler-Rangel L, Soares-Schanoski A, Gomez-Pina V, Gonzalez-Leon MC, Gomez-Garcia L, Mendoza-Barbera E, Rodriguez-Rojas A, Garcia F, Fuentes-Prior P et al. Inflammatory responses associated with acute coronary syndrome up-regulate IRAK-M and induce endotoxin tolerance in circulating monocytes. J Endotoxin Res. 2007;13(1):39–52.
Eltzschig HK, Eckle T. Ischemia and reperfusion–from mechanism to translation. Nat Med. 2011;17(11):1391–401.
Timmermans K, Kox M, Gerretsen J, Peters E, Scheffer GJ, van der Hoeven JG, Pickkers P, Hoedemaekers CW. The involvement of danger-associated molecular patterns in the development of immunoparalysis in cardiac arrest patients. Crit Care Med. 2015;43(11):2332–38.
Gaussorgues P, Gueugniaud PY, Vedrinne JM, Salord F, Mercatello A, Robert D. Bacteremia following cardiac arrest and cardiopulmonary resuscitation. Intensive Care Med. 1988;14(5):575–7.
Meybohm P, Gruenewald M, Zacharowski KD, Albrecht M, Lucius R, Fosel N, et al. Mild hypothermia alone or in combination with anesthetic post-conditioning reduces expression of inflammatory cytokines in the cerebral cortex of pigs after cardiopulmonary resuscitation. Crit Care. 2010;14(1):R21.
Latz E, Xiao TS, Stutz A. Activation and regulation of the inflammasomes. Nat Rev Immunol. 2013;13(6):397–411.
Arslan F, Houtgraaf JH, Keogh B, Kazemi K, de Jong R, McCormack WJ, O'Neill LA, McGuirk P, Timmers L, Smeets MB et al. Treatment with OPN-305, a humanized anti-Toll-Like receptor-2 antibody, reduces myocardial ischemia/reperfusion injury in pigs. Circ Cardiovasc Interv. 2012;5(2):279–87.
Khan MM, Gandhi C, Chauhan N, Stevens JW, Motto DG, Lentz SR, Chauhan AK. Alternatively-spliced extra domain A of fibronectin promotes acute inflammation and brain injury after cerebral ischemia in mice. Stroke. 2012;43(5):1376–82.
Rosenberger P, Bergt S, Güter A, Grub A, Wagner N-M, Beltschany C, Langner S, Wree A, Hildebrandt S, Nöldge-Schomburg G et al. Impact of toll-like receptor 2 deficiency on survival and neurological function after cardiac arrest: a murine model of cardiopulmonary resuscitation. PLoS One. 2013;8(9):e74944.
The article processing charge was funded by the German Research Foundation (DFG) and the Albert Ludwigs University Freiburg in the funding program Open Access Publishing. The authors thank the staff of the medical intensive care units of the University Hospital of Freiburg for their help in sample and data collection. The authors thank Irene Neudorfer for expert technical assistance.
AA was responsible for writing the manuscript, patient recruitment, and acquisition, analysis and interpretation of data, and made contributions to the conception and the design of the study. KF was responsible for the conception and the design of the study, writing the manuscript, and acquisition and interpretation of data. HJB and TH have been involved in analysis and interpretation of data and in drafting the manuscript. NB assisted AA in the laboratory techniques, acquired data, and helped in patient recruitment. CB revised the manuscript critically for important intellectual content and gave final approval of the version to be published. SG was responsible for the conception and the design of the study, writing the manuscript, and acquisition, analysis, and interpretation of data. All authors read and approved the final manuscript for publication.
Department of Cardiology and Angiology I, Heart Center Freiburg University, Hugstetter Straße 55, Freiburg im Breisgau, 79106, Germany
Alexander Asmussen, Thomas Helbing, Natascha Bourgeois, Christoph Bode & Sebastian Grundmann
Department of Emergency Medicine, University Medical Center Freiburg, Sir-Hans-A.-Krebs-Straße, Freiburg im Breisgau, 79106, Germany
Katrin Fink & Hans-Jörg Busch
Correspondence to Alexander Asmussen.
Additional file 1:
Primer list. Accession number = Refseq accession number, web page access date 3 July 2014 (http://www.ncbi.nlm.nih.gov/refseq/). TA annealing temperature, Conc. concentration of each primer pair, Amplicon length of replicated DNA sequence in base pairs (bp). (DOCX 16 kb)
Laboratory tests. Shown are patients' inflammatory laboratory tests at admission, 24 and 48 h after ROSC. *Laboratory tests from patients after cardiopulmonary resuscitation (CPR) versus coronary artery disease (CAD) at admission. †Laboratory tests from patients after CPR versus CPR at admission. (DOCX 15 kb)
Monocyte mRNA expression in patients who had suffered cardiac arrest and the control group. Shown are kinetics of monocyte mRNA expression levels, expressed as mean relative copy numbers ± standard deviation (SD), in patients who had suffered cardiac arrest in the first 12 h (CPR t1; n = 30), after 24 h (CPR t2; n = 29) and after 48 h (CPR t3; n = 23) following CPR, and in the control group with coronary artery disease (CAD; n = 19). Statistical hypothesis testing was performed using the Kruskal–Wallis test and post-hoc analysis with all-pairwise comparison using the Dunn–Bonferroni approach indicated as the p values listed above. (DOCX 15 kb)
Time-dependent monocyte mRNA expression in 30-day nonsurvivors following cardiac arrest. Shown are monocyte mRNA expression levels of TLR2, TLR4, IRAK3, IRAK4, NLRP1, NLRP3, AIM2, PYCARD, CASP1, and IL-1β in 30-day nonsurvivors in the first 12 h (CPR t1: n = 18), after 24 h (CPR t2: n = 18), and after 48 h (CPR t3: n = 11) following ROSC. Statistical hypothesis testing was performed using the Kruskal–Wallis test and post-hoc analysis with all-pairwise comparison using the Dunn–Bonferroni approach (*p value ≤0.05; **p value ≤0.01; ***p value ≤0.001). (DOCX 42 kb)
Time-dependent monocyte mRNA expression in 30-day survivors following cardiac arrest. Shown are monocyte mRNA expression levels of TLR2, TLR4, IRAK3, IRAK4, NLRP1, NLRP3, AIM2, PYCARD, CASP1, and IL-1β in 30-day survivors in the first 12 h (CPR t1: n = 11), after 24 h (CPR t2: n = 11), and after 48 h (CPR t3: n = 12) following ROSC. Statistical hypothesis testing was performed using the Kruskal–Wallis test and post-hoc analysis with all-pairwise comparison using the Dunn–Bonferroni approach (*p value ≤0.05). (DOCX 36 kb)
Correlation analyses of monocyte mRNA expression levels and clinical characteristics. Shown are correlation analyses of monocyte mRNA expression levels from patients in the first 12 h (CPR t1; n = 30), after 24 h (CPR t2; n = 29), and after 48 h (CPR t3; n = 23) following CPR, and the corresponding clinical characteristics. There was one patient lost to follow up after study enrollment. Statistical hypothesis testing was performed using Spearman's rank correlation indicated as Spearman's rho (r s) and the p values listed above. CPR cardiopulmonary resuscitation, ROSC return of spontaneous circulation, lactate serum lactate; t0 at admission, NE dosage of norepinephrine to maintain mean arterial blood pressure ≥80 mmHg. (DOCX 16 kb)
Cytokine production of cultured PBMCs in response to stimulation with patients' sera. Shown is interleukin-1β (IL-1β) production of cultured PBMCs from a healthy volunteer in response to stimulation with 20 % serum either from patients with coronary artery disease (CAD: n = 8) or from resuscitated patients in the first 12 h (CPR t1: n = 14) and after 48 h following cardiac arrest (CPR t3: n = 9). Production of IL-1β did not statistically differ between the three groups. Statistical hypothesis testing was performed using the Kruskal–Wallis test. (DOCX 32 kb)
Cytokine production of cultured PBMCs in response to co-stimulation with patients' sera and LPS. Shown is interleukin-1β (IL-1β) production of cultured PBMCs from a healthy volunteer in response to co-stimulation with 10 ng/ml LPS and 20 % serum either from patients with coronary artery disease (CAD: n = 7) or from resuscitated patients in the first 12 h (CPR t1: n = 14) and after 48 h following cardiac arrest (CPR t3: n = 9). Statistical hypothesis testing was performed using one-way ANOVA and post-hoc analysis with all-pairwise comparison using the Games-Howell approach (*p value ≤0.05; **p value ≤0.01). (DOCX 41 kb)
Asmussen, A., Fink, K., Busch, HJ. et al. Inflammasome and toll-like receptor signaling in human monocytes after successful cardiopulmonary resuscitation. Crit Care 20, 170 (2016). https://doi.org/10.1186/s13054-016-1340-3
Post-cardiac arrest syndrome
Toll-like receptor
Inflammasome
Endotoxin tolerance
Monocyte | CommonCrawl |
Prosthetic model, but not stiffness or height, affects maximum running velocity in athletes with unilateral transtibial amputations
Paolo Taboga ORCID: orcid.org/0000-0001-6529-82991,
Emily K. Drees ORCID: orcid.org/0000-0001-7738-35702,
Owen N. Beck3,4 &
Alena M. Grabowski ORCID: orcid.org/0000-0002-4432-618X2,5
Scientific Reports volume 10, Article number: 1763 (2020)
The running-specific prosthetic (RSP) configuration used by athletes with transtibial amputations (TTAs) likely affects performance. Athletes with unilateral TTAs are prescribed C- or J-shaped RSPs with a manufacturer-recommended stiffness category based on body mass and activity level, and height based on unaffected leg and residual limb length. We determined how 15 different RSP model, stiffness, and height configurations affect maximum running velocity (vmax) and the underlying biomechanics. Ten athletes with unilateral TTAs ran at 3 m/s to vmax on a force-measuring treadmill. vmax was 3.8–10.7% faster when athletes used J-shaped versus C-shaped RSP models (p < 0.05), but was not affected by stiffness category, actual stiffness (kN/m), or height (p = 0.72, p = 0.37, and p = 0.11, respectively). vmax differences were explained by vertical ground reaction forces (vGRFs), stride kinematics, leg stiffness, and symmetry. While controlling for velocity, use of J-shaped versus C-shaped RSPs resulted in greater stance average vGRFs, slower step frequencies, and longer step lengths (p < 0.05). Stance average vGRFs were less asymmetric using J-shaped versus C-shaped RSPs (p < 0.05). Contact time and leg stiffness were more asymmetric using the RSP model that elicited the fastest vmax (p < 0.05). Thus, RSP geometry (J-shape versus C-shape), but not stiffness or height, affects vmax in athletes with unilateral TTAs.
Running-specific prostheses (RSPs) are passive-elastic devices typically made of carbon fiber that attach to a socket that surrounds the residual limb. The use of RSPs enables athletes with transtibial amputations (TTAs) to compete in running events including the Olympic games. RSP models are generally C-shaped or J-shaped. C-shaped RSPs attach distal to the socket and are recommended for distance running1 (e.g. 10 km, half marathon, and marathon) and J-shaped RSPs attach posterior to the socket and are recommended for sprinting1 (e.g. 100 m, 200 m, and 400 m). Despite different attachments and shapes, both types of RSPs act in-series with the residual limb. Athletes with TTAs are prescribed an RSP with a manufacturer-recommended stiffness category that is based on their body mass and activity level1,2,3. Greater stiffness categories correspond with stiffer RSPs while considering prosthetic model4. Further, for an athlete with a unilateral TTA, RSP height is set based on the athlete's contralateral unaffected leg length, stride kinematics, and their prosthetist's and personal preference5. The height of a C-shaped RSP is adjusted by shortening or lengthening the pylon that connects the RSP to the socket, while the height of a J-shaped RSP is adjusted by changing its mounting position posterior to the socket (Fig. 1). An athlete's unloaded RSP height is adjusted so that their affected leg (AL) length is 2–8 cm taller than their unaffected leg (UL) length6,7. The RSP configuration (i.e. model, stiffness category, height) used by athletes with TTAs likely affects their running performance, which is well-correlated with the maximum running velocity that an athlete can attain8.
(A) Freedom Innovations Catapult (FDM), (B) Össur Cheetah Xtend (OSR), and (C) Ottobock 1E90 Sprinter (OBK) running-specific prosthetic models attached in-series to a carbon fiber socket that encompasses the residual limb. The height of the prosthesis is adjusted using a pylon or height adjustment bracket.
Running velocity equals the product of stride frequency and stride length, where a stride is two steps and a step is the period of ground contact and subsequent aerial time7. Step frequency equals the number of steps in a given period of time, and step length equals the forward distance traveled by the centre of mass (CoM) relative to the ground with each step. At a given velocity, step length can be expressed as the product of the average vertical ground reaction force (GRF) applied during ground contact normalized to body weight and contact length (the product of contact time and forward velocity), which is the forward distance traveled by the CoM during ground contact9. Thus, running velocity equals the product of step frequency, stance average vertical GRF, and contact length. Runners increase step frequency by decreasing ground contact time and primarily increase step length by applying greater vertical GRFs on the ground10. Greater vertical GRFs increase the vertical CoM velocity at the end of ground contact, resulting in longer aerial time, and thus increasing the forward CoM distance traveled for each step at a given running velocity. However, at a given velocity, an increase in step length or frequency is counteracted by a decrease in the other parameter10. For example, an increase in vertical velocity at the end of ground contact would increase step length but also increase step time, which would decrease step frequency, and not change running velocity. Non-amputees and athletes with TTAs exhibit directionally similar changes in stance average vertical GRFs and spatiotemporal variables to increase running velocity, but biomechanics differ between the AL versus UL6,7.
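The decomposition of running velocity described above can be written out directly; the sketch below uses hypothetical sprint-like values rather than data from this study.

```python
def step_length(stance_avg_vgrf_bw, contact_length_m):
    """Step length as the product of the stance-average vertical GRF
    (expressed in body weights) and contact length (m)."""
    return stance_avg_vgrf_bw * contact_length_m

def running_velocity(step_frequency_hz, stance_avg_vgrf_bw, contact_length_m):
    """v = step frequency x stance-average vertical GRF (body weights) x contact length."""
    return step_frequency_hz * step_length(stance_avg_vgrf_bw, contact_length_m)

# Hypothetical sprint-like values: 4.3 steps/s, 2.1 body weights, 1.05 m contact length
print(running_velocity(4.3, 2.1, 1.05))  # ~9.5 m/s
```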
The biomechanics of level-ground running are well represented by a spring-mass model11,12,13,14. In this model, the leg is represented by a massless linear spring, and body mass is represented by a point mass. Leg stiffness equals the peak vertical GRF divided by the CoM displacement along the leg. The relationship between leg stiffness and running velocity is unclear: non-amputees either maintain or increase leg stiffness as running velocity increases6,15,16. Athletes with unilateral TTAs decrease AL stiffness, while UL stiffness remains constant, from 3.0 m/s up to maximum running velocity (vmax)6; thus, leg stiffness becomes more asymmetric between the AL and UL at faster velocities.
Previous studies found no relationship between biomechanical asymmetries (i.e. differences in spatiotemporal, GRF, and stiffness variables between legs) and sprint performance in non-amputees17,18,19. However, it is possible that the anatomical differences between the legs of an athlete with a unilateral TTA result in larger biomechanical asymmetries compared to non-amputees: for example, reported asymmetries in non-amputee sprinters are small, 1.1% for step frequency18, 1.3% for step length18, 4.2% for ground contact time17, and 4.9% for aerial time17. On the other hand, previous studies have shown that athletes with unilateral TTAs apply 9% lower stance average vertical GRFs with their AL compared to their UL using a typical RSP configuration across a range of speeds up to vmax7. Because step length equals the product of contact length and stance average vertical GRF at a given velocity9, if one leg has a limited ability to apply GRFs compared to the other leg, its step length will be reduced. If the reduced step length is not compensated by an increase in step frequency, this would limit vmax. Additionally, athletes with unilateral TTAs have an 8% slower step frequency at vmax for their AL compared to their UL, while ground contact time and contact length do not differ between legs7. Further, Beck et al.20 found that more symmetric peak vertical GRFs between the AL and UL of athletes with unilateral TTAs reduced the metabolic cost of running at 2.5–3.0 m/s. However, the effects of different RSP configurations on biomechanical asymmetry and vmax in athletes with unilateral TTAs are not yet fully understood.
Given the subjective nature of RSP prescription, the purpose of our study was to quantify how the use of different RSP model, stiffness, and height configurations affects maximum running velocity (vmax) and the underlying biomechanics in athletes with a unilateral TTA. Due to the lack of quantifiable information about how the use of different RSP models affects vmax, we tested the null hypothesis that prosthetic model would not affect vmax. Because non-amputees maintain or increase leg stiffness with faster running velocity6,15,16, we hypothesized that use of stiffer RSPs would increase vmax. Due to the tradeoff between step frequency and step length10, we tested the null hypothesis that vmax would be independent of RSP height. Moreover, we hypothesized that use of the optimal RSP model, stiffness, and height combination would result in faster step frequencies, greater stance average vertical GRFs, longer contact lengths, and the fastest vmax. Based on the idea that a limitation in one leg can reduce overall performance, we also hypothesized that athletes would exhibit less asymmetric biomechanics (step frequency, stance average vertical GRF, and contact length) using the optimal RSP configuration.
Ten athletes with a unilateral TTA (Table 1) gave written informed consent prior to participation. The protocol was approved by the Colorado Multiple Institutional Review Board and the United States Army Medical Research and Materiel Command Human Research Protection Office. All research was performed in accordance with relevant guidelines and regulations. Participants reported no cardiovascular, pulmonary, musculoskeletal, or neurological disease or disorder beyond a TTA.
Table 1 Subject demographics, 100 m personal record (PR), usual running-specific prosthesis (RSP) and shape, and RSP configuration that resulted in the fastest maximum velocity (vmax).
Each participant first completed an alignment and accommodation session lasting about 4 hours. A certified prosthetist aligned each participant with three different prosthetic models: Freedom Innovations Catapult FX6 (FDM; Irvine, CA), Ottobock 1E90 Sprinter (OBK; Duderstadt, Germany), and Össur Cheetah Xtend (OSR; Reykjavik, Iceland). We chose these models because we have previously established the mechanical properties for each model4. FDM is C-shaped, while OBK and OSR are J-shaped (Fig. 1). Each participant was aligned with the manufacturer's recommended stiffness category (based on body mass) and ±1 stiffness category, and at the manufacturer's recommended height and ±2 cm. We adjusted the height of FDM (C-shaped RSP) by changing the height of the pylon connecting the socket to the RSP, and we adjusted the height of OBK and OSR (J-shaped RSPs) using custom aluminum brackets (Fig. 1). Some athletes used two different sockets: one socket for the C-shaped RSP (FDM) and a different socket for the J-shaped RSPs (OBK, OSR). During the accommodation session, participants ran using each RSP configuration on a treadmill at self-selected speeds and the prosthetist made adjustments until both the participant and prosthetist were satisfied.
Biomechanical measurements
Each athlete performed sets of running trials consisting of at least 8 strides per trial at constant velocities on a 3D force measuring treadmill (Treadmetrix, Park City, UT). Each series of trials began at 3 m/s, rest was provided between trials, and after each successive trial we incremented treadmill velocity by 1 m/s until the athlete approached their vmax, where we used smaller velocity increments until athletes reached their vmax. Athletes usually lowered themselves from the handrails onto the moving treadmill belt to initiate each trial; however, some athletes stood on the treadmill belt and accelerated at 1.0 m/s2 (treadmill default acceleration that was constant for all athletes) with the treadmill belt until it reached the desired velocity and the trial began. All of the athletes were experienced with treadmill running. vmax was defined as the velocity where athletes took at least 8 strides on the treadmill while maintaining their fore-aft position9. If an athlete could not maintain the velocity for 8 strides, we allowed them to repeat the trial, after ad-libitum rest. For each series of trials, athletes used one of 15 different combinations of RSP model, stiffness, and height. Prosthetic models were FDM, OBK, and OSR. Prosthetic stiffness conditions were the manufacturer recommended stiffness category (based on body mass and high activity level) and ±1 stiffness categories. Prosthetic height conditions were manufacturer recommended height and ±2 cm. We randomized the trial order of RSP model and stiffness category (3 RSP models × 3 stiffness categories = 9 trials). Then for each RSP model, height was only adjusted for the stiffness category that enabled the fastest vmax. We randomly inserted the different height trials into the trial order (3 RSP models × 2 RSP heights = 6 trials). Participants completed a maximum of 3 series of trials per day to minimize any potential effects of fatigue; thus, the entire protocol required at least 5 days. Depending on the feedback of each participant, additional rest days were taken to further minimize fatigue, and the average duration to complete the protocol was 10 days.
Throughout each trial, we measured 3D GRFs at 1000 Hz and filtered them using a 4th order low-pass Butterworth filter with a 30 Hz cutoff. We used these filtered data and a 30 N vertical GRF threshold to detect ground contact and calculate GRF parameters, step kinematics, and leg stiffness for each leg (AL and UL) during each step using a custom MATLAB script (MathWorks, Natick, MA). We determined step time for each leg as the sum of the ground contact time and subsequent aerial time. We calculated contact length as the product of ground contact time and the treadmill velocity. For each variable (step frequency, stance average vertical GRF, contact length, contact time, aerial time, and leg stiffness), we calculated the average of both legs. Participants ran with reflective markers attached to the distal end of their RSP and the fifth metatarsal head of their UL. We tracked the position of these markers at 200 Hz (Vicon Nexus, Oxford, UK), filtered marker position data using a 4th order low-pass Butterworth filter with a 7 Hz cutoff, and used a custom MATLAB script to confirm treadmill belt velocity during each foot- or RSP-ground contact.
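A minimal sketch of this processing step is shown below. It assumes a raw vertical GRF array sampled at 1000 Hz; the 30 Hz cutoff and 30 N threshold follow the description above, but the function and variable names are illustrative and this is not the authors' MATLAB script.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_contacts(vgrf_raw, fs=1000.0, cutoff_hz=30.0, threshold_n=30.0):
    """Low-pass filter a vertical GRF signal and return per-step contact times (s)."""
    # 2nd-order design applied forward and backward (filtfilt), one common way to
    # realize an effectively 4th-order, zero-lag Butterworth filter.
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
    vgrf = filtfilt(b, a, vgrf_raw)

    on_ground = vgrf > threshold_n                    # stance whenever vGRF exceeds the threshold
    edges = np.diff(on_ground.astype(int))
    touchdowns = np.where(edges == 1)[0] + 1
    toeoffs = np.where(edges == -1)[0] + 1
    if toeoffs.size and touchdowns.size and toeoffs[0] < touchdowns[0]:
        toeoffs = toeoffs[1:]                         # drop a partial stance at the start

    contact_times = [(to - td) / fs for td, to in zip(touchdowns, toeoffs)]
    return vgrf, contact_times

# Example use, assuming the treadmill velocity for the trial is known:
# vgrf_filt, contact_times = detect_contacts(vgrf_raw)
# contact_lengths = [tc * treadmill_velocity for tc in contact_times]
```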
We used the mean AL peak resultant GRFs measured in the current study and the RSP stiffness values (kRSP) for each device reported by Beck et al.4 (Supplementary Tables 1, 3 and 4) to calculate RSP displacement (ΔRSP):
$$\Delta {\rm{RSP}}=\frac{Peak\,GRF}{{k}_{RSP}}$$
We calculated leg stiffness (kleg) in kN/m as the ratio between peak vertical GRF and maximum leg displacement (ΔL) during each stance phase for the respective limb:
$${k}_{leg}=\frac{Peak\,vGRF}{\Delta L}$$
To determine ΔL, we calculated the angle of the leg (relative to vertical) at initial ground contact (contact angle, θ) using running velocity (v), ground contact time (tc), and initial leg length (L0) for each leg:
$$\theta ={\sin }^{-1}(\frac{v{t}_{c}}{2{L}_{0}})$$
We measured initial UL length (L0) as the distance from the greater trochanter to the floor during standing, and AL length (L0) as the distance from the greater trochanter to the distal end of the unloaded RSP6,7,21. We calculated maximum vertical displacement of the CoM (Δy) by twice integrating the vertical acceleration of the CoM with respect to time22. Then, we used Δy, L0, and θ to calculate ΔL according to McMahon and Cheng12:
$$\Delta L=\Delta y+{L}_{0}(1-\,\cos \,\theta )$$
We also calculated the symmetry index (SI) for all biomechanical variables of the AL and UL as a ratio between legs, according to Robinson et al.23:
$$SI=\frac{va{r}_{AL}-va{r}_{UL}}{0.5\,(va{r}_{AL}+va{r}_{UL})}$$
A positive SI means that the value for the AL is greater than the value for the UL, while a negative SI means that the value for the AL is lower than the value for the UL. An SI of zero indicates perfect symmetry between the AL and UL.
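A compact sketch of the leg-stiffness and symmetry-index calculations above (Eqs. 2–5 in order of appearance) is given below. It assumes that per-step values of peak vertical GRF, contact time, treadmill velocity, initial leg length, and the twice-integrated vertical CoM displacement are already available; the variable names and example numbers are hypothetical, not data from the study.

```python
import numpy as np

def leg_stiffness_kn_per_m(peak_vgrf_n, contact_time_s, velocity_mps, leg_length_m, delta_y_m):
    """Spring-mass leg stiffness (kN/m) following the equations in the text."""
    # Contact angle of the leg relative to vertical at touchdown
    theta = np.arcsin((velocity_mps * contact_time_s) / (2.0 * leg_length_m))
    # Leg compression: vertical CoM displacement plus shortening due to the swept arc
    delta_l = delta_y_m + leg_length_m * (1.0 - np.cos(theta))
    return (peak_vgrf_n / delta_l) / 1000.0

def symmetry_index(val_al, val_ul):
    """Robinson et al. symmetry index: 0 = perfect symmetry, positive = AL > UL."""
    return (val_al - val_ul) / (0.5 * (val_al + val_ul))

# Hypothetical example values:
k_al = leg_stiffness_kn_per_m(1900.0, 0.12, 8.0, 0.95, 0.040)
k_ul = leg_stiffness_kn_per_m(2100.0, 0.11, 8.0, 0.93, 0.035)
print(f"k_leg AL ≈ {k_al:.1f} kN/m, UL ≈ {k_ul:.1f} kN/m, SI ≈ {symmetry_index(k_al, k_ul):+.2f}")
```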
We used a linear mixed model to analyze the influence of RSP model, stiffness category, and height on vmax. We used a second linear mixed model to analyze the influence of RSP model, actual stiffness (in kN/m), and height on vmax. We used additional linear mixed models to analyze the influence of significant RSP configurations (model, stiffness, and/or height) on both legs' average and SI of biomechanical variables (step frequency, stance average vertical GRF, contact length, contact time, aerial time, and leg stiffness) while accounting for differences in vmax. We report the fixed effect (β) from each statistically significant association (dependent variable = β independent variable + intercept). We selected linear mixed models, as opposed to simple linear regression analyses, to control for subject variability. Each subject was classified as a random effect, while the independent variables were classified as fixed-effect variables. Linear mixed models are particularly useful in a repeated measures design because they take into account the lack of independence between observations within the same athlete using different RSP configurations24. For all linear mixed models, RSP model was classified as a categorical variable, while the stiffness category, actual stiffness, height and all biomechanical variables were classified as continuous variables. We also used one-sample t-tests to compare SI values to perfect symmetry between legs (SI = 0). We used a significance level of p < 0.05. When applicable, we implemented Bonferroni corrections to account for multiple comparisons. We performed all statistical tests using RStudio (RStudio Inc., Boston, MA).
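The statistical analysis was run in RStudio; purely as an illustration of the same modelling approach, the sketch below fits a linear mixed model in Python with a random intercept per subject and RSP model as a categorical fixed effect. The file name and column names are hypothetical, and this is not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per (subject, RSP configuration) trial,
# with columns: subject, rsp_model, stiffness_cat, height_cm, vmax.
df = pd.read_csv("vmax_trials.csv")

# Random intercept for each subject; RSP model as a categorical fixed effect,
# stiffness category and height as continuous fixed effects.
model = smf.mixedlm("vmax ~ C(rsp_model) + stiffness_cat + height_cm",
                    data=df, groups=df["subject"])
result = model.fit()
print(result.summary())  # the fixed-effect estimates play the role of the β values in the text
```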
The vmax (avg ± SD) for athletes with unilateral TTAs using OBK, OSR, and FDM RSPs were 8.18 ± 1.00 m/s, 7.67 ± 1.25 m/s, and 7.39 ± 1.30 m/s, respectively. Thus, RSP models will be referred to as OBK1, OSR2, and FDM3 to denote order based on vmax reached when using each model (1 = fastest, 2 = middle, 3 = slowest). Compared to use of the C-shaped FDM3 RSP, vmax was 10.7% (β1 = 0.82, p < 0.05) and 3.8% (β2 = 0.36, p < 0.05) faster when athletes with unilateral TTAs used J-shaped OBK1 and OSR2 RSPs, respectively. Additionally, use of an OBK1 RSP resulted in 6.6% (β1 = 0.46, p < 0.05) faster vmax compared to use of an OSR2 RSP. RSP stiffness category, actual stiffness, and height did not influence vmax (p = 0.72, p = 0.37, and p = 0.11, respectively; Fig. 2) and there were no significant interactions among RSP model, stiffness, and height for vmax (p > 0.05).
Average ± SEM maximum velocity (vmax) when athletes used the Ottobock 1E90 Sprinter (OBK1), Össur Cheetah Xtend (OSR2), and Freedom Innovations Catapult (FDM3) running-specific prosthetic (RSP) models with (A) stiffness categories (Cat) that are recommended (Rec), one Cat less stiff (−1) and one Cat more stiff (+1) than Rec, and (B) heights that are recommended (Rec), two cm shorter (−2) and two cm taller (+2) than Rec height. There was no effect of stiffness category or height on vmax. Use of the OBK1 RSP resulted in the fastest vmax, followed by use of OSR2, and use of FDM3 RSPs.
When accounting for differences in vmax, we found that use of FDM3 resulted in 3.4% faster step frequencies than use of OBK1 and OSR2 RSPs (β1 = −0.088 and β2 = −0.088, both p < 0.05; Fig. 3A, Table 2), and step frequencies did not differ between use of OBK1 and OSR2 RSPs (p = 0.99). Use of OSR2 resulted in 2.3% and 4.6% greater stance average vertical GRF than use of OBK1 and FDM3 RSPs, respectively (β1 = 0.040 and β3 = 0.080, both p < 0.05; Fig. 3B, Table 2), and use of OBK1 resulted in 2.3% greater stance average vertical GRF than use of FDM3 RSPs (β3 = 0.040, p < 0.05). Use of OBK1 and FDM3 RSPs resulted in 4.4% longer contact lengths than use of OSR2 (β1 = 0.021 and β3 = 0.021, both p < 0.05; Fig. 3C, Table 2), but there was no significant difference in mean contact length between use of OBK1 and FDM3 RSPs (p = 0.98).
Maximum velocity (vmax) as a function of (A) step frequency, (B) stance average vertical ground reaction force (vGRF), and (C) contact length when using Ottobock 1E90 Sprinter (OBK1), Össur Cheetah Xtend (OSR2), and Freedom Innovations Catapult (FDM3), running-specific prosthetic models. Each individual data point represents the average value between the affected and unaffected legs for a single trial. Trend lines represent linear regressions for each prosthetic model. Coefficients of determination for linear regressions are shown in Table 2.
Table 2 Coefficients of determination (R-squared) for statistically significant (p < 0.05) linear regressions between biomechanical variables and maximum velocity (Figs. 3 and 4) when using Ottobock 1E90 Sprinter (OBK1), Össur Cheetah Xtend (OSR2), and Freedom Innovations Catapult (FDM3) running-specific prosthetic models. - indicates the biomechanical variable had no effect on maximum running velocity.
When accounting for differences in vmax, we found that use of OSR2 resulted in 1.4% and 1.6% shorter ground contact times than use of OBK1 and FDM3 RSPs, respectively (β1 = 0.0027 and β3 = 0.0031, both p < 0.05; Fig. 4A, Table 2). However, there were no significant differences in contact times when athletes used OBK1 compared to FDM3 RSPs (p = 0.73). Use of OBK1 and OSR2 resulted in 3.7% and 5.7% longer aerial times than use of FDM3 RSPs, respectively (β1 = 0.0054 and β2 = 0.0083, both p < 0.05) and there were no significant differences in aerial times when athletes used OBK1 compared to OSR2 RSPs (p = 0.057; Fig. 4B, Table 2). Use of OBK1 resulted in 6.4% lower leg stiffness than use of OSR2 and FDM3 RSPs (β2 = 1.109 and β3 = 1.098, both p < 0.05) and there was no significant difference in leg stiffness between use of OSR2 and FDM3 RSPs (p = 0.97; Fig. 4C, Table 2).
Maximum velocity (vmax) as a function of (A) contact time, (B) aerial time, and (C) leg stiffness when using Ottobock 1E90 Sprinter (OBK1), Össur Cheetah Xtend (OSR2), and Freedom Innovations Catapult (FDM3), running-specific prosthetic models. Each individual data point represents the average value between the affected and unaffected legs for a single trial. Trend lines represent linear regressions for each prosthetic model. Coefficients of determination for linear regressions are shown in Table 2.
We found that step frequency SI was less than zero when using all three RSPs (all p < 0.05; Table 3); on average, step frequency was 6.4% slower for the AL compared to UL. However, there were no differences in step frequency SI between RSP models (OBK1 vs. OSR2: p = 0.31; OBK1 vs. FDM3: p = 0.72; OSR2 vs. FDM3: p = 0.16). Stance average vertical GRF SI was less than zero across RSP conditions (all p < 0.05); on average stance average vertical GRFs were 6.2% lower for the AL compared to UL (Fig. 5, Table 3). However, use of OBK1 and OSR2 resulted in stance average vertical GRF SIs that were 0.078 and 0.105 more symmetric, respectively, compared to use of FDM3 RSPs (both p < 0.05). There was no significant difference in stance average vertical GRF SI between use of OBK1 and OSR2 RSPs (p = 0.16). Contact length SI was greater than zero when using all three RSPs (all p < 0.05); on average, contact lengths were 7.2% longer for the AL compared to UL. However, use of OSR2 resulted in contact length SIs that were 0.045 and 0.022 more symmetric compared to use of OBK1 and FDM3 RSPs, respectively (both p < 0.05), and use of FDM3 resulted in contact length SIs that were 0.023 more symmetric compared to use of OBK1 (p < 0.05).
Table 3 Average symmetry indices (SI) between the affected leg (AL) and unaffected leg (UL) for biomechanical variables when using Ottobock 1E90 Sprinter (OBK1), Össur Cheetah Xtend (OSR2), and Freedom Innovations Catapult (FDM3) running-specific prosthetic models. A positive SI means that the value for the AL is greater than the value for the UL, a negative SI means that the value for the UL is greater than the value for the AL, and an SI of zero indicates perfect symmetry between AL and UL. All SIs are significantly different than zero (p < 0.05).
Representative vertical ground reaction force (vGRF) traces for the affected leg (AL) and unaffected leg (UL) when using (A) Ottobock 1E90 Sprinter (OBK1), (B) Össur Cheetah Xtend (OSR2), and (C) Freedom Innovations Catapult (FDM3) running-specific prosthetic models (recommended stiffness category and height) at maximal running velocity (OBK1 = 7.46 m/s, OSR2 = 7.43 m/s, FDM3 = 7.00 m/s).
We found that contact time SI was greater than zero when using all three RSPs (all p < 0.05); on average, ground contact times were 5.3% longer for the AL compared to UL. Use of OBK1 resulted in contact time SIs that were 0.047 and 0.031 more asymmetric than use of OSR2 and FDM3 RSPs, respectively (both p < 0.05). Further, use of OSR2 resulted in contact time SIs that were 0.016 more symmetric than use of FDM3 RSPs (p < 0.05). Aerial time SI was greater than zero when using all three RSPs (all p < 0.05); on average aerial times were 9.4% longer for the AL compared to UL. Use of OSR2 resulted in aerial time SIs that were 0.061 and 0.062 more asymmetric than use of OBK1 and FDM3 RSPs, respectively (both p < 0.05). There was no significant difference in aerial time SI between use of OBK1 and FDM3 RSPs (p = 0.79). Finally, leg stiffness SI was less than zero when using all three RSPs (all p < 0.05); on average, leg stiffness was 15.7% lower for the AL compared to UL. Use of OBK1 resulted in leg stiffness SIs that were 0.052 and 0.090 more asymmetric than use of OSR2 and FDM3 RSPs, respectively (both p < 0.05). There was no difference in leg stiffness SI between use of OSR2 and FDM3 RSPs (p = 0.18).
We reject our initial hypothesis because RSP model, but not stiffness or height, affects maximum running velocity in athletes with unilateral TTAs. Therefore, we only considered RSP model as a factor in determining the optimal configuration. Specifically, use of OBK1 resulted in the fastest vmax, followed by use of OSR2, and FDM3 RSPs. These results are in agreement with a previous study of running at 2.5–3.0 m/s for athletes with unilateral TTAs, which found that RSP model, but not stiffness or height, affects metabolic cost20. Thus, in addition to optimizing distance running performance (i.e. metabolic cost), results of the present study suggest that use of a J-shaped RSP model could also improve sprinting performance (i.e. vmax) for athletes with unilateral TTAs compared to use of a C-shaped RSP.
Biomechanical similarities that were present when athletes with a TTA used OBK1 and OSR2, but not when they used FDM3 RSPs, may explain the differences in vmax elicited by use of different RSP models. Due to the inherent relationships between velocity and the biomechanical variables we investigated, we controlled for differences in velocity and determined the biomechanics elicited by use of different RSP models. When controlling for vmax, use of J-shaped RSPs (OBK1 and OSR2) resulted in greater stance average vertical GRF compared to the C-shaped RSP (FDM3). Additionally, compared to use of FDM3, use of OBK1 and OSR2 RSPs resulted in slower step frequencies due to longer aerial times together with similar (OBK1) or shorter (OSR2) ground contact times. These biomechanical findings may be attributable to RSP shape; OBK1 and OSR2 are J-shaped RSPs while FDM3 is C-shaped. Compared to C-shaped RSPs, J-shaped RSPs have about 1% lower hysteresis and are wider4,20,21. The lower hysteresis of J-shaped RSPs could result in greater vertical CoM velocity at the end of the stance phase, and a longer aerial time for a given contact time, which would decrease step frequency. The wider geometry of J-shaped RSPs could enhance stability during running. When balance is perturbed, individuals with TTAs increase step frequency and decrease step length during walking, presumably to minimize the risk of falling25. It is possible that use of the narrower C-shaped RSPs resulted in greater medio-lateral instability that required athletes to run with increased step frequency compared to use of wider J-shaped RSPs. It is also possible that the alignment of the J-shaped RSPs enables a more vertical leg position during the stance phase compared to the C-shaped RSP. This could allow the leg to be more closely aligned with the GRF vector, resulting in a better effective mechanical advantage26 and greater stance average vertical GRFs. Further, non-amputees achieve faster running speeds by producing greater stance average vertical GRF9. Our results show that the ability to generate greater stance average vertical GRFs may also be important for athletes with a TTA to achieve faster running speeds, because use of J-shaped RSPs resulted in faster vmax and higher stance average vertical GRFs compared to use of a C-shaped RSP.
We also found some biomechanical differences between use of the two different J-shaped RSPs (OBK1 and OSR2) that could explain differences in vmax. Use of OBK1 resulted in lower stance average vertical GRFs and longer contact lengths than use of OSR2 RSPs when accounting for velocity. These differences were offset, and step lengths were similar when using OBK1 and OSR2 RSPs. Although step frequencies were similar when using OBK1 and OSR2 RSPs, contact times were longer (p < 0.05) and aerial times were numerically but not statistically shorter with use of OBK1 compared to OSR2 RSPs (p = 0.057). Additionally, use of OBK1 resulted in lower leg stiffness than use of OSR2 RSPs. These differences between RSP models could be due to variations in RSP geometry or alignment4,20,21,27 or differences in curvature between OBK1 and OSR2 RSPs (Fig. 6). Similar to alignment's effect on stance average vertical GRF, perhaps the curvature of the OSR2 RSP enables a more vertical leg position during the stance phase compared to the OBK1 RSP. This could allow the leg to be better aligned with the GRF vector, resulting in greater stance average vertical GRF despite a shorter contact time. A more vertical leg position during initial ground contact would also lead to a smaller leg sweep angle, and thus shorter contact length and contact time. Additionally, a smaller leg sweep angle would decrease leg spring displacement during the stance phase (Eq. 4), which would increase leg stiffness, assuming peak vertical GRF is held constant. Therefore, differences in RSP geometry/curvature or alignment could explain why stance average vertical GRFs were lower, contact lengths and contact times were longer, and leg stiffness was lower with use of OBK1 compared to OSR2. Thus, future studies that determine the effects of RSP geometry/curvature and alignment are warranted to better understand the underlying biomechanical determinants of vmax.
Lateral view of the J-shaped running specific prostheses (RSPs) showing differences in curvature. Ottobock 1E90 Sprinter (OBK1) is shown in front (black) and Össur Cheetah Xtend (OSR2) is shown behind (yellow).
Our results partially support the hypothesis that the optimal RSP configuration would elicit more symmetric biomechanics. While considering differences in vmax, RSP model did not affect step frequency symmetry, suggesting that step frequency symmetry may not be important for attaining faster vmax. In contrast, RSP model did affect stance average vertical GRF symmetry. AL stance average vertical GRFs were, on average, 6.2% lower than UL stance average vertical GRFs with use of all three RSP models at vmax. This finding is similar to a previous study that found AL stance average vertical GRFs were 7.7% lower than UL stance average vertical GRFs at vmax7. In the present study, AL stance average vertical GRFs were 3.6%, 2.9%, and 12.2% lower than UL stance average vertical GRFs when using OBK1, OSR2, and FDM3 RSPs, respectively. Use of OBK1 and OSR2 elicited faster vmax than use of FDM3 RSPs and stance average vertical GRFs were more symmetric with use of OBK1 and OSR2 compared to FDM3 RSPs. This suggests that stance average vertical GRF symmetry may influence vmax. RSP model also influenced contact length symmetry. AL contact lengths were, on average, 7.2% longer than UL contact lengths with use of all three RSP models at vmax. This contrasts with a previous study that found no difference in contact length between the AL and UL when athletes with unilateral TTA ran at vmax7. However, contact lengths were least symmetric with use of OBK1, suggesting that contact length symmetry may not influence vmax. Use of the OBK1 RSP resulted in the most asymmetric and lowest average leg stiffness. Because use of the OBK1 RSP elicited the fastest vmax, this suggests that lower average leg stiffness, but not leg stiffness symmetry, may influence vmax in athletes with unilateral TTAs. It is possible that lower leg stiffness results in a greater amount of time to generate force on the ground, which leads to faster vmax. A previous study found that AL leg stiffness was 27% lower than UL leg stiffness at a maximum velocity of 9.5 m/s6. Similarly, we found that AL leg stiffness was 13–22% lower than UL leg stiffness at vmax across all configurations at an average vmax of 8.32 m/s. Overall, our results suggest that RSP configurations that result in more symmetric stance average vertical GRFs have the strongest influence on vmax.
In the present study, we determined the effects of using fifteen different RSP configurations on vmax and the underlying biomechanics used to achieve these velocities. However, a potential limitation of this study is that some athletes used two different sockets to complete the protocol. Athletes used one socket for the J-shaped RSPs (OBK1, OSR2) and a different socket for the C-shaped RSP (FDM3), which could have contributed to different amounts of residual limb movement within the socket, and potentially affected running biomechanics21. The mass of each RSP (OBK1: 675 g, OSR2: 750 g, FDM3: 510 g)1,2,3, in addition to the different sockets and attachments, may have affected the results. Specifically, a different mass, and therefore moment of inertia, of the AL could potentially affect leg swing time and step frequency. However, Grabowski et al.7 found no difference in vmax or in leg swing time of the AL when adding up to 300 g to the distal end of the RSP of athletes with unilateral TTAs compared to the same RSP with no added mass. We found that use of the FDM3 RSP resulted in 3.4% faster step frequencies than OBK1 and OSR2 RSPs, yet use of the FDM3 RSP did not elicit the fastest vmax.
Seven of the athletes involved in this study typically use J-shaped RSPs for competition, compared to three athletes who typically use C-shaped RSPs (Table 1). This familiarity could have affected our results. However, for nine athletes, irrespective of their usual RSP, J-shaped RSPs resulted in the fastest maximum velocity (Table 1). Only subject 2, who usually runs using a J-shaped RSP, had a faster vmax using the C-shaped FDM3. For all 3 of the subjects who usually run using a C-shaped RSP, use of the J-shaped RSPs resulted in a faster vmax.
The spring-mass model has been used to calculate leg stiffness at running speeds ranging from 3.8 m/s13 to 8.7 m/s28 and up to 12.3 m/s29. Clark et al.30 found that the vertical GRF trace is not symmetric during ground contact for professional non-amputee sprinters, which suggests that the assumptions for calculating leg stiffness using the spring-mass model may be violated. We found that the vertical GRF trace for the UL is not symmetric during ground contact (Fig. 5), and thus the leg stiffness of the UL may not be accurately represented by the spring-mass model. These differences may limit our calculations of leg stiffness for the UL and the leg stiffness symmetry index comparing UL and AL leg stiffness. However, our findings are similar to those of McGowan et al.6, who calculated leg stiffness of the UL using the spring-mass model at speeds between 3.0 and 9.5 m/s.
Some athletes lowered themselves from the handrails onto the moving treadmill belt to initiate each trial while others stood on the treadmill and accelerated with the treadmill belt until it reached the desired velocity. Use of these different strategies is unlikely to influence vmax because the treadmill accelerates at 1.0 m/s² and athletes used the same strategy for each RSP configuration. Additionally, the systematic trial order of progressively increasing velocity with each RSP configuration may have induced fatigue. To reduce any potential effects of fatigue, we allowed athletes to rest ad libitum and limited each day of testing to three sets of trials. The initial accommodation session was approximately four hours, but a longer accommodation period (multiple days/weeks) may have allowed athletes to reach a faster vmax for each RSP configuration. Future studies are needed to determine how athletes accommodate to RSPs, and how accommodation time relates to performance.
We found that RSP height did not significantly affect vmax (p = 0.11), but it is possible that changes of ±2 cm were not enough to significantly affect vmax. However, ±2 cm changes in RSP height were noticeable by each athlete, and greater RSP height changes could have resulted in injury. The average maximum running velocity for all configurations in the present study (8.32 m/s) is similar to those reported previously for athletes with unilateral TTAs running on a treadmill (8.82 m/s and 8.75 m/s)6,7. Future studies are needed to investigate the effects of using different RSP configurations on joint mechanics and muscle activation patterns. These parameters could help to further explain differences in vmax among RSP configurations by revealing changes in effective mechanical advantage or ability to produce muscle force during the stance phase. Future studies should also investigate the effects of different RSP configurations on the start and acceleration phase of sprint races, to determine if the RSP configuration that allows for the fastest vmax is also the configuration that allows for a better start and acceleration, i.e. for a better racing performance.
Athletes with unilateral TTAs reach different maximum running velocities when using different RSP models. Specifically, use of the J-shaped OBK1 resulted in the fastest vmax, followed by use of the J-shaped OSR2, and C-shaped FDM3 RSPs. RSP stiffness and height do not have a significant effect on vmax in athletes with unilateral TTAs. While controlling for velocity, use of J-shaped versus C-shaped RSPs resulted in greater stance average vertical ground reaction forces, slower step frequencies, and longer step lengths. Stance average vertical ground reaction forces were less asymmetric, while contact time and leg stiffness were more asymmetric when using J-shaped versus C-shaped RSPs.
All data generated and analyzed during the current study are presented in the main text, figures, and tables. An alternative data format is available from the corresponding author on reasonable request.
Össur. Prosthetic Solutions Catalog, https://www.ossur.com/catalogs/prosthetics/ (2016).
Ottobock. 1E90 Sprinter - Instructions for Use, https://shop.ottobock.us/media/pdf/647G849-INT-06-1505w.pdf (2015).
Freedom Innovations. Catalog Page Catapult, http://www.freedom-innovations.com/wp-content/uploads/2015/05/Catalog-Page-Catalpult.pdf (2015).
Beck, O. N., Taboga, P. & Grabowski, A. M. Characterizing the mechanical properties of running-specific prostheses. PLoS One 11, e0168298 (2016).
Ottobock Fitting Guide for TT Sports Prosthesis, https://shop.ottobock.us/media/pdf/647H543-INT-02-1403w.pdf (2014)
McGowan, C. P., Grabowski, A. M., McDermott, W. J., Herr, H. M. & Kram, R. Leg stiffness of sprinters using running-specific prostheses. J. R. Soc. Interface 9, 1975–1982 (2012).
Grabowski, A. M. et al. Running-specific prostheses limit ground-force during sprinting. Biol. Lett. 6, 201–204 (2010).
Mann, R. & Herman, J. Kinematic analysis of Olympic sprint performance: men's 200 meters. Int. J. Sport Biomech. 1, 151–162 (1985).
Weyand, P. G., Sternlight, D. B., Bellizzi, M. J. & Wright, S. Faster top running speeds are achieved with greater ground forces not more rapid leg movements. J. Appl. Physiol. 89, 1991–1999 (2000).
Hunter, J. P., Marshall, R. N. & McNair, P. J. Interaction of step length and step rate during sprint running. Med. Sci. Sports Exerc. 36, 261–271 (2004).
Blickhan, R. The spring-mass model for running and hopping. J. Biomech. 22, 1217–1227 (1989).
Cavagna, G. A., Saibene, F. P. & Margaria, R. Mechanical work in running. J. Appl. Physiol. 19, 249–256 (1964).
Farley, C. T. & Ferris, D. P. Biomechanics of walking and running: center of mass movements to muscle action. Exerc. Sport Sci. Rev. 26, 253–285 (1998).
McMahon, T. A. & Cheng, G. C. The mechanics of running: how does stiffness couple with speed? J. Biomech. 21, 65–78 (1990).
Arampatzis, A., Brüggemann, G.-P. & Metzler, V. The effect of speed on leg stiffness and joint kinetics in human running. J. Biomech. 32, 1349–1353 (1999).
Farley, C. T., Glasheen, J. & McMahon, T. A. Running springs: speed and animal size. J. Exp. Biol. 185, 71–86 (1993).
Haugen, T., Danielsen, J., McGhie, D., Sandbakk, Ø. & Ettema, G. Kinematic stride cycle asymmetry is not associated with sprint performance and injury prevalence in athletic sprinters. Scand. J. Med. Sci. Sport. 28, 1001–1008 (2018).
Exell, T., Irwin, G., Gittoes, M. & Kerwin, D. Strength and performance asymmetry during maximal velocity sprint running. Scand. J. Med. Sci. Sport. 27, 1273–1282 (2017).
Meyers, R. W., Oliver, J. L., Hughes, M. G., Lloyd, R. S. & Cronin, J. B. Asymmetry during maximal sprint performance in 11- to 16-year-old boys. Pediatr. Exerc. Sci. 29, 94–102 (2017).
Beck, O. N., Taboga, P. & Grabowski, A. M. Prosthetic model, but not stiffness or height, affects the metabolic cost of running for athletes with unilateral transtibial amputations. J.Appl. Physiol. 123, 38–48 (2017).
Beck, O. N., Taboga, P. & Grabowski, A. M. How do prosthetic stiffness, height and running speed affect the biomechanics of athletes with bilateral transtibial amputations? J. R. Soc. Interface 14, 20170230 (2017).
Cavagna, G. A. Force platforms as ergometers. J. Appl. Physiol. 39, 174–179 (1975).
Robinson, R. O., Herzog, W. & Nigg, B. M. Use of force platform variables to quantify the effects of chiropractic manipulation on gait symmetry. J. Manipulative Physiol. Ther. 10, 172–176 (1987).
Cnaan, A., Laird, N. M. & Slasor, P. Using the general linear mixed model to analyse unbalanced repeated measures and longitudinal data. Statistics in medicine 16(20), 2349–2380 (1997).
Hak, L. et al. Walking in an unstable environment: strategies used by transtibial amputees to prevent falling during gait. Arch. Phys. Med. Rehabil. 94, 2186–2193 (2013).
Kipp, S., Grabowski, A. M. & Kram, R. What determines the metabolic cost of human running across a wide range of velocities? J. Exp. Biol. 221, jeb184218 (2018).
Baum, B. et al. Amputee locomotion: determining the inertial properties of running-specific prostheses. Arch. Phys. Med. Rehabil. 94, 1776–1783 (2013).
Morin, J. B., Jeannin, T., Chevallier, B. & Belli, A. Spring-mass model characteristics during sprint running: correlation with performance and fatigue-induced changes. International journal of sports medicine 27, 158–165, https://doi.org/10.1055/s-2005-837569 (2006).
Taylor, M. J. & Beneke, R. Spring mass characteristics of the fastest men on Earth. International journal of sports medicine 33, 667–670, https://doi.org/10.1055/s-0032-1306283 (2012).
Clark, K. P. & Weyand, P. G. Are running speeds maximized with simple-spring stance mechanics? J. Appl. Physiol. (1985) 117, 604–615, https://doi.org/10.1152/japplphysiol.00174.2014 (2014).
This project was supported by the Bridging Advanced Developments for Exceptional Rehabilitation (BADER) consortium, a Department of Defense Congressionally Directed Medical Research Programs cooperative agreement (W81XWH-11-2-0222). We thank Mike Litavish, CPO, and Angela Montgomery, CPO, for their invaluable assistance throughout our study.
California State University, Sacramento, CA, USA
Paolo Taboga
University of Colorado Boulder, Boulder, CO, USA
Emily K. Drees & Alena M. Grabowski
George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
Owen N. Beck
School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
VA Eastern Colorado Healthcare System, Denver, CO, USA
Alena M. Grabowski
Emily K. Drees
P.T. contributed to conception and design of the work, acquisition of the data and substantially revised the manuscript. E.D. contributed to analysis and interpretation of data and prepared the first draft of the manuscript. O.B. contributed to conception and design of the work and acquisition of the data. A.G. contributed to conception and design of the work and substantially revised the manuscript.
Correspondence to Alena M. Grabowski.
Taboga, P., Drees, E.K., Beck, O.N. et al. Prosthetic model, but not stiffness or height, affects maximum running velocity in athletes with unilateral transtibial amputations. Sci Rep 10, 1763 (2020). https://doi.org/10.1038/s41598-019-56479-8
average power
work done in a time interval divided by the time interval
kinetic energy
energy of motion, one-half an object's mass times the square of its speed
net work
work done by all the forces acting on an object
power
(or instantaneous power) rate of doing work
work
done when a force acts on something that undergoes a displacement from one position to another
work done by a force
integral, from the initial position to the final position, of the dot product of the force and the infinitesimal displacement along the path over which the force acts
work-energy theorem
net work done on a particle is equal to the change in its kinetic energy
Key Equations
Work done by a force over an infinitesimal displacement [latex]dW=\mathbf{\overset{\to }{F}}\cdot d\mathbf{\overset{\to }{r}}=|\mathbf{\overset{\to }{F}}||d\mathbf{\overset{\to }{r}}|\text{cos}\,\theta[/latex]
Work done by a force acting along a path from A to B [latex]{W}_{AB}=\underset{\text{path}AB}{\int }\mathbf{\overset{\to }{F}}\cdot d\mathbf{\overset{\to }{r}}[/latex]
Work done by a constant force of kinetic friction [latex]{W}_{\text{fr}}=\text{−}{f}_{k}|{l}_{AB}|[/latex]
Work done going from A to B by Earth's gravity, near its surface [latex]{W}_{\text{grav,}AB}=\text{−}mg({y}_{B}-{y}_{A})[/latex]
Work done going from A to B by one-dimensional spring force [latex]{W}_{\text{spring,}AB}=\text{−}(\frac{1}{2}k)({x}_{B}^{2}-{x}_{A}^{2})[/latex]
Kinetic energy of a non-relativistic particle [latex]K=\frac{1}{2}m{v}^{2}=\frac{{p}^{2}}{2m}[/latex]
Work-energy theorem [latex]{W}_{\text{net}}={K}_{B}-{K}_{A}[/latex]
Power as rate of doing work [latex]P=\frac{dW}{dt}[/latex]
Power as the dot product of force and velocity [latex]P=\mathbf{\overset{\to }{F}}\cdot \mathbf{\overset{\to }{v}}[/latex]
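As a quick numerical sanity check of the spring-work entry above, the sketch below compares a simple numerical evaluation of the work integral with the closed-form expression; the spring constant and endpoints are arbitrary.

```python
import numpy as np

# Check W_spring = -(1/2) k (x_B^2 - x_A^2) against a numerical integral of F dx.
k = 250.0                # spring constant (N/m), arbitrary
x_a, x_b = 0.05, 0.20    # initial and final stretch (m), arbitrary

x = np.linspace(x_a, x_b, 100_001)
f = -k * x                        # one-dimensional spring force
w_numeric = np.trapz(f, x)        # integral of F dx along the path
w_formula = -0.5 * k * (x_b**2 - x_a**2)

print(f"numeric W ≈ {w_numeric:.4f} J")   # ≈ -4.6875 J
print(f"formula W = {w_formula:.4f} J")   #   -4.6875 J
```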
The infinitesimal increment of work done by a force, acting over an infinitesimal displacement, is the dot product of the force and the displacement.
The work done by a force, acting over a finite path, is the integral of the infinitesimal increments of work done along the path.
The work done against a force is the negative of the work done by the force.
The work done by a normal or frictional contact force must be determined in each particular case.
The work done by the force of gravity, on an object near the surface of Earth, depends only on the weight of the object and the difference in height through which it moved.
The work done by a spring force, acting from an initial position to a final position, depends only on the spring constant and the squares of those positions.
The kinetic energy of a particle is the product of one-half its mass and the square of its speed, for non-relativistic speeds.
The kinetic energy of a system is the sum of the kinetic energies of all the particles in the system.
Kinetic energy is relative to a frame of reference, is always positive, and is sometimes given special names for different types of motion.
Because the net force on a particle is equal to its mass times the derivative of its velocity, the integral for the net work done on the particle is equal to the change in the particle's kinetic energy. This is the work-energy theorem.
You can use the work-energy theorem to find certain properties of a system, without having to solve the differential equation for Newton's second law.
Power is the rate of doing work; that is, the derivative of work with respect to time.
Alternatively, the work done, during a time interval, is the integral of the power supplied over the time interval.
The power delivered by a force, acting on a moving particle, is the dot product of the force and the particle's velocity.
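The short example below ties several of these points together for a constant net force acting on a particle that starts from rest: the net work equals the change in kinetic energy, and the delivered power equals the product of force and velocity (here one-dimensional). All numbers are arbitrary.

```python
m = 2.0    # mass (kg), arbitrary
f = 10.0   # constant net force (N), arbitrary
t = 3.0    # elapsed time (s), arbitrary

a = f / m                   # acceleration from Newton's second law
v = a * t                   # speed at time t (starting from rest)
x = 0.5 * a * t ** 2        # displacement at time t

w_net = f * x               # net work done by the constant force
delta_k = 0.5 * m * v ** 2  # change in kinetic energy
power = f * v               # instantaneous power delivered at time t

print(f"W_net = {w_net:.1f} J, ΔK = {delta_k:.1f} J")  # equal, as the work-energy theorem requires
print(f"P = {power:.1f} W")
```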
Give an example of something we think of as work in everyday circumstances that is not work in the scientific sense. Is energy transferred or changed in form in your example? If so, explain how this is accomplished without doing work.
Give an example of a situation in which there is a force and a displacement, but the force does no work. Explain why it does no work.
Describe a situation in which a force is exerted for a long time but does no work. Explain.
A body moves in a circle at constant speed. Does the centripetal force that accelerates the body do any work? Explain.
Suppose you throw a ball upward and catch it when it returns at the same height. How much work does the gravitational force do on the ball over its entire trip?
Why is it more difficult to do sit-ups while on a slant board than on a horizontal surface? (See below.)
As a young man, Tarzan climbed up a vine to reach his tree house. As he got older, he decided to build and use a staircase instead. Since the work of the gravitational force mg is path independent, what did the King of the Apes gain in using stairs?
A particle of mass m has a velocity of [latex]{v}_{x}\hat{i}+{v}_{y}\hat{j}+{v}_{z}\hat{k}[/latex]. Is its kinetic energy given by [latex]m({v}_{x}^{2}\hat{i}+{v}_{y}^{2}\hat{j}+{v}_{z}^{2}\hat{k})\text{/}2[/latex]? If not, what is the correct expression?
One particle has mass m and a second particle has mass 2m. The second particle is moving with speed v and the first with speed 2v. How do their kinetic energies compare?
A person drops a pebble of mass m1 from a height h, and it hits the floor with kinetic energy K. The person drops another pebble of mass m2 from a height of 2h, and it hits the floor with the same kinetic energy K. How do the masses of the pebbles compare?
The person shown below does work on the lawn mower. Under what conditions would the mower gain energy from the person pushing the mower? Under what conditions would it lose energy?
Work done on a system puts energy into it. Work done by a system removes energy from it. Give an example for each statement.
Two marbles of masses m and 2m are dropped from a height h. Compare their kinetic energies when they reach the ground.
Compare the work required to accelerate a car of mass 2000 kg from 30.0 to 40.0 km/h with that required for an acceleration from 50.0 to 60.0 km/h.
Suppose you are jogging at constant velocity. Are you doing any work on the environment and vice versa?
Two forces act to double the speed of a particle, initially moving with kinetic energy of 1 J. One of the forces does 4 J of work. How much work does the other force do?
Most electrical appliances are rated in watts. Does this rating depend on how long the appliance is on? (When off, it is a zero-watt device.) Explain in terms of the definition of power.
Explain, in terms of the definition of power, why energy consumption is sometimes listed in kilowatt-hours rather than joules. What is the relationship between these two energy units?
A spark of static electricity, such as that you might receive from a doorknob on a cold dry day, may carry a few hundred watts of power. Explain why you are not injured by such a spark.
Does the work done in lifting an object depend on how fast it is lifted? Does the power expended depend on how fast it is lifted?
Can the power expended by a force be negative?
How can a 50-W light bulb use more energy than a 1000-W oven?
How much work does a supermarket checkout attendant do on a can of soup he pushes 0.600 m horizontally with a force of 5.00 N?
A 75.0-kg person climbs stairs, gaining 2.50 m in height. Find the work done to accomplish this task.
(a) Calculate the work done on a 1500-kg elevator car by its cable to lift it 40.0 m at constant speed, assuming friction averages 100 N. (b) What is the work done on the lift by the gravitational force in this process? (c) What is the total work done on the lift?
Suppose a car travels 108 km at a speed of 30.0 m/s, and uses 2.0 gal of gasoline. Only 30% of the gasoline goes into useful work by the force that keeps the car moving at constant speed despite friction. (The energy content of gasoline is about 140 MJ/gal.) (a) What is the magnitude of the force exerted to keep the car moving at constant speed? (b) If the required force is directly proportional to speed, how many gallons will be used to drive 108 km at a speed of 28.0 m/s?
Calculate the work done by an 85.0-kg man who pushes a crate 4.00 m up along a ramp that makes an angle of 20.0° with the horizontal (see below). He exerts a force of 500 N on the crate parallel to the ramp and moves at a constant speed. Be certain to include the work he does on the crate and on his body to get up the ramp.
How much work is done by the boy pulling his sister 30.0 m in a wagon as shown below? Assume no friction acts on the wagon.
A shopper pushes a grocery cart 20.0 m at constant speed on level ground, against a 35.0 N frictional force. He pushes in a direction 25.0° below the horizontal. (a) What is the work done on the cart by friction? (b) What is the work done on the cart by the gravitational force? (c) What is the work done on the cart by the shopper? (d) Find the force the shopper exerts, using energy considerations. (e) What is the total work done on the cart?
Suppose the ski patrol lowers a rescue sled and victim, having a total mass of 90.0 kg, down a 60.0° slope at constant speed, as shown below. The coefficient of friction between the sled and the snow is 0.100. (a) How much work is done by friction as the sled moves 30.0 m along the hill? (b) How much work is done by the rope on the sled in this distance? (c) What is the work done by the gravitational force on the sled? (d) What is the total work done?
A constant 20-N force pushes a small ball in the direction of the force over a distance of 5.0 m. What is the work done by the force?
A toy cart is pulled a distance of 6.0 m in a straight line across the floor. The force pulling the cart has a magnitude of 20 N and is directed at 37° above the horizontal. What is the work done by this force?
A 5.0-kg box rests on a horizontal surface. The coefficient of kinetic friction between the box and surface is μk = 0.50. A horizontal force pulls the box at constant velocity for 10 cm. Find the work done by (a) the applied horizontal force, (b) the frictional force, and (c) the net force.
A sled plus passenger with total mass 50 kg is pulled 20 m across the snow (μk = 0.20) at constant velocity by a force directed 25° above the horizontal. Calculate (a) the work of the applied force, (b) the work of friction, and (c) the total work.
Suppose that the sled plus passenger of the preceding problem is pushed 20 m across the snow at constant velocity by a force directed 30° below the horizontal. Calculate (a) the work of the applied force, (b) the work of friction, and (c) the total work.
How much work does the force F(x) = (−2.0/x) N do on a particle as it moves from x = 2.0 m to x = 5.0 m?
How much work is done against the gravitational force on a 5.0-kg briefcase when it is carried from the ground floor to the roof of the Empire State Building, a vertical climb of 380 m?
It takes 500 J of work to compress a spring 10 cm. What is the force constant of the spring?
A bungee cord is essentially a very long rubber band that can stretch up to four times its unstretched length. However, its spring constant varies over its stretch [see Menz, P.G. "The Physics of Bungee Jumping." The Physics Teacher (November 1993) 31: 483-487]. Take the length of the cord to be along the x-direction and define the stretch x as the length of the cord l minus its un-stretched length l0; that is, x = l − l0 (see below). Suppose a particular bungee cord has a spring constant, for 0 ≤ x ≤ 4.88 m, of k1 = 204 N/m and, for 4.88 m ≤ x, of k2 = 111 N/m. (Recall that the spring constant is the slope of the force F(x) versus its stretch x.) (a) What is the tension in the cord when the stretch is 16.7 m (the maximum desired for a given jump)? (b) How much work must be done against the elastic force of the bungee cord to stretch it 16.7 m?
Figure 7.16 (credit: Graeme Churchard)
A bungee cord exerts a nonlinear elastic force of magnitude F(x) = k1x + k2x³, where x is the distance the cord is stretched, k1 = 204 N/m and k2 = −0.233 N/m³. How much work must be done on the cord to stretch it 16.7 m?
Engineers desire to model the magnitude of the elastic force of a bungee cord using the equation
[latex]F(x)=a\left[\frac{x+9\,\text{m}}{9\,\text{m}}-{\left(\frac{9\,\text{m}}{x+9\,\text{m}}\right)}^{2}\right][/latex],
where x is the stretch of the cord along its length and a is a constant. If it takes 22.0 kJ of work to stretch the cord by 16.7 m, determine the value of the constant a.
A particle moving in the xy-plane is subject to a force
[latex]\mathbf{\overset{\to }{F}}(x,y)=\frac{(50\,\text{N}\cdot {\text{m}}^{2})(x\hat{i}+y\hat{j})}{{({x}^{2}+{y}^{2})}^{3\text{/}2}}[/latex],
where x and y are in meters. Calculate the work done on the particle by this force, as it moves in a straight line from the point (3 m, 4 m) to the point (8 m, 6 m).
A particle moves along a curved path y(x) = (10 m){1 + cos[(0.1 m⁻¹)x]}, from x = 0 to x = 10π m, subject to a tangential force of variable magnitude F(x) = (10 N) sin[(0.1 m⁻¹)x]. How much work does the force do? (Hint: Consult a table of integrals or use a numerical integration program.)
Compare the kinetic energy of a 20,000-kg truck moving at 110 km/h with that of an 80.0-kg astronaut in orbit moving at 27,500 km/h.
(a) How fast must a 3000-kg elephant move to have the same kinetic energy as a 65.0-kg sprinter running at 10.0 m/s? (b) Discuss how the larger energies needed for the movement of larger animals would relate to metabolic rates.
Estimate the kinetic energy of a 90,000-ton aircraft carrier moving at a speed of 30 knots. You will need to look up the definition of a nautical mile to use in converting the unit for speed, where 1 knot equals 1 nautical mile per hour.
Calculate the kinetic energies of (a) a 2000.0-kg automobile moving at 100.0 km/h; (b) an 80.-kg runner sprinting at 10. m/s; and (c) a 9.1 × 10⁻³¹-kg electron moving at 2.0 × 10⁷ m/s.
A 5.0-kg body has three times the kinetic energy of an 8.0-kg body. Calculate the ratio of the speeds of these bodies.
An 8.0-g bullet has a speed of 800 m/s. (a) What is its kinetic energy? (b) What is its kinetic energy if the speed is halved?
(a) Calculate the force needed to bring a 950-kg car to rest from a speed of 90.0 km/h in a distance of 120 m (a fairly typical distance for a non-panic stop). (b) Suppose instead the car hits a concrete abutment at full speed and is brought to a stop in 2.00 m. Calculate the force exerted on the car and compare it with the force found in part (a).
A car's bumper is designed to withstand a 4.0-km/h (1.1-m/s) collision with an immovable object without damage to the body of the car. The bumper cushions the shock by absorbing the force over a distance. Calculate the magnitude of the average force on a bumper that collapses 0.200 m while bringing a 900-kg car to rest from an initial speed of 1.1 m/s.
Boxing gloves are padded to lessen the force of a blow. (a) Calculate the force exerted by a boxing glove on an opponent's face, if the glove and face compress 7.50 cm during a blow in which the 7.00-kg arm and glove are brought to rest from an initial speed of 10.0 m/s. (b) Calculate the force exerted by an identical blow in the gory old days when no gloves were used, and the knuckles and face would compress only 2.00 cm. Assume the change in mass by removing the glove is negligible. (c) Discuss the magnitude of the force with glove on. Does it seem high enough to cause damage even though it is lower than the force with no glove?
Using energy considerations, calculate the average force a 60.0-kg sprinter exerts backward on the track to accelerate from 2.00 to 8.00 m/s in a distance of 25.0 m, if he encounters a headwind that exerts an average force of 30.0 N against him.
A 5.0-kg box has an acceleration of 2.0 m/s² when it is pulled by a horizontal force across a surface with μk = 0.50. Find the work done over a distance of 10 cm by (a) the horizontal force, (b) the frictional force, and (c) the net force. (d) What is the change in kinetic energy of the box?
A constant 10-N horizontal force is applied to a 20-kg cart at rest on a level floor. If friction is negligible, what is the speed of the cart when it has been pushed 8.0 m?
In the preceding problem, the 10-N force is applied at an angle of 45° below the horizontal. What is the speed of the cart when it has been pushed 8.0 m?
Compare the work required to stop a 100-kg crate sliding at 1.0 m/s and an 8.0-g bullet traveling at 500 m/s.
A wagon with its passenger sits at the top of a hill. The wagon is given a slight push and rolls 100 m down a 10° incline to the bottom of the hill. What is the wagon's speed when it reaches the end of the incline? Assume that the retarding force of friction is negligible.
An 8.0-g bullet with a speed of 800 m/s is shot into a wooden block and penetrates 20 cm before stopping. What is the average force of the wood on the bullet? Assume the block does not move.
A 2.0-kg block starts with a speed of 10 m/s at the bottom of a plane inclined at 37° to the horizontal. The coefficient of sliding friction between the block and plane is μk = 0.30. (a) Use the work-energy principle to determine how far the block slides along the plane before momentarily coming to rest. (b) After stopping, the block slides back down the plane. What is its speed when it reaches the bottom? (Hint: For the round trip, only the force of friction does work on the block.)
When a 3.0-kg block is pushed against a massless spring of force constant 4.5 × 10³ N/m, the spring is compressed 8.0 cm. The block is released, and it slides 2.0 m (from the point at which it is released) across a horizontal surface before friction stops it. What is the coefficient of kinetic friction between the block and the surface?
A small block of mass 200 g starts at rest at A, slides to B where its speed is vB = 8.0 m/s, then slides along the horizontal surface a distance 10 m before coming to rest at C. (See below.) (a) What is the work of friction along the curved surface? (b) What is the coefficient of kinetic friction along the horizontal surface?
A small object is placed at the top of an incline that is essentially frictionless. The object slides down the incline onto a rough horizontal surface, where it stops in 5.0 s after traveling 60 m. (a) What is the speed of the object at the bottom of the incline and its acceleration along the horizontal surface? (b) What is the height of the incline?
When released, a 100-g block slides down the path shown below, reaching the bottom with a speed of 4.0 m/s. How much work does the force of friction do?
A 0.22LR-caliber bullet like that mentioned in Example 7.10 is fired into a door made of a single thickness of 1-inch pine boards. How fast would the bullet be traveling after it penetrated through the door?
A sled starts from rest at the top of a snow-covered incline that makes a 22° angle with the horizontal. After sliding 75 m down the slope, its speed is 14 m/s. Use the work-energy theorem to calculate the coefficient of kinetic friction between the runners of the sled and the snowy surface.
A person in good physical condition can put out 100 W of useful power for several hours at a stretch, perhaps by pedaling a mechanism that drives an electric generator. Neglecting any problems of generator efficiency and practical considerations such as resting time: (a) How many people would it take to run a 4.00-kW electric clothes dryer? (b) How many people would it take to replace a large electric power plant that generates 800 MW?
What is the cost of operating a 3.00-W electric clock for a year if the cost of electricity is $0.0900 per kW·h?
A large household air conditioner may consume 15.0 kW of power. What is the cost of operating this air conditioner 3.00 h per day for 30.0 d if the cost of electricity is $0.110 per kW·h?
(a) What is the average power consumption in watts of an appliance that uses 5.00 kW·h of energy per day? (b) How many joules of energy does this appliance consume in a year?
(a) What is the average useful power output of a person who does 6.00 × 10⁶ J of useful work in 8.00 h? (b) Working at this rate, how long will it take this person to lift 2000 kg of bricks 1.50 m to a platform? (Work done to lift his body can be omitted because it is not considered useful output here.)
A 500-kg dragster accelerates from rest to a final speed of 110 m/s in 400 m (about a quarter of a mile) and encounters an average frictional force of 1200 N. What is its average power output in watts and horsepower if this takes 7.30 s?
(a) How long will it take an 850-kg car with a useful power output of 40.0 hp (1 hp equals 746 W) to reach a speed of 15.0 m/s, neglecting friction? (b) How long will this acceleration take if the car also climbs a 3.00-m high hill in the process?
(a) Find the useful power output of an elevator motor that lifts a 2500-kg load a height of 35.0 m in 12.0 s, if it also increases the speed from rest to 4.00 m/s. Note that the total mass of the counterbalanced system is 10,000 kg—so that only 2500 kg is raised in height, but the full 10,000 kg is accelerated. (b) What does it cost, if electricity is $0.0900 per kW·h?
(a) How long would it take a 1.50 × 10⁵-kg airplane with engines that produce 100 MW of power to reach a speed of 250 m/s and an altitude of 12.0 km if air resistance were negligible? (b) If it actually takes 900 s, what is the power? (c) Given this power, what is the average force of air resistance if the airplane takes 1200 s? (Hint: You must find the distance the plane travels in 1200 s assuming constant acceleration.)
Calculate the power output needed for a 950-kg car to climb a 2.00° slope at a constant 30.0 m/s while encountering wind resistance and friction totaling 600 N.
A man of mass 80 kg runs up a flight of stairs 20 m high in 10 s. (a) How much power is used to lift the man? (b) If the man's body is 25% efficient, how much power does he expend?
The man of the preceding problem consumes approximately 1.05 × 10⁷ J (2500 food calories) of energy per day in maintaining a constant weight. What is the average power he produces over a day? Compare this with his power production when he runs up the stairs.
An electron in a television tube is accelerated uniformly from rest to a speed of 8.4 × 10⁷ m/s over a distance of 2.5 cm. What is the power delivered to the electron at the instant that its displacement is 1.0 cm?
Coal is lifted out of a mine a vertical distance of 50 m by an engine that supplies 500 W to a conveyer belt. How much coal per minute can be brought to the surface? Ignore the effects of friction.
A girl pulls her 15-kg wagon along a flat sidewalk by applying a 10-N force at 37° to the horizontal. Assume that friction is negligible and that the wagon starts from rest. (a) How much work does the girl do on the wagon in the first 2.0 s? (b) How much instantaneous power does she exert at t = 2.0 s?
A typical automobile engine has an efficiency of 25%. Suppose that the engine of a 1000-kg automobile has a maximum power output of 140 hp. What is the maximum grade that the automobile can climb at 50 km/h if the frictional retarding force on it is 300 N?
When jogging at 13 km/h on a level surface, a 70-kg man uses energy at a rate of approximately 850 W. Using the facts that the "human engine" is approximately 25% efficient, determine the rate at which this man uses energy when jogging up a 5.0° slope at this same speed. Assume that the frictional retarding force is the same in both cases.
Additional Problems
A cart is pulled a distance D on a flat, horizontal surface by a constant force F that acts at an angle θ with the horizontal direction. The other forces on the object during this time are gravity (Fw), normal forces (FN1 and FN2), and rolling frictions Fr1 and Fr2, as shown below. What is the work done by each force?
Consider a particle on which several forces act, one of which is known to be constant in time: F⃗1 = (3 N)î + (4 N)ĵ. As a result, the particle moves along the x-axis from x = 0 to x = 5 m in some time interval. What is the work done by F⃗1?
Consider a particle on which several forces act, one of which is known to be constant in time: F⃗1 = (3 N)î + (4 N)ĵ. As a result, the particle moves first along the x-axis from x = 0 to x = 5 m and then parallel to the y-axis from y = 0 to y = 6 m. What is the work done by F⃗1?
Consider a particle on which several forces act, one of which is known to be constant in time: F⃗1 = (3 N)î + (4 N)ĵ. As a result, the particle moves along a straight path from a Cartesian coordinate of (0 m, 0 m) to (5 m, 6 m). What is the work done by F⃗1?
Consider a particle on which a force acts that depends on the position of the particle. This force is given by F⃗1 = (2y)î + (3x)ĵ. Find the work done by this force when the particle moves from the origin to a point 5 meters to the right on the x-axis.
A boy pulls a 5-kg cart with a 20-N force at an angle of 30° above the horizontal for a length of time. Over this time frame, the cart moves a distance of 12 m on the horizontal floor. (a) Find the work done on the cart by the boy. (b) What will be the work done by the boy if he pulled with the same force horizontally instead of at an angle of 30° above the horizontal over the same distance?
A crate of mass 200 kg is to be brought from a site on the ground floor to a third floor apartment. The workers know that they can either use the elevator first, then slide it along the third floor to the apartment, or first slide the crate to another location marked C below, and then take the elevator to the third floor and slide it on the third floor a shorter distance. The trouble is that the third floor is very rough compared to the ground floor. Given that the coefficient of kinetic friction between the crate and the ground floor is 0.100 and between the crate and the third floor surface is 0.300, find the work needed by the workers for each path shown from A to E. Assume that the force the workers need to do is just enough to slide the crate at constant velocity (zero acceleration). Note: The work by the elevator against the force of gravity is not done by the workers.
A hockey puck of mass 0.17 kg is shot across a rough floor with the roughness different at different places, which can be described by a position-dependent coefficient of kinetic friction. For a puck moving along the x-axis, the coefficient of kinetic friction is the following function of x, where x is in m: μ(x) = 0.1 + 0.05x. Find the work done by the kinetic frictional force on the hockey puck when it has moved (a) from x = 0 to x = 2 m, and (b) from x = 2 m to x = 4 m.
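Because μ depends on position here, the friction work over each interval is W = −∫ μ(x) m g dx. A minimal Python sketch (not part of the original problem set; g = 9.8 m/s² assumed) that evaluates the two integrals numerically:

```python
import numpy as np

m, g = 0.17, 9.8                 # puck mass (kg) and gravitational acceleration (m/s^2)

def friction_work(x0, x1, n=10001):
    """Work done by kinetic friction: W = -integral of mu(x)*m*g dx."""
    x = np.linspace(x0, x1, n)
    mu = 0.1 + 0.05 * x          # position-dependent coefficient of kinetic friction
    return -np.trapz(mu * m * g, x)

print(friction_work(0.0, 2.0))   # part (a): about -0.50 J
print(friction_work(2.0, 4.0))   # part (b): about -0.83 J
```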
A horizontal force of 20 N is required to keep a 5.0 kg box traveling at a constant speed up a frictionless incline for a vertical height change of 3.0 m. (a) What is the work done by gravity during this change in height? (b) What is the work done by the normal force? (c) What is the work done by the horizontal force?
A 7.0-kg box slides along a horizontal frictionless floor at 1.7 m/s and collides with a relatively massless spring that compresses 23 cm before the box comes to a stop. (a) How much kinetic energy does the box have before it collides with the spring? (b) Calculate the work done by the spring. (c) Determine the spring constant of the spring.
You are driving your car on a straight road with a coefficient of friction between the tires and the road of 0.55. A large piece of debris falls in front of your view and you immediately slam on the brakes, leaving a skid mark 30.5 m (100 feet) long before coming to a stop. A policeman sees your car stopped on the road, looks at the skid mark, and gives you a ticket for traveling over the 13.4 m/s (30 mph) speed limit. Should you fight the speeding ticket in court?
A crate is being pushed across a rough floor surface. If no force is applied on the crate, the crate will slow down and come to a stop. If the crate of mass 50 kg moving at speed 8 m/s comes to rest in 10 seconds, what is the rate at which the frictional force on the crate takes energy away from the crate?
Suppose a horizontal force of 20 N is required to maintain a speed of 8 m/s for a 50-kg crate. (a) What is the power of this force? (b) Note that the acceleration of the crate is zero despite the fact that a 20-N force acts on the crate horizontally. What happens to the energy given to the crate as a result of the work done by this 20-N force?
Grain from a hopper falls at a rate of 10 kg/s vertically onto a conveyor belt that is moving horizontally at a constant speed of 2 m/s. (a) What force is needed to keep the conveyor belt moving at the constant velocity? (b) What is the minimum power of the motor driving the conveyor belt?
A cyclist in a race must climb a 5° hill at a speed of 8 m/s. If the mass of the bike and the biker together is 80 kg, what must be the power output of the biker to achieve the goal?
Shown below is a 40-kg crate that is pushed at constant velocity a distance 8.0 m along a 30° incline by the horizontal force F⃗. The coefficient of kinetic friction between the crate and the incline is μk = 0.40. Calculate the work done by (a) the applied force, (b) the frictional force, (c) the gravitational force, and (d) the net force.
The surface of the preceding problem is modified so that the coefficient of kinetic friction is decreased. The same horizontal force is applied to the crate, and after being pushed 8.0 m, its speed is 5.0 m/s. How much work is now done by the force of friction? Assume that the crate starts at rest.
The force F(x) varies with position, as shown below. Find the work done by this force on a particle as it moves from x = 1.0 m to x = 5.0 m.
Find the work done by the same force in Example 7.4, between the same points, A = (0, 0) and B = (2 m, 2 m), over a circular arc of radius 2 m, centered at (0, 2 m). Evaluate the path integral using Cartesian coordinates. (Hint: You will probably need to consult a table of integrals.)
Answer the preceding problem using polar coordinates.
Find the work done by the same force in Example 7.4, between the same points, A = (0, 0) and B = (2 m, 2 m), over a circular arc of radius 2 m, centered at (2 m, 0). Evaluate the path integral using Cartesian coordinates. (Hint: You will probably need to consult a table of integrals.)
Constant power P is delivered to a car of mass m by its engine. Show that if air resistance can be ignored, the distance covered in a time t by the car, starting from rest, is given by s = (8P/9m)^{1/2} t^{3/2}.
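One route to the quoted result (a sketch, not part of the original exercise set): with constant power and negligible resistance, the work–energy theorem gives the speed, and a second integration gives the distance:

$$ \tfrac{1}{2}mv^{2} = Pt \;\Rightarrow\; v(t) = \sqrt{\frac{2Pt}{m}}, \qquad s(t) = \int_{0}^{t} v\,dt' = \frac{2}{3}\sqrt{\frac{2P}{m}}\;t^{3/2} = \left(\frac{8P}{9m}\right)^{1/2} t^{3/2}. $$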
Suppose that the air resistance a car encounters is independent of its speed. When the car travels at 15 m/s, its engine delivers 20 hp to its wheels. (a) What is the power delivered to the wheels when the car travels at 30 m/s? (b) How much energy does the car use in covering 10 km at 15 m/s? At 30 m/s? Assume that the engine is 25% efficient. (c) Answer the same questions if the force of air resistance is proportional to the speed of the automobile. (d) What do these results, plus your experience with gasoline consumption, tell you about air resistance?
Consider a linear spring, as in Figure 7.7(a), with mass M uniformly distributed along its length. The left end of the spring is fixed, but the right end, at the equilibrium position x = 0, is moving with speed v in the x-direction. What is the total kinetic energy of the spring? (Hint: First express the kinetic energy of an infinitesimal element of the spring dm in terms of the total mass, equilibrium length, speed of the right-hand end, and position along the spring; then integrate.)
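A sketch of the calculation suggested by the hint, assuming the spring (equilibrium length L) stretches uniformly so that an element at position x moves with speed u(x) = (x/L)v:

$$ dm = \frac{M}{L}\,dx, \qquad K = \int_{0}^{L} \tfrac{1}{2}\,u^{2}\,dm = \frac{Mv^{2}}{2L^{3}} \int_{0}^{L} x^{2}\,dx = \tfrac{1}{6}Mv^{2}. $$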
University Physics Volume 1 by OpenStax is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted. | CommonCrawl |
Geochemical Transactions
Discrimination of topsoil environments in a karst landscape: an outcome of a geochemical mapping campaign
Ozren Hasan, Slobodan Miko (ORCID: orcid.org/0000-0001-9191-610X), Nikolina Ilijanić, Dea Brunović, Željko Dedić, Martina Šparica Miko & Zoran Peh
Geochemical Transactions volume 21, Article number: 1 (2020)
The study presented in this work emerged as a result of a multiyear regional geochemical survey based on low-density topsoil sampling and the ensuing geochemical atlas of Croatia. This study focuses on the Dinaric part of Croatia to expound the underlying mechanisms controlling the mobilities and variations in distribution of potentially harmful elements as observed from different environmental angles. Although serious environmental degradation of the vulnerable karst soil landscapes was expected to occur chiefly through the accumulation of various heavy metals, the most acute threat materialized through soil acidification (Al-toxicity) affecting the entire Dinaric karst area. This picture surfaced from the analysis of all three investigated discriminant function models employing the abovementioned environmental criteria selected autonomously with respect to the evaluated soil geochemistry, namely, geologic setting, regional placement and land use. These models are presented not only by the characteristic discriminant-function diagrams but also by a set of appropriate mathematically derived geochemical maps disclosing the allocations of potential threats to the karst soil landscapes posed by soil acidity.
As soon as the Geochemical Atlas of Croatia (GAC) was published at the end of the last decade [27], it became obvious that the search for the regional geochemical background (at least on the territory of a single country) is its primary goal. The formation of systematic and relational geochemical GIS databases as the secondary goal would open a number of new exploratory avenues to be covered in the following years. A strong signal suggesting that the in-depth analysis of various variables (state factors) involved in the process of soil formation and development is essential for understanding the soil geochemistry had echoed from the first multi-element geochemical map of Croatia [28]. This thematic map, based on the posterior classification probabilities, employed the regional division (faithfully epitomizing contrasting bedrock lithology) as an independent grouping criterion in discriminant function analysis (DFA) of Croatian topsoil (interval 0–25 cm) geochemical data. In recent times, the probability maps, albeit originally designed for the purpose of petroleum prospecting [29], proved very useful in urban geochemical studies dealing with soil complexity on a local scale [65]. However, on a regional scale (GAC), this type of map clearly distinguished the Dinaric (DIN) from the Pannonian (PAN) part of the Croatian territory on account of the extremely high mean classification rate of 94% for the total set of data (low-density regional survey with 2521 samples in a regular 5 × 5 km grid) whereby samples were a priori classified as pertaining to either the DIN or PAN group [28]. Accordingly, the two regions, each presented as a single geodynamic unit with its own set of soil-forming factors and local disturbances, have been considered distinct entities, suggesting separate studies as the most effective approach to further geochemical and environmental investigations. Out of this dichotomy, the Dinaric region (as any karst area) was brought into sharp focus owing to its characteristic carbonate lithology, which provides the geochemical basis for an extremely fragile karst ecosystem whose soil cover is frequently exposed to erosion and pollution because of improper land use [24]. Unable to cope with specific hazards and impacts caused by neglectful human activity, the vulnerable soil landscapes on the Adriatic coast and its neighbouring mountainous area show the symptoms of increasing environmental degradation. Most of the changes that affected soils arose from growing tourism and expansion of urban areas, recent/collapsed industrial activity (including mining and quarrying), and deforestation caused either by natural (e.g., freezing rains) or anthropogenic (e.g., acid rains) effect (see [68]). Necessarily, it became imperative for future research activities, especially in this area, to fathom the various environmental factors involved in the processes described above because notwithstanding the growing intensity and scale of their use and abuse, the soils stay firmly ingrained at the foundations of human life [39] necessitating sustainable environmental management.
All things considered, the main objective of this study is to investigate the factors responsible for the characteristic geochemical signature of the modern soils developed over the Dinaric karst in the south-western exposures of Croatia. To this purpose, the topsoils collected during the multi-year geochemical mapping campaign (GAC) are examined in light of the various geological and environmental criteria [40,41,42, 51, 58, 59]. These criteria, as in other geochemical studies on similar problems in the area [34, 52, 53], are exploited in this study as the most revealing avenues through which the processes mentioned above can be most effectively understood. The criteria are autonomous with regard to the soil geochemistry such as the geological (lithological) setting, description of land use, soil types, or geographical position (with climate implications), which provide the most efficient means of a priori arrangement of the soil samples into a number of coherent and exhaustive statistical groups. In the final analysis, DFA is employed as a method of data reduction and organization, generating the models based on geochemical partitioning between the established groups. As mathematical models by their nature, they generate the structural patterns that help describe the behaviour of the observed geochemical data in process-form terms [64], notably in the form of maps—the spatial structural patterns [43].
Description of the study area
Croatia is a Mediterranean and Central European country geographically located between 13.5° and 19.5° eastern longitudes and 42.5° and 46.5° northern latitudes, extending from the vast Pannonian plain across the narrow Dinaric mountain range to the Adriatic coast. Almost half of its territory (46%) is located in its maritime (Adriatic) and mountainous (Dinaric) regions (Fig. 1). As a result, the climate is strongly controlled by relief, ranging from continental temperate in the mountains (Cfc and Dfc types) to Mediterranean and sub-Mediterranean (Csa and Cfa types) along the coastline and in the adjacent hinterland [74]. Although the Dinarides are the most important mountain range (Dinara Mt., 1831 m), only 0.11% of the mountain topography is situated above 1500 m ASL. Nevertheless, the mountain barrier strongly affects the mean annual temperatures and precipitation. Temperatures increase from west to east while precipitation varies inversely with temperatures: mountain areas are characterized by high amounts of precipitation between 1100 and 1940 mm, while average rainfall in the coastal area varies between 855 and 1253 mm [8]. Acid rains are at the heart of the problem in the mountainous zone, particularly in the Velebit Mt. area, affecting the growth of the Dinaric beech-fir forest communities [6, 37].
Geographical position of the study area with sampling points (5 × 5 km grid)
Geological setting and soils
The area of the Croatian karst Dinarides is represented by a thick succession of carbonate rocks deposited between the Late Palaeozoic (Middle Permian) and the Eocene on platforms of different ages, types and palaeogeographic settings (Fig. 2). The evolution of DIN began on an epeiric carbonate platform situated at the northern Gondwana margin with significant deposition of mixed carbonate-siliciclastic sediments during the Permian and mostly siliciclastic deposits in the Early Triassic [72]. The Middle Triassic was marked by the separation of the Adria Microplate and sedimentation of carbonate facies with locally significant volcanoclastic influences. Late Triassic dolomites and limestones represent typical deposits of the large isolated Southern Tethyan mega-platform [72] that experienced rift-induced fragmentation resulting in a number of long-lasting carbonate platforms during the Triassic-Jurassic transition. The largest among these platforms was Adriatic-Dinaric Carbonate Platform (ADCP) consisting of four tectonostratigraphic units: the Dinaric NE unit (Inner Karst), Dinaric SW unit (High Karst), Adriatic NE unit (Dalmatian Karst) and Adriatic SW unit (Istrian Karst) ([33] and references therein). The disintegration of the ADCP, characterized by ramp-type carbonate deposition along the margins and the development of flysch basins, started in the Late Cretaceous, while the Cretaceous-Palaeogene transition was marked by a period of regional emergence involving the entire platform. As dynamic tectonics continued into the Palaeogene, the platform depositional sequences were mostly under control of intense synsedimentary tectonics, sometimes deposited in the ramp-like settings. The final uplift of the entire Dinaric area as a result of collision between the Adriatic and Dinaric segments reached its culmination in the Oligocene–Miocene.
Simplified geological map of the study area
Diverse and complex bedrock geology, climate and relief result in a wide range of soils developed in the DIN landscape. Encompassing the coastal and mountainous areas in the SW part of Croatia with their typical carbonate bedrock, the landscape is dominated by various types of automorphic soils, in particular polygenetic Cambisols (eutric, dystric, chromic) developed on dolomite and limestone as well as Leptosols, Regosols, Melanosols and anthropogenic soils on flysch bedrock [7]. In the southern part of the Croatian coastal regions are areas under hydromorphic soils—Fluvisols and Gleysols—especially in the Neretva River valley and in some karst poljes [8, 11].
Field and analytical procedures
The locations of the sampling sites and the sampling density were defined by a systematic sampling design according to the ISO 10381-1 and ISO 10381-2 [30, 31] protocols, whereby each cell of a regular 5 × 5 km grid represents an area of 25 km². This scheme includes 1247 soil sampling sites covering the entire DIN part of the country (Fig. 1). Samples were collected from the centre of each cell within a tolerance circle of 15% around the central cell point. The randomness of sampling sites was defined during a pilot geochemical mapping project related to karst terrains in Croatia, which situated the initial sampling point in the heart of the Istrian Peninsula [57]. Soils were sampled with a plastic spade from five shallow pits at each site in the depth interval 0 to 25 cm, and one composite sample was prepared for every sampling location. Detailed soil sampling protocols, statistical methodology, and the choice of sampling cell size are presented in the papers by Pirc et al. [57], Prohić et al. [58, 59], and Miko et al. [40, 42] and finalized by the completion of the GAC [27], together with the laboratory protocols and a detailed field and data handling manual that basically followed the geochemical mapping protocols presented in the report by Darnley et al. [16].
Inasmuch as numerous environmental geochemical studies have shown that the optimal grain size fraction for characterization of the trace element contents of soils and sediments should not exceed 0.180 mm [16, 35, 66], the chemical analysis was carried out on fractions < 0.063 mm.
Sample preparation and analysis
The soil samples were dried and homogenized and then dissolved in a mixture of concentrated acids HF-HCl-HNO3-HClO4. Solutions were analysed by mass spectrometry using a Perkin Elmer Elan 6000 or 9000 ICP-MS [2] for a set of 41 elements: Ag, Al, As, Au, Ba, Be, Bi, Ca, Cd, Ce, Co, Cr, Cu, Fe, Hf, K, La, Li, Mg, Mn, Mo, Na, Nb, Ni, P, Pb, Rb, S, Sb, Sc, Sn, Sr, Ta, Th, Ti, U, V, W, Y, Zn, and Zr. In the process, the recovery of refractory minerals such as cassiterite, wolframite, chromite, spinel, beryl, zircon, tourmaline, magnetite and barite was incomplete after the 4-acid digestion. Moreover, due to the evaporation of HClO4, losses of As and Cr were also possible, while silica completely evaporated with HF. Mercury analysis was performed using aqua regia extraction by flameless atomic absorption spectrometry (FAAS).
The accuracy was controlled by certified geological reference materials, i.e., GXR-2, GXR 5, and SJS-1 soils from the USGS (ACME Labs). The accuracy for most elements analysed in reference soil materials was found in the range of ± 10% of the certified values. The precision was determined by repeated analyses of both certified reference samples and randomly selected soil samples (every 20th sample in the batch) with a resulting average coefficient of variation of approximately 5%.
Analyses of total organic (TOC) and inorganic (TIC) carbon abundances were performed on sieved soil samples with a Thermo Fisher Scientific Flash 2000 NC soil elemental analyser. To determine carbon in organic form, carbon measurements were carried out after removal of carbonates from the soil. Carbonates were removed by adding an aqueous acid solution (1 M HCl) to soil samples. A subset of samples was checked by X-ray diffraction (XRD) to verify that the dissolution of carbonates was complete. The total inorganic carbon was calculated by the difference between total carbon (untreated sample) and total organic carbon (sample treated with 1 M HCl).
Statistical processing and map generation
The data
The spatially continuous geochemical database of the Dinaric region includes 1459 samples of which the greater part (1247 samples) was collected in a regular grid of 5 × 5 km while the rest was taken from denser grids of 2.5 × 2.5 km and 1 × 1 km, which had been designed for the purpose of closer inspection into the geochemical landscapes of several national parks of Croatia (Brijuni, Plitvice Lakes, Risnjak and Mljet) as well as special karst features such as karst poljes [59]. The latter database was created for special purposes and was not included in the present study. The primary 5 × 5 km cell database planned for statistical analysis was formed in the ESRI® ArcInfo™ 10.2.1 GIS software and designed in such a way that each particular sample point was connected to the set of its description data consisting of coordinates, relief, slope, lithology, soil structure, soil texture, environment, potential pollution, organic matter, remarks, colour description (according to Munsell [44]), results of chemical analysis, three levels of Corine Land Cover 2012 categories, geographic region, and soil type. A reduced set of 26 (8 major and 18 trace elements) out of the total set of 41 analysed elements and soil organic carbon (TOC) was selected for further analysis and representation in this work following the recommendations of Darnley et al. [16] that elements having concentrations lower than detection limits in more than 20% of the samples should not be exploited for statistical and mapping purposes. Where measured concentration values were below the detection limit, half the detection limit was used for statistical analysis according to the Guidelines for the FOREGS-EuroGeoSurveys' Geochemical Baseline Mapping [16, 19, 62].
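A minimal Python sketch of the censoring rule mentioned above (half the detection limit substituted for values reported below it); the column names and detection limits here are hypothetical and shown for illustration only:

```python
import pandas as pd

detection_limits = {"Cd": 0.1, "Mo": 0.1, "Ag": 0.1}   # hypothetical limits, mg/kg

def impute_half_detection_limit(df, limits):
    """Replace censored values (below the detection limit) by half of that limit."""
    out = df.copy()
    for element, dl in limits.items():
        below = out[element] < dl
        out.loc[below, element] = dl / 2.0
    return out

# usage: soils = impute_half_detection_limit(soils, detection_limits)
```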
Compositional data analysis
The original dataset contains the suite of 28-part geochemical compositions used routinely in similar previous investigations based on the low-density soil sampling during the geochemical mapping of Croatia (e.g., [28, 42, 51]), namely, Al, As, Ba, Ca, Cd, Co, Cr, Cu, Fe, K, La, Mg, Mn, Na, Nb, Ni, P, Pb, Sc, Sr, Th, Ti, V, Y, Zn, and Zr but also including N and TOC in this case. These variables were selected as input data (predictors) for discriminant function analysis (DFA) in pursuance of the behaviour of potentially toxic elements in the topsoils (soil depth from 0 to 25 cm) developed on the Croatian karst. Descriptive statistics for the whole dataset—min, max, median, Q1 and Q3 quartiles, median absolute deviation (MAD) and geometric mean (g)—are summarized in Table 1 showing, however, information that is suitable only for comparison purposes since the data displayed represent relative rather than absolute values. A notorious truth that soil geochemical data represent a typical example of compositional data (CoDa) should effectively preclude their use in the raw form in any statistical analysis [22]. The nature of CoDa involves the mathematical peculiarity that all variables (component parts) in each individual case (analysed sample) are always positive and constrained to a constant sum defined a priori as 100%, 106 ppm, or 1.0. By virtue of the unit-sum constraint, CoDa can be naturally displayed only in the restricted sample space (compositional space) known as simplex and consisting of D parts or components (geochemical variables). A set of D-part composition (SD) occupies a restricted part (from zero to, say, 100%) of a D-dimensional real space (RD), forming a subset of its vectors [12, 13, 46]. The principles of the simplex as the natural sample space for compositional data are conveyed through the following expression [12, 47]:
Table 1 Descriptive statistics for raw (compositional) geochemical data
$$ S^{D} = \left\{ (x_{1}, x_{2}, x_{3}, \ldots, x_{D}) : x_{i} > 0\ (i = 1, 2, 3, \ldots, D),\ \sum_{i=1}^{D} x_{i} = \kappa \right\} $$
where κ is a constraint-sum constant; x1, x2, x3,…, xD are components of the composition x; and 1, 2, 3,…, D are parts of the composition x.
The simplex can "unfold" in the Euclidean vector space only after the proper transformation of its components. Since the treatment of the closed data seriously interferes with the methods of traditional statistics, this transformation is mandatory in order to safely apply standard statistical techniques. From a number of transformations used in the literature, the centred log-ratio transformation (clr) of raw (compositional) data, originally proposed by Aitchison [3], is used in this work. The application of the centred log-ratio is held indispensable for CoDa processing in multivariate statistical methods such as DFA since the clr preserves the original distances between corresponding compositions and allows them to be handled in a straightforward way [22, 67]. Simultaneously, the singularity problem inherent to a clr-transformed covariance matrix can be circumvented allowing DFA to operate on its reduced form, that is, not relying on the full rank of covariance [17]. Since clr-transformed data represent unbounded real vectors in a real space, Mahalanobis distances (MD) remain invariant regardless of which component may be removed from analysis [4]. Conveniently, nonessential clr-transformed variables may be amalgamated ("other") and removed from further analysis.
Clr-coefficients can be computed from the following expression:
$$ \operatorname{clr}(x) = \left( \ln \frac{x_{1}}{g(x)},\ \ln \frac{x_{2}}{g(x)},\ \ln \frac{x_{3}}{g(x)}, \ldots, \ln \frac{x_{D}}{g(x)} \right) $$
where x1, x2, x3,…, xD are components of the composition x and g(x) represents their geometric mean.
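A minimal Python sketch of the closure and clr transformation defined above (illustrative only; not the authors' original workflow):

```python
import numpy as np

def closure(X, kappa=1.0):
    """Rescale each composition (row) to the constant sum kappa."""
    X = np.asarray(X, dtype=float)
    return kappa * X / X.sum(axis=1, keepdims=True)

def clr(X):
    """Centred log-ratio: log of each part divided by the geometric mean of its row."""
    log_x = np.log(closure(X))
    return log_x - log_x.mean(axis=1, keepdims=True)

# toy 3-part compositions (two samples); each clr row sums to zero
toy = np.array([[10.0, 30.0, 60.0],
                [25.0, 25.0, 50.0]])
print(clr(toy))
```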
Discriminant function analysis—the strategy
DFA is a powerful statistical tool for approaching a great number of numeric attributes such as, in this example, the geochemical compositions of soils developed on the karst bedrock.
This technique aims to reduce problems with organization, distinction, or comparison of the vast body of data to a scale providing clearer insight into the underlying geological and environmental controls. In addition, data processed in this way can develop a mapping quality that explains the relationship among the original variables more clearly.
The aims and principles of DFA are described in detail elsewhere (e.g., [18, 20, 61]) and have been explained repeatedly by the present authors in various geochemical and environmental studies [23, 26, 28, 34, 49, 50, 52, 53, 65]. It suffices to say in this paper that DFA is a multivariate method that is particularly effective in pursuing the major sources of between-group differences which, in this study, derive their origin from the accumulation of heavy metals and possibly harmful elements (PHE) in karst soils. To this purpose, a vast body of data (1247 soil samples) must be previously organized in a manner that provides the most effective relation between the soil geochemical signature and various facets of the soil immediate environment. The definition of grouping criteria is crucial in this respect since geochemical patterns in the sampling media, as a rule, always follow the bigger picture on a regional scale—geological, environmental and other systemic constraints prevailing in the investigated area (Croatian Dinaric region). Of necessity, these principles are autonomous with regards to the analysed variables (see, e.g., [61]). One of the most obvious standards suitable for the group characterization in the present case is the underlying geology (lithology). This characterization is based on earlier research work [28] that has proved profitable, emphasizing the strong geochemical contrast between the soil geochemistry of the two regions of Croatia broadly defined as the DIN and PAN areas. Consequently, although the bedrock is predominantly carbonate in both regions, bedrock underlying the soils of the DIN was expected to be lithologically sufficiently diversified to affect the geochemical signal. Further, earlier investigations [42] strongly suggested that geographical division (zoning) may show distinctive preferences in the areal distribution of certain elements irrespective of the underlying geology. Last but not least, the recent investigations concerning the GEMAS Project (Geochemical mapping of agricultural and grazing land soil) [60] indicated the usefulness of the land cover classes, borrowed from the Corine Land Cover (CLC) inventory, in the search for environmental impacts on the geochemical composition of soils.
Following the suggestions given above, three main themes of this work are outlined with respect to the presented grouping strategy—GEOLOGY, REGION, and CLC. In each particular case, a different number of classes is derived depending on the nature of the grouping variables, which originate, at least partly, from the familiar 'clorpt' equation (climate—organisms—relief—parent material—time) that describes the role of variables (state factors) in the process of soil formation (e.g., [9, 54, 55]). This concept was later extended to include ecosystem, soil, vegetation and fauna (e.g., [10]) and finally reviewed in a recent work on soil complexity and pedogenesis [56]. The groups (Table 2) are formed according to the following sources: the GEOLOGY division is based on the general lithology of the investigated area accepted from the Geological Map of the Republic of Croatia (1:300,000; [14]) and contains five groups consisting of siliciclastic rocks (1), Quaternary sediments (2), carbonate rocks (3), carbonate clastic rocks (4) and flysch (5); the REGION division uses the map of agricultural regions and sub-regions of Croatia [8] modified at the DIN-PAN border to accommodate the distribution of predominant carbonate lithology and is composed of five groups consisting of North, Mid and South Mediterranean (NMED (1), MidMED (2), SMED (3)), mountainous (MOUNT (4)) and sub-mountainous (SubMOUNT (5)) regions; and finally, the CLC division exploited the most general level of standard CLC classification (Label 1) from the Corine Land Cover 2012 (CLC2012) raster data (European Environment Agency (EEA, http://www.eea.europa.eu)), combined into 4 groups consisting of artificial surfaces/urban or built-up areas (ARTS (1)), agricultural land (AGRS (2)), forests/forest land and semi-natural areas (FSNA (3)) and wetlands (WETL (4)) (Fig. 3). In all four cases containing 1247 valid objects in total (N), the same suite of variables (p = 28) is used.
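The discriminant models themselves were computed with standard statistical software; purely as an illustration of the workflow (clr-transformed predictors plus an a priori grouping variable), a minimal Python/scikit-learn sketch might look as follows, where names such as `X_clr` and `groups` are hypothetical:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_dfa(X_clr, groups):
    """Canonical discriminant analysis on clr-transformed compositions.

    X_clr  : (n_samples, 28) array of clr coefficients
    groups : array of a priori class labels (e.g. the five GEOLOGY classes)
    """
    lda = LinearDiscriminantAnalysis()
    scores = lda.fit_transform(X_clr, groups)     # discriminant scores DF1, DF2, ...
    return {
        "scores": scores,
        "variance_explained": lda.explained_variance_ratio_,
        "classification_rate": lda.score(X_clr, groups),   # apparent (resubstitution) rate
        "posterior_probabilities": lda.predict_proba(X_clr),
    }
```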
Table 2 Grouping criteria
CLC map (the most general level of standard CLC classification, Label 1)
A concise summary of the main results of the analysis is displayed in the joint table (Table 4) comprising the three explanatory discriminant models. The overall significance of their discrimination is tested beforehand by the appropriate multivariate tests (Table 3), revealing the vanishingly low associated probabilities at the p < 0.05 level, which are essential in order to proceed safely with computing discriminant functions (DF). In virtue of the high separation potential of the computed DFs in all discriminant models, ample parsimony was achieved by attaching a plausible geological meaning to the selection of functions explaining the highest portion of the total variance. As shown in Table 4, most of the total between-group variance (80% or more) is sufficiently explained in all models by the first two DFs. Additionally, a grouping principle accountable for a high number of pre-defined groups has proved itself quite suitable for DFA analysis as the overall classification rate is rather high, amounting to a classification efficiency of 80% and greater in the cases of the REGION and GEOLOGY criteria, respectively (Table 5). It must be noted in this regard that raising the level of CLC degrades the classification rates remarkably, reducing the values from 70% for the first level (CLC-1) with four registered groups to approximately 50% for the second level (CLC-2) containing 11 registered groups, and finally to 33% for the third level (22 recorded groups). This situation is why the base level (CLC-1) is preferred from among the different choices for the purpose of this investigation.
Table 3 Multivariate test for overall significance of discrimination
Table 4 Tests of residual roots (discriminant functions) for all three models
Table 5 Classification matrix
Functional models—labelling the discriminant functions
The labelling of DFs is essentially a transfiguration of the structural (mathematical) into functional (process) models, which in this case are essentially geochemical. The technique of labelling discriminant axes is thoroughly described elsewhere, including an explanation of why scatterplots are used instead of biplots in the CoDa analysis (e.g., [23, 52, 65]). Suffice it to say that the group centroids (means) are exploited in this work as the alternative for the host of individual objects in the construction of the scatterplots. This alternative is used in order to improve the intelligibility of representation, which may be marred by a high number of sample points occupying the reduced discriminant space. The group means are also useful later in calculating the contribution of each DF to a particular group.
Scatterplots of variable loadings and group centroids are constructed for all discriminant models applying the first two DFs that explain the greatest portion of the between-group variance. The models are compared using multiple scatterplots of the DF1 and DF2 pairs of discriminant function (orthogonal axes) (Figs. 4, 5 and 6).
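A sketch of how the centroid scatterplots of Figs. 4, 5 and 6 could be reproduced from the discriminant scores returned by the sketch above (group means projected onto DF1–DF2); again, the variable names are illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_group_centroids(scores, groups):
    """Plot group centroids (mean DF1-DF2 scores) in the reduced discriminant space."""
    groups = np.asarray(groups)
    fig, ax = plt.subplots()
    for g in np.unique(groups):
        df1, df2 = scores[groups == g, :2].mean(axis=0)
        ax.scatter(df1, df2, s=60)
        ax.annotate(str(g), (df1, df2))
    ax.axhline(0, lw=0.5, color="grey")
    ax.axvline(0, lw=0.5, color="grey")
    ax.set_xlabel("DF1")
    ax.set_ylabel("DF2")
    return ax
```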
Comparison between variables and groups in the GEOLOGY, REGION and CLC discriminant function models (clr-transformed data): scatterplots of a variable loadings and b individual objects (samples) in the reduced discriminant space of the first two discriminant functions (DF1–DF2)
GEOLOGY model
In the GEOLOGY model (generally referring to the parent-material state variable, p), the first discriminant function DF1 separates on the basis of the carbonate/siliciclastic lithological contrast of the parent material and corresponding affinities of certain elements, principally Ca and Sr, identifying the flysch bedrock as the main source of geochemical variation in the soil samples (Fig. 4). DF1 is thus essentially monopolar, emphasizing the uniqueness of the flysch group, which plots far from the intersection of the DF1 and DF2 axes. This arrangement is essentially caused by the nature of the parent material as one of the crucial state factors (variables) of soil formation. Soils that evolved on flysch (mostly Leptosols (rendzinas)) and formed on soft marls and weakly consolidated calcareous sandstones are typically "immature", that is, incipient and undeveloped as a result of the strong dynamism involving progressive and regressive pedogenesis in the process of rapid erosion and mixing of fresh parent material with the already formed regolith. This process, recognized on the Istrian Peninsula [40, 48, 51, 57, 58, 75] and elsewhere along the Adriatic coast [27], results in a "dilution effect" that places in clear relief the flysch- and carbonate-derived soils. As a rule, undeveloped (flysch-derived) soils do not exhibit typical enrichment in trace elements but stable or even depleted concentrations instead [38], while elevated contents of carbonate minerals (elevated Ca and Sr) are the result of poor drainage and leaching, which, conversely, is a norm of "mature" soils evolved over carbonate bedrock. Thus the latter, developing in a quite different pedo-environment, often end up as a repository for PHE and other trace elements whose accumulation may be additionally enhanced by human-influenced environmental processes [51].
Apart from DF1, whose primary discriminatory role is the flysch/carbonate bedrock contrast, DF2 adds another dimension to the model and explains the most of the remaining (residual) variance left after DF1 is removed. DF2 is also concerned with the flysch group, which is separated from the "siliciclastics" group (siliciclastic-derived soils) in a clearly displayed bipolar relationship. In this case, the flysch group differs from its clastic counterpart by reason of the Ni/K–Al–Ti–(…) inverse relationship revealing enrichment in Ni (followed by Co, Mn, and Cr, that is, potentially harmful elements, PHE) in the former and deficiency in the latter. On the other hand, the siliciclastics group is enriched in K and Al, most likely in the form of clay minerals and rock-forming feldspars, which are, conversely, relatively under-represented in the flysch-derived soils. The presence of the K–Al assemblage indicates in situ formation of soil clay minerals by alteration of aluminosilicate parent minerals which, simultaneously with illuviation, may be the dominant process of soil formation over the siliciclastic bedrock. Furthermore, the closeness of Fe and Al (Fig. 4) suggests the ubiquitous problem with soil acidity associated with soils developed on siliciclastic bedrock. Conversely, the PHE suite of elements in flysch-derived soils is probably of aeolian origin, accumulated relatively recently from the Raša port industrial zone and the Plomin thermal power plant in Istria [51]. The other three groups are clustered close to the DF2 axis, revealing their impartiality with regards to the geochemical signature of the overlying soils conveyed by this function. Indubitably, the central (near the axis intersection) position of the carbonate groups in both the DF1 and DF2 cases is induced by their excessive weight (84% of all observed or a priori classified data, Table 5) that, however, enables the uniqueness of the formerly described groups to be perceived in clearer relief. "Gravity" of the carbonate group is highlighted by the computed classification rates resulting in 94% correct assignments. A significant body of data (10.5%) has been relocated from other groups based on the mathematically predicted (a posteriori) classifications (Quaternary sediments and carbonate clastic rocks in particular) showing their greater affinity to the carbonate group, that is, the geochemical signature characteristics for carbonate-derived soils.
REGION model
The REGION model approximately adheres to the climate and relief as the state factors (cl, r) of soil formation. As in the former case, the first two discriminant functions are sufficiently informative in explaining the natural processes underlying the data structure (80% of the total variance). At first glance, the characteristic group pattern emerges showing SMED-MidMED-NMED group alignment with mountainous (MOUNT and SubMOUNT) groups apart in the hinterland, all mimicking the predominant northwest-southeast Dinaric direction of regional mountain ranges, albeit with the SMED and NMED groups in inverted geographical positions [cf. Fig. 1 (geographical position) and Fig. 5]. This peculiar diagonal arrangement needs clarification in both DF1 (SMED) and DF2 (NMED) domains. DF1 is bipolar and is primarily concerned with differences between the mountainous (MOUNT and SubMOUNT) and SMED soils, while DF2 shows differences between mountainous and NMED soils (Fig. 5). In the first case, elements forming the clay minerals such as Al, K and Na together with Ti, Fe, Sc and Ba are highlighted, a pattern suggesting the dominance of clay component and a possible role of Fe and Al oxy/hydroxides in sorption of PHE, especially Zn, in MOUNT/SubMOUNT soils (e.g., [45, 63, 71]). This interpretation is supported by the suggestive absence of characteristic trace elements such as Pb or Cd on the part of the latter in contrast to the soils from the southernmost coastal part of the investigated area (SMED). Additionally, SMED and MidMED soils are characterized by increased Zr and Ca, both indicating the presence of detrital heavy minerals such as zircon, external materials (of aeolian origin) [21, 73], and carbonate particles. These elements probably appear due to hindered leaching and eluviation on the characteristic carbonate lithology of undeveloped soils on flysch [51, 58]. The Ca/Al–Fe inverse relationship in DF1 reinforces the image of potential stress from Al and/or soil acidity in the MOUNT and SubMOUNT groups (Fig. 5).
DF2 is also bipolar, and it further clarifies the particular deployment of the two mountainous groups. Groups are separated in this case into the northern and central regional divisions (NMED and, less accentuated, MidMED) on account of increased contents of Cr, Ni, Co and Mn in the latter. These elements are typical PHE and pose great pressure on the natural ecosystem, unambiguously deriving their origin from anthropogenic sources represented by the numerous industrial and power plants and oil refineries in the upper Adriatic (Plomin, Rijeka) and metal processing factories in the middle Adriatic (Obrovac). The mountainous and sub-mountainous regions appear in this context as almost pristine areas except for Pb, which is also typical for the soils of the south Adriatic territory, probably for two reasons: long-range aeolian transport and high precipitation in case of the highest mountain areas and traffic in both regions. Last but not least, Ca is also among the elements associated with the SMED and MidMED groups (Fig. 5), emphasizing the NW–SE-trending increase in carbonate content in the topsoils.
CLC model
The most general level of the CLC model, referring broadly to the ecosystem, vegetation and animal properties in Jenny's extended soil functional-factorial model ([32], described in [10]), explains almost 89% of the total variability by the first two (DF1 and DF2) of three discriminant functions altogether (Table 3). The first of these is all-important (68%), and albeit bipolar, contrasting all first-level land cover classes against a single one—forest and semi-natural areas (FSNA)—it highlights the latter group which, similarly to the GEOLOGY model, gravitates to the centre of a scatterplot due to its extreme weight (over 66% of all data) with befitting 90% of correct a posteriori assignments (Fig. 6 and Table 5). Accordingly, all other groups are distinguished by their shared geochemical signal primarily lacking in those component parts that abound in the FSNA group. It comes as no surprise that FSNA in the explored model indicates that the forest ecosystem is under considerable environmental stress caused by acidic deposition and human interventions such as forest harvesting and agricultural activity. This problem is easily observed in the reciprocal position of Ca with respect to both Al and Fe resulting from increasing soil acidity (organic acids) (Fig. 6) [15, 25, 69]. Further, all other vital components also contribute to the gloomy picture of the impacted forest ecosystem showing deficiencies in clay component and soil fertilizers (K, Na and P) in the FSNA group with regards to the ARTS and AGRS groups and especially the WETL group (albeit the latter contributes merely two samples to the model), which are all relatively enriched in these elements. The close mutual positions of Al and Fe characterizing the FSNA group may well result from immobilization of organically bound Al and Fe due to precipitation, perhaps through the formation of solid Al–Si–OH, and Fe–OH phases in the coniferous forest soils [36] that predominate in the NW part of the mountainous Dinaric hinterland. Simultaneously, the presence of Pb, Cd and Zn, most likely deriving from acid rains in the elevated areas (DF2 in Fig. 5), only enhances the process of nutrient depletion and the accompanying contamination of the forest soils. On the whole, the buffering capacity of the forest soils against acidification is lower with respect to agricultural or otherwise used soils (ARTS) due to liming or other acid-neutralizing amendments [5]. To this feature must be added the problem of long-recognized chronic nitrogen deposition via atmospheric pollution resulting in N-saturation in the forest topsoil [1] (see DF1 in Fig. 6). Naturally, the relatively increased carbonate component together with the K–Na–P suite in other groups not only may result in a negative image of FSNA but also may emerge through anthropogenic impact (fertilization) that is intense in some areas (including Cu for vineyards, as on the Istrian Peninsula) (see Fig. 7c).
Discriminant score maps of a GEOLOGY, b REGION and c CLC models representing areal distribution of the first (DF1) discriminant function. Increasing influence of the respective geochemical signatures displayed in warm colours (yellow–red)
As for DF2, it provides additional insight (20%) into the group deployment separating ARTS from WETL based primarily on the high contrast associating Pb with artificial surfaces with regards to the latter. However, due to its characteristic geochemical signal, the WETL group with its mere two samples is not confounded with any other group, let alone ARTS, which on the contrary loses almost all of its objects (94%) to other groups, seriously questioning its a priori defined integrity in the investigated area. As seen from Table 5, exactly the FSNA soils accepted the majority of ARTS samples. The AGRS group with only 34% of correct assignments is also almost imperceptible as a standalone group losing the majority of its samples to FSNA. Thus, precisely the latter group profoundly characterizes the geochemical signature of the dominant land cover type in the study area, greatly altering the original CLC map (cf. Figs. 6 and 7c).
Functional models—soil geochemical maps
The key feature of DFA modelling is that it produces numeric values (discriminant scores) suitable for spatial display of parameters accounting for discrimination of investigated groups. Hence, such modelling indirectly expounds both dissemination of the group samples and internal cohesive strengths of groups on the terrain. Concerning the latter, the models also provide estimates announcing how closely the group samples hold together by virtue of the probability that any case (sample) holds on to a particular group (via posterior classification probabilities) and thus ultimately highlighting the processes (explained by the predictor variables) that account for a particular spatial pattern in the investigated area. Geochemical maps generated in this way may be very helpful, for example, in physical planning because they promptly indicate the quarters of adverse impacts on the environment produced by human activity. Karst terrains are especially vulnerable in this case, and forest ecosystems with the increasing problems of acidification, soil erosion, disruption of the water cycle and possible loss of biodiversity are particularly so. Statistically speaking, the usefulness of such maps relies both on the success rates calculated in the overall classification design and on the power of the discriminant functions to distinguish among groups with the highest accuracy possible. Accordingly, two types of geochemical maps are constructed in this work based on two different families of statistical indices generated by DFA, namely, the maps of posterior (post hoc) probabilities (regarding the specific group selected on the basis of its specific relevance) and the maps of discriminant scores (with respect to a particular DF). Both categories have already proved useful in various geochemical and environmental investigations [28, 65].
The map generation
The maps are generated using the ArcGIS™ 10.2.1 extension Spatial Analyst with the Universal Kriging method. For the purpose of map generation, the discriminant scores are divided into eight percentile classes: 5th, 10th, 25th (lower quartile), 50th (median), 75th (upper quartile), 90th and 95th percentiles, because the application of the same percentiles for all data allows comparison of the respective spatial distribution maps. Maps of posterior probabilities are divided into seven probability classes: < 0.10, 0.10–0.25, 0.25–0.50, 0.50–0.75, 0.75–0.90, 0.90–0.99 and > 0.99. In the case of posterior probability maps, percentiles are provided by the spreadsheet containing posterior probabilities generated during the computational process. The classes displayed on the geochemical maps range in colour from blue hues for the lowest via green, yellow, and orange to red for the highest values.
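A minimal sketch of the class construction described above (binning the discriminant scores into the eight percentile classes, and the posterior probabilities into the seven fixed classes) before the values are interpolated in GIS; the input arrays are assumed to come from the DFA step sketched earlier:

```python
import numpy as np

def percentile_classes(scores):
    """Eight classes bounded by the 5th, 10th, 25th, 50th, 75th, 90th and 95th percentiles."""
    cuts = np.percentile(scores, [5, 10, 25, 50, 75, 90, 95])
    return np.digitize(scores, cuts)          # class codes 0-7

def probability_classes(p):
    """Seven fixed classes for posterior probabilities, as used for the post hoc maps."""
    cuts = [0.10, 0.25, 0.50, 0.75, 0.90, 0.99]
    return np.digitize(p, cuts)               # class codes 0-6
```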
In the former case, classification efficacy serves as a powerful indicator by which the stability of the previously defined groups can be screened, weighing mathematically predicted against original (observed) classifications (Table 5).
Discriminant function vs. posterior probability maps—mapping the soil processes and validation of grouping criteria (classification rates)
Inspection of the plots showing the most informative discriminant functions (DF1 and DF2) allows cross-comparison between the models—an approach elucidating the dominant processes in control of the geochemical signature in the soils of the investigated area (Figs. 4, 5 and 6). Thus, it is readily apparent that the combination of certain elements such as Al, Fe, K, Na, Zn, Pb, and Ti signalling the presence of organically bound metals, clays and some potentially harmful elements (PHE) is regularly affiliated with particular groups in all models. This arrangement bonds siliciclastic rocks (GEOLOGY), mountainous zones (REGION), and forest and semi-natural areas (mostly woodlands, CLC) into the sphere of influence controlled by the processes that endanger the karst region at large, especially the forested hinterlands behind the coastal mountain areas. The range of the discriminant scores is displayed on the respective discriminant score maps, which can be unambiguously interpreted on the single process basis (after a selected DF). On the other hand, posterior probability maps are understood differently since their orientation is towards validation of the group integrity. Accordingly, they are designed on a single group basis, highlighting the particular group as reflecting the sum of all relevant processes (represented by respective DFs) that affect its cohesion in the investigated area, albeit each with a different contribution. The involvement of individual DFs in each individual group can be easily calculated from the relative position of the group centroid represented by its discriminant scores (group mean) on all computed discriminant functions. Note that in the case of only two discriminant functions (reduced discriminant space represented by DF1-DF2 axes on 2D scatterplots, Figs. 4, 5 and 6), this influence is specified by the distance (vector length) of the group (G) from the origin (DF1/DF2 intersection) in a simple Pythagorean relation. In a multi-function case, the length of the respective group vector is extended accordingly in n-dimensional discriminant space and can be displayed by the following equation:
$$ \mathbf{G} = \left| \left( DF_1, DF_2, \ldots, DF_n \right)^{T} \right| = \sqrt{DF_1^{2} + DF_2^{2} + \cdots + DF_n^{2}} $$
where \( DF_1, DF_2, \ldots, DF_n \) denote the coordinates (discriminant scores, i.e., point projections onto the axes) of a particular group centroid (G). The allowance of each DF for a group is then computed as \( DF(n) = DF_n^{2}/G^{2} \).
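A short numerical illustration of the centroid vector length G and the per-function allowance is given below; the centroid coordinates are invented for the example.

```python
# Group-centroid vector length G and per-function allowance DF(n) = DFn^2 / G^2;
# the centroid values are illustrative only.
import numpy as np

centroid = np.array([1.8, -0.6, 0.3])      # group means on DF1, DF2, DF3
G = np.linalg.norm(centroid)               # Pythagorean length in discriminant space
contribution = centroid**2 / G**2          # share of each DF in the group
print(G, contribution, contribution.sum()) # the contributions sum to 1
```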
In the case of the GEOLOGY model, DF1 is focused on soil maturity, the property that strongly delineates the zones of flysch development within the karst environment (Central Istria and the North Dalmatian hinterland in particular, blue hues on Fig. 7a), which are characterized by soils largely retaining properties inherited from the parent rocks and thus rich in carbonate material (Ca and Sr). Warm hues display the transition towards carbonate rocks (groups 3 and 4) (Fig. 7a) via siliciclastic rocks (1) and Quaternary sediments (2) (Fig. 4), as indicated by the decrease in carbonates and the increase in elements of poor mobility under all environmental conditions, such as Th, Nb and La, owing to the high stability of their host minerals (oxides and silicates). This pattern, characteristic of "mature" soils evolved on carbonate bedrock (especially in the coastal mountain ranges), has been recognized in earlier investigations and is highly perceptible on the mono-element geochemical maps produced during the geochemical mapping of Croatia [27]. However, from the group perspective, the GEOLOGY model may yield additional information exposed in a post hoc map constructed for a single group. This procedure is advantageous in the sense that it may bring to the fore the most prominent process underlying all models irrespective of the proposed grouping criteria, which is not necessarily represented by the first function (DF1). If that is the case, the differences between the maps might prove insignificant from the standpoint of the post hoc classification of carbonate rocks (3) (cf. Figs. 7a and 8b). On the other hand, all models exhibit characteristic cross-pollination with regard to the Al–Fe–Zn–Ti cluster that is characteristic of certain groups—siliciclastic rocks (1) in the GEOLOGY model, mountain (4) and sub-mountain (5) regions in the REGION model, and forest and semi-natural areas (3) in the CLC model. In GEOLOGY, the "correct" classification rates (p > 0.5) are almost exclusively limited to outcrops of siliciclastic rocks (1) appearing in the interior parts of the Croatian karst (warm hues on Fig. 8a). DF2, signalling acute soil acidity, is prominent in this group, with an over 82% contribution among the model DFs (see the scatterplot of group means, Fig. 4). The influence of DF1 (maturity) is just as dominant in the flysch group (5) (DF2 contributes only 16%), a case already recognized from the map of discriminant factor scores (Fig. 7a).
Maps of posterior probabilities of GEOLOGY model representing areal distribution of posterior (post hoc) probabilities computed for: a siliciclastic rocks (group 1) and b carbonate rocks (group 3). Increasing influence of soil geochemical signatures developed on siliciclastic (a) and carbonate rocks (b) displayed by the 75–100 percentile range (orange-red); increasing influence of other rock types (combined posterior probabilities of groups 2, 4 and 5) displayed by the 0–25 percentile range (green–blue). Zone of the mixing influences (yellow) displayed by the 25–75 percentile range
The REGION model exhibits sharp delineation between the groups with regard to the changing influence of elements loading on DF1 in the NW–SE direction. There is a characteristic "neutral" geochemical signal characterizing the NMED and MidMED groups (Istria and North Dalmatia with their interiors) set within the interquartile range (25th–75th percentiles) (Fig. 7b). In "regional" terms, this is the area where the Na–K–Al–Ti–Sc–Ba–(Fe) vs. Zr–Cd element clusters are well-balanced, leaving the MOUNT and SubMOUNT areas on one side and SMED on the other as "outliers". Accordingly, the former groups seem most endangered by the effects of soil acidity (Al toxicity) while the latter, for its own part, suffers increased anthropogenic inputs of Cd and Cu as well as Zr as a mark of residual soil evolved on karst bedrock. While copper is most likely directly related to viticulture developed in the southernmost part of the Croatian coastal area (Adriatic), cadmium in the SMED topsoil may partly originate from the former metal industry and former agricultural use of poor-quality fertilizers (Neretva valley). From the group standpoint (Fig. 9a), the correct assignment of samples to the MOUNT (4) and SubMOUNT (5) groups taken together (80%, Table 5) corroborates the close relationship between the distribution of mountain soils and the extent of acidification previously described. This process, represented by DF1 in the REGION model, participates with 67% in the MOUNT group and 56% in the SubMOUNT group. However, it also reveals that parts of the north Dalmatian area (MidMED) and north Adriatic islands (NMED) may experience the same problem as interior mountain areas (warm hues on Fig. 9a) being post hoc reclassified as SubMOUNT/MOUNT soils according to their geochemical signature. The sharp WSW-ENE line dividing these groups from "non-mountain" groups (MidMED and SMED) to the southeast in all probability represents the line dividing depositional environments on the east Adriatic coastal area during Pleistocene and Holocene, namely, the northern (Po River) and southern (South Adriatic) provenances ([70] and references therein). The northernmost part of the map is also excluded from the scheme, challenging the pre-established idea of some interior mountains such as Medvednica and Žumberak Mts. as parts of the integral Croatian karst territory (Dinaric) and relocating them (from the REGION-model perspective) into the Pannonian instead [28].
Maps of posterior probabilities of the REGION (a) and CLC (b) models representing the areal distribution of posterior (post hoc) probabilities computed for the combined MOUNT + SubMOUNT groups (groups 4 and 5) in the former, and FSNA (group 3) in the latter. Influences of the respective soil geochemical signatures are expressed through posterior probabilities in the same way as in Fig. 8
The CLC model most directly combines the overt Al–Fe soil toxicity with contamination by PHE (Zn, Pb and Cd in particular), seriously affecting the FSNA (3) on carbonate bedrock (Fig. 6). Thus, damage occurs to FSNA as an ecosystem, which is most evident on the coastal mountains of the north Adriatic zone (Fig. 7c, warm hues). The coastal ranges represent the first front of wet (acid rain) and dry (aerosols and gases) deposition caused by atmospheric emissions from burning fossil fuels in the onshore power plants. The second front is less affected, while some flat areas on the coast (Istria and North Dalmatia) and in the transition zone towards the Pannonian area seem almost unaffected. From the grouping perspective, the FSNA (3) is the most seriously stricken by this process. As the group occupying the major part of the investigated area (60%), it is also the group with the greatest number of correct assignments (90%, Table 5) and with practically full participation of DF1 in its development and areal distribution. The predominance of other groups carrying different geochemical signals is focused on bordering areas such as Istria and northern and southern Dalmatia, as well as areas farther north (Fig. 9b), a trait that is well matched with the distribution of DF1 displayed on the CLC map of discriminant scores (Fig. 7c).
In this work, a comprehensive investigation of the geochemical composition of topsoils developed in the Dinaric part of Croatia (DIN) was performed, with the purpose of elucidating the underlying mechanisms controlling the mobility and variations in PHE distribution perceived from various environmental perspectives, notably, the geologic setting, regional placement and diverse land use. The latter were employed in discriminant function analysis in place of grouping criteria for the analysed objects (sampled soils), independent as regards the soil geochemistry and approximately corresponding to the state factors in the familiar 'clorpt' equation for soil evolution. Three distinctive discriminant models emerged from the analysis disclosing the complex relationships among observed geochemical data, each with its own set of discriminant functions, namely, GEOLOGY, REGION and CLC. Albeit a number of multi-element soil geochemical signatures typical for investigated environmental domains were isolated, a particular geochemical signal was highlighted in all models, namely, the Fe–Al association related to siliciclastic bedrock, mountain areas (both MOUNT and SubMOUNT groups), and forest and semi-natural areas (FSNA). This result underlined the environmental challenges posed by soil acidification in the entire Dinaric karst area, though not necessarily by mobilizing the largest part of the variance in all models: acidification was the primary issue (DF1) in the models of REGION and CLC but only the secondary issue (DF2) in the case of GEOLOGY, where DF1 recognized Al–Fe clustering (along with the clay component) in soils derived from all lithologies rather as a mirror image (deficiency) of the carbonate (Ca and Sr) component that is, on the contrary, accumulated in flysch-derived soils. The main theme of the REGION model was discrimination between the soils from the NW part (MOUNT, SubMOUNT and NMED) and those from the SE part of the Croatian karst area (MidMED and SMED) based on the Ca/Al–Fe opposition as an indicator of Al and acidity stress in the former. The same image emerged in the CLC model, separating the "unexploited" areas (FSNA) affected by the same concerns from the other land-use types (AGRS, ARTS and WETL).
Two types of soil geochemical maps were constructed in this work, namely, discriminant function and posterior probability maps, in order to map the dominant (single) geochemical process (represented by DF1 in each model) and to check the integrity of a particular a priori defined group in the investigated area (the map as a multi-function model). With remarkable accuracy, the first type follows the original grouping criteria, highlighting the areas where the performance of the mapped function (process) is the highest or the lowest. This distinction is particularly manifest in the case of the REGION DF1 model map, where the geographical division (as a grouping criterion) and the spatial distribution of DF1 match almost perfectly. Additionally, the GEOLOGY DF1 model map distinctly delineates the flysch outcrops, especially in Istria, while the CLC map draws attention to the heavily forested areas (FSNA) occupying the high mountains of the coastal hinterland (Velebit Mt. and the Gorski Kotar area). The second type of map strongly depends on correct post hoc group assignments, which is why, depending on the group sizes, some "scavenging" may appear towards the smaller groups, as in the case of the MOUNT + SubMOUNT posterior probability map, where soils developed on the north Adriatic islands assume the characteristics of mountainous soils. On the other hand, the siliciclastic rock posterior probability map is an outstanding example of a highly cohesive group strictly under geological constraints (geologic bedrock) and controlled by the process defined by DF2 in the GEOLOGY model.
The data used in this paper are available in the Geochemical Atlas of the Republic of Croatia [27], which is on the website of the Croatian Geological Survey at http://www.hgi-cgs.hr; organic carbon data are available from the Croatian Environmental and Nature Protection Agency at http://www.haop.hr/hr/pristup-informacijama.
Aber JD, Ollinger SV, Driscoll CT (1997) Modeling nitrogen saturation in forest ecosystems in response to land use and atmospheric deposition. Ecol Modell 101:61–78. https://doi.org/10.1016/S0304-3800(97)01953-4
ACME Analytical Laboratories Ltd (2007) Assaying and geochemical analyses. ACME Analytical Laboratories Ltd, Vancouver, p 19
Aitchison J (1986) The statistical analysis of compositional data. Chapman and Hall, London
Barceló-Vidal C, Pawlowsky-Glahn V (1999) Letter to the Editor: comment on "Singularity and Nonnormality in the Classification of Compositional Data" by Bohling, G.C., Davis, J.C., Olea, R.A., Harff. J Math Geol 31:581–585. https://doi.org/10.1023/A:1007520124870
Baize D, van Oort F (2014) Potential harmful elements in forest soils: a pedological viewpoint. In: Bini C, Bech J (eds) Potential harmful elements, environment and human health, Chapter 4. Springer, Netherlands. https://doi.org/10.1007/978-94-017-8965-3_4
Bakšić D, Pernar N, Vukelić J, Baričević D (2008) Properties of cambisol in beech-fir forests of Velebit and Gorski Kotar. Period Biol 110(2):119–125
Bašić F (2013) The soils of Croatia, World Soil Book Series, International Union of Soil Sciences. In: Hartemink AE (ed), Springer, Berlin, pp 179
Bašić F, Bogunović M, Božić M, Husnjak S, Jurić I, Kisić I, Mesić M, Mirošević N, Romić D, Žugec I (2007) Regionalisation of Croatian agriculture. ACS 72(1):27–38
Bockheim JG, Gennadiyev AN, Hammer RD, Tandarich JP (2005) Historical development of key concept in pedology. Geoderma 124:23–36. https://doi.org/10.1016/j.geoderma.2004.03.004
Bockheim JG, Gennadiyev AN (2010) Soil-factorial models and earth-system science: a review. Geoderma 159(3–4):243–251
Bogunović M, Vidaček Ž, Husnjak S, Sraka M (1997) Namjenska pedološka karta Republike Hrvatske i njena uporaba. Agron Glas 59(5–6):363–399 (in Croatian)
Buccianti A (2013) Is compositional data analysis a way to see beyond the illusion? Comput Geosci 50:165–173. https://doi.org/10.1016/j.cageo.2012.06.012
Buccianti A, Grunsky E (2014) Compositional data analysis in geochemistry: are we sure to see what really occurs during natural processes? J Geochem Explor 141:1–5. https://doi.org/10.1016/j.gexplo.2014.03.022
Croatian Geological Survey (2009) Geološka karta Republike Hrvatske 1:300.000 (Geological Map of the Republic of Croatia) scale 1:300,000. Hrvatski geološki institut (Croatian Geological Survey), Zagreb. 1 sheet
Cronan CS, Grigal DF (1995) Use of calcium aluminium ratios as indicators of stress in forest ecosystems. J Environ Qual 24:209–226. https://doi.org/10.2134/jeq1995.00472425002400020002x
Darnley AG, Björklund A, Bølviken B, Gustavsson N, Koval V, Plant JA, Steenfelt A, Tauchid M, Xuejing X (1995) A global geochemical database for environmental and resource management-recommendations for international geochemical mapping-final report of IGCP Project 259. Earth Sciences 19. UNESCO Publishing
Daunis-i-Estadella J, Thió-Henestrosa S, Mateu-Figueras, G (2011) Two more things about compositional biplots: quality of projection and inclusion of supplementary elements. In: Egozcue JJ, Tolosana-Delgado R, Ortego MI (eds) Proceedings of the 4th international workshop on compositional data analysis (CoDaWork'11). Universitat de Girona, Girona, pp 1–14
Davis JC (1986) Statistics and data analysis in geology. Wiley, New York, p 646
De Vos W, Tarvainen T, Salminen R, Reeder S, De Vivo B, Demetriades A, Batista MJ, Marsina K, Ottesen RT, O'Connor PJ, Bidovec M, Lima A, Siewers U, Smith B, Taylor H, Shaw R, Salpeteur I, Gregorauskiene V, Halamić J, Slaninka I, Lax K, Gravesen P, Birke M, Breward N, Ander EL, Jordan G, Duris M, Klein P, Locutura J, Bel-Lan A, Pasieczna A, Lis J, Mazreku A, Gilucis A, Heitzmann P, Klaver G, Petersell V (2006) Geochemical Atlas of Europe—Part 2, interpretation of geochemical maps, additional tables, figures, maps, and related publications. Geological Survey of Finland, Espoo, p 692
Dillon WR, Goldstein M (1984) Multivariate analysis: methods and applications. Wiley, New York, p 587
Durn G, Aljinović D, Crnjaković M, Lugović B (2007) Heavy and light mineral fractions indicate polygenesis of extensive terra rossa soils in Istria, Croatia. Dev Sedimentol 58:701–737. https://doi.org/10.1016/S0070-4571(07)58026-3
Egozcue JJ, Pawlowsky-Glahn V (2006) Simplicial geometry for compositional data. In: Buccianti A, Mateu-Figueras G, Pawlowsky-Glahn V (eds) Compositional data analysis in the geosciences: from theory to practice, vol 264. Geol. Soc. Spec. Publ., London, pp 145–158. https://doi.org/10.1144/gsl.sp.2006.264.01.16
Galović L, Peh Z (2016) Mineralogical discrimination of the Pleistocene loess/paleosol sections in Srijem and Baranja, Croatia. Aeolian Res 21:151–162. https://doi.org/10.1016/j.aeolia.2016.04.006
Goldscheider N (2012) A holistic approach to groundwater protection and ecosystem services in karst terrains. AQUA mundi, Am06046, pp 117–124. http://dx.doi.org/10.4409/Am-046-12-0047
Goulding KWT (2016) Soil acidification and the importance of liming agricultural soils with particular reference to the United Kingdom. Soil Use Manag 32:390–399. https://doi.org/10.1111/sum.12270
Grizelj A, Peh Z, Tibljaš D, Kovačić M, Kurečić T (2017) Mineralogical and geochemical characteristics of Miocene pelitic sedimentary rocks from the south-western part of the Pannonian Basin System (Croatia): implications for provenance studies. Geosci Front 8(1):65–80. https://doi.org/10.1016/j.gsf.2015.11.009
Halamić J, Miko S (eds) (2009) Geochemical atlas of the Republic of Croatia. Croatian Geological Survey, Zagreb, p 87
Halamić J, Peh Z, Miko S, Galović L, Šorša A (2012) Geochemical Atlas of Croatia: environmental implications and geodynamical thread. J Geochem Explor 115:36–46. https://doi.org/10.1016/j.gexplo.2012.02.006
Harbaugh JW, Wendebourg J (1993) Risk analysis of petroleum prospects, p. 85–98. In: Davis JC, Herzfeld HC (eds) Computers in geology: 25 years of progress. Oxford Univ. Press, New York, p 298
ISO 10381-1: 2002(en). Soil quality—sampling—part 1: guidance on the design of sampling programs
Jenny H (1961) Derivation of state factor equations of soils and ecosystems. Soil Sci Soc Am J 25:385–388. https://doi.org/10.2136/sssaj1961.03615995002500050023x
Korbar T (2009) Orogenic evolution of the external Dinarides in the NE Adriatic region; a model constrained by tectonostratigraphy of Upper Cretaceous to Paleogene carbonates. Earth Sci Rev 96:296–312. https://doi.org/10.1016/j.earscirev.2009.07.004
Kovačević Galović E, Ilijanić N, Peh Z, Miko S, Hasan O (2012) Geochemical discrimination of Early Palaeogene bauxites in Croatia. Geol Croat 65:53–65. https://doi.org/10.4154/gc.2012.04
Levinson AA (1974) Introduction to exploration geochemistry. Applied Publishing, Calgary
Lundström US, van Breemen N, Bain D (2000) The podzolization process. A review. Geoderma 94:91–107. https://doi.org/10.1016/S0016-7061(99)00036-1
Martinović J (1994) Periodic characterization of forest soils acidification on Croatian karst. Agron Glas 1–2:121–130 (in Croatian)
McMartin I, Henderson PJ, Plouffe A, Knight RD (2002) Comparison of Cu–Hg–Ni–Pb concentrations in soils adjacent to anthropogenic point sources: examples from four Canadian sites. Geochem Explor Environ Anal 2:57–74. https://doi.org/10.1144/1467-787302-007
McNeill JR, Winiwarter V (2004) Breaking the sod: humankind, history, and soil. Science 304:1627–1628. https://doi.org/10.1126/science.1099893
Miko S, Durn G, Prohić E (1999) Evaluation of terra rossa geochemical baselines from Croatian karst regions. J Geochem Explor 66:173–182. https://doi.org/10.1016/S0375-6742(99)00010-2
Miko S, Peh Z, Bukovec D, Prohić E, Kastmüller Ž (2000) Geochemical baseline mapping and lead pollution assessment of soils on karst in Western Croatia. Nat Croat. 9:41–59
Miko S, Halamić J, Peh Z, Galović L (2001) Geochemical baseline mapping of soils developed on diverse bedrock from two regions in Croatia. Geol Croat 54:53–118
Mulligan J, Mitchelmore M (2009) Awareness of pattern and structure in early mathematical development. Math Educ Res J 21(2):33–49. https://doi.org/10.1007/BF03217544
Munsell Colour Company (1994) Munsell soil colour charts, revised edition. Macbeth Division of Kollmorgen, Baltimore
Parker A, Rae JE (eds) (1998) Environmental interactions of clays: clays and the environment. Springer, Berlin, p 271
Pawlowsky-Glahn V, Egozcue JJ (2006) Compositional data and their analysis: an introduction. In: Buccianti A, Mateu-Figueras G, Pawlowsky-Glahn V (eds) Compositional data analysis in the geosciences: from theory to practice, vol 264. Geol Soc Spec Publ, London, pp 1–10. https://doi.org/10.1144/gsl.sp.2006.264.01.01
Pawlowsky-Glahn V, Egozcue JJ, Tolosana-Delgado R (2007) Lecture notes on compositional data analysis. Universitat de Girona, Girona
Peh Z, Miko S, Bukovec D (2003) The geochemical background in Istrian soils. Nat Croat 12(4):195–232
Peh Z, Šajn R, Halamić J, Galović L (2008) Multiple discriminant analysis of the Drava River alluvial plain sediments. Environ Geol 55(7):1519–1535. https://doi.org/10.1007/s00254-007-1102-2
Peh Z, Halamić J (2010) Discriminant function model as a tool for classification of stratigraphically undefined radiolarian cherts in ophiolite zones. J Geochem Explor 107:30–38. https://doi.org/10.1016/j.gexplo.2010.06.003
Peh Z, Miko S, Hasan O (2010) Geochemical background in soils: a linear process domain? An example from Istria (Croatia). Environ Earth Sci 59(6):1367–1383
Peh Z, Galović EK (2014) Geochemistry of Istrian Lower Palaeogene bauxites—Is it relevant to the extent of subaerial exposure during Cretaceous times? Ore Geol Rev 63:296–306
Peh Z, Galović EK (2016) Geochemistry of Lower Palaeogene bauxites–unique signature of the tectonostratigraphic evolution of a part of the Croatian Karst. Geologia Croatica 69(2):269–279
Phillips JD (1998) On the relations between complex systems and the factorial model of soil formation (with Discussion). Geoderma 86(1–2):1–21
Phillips JD (2002) Global and local factors in earth surface systems. Ecol Model 149(3):257–272
Phillips JD (2017) Soil Complexity and Pedogenesis. Soil Sci 182:117–127
Pirc S, McNeal MJ, Lenarčič T, Prohić E, Svrkota R (1991) Geochemical mapping of carbonate terrain. Bull Inst Mettal Min Appl Earth Sci 100:B-74–B-83
Prohic E, Hausberger G, Davis JC (1997) Geochemical patterns in soils of the karst region. Croatia. J Geochem Explor 60(2):139–155
Prohic E, Peh Z, Miko S (1998) Geochemical characterization of a karst polje–an example from Sinjsko polje. Croatia. Environ Geol 33(4):263–273
Reimann C, Birke M, Demetriades A, Filzmoser P, O'Connor P (eds) (2014) Chemistry of Europe's agricultural soils–Part A. Methodology and interpretation of the GEMAS data set. Geologisches Jahrbuch (Reihe B 102), Schweizerbarth, Hannover
Rock NMS (1988) Lecture notes in earth sciences, vol 18. In: Numerical geology. Springer, Berlin
Salminen R, Batista MJ, Bidovec M, Demetriades A, De Vivo B, De Vos W, Duris M, Gilucis A, Gregorauskiene V, Halamić J, Heitzmann P, Jordan G, Klaver G, Klein P, Lis J, Locutura J, Marsina K, Mazreku A, O'Connor PJ, Olsson SÅ, Ottesen R-T, Petersell V, Plant JA, Reeder S, Salpeteur I, Sandström H, Siewers U, Steenfelt A, Tarvainen T (2005) Geochemical Atlas of Europe, Part 1, background information, methodology and maps. Geological Survey of Finland, Espoo
Sparks DL (2003) Environmental soil chemistry, 2nd edn. Academic Press
Strahler AN (1980) Systems theory in physical geography. Phys Geogr 1(1):1–27
Šorša A, Peh Z, Halamić J (2018) Geochemical mapping the urban and industrial legacy of Sisak, Croatia, using discriminant function analysis of topsoil chemical data. J Geochem Explor 187:155–167
Thalmann F, Schermann O, Schroll E, Hausberger G (1989) Geochemischer Atlas der Republik Österreich 1:1.000.000. Arbeitsgemeinschaft Voest-Alpine & Bundesversuchs und Forschungsanstalt Arsenal & Geologische Bundesanstalt, Vienna, p 141
Tolosana-Delgado R (2012) Uses and misuses of compositional data in sedimentology. Sediment Geol 280:60–79
van Beynen P, Brinkmann R, van Beynen K (2012) A sustainability index for karst environments. J Cave Karst Stud 74(2):221–234
Vanguelova EI, Hirano Y, Eldhuset TD, Sas-Paszt L, Bakker MR, Püttsepp Ü, Brunner I, Lõhmus K, Godbold D (2007) Tree fine root Ca/Al molar ratio–Indicator of Al and acidity stress. Plant Biosyst Int J Deal Asp Plant Biol 141(3):460–480
Vaniček V (2013) Pleistocene deposits in Croatian part of the Adriatic subsea. Doctoral thesis. Faculty of Science, University of Zagreb
Violante A, Cozzolino V, Perelomov L, Caporale AG, Pigna M (2010) Mobility and bioavailability of heavy metals and metalloids in soil environments. J Soil Sci Plant Nutr 10(3):268–292
Vlahović I, Tišljar J, Velić I, Matičec D (2002) The Karst Dinarides are composed of relics of a single Mesozoic platform: facts and consequences. Geol Croat 55/2:171–183
Yaalon DH (1997) Soils in the Mediterranean region: what makes them different? Catena 28(3–4):157–169
Zaninović K (ed) (2008) Klimatski atlas Hrvatske/Climate atlas of Croatia 1961–1990, 1971–2000. Državni hidrometeorološki zavod/Croatian Meteorological and Hydrological Service, Zagreb, p 200
Zupančič N, Pirc S (1999) Calcium distribution in soils and stream sediments in Istria (Croatia) and the Slovenian littoral. J Geochem Explor 65(3):205–218
This study was supported by The Ministry of Science and Education, Republic of Croatia (MZO)—Scientific Programme: The Geological Maps of the Republic of Croatia; project: Basic Geochemical Map of the Republic of Croatia. Their assistance is greatly appreciated.
The funding was obtained through the Ministry of Science and Education, Republic of Croatia (MZO), Grant Number 181-1811096-1181, Basic Geochemical Map of Croatia; z-Projects, Basic Geochemical Map of the Republic of Croatia
Croatian Geological Survey, Sachsova 2, P.O. Box 268, 10000, Zagreb, Croatia
Ozren Hasan, Slobodan Miko, Nikolina Ilijanić, Dea Brunović, Željko Dedić, Martina Šparica Miko & Zoran Peh
Ozren Hasan
Slobodan Miko
Nikolina Ilijanić
Dea Brunović
Željko Dedić
Martina Šparica Miko
Zoran Peh
Conceptualization, ZP, SM, OH and NI; methodology, ZP, MSM, DB and MS; software-GIS, OH and ŽD; organic carbon discussion, MSM and DB; writing—original draft, ZP, SM, OH and NI; visualization, OH, ŽD, NI and ZP; supervision, MS, ZP and OH; funding acquisition, MS and ZP. All authors read and approved the final manuscript.
Correspondence to Slobodan Miko.
The authors consent to the publication of the paper.
Hasan, O., Miko, S., Ilijanić, N. et al. Discrimination of topsoil environments in a karst landscape: an outcome of a geochemical mapping campaign. Geochem Trans 21, 1 (2020). https://doi.org/10.1186/s12932-019-0065-z
Geochemical mapping
Compositional data
Discriminant function analysis
BMC Infectious Diseases
Sero-prevalence of hepatitis B virus markers and associated factors among children in Hawassa City, southern Ethiopia
Bedru Argaw1,
Adane Mihret2,
Abraham Aseffa2,
Azeb Tarekegne2,
Siraj Hussen3,
Demelash Wachamo4,
Techalew Shimelis3 &
Rawleigh Howe2
BMC Infectious Diseases volume 20, Article number: 528 (2020)
Hepatitis B virus (HBV) infection is one of the major public health problems worldwide. Limited information exists about the epidemiology of HBV infection in Ethiopia. This study aimed to assess sero-prevalence of HBV markers and associated factors in children living in Hawassa City, southern Ethiopia.
A community-based cross-sectional study was conducted in Hawassa City, southern Ethiopia, from May to September 2018. A total of 471 children were included using a multistage sampling technique. Data on demographic and risk factors were gathered using structured questionnaires. Blood samples were collected and sera were screened for hepatitis B surface antigen (HBsAg), antibody to core antigen (anti-HBc), and antibody against surface antigen (anti-HBs) using enzyme-linked immunosorbent assay.
The sero-prevalence of the HBsAg, anti-HBc, and anti-HBs markers among children was 4.4, 19.5 and 20.0%, respectively. Children at higher risk of being HBsAg-positive were those who had a history of injectable medications (AOR 5.02, 95% CI: 1.14, 22.07), a family history of liver disease (AOR 6.37, 95% CI: 1.32, 30.74), an HBsAg-seropositive mother (AOR 11.19, 95% CI: 3.15, 39.67), or no history of HBV vaccination (AOR 6.37, 95% CI: 1.32, 30.74). Children from families with low monthly income, who were delivered at home, who were unvaccinated for HBV or whose mother was HBsAg-seropositive had an increased risk of having anti-HBc.
The study findings showed an intermediate endemicity of HBV infection in the study setting. The observed rate of residual HBV infection was high, and the proportion of children immunized after HBV vaccination was low. Hence, introducing a birth-dose vaccine, promoting safe injection practices and improving immunization coverage during pregnancy as part of the antenatal care package should be considered. Furthermore, governmental and non-governmental organizations should give attention to timely measures for the prevention of ongoing vertical transmission from mother to child, as well as early horizontal transmission of HBV, in Hawassa City, Ethiopia.
Hepatitis B virus (HBV) is a deoxyribonucleic acid (DNA) virus classified under the hepadnaviridae family [1, 2]. About 257 million people globally had chronic HBV infection in 2015 [3, 4]. It was estimated that about 4.5 million new cases occur annually and that 887,000 people die globally from chronic sequelae of HBV infection, including cirrhosis (52%) and liver cancer (38%) [5]. Africa has the second largest number of chronic HBV carriers after Asia and is considered a region of high endemicity [6]. About 60 million people in Africa are chronically infected with HBV, with an estimated prevalence of 6.1% in the adult population [7, 8].
HBV transmission occurs through sexual exposure, transfusion of infected blood, use of unsterilized equipment for medical procedures and sharing of sharp materials [9]. However, perinatal exposure to HBV is the most common mode of transmission in areas of medium to high endemicity [10]. Babies born to a mother who is positive for the HBsAg and HBeAg markers have a ≥90% chance of contracting the infection and becoming chronic carriers [11]. Of these children, 15 to 25% are at risk of dying from cirrhosis or liver cancer during adulthood [12, 13].
HBV is an important public health problem in Ethiopia, and the epidemiology varies with geographical area, population practice, age and mode of acquisition [10, 14, 15]. A previous national survey showed that 10.8% of young men had HBsAg, and 73.3% had at least one HBV marker [16]. A 7% sero-prevalence of HBsAg was also reported in a community-based study conducted in Addis Ababa [10].
The World Health Organization (WHO) set a strategy to reduce HBV incidence in children under five to below 1% by 2020 and the prevalence to 0.1% by 2030 [3]. In Ethiopia, HBV vaccination has been part of the national Expanded Program for Immunization (EPI) since 2007 [17]. The current immunization schedule for children under one year of age in the country includes BCG, measles, DPT-HepB-Hib (penta-valent) vaccine and OPV. HBV vaccination coverage (DPT3/Penta 3) in Ethiopia was 73 and 86% in 2007 and 2011, respectively [18]. To assess the progress of interventions, understanding the epidemiology of HBV infection and vaccination status is important. However, there have been limited data on HBV infection in Ethiopia. Therefore, this study aimed to determine the sero-prevalence of HBV markers (HBsAg, anti-HBc and anti-HBs) among children in Hawassa City, Southern Ethiopia.
The study was conducted in Hawassa City in Southern Ethiopia. Hawassa City has 8 sub-cities and is located 272 km south of Addis Ababa, Ethiopia (Hawassa City Administration, 2011). The city administration is divided into 7 urban sub-cities, containing 20 kebeles (smallest administrative unit), and one rural sub-city with 12 kebeles. According to the report of the housing and population census (CSA, 2009), the population of Hawassa City Administration in 2018 was 374,034, of which 190,757 were males and 183,277 were females [19]. The total number of children aged 5–8 years was 56,252, and the total number of households was 78,124.
A community based cross-sectional study was conducted from May to September, 2018. The study population consisted of children aged 5–8 years and who were found at home during the study period. Mothers and/or children who were sick or on antiretroviral therapy or unwilling to participate in the study were excluded. Further, children who had not received full dose vaccine were considered to have no history of vaccination and were excluded from the study.
The sample size was determined using a single population proportion formula with assumptions of 5.3% HBsAg sero-prevalence among children [20], a 95% confidence interval, and a 3% margin of error (d). The calculated sample size was 471 after considering a 10% non-response rate and a design effect of 2.
$$ \mathrm{n} = \left( Z_{1-\alpha/2} \right)^{2} \frac{P\left(1-P\right)}{d^{2}}, $$
where
n = sample size,
Z = the standard normal distribution value at the 95% CI, which is 1.96,
P = the prevalence of HBV infection, 5.3%, and
d = the margin of error, taken as 0.03 (3%). Hence,
\( \mathrm{n}=\frac{(1.96)^2(0.053)(0.947)}{(0.03)^2}\approx 214 \). The final sample size was adjusted as follows:
Final sample size = (n + 10% non-response) × design effect (2). Thus, the final sample size was calculated as n = (214 + 21.4) × 2 = 470.8 ≈ 471.
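The calculation can be reproduced as follows, using exactly the figures stated above (p = 0.053, d = 0.03, z = 1.96, 10% non-response, design effect of 2).

```python
# Worked version of the sample-size calculation described in the text.
from math import ceil

z, p, d = 1.96, 0.053, 0.03
n0 = z**2 * p * (1 - p) / d**2          # single population proportion formula, ~214.2
n0 = round(n0)                          # 214, as used in the text
n = ceil((n0 + 0.10 * n0) * 2)          # add 10% non-response, apply design effect of 2
print(n0, n)                            # -> 214, 471
```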
A multistage random sampling technique was used. Two sub-cities were selected using a simple random sampling technique in the first stage, and four kebeles were selected in the second stage. In the third stage, the total sample size was allocated proportionally to the four kebeles. In the last stage, households of eligible study participants were selected using a systematic random sampling technique, and study participants were selected using the lottery method (Fig. S1).
Pre-tested and structured questionnaires were used to collect information on socio-demographic characteristics and other associated factors. The vaccination status of children was collected from immunization cards and/or by asking mothers. Health workers conducted face-to-face interviews with mothers and gathered the data. Training on data collection, sample drawing and transportation was given, and a pretest was done to validate the questionnaire prior to the actual work.
Serological analysis
About 5 ml of blood sample was drawn from every child and mother. Samples were transported within 6 h of collection to the laboratory of the Hawassa University Comprehensive Specialized Hospital using cold box. Separated sera were stored at -80 °C and then transported to the Armauer Hansen Research Institute (AHRI) in Addis Ababa for analyses. Sera were tested for HBsAg, anti-HBc, and anti-HBs using enzyme-linked immunosorbent assay (ELISA) (Monolisa PLUS, BIO-RAD, France). Testing was performed according to the instructions of the manufacturer.
The questionnaire was prepared in English language and translated to Amharic language and then back to English. One week prior to data collection, the questionnaire was pre-tested on 5% of the calculated sample size at Adare Hospital other than the actual study sites to ensure questions were unambiguous. Prior to the beginning of data collection, all data collectors were trained by the principal investigator. The collected data were checked daily for consistency and accuracy. Standardized procedures were strictly followed during sample collection, storage and analytical process. The quality of test results was maintained using known negative and positive samples as external quality controls.
Operational definitions
HBV infected: whose blood is serologically positive for HBsAg
HBV immune following a resolved infection: whose blood is serologically HBsAg negative, anti-HBc positive and anti-HBs positive
HBV immune following vaccination: anti-HBs positive after vaccination with anti-HBs titer ≥10mIU/ml.
HBV susceptible: HBsAg negative, anti-HBc negative and anti-HBs negative
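These operational definitions amount to a simple decision rule over the three markers; a minimal sketch follows, in which the argument names and the handling of the titre threshold are assumptions made for illustration.

```python
# Classify a child's HBV serostatus from the three markers, following the
# operational definitions above; argument names are assumed for illustration.
def hbv_status(hbsag, anti_hbc, anti_hbs, anti_hbs_titre=None):
    if hbsag:
        return "HBV infected"
    if anti_hbc and anti_hbs:
        return "HBV immune following a resolved infection"
    if anti_hbs and not anti_hbc and (anti_hbs_titre is None or anti_hbs_titre >= 10):
        return "HBV immune following vaccination"   # anti-HBs titre >= 10 mIU/ml
    if not anti_hbc and not anti_hbs:
        return "HBV susceptible"
    return "Indeterminate (anti-HBc only)"

print(hbv_status(False, True, True))         # resolved infection
print(hbv_status(False, False, True, 25))    # vaccine-induced immunity
```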
Data entry, cleaning and analysis were done using SPSS version 23.0 software. Frequencies and percentages were calculated to summarize results for categorical variables. Bivariate logistic regression analysis was conducted to compute crude odds ratios (COR). Variables with a p-value < 0.25 in the bivariate analysis were candidates for multivariable logistic regression. Adjusted odds ratios (AOR) with 95% confidence intervals (CI) were used to measure the strength of the association between HBV infection and its determinant factors. A p-value < 0.05 was considered to indicate a significant association.
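A minimal sketch of this two-step modelling strategy (bivariate screening at p < 0.25, then a multivariable model) is given below using statsmodels; the variable names and the simulated data frame are assumptions, not the study data set.

```python
# Sketch of the analysis strategy: crude (bivariate) screening at p < 0.25,
# then a multivariable logistic model reporting AORs with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
factors = ["injectable_medication", "home_delivery", "mother_hbsag", "unvaccinated"]
df = pd.DataFrame({f: rng.integers(0, 2, 451) for f in factors})   # simulated 0/1 data
true_logit = -3.5 + 2.0 * df["mother_hbsag"] + 1.5 * df["unvaccinated"]
df["hbsag_positive"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

def crude_p(data, outcome, factor):
    X = sm.add_constant(data[[factor]].astype(float))
    return sm.Logit(data[outcome], X).fit(disp=0).pvalues[factor]

candidates = [f for f in factors if crude_p(df, "hbsag_positive", f) < 0.25]

X = sm.add_constant(df[candidates].astype(float))
fit = sm.Logit(df["hbsag_positive"], X).fit(disp=0)
print(np.exp(fit.params))       # adjusted odds ratios (AOR)
print(np.exp(fit.conf_int()))   # 95% confidence intervals
```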
Socio-demographic characteristics
A total of 471 study participants were enrolled, of which 451 (95.8%) were included in the analysis. The mean age of the children was 6.56 years (standard deviation [SD] 1.22; range 5–8 years), and 232 (51.4%) were boys. Among the study participants, 147 (32.6%) were from Alamura, 119 (26.4%) from Hogane-Wacho, and the remaining were from Dume and Wukiro kebeles (Table 2).
Sero-prevalence of hepatitis B virus markers (HBsAg, anti-HBc and anti-HBs)
The overall sero-prevalence of HBsAg, anti-HBc, and anti-HBs among children was 4.4% [95% confidence interval (CI): 2.8–6.6], 19.5% [95% CI: 16.1–23.4] and 20.0% [95% CI: 16.5–23.8], respectively. All children with HBsAg were also positive for anti-HBc, and 37 (8.4%) participants were positive for both anti-HBs and anti-HBc markers. Of the children, 53 (11.6%) had HBV immunity following vaccination (anti-HBs+) (Table 1). In addition, the prevalence of HBsAg among paired mothers was 7.1% [95% confidence interval (CI): 4.7–9.8] (Table 2).
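Confidence intervals of this kind can be reproduced from the raw counts. The sketch below assumes a count of 20/451 HBsAg-positive children (consistent with the reported 4.4%) and uses the Wilson method, which may differ slightly from the interval method used in the paper.

```python
# Point prevalence with a 95% confidence interval from assumed counts.
from statsmodels.stats.proportion import proportion_confint

pos, n = 20, 451                        # assumed count of HBsAg-positive children
prev = pos / n
low, high = proportion_confint(pos, n, alpha=0.05, method="wilson")
print(f"{100*prev:.1f}% (95% CI: {100*low:.1f}-{100*high:.1f})")
```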
Table 1 Sero-prevalence of hepatitis B virus markers among children at Hawassa City, southern Ethiopia in 2018. (n = 451)
Table 2 Bivariate and multivariable analysis of sociodemographic and self reported associated factors for HBsAg positivity among children at Hawassa City, Southern Ethiopia in 2018
Associated factors for hepatitis B virus infection among children
Of the study participants, 32 (7.1%) had a history of hospital admission and 36 (8%) had a history of injectable medications. The number of children born at home was 181 (40.1%), and 215 (47.7%) children were not vaccinated (Table 2).
In multivariable analysis, children who had a history of injectable medications were 5 times (AOR 5.02, 95% CI: 1.14, 22.1) more likely to have HBV infection. Children who had a family history of liver disease were 6 times more likely to be exposed to HBV infection (AOR 6.37, 95% CI: 1.32, 30.7). In addition, children born to mothers with HBsAg were at higher risk of HBV infection than children whose mothers were negative for HBsAg (AOR 11.2, 95% CI: 3.15, 39.67). Moreover, children who had no history of vaccination were 6 times more likely to have HBV infection (AOR 6.36, 95% CI: 1.32, 30.74) compared to their counterparts (Table 2).
Associated factors for hepatitis B virus exposure among children
After further analysis of the variables significantly associated in multivariable logistic regression analysis, children with a family monthly income < 2000 ETB (AOR 2.15, 95% CI: 1.25, 3.72), who were delivered at home (AOR 2.82, 95% CI: 1.58, 5.06), who had no history of vaccination (AOR 2.45, 95% CI: 1.41, 4.27) or whose mother was HBsAg-seropositive (AOR 2.59, 95% CI: 1.13, 5.96) had higher sero-positivity for anti-HBc compared to their counterparts. There was no statistically significant association of HBV infection with other self-reported risk factors, including average monthly income, history of hospital admission, age and history of day care (Table 3).
Table 3 Bivariate and multivariable analysis of sociodemographic and self-reported associated factors for anti-HBc positivity among children at Hawassa City, Southern Ethiopia, 2018
To reduce the morbidity and mortality related to HBV infection, WHO recommended that all countries should introduce the vaccine in routine immunization programs by 1995 [21]. Ethiopia introduced HBV vaccine to the national EPI program in 2007. However, no data are available on epidemiology of HBV infection in children born after the introduction of the vaccine. We assessed the sero-prevalence of HBV markers among children in Hawassa City, Ethiopia.
The sero-prevalence of HBsAg, anti-HBc, and anti-HBs positivity among children aged 5–8 years was 4.4% [95% CI: 2.7–6.4], 19.5% [95% CI: 16.1–23.4] and 20.0% [95% CI: 16.5–23.8], respectively. According to the criteria of WHO, the observed prevalence of HBsAg showed an intermediate endemicity of HBV infection in the study area [22]. The prevalence of HBsAg in this study was in agreement with results reported in similar study population in Gambia (2.8%) [23]. However, a higher prevalence was reported in Ghanaian rural children (21%) [24], and a lower prevalence was reported from Senegal (2.0%) [25] and Lao People's Democratic Republic (2.1%) [26]. Further, the prevalence of anti-HBc in this study was lower than results from Gambia (31%) [23], Senegal (27%) [25] and Ghana (75%) [24]. A higher prevalence of anti-HBs was reported in studies conducted in Egypt (57.7–67%) [27] and Senegal (58%) [25]. The difference in prevalence of HBsAg, anti-HBc and anti HBs might be due to variability of study methods employed and diverse risk factors involved in various geographical regions.
Children who had a history of using injectable medications were more likely to have HBV infection. Even though direct comparison is difficult because of differences in study populations, consistent findings were reported in China [28] and Saudi Arabia [29]. This may be due to poor adherence to universal precautions, needle sharing and the use of unsterilized equipment for medical purposes, which increase the risk of infection. Home-delivered children had higher seropositivity for anti-HBc, as also reported in studies conducted in Addis Ababa, Ethiopia [10] and Saudi Arabia [29]. This may be due to the use of unsterilized or inadequately sterilized instruments. A higher risk of HBV infection was observed in children from families with a history of liver disease compared with those with no family history of liver disease. This result was similar to studies conducted in China [28], Saudi Arabia [29] and the USA [30]. This may be due to an increased chance of coming into contact with an infected person's blood and other body fluids.
Children with a history of vaccination had a lower risk of HBV infection compared to their counterparts. This result was similar to studies conducted in Papua New Guinea [31], Uganda [32], Pakistan [33] and Nigeria [34]. The result emphasizes the importance of receiving full HBV vaccination to protect against the infection. It was also observed that the prevalence of residual HBV infection was higher among children with a lower rate of immunization (53.4%). According to the Ethiopian demographic health survey report, the coverage of DPT-HepB-Hib3 and of all vaccines in urban areas of Ethiopia was 60.5 and 48.1%, respectively [35]. This may be explained by the absence of an HBV birth dose and low maternal education. This study revealed that children who were not vaccinated showed higher sero-positivity for anti-HBc compared to their counterparts. Similar findings were reported from Uganda [32] and Nigeria [34], indicating a higher risk of exposure to HBV infection among non-vaccinated children.
Maternal HBsAg seropositivity also increased the risk of HBV infection among children. This result was similar to studies conducted in Pakistan [33], Taiwan [36] and Iran [37]. This may be due to a higher risk of HBV transmission from infected mother to child during birth, intrauterine transmission in HBeAg-positive mothers with a high HBV viral load (> 200,000 IU/ml), and horizontal transmission through sharing sharp materials (such as razor blades and needles) and exposure to body fluids [38, 39]. The observed association between mothers' HBsAg positivity and children's anti-HBc positivity was also shown in studies in Pakistan [33], Taiwan [36] and Iran [37]. This may be due to the fact that HBV-infected mothers transmit the virus to their children vertically during birth or horizontally afterwards. Children from families with low monthly income had a higher prevalence of anti-HBc, which was consistent with results from low- and middle-income countries [40]. In relation to economic circumstances, an increased risk of sharing sharp materials and lower health-care utilization might occur.
Our study had some limitations. First, we did not include testing for HBeAg, anti-HBe and HBV viral load, which are also important determinants of the HBV transmission. Second, the study was done in urban populations and may underestimate the true burden of the disease in the rural community. Third, we did not screen children for HIV infection, which might be considered as missed opportunity to assess the influence of HIV on response to HBV vaccine. Despite these shortcomings, this study provides relevant epidemiological information about HBV infection in children in the study area.
The study findings showed an intermediate endemicity of HBV infection in the study setting. Histories of injectable medication and family liver disease, lack of vaccination and maternal HBsAg sero-positivity were independent predictors of HBV infection. The observed rate of residual HBV infection was high, and the proportion of children immunized after HBV vaccination was low. Therefore, introducing the hepatitis B vaccine and possibly hepatitis B immunoglobulin within 12 h of birth for infants born to infected mothers, providing treatment for highly viremic pregnant mothers, and promoting safe injection practices would be important interventions. Furthermore, governmental and non-governmental organizations should give attention to timely measures for the prevention of ongoing vertical transmission from mother to child, as well as early horizontal transmission of HBV, in Hawassa, Ethiopia.
There are no remaining data and materials; all information is clearly presented in the main manuscript.
HBV:
HBsAg:
Hepatitis B surface antigen
Anti-HBc:
Antibody to core antigen
Anti-HBs:
Antibodies against surface antigen
HBIG:
Hepatitis B immunoglobulin
World Health Organization, Guidelines for the Prevention Care and Treatment of Persons with Chronic Hepatitis B Infection: Geneva: World Health Organization; 2015.
Sunbul M. Hepatitis B virus genotypes: global distribution and clinical importance. World J Gastroenterol: WJG 2014;20(18):5427.
WHO. World health statistics 2016: monitoring health for the SDGs sustainable development goals: World Health Organization; 2016.
GBD 2016 Disease and Injury Incidence and Prevalence Collaborators. Global, regional, and national incidence, prevalence, and years lived with disability for 328 diseases and injuries for 195 countries, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet (London, England). 2017;390(10100):1211–59.
Global Health Estimates. Deaths by cause, age, sex, by country and by region, 2000–2015. Geneva: World Health Organization. 2015;2016.
Kim H, Shin AR, Chung HH, Kim MK, Lee JS, et al. Recent trends in hepatitis B virus infection in the general Korean population. The Korean journal of internal medicine. 2013;28(4):413.
Hwang E W, Cheung R. Global epidemiology of hepatitis B virus (HBV) infection. North American Journal of Medicine and Science. 2011;4(1).
Lemoine M, Eholié S, Lacombe K. Reducing the neglected burden of viral hepatitis in Africa: strategies for a global approach. J Hepatol. 2015;62:469–76.
Jafari S, Copes R, Baharlou S, Etminan M, Buxton J. Tattooing and the risk of transmission of hepatitis C: a systematic review and meta-analysis. Int J Infect Dis. 2010;14(11):e928–e40.
Tegegne D, Desta K, Tegbaru B, Tilahun T. Seroprevalence and transmission of Hepatitis B virus among delivering women and their new born in selected health facilities, Addis Ababa, Ethiopia: a cross sectional study. BMC research notes. 2014;7(1):239.
Bakthavatchalu S. Hepatitis B Surface Antigen Carrier State among Asymptomatic Pregnant Women and Its Correlation with Vertical Transmission. International Journal of Research in Pharmacy & Science. 2012;2(3).
Camvulam N, Gotsch P, Langan RC. Caring for pregnant women and newborns with Hepatitis B or C. Am Fam Physician. 2010;82(10):1225–9.
Lam N-C V, Gotsch PB, Langan RC. Caring for pregnant women and newborns with hepatitis B or C. Am Fam Physician. 2010;82(10):1225–9.
Howell J, Lemoine M, Thursz M. Prevention of materno-foetal transmission of hepatitis B in sub-Saharan Africa: the evidence, current practice and future challenges. J Viral Hepat. 2014;21(6):381–96.
Walle F, Asrat D, Alem A, Tadesse E, Desta K. Prevalence of hepatitis B surface antigen among pregnant women attending antenatal care service at Debre-Tabor Hospital. Northwest Ethiopia Ethiop J Health Sci. 2008;17(17):13–20.
Abebe A, Nokes D, Dejene A, Enquselassie F, Messele T, et al. Seroepidemiology of hepatitis B virus in Addis Ababa, Ethiopia: transmission patterns and vaccine control. Epidemiology & Infection. 2003;131(1):757–70.
Siddiqi N, Khan A, Nisar N, Siddiqi A. Assessment of EPI (expanded program of immunization) vaccine coverage in a peri-urban area. Jpma. 2007;57(8):391–5.
Ethiopia National Expanded Programme on Immunization: Comprehensive Multi-Year Plan 2016–2020. Federal Ministry of Health, Addis Ababa 2015:11.
Lenjiso T, Tesfaye B, Michael A, Esatu T, Legessie Z. SNNPR Hawassa city Adminstration health department. GTP assessment report booklet. 2010-2012;2013:9–65.
Berhe N, Myrvamg B, Gundersen SG. Intensity of Schisosoma mansoni, Hepatitus B, age and sex predict level of hepatic periportal thichkening/ fibrosis (PPT/F), a large scale community based study in Ethiopia. AM J Trop Med Hyg. 2007;77(6):1079–86.
WHO: Hepatitis B Fact Sheet;. 2008.
World health organization. Department of Communicable Diseases Surveillance and Response: Hepatitis B virus; . Geneva: 2002.
Peto TJ, Mendy ME, Lowe Y, Webb EL, Whittle HC, et al. Efficacy and effectiveness of infant vaccination against chronic hepatitis B in the Gambia Hepatitis intervention study (1986–90) and in the nationwide immunisation program. BMC Infect Dis. 2014;14(1):7.
Cho Y, Bonsu G, Akoto-Ampaw A, Nkrumah-Mills G, Nimo JJ, et al. The prevalence and risk factors for hepatitis B surface Ag positivity in pregnant women in eastern region of Ghana. Gut and liver. 2012;6(2):235.
Coursaget P, Leboulleux D, Soumare M, le Cann P, Yvonnet B, et al. Twelve-year follow-up study of hepatitis B immunization of Senegalese infants. J Hepatol. 2004;21(2):250–4.
Komada K, Sugiyama M, Vongphrachanh P, Xeuatvongsa A, Khamphaphongphane B, et al. Seroprevalence of chronic hepatitis B, as determined from dried blood spots, among children and their mothers in Central Lao People's Democratic Republic: a multistage, stratified cluster sampling survey. Int J Infect Dis. 2015;36:21–6.
Salama II, Sami SM, Said ZNA, El-Sayed MH, El Etreby LA, et al. Effectiveness of hepatitis B virus vaccination program in Egypt: multicenter national project. World J Hepatol. 2015;7(22):2418.
Lao TT, Sahota DS, Law L-W, Cheng YK, Leung T-Y. Age-specific prevalence of hepatitis B virus infection in young pregnant women, Hong Kong special administrative region of China. Bull World Health Organ. 2014;92:782–9.
Ageely H, Mahfouz MS, Gaffar A, Elmakki E, Elhassan I, et al. Prevalence and risk factors of Hepatitis B virus in Jazan region, Saudi Arabia: cross-sectional health facility based study. Health. 2015;7:459–65.
Hontelez JA, Hahné S, Koedijk FH, de Melker HE. Effectiveness and impact of hepatitis B virus vaccination of children with at least one parent born in a hepatitis B virus endemic country: an early assessment. J Epidemiol Community Health. 2010;64(10):890–4.
Kitau R, Datta SS, Patel MK, Hennessey K, Wannemuehler K, et al. Hepatitis B surface antigen seroprevalence among children in Papua New Guinea, 2012–2013. The American journal of tropical medicine and hygiene. 2015;92(3):501–6.
Bwogi J, Braka F, Makumbi I, Mishra V, Bakamutumaho B, et al. Hepatitis B infection is highly endemic in Uganda: findings from a national serosurvey. African health sciences. 2009;9(2).
Huma Q, Najma J, Syed EA, Khalif M. The evidence of mother to child transmission of hepatitis B virus infection in Pakistan and the need for hepatitis B immunization policy change. J Pak Med Assoc. 2014;64(4):401–5.
Tabor E, Gerety RJ. Hepatitis B virus infection in infants and toddlers in Nigeria: the need for early intervention. J Pediatr. 2009;95(4):647–50.
Central Statistical Agency, Ethiopia Demographic and Health Survey; final draft report. Addis Ababa Ethiopia. ICF International Calverton, Maryland, USA. 2011.
Lin Y-C, Chang M-H, Ni Y-H, Hsu H-Y, Chen D-S. Long-term immunogenicity and efficacy of universal hepatitis B virus vaccination in Taiwan. J Infect Dis. 2003;187(1):134–8.
Lavanchy D. Hepatitis B virus epidemiology, disease burden, treatment, and current and emerging prevention and control measures. J Viral Hepat. 2004;11(2):97–107.
Zanetti AR, Van Damme P, Shouval D. The global impact of vaccination against hepatitis B: a historical overview. Vaccine. 2008;26(49):6266–73.
Amsalu A, Ferede G, Eshetie S, Tadewos A, Assegu D. Prevalence, infectivity, and associated risk factors of hepatitis B virus among pregnant women in Yirgalem hospital. Implication of Screening to Control Mother-to-Child Transmission. Journal of pregnancy: Ethiopia; 2018.
Razavi-Shearer D, Gamkrelidze I, Blach S, Brandon S, Estes C, et al. THU-097 - the global prevalence of HBsAg by age in 2016 and the case for universal treatment in low and middle income countries. J Hepatol. 2018;68:S169.
We would like to extend our deepest gratitude to the Ministry of Health for funding this study through the Clinical Research Capacity Building program at the Armauer Hansen Research Institute (AHRI). The authors are also grateful to the study participants who took part in the study.
A small amount of funding was obtained from AHRI to support a postgraduate student.
Department of Medical Laboratory Science, Hawassa College of Health Sciences, South Nations and Nationalities Peoples Region, Hawassa, Ethiopia
Bedru Argaw
Armauer Hansen Research Institute, Addis Ababa, Ethiopia
Adane Mihret, Abraham Aseffa, Azeb Tarekegne & Rawleigh Howe
School of Medical Laboratory Science, College of Medicine and Health Sciences, Hawassa University, Hawassa, Ethiopia
Siraj Hussen & Techalew Shimelis
Department of Public Health, Hawassa College of Health Sciences, South Nations and Nationalities Peoples' Region, Hawassa, Ethiopia
Demelash Wachamo
RH, AA, AM, BA and TS designed the study. BA, AT, and AM ran the laboratory work. BA, TS, AM, SH and DW performed the statistical analyses. All authors contributed to interpretation and the write-up, and approved the final version of the manuscript.
Correspondence to Siraj Hussen.
Ethical approval was obtained from the Institutional Review Board of the Armauer Hansen Research Institute and the College of Medicine and Health Sciences, Hawassa University. The purpose and importance of the study were explained to each study participant. To ensure confidentiality, code numbers were used on the questionnaires, and participants were interviewed alone to protect their privacy. Participants were not paid for the test, and the study incurred no cost to them; interviews were free of charge. Informed written consent was obtained from a parent or guardian for children to participate in the study. Participants with diagnosed or suspected cases received proper advice and were referred to the nearest public health facilities for further diagnosis, treatment and care.
The authors declare there is no competing interest.
The sampling technique from Hawassa city, Southern Ethiopia, 2019.
Argaw, B., Mihret, A., Aseffa, A. et al. Sero-prevalence of hepatitis B virus markers and associated factors among children in Hawassa City, southern Ethiopia. BMC Infect Dis 20, 528 (2020). https://doi.org/10.1186/s12879-020-05229-7
Chen, M. H., Shao, Q. M. and Ibrahim, J.G. (2000), Monte Carlo Methods in Bayesian Computation. Springer Series in Statistics, Springer-Verlag, New York. ISBN 0-387-98935-8
Lai, T.L., de la Pena, V. and Shao, Q. M. (2009), Self-normalized Processes: Theory and Statistical Applications. Springer Series in Probability and its Applications, Springer-Verlag, New York. ISBN 978-3-540-85635-1.
Chen, L.H.Y., Goldstein, L. and Shao, Q.M. (2010). Normal Approximation by Stein's Method. Springer-Verlag, New York. ISBN 978-3-642-15006-7.
Asymptotic Theory in Probability and Statistics with Applications. Higher Education Press of China, and International Press, 2007 (Edited book with T. L. Lai and L. F. Qian).
Selected Publications of Qi-Man Shao
Non-normal approximation by Stein's method of exchangeable pairs with application to the Curie-Weiss model. Ann. Appl. Probab. 21 (2011), 464-483 (with S. Chatterjee)
Large deviations for local times and intersection local times of fractional Brownian motions and Riemann-Liouville processes. Ann. Probab. 39 (2011), 729-778 (with X. Chen, W. Li and J. Rosinski)
Nonparametric estimate of spectral density functions of sample covariance matrices: A first step. Ann. Statist. 38 (2010), 3724-3750 (with B.Y. Jing, G.M. Pan and W. Zhou)
Cramer-type moderate deviation for the maximum of the periodogram with application to simultaneous tests in gene expression time series. Ann. Statist. 38 (2010), 1913-1935. (with W.D. Liu)
The asymptotic distribution and Berry-Esseen bound of a new test for independence in high dimension with an application to stochastic optimization. Ann. Appl. Probab. 18 (2008), 2337-2366 (with Z.Y. Lin and W.D. Liu)
Towards a universal self-normalized moderate deviation. Trans. Amer. Math. Soc. 360 (2008), 4263--4285 (with B.Y. Jing and W. Zhou).
Normal approximation for nonlinear statistics using a concentration inequality approach Bernoulli 13 (2007), 581-599 (with L.H.Y. Chen).
On discriminating between long-range dependence and changes in mean. Ann. Stat. 34 (2006), 1140-1165 (with I. Berkes, L. Horvath and P. Kokoszka)
Saddlepoint approximation for Student's t-statistic with no moment conditions. Ann. Statist. 32 (2004), 2679-2711 (with B.Y. Jing and W. Zhou)
On propriety of the posterior distribution and existence of the maximum likelihood estimator for regression models with covariates missing at random. J. Amer. Stat. Assoc. 99 (2004), 421-438 (with M.H. Chen and J. G. Ibrahim)
Normal approximation under local dependence. Ann. Probab. 32 (2004), 1985-2028 (with L.H.Y. Chen)
Lower tail probabilities of Gaussian processes. Ann. Probab. 32 (2004), 216-242 (with W. Li)
Self-normalized Cramer type large deviations for independent random variables. Ann. Probab. (with B. Y. Jing and Q.Y. Wang) 31 (2003), 2167-2215.
Random polynomials having few or no real zeros. J. Amer. Math. Soc. 15 (2002), 857-892 (with A. Dembo, B. Poonen and O. Zeitouni)
A normal comparison inequality and its applications. Probab. Theory Related Fields 122 (2002), 494-508 (with W.Li)
Bootstrapping the Student t-statistic. Ann. Probab. 29 (2001), 1435-1450 (with D. Mason)
Capture time of Brownian pursuits. Probab. Theory Relat. Fields 121 (2001), 30-48 (with W.V. Li)
A non-uniform Berry-Esseen bound via Stein's method. Probab. Theory Relat. Fields 120 (2001), 236-254 (with L. H. Y. Chen)
A new skewed link model for dichotomous quantal response data. J. Amer. Statist. Assoc. 94 (1999), 1172-1186. (with M.H. Chen and D.K. Dey)
Limit theorems for quadratic forms with applications to Whittle's estimate. Ann. Appl. Probab. 9 (1999), 146-187 (with L. Horváth).
Limit distributions of directionally reinforced random walks. Adv. Math. 134 (1998), 367-383 (with L. Horváth).
Monte Carlo methods for Bayesian analysis of Constrained parameter problems. Biometrika 85 (1998), 73-87 (with M.H. Chen)
Self-normalized large deviations. Ann. Probab. 25 (1997), 285-328.
On Monte Carlo methods for estimating ratios of normalizing constants. Ann. Statist. 25 (1997), 1563-1594 (with M.H. Chen)
A general Bahadur representation of M-estimators and its application to linear regression with nonstochastic designs. Ann. Statist. 24 (1996), 2608-2630 (with X. He)
Limit theorem for maximum of standardized U-statistics with an application. Ann. Statist. 24 (1996), 2266-2279 (with L. Horváth).
Large deviations and law of the iterated logarithm for partial sums normalized by the largest absolute observation. Ann. Probab. 24 (1996), 1368-1387 (with L. Horváth).
Weak convergence for weighted empirical processes of dependent sequences. Ann. Probab. 24 (1996), 2098-2127 (with H. Yu).
Maximal inequality for partial sums of $\rho$-mixing sequences. Ann. Probab. 23 (1995), 948-965.
Small ball probabilities of Gaussian fields. Probab. Theory Relat. Fields 102 (1995), 511--517 (with D. Wang).
On almost sure limit inferior for B-valued stochastic processes and applications. Probab. Theory Relat. Fields 99 (1994), 29-54 (with M. Csorgo)
Strong limit theorems for large and small increments of $\ell^p$-valued Gaussian processes. Ann. Probab. 21 (1993), 1958--1990 (with M. Csorgo)
A note on small ball probability of Gaussian processes with stationary increments. J. Theoret. Probab. 6 (1993), 595-602.
Bootstrapping the sample means for stationary mixing sequences. Stochastic Process. Appl. 48 (1993), 175-190 (with H. Yu)
An Erdos and Revesz type law of the iterated logarithm for stationary Gaussian processes. Probab. Theory Relat. Fields 94 (1992), 119-133.
On a problem of Csorgo and Revesz. Ann. Probab. 17 (1989), 809--812.
List of Publications of Qi-Man Shao:
An almost sure invariance principle for Gaussian sequences. Chinese J. Appl. Probab. Statist. 1 (1985), 43--46
Weak convergence of multidimensional empirical processes in the strong mixing case. Chinese Ann. Math. (Ser. A) 7 (1986), 547--552.
A remark on the increment of a Wiener process. J. Math. (Wuhan) 6 (1986), 175--182.
Strong approximations for partial sums of weakly dependent random variables. Sci. Sinica (Ser. A) 30 (1987), 575--587 (with C.R. Lu)
Strong approximations on lacunary trigonometric series with weights. Sci. Sinica (Ser. A) 30 (1987), 796--806
A remark on the invariance principle for $\rho$-mixing sequence. Chinese Ann. Math. Ser. A 9 (1988), 409--412.
A moment inequality and its applications. Acta Math. Sinica 31 (1988), 736--747.
On the invariance principle for $\rho$-mixing sequences of random variables. Chinese Ann. Math. (Ser. B) 10 (1989), 427--433.
On the complete convergence for $\rho$-mixing sequence. Acta Math. Sinica 32 (1989), 377--393.
On the increments of sums of independent random variables. Chinese J. Appl. Probab. Statist. 5 (1989), 117--126.
A Berry-Esseen inequality and an invariance principle for associated random fields. Chinese J. Appl. Probab. Statist. 5 (1989), 1--8.
On a problem of Csorgo and Revesz. Ann. Probab. 17 (1989), 809--812
Exponential inequalities for dependent random variables. Acta Math. Appl. Sinica (English Ser.) 6 (1990), 338--350
On the complete convergence for randomly selected sequences. Chinese Sci. Bull. 35 (1990), 93--98.
A further investigation on the complete convergence for independent random variables. Chinese J. Appl. Probab. Statist. 7 (1991), 174--188.
An investigation on conditions for complete convergence of U-statistics. Acta Math. Sinica 34 (1991), 754-769 (with C. Su)
On a lemma of Butzer and Kirschfink. Approx. Theory Appl. 7 (1991), 35-38.
Contribution to the limit theorems. In: Contemp. Math. 118 (1991), 221--237 (with Z.Y. Lin and C.R. Lu)
Criteria for limit inferior of small increments of Banach space valued stochastic processes. C. R. Math. Rep. Acad. Sci. Canada 13 (1991), 173-178 (with M. Csorgo).
A note on local and global functions of a Wiener process and some Renyi-type statistics. Studia Sci. Math. Hungar. 26 (1991), 239-259 (with M. Csorgo and B. Szyszkowicz).
Fernique type inequality and moduli of continuity for $l^2$-valued Ornstein-Uhlenbeck processes. Ann. Inst. H. Poincaré Probab. Statist. 28 (1992), 479-517 (with E. Csaki and Csorgo)
How small are the increments of partial sums of independent random variables. Sci. Sinica Ser. A 35 (1992), 675--689.
Strong limit theorems for large and small increments of $l^p$-valued Gaussian processes . Ann. Probab. 21 (1993), 1958--1990 (with M. Csorgo).
Bootstrapping the sample means for stationary mixing sequences. Stochastic Process. Appl. 48 (1993), 175-190 (with H. Yu).
On independence and dependence properties for a set of random events. Amer. Statist. 47 (1993), 112-115 (with Y.H. Wang and J. Stoyanov).
Almost sure invariance principles for mixing sequences of random variables. Stochastic Process. Appl. 48 (1993), 319-334
Convergence of integrals of uniform empirical and quantile processes. Stochastic Process. Appl. 45 (1993), 283-294 (with M. Csorgo and L. Horvath)
On the law of the iterated logarithm for infinite dimensional Ornstein-Uhlenbeck process. Canad. J. Math. 45 (1993), 159-175
Randomization moduli of continuity for $\ell^2$-norm squared Ornstein-Uhlenbeck processes. Canad. J. Math. 45 (1993), 269-283 (with M. Csorgo and Z.Y. Lin)
On complete convergence for $\alpha$-mixing sequences. Statist. Probab. Lett. 16 (1993), 279-287 .
On the invariance principle for $\rho$-mixing sequence of random variables with infinite variance. Chinese Ann. Math. Ser. B 14 (1993), 27-42.
On the weighted asymptotics of partial sums and empirical processes of independent random variables. In: Contemp. Math. 149 (1993), 139-148 (with M. Csorgo, L. Horvath and B. Szyszkowicz).
On the limiting behaviors of increments of sums of random variables without moment conditions. Chinese Ann. Math. Ser. B 14 (1993), 307-318 (with Z. Y. Lin).
Some limit theorems for multi-dimensional Brownian motion. Acta Math. Sinica 36 (1993), 53-59 (with B. Chen).
On almost sure limit inferior for B-valued stochastic processes and applications. Probab. Theory Related Fields 99 (1994), 29-54 (with M. Csorgo).
Path properties for $\ell_{\infty}$-valued Gaussian processes. Proc. Amer. Math. Soc. 121 (1994), 225-236 (with M. Csorgo and Z.Y. Lin).
Self-normalizing central limit theorem for sums of weakly dependent random variables. J. Theoret. Probab. 7 (1994), 309-338 (with M. Peligrad).
A self-normalized Erdos-Renyi type law of large numbers. Stochastic Process. Appl. 50 (1994), 187-196 (with M. Csorgo ).
On a new law of iterated logarithm of Erdos and Revesz. Acta Math. Hungar. 64 (1994), 157-181.
Random increments of a Wiener process and their applications. Studia Sci. Math. Hungar. 29 (1994), 443-480.
Kernel generated two-time parameter Gaussian processes and some of their path properties. Canad. J. Math. 46 (1994), 81-119 (with M. Csorgo and Z.Y. Lin).
A note on dichotomy theorems for integrals of stable processes. Statist. Probab. Lett. 19 (1994), 45-49 (with L. Horvath).
A new proof on the distribution of the local time of a Wiener process. Statist. Probab. Lett. 19 (1994), 285-290 (with M. Csorgo).
Studentized increments of partial sums. Sci. Sinica Ser.A 37 (1994), 265-276 (with M. Csorgo and Z.Y. Lin).
A note on the law of large numbers for directed random walks in random environments. Stoch. Process. Appl. 54 (1994), 275 - 279 (with L. Horvath).
Strong approximation theorems for independent random variables and their applications. J. Multivariate Anal. 52 (1995), 107 - 130.
Estimation of the variance of partial sums for $\rho$-mixing random variables. J. Multivariate Anal. 52 (1995), 140 - 157. (with M. Peligrad).
A note on the almost sure central limit theorem for weakly dependent random variables. Statist. Probab. Lett. 22 (1995), 131-136. (with M. Peligrad).
A Chung type law of the iterated logarithm for subsequences of a Wiener process. Stoch. Process. Appl. 59 (1995), 125-142.
Moduli of continuity for local time of Gaussian processes. Stoch. Process. Appl. 58 (1995), 1-21 (with M. Csorgo and Z.Y. Lin)
A small deviation theorem for independent random variables. Theory Probab. Appl. 40 (1995), 225-235.
On a conjecture of Révész. Proc. Amer. Math. Soc. 123 (1995), 575-582.
Small ball probabilities for Gaussian processes with stationary increments under Holder norms. J. Theoret. Probab. 8 (1995), 361-386 (with J. Kuelbs and W. V. Li).
Asymptotics for directed random walks in random environments. Acta Math. Hungar. 68 (1995), 21-36 (with L. Horvath).
Limit theorems for the union-intersection test. J. Statist. Plan. Inf. 44 (1995), 133-148 (with L. Horvath).
Limit theorems for the maximum of standardized Cesaro and Abel sums. J. Statist. Res. 29 (1995), 37-50 (with L. Horvath).
Moduli of continuity for $\ell^p$-valued Gaussian processes. Acta. Sci. Math. 60 (1995), 149-175 (with E. Csaki and M. Csorgo).
A general Bahadur representation of M-estimators and its application to linear regression with nonstochastic designs. Ann. Statist. 24 (1996), 2608-2630 (with X. He).
Limit theorem for maximum of standardized U-statistics with an application. Ann. Statist. 24 (1996), 2266-2279 (with L. Horvath).
Large deviations and law of the iterated logarithm for partial sums normalized by the largest absolute observation. Ann. Probab. 24 (1996), 1368-1387 (with L. Horvath).
Bounds and estimators of a basic constant in extreme value theory of Gaussian processes. Statist. Sinica 7 (1996), 245-257.
p-Variation of Gaussian processes with stationary increments. Studia Sci. Math. Hungar. 31 (1996), 237-247.
A note on estimation of the variance for $\rho$-mixing sequences. Statist. Probab. Lett. 26 (1996), 141 - 145. (with M. Peligrad).
A Darling-Erdos-type theorem for standardized random walk summation. Bull. London Math. Soc. 28 (1996), 425 - 432. (with L. Horvath).
Bahadur efficiency and robustness of studentized score tests. Ann. Inst. Statist. Math. 48 (1996), 295-314 (with X. He).
Darling-Erdos type theorems for sums of Gaussian variables with long range dependence. Stoch. Process. Appl. 63 (1996), 117-137 (with L. Horvath)
On Monte Carlo methods for estimating ratios of normalizing constants. Ann. Statist. 25 (1997), 607-630 (with M.H. Chen).
Estimating ratios of normalizing constants for densities with different dimensions. Statist. Sinica 7 (1997), 607-630 (with M.H. Chen).
Performance study of marginal posterior density estimation via Kullback-Leibler divergence. Test 6 (1997), 321-350 (with M.H. Chen).
Almost sure summability of partial sums. Studia Sci. Math. Hungar. 33 (1997), 45-74 (with M. Csorgo and L. Horvath).
Monte Carlo methods for Bayesian analysis of constrained parameter problems. Biometrika 85 (1998), 73-87. (with M.H. Chen)
Limit distributions of directionally reinforced random walks. Adv. Math. 134 (1998), 367-383 (with L. Horvath).
Self-normalized large deviations in vector spaces. In: Progress in Probability (Eberlein, Hahn, Talagrand, eds) Vol 43 (1998), 27-32. (with A. Dembo)
Self-normalized moderate deviations and LILs. Stochastic Process. Appl. 75 (1998), 51-65. (with A. Dembo)
Recent developments in self-normalized limit theorems. In Asymptotic Methods in Probability and Statistics (editor B. Szyszkowicz), pp. 467 - 480. Elsevier Science, 1998.
Limit theorems for quadratic forms with applications to Whittle's estimate. Ann. Appl. Probab. 9 (1999), 146-187 (with L. Horváth).
A Cramér type large deviation result for Student's t-statistic. J. Theoret. Probab. 12 (1999), 387-398.
The law of the iterated logarithm for negatively associated random variables. Stochastic Process. Appl. 83 (1999), 139-148 (with C. Su ).
Small ball estimates for Gaussian processes under Soblev type norms. J. Theoret. Probab. 12 (1999), 699-720 (with W. V. Li).
Existence of Bayes Estimators for the polychotomous quantal response models. Ann. Inst. Statist. Math. 51 (1999), 637-656 (with M.H. Chen)
Properties of prior and posterior distributions for multivariate categorical response data models. J. Multivariate Anal. 71 (1999), 277-296. (with M.H. Chen)
Monte Carlo estimation of Bayesian credible and HPD intervals. J. Computational and Graphical Statist 8 (1999), 69-92 (with M.H. Chen)
On central limit theorems for shrunken random variables. Proc. Amer. Math. Soc. 128 (2000), 261-267 (with E. Housworth).
Power prior distributions for generalized linear models. J. Statist. Plan. Inf. 84 (2000), 121-137 (with M.H. Chen and J. G. Ibrahim)
On parameters of increasing dimensions. J. Multivariate Anal. 73 (2000), 120-135. (with X. He)
A comparison theorem on moment inequalities between negatively associated and independent random variables. J. Theoret. Probab. 13 (2000), 343-356.
A note on the Gaussian correlation conjecture. In: Progress in Probability (E. Giné, D. M. Mason and J.A. Wellner, eds), Vol. 47 (2000), 163-172 (with W.V. Li)
Strong laws for $L_p$-norms of empirical and related processes. Periodica Mathematica Hungarica 41 (2000), 35-69 (with I. Berkes, L. Horváth and J. Steinebach).
Propriety of posterior distribution for dichotomous quantal response models with general link functions. Proc. Amer. Math. Soc. 129 (2001),293-302. (with M.H. Chen)
Gaussian processes: inequalities, small ball probabilities and applications. In: Stochastic Processes: Theory and Methods. Handbook of Statistics, Vol. 19 (2001), Edited by C.R. Rao and D. Shanbhag, 533-597 (with W.V. Li)
A non-uniform Berry-Esseen bound via Stein's method. Probab. Theory Relat. Fields 120 (2001), 236-254 (with L. H. Y. Chen)
Bayesian analysis of binary data using skewed logit models. Calcutta Stat. Assoc. Bulletin 51 (2001), 11-30 (with M.H. Chen and D.K. Dey)
Do stock returns follow a finite variance distribution? Ann. Economics and Finance 2 (2001), 467-486 (with H.Yu and J. Yu)
Bootstrapping the Student t-statistic. Ann. Probab. 29 (2001), 1435-1450(with D. Mason)
How big is the uniform convergence interval of the strong law of large numbers? Stochastic Analysis and Applications Vol. 2 (editors: Y.J. Cho, J.K Kim and Y.K. Choi) (2002), 141-148.
A normal comparison inequality and its applications. Probab. Theory Relat. Fields 122 (2002), 494-508 (with W. V. Li)
Partition-weighted Monte-Carlo estimation. Ann. Inst. Statist. Math. 54 (2002), 338--354 (with Chen M.H.)
Sufficient and necessary conditions on the propriety of posterior distributions for generalized linear mixed models. Sankhyā Ser. B 64 (2002), 57-85 (with M.H. Chen and D. Xu)
Prior Elicitation for Model Selection and Estimation in Generalized Linear Mixed Models. J. Statist. Planning Inference 111 (2003), 57-76 (with M.H. Chen, M.H., J.G. Ibrahim and R.E. Weiss)
A Gaussian correlation inequality and its application to the existence of small ball constant. Stoch. Process. Appl. 107 (2003), 269-287.
Self-normalized Cramér type large deviations for independent random variables. Ann. Probab. 31 (2003), 2167-2215 (with B. Y. Jing and Q.Y. Wang)
A Monte Carlo gap test in computing HPD regions. Development of Modern Statistics and Related topics (editors: H. Zhang and J. Huang) Series in Biostatistics Vol. 1. pp. 38--52. World Sci. Publishing, River Edge, NJ, 2003 (with M.H. Chen, X. He and H. Xu)
Asymptotic distributions and Berry-Esseen bounds for sums of record values. Electronic J. Probab. 9 (2004), 544-559 (with C. Su and G. Wei)
Recent progress on self-normalized limit theorems. Probability, Finance and Insurance, 50-68. Edited by T.L. Lai, H. Yang, and S.P. Yung, World Scientific, 2004.
On Helgason's number and Khintchine's inequality. Asymptotic Methods in Stochastics: Festschrift for Miklós Csörgő. Edited by L. Horvath and B. Szyszkowicz, Fields Institute Communications, Vol. 44, pp. 195-201, 2004 (with K.A. Ross)
Normal approximation. In: An Introduction to Stein's Method (A.D. Barbour and L.H.Y. Chen eds). Lecture Notes Series, Institute for Mathematical Sciences, NUS, Vol. 4, p. 1-59. World Scientific, 2005 (with L.H.Y. Chen)
An explicit Berry - Esseen bound for the Student t-statistic via Stein's method. In: Stein's Method and Applications (A.D. Barbour and L.H.Y. Chen eds). Lecture Notes Series, Institute for Mathematical Sciences, NUS, Vol. 5, p. 143-155. World Scientific, 2005.
Almost sure convergence of the Bartlett estimator. Period. Math. Hungar. 51 (2005), 11-25 (with I. Berkes, L. Horváth and P. Kokoszka)
On discriminating between long-range dependence and changes in mean. Ann. Stat. 34 (2006), 1140-1165 (with I. Berkes, L. Horváth and P. Kokoszka)
Posterior propriety and computation for the Cox regression model with applications to missing covariates. Biometrika 93 (2006), 791--807 (with M.H. Chen and J.G. Ibrahim)
The Berry-Esseen bound for character ratios. Proc. Amer. Math. Soc. 134 (2006), 2153--2159 (with Z. Su)
Large and moderate deviations for Hotelling's $t^2$-statistic. Elect. Comm. in Probab 11 (2006), 149-159 (with A. Dembo)
A calibrated scenario generation model for heavy-tailed risk factors. IMA J. Management Math. 17 (2006), 289-303 (with H. Wang and H. Yu)
A note on the self-normalized large deviation. Chinese J. Appl. Probab. Statist. 22 (2006), 358--362
Normal approximation for nonlinear statistics using a concentration inequality approach Bernoulli 13 (2007), 581-599 (with L.H.Y. Chen)
Limiting distributions of the non-central $t$-statistic and their applications to the power of $t$-tests under non-normality. Bernoulli 13 (2007), 346-364 (with V. Bentkus, B.Y. Jing, and W. Zhou)
Limit theorems for permutations of empirical processes. Stoch. Process. Appl. 117 (2007), 1870--1888 (with L. Horváth)
Self-normalized limit theorems in probability and statistics. In: Asymptotic Theory in Probability and Statistics with Applications (Editors: T.L. Lai, L.F. Qian and Q.M. Shao) (2007), 3-43 (with T.L. Lai).
Cramér type large deviations for the maximum of self-normalized sums. E. Journal Probab. 14 (2009), 1181-1197 (with Z.S. Hu and Q.Y. Wang)
Berry-Esseen bounds for projections of coordinate symmetric random vectors. E. Comm. Probab. 14 (2009), 474 - 485 (with L. Goldstein)
A note on directed polymers in gaussian environments. E. Comm. Probab. 14 (2009), 518 - 528 (with Y. Hu)
Asymptotic distributions of non-central studentized statistics. Science in China (Ser A) 52 (2009), 1262 - 1284 (with R.M. Zhang)
Maximum likelihood inference for the Cox regression model with applications to missing covariates. J. Multivariate Anal. 100 (2009), 2018--2030 (with M.H. Chen and J.G. Ibrahim)
On the longest length of consecutive integers. Acta Math. Sin. (Engl. Ser.) 27 (2011), 329 - 338 (with M. Zhao)
Alexa Jakob
Visualizing Sound with Light: Color Organ Circuit Design
23 May 2021 • Blog
For our final project this semester, my friend Nathaniel and I built a color organ, a multi-sensory device that produces a light show correlated to music or sound. The concept of "color music" first emerged in the 1500s, and color organs have since become more complex and responsive to the intricacies of a piece of music, reaching a popularity peak in the 1970's disco scene (hence our project's unofficial name - Seventies-Style Sight and Sound System, or S^5).
We did this entirely using analog circuitry, transforming an input audio signal into LED lighting corresponding to the signal's frequency band. Key features include volume control, implemented using an operational amplifier, which allows the user to adjust the volume of the output signal, and a series of active bandpass filters that separate the audio into three bands: low, medium, and high frequency.
Block Diagram of the SSSSS
There are four main components: input signal, amplification, filtering, and output. The input signal was inserted with a 3.5 mm stereo plug (so you can play a song on a phone or computer). In order to produce the output LED display, the signal was amplified with a transistor amplifier and filtered into its different compositional frequencies with bandpass filters. Depending on the filter, the signal was sent to the red LEDs (low frequencies), yellow LEDs (medium frequencies), or green LEDs (high frequencies) to be displayed. A color organ requires that the user have the full experience of listening to music in addition to watching flashing LED lights, so the signal was also sent to a speaker. Volume control was also implemented so that the user can adjust the volume level to their preference.
Full S^5 Schematic
Audio signals generally have very small amplitudes, and amplifying the signals ensures they can be adequately picked up by later stages of the design. One amplifier alone was not found to be sufficient, so a second amplifier stage was added. To prevent loading effects, a buffer in an emitter follower configuration was used in between stages. An operational amplifier could produce a better amplifier (more linear, and with higher gain) - but we didn't have enough of that component. Darlington transistors could replace one of the transistors, for even higher gain, and to simplify construction. But this configuration is good enough.
Transistor Amplifier Stage
We used a common emitter with degeneration topology for its high gain and favorable output impedance. A DC blocking capacitor was connected to the signal to prevent loading, and two biasing resistors were employed to maintain a DC operating point of 5 volts at the base. A design using degeneration was chosen to improve the gain's linearity across a wide range of audio frequencies. Although degeneration decreases the gain, it allows for better input impedance and linearity of gain, especially for higher frequencies. The 2N3904 transistor was selected for its availability and high amplification factor Beta. The gain of one common emitter amplifier with degeneration was calculated as follows:
$A_v=\frac{-\beta R_C}{r_\pi + R_E(\beta+1)}=5$
With both stages, the resulting gain is 25, allowing for considerable amplification. In between each amplifier and buffer stage, a blocking capacitor was added to the output to remove any DC bias. An additional buffer was added between the last amplifier stage and the filters to prevent any further loading effects.
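As a quick sanity check on those numbers, here's an illustrative sketch in Python (not the script we actually used): the values of beta, r_pi, R_C and R_E below are assumed typical values, not the exact components on our board.

```python
# Rough gain estimate for the two-stage common-emitter amplifier with degeneration.
# All component values here are assumed/typical, not the exact ones on the board.
beta = 100       # typical 2N3904 current gain
r_pi = 2.5e3     # small-signal input resistance r_pi (ohms), assumed
R_C  = 1.0e3     # collector resistor (ohms), assumed
R_E  = 180.0     # emitter (degeneration) resistor (ohms), assumed

A_stage = -beta * R_C / (r_pi + R_E * (beta + 1))   # gain of one CE stage with degeneration
A_total = A_stage ** 2                               # two buffered stages -> gains multiply

print(f"per-stage gain: {A_stage:.1f}, total gain: {A_total:.1f}")
# with these assumed values: roughly -4.8 per stage, ~23 overall,
# close to the gain of 5 per stage and ~25 total quoted above
```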
The amplified signal then needed to be filtered into one of three bands: low, medium, and high frequency. Operational amplifiers were used to make active bandpass filters. Active filters tend to be more sensitive and have a tighter passband than passive filters, which was valuable since the frequency ranges were small. The topology we used controlled the passband more tightly than simpler topologies. Along with gain calculations, we used a MATLAB program to select resistor and capacitor values.
Implemented filter
The filtered signal is then sent to the LEDs for display. If the signal has passed through the low-frequency filter, it is sent to the red LED; for the medium-frequency, it is sent to the yellow LED; and the high frequencies are sent to the green LEDs.
Because an LED is a diode, it has a forward "on" voltage of approximately 0.6 volts. If a signal does not pass through the bandpass filter, its maximum voltage will surely be less than 0.6 volts - therefore, the LED is not active, and it will not be illuminated. Signals that do pass through the bandpass filter are large enough to activate the LED (amplitude larger than 1.2 V), and as such will be represented.
Passing the signal output directly to the LED is possible because the opamp has very low output impedance - an ideal opamp's output impedance is zero. A series resistor is added to protect the LED from overcurrent. LEDs are not limited by frequencies in the audio range, meaning that it will be able to switch on and off with the input signal. This allows the lights to flash literally in tune with the music. However, given that the human eye can only process about 60 frames per second, the flashing will be so fast that the user will not notice, and simply see a constantly lit bulb.
I realize that driving an LED with an audio signal is bad practice - if I had more time I'd probably design a circuit that uses a transistor as a switch to turn on the LED, which would make the circuit more reliable since you could tune the threshold value depending on the transistor's characteristics.
Volume control was also added to optimize the audio experience. An inverting amplifier was implemented using an operational amplifier with a potentiometer as the feedback resistor, Rf.
Inverting amplifier configuration using an operational amplifier
The gain equation for such an inverting amplifier is:
$A_v=-\frac{Rf}{Rin}$
Because the gain is negative and the input wave is a sine wave, the amplifier produces a 180 degree phase shift, which does not affect the final output. Utilizing the inverting amplifier, however, allows for varying Rf down to zero using a potentiometer, which means the system could be muted. A non-inverting amplifier, with gain always greater than one, would not have allowed for muting or lower volumes than the input. An active amplifier was selected for its high gain, linearity, and low output impedance. Only four LF411 operational amplifiers were available; with three being used for bandpass filters, this was the most important place to use the final opamp.
The LF411 operational amplifier was not able to produce enough current to drive the 8𝛺 speaker. A push/pull buffer was included in the output stage in order to drive the speaker. To avoid crossover distortion, the feedback in the inverting amplifier was connected to the output of the push/pull buffer rather than to the output of the operational amplifier.
The amplifier is noninverting, because each common emitter stage is inverting. With a sine wave input of 100mV peak to peak, the output of the amplifier produces an amplitude of 3.2V peak to peak, or a gain of 32. This ensures small-voltage audio signals are amplified enough so they can be manipulated in the color amplifier later.
Amplifier input (green) and output (blue)
After each audio signal is amplified, it is passed to three bandpass filters that split the signal into three frequency bands: low, medium, and high. The gain of each bandpass filter was measured with LTSpice's AC analysis tool.
AC Analysis Results for Low-Frequency Bandpass Filter. Center frequency: 318 Hz. Most gain from 250 - 400 Hz.
AC Analysis Results for Medium-Frequency Bandpass Filter.
AC Analysis Results for High-Frequency Bandpass Filter. (The gain for the highest frequency filter isn't nearly as much as the low and medium filters, which remains an issue.)
At first glance, there appears to be a beat frequency initially, which would severely impact circuit performance with varying amplitude.
Apparent Beats Frequency at Bandpass Filter Output
We tested this by incrementing the frequency of the sine wave input as in the schematic below, which mimics a sequence of varying frequencies or notes in a piece of music.
It turns out that the transient only occurs on startup, for a very short and negligible amount of time, not when changing frequencies. As a result, the overall performance of the circuit will not be affected.
Result of Test with Sine Wave Input Sequence
Constructed Device
The device was constructed on a single breadboard
Active Components in the Built Circuit
Light/Frequency band correspondence in the built circuit
Circuit I/O
In order to get a sound signal from the laptop into the circuit, we modified an old headphone cable. The cable fit into the audio jack on the laptop, and the plan was to solder solid-core wires to the wires in the cable and connect the solid-core wires to the breadboard. This was complicated, since the wires were enamel coated and the enamel needed to be melted off using a soldering iron (don't try this at home…).
Some discussion
Designing accurate bandpass filters was a challenge. As previously discussed, the filter topology was chosen to achieve a tighter passband. The R1 and R2 input resistors were chosen such that R1 was slightly larger than R2, and the R3 resistor was chosen to be very large. The center frequency of such a filter can be calculated as:
$f_r=\frac{1}{2\pi\sqrt{R3(R1 // R2)C1C2}}$
And R1, R2, and R3 were chosen accordingly, and tweaked based on simulation.
Where possible, C1 and C2 were set to have the same value for simplicity, but this was only possible for low and medium frequencies. For the high frequency, using the calculated value would have resulted in a high-pass filter with a linear relationship between frequency and gain rather than a bandpass filter. As a result, the values for C1 and C2 were differentiated and selected using simulation - C1 was chosen to be 10nF, and C2 560pF.
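To illustrate that selection step, here is a small Python sketch (rather than our MATLAB script) that just evaluates the center-frequency formula above. The specific resistor and capacitor values are assumptions, chosen only so that the result lands near the ~318 Hz center frequency reported for the low band; they are not necessarily the values on the board.

```python
import math

def center_frequency(R1, R2, R3, C1, C2):
    """Center frequency of the active bandpass filter, per the formula above:
    f_r = 1 / (2*pi*sqrt(R3 * (R1 || R2) * C1 * C2))."""
    R_par = R1 * R2 / (R1 + R2)          # R1 || R2
    return 1.0 / (2.0 * math.pi * math.sqrt(R3 * R_par * C1 * C2))

# Assumed example values (R1 slightly larger than R2, R3 very large),
# picked so the result lands near the ~318 Hz low-band center frequency.
R1, R2, R3 = 2.2e3, 2.0e3, 220e3         # ohms
C1 = C2 = 33e-9                          # farads (equal caps, as for the low and medium bands)

print(f"low-band center frequency: {center_frequency(R1, R2, R3, C1, C2):.0f} Hz")
# prints roughly 318 Hz with these assumed values
```

Evaluating the formula like this only pins down the center frequency; the bandwidth and gain still need to be checked in simulation, as described above.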
Parts Selection
Parts availability was a challenge. We did get a kit of components (since the class was virtual), but it only had about a quarter of the capacitors listed on the parts summary, which would have made our design impossible, even after making edits to reflect the parts that were actually in the kit. Fortunately, Nathaniel was able to pick up additional capacitors from his high school electronics teacher. Only four operational amplifiers were provided, and these were best used for the active bandpass filters and volume control. Although the kit's parts summary listed four LF411s, there were only three LF411s and one LM741. The LM741, first designed and manufactured in 1968, has been deprecated because modern operational amplifiers have better specifications. In order to prevent any discrepancies between filters, the LF411s were used for the bandpass filters, which is an application requiring more precision than volume control, for which the LM741 was used. If more operational amplifiers were available, the transistor amplifier could have been replaced, which would improve circuit performance because of the linearity and high gain of op-amp-based amplifiers.
Power Supply Integrity
Finally, during the construction process, it was found that the speaker played a sound with distortions, and it was unclear why the distortions were occurring. Adding a decoupling capacitor between ground and Vdd and ground and Vss resolved this issue, and the schematic was updated to reflect this change.
This project was interesting because we were able to connect the EE concepts we learned in class with visualization of sound. We successfully implemented frequency-selective lights and volume control, despite limitations on parts (both intended, such as the limited number of operational amplifiers, and unintended, such as the missing capacitors), and successfully constructed an adapter to allow the color organ to play music from any device that includes a standard headphone jack.
What's next? Now that we know that the circuit works, it could be cute to design and lay out some PCBs (and give them as gifts?). Building a circuit on a PCB is more stable than a breadboard, since it gets rid of a lot of the faulty connections and parasitic capacitance. The LEDs for each band could also be better tuned in gain to equalize the amount of time each LED is lit: the low frequencies are very prevalent in music, but high frequencies, not so much.
A more sophisticated color organ could also achieve different light flashing effects, for example by implementing variations of volume or duration using pulse width modulation in conjunction with the Arduino software board or a 555 timer, allowing the lights to fade in and out at the beginning and ends of notes in an appealing manner.
And since you've made it to the end, you're probably wondering what it looks like! Videos here:
https://youtu.be/z6yldUKYbNA
https://youtu.be/0g2T4mcMrfw
Scalarized black holes
Jose Luis Blázquez-Salcedo,
Burkhard Kleihaus &
Jutta Kunz (ORCID: orcid.org/0000-0001-7990-8713)
Arabian Journal of Mathematics (2021)
Black holes represent outstanding astrophysical laboratories to test the strong gravity regime, since alternative theories of gravity may predict black hole solutions whose properties may differ distinctly from those of general relativity. When higher curvature terms are included in the gravitational action as, for instance, in the form of the Gauss–Bonnet term coupled to a scalar field, scalarized black holes result. Here we discuss several types of scalarized black holes and some of their properties.
The existence of black holes in the Universe, following gravitational collapse, is a genuine prediction of general relativity (GR) [61]. However, in GR their properties are highly constrained, when one assumes the standard model of particle physics for the allowed matter fields and considers astrophysically relevant black holes. The expectation of astrophysical black holes being (basically) uncharged, then leads to the conclusion that they are (in good approximation) all described by the Kerr family of rotating black holes, i.e., asymptotically flat black hole solutions of the vacuum Einstein equations.
Kerr black holes are uniquely characterized by their mass and their angular momentum (see, e.g. [21]) and thus they carry no hair. All their multipole moments are given in terms of these two quantities. Also, Kerr black holes are subject to a bound on their angular momentum, which is reached in the extremal limit. Beyond this bound only naked singularities reside. The no-hair hypothesis that astrophysical black holes are indeed described by the family of Kerr black holes is tested in current and future observations [19].
So far all observations are in agreement with the Kerr hypothesis, be it the motion of stars around the supermassive black hole at the center of the Milky Way (2020 Nobel Prize), the observation of gravitational waves from black hole mergers (2017 Nobel Prize) or the observation of the shadow of the supermassive black hole at the center of M87 (EHT collaboration). However, it is expected that GR will be superseded by a new gravitational theory, that will include quantum mechanics, and that might as well explain (part of) the cosmological dark components, dark matter and dark energy. Overviews of alternative theories of gravity are found, for instance, in [11, 34, 64, 73].
A particularly attractive type of alternative theories of gravity are theories that contain higher curvature terms in the form of the Gauss–Bonnet invariant, as they arise, for instance, in string theories [36, 54, 79]. Since in four dimensions the Gauss–Bonnet term corresponds to a topological term, that does not contribute to the field equations, this term has to be coupled to another field in order to make its presence count. In string theory this field is a scalar field, a so-called dilaton, that arises with a specific exponential coupling function to the Gauss–Bonnet term in the low energy limit of string theory. We will refer to these theoretically well motivated theories in the following as Einstein–dilaton–Gauss–Bonnet (EdGB) theories. EdGB theories do not allow for GR black hole solutions. Instead all EdGB black hole solutions carry dilatonic hair [4, 5, 14, 15, 23, 37, 43,44,45,46, 50, 53, 59, 60, 71, 76, 78].
In recent years, other interesting coupling functions for the scalar field have been suggested [1, 2, 27, 66, 68, 69]. In the following we will call such theories simply Einstein–scalar–Gauss–Bonnet (EsGB) theories. Like the EdGB theories, the EsGB theories possess the attractive features that they give rise to second order equations of motion, and do not possess Ostrogradsky instabilities and ghosts [20, 42, 47].
By allowing for more general coupling functions of the scalar field a new interesting phenomenon was observed: curvature induced spontaneous scalarization of black holes [1,2,3, 6,7,8, 13, 16,17,18, 22, 24, 27, 29, 30, 40, 52, 55,56,57, 66, 67]. In that case an appropriate choice of coupling function allows the GR black holes to remain solutions of the EsGB equations, while, at critical values of the coupling, GR black holes develop a tachyonic instability where new branches of spontaneously scalarized black holes arise. Moreover, in the case of rotation, there can exist two types of spontaneously scalarized black holes. Those, that arise simply from the static black holes in the limit of slow rotation, and those that arise only for fast rotation and are termed spin induced spontaneously scalarized black holes [12, 26, 31, 32, 39, 41, 77].
This paper is organized as follows: In Sect. 2 we briefly recall some properties of black holes in GR. We discuss black holes in EdGB theories in Sect. 3 and black holes in EsGB theories in Sect. 4. In both cases we will address first the static and then the rotating black holes, where in the EsGB case we then differentiate between those, that emerge continuously from the static limit, and the spin induced ones. We end in Sect. 5 with our conclusions.
Black holes in general relativity
Since our aim is to address deviations of the properties of black holes in certain alternative theories of gravity from the properties of black holes in GR, we will start with a brief recap of some basic properties of black holes that arise as solutions of the Einstein field equations in vacuum
$$\begin{aligned} G_{\mu \nu } = R_{\mu \nu } -\frac{1}{2} R\, g_{\mu \nu } =0 , \end{aligned}$$
with Einstein tensor \(G_{\mu \nu }\), Ricci tensor \(R_{\mu \nu }\), curvature scalar R and metric tensor \(g_{\mu \nu }\).
The asymptotically flat, static, spherically symmetric black hole solutions of these vacuum field equations are the Schwarzschild black holes. The corresponding set of rotating black holes is the family of Kerr black holes. For these vacuum black holes there is the well-known no-hair theorem [21], stating that a Kerr black hole is uniquely characterized in terms of only two global parameters: the mass M and the angular momentum J. For the static Schwarzschild black hole the angular momentum vanishes, so the mass is the only parameter.
Moreover, all the multipole moments of Kerr black holes are given in terms of these two quantities, the mass M and the angular momentum J [35, 38, 70]
$$\begin{aligned} M_l + i S_l = M \left( i \frac{J}{M} \right) ^l , \end{aligned}$$
with mass \(M_0=M\), and angular momentum \(S_1=J\), and multipole number l. The quadrupole moment Q is then given by \(M_2 = Q = - \frac{J^2}{M} \).
Kerr black holes are subject to a bound on their angular momentum,
$$\begin{aligned} j = \frac{J}{M^2} \le 1 , \end{aligned}$$
the so-called Kerr bound, reached in the extremal limit, when the two horizons of the Kerr black holes coincide. Solutions beyond the Kerr bound represent naked singularities. The Kerr black holes have been analyzed in many further respects. Their shadow, for instance, was obtained first by Bardeen [9] and recently revisited numerous times, because of its significance for the EHT observations.
Black holes in Einstein–dilaton–Gauss–Bonnet theories
We now turn to black holes in EdGB theories, providing first the theoretical settings, and presenting then the static and rotating solutions and some of their properties.
Theoretical settings
The effective action for EdGB and EsGB theories reads
$$\begin{aligned} S=\frac{1}{16 \pi }\int {\mathrm{d}}^4x \sqrt{-g} \left[ R - \frac{1}{2} \partial _\mu \phi \,\partial ^\mu \phi - U(\phi ) + F(\phi ) R^2_{\mathrm{GB}} \right] , \end{aligned}$$
where R is the curvature scalar, \(\phi \) is the scalar field, \(F(\phi )\) is the coupling function, and \(U(\phi )\) is the potential, and
$$\begin{aligned} R^2_{\mathrm{GB}} = R_{\mu \nu \rho \sigma } R^{\mu \nu \rho \sigma } - 4 R_{\mu \nu } R^{\mu \nu } + R^2 \end{aligned}$$
is the Gauss–Bonnet term.
Variation of the action with respect to the metric and the scalar field leads to the Einstein equations and to the scalar field equation, respectively,
$$\begin{aligned}&G_{\mu \nu }=T_{\mu \nu } , \end{aligned}$$
$$\begin{aligned}&\nabla ^\mu \nabla _\mu \phi +\dot{F}(\phi ) R^2_{\mathrm{GB}}-\dot{U}(\phi )=0, \end{aligned}$$
where the dot denotes the derivative with respect to the scalar field \(\phi \). The stress–energy tensor in the gravitational field equation is an effective one, since it contains not only the usual contributions from the scalar field, but also contributions from the Gauss–Bonnet term. It is given by the expression
$$\begin{aligned} T_{\mu \nu } = -\frac{1}{4}g_{\mu \nu }\left( \partial _\rho \phi \partial ^\rho \phi + 2 U(\phi ) \right) +\frac{1}{2} \partial _\mu \phi \partial _\nu \phi -\frac{1}{2}\left( g_{\rho \mu }g_{\lambda \nu }+g_{\lambda \mu }g_{\rho \nu }\right) \eta ^{\kappa \lambda \alpha \beta }\tilde{R}^{\rho \gamma }_{\alpha \beta } \nabla _\gamma \partial _\kappa F(\phi ) , \end{aligned}$$
where \(\tilde{R}^{\rho \gamma }_{\alpha \beta }=\eta ^{\rho \gamma \sigma \tau } R_{\sigma \tau \alpha \beta }\) and \(\eta ^{\rho \gamma \sigma \tau }= \epsilon ^{\rho \gamma \sigma \tau }/\sqrt{-g}\). As mentioned above, this resulting set of coupled field equations is of second order. Also note, that in four spacetime dimensions the coupling of the Gauss–Bonnet term to another field is really needed in order to allow for solutions that differ from those of GR.
Since in this section we will discuss dilatonic black holes, we now specify the coupling function to the dilatonic coupling function
$$\begin{aligned} F(\phi )=\frac{{\alpha }}{4} {\mathrm{e}}^{-{\gamma } \phi }, \end{aligned}$$
where \(\alpha \) is the Gauss–Bonnet coupling constant, and \(\gamma \) is the dilaton coupling constant with string theory value \(\gamma =1\). For this coupling function \(\dot{F}(\phi ) \ne 0\) unless \(\phi \rightarrow \infty \). Therefore the dilaton field equation (7) does not allow a constant value of \(\phi \) as a solution, if the Gauss–Bonnet term is non-vanishing, as it would be the case for a Schwarzschild black hole. Consequently, the Schwarzschild black hole cannot be a solution of EdGB theory, and neither can the Kerr black hole: all EdGB black hole solutions necessarily carry dilaton hair.
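To make this step explicit (assuming here a vanishing potential, \(U(\phi )=0\), as appropriate for a massless dilaton), the scalar field equation with the dilatonic coupling function becomes

$$\begin{aligned} \nabla ^\mu \nabla _\mu \phi = -\dot{F}(\phi )\, R^2_{\mathrm{GB}} = \frac{\alpha \gamma }{4}\, {\mathrm{e}}^{-\gamma \phi }\, R^2_{\mathrm{GB}} , \end{aligned}$$

whose left-hand side vanishes identically for constant \(\phi \), while the right-hand side cannot vanish for any finite constant \(\phi \) unless \(R^2_{\mathrm{GB}}=0\).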
Dilatonic black holes
Static, spherically symmetric black hole solutions of EdGB theory were first obtained by Kanti et al. [43]. Because of symmetry one can choose the ansatz
$$\begin{aligned} {\mathrm{d}}s^2= - {\mathrm{e}}^{2\Phi (r)}{\mathrm{d}}t^2 + {\mathrm{e}}^{2\Lambda (r)} {\mathrm{d}}r^2 + r^2 ({\mathrm{d}}\theta ^2 + \sin ^2\theta {\mathrm{d}}\varphi ^2) \end{aligned}$$
for the metric, with two metric functions \(\Phi (r)\) and \(\Lambda (r)\), that depend only on the radial coordinate, like the dilaton function \(\phi (r)\). While the EdGB black hole solutions have not been found in closed form, numerical integration has yielded their domain of existence and their properties [43].
We show in Fig. 1 the scaled horizon radius versus the scaled mass and compare with the Schwarzschild black hole.
Static spherically symmetric EdGB black holes (\(\gamma =1\)): scaled horizon radius \(r_H/\sqrt{\alpha }\) vs scaled mass \(M/\sqrt{\alpha }\). For comparison the Schwarzschild solutions are also shown
Clearly, for large masses, the EdGB black holes approach the Schwarzschild black holes, whereas for small masses the deviation from the Schwarzschild black holes becomes large. Surprisingly, one finds a minimal value of the mass for these EdGB black holes. The reason can be found in the expansion of the functions at the horizon. Here a square root appears
$$\begin{aligned} \sqrt{ 1-6 \frac{\alpha ^2}{r_H^4} {\mathrm{e}}^{2 \gamma \phi _H}}, \end{aligned}$$
whose radicand vanishes at the minimal value of the mass. We will refer to such solutions as critical solutions. Depending on the value of the dilaton coupling constant \(\gamma \), a tiny second branch may exist, where the mass increases again slightly until the horizon becomes singular [37, 71].
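Setting this radicand to zero pins down the critical solutions; schematically, with \(\phi _H\) denoting the horizon value of the dilaton,

$$\begin{aligned} 1-6 \frac{\alpha ^2}{r_{H}^4} {\mathrm{e}}^{2 \gamma \phi _H} = 0 \quad \Longleftrightarrow \quad r_{H}^2 = \sqrt{6}\, \alpha \, {\mathrm{e}}^{\gamma \phi _H} , \end{aligned}$$

so a regular horizon requires \(r_H^2 \ge \sqrt{6}\,\alpha \, {\mathrm{e}}^{\gamma \phi _H}\), which is what produces the minimal value of the mass visible in Fig. 1.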
The static EdGB black holes can be generalized to include rotation, either perturbatively or by non-perturbative numerical calculations [4, 5, 44,45,46, 53, 59, 60]. The non-perturbative solutions can for instance be obtained with the stationary axially symmetric line element [44,45,46]
$$\begin{aligned} {\mathrm{d}}s^2=- f {\mathrm{d}}t^2 + \frac{m}{f} \left( {\mathrm{d}} r^2+ r^2{\mathrm{d}}\theta ^2 \right) + \frac{l}{f} r^2\sin ^2\theta \left( {\mathrm{d}}\varphi -\frac{\omega }{r} {\mathrm{d}}t\right) ^2, \end{aligned}$$
with quasi-isotropic radial coordinate r. The metric functions f, m, l and \(\omega \) depend on r and \(\theta \) only, and the scalar field is also a function of r and \(\theta \) only, \(\phi =\phi (r,\theta )\).
We present the domain of existence of these rotating EdGB black holes in Fig. 2. In Fig. 2a the scaled horizon area \(A_H/16 \pi r_H^2\) is shown versus the scaled angular momentum \(J/M^2\). For a fixed value of the coupling constant, black holes exist in the shaded region. The boundary of this region consists of the static black holes (left vertical boundary), the Kerr black holes (mostly upper boundary) and the critical black holes (mostly lower boundary). Very close to the Kerr bound \(J/M^2=1\) these two boundaries cross and interchange. The last boundary is only seen in the inset in the figure, and shows the extremal black holes, which are not regular, however, in the EdGB case. Clearly, in a small part of the domain of existence the Kerr bound is slightly exceeded by almost extremal EdGB black holes. The curves inside the plot represent curves of constant horizon angular velocity.
Rotating EdGB black holes \((\gamma =1)\): a scaled horizon area \(a_H=A_H/16 \pi r_H^2\) vs scaled angular momentum \(J/M^2\); b scaled entropy \(s=S/16 \pi r_H^2\) vs scaled angular momentum \(J/M^2\)
Figure 2b shows the entropy of these black holes. In GR black holes possess an entropy that is simply a quarter of the event horizon area. However, in the presence of a Gauss–Bonnet term, coupled to a scalar field, the entropy of the EdGB black holes acquires an extra contribution [72]. Then the total entropy can be written in Wald's form as an integral over the event horizon
$$\begin{aligned} S=\frac{1}{4}\int _{\Sigma _\mathrm{H}} {\mathrm{d}}^{2}x \sqrt{h}\left( 1+ \frac{1}{2}\alpha {\mathrm{e}}^{-\gamma \phi } {\tilde{R}}\right) , \end{aligned}$$
where h is the determinant of the induced metric on a spatial cross section of the horizon and \({\tilde{R}}\) is the event horizon curvature. The figure shows, that the dilatonic black holes have larger entropy than the Kerr black holes, while they have smaller horizon area.
Considering further properties of the rotating EdGB black holes we note, that they can possess much larger quadrupole moments than Kerr black holes, and their ISCOs and orbital frequencies can deviate appreciably from the respective Kerr values, as well. Since their horizon area is smaller than for Kerr black holes, one might also expect considerable deviations for their shadow as compared to the Kerr black hole shadow. However, these deviations turn out to be rather small [23]. Also the X-ray reflection spectrum of accreting EdGB black holes shows only small deviations from the Kerr case [76].
Linear mode analysis: quasi-normal modes
To investigate the stability of black holes under small perturbations, a linear mode analysis can be performed. Here we will directly address the formalism for quasi-normal modes. Since in gravity small perturbations will typically lead to the emission of gravitational waves, the frequencies that are found in perturbation theory also contain an imaginary part, which explains the terminology. From an observational point of view, such quasi-normal modes will appear in the ringdown spectra of black holes after merger. This makes their study most relevant in connection with current and future gravitational wave observations.
While we refrain from a full derivation of these quasi-normal modes and refer to the literature [10, 48, 49, 58, 62], we briefly recall some of the relevant aspects. To this end we consider lowest order perturbation theory in the metric
$$\begin{aligned} g_{\mu \nu } = g_{\mu \nu }^{(0)}(r) + \epsilon h_{\mu \nu }(t,r,\theta ,\varphi ) \end{aligned}$$
and in the scalar field
$$\begin{aligned} \phi = \phi _0(r) + \epsilon \delta \phi (t,r,\theta ,\varphi ) , \end{aligned}$$
where \(g_{\mu \nu }^{(0)}\) and \(\phi _0\) are the metric and the scalar field of the background black hole, respectively, and \(h_{\mu \nu }\) and \(\delta \phi \) are the perturbations. \(\epsilon \) is the small perturbation parameter.
Symmetry allows for a decomposition of the perturbations into even-parity and odd-parity perturbations. The scalar field has even parity, therefore it decouples in the case of odd-parity perturbations, which are therefore pure spacetime modes. The even-parity modes are also called polar modes, while the odd-parity modes are also termed axial modes. Besides the decomposition with respect to parity, we can also make a multipolar decomposition of the modes, characterized by the angular parameter l.
There are quasi-normal modes for all values of the angular parameter l. Because of the scalar field also modes with angular parameter \(l=0\) and \(l=1\) arise, corresponding to radial modes (monopole modes) and dipole modes. For \(l=2\) quadrupole modes arise, which are also present in GR. But because of the scalar field, there will now be two types of such modes: \(l=2\) modes dominated by the scalar field, which in the limit of vanishing Gauss–Bonnet coupling would correspond to modes of the scalar field in the background of a Schwarzschild black hole, and \(l=2\) modes dominated by the gravitational field, which would correspond to the lowest Schwarzschild quadrupolar modes. We will refer to the first set of modes as scalar-led modes, and to the second set as grav-led modes in the following.
The time dependence of the modes is factored out by an exponential
$$\begin{aligned} \exp {( i \omega t)} = \exp {( i (\omega _R + i \omega _I)t)} = \exp {(i \omega _R t - \omega _I t )} , \end{aligned}$$
which shows that the real part \(\omega _R\) is the frequency and the imaginary part \(\omega _I\) is the inverse damping time if \(\omega _I>0\), otherwise, for \(\omega _I<0\), it signals an instability. (Note, that the overall sign choice in the exponent is only convention.) For a given parity and angular parameter l the complex frequency \(\omega \) is obtained by solving the respective resulting system of coupled differential equations, subject to proper boundary conditions. At the black hole horizon the wave must be purely ingoing, and at infinity purely outgoing.
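Schematically, writing a generic perturbation function as \(\Psi \) and introducing the tortoise coordinate \(r_*\) (for the static line element above, \({\mathrm{d}}r_*/{\mathrm{d}}r = {\mathrm{e}}^{\Lambda -\Phi }\)), these boundary conditions read, in the \({\mathrm{e}}^{i\omega t}\) convention used here,

$$\begin{aligned} \Psi \sim {\mathrm{e}}^{+i\omega r_*} \quad (r_*\rightarrow -\infty , \ \text {horizon}), \qquad \Psi \sim {\mathrm{e}}^{-i\omega r_*} \quad (r_*\rightarrow +\infty , \ \text {infinity}). \end{aligned}$$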
As an interesting example we show in Fig. 3 the quasi-normal polar \(l=2\) modes for the static EdGB black holes, normalized to the respective Schwarzschild values for vanishing coupling constant. Figure 3a shows the real part of \(\omega \), and Fig. 3b the imaginary part versus the scaled Gauss–Bonnet coupling constant \(\zeta =\alpha /M^2\). We note a distinctly different behavior for the grav-led and the scalar-led modes. We also note, that the presence of the scalar field in the polar modes breaks isospectrality of the modes, i.e., the degeneracy of the axial and polar \(l=2\) gravitational modes in the Schwarzschild case.
Fig. 3 Quasi-normal polar \(l=2\) modes of static EdGB black holes \((\gamma =1)\): (a) scaled frequency \(\omega _R/\omega _R^S\) vs scaled Gauss–Bonnet coupling constant \(\zeta =\alpha /M^2\); (b) scaled inverse damping time \(\omega _I/\omega _I^S\) vs scaled Gauss–Bonnet coupling constant \(\zeta =\alpha /M^2\)
Black holes in Einstein–scalar–Gauss–Bonnet theories
Whereas EdGB theories are already considerably constrained from observations, this is much less the case for EsGB theories with more general coupling functions. We now turn to the black holes in these theories and focus on coupling functions which allow for spontaneous scalarization.
Curvature induced spontaneous scalarization
The phenomenon of spontaneous scalarization was discovered for neutron stars in scalar-tensor theories [25], where GR neutron stars can develop a scalar field when the solutions become sufficiently compact. Here the trigger for the scalarization is the highly compact matter. Therefore the spontaneous scalarization is referred to as matter induced spontaneous scalarization. The absence of matter for Schwarzschild and Kerr black holes therefore precludes this phenomenon for these black holes.
Only a few years ago it was realized that spontaneous scalarization can also be curvature induced and therefore arise for black holes in EsGB theories [1, 2, 27, 66]. In order to allow for such spontaneous scalarization the coupling function should possess certain properties. First of all, the GR black hole solutions should remain solutions of the theory. This is of course the case when the Gauss–Bonnet term does not contribute to the field equations. So if we choose a coupling function \(F(\phi )\) such that
$$\begin{aligned} {\dot{F}} (\phi )=0 \quad \text {for} \ \phi =0 , \end{aligned}$$
then the source term in the scalar field equation
$$\begin{aligned} \nabla ^\mu \nabla _\mu \phi + {\dot{F}} (\phi ) R^2_{\mathrm{GB}}=0 \end{aligned}$$
vanishes for \(\phi =0\), and \(\phi =0\) is a solution. Note that we have assumed a vanishing scalar field potential \(U(\phi )\) for the moment. The Einstein equations then also receive no contribution from the Gauss–Bonnet term, and therefore the GR solutions remain solutions of such EsGB theories. However, in certain parameter ranges, which depend on the coupling function, the GR solutions are not the only black hole solutions. Here black holes with scalar hair arise, and this hair is curvature induced.
To understand this mechanism, we consider the Gauss–Bonnet term for the metric of a Schwarzschild black hole
$$\begin{aligned} R^2_{\mathrm{GB}} = \frac{48 M^2}{r^6} , \end{aligned}$$
which is solely coming from the Kretschmann scalar. Clearly, this curvature term can become rather big. We now choose the simple coupling function
$$\begin{aligned} F(\phi ) = \eta \frac{\phi ^2}{2},\quad {\dot{F}} = \eta \phi . \end{aligned}$$
When we insert this into the scalar field equation, we see that we can identify an effective mass squared \(m^2_\mathrm{eff}\) in this equation
$$\begin{aligned} m^2_\mathrm{eff} = - \eta R^2_{\mathrm{GB}} < 0,\quad \text {if} \ \eta > 0 , \end{aligned}$$
and this effective mass squared is negative, i.e., tachyonic, for positive coupling constant \(\eta \). Therefore the Gauss–Bonnet curvature term triggers a tachyonic instability of the Schwarzschild solution, when its contribution is strong enough, and a branch of scalarized black holes bifurcates from the Schwarzschild solution.
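To make this mechanism concrete, the following Python sketch (purely illustrative, with arbitrary values for \(M\) and \(\eta \)) evaluates the effective mass squared of Eq. (21) for the Schwarzschild Gauss–Bonnet term of Eq. (19), \(m^2_\mathrm{eff}(r) = -\eta \, 48 M^2/r^6\), outside the horizon; whether the tachyonic instability is actually triggered depends on the full mode analysis, not merely on the sign of \(m^2_\mathrm{eff}\).

```python
import numpy as np

# Effective mass squared induced by the Schwarzschild Gauss-Bonnet term,
# m_eff^2(r) = -eta * R_GB^2(r) = -eta * 48 M^2 / r^6 (cf. Eqs. (19) and (21)).
# M and eta are arbitrary illustrative values.
M = 1.0
eta = 1.0

r = np.linspace(2.0 * M, 10.0 * M, 9)      # from the horizon outward
R_GB2 = 48.0 * M**2 / r**6                 # Gauss-Bonnet invariant
m_eff2 = -eta * R_GB2                      # negative, i.e. tachyonic, for eta > 0

for ri, mi in zip(r, m_eff2):
    print(f"r = {ri:5.2f} M    m_eff^2 = {mi:9.5f} / M^2")
```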
Static black holes
We now consider the coupling function [27]
$$\begin{aligned} F(\phi ) = \frac{ \lambda ^2}{12} \left( 1- {\mathrm{e}}^{-3\phi ^2/2}\right) , \end{aligned}$$
which for small \(\phi \) becomes simply a quadratic coupling, \(F(\phi )= (\lambda ^2/8) \phi ^2\). The tachyonic instability then arises at \(M/\lambda =0.587\), where a branch of scalarized black holes emerges. Since this is the first bifurcation, we refer to this branch as the fundamental or \(n=0\) branch. But this branch is not the only one, and at smaller values of \(M/\lambda \) further branches arise. These are radially excited branches, where the scalar field function possesses n nodes. Thus on the first excited branch (\(n=1\)), which arises at \(M/\lambda =0.226\), the scalar field has one node, on the second excited branch (\(n=2\): \(M/\lambda =0.140\)) it has two nodes, etc. When one follows these scalarized branches, as shown in Fig. 4a, one notes that only the fundamental branch extends from the bifurcation all the way to vanishing mass (\(M=0\)). The excited branches all have finite extent, and the higher n, the shorter they are. Note that in the figure the scaled scalar charge \(D/\lambda \), which is read off from the asymptotic behavior of the scalar field (\(\phi \sim D/r\)), is shown versus the scaled mass \(M/\lambda \). To highlight the bifurcations, the Schwarzschild black hole is also shown, which of course carries no scalar charge.
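The quoted small-\(\phi \) behavior of this coupling function is easily verified symbolically. The following sketch (using sympy, purely as an independent check) expands \(F(\phi )\) about \(\phi =0\) and confirms that \({\dot{F}}(0)=0\), so that \(\phi =0\) indeed remains a solution.

```python
import sympy as sp

# Independent check of the small-phi behaviour of the exponential coupling
# F(phi) = (lambda^2/12) * (1 - exp(-3 phi^2 / 2)) quoted above.
phi, lam = sp.symbols('phi lambda', real=True)
F = lam**2 / 12 * (1 - sp.exp(-sp.Rational(3, 2) * phi**2))

print(sp.series(F, phi, 0, 4))             # leading term: lambda**2*phi**2/8
print(sp.diff(F, phi).subs(phi, 0))        # dF/dphi vanishes at phi = 0
```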
Fig. 4 Static EsGB black holes: (a) scaled scalar charge \(D/\lambda \) vs scaled mass \(M/\lambda \) for the fundamental (\(n=0\)) and radially excited (\(n>0\)) solutions; (b) scaled imaginary frequency \(\omega _I M^2/\lambda \) of the unstable radial modes vs scaled mass \(M/\lambda \) for the fundamental (\(n=0\)) and radially excited (\(n>0\)) solutions. The Schwarzschild solution and its unstable modes are also shown for comparison
Let us now consider the stability of these solutions. In particular, we would like to know whether the fundamental scalarized solution is stable when it emerges from the Schwarzschild solution, since the Schwarzschild solution has to become unstable to develop scalar hair (tachyonic instability). A first indication of stability is easily obtained by evaluating the entropy of the fundamental scalarized solution and comparing it to the entropy of the Schwarzschild solution [27]. This shows that the \(n=0\) solution has higher entropy, and should therefore be (thermodynamically) preferred.
The next step is to consider radial (\(l=0\)) perturbations [16], which are polar perturbations involving the scalar field. When the Schrödinger-like master equation for the eigenvalue \(\omega \) is solved for the Schwarzschild background, a zero mode is found precisely at the first bifurcation point. As \(M/\lambda \) is further decreased, this zero mode turns into a negative (unstable) mode, as seen in Fig. 4b. In fact, at each bifurcation, where a new branch of radially excited scalarized black holes arises, another zero mode of the Schwarzschild solution appears, which turns into another unstable mode for smaller values of \(M/\lambda \).
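The way a zero mode turns into an unstable mode as a parameter is varied can be illustrated with a schematic toy problem (the following is only an analogy; the Gaussian well below is not the actual EsGB master-equation potential): a one-dimensional Schrödinger operator \(-d^2/dx^2 + V(x)\) is diagonalized on a grid, and its lowest eigenvalue decreases and eventually becomes negative as the potential well is deepened.

```python
import numpy as np

# Schematic analogue of the onset of a radial instability: the lowest eigenvalue
# of -d^2/dx^2 + V(x) decreases with increasing well depth and eventually turns
# negative (the analogue of an unstable mode), passing through a zero mode.
# The Gaussian well is a toy model, not the actual EsGB master-equation potential.
N = 400
x = np.linspace(-20.0, 20.0, N)
dx = x[1] - x[0]

lap = (np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dx**2            # Dirichlet boundaries

for depth in [0.0, 0.05, 0.1, 0.2, 0.4]:
    V = -depth * np.exp(-x**2 / 4.0)
    H = -lap + np.diag(V)
    lowest = np.linalg.eigvalsh(H)[0]
    print(f"well depth {depth:4.2f}: lowest eigenvalue = {lowest:+.5f}")
```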
When we solve the Schrödinger-like master equation for the radial perturbations in the background of the fundamental scalarized black hole solutions, however, no radially unstable modes are found in the region from the bifurcation up to a critical value of \(M/\lambda \), which is marked by the vertical dashed line in Fig. 4b. There the perturbation equation loses hyperbolicity and the employed formalism breaks down. Let us denote this point by S1 for later reference and turn to the radially excited branches.
Figure 4b also shows the radially unstable modes for the excited branches. Since these branches emerge from the Schwarzschild black hole at their respective bifurcation point, continuity at this bifurcation point demands that the unstable modes of the radially excited black holes also bifurcate there from the Schwarzschild zero and unstable mode(s). So for the \(n=1\) solution we observe two unstable modes, one starting at the bifurcation point at the zero mode and one starting at the first unstable Schwarzschild mode. For the \(n=2\) solution we then have three unstable modes, etc.
While it is expected that radially excited solutions are unstable, it would be nice if the fundamental branch really were stable. So far we have only considered the \(l=0\) modes. Therefore we now turn to modes with higher l, which were analyzed in [13, 17]. Since axial modes do not involve perturbations of the scalar field, they start with the quadrupolar case \(l=2\). The analysis shows that no further instability arises here; however, hyperbolicity of the equations is lost slightly earlier than in the radial case. Denoting this second point of loss of hyperbolicity by S2, we note that there is no axial mode instability between the bifurcation point and S2 for the fundamental branch. Similarly, when the polar modes with \(l=1\) (dipole) and \(l=2\) (quadrupole) are considered, no further instability is encountered. Thus we conclude that the fundamental branch is mode stable in the region between its bifurcation point and the point S2.
While there are no new unstable modes, there are of course numerous stable modes, where the imaginary part of the eigenvalue is positive and corresponds to an inverse damping time. As an example we exhibit the lowest such axial and polar grav-led \(l=2\) (quadrupole) modes in Fig. 5. The figure nicely shows the degeneracy of these modes for the Schwarzschild case, i.e., the isospectrality of the Schwarzschild modes. In contrast, for the fundamental scalarized black holes isospectrality is broken and the axial and polar modes generically differ.
Fig. 5 Polar and axial \(l=2\) grav-led modes of static EsGB and Schwarzschild black holes: (a) scaled real part \(\omega _R/\lambda \) vs scaled mass \(M/\lambda \) for the fundamental (\(n=0\)) solution; (b) scaled imaginary part \(\omega _I/\lambda \) vs scaled mass \(M/\lambda \) for the fundamental (\(n=0\)) solution. Note the isospectrality of the Schwarzschild modes
A similar analysis can, in principle, also be performed for other coupling functions. The simplest coupling function is of course the quadratic one, Eq. (20). Here already the entropy indicates instability of the fundamental branch of scalarized black holes, and a radial mode analysis shows that the scalarized static spherically symmetric black holes are indeed all unstable, including the fundamental branch [16]. Moreover, this branch is rather short and oriented toward larger values of \(M/\lambda \), unlike the fundamental branch for the exponential coupling function, Eq. (22). Obviously, stability and length depend significantly on the coupling function. Including higher order terms in the coupling function with an appropriate sign can stabilize the solutions, as demonstrated in [67]. Another way to stabilize the solutions is to allow for an appropriate self-interaction potential \(U(\phi )\) of the scalar field, as shown in [52] for a quartic self-interaction.
Rotating black holes
With applications to astrophysics in mind, one has to include rotation of the black holes, and thus consider the phenomenon of curvature induced scalarization in the presence of rotation. Here the GR solution is of course the Kerr black hole. Therefore we have to inspect the source term in the scalar field equation, Eq. (18), i.e., the Gauss–Bonnet term for a Kerr black hole
$$\begin{aligned} R^2_{\mathrm{GB}} = \frac{48 M^2}{(r^2 + \chi ^2)^6} \left( r^6 - 15 r^4 \chi ^2 + 15 r^2 \chi ^4 - \chi ^6 \right) ,\quad \chi =a \cos \theta , \end{aligned}$$
where a is the usual Kerr specific angular momentum. Recalling Eq. (21) for the effective mass (with positive coupling constant \(\eta \)) and inserting the above expression for the Gauss–Bonnet term, we conjecture that the presence of the new terms that depend on the angular momentum suppresses the scalarization for large rotation, since the source term becomes weaker in (part of) the region with large curvature.
We begin the discussion of the rotating black hole solutions and their properties by considering the quadratic coupling function, Eq. (20), constructed in [22]. We exhibit the domain of existence of the fundamental scalarized branch in Fig. 6a, where the scaled angular momentum \(J/\lambda ^2\) is shown versus the scaled mass \(M/\lambda \), and we have introduced the coupling constant \(\eta =\lambda ^2/8\), while keeping a vanishing self-interaction potential \(U(\phi )=0\). The figure contains three curves showing the extremal Kerr solutions, the existence line for the scalarized black holes and the critical line for the scalarized black holes (from left to right). The existence line marks the onset of spontaneous scalarization, while the critical line shows where the scalarized black holes cease to exist. Whereas the domain of Kerr black holes is the whole area below the extremal curve, the domain of scalarized black holes is only the small band between the existence line and the critical line. As conjectured, the band becomes thinner when the angular momentum is increased, i.e., angular momentum indeed suppresses the scalarization.
In Fig. 6b we show the scaled area \(a_H=A_H/16 \pi r_H^2\) versus the scaled angular momentum \(j=J/M^2\) for the fundamental solution and for the \(n=1\) radial excitation. The scaled entropy \(s=S/4\pi M^2\) is also shown. In this representation the Kerr black holes form the upper limiting curve, for which area and entropy agree. As in the static case with quadratic coupling, the entropy of the rotating fundamental black holes is smaller than the entropy of the Kerr black holes. Thus the instability persists for the fundamental scalarized black holes with a quadratic coupling function and no self-interaction when rotation is included.
Fig. 6 Rotating EsGB black holes \((\eta >0)\): (a) scaled angular momentum \(J/\lambda ^2\) vs scaled mass \(M/\lambda \) for the existence line and the critical line, and also for the extremal Kerr black holes; (b) scaled area \(a_H=A_H/16 \pi r_H^2\) and scaled entropy \(s=S/4\pi M^2\) vs scaled angular momentum \(j=J/M^2\) for the fundamental and first radially excited black holes
In [24] the rotating fundamental scalarized black holes were obtained for the exponential coupling function, Eq. (22). Here the static fundamental branch is much larger and (at least to a large extent) also stable. Starting from this large interval of static solutions, the domain of existence is therefore much larger for the rotating solutions for this coupling function. However, for fast rotation, the domain narrows again strongly, leaving only a small band of rapidly rotating scalarized black holes. We note that an interesting consequence of the broad range of slowly rotating black holes is the possibility to obtain a limit on the Gauss–Bonnet coupling constant by comparing the EsGB black hole shadow with observations [24].
Let us now consider a final twist concerning curvature induced rotating scalarized black holes. To that end we return to the Gauss–Bonnet term evaluated for a Kerr black hole, Eq. (23). Above we have noticed the strong suppressive effect of fast rotation for spontaneous scalarization. Now we would like to make use of this effect in a new constructive way. As noticed in [26] and further elaborated on in [31, 32, 41, 77], a new way of inducing the tachyonic instability in the scalar field equation is obtained for sufficiently fast rotation, when a negative coupling constant \(\eta <0\) is chosen in the coupling function \(F(\phi )\). Therefore this type of spontaneous scalarization is termed spin induced spontaneous scalarization. Its onset happens at a Kerr rotation parameter of \(j=0.5\).
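This onset can be made plausible with a simple numerical check (a heuristic sketch, not a substitute for the mode analysis of [26]): evaluating the Gauss–Bonnet invariant of Eq. (23) at the pole of the horizon, \(\theta =0\), \(r_+=M+\sqrt{M^2-a^2}\), one finds that it changes sign at \(a/M=0.5\), consistent with the quoted onset, so that for \(\eta <0\) the effective mass squared becomes tachyonic there only for sufficiently fast rotation.

```python
import numpy as np

# Sign of the Kerr Gauss-Bonnet invariant (Eq. (23)) at the pole of the horizon,
# theta = 0, where chi = a and r_+ = M + sqrt(M^2 - a^2). For eta < 0 the
# effective mass squared -eta*R_GB^2 is tachyonic where R_GB^2 < 0, which at
# the horizon pole happens for a/M > 0.5.
M = 1.0
for a in np.linspace(0.30, 0.70, 9):
    r = M + np.sqrt(M**2 - a**2)                           # horizon radius
    poly = r**6 - 15*r**4*a**2 + 15*r**2*a**4 - a**6       # bracket of Eq. (23) at theta = 0
    R_GB2 = 48.0 * M**2 / (r**2 + a**2)**6 * poly
    print(f"a/M = {a:4.2f}    R_GB^2 at the horizon pole = {R_GB2:+.6f}")
```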
Following this interesting observation, the associated rotating scalarized black holes were constructed in [39] for the exponential coupling function and in [12] for the quadratic one. Whereas the exponential coupling function did not yield any surprises, the quadratic one did: for the spin induced rotating scalarized black holes the entropy is larger than the Kerr entropy even for the simple quadratic coupling. Therefore these solutions could be stable as well. We illustrate the domain of existence of the spin induced rotating scalarized black holes with quadratic coupling in Fig. 7.
Fig. 7 Rotating EsGB black holes \((\eta <0)\): (a) scaled scalar charge D/M and scaled dipole charge P/M vs scaled coupling constant \(-\eta /4M^2\); (b) scaled entropy \(S/2\pi M^2\) vs scaled angular momentum j, with scaled horizon area \(A_H/16 \pi r_H^2\) vs j in the inset
Already for the onset of the scalarization several different modes were studied [26], where, besides even parity modes, odd parity modes were also included (the latter do not exist in the spherically symmetric case, of course). In the two parity sectors the scalar field transforms as \(\varphi (\pi -\theta ) = + \varphi (\theta )\) and \(\varphi (\pi -\theta ) = - \varphi (\theta )\), respectively. The fundamental rotating scalarized black holes have even parity and a monopolar scalar field at infinity, whereas the odd parity black holes represent excited solutions whose lowest term at infinity is a dipole term. Therefore one can associate a monopole or scalar charge D to the even parity solutions, and a dipole charge P to the odd parity solutions.
In Fig. 7a we show the scaled scalar charge D/M and the scaled dipole charge P/M versus the scaled coupling constant \(-\eta /4M^2\) for the quadratic coupling function and no self-interaction [12]. Both types of solutions represent the lowest solutions in their respective parity sector. At vanishing scalar and dipole charge, the bifurcation from the Kerr solutions takes place. The critical line then marks the upper boundary of the domain of existence of even (D/M) and odd (P/M) rotating scalarized black holes. The various curves threading the domain of existence correspond to fixed values of the horizon angular velocity of the black holes.
Figure 7b demonstrates that the entropy is indeed larger for these rotating scalarized black holes than for the Kerr black holes. Here the scaled entropy \(S/2\pi M^2\) is shown versus the scaled angular momentum j, again for the lowest solutions in both parity sectors. Clearly, the Kerr bound \(j \le 1\) can be violated for such rapidly rotating scalarized black holes. The inset of the figure shows the scaled horizon area \(A_H/16 \pi r_H^2\), for comparison, for both parity sectors.
Among the numerous alternative theories of gravity EdGB and EsGB theories are theoretically very attractive, since they are motivated from quantum gravity theories, possess second order field equations, and avoid Ostrogradsky instabilities and ghosts. Here a dilaton or a general scalar field is coupled to the Gauss–Bonnet term, which is quadratic in curvature. While there are already significant constraints on EdGB theories, EsGB theories are much less constrained.
Black holes in EdGB theories have been studied since the nineties, first the static black holes and later the rotating black holes. Because of the specific dilatonic coupling of the scalar field to the Gauss–Bonnet term, all black hole solutions in EdGB theories carry dilatonic hair, while the GR black holes do not solve the set of EdGB field equations.
This is different for EsGB theories, when the coupling function satisfies appropriate conditions. Then the GR black holes remain solutions of the EsGB field equations. However, they undergo tachyonic instabilities, where branches of curvature induced scalarized black holes arise. In the rotating case there are even two types of scalarized black holes: those with a static limit, and those that exist only for rapid rotation, which are called spin induced EsGB black holes.
These EsGB theories have so far survived the constraints that have emerged from the GW emission during binary mergers, provided the scalar coupling function allows for a vanishing scalar field in the cosmological context and thus leads to the same cosmological solutions as the standard cosmological \(\Lambda \)CDM model [63]. This makes them attractive also for dynamical numerical relativity studies. Recently several groups have already done work in this direction, studying, e.g., dynamical scalarization and descalarization in binary BH mergers, dynamics of rotating BH scalarization, or dynamical formation of scalarized BHs through stellar core collapse [28, 33, 51, 65, 74, 75].
Antoniou, G.; Bakopoulos, A.; Kanti, P.: Evasion of no-hair theorems and novel black-hole solutions in Gauss–Bonnet theories. Phys. Rev. Lett. 120(13), 131102 (2018)
Antoniou, G.; Bakopoulos, A.; Kanti, P.: Black-hole solutions with scalar hair in Einstein–Scalar–Gauss–Bonnet theories. Phys. Rev. D 97(8), 084037 (2018)
Ayzenberg, D.; Yunes, N.: Slowly-rotating black holes in Einstein–Dilaton–Gauss–Bonnet gravity: quadratic order in spin solutions. Phys. Rev. D 90, 044066 (2014)
Ayzenberg, D.; Yagi, K.; Yunes, N.: Linear stability analysis of dynamical quadratic gravity. Phys. Rev. D 89(4), 044023 (2014)
Bakopoulos, A.; Antoniou, G.; Kanti, P.: Novel black-hole solutions in Einstein–scalar–Gauss–Bonnet theories with a cosmological constant. Phys. Rev. D 99(6), 064003 (2019)
Bakopoulos, A.; Kanti, P.; Pappas, N.: Existence of solutions with a horizon in pure scalar–Gauss–Bonnet theories. Phys. Rev. D 101(4), 044026 (2020)
Bakopoulos, A.; Kanti, P.; Pappas, N.: Large and ultra-compact Gauss–Bonnet black holes with a self-interacting scalar field. Phys. Rev. D 101(8), 084059 (2020)
Bardeen, J.M.: Timelike and null geodesics in the Kerr metric. In: DeWitt, C., DeWitt, B.S. (eds.) Black Holes (Les Astres Occlus), p. 215. Gordon and Breach, New York (1973)
Berti, E.; Cardoso, V.; Starinets, A.O.: Quasinormal modes of black holes and black branes. Class. Quantum Gravity 26, 163001 (2009)
Berti, E.; Barausse, E.; Cardoso, V.; Gualtieri, L.; Pani, P.; Sperhake, U.; Stein, L.C.; Wex, N.; Yagi, K.; Baker, T.; et al.: Testing general relativity with present and future astrophysical observations. Class. Quantum Gravity 32, 243001 (2015)
Berti, E.; Collodel, L.G.; Kleihaus, B.; Kunz, J.: Spin-induced black-hole scalarization in Einstein–scalar–Gauss–Bonnet theory. Phys. Rev. Lett. 126(1), 011104 (2021)
Blázquez-Salcedo, J.L.; Doneva, D.D.; Kahlen, S.; Kunz, J.; Nedkova, P.; Yazadjiev, S.S.: Polar quasinormal modes of the scalarized Einstein–Gauss–Bonnet black holes. Phys. Rev. D 102(2), 024086 (2020)
Blázquez-Salcedo, J.L.; Macedo, C.F.B.; Cardoso, V.; Ferrari, V.; Gualtieri, L.; Khoo, F.S.; Kunz, J.; Pani, P.: Perturbed black holes in Einstein–dilaton–Gauss–Bonnet gravity: stability, ringdown, and gravitational-wave emission. Phys. Rev. D 94(10), 104024 (2016)
Blázquez-Salcedo, J.L.; Khoo, F.S.; Kunz, J.: Quasinormal modes of Einstein–Gauss–Bonnet–dilaton black holes. Phys. Rev. D 96(6), 064008 (2017)
Blázquez-Salcedo, J.L.; Doneva, D.D.; Kunz, J.; Yazadjiev, S.S.: Radial perturbations of the scalarized Einstein–Gauss–Bonnet black holes. Phys. Rev. D 98(8), 084011 (2018)
Blázquez-Salcedo, J.L.; Doneva, D.D.; Kahlen, S.; Kunz, J.; Nedkova, P.; Yazadjiev, S.S.: Axial perturbations of the scalarized Einstein–Gauss–Bonnet black holes. Phys. Rev. D 101(10), 104006 (2020)
Brihaye, Y.; Ducobu, L.: Hairy black holes, boson stars and non-minimal coupling to curvature invariants. Phys. Lett. B 795, 135 (2019)
Cardoso, V.; Gualtieri, L.: Testing the black hole 'no-hair' hypothesis. Class. Quantum Gravity 33(17), 174001 (2016)
Charmousis, C.; Copeland, E.J.; Padilla, A.; Saffin, P.M.: General second order scalar–tensor theory, self tuning, and the Fab Four. Phys. Rev. Lett. 108, 051101 (2012)
Chrusciel, P.T.; Lopes Costa, J.; Heusler, M.: Stationary black holes: uniqueness and beyond. Living Rev. Relativ. 15, 7 (2012)
Collodel, L.G.; Kleihaus, B.; Kunz, J.; Berti, E.: Spinning and excited black holes in Einstein–scalar–Gauss–Bonnet theory. Class. Quantum Gravity 37(7), 075018 (2020)
Cunha, P.V.P.; Herdeiro, C.A.R.; Kleihaus, B.; Kunz, J.; Radu, E.: Shadows of Einstein–dilaton–Gauss–Bonnet black holes. Phys. Lett. B 768, 373 (2017)
Cunha, P.V.P.; Herdeiro, C.A.R.; Radu, E.: Spontaneously scalarized Kerr black holes in extended scalar–tensor–Gauss–Bonnet gravity. Phys. Rev. Lett. 123(1), 011101 (2019)
Damour, T.; Esposito-Farese, G.: Nonperturbative strong field effects in tensor–scalar theories of gravitation. Phys. Rev. Lett. 70, 2220–2223 (1993)
Dima, A.; Barausse, E.; Franchini, N.; Sotiriou, T.P.: Spin-induced black hole spontaneous scalarization. Phys. Rev. Lett. 125(23), 231101 (2020)
Doneva, D.D.; Yazadjiev, S.S.: New Gauss–Bonnet black holes with curvature induced scalarization in the extended scalar–tensor theories. Phys. Rev. Lett. 120(13), 131103 (2018)
Doneva, D.D.; Yazadjiev, S.S.: Dynamics of the nonrotating and rotating black hole scalarization. Phys. Rev. D 103(6), 064024 (2021)
Doneva, D.D.; Kiorpelidi, S.; Nedkova, P.G.; Papantonopoulos, E.; Yazadjiev, S.S.: Charged Gauss–Bonnet black holes with curvature induced scalarization in the extended scalar–tensor theories. Phys. Rev. D 98(10), 104056 (2018)
Doneva, D.D.; Staykov, K.V.; Yazadjiev, S.S.: Gauss–Bonnet black holes with a massive scalar field. Phys. Rev. D 99(10), 104045 (2019)
Doneva, D.D.; Collodel, L.G.; Krüger, C.J.; Yazadjiev, S.S.: Black hole scalarization induced by the spin: 2 + 1 time evolution. Phys. Rev. D 102(10), 104027 (2020)
Doneva, D.D.; Collodel, L.G.; Krüger, C.J.; Yazadjiev, S.S.: Spin-induced scalarization of Kerr black holes with a massive scalar field. Eur. Phys. J. C 80(12), 1205 (2020)
East, W.E.; Ripley, J.L.: Dynamics of spontaneous black hole scalarization and mergers in Einstein–scalar–Gauss–Bonnet gravity. Phys. Rev. Lett. 127(10), 101102 (2021)
Faraoni, V.; Capozziello, S.: Beyond Einstein Gravity: A Survey of Gravitational Theories for Cosmology and Astrophysics. Springer, Dordrecht (2011)
Geroch, R.P.: Multipole moments. II. Curved space. J. Math. Phys. 11, 2580–2588 (1970)
Gross, D.J.; Sloan, J.H.: The quartic effective action for the heterotic string. Nucl. Phys. B 291, 41 (1987)
Guo, Z.K.; Ohta, N.; Torii, T.: Black holes in the dilatonic Einstein–Gauss–Bonnet theory in various dimensions. I. Asymptotically flat black holes. Prog. Theor. Phys. 120, 581 (2008)
Hansen, R.O.: Multipole moments of stationary space-times. J. Math. Phys. 15, 46–52 (1974)
Herdeiro, C.A.R.; Radu, E.; Silva, H.O.; Sotiriou, T.P.; Yunes, N.: Spin-induced scalarized black holes. Phys. Rev. Lett. 126(1), 011103 (2021)
Hod, S.: Spontaneous scalarization of Gauss–Bonnet black holes: analytic treatment in the linearized regime. Phys. Rev. D 100(6), 064039 (2019)
Hod, S.: Onset of spontaneous scalarization in spinning Gauss–Bonnet black holes. Phys. Rev. D 102(8), 084060 (2020)
Horndeski, G.W.: Second-order scalar–tensor field equations in a four-dimensional space. Int. J. Theor. Phys. 10, 363 (1974)
Kanti, P.; Mavromatos, N.E.; Rizos, J.; Tamvakis, K.; Winstanley, E.: Dilatonic black holes in higher curvature string gravity. Phys. Rev. D 54, 5049 (1996)
Kleihaus, B.; Kunz, J.; Radu, E.: Rotating black holes in dilatonic Einstein–Gauss–Bonnet theory. Phys. Rev. Lett. 106, 151104 (2011)
Kleihaus, B.; Kunz, J.; Mojica, S.: Quadrupole moments of rapidly rotating compact objects in dilatonic Einstein–Gauss–Bonnet theory. Phys. Rev. D 90(6), 061501 (2014)
Kleihaus, B.; Kunz, J.; Mojica, S.; Radu, E.: Spinning black holes in Einstein–Gauss–Bonnet–dilaton theory: nonperturbative solutions. Phys. Rev. D 93(4), 044047 (2016)
Kobayashi, T.; Yamaguchi, M.; Yokoyama, J.: Generalized G-inflation: inflation with the most general second-order field equations. Prog. Theor. Phys. 126, 511 (2011)
Kokkotas, K.D.; Schmidt, B.G.: Quasinormal modes of stars and black holes. Living Rev. Rel. 2, 2 (1999)
Konoplya, R.A.; Zhidenko, A.: Quasinormal modes of black holes: from astrophysics to string theory. Rev. Mod. Phys. 83, 793–836 (2011)
Konoplya, R.; Zinhailo, A.; Stuchlík, Z.: Quasinormal modes, scattering, and Hawking radiation in the vicinity of an Einstein–dilaton–Gauss–Bonnet black hole. Phys. Rev. D 99(12), 124042 (2019)
Kuan, H.J.; Doneva, D.D.; Yazadjiev, S.S.: Dynamical formation of scalarized black holes and neutron stars through stellar core collapse. Phys. Rev. Lett. 127(16), 161103 (2021)
Macedo, C.F.B.; Sakstein, J.; Berti, E.; Gualtieri, L.; Silva, H.O.; Sotiriou, T.P.: Self-interactions and spontaneous black hole scalarization. Phys. Rev. D 99(10), 104041 (2019)
Maselli, A.; Pani, P.; Gualtieri, L.; Ferrari, V.: Rotating black holes in Einstein–Dilaton–Gauss–Bonnet gravity with finite coupling. Phys. Rev. D 92(8), 083014 (2015)
Metsaev, R.R.; Tseytlin, A.A.: Order alpha-prime (two loop) equivalence of the string equations of motion and the sigma model Weyl invariance conditions: dependence on the dilaton and the antisymmetric tensor. Nucl. Phys. B 293, 385 (1987)
Minamitsuji, M.; Ikeda, T.: Scalarized black holes in the presence of the coupling to Gauss–Bonnet gravity. Phys. Rev. D 99(4), 044017 (2019)
Myung, Y.S.; Zou, D.: Quasinormal modes of scalarized black holes in the Einstein–Maxwell–Scalar theory. Phys. Lett. B 790, 400–407 (2019)
Myung, Y.S.; Zou, D.C.: Black holes in Gauss–Bonnet and Chern–Simons–scalar theory. Int. J. Mod. Phys. D 28(09), 1950114 (2019)
Nollert, H.P.: Topical review: quasinormal modes: the characteristic 'sound' of black holes and neutron stars. Class. Quantum Gravity 16, R159–R216 (1999)
Pani, P.; Cardoso, V.: Are black holes in alternative theories serious astrophysical candidates? The case for Einstein–Dilaton–Gauss–Bonnet black holes. Phys. Rev. D 79, 084031 (2009)
Pani, P.; Macedo, C.F.B.; Crispino, L.C.B.; Cardoso, V.: Slowly rotating black holes in alternative theories of gravity. Phys. Rev. D 84, 087501 (2011)
Penrose, R.: Gravitational collapse and space-time singularities. Phys. Rev. Lett. 14, 57–59 (1965)
Rezzolla, L.: Gravitational waves from perturbed black holes and relativistic stars. ICTP Lect. Notes Ser. 14, 255–316 (2003)
Sakstein, J.; Jain, B.: Implications of the neutron star merger GW170817 for cosmological scalar–tensor theories. Phys. Rev. Lett. 119(25), 251303 (2017)
Saridakis, E.N.; et al.: Modified gravity and cosmology: an update by the CANTATA network. CANTATA. arXiv:2105.12582 [gr-qc]
Silva, H.O.; Witek, H.; Elley, M.; Yunes, N.: Dynamical scalarization and descalarization in binary black hole mergers. Phys. Rev. Lett. 127(3), 031101 (2021)
Silva, H.O.; Sakstein, J.; Gualtieri, L.; Sotiriou, T.P.; Berti, E.: Spontaneous scalarization of black holes and compact stars from a Gauss–Bonnet coupling. Phys. Rev. Lett. 120(13), 131104 (2018)
Silva, H.O.; Macedo, C.F.B.; Sotiriou, T.P.; Gualtieri, L.; Sakstein, J.; Berti, E.: Stability of scalarized black hole solutions in scalar–Gauss–Bonnet gravity. Phys. Rev. D 99(6), 064011 (2019)
Sotiriou, T.P.; Zhou, S.Y.: Black hole hair in generalized scalar–tensor gravity. Phys. Rev. Lett. 112, 251102 (2014)
Sotiriou, T.P.; Zhou, S.Y.: Black hole hair in generalized scalar–tensor gravity: an explicit example. Phys. Rev. D 90, 124063 (2014)
Thorne, K.S.: Multipole expansions of gravitational radiation. Rev. Mod. Phys. 52, 299–339 (1980)
Torii, T.; Yajima, H.; Maeda, K.I.: Dilatonic black holes with Gauss-Bonnet term. Phys. Rev. D 55, 739 (1997)
Wald, R.M.: Black hole entropy is the Noether charge. Phys. Rev. D 48(8), R3427–R3431 (1993)
Will, C.M.: The Confrontation between general relativity and experiment. Living Rev. Relativ. 9, 3 (2006)
Witek, H.; Gualtieri, L.; Pani, P.; Sotiriou, T.P.: Black holes and binary mergers in scalar Gauss–Bonnet gravity: scalar field dynamics. Phys. Rev. D 99(6), 064035 (2019)
Witek, H.; Gualtieri, L.; Pani, P.: Towards numerical relativity in scalar Gauss–Bonnet gravity: \(3+1\) decomposition beyond the small-coupling limit. Phys. Rev. D 101(12), 124055 (2020)
Zhang, H.; Zhou, M.; Bambi, C.; Kleihaus, B.; Kunz, J.; Radu, E.: Testing Einstein–dilaton–Gauss–Bonnet gravity with the reflection spectrum of accreting black holes. Phys. Rev. D 95(10), 104043 (2017)
Zhang, S.J.; Wang, B.; Wang, A.; Saavedra, J.F.: Object picture of scalar field perturbation on Kerr black hole in scalar–Einstein–Gauss–Bonnet theory. Phys. Rev. D 102(12), 124056 (2020)
Zinhailo, A.: Quasinormal modes of Dirac field in the Einstein–Dilaton–Gauss–Bonnet and Einstein–Weyl gravities. Eur. Phys. J. C 79(11), 912 (2019)
Zwiebach, B.: Curvature squared terms and string theories. Phys. Lett. 156B, 315 (1985)
We would like to thank our collaborators: Emanuele Berti, Vitor Cardoso, Lucas G. Collodel, Daniela D. Doneva, Valeria Ferrari, Leonardo Gualtieri, Sarah Kahlen, Panagiota Kanti, Fech Scen Khoo, Caio F. B. Macedo, Sindy Mojica, Petya Nedkova, Paolo Pani, Eugen Radu, Kalin V. Staykov, Stoytcho S. Yazadjiev. We gratefully acknowledge support by the DFG Research Training Group 1620 Models of Gravity and the COST Actions CA15117 and CA16104. JLBS would like to acknowledge support from FCT project PTDC/FIS-AST/3041/2020.
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Departamento de Física Teórica II and IPARCOS, Facultad de Ciencias Físicas, Universidad Complutense de Madrid, 28040, Madrid, Spain
Jose Luis Blázquez-Salcedo
Institute of Physics, University of Oldenburg, 26111, Oldenburg, Germany
Burkhard Kleihaus & Jutta Kunz
Burkhard Kleihaus
Jutta Kunz
Correspondence to Jutta Kunz.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Blázquez-Salcedo, J.L., Kleihaus, B. & Kunz, J. Scalarized black holes. Arab. J. Math. (2021). https://doi.org/10.1007/s40065-021-00349-7
Diophantine geometry over groups VI: the elementary theory of a free group
@article{Sela2006DiophantineGO,
title={Diophantine geometry over groups VI: the elementary theory of a free group},
author={Zlil Sela},
journal={Geometric \& Functional Analysis GAFA},
year={2006}
}
Z. Sela
Geometric & Functional Analysis GAFA
Abstract. This paper is the sixth in a sequence on the structure of sets of solutions to systems of equations in a free group, projections of such sets, and the structure of elementary sets defined over a free group. In the sixth paper we use the quantifier elimination procedure presented in the two parts of the fifth paper in the sequence, to answer some of A. Tarski's problems on the elementary theory of a free group, and to classify finitely generated (f.g.) groups that are elementarily…
Diophantine Geometry over Groups X: The Elementary Theory of Free Products of Groups
This paper is the 10th in a sequence on the structure of sets of solutions to systems of equations over groups, projections of such sets (Diophantine sets), and the structure of definable sets over…
Diophantine geometry over groups VIII: Stability
This paper is the eighth in a sequence on the structure of sets of solutions to systems of equations in free and hyperbolic groups, projections of such sets (Diophantine sets), and the structure of…
Diophantine Geometry over Groups IX: Envelopes and Imaginaries
This paper is the ninth in a sequence on the structure of sets of solutions to systems of equations in free and hyperbolic groups, projections of such sets (Diophantine sets), and the structure of…
On Systems of Equations over Free Products of Groups
M. Casals-Ruiz, I. Kazachkov
Subgroups Of Direct Products Of Elementarily Free Groups
M. Bridson, J. Howie
Abstract. The structure of groups having the same elementary theory as free groups is now known: they and their finitely generated subgroups form a prescribed subclass $${\mathcal{E}}$$ of the…
Limit groups for relatively hyperbolic groups. I. The basic tools
D. Groves
We begin the investigation of -limit groups, where is a torsion-free group which is hyperbolic relative to a collection of free abelian subgroups. Using the results of (16), we adapt the re- sults…
Algebraic Geometry over Free Groups: Lifting Solutions into Generic Points
O. Kharlampovich, A. Myasnikov
In this paper we prove Implicit Function Theorems (IFT) for al- gebraic varieties defined by regular quadratic equations and, more generally, regular NTQ systems over free groups. In the model…
Orderable groups, elementary theory, and the Kaplansky conjecture
B. Fine, A. Gaglione, G. Rosenberger, D. Spellman
Mathematics, Computer Science
Groups Complex. Cryptol.
It is shown that each of the classes of left-orderable groups and orderable groups is a quasivariety with undecidable theory and an explicit set of universal axioms is found.
Equations and algebraic geometry over profinite groups
S. G. Melesheva
The notion of an equation over a profinite group is defined, as well as the concepts of an algebraic set and of a coordinate group. We show how to represent the coordinate group as a projective limit…
On subdirect products of type FP m of limit groups
D. Kochloukova
Abstract We show that limit groups are free-by-(torsion-free nilpotent) and have non-positive Euler characteristic. We prove that for any non-abelian limit group the Bieri–Neumann–Strebel–Renz…
Diophantine geometry over groups III: Rigid and solid solutions
This paper is the third in a series on the structure of sets of solutions to systems of equations in a free group, projections of such sets, and the structure of elementary sets defined over a free…
Diophantine geometry over groups V1: Quantifier elimination I
Z. Sela
This paper is the first part (out of two) of the fifth paper in a sequence on the structure of sets of solutions to systems of equations in a free group, projections of such sets, and the structure…
Diophantine geometry over groups I: Makanin-Razborov diagrams
This paper is the first in a sequence on the structure of sets of solutions to systems of equations in a free group, projections of such sets, and the structure of elementary sets defined over a free…
Diophantine geometry over groups IV: An iterative procedure for validation of a sentence
This paper is the fourth in a series on the structure of sets of solutions to systems of equations in a free group, projections of such sets, and the structure of elementary sets defined over a free…
Elementary properties of free groups
G. Sacerdote
In this paper we show that several classes of elementary properties (properties definable by sentences of a first order logic) of groups hold for all nonabelian free groups. These results are…
Cyclic Splittings of Finitely Presented Groups and the Canonical JSJ-Decomposition
E. Rips, Z. Sela
The classification of stable actions of finitely presented groups on ℝ-trees has found a number of applications. Perhaps one of the most striking of these applications is the theory of canonical…
December 2005, 2:1
Large-Scale Dynamics of the Convection Zone and Tachocline
Mark S. Miesch
The past few decades have seen dramatic progress in our understanding of solar interior dynamics, prompted by the relatively new science of helioseismology and increasingly sophisticated numerical models. As the ultimate driver of solar variability and space weather, global-scale convective motions are of particular interest from a practical as well as a theoretical perspective. Turbulent convection under the influence of rotation and stratification redistributes momentum and energy, generating differential rotation, meridional circulation, and magnetic fields through hydromagnetic dynamo processes. In the solar tachocline near the base of the convection zone, strong angular velocity shear further amplifies fields which subsequently rise to the surface to form active regions. Penetrative convection, instabilities, stratified turbulence, and waves all add to the dynamical richness of the tachocline region and pose particular modeling challenges. In this article we review observational, theoretical, and computational investigations of global-scale dynamics in the solar interior. Particular emphasis is placed on high-resolution global simulations of solar convection, highlighting what we have learned from them and how they may be improved.
Supplementary material is available for this article at 10.12942/lrsp-2005-1.
1 A Turbulent Sun
Measurements of plasma flows in the surface layers of the Sun by Doppler imaging, tracking of surface features, and helioseismic inversions reveal an intricate, rapidly evolving structure characteristic of a highly turbulent fluid (e.g., Toomre, 2002). Small-scale (∼ 1–2 Mm) granulation cells dominate the velocity and irradiance patterns, blanketing the solar surface with a network of relatively cool, dark downflow lanes surrounding brighter, broader upwellings of warmer fluid. The granulation patterns change continually and chaotically, driven by vigorous thermal convection under the influence of stratification, ionization, and radiative transfer effects as convective heat transfer gives way to radiation, moving energy outward through the extended solar atmosphere and into the interplanetary medium (e.g., Stein and Nordlund, 1998, 2000). Larger-scale convective patterns have also been detected including mesogranulation at ∼ 5 Mm and supergranulation at ∼ 30 Mm (Leighton et al., 1962; November et al., 1981; Muller et al., 1992; Rast, 2003; DeRosa and Toomre, 2004). Local helioseismology reveals even more structure: swirling, converging, and diverging horizontal flows, meandering zonal jets, and global meridional circulations all of which evolve substantially over the course of months and years (Section 2.2).
Despite this seething complexity, the Sun exhibits some striking regularities. Among these is the latitudinal variation of the surface rotation rate, which is non-uniform; equatorial regions rotate with a period of about 27 days whereas polar regions rotate with a period of about 35 days. This differential rotation pattern is remarkably smooth and steady, monotonically decreasing from equator to pole and varying by not more than about 5% since the first systematic measurements were made by Carrington (1863) over a century ago. Another striking manifestation of order amid the chaos of solar convection is the solar activity cycle in which belts of magnetic activity regularly appear at mid latitudes, propagate toward the equator, and then vanish as new activity belts form at mid latitudes and repeat the process (Schrijver and Zwaan, 2000; Stix, 2002; Charbonneau, 2005). Other systematic patterns are also evident within the framework of this activity cycle, such as the orientation and chirality of individual active regions and the frequency and magnitude of eruptive events such as flares and coronal mass ejections (see Section 3.8).
Turbulent, electrically conducting flows such as solar granulation are generally capable of amplifying and maintaining magnetic fields through hydromagnetic dynamo action. This is the likely origin of much of the small-scale magnetic flux observed in the photosphere, sometimes referred to as the magnetic carpet or as the salt and pepper which dots high-resolution magnetograms of the solar disk (e.g., Schrijver and Zwaan, 2000). This small-scale flux concentrates in granular downflow lanes and evolves rapidly, continually replenishing itself in less than a day. However, it does not exhibit the emergence patterns and cyclic behavior characteristic of much larger active regions. Rather, the generation of small-scale magnetic flux locally by dynamo action within the solar surface layers and its advection by granulation and supergranulation is distinct from, but coupled to, the generation of larger-scale field which is manifested in the solar activity cycle (Simon et al., 2001). Thus, there is not one solar dynamo, but two: a local dynamo which continuously generates small-scale, relatively random magnetic fluctuations in the solar surface layers, and a global dynamo which maintains the larger-scale cyclic activity (Cattaneo, 1999).
The regularities in magnetic activity associated with the global dynamo likely have little to do with the granulation and supergranulation patterns observed in the photosphere. These motions are thought to be confined to the upper few percent of the solar interior (r ≥ 0.97R⊙). Solar structure models and helioseismic inversions suggest that the solar envelope is convectively unstable over a much larger region, down to r ∼ 0.71R⊙ (Christensen-Dalsgaard et al., 1991; Basu and Antia, 2001). Relative to granulation and supergranulation, the motions which occupy the bulk of the solar convection zone are thought to be larger-scale and slower, with turnover timescales comparable to the solar rotation period of about one month. These motions are thus more influenced by rotation which induces anisotropic momentum and heat transport, thus maintaining global-scale flows such as differential rotation and meridional circulations. Such flows are thought to play a key role in the global dynamo. Rotation also induces kinetic and magnetic helicity, another important ingredient in solar dynamo theory (Section 4.5).
Understanding the dynamics and dynamo processes occurring within the deep solar convection zone has far-reaching implications for understanding solar and stellar magnetism, evolution, structure, and variability. Furthermore, since much of solar variability is tied to cyclic magnetic activity, such insight is essential in order to gain a better understanding of how the Sun influences life on Earth through a variety of processes collectively known as space weather (Schrijver and Zwaan, 2000). However, large-scale convection motions in the Sun are notoriously difficult to observe directly because they are masked by the much more vigorous granulation in the near-surface layers (Section 3.5). We must instead rely on their indirect observational manifestations such as magnetic activity in the solar atmosphere and the internal rotation profile inferred from helioseismology (Section 3.1). The helioseismic investigations have proven particularly enlightening as they have revealed a narrow layer of strong radial shear in the solar angular velocity, where the differential rotation of the convective envelope undergoes a transition to nearly uniform rotation in the radiative interior. The discovery of this shear layer, now known as the solar tachocline, has had profound implications for solar dynamo theory.
In this review we will give a general overview of solar interior dynamics, focusing on large-scale motions in the convection zone and tachocline. Smaller-scale dynamics in the solar surface layers including granulation, supergranulation, and issues relating to the local dynamo are discussed elsewhere in this journal. Although we will often discuss the deep convection zone and tachocline in the context of the global dynamo, we make no attempt to cover all aspects of solar dynamo theory. More comprehensive discussions of solar dynamo modeling and of the evolution and emergence of magnetic flux in the convection zone can be found in these volumes in the reviews by Fan (2004) and Charbonneau (2005). Even with this restricted scope, the subject matter is vast and we must necessarily focus on some aspects more than others. Particular emphasis will be placed on 3D numerical simulations of turbulent convection. References are provided throughout should the reader wish to explore the subject matter further or to seek a different perspective.
This review is organized as a web-based reference in that it has a modular form and ample cross-referencing; the reader is encouraged to skip to the sections of most interest. Like most reviews, it is targeted mainly at non-specialists: students and interested researchers from other disciplines.
In Section 2 we describe the means by which we can potentially glean information about the solar interior; how do we know what we know? The most relevant observational results are then reviewed in Section 3. There we discuss observational diagnostics of convection, mean flows, and dynamo processes in the solar envelope. We also define the tachocline and review what is known about it observationally. Some fundamental theoretical principles and modeling approaches are then discussed in Sections 4 and 5. Among these approaches, high-resolution numerical simulations of thermal convection in rotating spherical shells offer unique promise in elucidating the complex turbulent dynamics of the solar convection zone and we discuss their implications, current limitations, and future prospects in Sections 6 and 7. We then turn to the tachocline and the region of convective overshoot which forms the interface between the solar envelope and the radiative interior. Since most of the tachocline is thought to be stably-stratified, it exhibits qualitatively different dynamics relative to the convection zone, as we discuss in Section 8. We close with an attempt to tie it all together in Section 9 where we assess the current state of interplay between dynamical models and observations.
2 Probing the Solar Interior
We cannot observe the solar interior directly. Rather, we must infer what is occurring below the surface from measurements made in the solar photosphere and above. In this section we review the types of observations which provide insight into solar interior dynamics and discuss what they can tell us, both in principle and in practice. Results from these observations will be discussed in Section 3.
The most stringent observational constraints on dynamical models of the solar interior are provided by helioseismology, for which many excellent and much more comprehensive reviews exist; see for example Gizon and Birch (2005) in these volumes, and also Gough and Toomre (1991) and Christensen-Dalsgaard (2002). A more detailed discussion of the solar rotation profile in particular, including both observational results and modeling efforts, is given by Thompson et al. (2003). Many earlier reviews of solar rotation are also available, focusing primarily on surface measurements (Gilman, 1974; Howard, 1984; Schröter, 1985; Rüdiger, 1989; Beck, 2000).
2.1 Global helioseismology
Granulation in the surface layers of the Sun is highly compressible (Mach numbers approaching or exceeding unity) and is therefore a strong source of acoustic waves. These waves propagate throughout the solar interior, reflecting off the surface and interfering with one another to form global standing modes with characteristic periods of about five minutes. In this way the Sun resonates with acoustic oscillations which can be used to probe its internal structure and dynamics.
Helioseismic investigations typically begin with a stellar structure model. Resonant modes of oscillation are then computed by considering linear, adiabatic perturbations about the spherically symmetric background state obtained from these models. Perturbations are typically expressed in terms of spherical harmonic basis functions Y_ℓm in latitude and longitude, and in terms of eigenfunctions in radius characterized by a radial order n. The frequencies of these resonant oscillations depend on the spherical harmonic degree, ℓ, the radial order, n, and the properties of the background state, principally the sound speed and the density (Christensen-Dalsgaard, 2002). The next step is to observe these oscillations on the Sun and compare them to the theoretical predictions. Helioseismic measurements typically consist of a time series of photospheric images in some dynamical variable such as the radial velocity as determined by the Doppler shift of a spectral emission line. These observations are then subjected to spherical harmonic transforms in space and Fourier transforms in time in order to determine oscillation frequencies. The measured frequencies agree remarkably well with the theoretical predictions (Gough, 1996; Christensen-Dalsgaard, 2002) and for low spherical harmonic degree ℓ ≤ 150, they are quantized, indicative of resonant oscillations. At higher spherical harmonic degree, the frequencies are blurred in ℓ due to locally-excited traveling waves which have not yet propagated around the solar sphere to interfere with other waves. These modes form the basis of local-domain helioseismology which will be discussed in Section 2.2 (see also Gizon and Birch, 2005).
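As a toy illustration of this measurement step (grossly simplified, ignoring the spherical harmonic decomposition, instrumental effects, and the full mode spectrum), the following Python sketch generates a synthetic Doppler time series containing a single "five-minute" oscillation plus noise and recovers the mode frequency from the peak of its temporal power spectrum; the cadence, mode frequency, and noise level are arbitrary illustrative choices.

```python
import numpy as np

# Synthetic single-mode Doppler time series -> temporal power spectrum -> frequency.
# All numbers here are illustrative, not actual solar parameters.
rng = np.random.default_rng(0)

dt = 60.0                                   # observing cadence in seconds
t = np.arange(0.0, 8 * 3600.0, dt)          # 8 hours of data
nu_mode = 3.3e-3                            # 3.3 mHz, a "five-minute" oscillation
signal = np.sin(2 * np.pi * nu_mode * t) + 0.5 * rng.standard_normal(t.size)

power = np.abs(np.fft.rfft(signal))**2
freqs = np.fft.rfftfreq(t.size, d=dt)

peak = np.argmax(power[1:]) + 1             # skip the zero-frequency bin
print(f"recovered peak frequency: {freqs[peak] * 1e3:.3f} mHz")
```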
Different oscillation modes are sensitive to different regions of the solar interior; for example, high-ℓ modes sample only the near-surface layers whereas low-ℓ modes penetrate much deeper. The oscillation frequencies are weighted integrals over the sampling region (loosely, the ray path) so some inversion procedure is necessary to infer solar interior properties such as the variation of the sound speed with depth (Christensen-Dalsgaard, 2002). The inversions are usually assumed to be linear so weighted summations over different frequencies can be used to derive averaging kernels which are sensitive to localized regions of the solar interior. Parametric representations may also be used, with minimization procedures to determine the best fit to solar data. Global inversions generally become less reliable in the polar regions and in the deep interior which are not well-sampled by observable oscillation modes.
With regard to solar interior dynamics, the most important feature of global acoustic oscillations is their so-called rotational splitting. In a non-rotating star, the frequencies of resonant acoustic oscillations are independent of the spherical harmonic order m (neglecting the asphericity caused by flows or magnetic fields). This is no longer the case when the effects of rotation are included. The resulting frequency shifts are small relative to the reference frequency so they can be reliably treated as perturbations. Helioseismic inversions can then be used to infer the internal rotation profile as a function of latitude and radius as shown in Figure 1.
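In the crudest advection-only picture (neglecting the Ledoux/Coriolis correction and any radial or latitudinal variation of the rotation rate), a mode of azimuthal order m is shifted in frequency by roughly m times the rotation frequency. The sketch below evaluates such splittings for an assumed rigid rotation with a 27-day period, purely to indicate the order of magnitude involved; real inversions use the full, non-rigid kernels.

```python
# Rough rotational splitting estimate: shift ~ m * Omega / (2*pi) for rigid rotation.
# The 27-day period and the neglect of the Ledoux constant are illustrative choices.
P_rot = 27.0 * 86400.0                      # rotation period in seconds
nu_rot = 1.0 / P_rot                        # rotation frequency, ~0.43 microHz

ell = 2
for m in range(-ell, ell + 1):
    print(f"m = {m:+d}   frequency shift ~ {m * nu_rot * 1e6:+.3f} microHz")
```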
Figure 1: Angular velocity profile in the solar interior inferred from helioseismology (after Thompson et al., 2003). In panel (a), a 2D (latitude-radius) rotational inversion is shown based on the subtractive optimally localized averaging (SOLA) technique. In panel (b), the angular velocity is plotted as a function of radius for several selected latitudes, based on both SOLA (symbols, with 1σ error bars) and regularized least squares (RLS; dashed lines) inversion techniques. Dashed lines indicate the base of the convection zone. All inversions are based on data from the Michelson Doppler Imager (MDI) instrument aboard the SOHO spacecraft, averaged over 144 days. Inversions become unreliable close to the rotation axis, represented by white areas in panel (a). Note also that global modes are only sensitive to the rotation component which is symmetric about the equator (courtesy M.J. Thompson & J. Christensen-Dalsgaard)
A limitation of global helioseismology is that the inversions used to infer rotation profiles or structural quantities such as sound speed are only sensitive to the component which is symmetric about the equator. Furthermore, they are insensitive to meridional circulations and non-axisymmetric convective motions. In order to probe such dynamics other techniques are necessary, the most promising being local helioseismology.
2.2 Local helioseismology
Not all acoustic waves (p-modes) in the Sun are resonant oscillations of the full sphere. Locally-excited waves interact with local variations in sound speed, flow fields, and magnetic activity which alter their propagation characteristics. Thus, a careful analysis of the acoustic wave field in a localized patch of the solar photosphere can potentially reveal a great deal about the subsurface dynamics.
Extracting dynamical information from local wave fields can be more challenging than from global oscillations, primarily because the forward problem is more difficult; for a given structure and flow, what acoustic signal will be manifested on the solar surface? This depends to some degree on the source of the waves, which is complex and intermittent. A thorough understanding of this forward problem is necessary in order to devise reliable inversion techniques for inferring subsurface structure and dynamics from photospheric measurements.
Several related inversion techniques exist for local helioseismology, including ring-diagram analysis (Hill, 1988), time-distance methods (Kosovichev et al., 2000), and acoustic holography (Lindsey and Braun, 2000a). All of these approaches are discussed in detail by Gizon and Birch (2005).
From the perspective of solar interior dynamics, the most important result to come from local helioseismology has been the mapping of horizontal flows in the surface layers of the Sun as shown in Figure 3. Such mappings reveal meandering meridional and zonal circulation patterns as well as intricate smaller-scale flows associated with active regions and supergranulation. The investigation and monitoring of these flows has given rise to the new discipline of solar subsurface weather, SSW (Toomre, 2002). Local helioseismology has also been used to study the acoustic and flow structure underlying sunspots (Kosovichev et al., 2000; Braun and Lindsey, 2000; Zhao et al., 2001; Zhao and Kosovichev, 2003) and to image active regions on the far side of the Sun (Lindsey and Braun, 2000b; Braun and Lindsey, 2001).
The probing of horizontal flows by local helioseismology has provided unprecedented insight into the structure and evolution of differential rotation (Section 3.3), meridional circulation (Section 3.4), and giant cells (Section 3.5). However, like any method, it has its limitations. Most notably, the small-wavelength acoustic waves, which local helioseismology is best suited to investigate, are confined principally to the near-surface layers, r ≥ 0.97R⊙. Some analyses have attempted to probe deeper (Giles et al., 1997; Braun and Fan, 1998) but the resolution is limited and the results are generally less reliable. There is much promise that with improved instrumentation and analysis techniques local helioseismology can do better and may soon provide information on flow structure, magnetic activity, and thermal asphericity as deep as the tachocline (Gizon and Birch, 2005).
Although local helioseismology can currently only provide detailed dynamical information for the outer few percent of the solar interior, the large-scale flow patterns it reveals may extend deeper into the convective envelope. For the same reason, surface observations are also relevant (Section 2.3).
2.3 Surface and atmospheric observations
Understandably, much of solar physics is concerned with the part of the Sun we can observe directly, namely the photosphere, chromosphere, and corona. Although such observations do not provide direct information on physical conditions in the solar interior, they can provide insight into the nature of solar convection and dynamo processes.
Plasma flows in the photosphere may be measured directly by Doppler imaging or may be inferred by tracking the horizontal movement of magnetic structures, emission features, and convective patterns across the solar disk. Such measurements provide a useful check on near-surface flow fields obtained from helioseismology. Surface measurements also provide an extensive time history of the solar rotation profile, tracing its long-term evolution. The differential rotation of the solar surface has been monitored for almost 150 years (since Carrington, 1863) and careful analysis of prior sunspot records can potentially extend this time coverage even further back (Eddy et al., 1977; Ribes and Nesme-Ribes, 1993). By comparison, helioseismic determinations of the solar rotation only date back to the mid 1980's.
Doppler maps of photospheric flow fields, known as Dopplergrams, are dominated by granulation: small-scale (∼ 1–2 Mm) turbulent convection cells confined to the near-surface layers and driven by ionization and radiative transfer effects. Characteristic velocity amplitudes depend somewhat on the resolution of the instrument but are, at least, several km s−1. More sophisticated analyses, such as correlation tracking, also reveal another scale of convection known as supergranulation with characteristic length and velocity scales of about 30 Mm and several hundred m s−1 (Leighton et al., 1962; DeRosa and Toomre, 2004). At intermediate scales of ∼ 5 Mm, another pattern known as mesogranulation has also been detected in correlation tracking measurements with characteristic velocity amplitudes of ∼ 60 m s−1 (November et al., 1981; Muller et al., 1992). However, mesogranulation is not apparent in power spectra computed from Doppler measurements of surface velocities whereas granulation and supergranulation are (Hathaway, 1996b; Hathaway et al., 2000). Such patterns must be filtered out or otherwise removed from surface Doppler measurements in order to detect the relatively weak, larger-scale motions more relevant to the dynamics of the deep solar interior, including differential rotation (∼ 200 m s−1), meridional circulation (∼ 20–30 m s−1), and larger-scale convective motions (∼ 10–100 m s−1). The five-minute acoustic oscillations which form the basis of helioseismology must also be filtered out when studying large-scale surface flows by means of Doppler measurements.
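As a rough illustration of the sort of filtering involved, the sketch below builds a synthetic single-point Doppler time series containing a slow, weak flow signal plus a five-minute oscillation and removes the oscillation with a simple Fourier band-stop filter. The cadence, amplitudes, and cutoff band are illustrative assumptions; real pipelines filter in both space and time.

```python
import numpy as np

# Synthetic single-pixel Doppler time series (illustrative values only):
# a slow 20 m/s large-scale flow signal plus a 400 m/s five-minute oscillation.
dt = 60.0                          # cadence in seconds (assumed)
t = np.arange(0, 8 * 3600, dt)     # 8 hours of data
slow_flow = 20.0 * np.sin(2 * np.pi * t / (6 * 3600))   # hours-long variation
p_modes = 400.0 * np.sin(2 * np.pi * t / 300.0)          # 5-min oscillation
signal = slow_flow + p_modes

# Band-stop filter around the five-minute band (2-5 mHz, assumed cutoff).
freqs = np.fft.rfftfreq(t.size, d=dt)
spectrum = np.fft.rfft(signal)
spectrum[(freqs > 2e-3) & (freqs < 5e-3)] = 0.0
filtered = np.fft.irfft(spectrum, n=t.size)

print("rms before filtering: %.1f m/s" % signal.std())
print("rms after filtering:  %.1f m/s" % filtered.std())
```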
Removing contaminating signals arising from rotation, granulation, supergranulation, acoustic oscillations, and small-scale magnetic activity is perhaps the biggest challenge in determining large-scale flow patterns from surface Doppler measurements. Projection effects such as limb darkening also pose problems for both Doppler and tracking techniques, and the non-uniform rotation of the Sun makes it more difficult to identify and monitor long-lived velocity features. Furthermore, techniques which rely on tracking magnetic features or flow patterns via auto-correlations can give misleading results if the features or patterns evolve substantially over the course of the tracking interval or if the features are not just passively advected by the fluid as is implicitly assumed.
Measurements of photospheric intensity or irradiance are also very instructive from the standpoint of solar interior dynamics because they may reflect inhomogeneities in temperature or heat flux induced by large-scale convective motions. However, detecting such large-scale variations is difficult because, like Doppler measurements, solar irradiance measurements are dominated by granulation patterns and small-scale emission features related to magnetic activity such as faculae. After removing these effects, the residual latitudinal variations are only about one part in 104 (Section 3.7).
The Sun exhibits a wide variety of magnetic activity, from the quiet photospheric network to sunspots and coronal loops to MHD waves and explosive events such as flares and coronal mass ejections. Indeed, solar magnetism lies at the heart of nearly all the companion reviews in this journal, including Charbonneau (2005); Fan (2004). Although much of this research focuses on structures in the solar atmosphere, the ultimate origin of this magnetic activity lies below the surface, in the convection zone and tachocline (Section 4.5). Reproducing patterns of magnetic activity such as the solar butterfly diagram (Section 3.8) therefore ranks among the most important and difficult challenges to dynamical models of the solar interior.
3 What Do We Observe?
In the previous section we reviewed the types of observations which can potentially give us insight into what is occurring inside the Sun from a dynamical perspective. Here we survey the variety of phenomena which such observations have revealed. These results can be used to motivate, evaluate, and calibrate solar interior models. In other words, this is what we have to go on.
3.1 Differential rotation of the solar envelope
The internal rotation of the Sun inferred from global helioseismology is shown in Figure 1. Throughout the convective envelope, the rotation rate decreases monotonically toward the poles by about 30%. Angular velocity contours at mid-latitudes are nearly radial. Near the base of the convection zone, there is a sharp transition between differential rotation in the convective envelope and nearly uniform rotation in the radiative interior. This transition region has become known as the solar tachocline and will be discussed further in the next section (Section 3.2). The rotation rate of the radiative interior is intermediate between the equatorial and polar regions of the convection zone. Thus, the radial angular velocity gradient across the tachocline is positive at low latitudes and negative at high latitudes, crossing zero at a latitude of about 35°.
In addition to the tachocline, there is another layer of comparatively large radial shear in the angular velocity near the top of the convection zone. At low and mid-latitudes there is an increase in the rotation rate immediately below the photosphere which persists down to r ∼ 0.95R⊙. The angular velocity variation across this layer is roughly 3% of the mean rotation rate and according to the helioseismic analysis of Corbard and Thompson (2002) Ω decreases within this layer approximately as r−1. At higher latitudes the situation is less clear. The radial angular velocity gradient in the subsurface shear layer appears to be smaller and may switch sign (Corbard and Thompson, 2002).
Although helioseismic inversions become less reliable at high latitudes (Section 2.1), available data indicate that the monotonic decrease of angular velocity with latitude continues to the polar regions. Moreover, the inferred rotation rate of the polar regions is even slower than that given by a smooth extrapolation of the rotation rate at low and mid-latitudes (Schou, 1998). This is a striking result, since flows approaching the rotation axis might be expected to spin up the polar regions if they tend to conserve their angular momentum (cf. Sections 6.3 and 6.4).
Finer structure is also present in the rotational inversions, including "wiggles" in the angular velocity contours and propagating, banded zonal flows known as torsional oscillations (Section 3.3). Zonal jets (localized regions of prograde or retrograde flow) may also be present. Schou (1998) reported evidence for a prograde polar jet which can also be seen in the RLS (Regularized Least Squares) inversion results of panel b of Figure 1 (dashed line) at a latitude of 75° and a radius of ∼ 0.95R⊙. However, some data and analysis techniques spanning the same time interval do not reveal such a jet, so its existence is still questionable (Schou et al., 2002). Spatial and temporal variations in the rotation rate are particularly apparent near the poles where the small moment arm, λ = r sin θ, implies large angular velocity variations even for moderate zonal velocities: Ω = υ ϕ /λ. Although many of these fluctuations can likely be attributed to observational and analysis errors, some are statistically significant (Toomre et al., 2000).
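A short worked example of why the small moment arm matters is sketched below: the same modest zonal flow is converted to an angular velocity signal, Ω = υϕ/λ, at a low and a high latitude. The sampling radius and flow speed are assumed values chosen only for illustration.

```python
import numpy as np

R_sun = 6.96e8            # solar radius in meters
r = 0.95 * R_sun          # sampling radius (assumed)
v_phi = 10.0              # zonal flow speed in m/s (assumed)

for latitude in (15.0, 75.0):
    colatitude = np.radians(90.0 - latitude)
    moment_arm = r * np.sin(colatitude)          # lambda = r sin(theta)
    delta_omega = v_phi / moment_arm             # rad/s
    delta_nu = delta_omega / (2 * np.pi) * 1e9   # frequency shift in nHz
    print(f"lat {latitude:4.0f} deg: delta Omega/2pi ~ {delta_nu:.1f} nHz")
```

The same 10 m s−1 flow produces an angular velocity signal nearly four times larger at the high latitude, simply because the moment arm is smaller there.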
Global helioseismic inversions such as those shown in Figure 1 can only provide the equatorially symmetric component of the angular velocity, but local helioseismology reveals significant asymmetries, particularly in the torsional oscillations (Haber et al., 2002; Basu and Antia, 2003; Zhao and Kosovichev, 2004).
3.2 The tachocline
The tachocline is a transition layer between two distinct rotational regimes: the differentially-rotating solar envelope and the radiative interior where the rotation is uniform, within the error estimates of the inversions (Figure 1). This transition is sharp and it occurs near the base of the convection zone as determined by helioseismic inversions and solar models (Section 3.6), implying that convection is responsible for the differential rotation of the envelope (Section 4.3). Although some authors incorporate structural information (e.g., subadiabaticity), most define the tachocline solely by means of the rotation profile. We will follow the latter convention.
Recent helioseismic estimates by Charbonneau et al. (1999a) and Basu and Antia (2003) indicate that the tachocline is centered at rt ∼ 0.693±0.003R⊙ near the equator. This is below the convection zone base of rb = 0.713 ± 0.003R⊙ but it may lie within the overshoot region (Section 3.6). At higher latitudes, the location of the tachocline shifts upward, reaching rt ∼ 0.717 ± 0.003R⊙ at a latitude of 60° (Charbonneau et al., 1999a; Basu and Antia, 2003). Thus, the tachocline is significantly prolate. This is in contrast to the base of the convection zone, rb, in which helioseismic inversions have not yet detected any significant latitudinal variation (Section 3.6).
Estimates of the width of the tachocline vary according to how it is defined. Charbonneau et al. (1999a) characterize the transition in terms of an error function
$$f(r;{r_{\rm{t}}},{\Delta _{\rm{t}}}) = {1 \over 2}\left\{{1 + {\rm{erf}}\left[ {{{2(r - {r_{\rm{t}}})} \over {{\Delta _{\rm{t}}}}}} \right]} \right\},$$
and then estimate the best-fit parameters using several inversion techniques. Their results yield a tachocline thickness of Δt/R⊙ = 0.039 ± 0.013 at the equator and Δt/R⊙ = 0.042 ± 0.013 at a latitude of 60°, suggesting that the tachocline may get somewhat wider at high latitudes but that the result is not statistically significant. On the other hand, Basu and Antia (2003) argue for a statistically significant increase in the tachocline thickness with latitude, from Δt ∼ 0.016R⊙ at the equator to Δt ∼ 0.038R⊙ at latitudes of 60° (when the width is defined as in Charbonneau et al., 1999a). Furthermore, they suggest that the variation may not be smooth; there may be a sharp transition from a narrow tachocline at low latitudes to a wider tachocline at high latitudes, possibly associated with the sign of the radial angular velocity gradient which reverses at a latitude of ∼ 35°. Other estimates for the width of the tachocline range from 0.01R⊙ to 0.09R⊙ (Kosovichev, 1996; Basu, 1997; Corbard et al., 1999; Elliott and Gough, 1999; Basu and Antia, 2001).
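A minimal sketch of how such a parametric fit might be carried out is given below: the error-function profile above is evaluated with scipy and fitted to synthetic transition data. The synthetic data, noise level, and initial guesses are assumptions for illustration, not actual inversion output.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def tachocline_profile(r, r_t, delta_t):
    """Error-function transition profile, as in Charbonneau et al. (1999a)."""
    return 0.5 * (1.0 + erf(2.0 * (r - r_t) / delta_t))

# Synthetic "data": a transition centered at 0.693 R_sun with width 0.039 R_sun,
# plus small random errors (illustrative values, not real inversion output).
rng = np.random.default_rng(0)
r = np.linspace(0.62, 0.78, 40)                       # radius in units of R_sun
f_obs = tachocline_profile(r, 0.693, 0.039) + rng.normal(0.0, 0.02, r.size)

# Fit for the tachocline center r_t and width Delta_t.
popt, pcov = curve_fit(tachocline_profile, r, f_obs, p0=(0.70, 0.05))
r_t_fit, delta_t_fit = popt
print(f"fitted r_t = {r_t_fit:.3f} R_sun, Delta_t = {delta_t_fit:.3f} R_sun")
```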
These helioseismic results suggest that the tachocline lies almost entirely below the convective envelope at low latitudes but it may extend well into the convection zone at high latitudes. Moreover, it appears that the tachocline contains the overshoot region but extends beyond it, perhaps both above and below. However, these results may need to be reexamined in light of new determinations of elemental abundances in the solar envelope, which has important implications for helioseismic inversions (Asplund et al., 2005; Bahcall et al., 2005).
Throughout most of the tachocline, the vertical shear in the mean zonal velocity is almost an order of magnitude larger than the latitudinal shear: dυϕ/dr ∼ ±1.5 × 10−6 s−1 whereas r−1 dυϕ/dθ ∼ 2 × 10−7 s−1. The exception is at latitudes of ∼ 35° where dυϕ/dr changes sign. The total change in the zonal velocity across the tachocline is about 100 m s−1 at the equator and somewhat less at high latitudes, ∼ 90 m s−1.
3.3 Torsional oscillations and other temporal variations in the solar rotation
The rotation rate of the Sun varies on evolutionary timescales; it was once much faster. Here we are concerned with variations on the much shorter dynamical timescales of months, years, and decades. We ask whether the differential rotation profile shown in Figure 1 changes significantly over the course of, for example, the solar activity cycle. The answer is yes. There are two distinct cyclical patterns which have been detected in the solar differential rotation, which we will refer to as torsional oscillations and tachocline oscillations.
The most well-established temporal variations in the solar differential rotation are torsional oscillations, which have been studied using global helioseismology, local helioseismology, and surface Doppler measurements (Howard and LaBonte, 1980; Ulrich et al., 1988; Howe et al., 2000b; Vorontsov et al., 2002; Haber et al., 2002; Basu and Antia, 2003; Zhao and Kosovichev, 2004). These are alternating bands of faster and slower rotation which propagate with a cyclical period of 11 years. At latitudes below about 42°, the bands propagate equatorward but at higher latitudes they propagate poleward. The low-latitude bands are about 15° wide in latitude and extend from the surface down to r ∼ 0.9R⊙ or deeper, possibly to the base of the convection zone. The high-latitude bands are somewhat wider and deeper, possibly extending to the base of the convection zone (Vorontsov et al., 2002). The amplitude of the angular velocity variation is about 2–5 nHz, which is roughly 1% of the mean rotation rate. This corresponds to a zonal flow of about 5–10 m s−1.
The 11-year period of the torsional oscillations strongly suggests that they are associated in some way with the 22-year solar activity cycle. Indeed, surface magnetic activity correlates well with the oscillation patterns, with activity belts tending to lie on the poleward side of the faster-rotating bands at low latitudes, migrating toward the equator together as the activity cycle progresses (Zhao and Kosovichev, 2004). Recent results by Beck et al. (2002) based on time-distance helioseismology indicate that meridional flows may be diverging out of the activity belts, with equatorward and poleward flows correlating well with the faster and slower bands of the torsional oscillations, respectively. There is some evidence that the zonal bands may get slightly faster at times of peak magnetic activity (Zhao and Kosovichev, 2004). Some evidence has also been found for higher-order harmonics in the torsional oscillation signal (Vorontsov et al., 2002) and for possibly related non-axisymmetric wave patterns having longitudinal wavenumbers up to m = 8 (Ulrich, 2001).
The second type of oscillation which has been detected in the differential rotation profile, first reported by Howe et al. (2000a), is distinct from the torsional oscillation in that it has a shorter period, ∼ 1.3 yr and it is localized around the tachocline and the lower convection zone. Furthermore, there is currently no evidence for latitudinal propagation. Rather, it appears to be a standing wave straddling the base of the convection zone, with angular velocity variations at r = 0.63R⊙ out of phase with those at r = 0.72R⊙. The amplitude of the angular velocity variation is about 3 nHz at the equator and perhaps slightly larger, ∼ 4 nHz, at a latitude of 60°. The oscillation may not be strictly periodic; the high-latitude signal in particular appears to be somewhat erratic.
The tachocline oscillation signal is not far from the current sensitivity limits of helioseismic inversions so it is difficult to probe in detail. Basu and Antia (2001) find roughly periodic variations similar to those reported by Howe et al. (2000a) but they argue that the result is not statistically significant. A subsequent analysis by Toomre et al. (2003) further made the case that the oscillations are indeed real but they appear to be varying in amplitude as the solar cycle proceeds, first waning then waxing. Further monitoring of these oscillations is needed in order to verify their presence and better understand their origin.
In addition to the torsional and tachocline oscillation patterns, the solar rotation, particularly at high latitudes, undergoes monthly variations on the order of a few percent or less which appear to be more random (Toomre et al., 2000). Apart from these small variations, the differential rotation profile appears to be remarkably steady. Surface measurements show little variation for well over a century (Gilman, 1974; Schröter, 1985; Rüdiger, 1989). Still, there is some indication that the low-latitude rotation rate as traced by sunspots may increase by as much as a few percent during periods of minimum and to a lesser extent maximum activity (Howard, 1984; Hathaway and Wilson, 1990; Javaraiah, 2003). Longer-term variations may also be present. Javaraiah (2003) has considered rotation data from sunspot groups covering the period 1879–2002 and has found possible evidence for several patterns, including a speedup of the equatorial rotation rate by ∼ 0.1% in alternate sunspot cycles, accompanied by an increase in the differential rotation (latitudinal angular velocity gradient) and a greater asymmetry between the northern and southern hemispheres.
Sunspot groups may not be accurate tracers of solar rotation. Small variations in their rotational properties may reflect other physics, such as where they are "anchored" to the plasma. Thus, we are only beginning to explore systematic variations in the solar rotation over time scales of years and decades. High-quality helioseismic inversions have only been available for a single sunspot cycle. Continued monitoring of the internal rotation profile via helioseismology promises to provide new insight into its evolution for many years to come.
3.4 Meridional circulation
The differential rotation is the axisymmetric component of the mean longitudinal flow, 〈υ ϕ 〉. The axisymmetric flow in the meridional plane, 〈υ θ 〉 and 〈υ r 〉, is generally known as the meridional circulation.
The meridional circulation in the solar envelope is much weaker than the differential rotation, making it relatively difficult to measure (e.g., Hathaway, 1996a). Furthermore, although it can in principle be probed using global helioseismology (Woodard, 2000), the effect of meridional circulation on global acoustic oscillations is small and may be difficult to distinguish from rotational and magnetic effects (Giles et al., 1997). Thus, we must currently rely on surface measurements and local helioseismology.
Early attempts to measure the mean meridional circulation in the solar photosphere by both Doppler and tracer techniques (reviewed by Hathaway, 1996a; Snodgrass and Dailey, 1996; Latushko, 1996) produced widely varying results. Many suggested a poleward flow of ∼ 10–20 m s−1, but others found amplitudes ranging from ∼ 1 to 100 m s−1 and complex latitudinal structure with both poleward and equatorward flows, multiple cells, and large asymmetries about the equator.
More recent Doppler measurements of the photospheric meridional circulation by Hathaway (1996b) and Hathaway (1996a) yield a poleward flow of about 20 m s−1 on average, confirming many of the earlier results. This mean poleward flow is nearly symmetric about the equator and peaks at latitudes of about 40°. However, Hathaway (1996a) found substantial monthly and yearly variations in the flow amplitude and profile, reaching speeds of up to 50 m s−1 (panel a in Figure 2). Doppler measurements by Ulrich et al. (1988) showed even larger fluctuations, sometimes reversing sign and becoming equatorward.
Spatial and temporal variation of the meridional circulation in the surface layers of the Sun. (a) The colatitudinal velocity 〈υ θ 〉 in the solar photosphere obtained from Doppler measurements, averaged over longitude and time. Positive values represent southward flow and different curves correspond to adjacent 6-month averaging intervals between 1992 and 1995 (from Hathaway, 1996b). (b) 〈υ θ 〉 as a function of latitude and depth inferred from ring-diagram analysis. Each inversion is averaged over a 3-month interval and results are shown for 1997, 1999, and 2001. Grey and white regions represent southward and northward flow, respectively. A contour plot of the velocity amplitude underlies the arrow plots, with contours labeled in m s−1. Flow near the surface and in the southern hemisphere is generally poleward but beginning in 1998, equatorward circulation is found in the northern hemisphere at depths below ∼ 3 Mm (from Haber et al., 2002).
Recent estimates of the meridional circulation obtained from the cross-correlation of magnetic features yield an average latitudinal flow which is poleward at low latitudes and weakly equatorward at high latitudes, with a peak amplitude of about 15 m s−1 (Komm et al., 1993; Snodgrass and Dailey, 1996; Latushko, 1996). However, these methods too exhibit large temporal variations. In the 26-year interval studied by Snodgrass and Dailey (1996), the meridional flow achieves amplitudes as large as 50 m s−1 and often becomes equatorward at latitudes below 20° and above 40°.
Local helioseismology provides an alternative to surface measurements and gives us the capability of probing the meridional flow below the photosphere. Near the surface the results are generally consistent with Doppler and tracer measurements, showing poleward flow of about 20 m s−1 with substantial time variation and significant asymmetry about the equator (Giles et al., 1997; Chou and Dai, 2001; Haber et al., 2002; Basu and Antia, 2003; Zhao and Kosovichev, 2004).
Below the surface, Haber et al. (2002) have reported a flow reversal in the northern hemisphere where the circulation becomes equatorward at depths greater than about 3 Mm below the photosphere (r ∼ 0.99R⊙), down to the limit of their sampling domain which lies at a depth of 15 Mm (panel b in Figure 2). Their ring-diagram analysis spans six years, from 1996–2001, with the flow reversal occurring in the latter four, from 1998–2001. Such a flow reversal is not evident in the time-distance results of Zhao and Kosovichev (2004) who present meridional flows averaged over depths of 3–4.5 Mm and 6–9 Mm. Several local helioseismic studies have attempted to probe deeper still. Giles et al. (1997) presented time-distance results for the upper 4% of the solar interior and concluded that the meridional flow throughout this region was poleward. Braun and Fan (1998) similarly find no evidence for a return equatorward flow down to 0.85R⊙. Inferring the circulation at depth below about 0.98R⊙ is a difficult task and it is still too early to know what to make of these efforts.
There is evidence from both surface measurements and local helioseismology that the amplitude of the meridional circulation may be anticorrelated with magnetic activity, decreasing during solar maximum and increasing during solar minimum (Komm et al., 1993; Chou and Dai, 2001; Basu and Antia, 2003). Furthermore, a weak meridional circulation component of a few m s−1 has been found which diverges out of magnetic activity belts and propagates with them toward the equator as the activity cycle progresses (Snodgrass and Dailey, 1996; Beck et al., 2002). However, Zhao and Kosovichev (2004) report the opposite: weak meridional flows which converge toward activity belts. They argue that the convergence occurs in the outermost layers, less than ∼ 12 Mm below the photosphere whereas the divergence occurs deeper down.
Although much progress has been made in recent years, improving our understanding of the meridional circulation throughout the convective envelope remains an important challenge for local helioseismology in particular and will be a major research focus in the near future.
3.5 Giant cells, waves, and solar subsurface weather
Differential rotation and meridional circulation are essential components of solar interior dynamics but it is also of fundamental importance to investigate the large-scale convective motions which maintain them and which, therefore, lie at the root of solar variability (Section 1). There is no doubt that large-scale structure (ℓ ≤ 100) is present in surface velocity maps obtained from Doppler measurements, feature tracking, and local helioseismology (e.g., Stix, 2002). However, it has been notoriously difficult to identify characteristic patterns or to obtain quantitative diagnostics of large-scale convective motions.
The convection power spectrum in the photosphere obtained from Doppler measurements peaks at granulation scales (ℓ ≥ 1000), with a secondary peak at ℓ ∼ 120, corresponding to supergranulation (Hathaway, 1996b; Hathaway et al., 2000). At lower wavenumbers, the velocity spectrum appears to drop off nearly linearly: υ(ℓ) ∼ ℓ.
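For orientation, the spherical harmonic degrees quoted here can be converted to rough horizontal scales via L ≈ 2πR⊙/ℓ; the short sketch below applies this approximate correspondence, which is only indicative.

```python
import numpy as np

R_sun_Mm = 696.0     # solar radius in Mm

def horizontal_scale(ell):
    """Approximate horizontal wavelength (Mm) of spherical harmonic degree ell."""
    return 2.0 * np.pi * R_sun_Mm / ell

for ell in (120, 1000, 3000):
    print(f"ell = {ell:5d}  ->  L ~ {horizontal_scale(ell):6.1f} Mm")
# ell ~ 120 corresponds to ~36 Mm (supergranulation scales);
# ell >~ 1000 corresponds to a few Mm or less (granulation scales).
```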
Recently, several groups have reported long-lived features in Dopplergrams which are highly correlated in longitude, corresponding to azimuthal wavenumbers of m = 0–8 (angular extent ≥ 45°) but with a narrow latitudinal extent of not more than about 6° (Ulrich, 1993, 2001; Beck et al., 1998). Although Beck et al. (1998) interpret these features as giant convection cells, Ulrich (2001) argues that they more likely comprise a spectrum of inertial oscillations, possibly related to Rossby wave modes (Appendix A.6) and perhaps also to torsional oscillations (Section 3.3).
Evidence for a dramatically different giant cell structure has been presented by Lisle et al. (2004). They study the supergranulation pattern using correlation tracking and find a tendency for north-south alignment of supergranular cells. Such an alignment would be expected if the supergranulation were advected by larger-scale, latitudinally-elongated lanes of horizontal convergence such as those commonly seen in numerical simulations of solar convection (Section 6.2). Advection by such structures may also help to explain why the supergranulation pattern appears to rotate faster than the surrounding plasma (Lisle et al., 2004).
The most substantial recent advance in the search for large-scale non-axisymmetric motions in the solar envelope has been the mapping of horizontal flows by local helioseismology, as shown in Figure 3. After subtracting out the contributions from differential rotation and meridional circulation, the residual flow maps reveal intricate, evolving flows on a range of spatial scales (Haber et al., 2002; Zhao and Kosovichev, 2004; Komm et al., 2004; Hindman et al., 2004). Such flow patterns have become known as solar subsurface weather, SSW (Toomre, 2002).
Shown is a synoptic horizontal flow map 10.2 Mm below the photosphere inferred from ring diagram analysis (Haber et al., 2002; Hindman et al., 2004). Vectors indicate flow speed and direction while the underlying image represents the radial magnetic field strength (red and green denote opposite polarity). Characteristic velocity amplitudes are 30 m s−1. These inversions are based on MDI data averaged over 7 days and sampled over square horizontal patches, each spanning 15° in latitude and longitude. The data shown have not been corrected for inclination (p-angle) effects which would shift velocities by about 4 m s−1 (courtesy D. Haber).
The inferred SSW patterns show a high correlation with magnetic activity, becoming more complex at solar maximum. Near the surface, strong horizontal flows converge into active regions and swirl around them, generally in a cyclonic sense (counter-clockwise in the northern hemisphere and clockwise in the southern hemisphere). Deeper down, roughly 10 Mm below the photosphere, the pattern reverses; here flows tend to diverge away from active regions (Zhao and Kosovichev, 2004).
The distribution and relative amplitude of horizontal divergence and vertical vorticity can provide insight into the nature of the flows and can be used to make contact with theoretical and numerical models. Komm et al. (2004) compute the divergence and vorticity fields from SSW flow maps along with other flow descriptors including the vertical velocity (obtained from the mass continuity equation) and vertical gradients of horizontal flows. The results again show a strong correlation with active regions which are associated with cyclonic vorticity, converging flows, and large velocity gradients.
Other flow diagnostics which can in principle be deduced from SSW flow maps include the horizontal Reynolds stress component < υ′ θ υ′ ϕ > (see Section 4.3). Although such quantities have not yet been investigated in detail with helioseismic measurements, they have been measured to some degree using sunspots as tracers. The results yield small but generally positive values, indicating equatorward angular momentum transport (Stix, 2002; Nesme-Ribes et al., 1997).
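The sketch below illustrates one straightforward way such a correlation could be estimated from a horizontal flow map on a latitude-longitude grid: remove the longitudinal mean from each component and average the product of the fluctuations. The synthetic velocity fields are arbitrary placeholders standing in for SSW flow maps, and the imposed correlation is an assumption for demonstration only.

```python
import numpy as np

# Synthetic horizontal flow maps v_theta(lat, lon) and v_phi(lat, lon) in m/s.
# These arrays are arbitrary placeholders standing in for SSW flow maps.
rng = np.random.default_rng(1)
n_lat, n_lon = 90, 180
v_theta = rng.normal(0.0, 20.0, (n_lat, n_lon))
v_phi = rng.normal(0.0, 20.0, (n_lat, n_lon)) + 0.3 * v_theta  # weak correlation

# Fluctuations about the longitudinal mean (axisymmetric component removed).
vth_fluct = v_theta - v_theta.mean(axis=1, keepdims=True)
vph_fluct = v_phi - v_phi.mean(axis=1, keepdims=True)

# Horizontal Reynolds stress component <v'_theta v'_phi> at each latitude.
reynolds_stress = (vth_fluct * vph_fluct).mean(axis=1)
print("stress range: %.1f to %.1f m^2/s^2" %
      (reynolds_stress.min(), reynolds_stress.max()))
```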
In addition to giant convective cells, large-scale, non-axisymmetric flow patterns may also arise from wave phenomena. A familiar example is the acoustic wave spectrum which forms the basis of helioseismology. There is also some evidence for the presence of Rossby wave modes or, more generally, inertial oscillations. Ulrich (2001) has interpreted long-lived features in photospheric Dopplergrams as a hierarchy of inertial oscillations with longitudinal wavelengths m ≤ 8. Some hints of these patterns can also be seen in SSW flow maps (Haber et al., 2002). Further evidence for Rossby waves on the Sun has been reported by Kuhn et al. (2000) and Lou (2000). Gizon et al. (2003) have suggested that supergranulation patterns may also exhibit wavelike behavior although this has been disputed by Rast et al. (2004).
3.6 The base of the convection zone
Inversions to determine the radial profile of sound speed and other structure quantities have been used to great effect in improving our understanding of the physics which goes into solar structure models (e.g., Gough, 1996; Christensen-Dalsgaard, 2002). In the context of solar interior dynamics, the most important contribution of structure inversions has been to locate the base of the solar convection zone at r b = 0.713 ± 0.003R⊙ (Christensen-Dalsgaard et al., 1991), defined as the radius at which the stratification changes from nearly adiabatic stratification to substantially subadiabatic stratification (see Section 8.1). This result has until recently been viewed as very reliable but new elemental abundance determinations have called it into question (Asplund et al., 2005; Bahcall et al., 2005). Helioseismic estimates further suggest that the extent of the overshoot region below the convection zone is no more than about 5% of a pressure scale height, which is less than 1% of the solar radius (Monteiro et al., 1994; Basu, 1997). Basu and Antia (2001) find no significant variations in either rb or the thickness of the overshoot region with latitude or time (variations in the structure of the tachocline obtained from rotational inversions are discussed in Sections 3.2 and 3.3).
3.7 Thermal asphericity and subsurface magnetic fields
Latitudinal variations (asphericity) in the sound speed may be caused by temperature perturbations induced by convection or magnetism or they may be caused by the direct influence of the Lorentz force on the propagation speed of acoustic waves. The two effects are difficult to disentangle in helioseismic inversions.
Latitudinal sound speed variations inferred by global helioseismology are found to be very weak (about one part in 104) and appear to be dominated by small-scale magnetic activity near the solar surface (Gough, 1996; Dziembowski et al., 2000; Antia et al., 2001, 2003). In particular, enhancements in the sound speed are found to correlate well with latitudinal bands of magnetic activity in the photosphere which migrate toward the equator during the course of the solar activity cycle. However, weak latitudinal variations have also been detected deeper in the interior. Time-averaged inversions reveal a significant sound speed enhancement throughout the convection zone, peaking at a latitude of ∼ 60° and radius of ∼ 0.92R⊙ (Dziembowski et al., 2000; Antia et al., 2001, 2003). This feature appears to be present at least over the time interval from 1995 to 2002 and its magnitude is consistent with a fractional sound speed variation of about 10−4, a magnetic field of strength ∼ 60 kG, or some combination of the two.
Probing magnetic fields near the base of the convection zone is of particular importance to solar dynamo theory since the tachocline and overshoot region are believed to play a key role in generating and storing toroidal magnetic flux which eventually rises to the surface to form active regions (see Section 4.5). Such fields have not yet been unambiguously detected but helioseismic measurements have suggested an upper limit of about 300 kG (Basu, 1997; Antia et al., 2000, 2003).
Thermal asphericity induced by convective motions may also give rise to latitudinal irradiance variations in the photosphere which can in principle be measured. However, in practice, such variations are dominated by magnetic features such as sunspots and faculae, making it difficult to distinguish purely thermal effects (Hudson, 1988). Early estimates of the pole-equator temperature difference (reviewed by Altrock and Canfield, 1972) were only able to set upper limits of a few K. After removing the facular contribution, Kuhn et al. (1988) report residual irradiance variations which they interpret as latitudinal temperature variations. The temperature peaks at low latitudes in warm bands which correlate well with the magnetic activity belts, propagating toward the equator as the cycle progresses. A second component is also present, consisting of warm poles which exhibit little variation over the course of the activity cycle. The amplitudes of the low and high-latitude maxima are about 3 K and 1 K, respectively, relative to the temperature minimum at mid-latitudes. However, further analysis has called this interpretation into question and suggests that the irradiance variations may instead be attributed to emission from diffuse magnetic elements (Woodard and Libbrecht, 2003).
Asphericity in the density field appears to be even weaker than that in the sound speed (fractional variation < 10−4) and has not yet been reliably detected (Antia et al., 2001).
3.8 Solar magnetism
Observations of magnetic activity on the Sun reveal extremely complex behavior but systematic patterns also exist, at least some of which may be traced back to field generation in the convection zone and tachocline. Thus, a wide variety of magnetic activity is of relevance to solar interior dynamics; here we will only scratch the surface. More comprehensive reviews are given in these volumes by Fan (2004) and Charbonneau (2005), (see also Schrijver and Zwaan, 2000; Ossendrijver, 2003).
The most familiar and compelling magnetic activity pattern in the Sun is the sunspot cycle and the corresponding butterfly diagram (e.g., Stix, 2002). Sunspots and other manifestations of magnetic activity emerge in well-defined latitudinal bands which migrate toward the equator on a timescale of about 11 years. As these activity bands converge on the equator, the polarity of the global field reverses and the emergence pattern repeats, returning to its previous magnetic configuration after two reversals, yielding a net 22-year periodicity.
Sunspot groups are often separated into regions of outward and inward magnetic polarity which are aligned nearly east-west (meaning the neutral line is nearly north-south), but tilted somewhat relative to lines of constant longitude. The polarity of the leading (western) side is opposite in each hemisphere and reverses sign every 11 years with the activity cycle (known as Hale's polarity rules), whereas the tilt angle increases approximately linearly with latitude (known as Joy's law). These patterns suggest that bipolar active regions are made up of toroidal magnetic flux which has emerged as a loop from below the photosphere and may still be anchored there (Fan, 2004).
The loops which emerge are often twisted and many obey systematic rules for the sense of the twist as defined by the magnetic helicity or current helicity (e.g., Biskamp, 1993). Helicity indicators in the photosphere, chromosphere, and corona are generally positive in the northern hemisphere and negative in the southern hemisphere (Pevtsov et al., 1994, 1995; Zirker et al., 1997; Chae, 2000; Pevtsov, 2002). The pattern is most evident with relatively large-scale structures such as coronal loops.
Another pattern in magnetic activity which has particular relevance to solar interior dynamics is the presence of active nests or active longitudes: localized regions of the solar photosphere where magnetic flux appears to emerge preferentially and repeatedly over the course of multiple rotation periods (Bumba and Howard, 1965; Bogart, 1982; Brouwer and Zwaan, 1990). DeToma et al. (2000) chart a number of such regions during the rising phase of the current solar cycle. They find nests which persist for up to seven rotations, and the number of simultaneous nests increases progressively as the cycle proceeds, from zero in late 1995 to three in 1998 (previous studies revealed up to six coexisting longitudinal bands of enhanced activity).
The global structure of the coronal magnetic field as inferred from white light observations can also provide insight into the nature of the solar dynamo operating in the interior, although it is strongly influenced by dynamical processes in the atmosphere as well, such as advection by the solar wind (Aschwanden et al., 2001). Potential-field extrapolations from photospheric measurements and more sophisticated coronal models yield a complex web of magnetic loops and open fields with a range of size scales and connectivity across the solar surface (e.g., Altschuler and Newkirk, 1969; Gibson et al., 1999; Aschwanden et al., 2001; Schrijver and DeRosa, 2003). On the largest scales, the axisymmetric component of the poloidal field is approximately dipolar during solar minimum with an amplitude at the solar photosphere of roughly 10 G. However, as the activity cycle progresses, the field becomes much more complicated and dynamic, with substantial contributions from higher-order multipoles. Figure 4 illustrates the coronal field structure near solar maximum. Note that a potential-field extrapolation as shown does not take into account dynamics occurring above the photosphere and thus may not in general be an accurate indicator of the actual field structure (Gibson et al., 1999; Aschwanden et al., 2001). However, it is a good first approximation and suffices for our purposes here, as a diagnostic of dynamo processes in the solar interior.
Shown is a potential-field extrapolation of the radial magnetic field measured in the photosphere with the MDI instrument aboard the SOHO spacecraft (Schrijver and DeRosa, 2003; see also http://www.lmsal.com/forecast). White lines denote closed loops while green and magenta lines denote open fields of positive and negative polarity, respectively (courtesy M. DeRosa).
4 Fundamental Concepts
The observed phenomena reviewed in Section 3 are compelling, calling out for a theoretical interpretation. In this section we lay the groundwork for such an interpretation and then we proceed to discuss more sophisticated modeling efforts in the remainder of the paper.
4.1 Governing equations
In order to understand the phenomena described in Section 3, we must consider the equations of magnetohydrodynamics (MHD) which express the conservation of mass, energy, and momentum in a magnetized plasma. Although the dynamics is made more complex by the presence of density stratification, rotation, magnetic fields, and spherical geometry, there is at least one property of the motions which may be safely exploited in order to simplify the equations of motion somewhat: they possess a low Mach number (this can usually be verified a posteriori in any numerical or theoretical model). In other words, the kinetic energy of the convection is small relative to the internal energy of the plasma. Furthermore, the ratio of magnetic to internal energy is also small (implying that the Alfvén speed is much less than the sound speed). Under such conditions, it is valid to adopt the anelastic approximation. The anelastic approximation is justified throughout the solar interior with the exception of the near-surface layers (r ≥ 0.98R⊙) where velocities associated with granulation can approach the sound speed and where radiative transfer and ionization effects must be taken into account.
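A back-of-the-envelope check of the low Mach number condition, using representative (assumed) values for the deep convection zone, might look like the following sketch.

```python
# Rough Mach number estimate for the deep solar convection zone.
# All values are order-of-magnitude assumptions, not model output.
v_conv = 100.0        # convective velocity in m/s (deep convection zone)
c_sound = 2.2e5       # sound speed in m/s near the base of the convection zone

mach = v_conv / c_sound
print(f"Mach number ~ {mach:.1e}")   # ~ 5e-4, comfortably in the low Mach regime
```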
In the anelastic approximation the velocity, magnetic fields, and thermodynamic variations induced by convection (or by other means) are treated as perturbations relative to a spherically-symmetric background or reference state. The resulting system of equations is given in Appendix A.2. In numerical applications, the anelastic equations can be much more computationally efficient to implement than the fully compressible MHD equations because they filter out high-frequency acoustic waves which would otherwise severely limit the time step required to maintain numerical stability. Furthermore, from a theoretical standpoint, the anelastic equations are generally more analytically tractable, partly because the velocity field can be expressed in terms of scalar streamfunctions and velocity potentials, thus eliminating one velocity variable (e.g., Glatzmaier, 1984).
In the remainder of this paper, we will use the anelastic equations described in Appendix A.2 to illustrate a few fundamental aspects of solar interior dynamics, the first being energy balance. The reference state density, pressure, specific entropy, and temperature are represented by overbars: \(\bar{\rho},\overline{P},\overline{S}\), and \(\overline{T}\). These same symbols without overbars denote fluctuations about the reference state. For more on notation, see Appendixes A.1 and A.2.
4.2 Energetics
Conservation of energy in the anelastic system is expressed as
$${\partial \over {\partial t}}({E_{\rm{k}}} + {E_{\rm{t}}} + {E_{\rm{m}}}) = - {\bf{\nabla}} \cdot \left({{{\cal F}^{{\rm{KE}}}} + {{\cal F}^{{\rm{EN}}}} + {{\cal F}^{{\rm{RD}}}} + {{\cal F}^{{\rm{PF}}}} + {{\cal F}^{{\rm{VD}}}} + {{\cal F}^{{\rm{BS}}}}} \right),$$
where Ek and Em represent the kinetic and magnetic energy density respectively and Et is the thermal energy. In the anelastic system, Et incorporates both the internal energy density associated with the thermodynamic perturbations and the gravitational potential energy. It is proportional to the specific entropy perturbation, \(E_{\rm t}=\bar{\rho}\overline{T}S\), defined relative to a nearly adiabatic background stratification. The derivation of Equation (2) is carried out in Appendix A.3 where complete expressions are given for all the energy and flux terms.
The terms \({\cal F}^{{\rm KE}}\) and \({\cal F}^{{\rm EN}}\) represent kinetic energy and enthalpy flux by convective motions. The latter of these, \({\cal F}^{{\rm EN}}\), dominates the energy flux throughout most of the convective envelope, transporting energy outward from the interior to the surface where it is then radiated into space. By contrast, the kinetic energy flux \({\cal F}^{{\rm KE}}\) is much weaker and is directed inward as a result of the asymmetry between upflows and downflows which is characteristic of compressible convection (see Section 6.2).
In the deep interior, the energy flux is carried by radiative diffusion, \({\cal F}^{{\rm RD}}\), which falls off gradually above the base of the convection zone at r ∼ 0.7R⊙. The Poynting flux \({\cal F}^{{\rm PF}}\) plays little role in the overall energy balance but can have a significant influence on dynamo processes, particularly if the magnetic boundary conditions permit leakage out of the domain (Brun et al., 2004). The viscous energy flux, \({\cal F}^{{\rm VD}}\), is generally negligible both in the Sun and in numerical models. Many numerical applications also include an additional diffusive heat flux which operates on the entropy gradient and which is intended to represent energy transport by unresolved convective motions (e.g., Miesch et al., 2000). This additional term is designed to carry flux outward near the upper boundary where the convective fluxes vanish and the radiative diffusion is small.
The final term in Equation (2), involving \({\cal F}^{{\rm BS}}\) reflects the internal and gravitational potential energy associated with the background stratification. If the reference state is adiabatic, this term vanishes. Even if the reference entropy gradient is nonzero, the horizontal average of \({\cal F}^{{\rm BS}}\) vanishes so it contributes nothing to the total radial energy flux (see Appendix A.3). However, this term together with the radiative heat flux, \({\cal F}^{{\rm RD}}\), provides the energy input which drives convective motions.
If the system is in thermal equilibrium, the fluxes must balance such that:
$${\langle {\cal F}_r^{{\rm{KE}}} + {\cal F}_r^{{\rm{EN}}} + {\cal F}_r^{{\rm{RD}}} + {\cal F}_r^{{\rm{PF}}} + {\cal F}_r^{{\rm{VD}}}\rangle _{\theta \phi t}} = {{{L_ \odot}} \over {4\pi {r^2}}},$$
where L⊙ is the solar luminosity and brackets indicate an average over the horizontal dimensions and time. The approach to equilibrium occurs on relatively long timescales because the energy flux through the convection zone is small relative to the internal energy of the plasma. An estimate for the relaxation timescale is \({\tau _{{\rm{rad}}}} = {M_{{\rm{CZ}}}}{C_V}\overline T/{L_ \odot}\), where MCZ is the total mass in the convection zone: \({M_{{\rm{CZ}}}}\sim\bar \rho (4\pi/3)(R_ \odot ^3 - r_{\rm{b}}^3)\), with rb ∼ 0.7R⊙. This comes out to be \({\tau _{{\rm{rad}}}}\sim 10^{5}\) yr. By comparison, convective turnover timescales are thought to be of order a month.
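The sketch below reproduces this order-of-magnitude estimate numerically; the convection zone density, specific heat, and temperature used are representative assumed values.

```python
import numpy as np

# Order-of-magnitude estimate of the thermal relaxation time tau_rad.
# All input values are representative assumptions.
R_sun = 6.96e8                    # m
r_b = 0.7 * R_sun                 # base of the convection zone
rho_mean = 40.0                   # mean convection-zone density, kg/m^3 (assumed)
C_V = 2.0e4                       # specific heat, J/(kg K) (assumed)
T_mean = 1.0e6                    # mean convection-zone temperature, K (assumed)
L_sun = 3.85e26                   # solar luminosity, W

M_cz = rho_mean * (4.0 * np.pi / 3.0) * (R_sun**3 - r_b**3)
tau_rad = M_cz * C_V * T_mean / L_sun
print(f"M_cz ~ {M_cz:.1e} kg, tau_rad ~ {tau_rad / 3.15e7:.1e} yr")  # of order 10^5 yr
```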
If the anelastic equations are solved within a spherical shell Equation (2) implies that the total energy will be conserved if the net flux through the inner and outer boundaries vanishes. This will be the case if the boundary conditions are impenetrable and stress-free, if no net heat flux is applied, and if the magnetic field is required to be radial at the top and bottom of the shell. Other boundary conditions may lead to energy transport into or out of the domain.
Figure 5 summarizes the exchange of energy between the different reservoirs of the system. Energy is supplied from below via a radiative energy flux which ultimately originates from nuclear burning in the solar core. Convective motions tap this energy source through the buoyancy force, which converts thermal energy to kinetic energy. This kinetic energy can then be converted into magnetic energy by the Lorentz force or back into thermal energy by pressure work on expanding or contracting fluid elements through the P∇·v term in the mechanical and internal energy equations (see Appendix A.3). Kinetic and magnetic energy may also be converted into thermal energy by viscous and Ohmic heating. These heating terms are unidirectional, but the buoyancy force, Lorentz force, and compression can operate in both directions, either extracting or injecting kinetic energy. Because we have neglected the centrifugal force, the kinetic energy associated with the uniform component of the solar rotation cannot be tapped directly, although the differential rotation component can be (Section 4.3).
Schematic diagram illustrating the energy flow in an anelastic model. The thermal energy incorporates both the internal energy of the plasma and the gravitational potential energy as described in the text. The buoyancy force and compression can transfer energy among the thermal and kinetic energy reservoirs while the Lorentz force can transfer energy among the kinetic and magnetic energy reservoirs. Viscous and Ohmic heating can also convert kinetic and magnetic energy to thermal energy.
4.3 Maintenance of differential rotation
The most stringent observational constraints on solar interior dynamics come from helioseismic determinations of the solar differential rotation (reviewed in Section 3). In this subsection we address how this differential rotation is established and maintained.
4.3.1 Angular momentum redistribution
The angular momentum per unit mass is defined as
$${\cal L} = r\sin \theta ({\Omega _0}r\sin \theta + \langle {v_\phi}\rangle) = {\lambda ^2}\Omega,$$
where Ω0 is the angular velocity of the rotating coordinate system and λ is the moment arm, λ = r sin θ. An evolution equation for \({\cal L}\) may be derived from the zonal component of the momentum equation, averaged over longitude, and the result may be written as
$$\bar \rho {{\partial {\cal L}} \over {\partial t}} = - \nabla \cdot ({{\bf{F}}^{{\rm{MC}}}} + {{\bf{F}}^{{\rm{RS}}}} + {{\bf{F}}^{{\rm{MS}}}} + {{\bf{F}}^{{\rm{MT}}}} + {{\bf{F}}^{{\rm{VD}}}}).$$
The right-hand-side includes contributions from the meridional circulation, Reynolds stress, Maxwell stress, mean magnetic fields, and viscous diffusion. Complete expressions for each of these flux terms are given in Appendix A.4.
The first term represents the advection of angular momentum by the mean meridional circulation, having the form \({{\bf{F}}^{{\rm{MC}}}} = \bar \rho \langle {{{\bf{v}}_{\rm{M}}}} \rangle {\cal L}\). The uniform rotation component of this, \(\bar \rho \langle {{{\bf{v}}_{\rm{M}}}} \rangle {\lambda ^2}{\Omega _0}\), represents the Coriolis force which redirects meridional flows into zonal flows. Within the anelastic approximation, the divergence of FMC may also be expressed as
$$ - {\bf{\nabla}} \cdot {{\bf{F}}^{{\rm{MC}}}} = - \bar \rho \langle {{\bf{v}}_{\rm{M}}}\rangle \cdot {\bf{\nabla}} {\cal L}.$$
Thus, meridional circulations perpendicular to \({\cal L}\) contours redistribute angular momentum, tending to make \({\cal L}\) constant along streamlines. If there were a global-scale circulation cell in the solar envelope extending from low to high latitudes, it would tend to "spin up" the poles relative to the equator. This is clearly not the case in the Sun (see Figure 1), so there must be more to the story.
The net angular momentum transport through any closed surface of constant \({\cal L}\) must vanish due to the divergenceless nature of the mass flux. For similar reasons, the component of FMC due to the uniform rotation, Ω0, cannot transport angular momentum across cylindrical surfaces aligned with the rotation axis. This result also applies to the more general case of a cylindrical rotation profile Ω(r, θ, t) = Ω(λ, t). Any net transport of angular momentum toward or away from the rotation axis by meridional circulation must come from the advection of the non-cylindrical component of the differential rotation (see also Section 4.3.2).
It may also be noted that angular momentum transport by meridional circulation alone cannot produce localized minima or maxima in \({\cal L}\). This follows from Equation (6), since \({\bf{\nabla}}{\cal L}\) vanishes at local extrema. Isolated features in the differential rotation profile such as jets must be produced by other means.
The main driver in maintaining the solar rotation profile is thought to be the Reynolds stress, FRS. This term represents the redistribution of angular momentum by non-axisymmetric motions, particularly convection. Rotation, stratification, magnetic fields, and the spherical shell geometry all introduce anisotropies into the flow which give rise to systematic correlations between the fluctuating velocity components. Horizontal velocity correlations 〈 υ′ θ υ′ ϕ 〉 produce latitudinal angular momentum transport whereas 〈 υ′ r υ′ ϕ 〉 correlations produce radial transport. Elucidating the nature of these correlations ranks among the greatest challenges in solar interior dynamics.
In the solar envelope, the Reynolds stress is dominated by turbulent convection, but other motions may also contribute in the tachocline and radiative interior. Convective overshoot excites a spectrum of internal wave modes, most notably gravity waves, which propagate throughout the radiative interior (see Section 8.4). In the absence of dissipation, linear waves cannot redistribute angular momentum. However, dissipation by thermal diffusion or wave breaking can induce a net angular momentum transport via the Reynolds stress which is generally long-range (non-local) and therefore difficult to model. A reliable model of wave transport requires a realistic depiction of wave generation, propagation, and dissipation, which is a formidable task due to the wide range of spatial scales involved. Other potential sources of Reynolds and Maxwell stresses include shear instabilities (see Section 8.2).
Magnetism can alter the rotation profile either by altering the Reynolds stress or by redistributing angular momentum directly via the Lorentz force. The angular momentum flux by the Lorentz force is here decomposed into contributions from fluctuating (non-axisymmetric) fields, FMS, and mean (axisymmetric) fields, FMT. The fluctuating component is known as the Maxwell stress and involves the nonlinear correlations 〈 B′ θ B′ ϕ 〉 and 〈 B′ r B′ ϕ 〉. Like the Reynolds stress, these may arise from turbulent convection, waves, or instabilities, and understanding their nature is every bit as challenging. The mean-field contribution is more straightforward and can be expressed as
$$ - {\bf{\nabla}} \cdot {{\bf{F}}^{{\rm{MT}}}} = {1 \over {4\pi}}\langle {{\bf{B}}_{\rm{M}}}\rangle \cdot {\bf{\nabla}} (\lambda \langle {B_\phi}\rangle).$$
In this manner, a mean poloidal field < BM > will resist deformation in the zonal (ϕ) direction because of the magnetic tension force. This "rubber band effect" will tend to reduce angular velocity gradients. The Maxwell stress may also have a similar "stiffening" effect due to magnetic tension (see Section 6.5).
The viscous contribution, FVD, is negligible in the Sun but can be significant in numerical and theoretical models (see Section 6.3). This term opposes angular velocity gradients, FVD ∝ − ∇Ω, driving the system toward uniform rotation.
The primary angular momentum balance in the Sun is thought to be between the Reynolds stress and meridional circulation, with a lesser role played by the Lorentz force. Thus, if the differential rotation is in a statistically steady state, we expect the following to hold, at least in an approximate and time-averaged sense:
$${\bf{\nabla}} \cdot {{\bf{F}}^{{\rm{RS}}}} = - {\bf{\nabla}} \cdot {{\bf{F}}^{{\rm{MC}}}}.$$
It has been realized for decades that this balance is likely to hold in the solar envelope (e.g., Tassoul, 1978; Zahn, 1992, and references therein) but there had been little further progress until recently, thanks to new insights from helioseismology and high-resolution numerical simulations. Now the specific angular momentum profile, \({\cal L}\), is well-established from global helioseismic inversions (see Figure 1). The meridional circulation is still only known reliably in the solar surface layers (see Section 3.4) but plausible profiles which are consistent with these surface results can be used to compute possible forms for FMC. Equation (8) may then be used to determine the corresponding Reynolds stress divergence. In other words, if we take the inferred differential rotation profile from helioseismology, we can determine what the Reynolds stress must be doing in order to maintain that profile against redistribution by some assumed meridional circulation. An illustrative example is shown in Figure 6.
(a) Angular velocity profile based on helioseismic inversions. This is a 2D SOLA inversion based on MDI data similar to that shown in panel a of Figure 1. Solid and dotted lines denote prograde and retrograde rotation relative to \(\Omega_0 = 2.6\times10^{-6}~{\rm s^{-1}}\). (b) The specific angular momentum profile given by the rotation profile in (a). (c) A hypothetical meridional circulation pattern, illustrated in terms of the mass-flux streamfunction defined in Equation (13). The circulation in the northern hemisphere is counter-clockwise. (d) Divergence of the angular momentum flux FMC carried by the hypothetical meridional circulation. Solid and dotted lines denote positive and negative values respectively. If Equation (8) were satisfied, this would be equal to the convergence of the angular momentum transport by the Reynolds stress, FRS.
Although the angular velocity in the solar envelope, Ω, varies by ∼ 30% from equator to pole and exhibits nearly radial contours at mid-latitudes (Figure 6, panel a), the corresponding specific angular momentum, \({\cal L} = {\lambda ^2}\Omega\), is approximately cylindrical (Figure 6, panel b). The hypothetical meridional circulation pattern shown in panel c of Figure 6 would redistribute this angular momentum as shown in panel d of Figure 6. Thus, if the balance expressed in Equation (8) holds, the Reynolds stress must act to accelerate the lower convection zone and equatorial regions and to decelerate the upper convection zone in order to offset the advection of angular momentum by the meridional circulation. Any self-consistent mean-field model which exhibits a solar-like differential rotation profile as shown in panel a of Figure 6 and a single-celled meridional circulation pattern as shown in panel c of Figure 6 must include a Reynolds stress parameterization which redistributes angular momentum as shown in panel d of Figure 6 (unless the Lorentz force plays a significant role).
The results shown in Figure 6 are easily generalized to more complicated circulation patterns. If the angular momentum transport by the Reynolds stress is to maintain a balance, it must converge wherever the circulation is away from the rotation axis and diverge wherever it is toward the rotation axis. This is best demonstrated by expressing the meridional circulation flux divergence as in Equation (6) and by noting that \({\bf{\nabla}}{\cal L}\) is directed away from the rotation axis. Another perspective can be gained by turning the problem around. For a given model of the Reynolds stress, helioseismic rotation profiles can be used to deduce the meridional circulation needed to maintain an equilibrium. This has been done by Durney (2000a).
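The procedure described above is straightforward to reproduce numerically. The sketch below evaluates the divergence of the meridional-circulation angular momentum flux, \({\bf\nabla}\cdot{\bf F}^{\rm MC} = \bar\rho\langle{\bf v}_{\rm M}\rangle\cdot{\bf\nabla}{\cal L}\), for a prescribed rotation profile and mass-flux streamfunction. The analytic profiles chosen for Ω, ρ̄, and Ψ are crude stand-ins for illustration only, not helioseismic inversions or solar-structure data, and the amplitudes are arbitrary.

```python
import numpy as np

# Sketch of the procedure behind Figure 6: given a rotation profile Omega(r,theta)
# and an assumed mass-flux streamfunction Psi(r,theta), compute the divergence of
# the angular momentum flux carried by the meridional circulation,
#   div(F_MC) = rho_bar <v_M> . grad(L),  with  L = lambda^2 Omega   (cgs units).
Rsun = 6.96e10
nr, nth = 200, 181
r = np.linspace(0.72, 0.97, nr) * Rsun
th = np.linspace(0.01, np.pi - 0.01, nth)          # colatitude, avoiding the poles
R, TH = np.meshgrid(r, th, indexing="ij")
lam = R * np.sin(TH)                               # cylindrical radius

Omega0 = 2.6e-6
Omega = Omega0 * (1.0 - 0.3 * np.cos(TH)**2)       # crude solar-like differential rotation
Lspec = lam**2 * Omega                             # specific angular momentum

rho = 0.1 * (0.97 * Rsun / R)**3                   # toy density stratification
Psi = rho * np.sin(TH)**2 * np.cos(TH) * np.sin(np.pi * (R - r[0]) / (r[-1] - r[0]))

# mass flux from the streamfunction: rho_bar v_M = curl(Psi phi_hat)
dr, dth = r[1] - r[0], th[1] - th[0]
rho_vr = np.gradient(Psi * np.sin(TH), dth, axis=1) / (R * np.sin(TH))
rho_vth = -np.gradient(R * Psi, dr, axis=0) / R

divF_MC = rho_vr * np.gradient(Lspec, dr, axis=0) + rho_vth * np.gradient(Lspec, dth, axis=1) / R
print("max |div F_MC| (arbitrary amplitude):", np.abs(divF_MC).max())
```

Plotting divF_MC over the meridional plane for a single-celled, counter-clockwise circulation of this kind should give a qualitative analogue of panel d of Figure 6, although the amplitudes here carry no physical significance.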
If the anelastic equations are solved in a spherical shell with impenetrable, stress-free boundaries, and if the magnetic field is assumed to be radial at the boundaries, then there is no net torque and the total angular momentum of the shell, \(\int\bar{\rho}{\cal L}dV\), is conserved. This is of course just an approximation. In actuality, coupling between the convective envelope and the radiative interior may play a role in the global angular momentum balance (Section 7.3). Angular momentum exchange between the convection zone and the solar atmosphere is likely less important on dynamical timescales, although it is believed that the Sun has lost a large fraction of its initial angular momentum over the course of its lifetime via the solar wind.
4.3.2 The Taylor-Proudman theorem and thermal wind balance
In the previous section we discussed the mechanisms which can redistribute angular momentum in the solar interior, giving rise to differential rotation. There is more we can say about the angular momentum balance which may eventually be achieved if we consider the limit of rapid rotation such that Ro ≪ 1, where the Rossby number is defined as
$${\rm{Ro}} \equiv {U \over {2{\Omega _0}r}},$$
where U is a characteristic velocity scale relative to the rotating reference frame. We neglect viscous diffusion and the Lorentz force, and we assume that the mean flows are in a statistically steady state. With these approximations, the momentum Equation (40) expresses what is called geostrophic (or heliostrophic) and hydrostatic balance:
$$2{{\bf{\Omega}}_0} \times {\langle {\bf{v}} \rangle _{\phi, t}} + {{{\bf{\nabla}} {{\langle P \rangle}_{\phi, t}}} \over {\bar \rho}} + {{{{\langle \rho \rangle}_{\phi, t}}g} \over {\bar \rho}}\hat r = 0.$$
If we compute the zonal component of the curl of Equation (10), we obtain, with a little manipulation:
$${{\bf{\Omega}}_0} \cdot {\bf{\nabla}} \Omega = {1 \over {2\bar \rho r\lambda}}\left({H_\rho^{- 1}{{\partial {{\langle P \rangle}_{\phi, t}}} \over {\partial \theta}} - g{{\partial {{\langle \rho \rangle}_{\phi, t}}} \over {\partial \theta}}} \right) = {g \over {2{C_P}\lambda r}}{{\partial {{\langle S \rangle}_{\phi, t}}} \over {\partial \theta}}.$$
The final equality in Equation (11) holds if the reference state is approximately adiabatic and hydrostatic. A more general reference state can be incorporated by interpreting the latitudinal gradient on the right-hand-side as the mean gradient on isobaric (constant pressure) surfaces.
Equation (11) is the well-known Taylor-Proudman theorem (e.g., Pedlosky, 1987), as it applies to the solar differential rotation. If the stratification is perfectly adiabatic (∂S/∂θ = 0), this equation implies that the rotation profile should be cylindrical, i.e., contours of angular velocity Ω should be parallel to the rotation axis, Ω0. Alternatively, if significant latitudinal entropy gradients are present, then the Taylor-Proudman balance expressed by Equation (11) implies non-cylindrical rotation profiles, such that relatively warm poles (∂S/∂θ < 0 in the northern hemisphere) correspond to a decrease in angular velocity toward higher latitudes (Ω0·∇Ω < 0). In other words, latitudinal gradients of entropy (or density or temperature) on isobaric surfaces will tend to establish a non-cylindrical differential rotation.
If the rotation profile satisfies Equation (11) it is said to be in thermal wind balance, in analogy with the thermal wind of geophysical fluid dynamics (Pedlosky, 1987). More specifically, the thermal wind component of the differential rotation is the component which is non-cylindrical and which satisfies Equation (11).
In a thermal wind, departures from cylindrical symmetry are maintained by latitudinal entropy gradients. This is consistent with the angular momentum Equation (5) because if the Taylor-Proudman balance is satisfied perfectly, then both the Reynolds stress and the meridional circulation are negligible (as are the Lorentz and viscous forces), so Equation (5) becomes degenerate. However, meridional circulations are the means by which the thermal wind balance is established and maintained in a rapidly-rotating fluid shell. An imbalance in Equation (11) will drive circulations which will redistribute angular momentum until balance is achieved.
In the solar envelope, latitudinal entropy gradients may be established by the influence of rotation on the efficiency of the convection. For example, if convection is more efficient in the polar regions where the rotation vector is nearly vertical, then these regions will be relatively warm. In radiative equilibrium (the net energy flux into the convection zone equals the net flux out through the surface), such efficiency variations must be balanced by latitudinal energy transport as reflected by Equation (2). Thus, the role played by anisotropic energy transport in maintaining the solar differential rotation may potentially be as important as that played by the Reynolds stress, and may be just as enigmatic.
If the solar differential rotation were in thermal wind balance, we would expect thermal variations of a few parts in \(10^{6}\) as shown in Figure 7. If we neglect the pressure contribution to the latitudinal entropy gradient as a first approximation, the resulting temperature variations are about 5 K, increasing from equator to pole (Figure 7, panel b). Thus, if helioseismic inversions were to detect relatively warm poles near the base of the convection zone, this could be interpreted as evidence for thermal wind balance. However, the implied variations are still below the sensitivity limits of current inversions (Section 3.7).
Shown are the thermal variations implied by Equation (11) based on an angular velocity profile, Ω, obtained from helioseismic inversions (Figure 6, panel a), and on other parameters (g, C P , \(\overline{T}\)) obtained from a solar structure model (model S of Christensen-Dalsgaard, 1996). Frame (a) illustrates the normalized latitudinal entropy gradient, \(-C_{P}^{-1}\partial S/\partial\theta\), consistent with thermal wind balance. Frame (b) illustrates the corresponding temperature perturbation, assuming \(C_{P}^{-1}\partial S/\partial\theta\approx \partial T/\partial\theta\) (cf. Equation (43) in Appendix A.2).
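A crude order-of-magnitude check of these numbers can be made directly from Equation (11). The sketch below assumes rough values near the base of the convection zone (representative numbers, not taken from model S) and approximates the non-cylindrical shear by the latitudinal gradient alone, as appropriate for nearly radial Ω contours; it also verifies that the Rossby number of giant-cell convection is indeed small.

```python
import numpy as np

# Order-of-magnitude check of thermal wind balance (Equation 11) near the base of
# the convection zone. All input values are rough, assumed numbers (cgs units).
Omega0 = 2.6e-6                   # mean rotation rate (1/s)
dOmega_dtheta = -0.2 * Omega0     # latitudinal shear per radian at mid-latitudes
r = 5.0e10                        # radius near the base of the convection zone (cm)
theta = np.pi / 4                 # representative mid-latitude colatitude
g = 5.0e4                         # gravitational acceleration (cm/s^2)
Tbar = 2.2e6                      # background temperature (K)
U = 1.0e4                         # giant-cell velocity scale (cm/s)

print("Rossby number Ro = U/(2 Omega0 r) =", U / (2 * Omega0 * r))   # ~0.04 << 1

lam = r * np.sin(theta)
# For nearly radial Omega contours, Omega0.grad(Omega) ~ -Omega0 sin(theta)/r dOmega/dtheta
lhs = -Omega0 * np.sin(theta) * dOmega_dtheta / r
dS_dtheta_over_Cp = 2.0 * lam * r * lhs / g            # from Equation (11), per radian
dT = Tbar * abs(dS_dtheta_over_Cp) * (np.pi / 2)       # crude pole-to-equator integral
print(f"|dS/dtheta|/Cp ~ {abs(dS_dtheta_over_Cp):.1e} per radian,  dT ~ {dT:.0f} K")
```

With these inputs the estimate returns an entropy contrast of order \(10^{-6}\) and a pole-equator temperature difference of roughly 5 K, in line with the variations shown in Figure 7.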
4.4 Maintenance of meridional circulation
The axisymmetric circulation in the meridional plane may be described in terms of the zonal component of the curl of the mass flux
$$\varpi \equiv ({\bf{\nabla}} \times \langle {\bar \rho {{\bf{v}}_{\rm{M}}}} \rangle) \cdot \hat \phi = \bar \rho \langle {{\omega _\phi}} \rangle + {{d\bar \rho} \over {dr}}\langle {{v_\theta}} \rangle,$$
where \(\omega_\phi\) is the zonal component of the vorticity and vM denotes the meridional component of the velocity: \({{\bf{v}}_{\rm{M}}} = {v_r}\hat r + {v_\theta }\hat \theta\). If we wish to take advantage of the vanishing divergence of the mass flux under the anelastic approximation, we may also introduce a streamfunction Ψ, defined such that
$$\langle {\bar \rho {{\bf{v}}_{\rm{M}}}} \rangle \equiv {\bf{\nabla}} \times \left({\Psi \hat \phi} \right).$$
This implies
$$\varpi = - {\nabla ^2}\Psi + {\Psi \over {{r^2}{{\sin}^2}\theta}}.$$
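For completeness, Equation (14) follows from taking the zonal component of the curl of Equation (13); writing out the standard vector identity for an axisymmetric Ψ gives

$$\varpi = \left[{\bf{\nabla}}\times{\bf{\nabla}}\times\left(\Psi\hat\phi\right)\right]\cdot\hat\phi = \left[{\bf{\nabla}}\left({\bf{\nabla}}\cdot(\Psi\hat\phi)\right)-\nabla^2\left(\Psi\hat\phi\right)\right]\cdot\hat\phi = -\nabla^2\Psi+\frac{\Psi}{r^2\sin^2\theta},$$

since \({\bf{\nabla}}\cdot(\Psi\hat\phi)\) vanishes for an axisymmetric Ψ and the zonal component of the vector Laplacian of \(\Psi\hat\phi\) is \(\nabla^2\Psi - \Psi/(r^2\sin^2\theta)\).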
The evolution equation for ϖ may be expressed as follows (see Appendix A.5):
$${{\partial \varpi} \over {\partial t}} = - r\sin \theta \,{\bf{\nabla}} \cdot \left({{G \over {r\sin \theta}}} \right) = - {\bf{\nabla}} \cdot G + {{G \cdot \hat \lambda} \over {r\sin \theta}},$$
where
$$G = {G^{{\rm{RS}}}} + {G^{{\rm{AD}}}} + {G^{{\rm{BF}}}} + {G^{{\rm{MT}}}} + {G^{{\rm{VD}}}}.$$
The Reynolds stress has three distinct components [see Equation (76)]:
$${G^{{\rm{RS}}}} = \bar \rho \langle {{\bf{v}}_{\rm{M}}^{\prime}\,\omega _\phi ^{\prime}} \rangle - \bar \rho \langle {v_\phi ^{\prime}\,\omega _{\rm{M}}^{\prime}} \rangle + {{\bar\rho\,\langle {{{({v^{\prime}})}^2}} \rangle} \over {2{H_\rho}}}\hat \theta.$$
The first term is the most straightforward; it represents advection of zonal vorticity by the fluctuating meridional flow. The second term is easier to interpret if we consider its divergence: \({\bf{\nabla }} \cdot \left({\bar \rho \langle {v_\phi ^{\prime}\,\omega _{\rm{M}}^{\prime}} \rangle} \right) = \langle {\omega _{\rm{M}}^{\prime} \cdot {\bf{\nabla}} \left({\bar \rho v_\phi ^{\prime}} \right)} \rangle\), which follows because the fluctuating vorticity is divergenceless. Vortex structures which lie in the meridional plane, \(\omega_{{\rm M}}^{\prime}\), may be tilted out of the plane by radial and latitudinal gradients in the longitudinal momentum, \(\bar\rho v_\phi\), thus generating longitudinal vorticity, \(\omega_\phi\). The final term in Equation (17) arises from the density stratification and its divergence is proportional to latitudinal kinetic energy gradients. It cannot generate longitudinal vorticity, \(\omega_\phi\), but it can modify ϖ through the second term on the right-hand-side of Equation (12), inducing a net mass flux circulation by altering \(v_\theta\).
The mean-flow term likewise involves three components due to the advection of longitudinal vorticity by the meridional circulation, the tipping of the absolute vorticity (relative to an inertial frame) associated with the mean rotation, \({\omega _{{\rm{rot}}}} = {\bf{\nabla}}\times(\Omega\lambda\hat\phi) = \langle{\omega _{\rm{M}}}\rangle + 2{{\bf{\Omega}}_0}\), and latitudinal kinetic energy gradients [see Equation (77)]:
$${G^{{\rm{AD}}}} = \bar \rho \langle {{{\bf{v}}_{\rm{M}}}} \rangle \langle {{\omega _\phi}} \rangle - \bar \rho \langle {{v_\phi}} \rangle {\omega _{{\rm{rot}}}} + \bar \rho {{{{\langle v \rangle}^2}} \over {2{H_\rho}}}\hat \theta.$$
The contribution from ωrot may also be regarded as the generation of meridional circulation via the action of the Coriolis force on the differential rotation.
In a compressible fluid, buoyancy cannot generate vorticity directly. However, if the mass flux is divergenceless as in the anelastic approximation, buoyancy can induce overturning circulations as reflected by the term GBF. In the present context, these may be regarded as axisymmetric convection cells. The Lorentz force may only induce mass flux circulations through magnetic tension, (B·∇)B. This effect is contained in the term GMT which includes contributions both from fluctuating fields (the Maxwell stress) and from mean fields.
In the Sun the rotational component of GAD (that involving ωrot) plays an important role, particularly at low latitudes where the prograde differential rotation is forced outward by the Coriolis force and subsequently turns poleward in the surface layers (see Section 6.4). The buoyancy and Reynolds stress terms (GBF,GRS) are also likely to be important (see Section 6.4).
In our anelastic formulation, we have neglected the centrifugal force. It is known that the centrifugal force can produce axisymmetric motions, often called Eddington-Sweet circulations, in the radiative zones of stellar interiors due to the distortion of the gravitational potential surfaces relative to surfaces of constant temperature (e.g., Tassoul, 1978). The mixing of chemicals and angular momentum by such circulations may have important consequences for stellar evolution models or for the relatively "slow" dynamics which may contribute to tachocline confinement (Section 8.5). However, Eddington-Sweet circulations are insignificant in the convection zone and upper tachocline. Measured meridional flows in the solar surface layers imply turnover timescales of years to decades, far shorter than the Eddington-Sweet timescale, which is more than \(10^{6}\) yr.
Equation (15) quantifies the relative importance of processes which redistribute meridional momentum but, as with the differential rotation (cf. Section 4.3.2), other balance equations can often provide further insight into the meridional circulation amplitude and profile which may ultimately be achieved in equilibrium. In this respect, the mean thermal energy equation is particularly important:
$${\bf{\nabla}} \cdot \left[ {{{\langle {\bar \rho {{\bf{v}}_{\rm{M}}}} \rangle}_{\phi t}}\left({{{\langle S \rangle}_{\phi t}} + \bar S} \right)} \right] = - {\bf{\nabla}} \cdot {\langle {\bar \rho {v^{\prime}}{S^{\prime}}} \rangle _{\phi t}} + {\bar T^{- 1}}{\bf{\nabla}} \cdot \left[ {{\kappa _r}\bar \rho {C_P}{\bf{\nabla}} \left({{{\langle T \rangle}_{\phi t}} + \bar T} \right)} \right] + {\cal Q}.$$
Here \({\cal Q}\) represents viscous and Ohmic heating. Equation (19) has been derived by averaging Equation (41) in Appendix A.2 over longitude and time (denoted by \(\langle\;\rangle_{\phi t}\)) and assuming a steady state. In the radiative zone below the solar tachocline, non-axisymmetric fluctuations and dissipation \({\cal Q}\) are negligible, so advective heat transport by the meridional circulation balances radiative diffusion (Spiegel and Zahn, 1992). In the convection zone there is an additional contribution from the convective heat flux, represented by the first term on the right-hand-side of Equation (19). Thus if the thermal structure is known and the convective heat flux is parameterized via mean-field theory or otherwise given, then Equation (19) may be used to determine the equilibrium meridional circulation. In a more sophisticated mean-field model, Equation (19) may be solved simultaneously with the zonal and meridional momentum equations to obtain a self-consistent equilibrium state.
In the solar convection zone, the advection of angular momentum by meridional circulation is thought to balance angular momentum transport by the Reynolds stress as expressed by Equation (8). Thus, if the Reynolds stress and rotation profile are given, this equation may similarly be used to determine the equilibrium meridional circulation. However, the thermal wind component of the differential rotation discussed in Section 4.3.2 is independent of the meridional circulation profile. The equation for thermal wind balance (11) may be derived from the meridional circulation maintenance Equation (15) if the uniform rotation component of \({G^{{\rm{AD}}}}(- 2\bar \rho \langle {{v_\phi }} \rangle {{\bf{\Omega }}_0})\) balances the buoyancy term GBF and if geostrophic balance [Equation (10)] is assumed. Under these conditions, the maintenance Equation (15) becomes independent of ϖ.
4.5 The solar dynamo
As discussed in Sections 1 and 4.2, the magnetic fields which drive solar variability are thought to be generated by fluid motions in the convection zone and tachocline. Kinetic energy is converted to magnetic energy via hydromagnetic dynamo processes and this flux subsequently emerges from the surface, playing a central role in the dynamics of the solar atmosphere and heliosphere.
There are many recent reviews on the solar dynamo so there is no need for a detailed discussion here. For a comprehensive overview of solar dynamo theory as a whole an excellent place to start is the recent article by Ossendrijver (2003). Mean-field models of the solar activity cycle are reviewed in these volumes by Charbonneau (2005). Tobias (2004) focuses on the role of the solar tachocline in particular. Further details and perspectives on solar and stellar dynamos are provided by Weiss (1994), Mestel (1999), Schrijver and Zwaan (2000), and Rüdiger and Hollerbach (2004). Dynamo theory from a more general astrophysical perspective has been reviewed comprehensively by Moffatt (1978), Parker (1979), Childress and Gilbert (1995), and most recently by Brandenburg and Subramanian (2004).
Some insight into the nature of solar dynamo processes may be obtained from the evolution equation for the mean field, which is just the longitudinal average of Equation (42):
$${\partial \over {\partial t}}\langle {\bf{B}} \rangle = \hat \phi \lambda \langle {{{\bf{B}}_{\rm{M}}}} \rangle \cdot {\bf{\nabla}} \Omega + {\bf{\nabla}} \times (\langle {{{\bf{v}}_{\rm{M}}}} \rangle \times \langle {\bf{B}} \rangle + {\cal E} - \eta {\bf{\nabla}} \times \langle {\bf{B}} \rangle),$$
where \({\cal E}\) is the turbulent emf, arising from the non-axisymmetric field components:
$${\cal E} = \langle {{{\bf{v}}^{\prime}} \times {{\bf{B}}^{\prime}}} \rangle.$$
The first term on the right-hand side of Equation (20) is the familiar Ω-effect; differential rotation converts poloidal field \(\langle {\bf B}_{\rm M}\rangle\) to toroidal field \(\langle B_\phi\rangle\) and amplifies it, extracting energy from the rotational shear. The second term represents advection of magnetic flux by the meridional circulation. Although the meridional circulation may redistribute and amplify magnetic flux, it cannot produce an exchange of energy between the mean toroidal and poloidal field components.
The term involving \({\cal E}\) represents field generation by turbulent convection or other processes, such as shear instabilities (see Section 8.2). Note that our derivation of Equation (20) involves no additional approximations beyond the standard anelastic (or compressible) MHD equations. However, this equation is the starting point for mean-field dynamo theory in which additional approximations are made in order to make the system more tractable. In many mean-field models the rotation profile Ω and the meridional circulation 〈vM〉 are specified and the Lorentz force is neglected, making the approach kinematic. Some type of parameterization is then introduced for the turbulent emf \({\cal E}\) and Equation (20) is solved for 〈B〉.
The simplest and most common parameterization may be derived by exploiting the linearity of the induction equation in B (neglecting Lorentz force feedback on v) and by assuming scale separation between the mean and fluctuating fields. The problem can be further simplified by assuming that the fluctuations are pseudo-isotropic, meaning that their statistics are invariant under rotation of the coordinate system but not necessarily invariant under reflection. In this case the turbulent emf may be represented in terms of the mean field as:
$${\cal E} = \alpha \langle {\bf{B}} \rangle - {\eta _t}{\bf{\nabla}} \times \langle {\bf{B}} \rangle,$$
which is valid to lowest order in the ratio of fluctuating scales to mean scales (Moffatt, 1978). The term involving α on the right-hand-side of Equation (22) represents the amplification of mean fields by fluctuating motions, which is widely known as the α-effect. The final term in Equation (22) represents turbulent diffusion with an effective diffusivity given by η_t. If the assumptions of homogeneity and pseudo-isotropy are relaxed, α and η_t become pseudo-tensors and can represent more general transport processes such as magnetic pumping (Ossendrijver, 2003). In general, α and η_t vary with latitude and radius and may depend on other parameters of the problem such as the rotation rate Ω0 and the strength of the mean field 〈B〉². For example, in many mean-field models, α and η_t are quenched (reduced in amplitude) as Ω0 or 〈B〉² become large (e.g., Rüdiger and Hollerbach, 2004; Charbonneau, 2005).
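To make the kinematic mean-field approach concrete, the sketch below integrates a one-dimensional, Parker-style αΩ system, ∂A/∂t = αB + η_t ∂²A/∂x² and ∂B/∂t = G ∂A/∂x + η_t ∂²B/∂x², where A is a poloidal potential, B the toroidal field, and G a prescribed shear. The profiles and amplitudes are arbitrary, nondimensional choices made purely for illustration; this is a minimal caricature of Equations (20) and (22), not a solar model.

```python
import numpy as np

# Minimal 1D kinematic alpha-Omega dynamo sketch (Parker-type), purely illustrative.
nx, Lx = 256, 1.0
dx = Lx / nx
x = np.linspace(0.0, Lx, nx, endpoint=False)

alpha = 1.0 * np.sin(2 * np.pi * x)   # assumed alpha-effect profile
shear = 10.0                          # assumed uniform shear (Omega-effect)
eta_t = 1.0                           # turbulent diffusivity

rng = np.random.default_rng(0)
A = 1e-3 * rng.standard_normal(nx)    # seed poloidal potential
B = np.zeros(nx)                      # toroidal field

def d1(f):   # centered first derivative, periodic domain
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def d2(f):   # centered second derivative, periodic domain
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

dt = 0.2 * dx**2 / eta_t              # explicit diffusive stability limit
for step in range(20000):
    dA = alpha * B + eta_t * d2(A)            # alpha-effect regenerates poloidal field
    dB = shear * d1(A) + eta_t * d2(B)        # Omega-effect winds up toroidal field
    A, B = A + dt * dA, B + dt * dB

print("rms poloidal, toroidal:", np.sqrt(np.mean(A**2)), np.sqrt(np.mean(B**2)))
```

With these particular values the seed field decays slowly; increasing the α amplitude or the shear eventually makes the system supercritical, producing the growing, migrating dynamo waves familiar from Parker's analysis.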
In analogy with Equation (22), we will in this paper loosely refer to the α-effect in the general sense of field generation via the turbulent emf term in Equation (20). This does not necessarily imply that the parameterization in Equation (22) is an accurate one. In practice, solar dynamo processes may be much more subtle than this simple expression suggests (see Section 6.5). Still, the classical α-effect is a useful concept and remains an important ingredient of dynamo theory.
Unlike the Ω-effect, the α-effect can work both ways: it may convert toroidal field energy to poloidal field energy or vice versa. The field conversion and amplification process is often associated with vorticity and shear as in the classical scenario, first described by Parker (1955), in which field lines are lifted and twisted by helical eddies. In the special case of homogeneous, pseudo-isotropic turbulence, the α parameter is directly proportional to the mean kinetic helicity of the flow, H_k = 〈ω·v〉 (Moffatt, 1978; Ossendrijver, 2003). Rotation induces vorticity and breaks the reflection symmetry of the fluid equations, so rotating flows are generally helical and tend to be efficient dynamos, although rotation is not required for sustained dynamo action (Cattaneo et al., 2003).
Although Equation (20) only strictly applies to the mean (longitudinally-averaged) field (or some other suitable spatial or ensemble average), similar processes also operate on fluctuating (non-axisymmetric) fields. All toroidal field structures are amplified to some extent by rotational shear and processes akin to the (generalized) α-effect generate magnetic energy on a wide range of spatial scales. Most solar dynamo models focus on the axisymmetric component of the field but observations indicate that the magnetic field structure in the solar photosphere and corona is quite complex, with a large non-axisymmetric component (see Section 3.8 and Figure 4). Solar variability is dominated not by mean fields but by localized structures such as active regions, filaments, and coronal loops.
Our current paradigm for how the solar dynamo operates is illustrated in Figure 8. The density stratification tends to make solar convection highly anisotropic, characterized by relatively weak, broad upflows amid a complex, evolving network of strong downflow lanes and plumes (0). Turbulent downflow plumes possess substantial vorticity and helicity which may amplify fields through the α-effect (1). These fields are then pumped downward by the anisotropic convection and accumulate in the overshoot region and tachocline (2). Intermittent plumes may dredge up some of this flux and return it to the convection zone where it may be further amplified and again pumped down. Differential rotation in the tachocline stretches and amplifies this disorganized field into strong, coherent toroidal flux tubes and sheets (3). As the field becomes stronger, it eventually becomes buoyantly unstable and rises toward the surface (4). The Coriolis force acting on these rising structures twists them in a systematic way which depends on latitude (5). Weaker structures may be shredded by turbulent convection in the envelope and the flux is then recycled (6). Stronger fields and configurations (e.g., twisted tubes) remain coherent throughout the convection zone and emerge from the surface as bipolar active regions (7). Large-scale poloidal fields may be generated by the α-effect (1) or by the turbulent diffusion of surface flux after the tubes have emerged (7). Due to the manner in which field is amplified by the Ω-effect (3) and to the tilts induced in surface active regions due to the Coriolis force (5), surface diffusion would tend to build large-scale poloidal fields opposite in sign to the prevailing field, eventually producing a global polarity reversal.
Schematic illustration of the solar dynamo. Numbers indicate particular processes as described in the text (courtesy N. Brummell).
This schematic picture of the solar dynamo is compelling but highly simplified. In actuality, each of the processes identified in Figure 8 is complex and researchers are only beginning to understand how they work in detail.
5 Modeling Solar Convection
The extreme parameter regimes which prevail in the solar interior are inaccessible to laboratory experiments. Although experiments can provide important insight into fundamental aspects of turbulence and dynamo processes, most of our current knowledge about large-scale dynamics in the Sun comes from numerical and theoretical modeling efforts. In this section we briefly describe some of the modeling strategies which have been used. A comprehensive account is beyond the scope of this review; the reader is referred to the publications cited in this section for more information. For further information on laboratory experiments regarding convection and dynamo processes in a solar/planetary context, the reader is referred to Hart et al. (1986), Siggia (1994), Niemela et al. (2000), Busse (2000), and Gailitis et al. (2002).
5.1 The challenge
The molecular viscosity in the solar interior may be estimated as \(\nu \sim 1.2\times10^{-16}\,T^{5/2}\rho^{-1}~{\rm cm^{2}\,s^{-1}}\), which is valid for a fully ionized hydrogen plasma, neglecting the contribution due to radiation (Parker, 1979). This yields \(\nu \sim 1~{\rm cm^{2}\,s^{-1}}\) in the upper convection zone, rising to somewhat higher values near the tachocline. If giant cells have an amplitude of \(U \sim 100~{\rm m\,s^{-1}}\) and scales of \(L \sim 200\) Mm, this implies Reynolds numbers of \({\rm Re} = UL/\nu \sim 10^{14}\). In other words, inertial forces dominate over viscous dissipation, making solar convection strongly nonlinear and thus highly turbulent.
Although solar convection is certainly not homogeneous and isotropic, a rough estimate of the viscous dissipation scale \(d_{\rm v}\) can be obtained by assuming a classical Kolmogorov inertial range (e.g., Lesieur, 1997). The result is \({d_{\rm{v}}}\sim L\,{\rm Re}^{ - 3/4}\sim 1\;{\rm{cm}}\) — more than ten orders of magnitude smaller than the solar radius! As in most other astrophysical and geophysical systems, direct numerical simulations which capture all the dynamical scales of the system are not feasible because computers simply are not efficient enough to perform all the necessary calculations.
The thermal and magnetic dissipation scales are larger than the viscous dissipation scale but are still beyond the resolution of a global numerical model. We can estimate the magnetic diffusivity by again assuming a fully ionized hydrogen plasma, where \(\eta \approx 10^{13}\,T^{-3/2}~{\rm cm^{2}\,s^{-1}}\) (Parker, 1979). In the solar interior, radiative diffusion dominates over thermal conduction, giving rise to an effective thermal diffusivity of \({\kappa _{\rm{r}}} = 16{\sigma _{{\rm{sb}}}}{T^3}/(3\chi {\rho ^2}{C_P})\), where σsb is the Stefan-Boltzmann constant and χ is the opacity (Hansen and Kawaler, 1994). Inserting values from a solar structure model (model S of Christensen-Dalsgaard, 1996) yields \(\kappa_{\rm r} \approx \eta \sim 10^{5}~{\rm cm^{2}\,s^{-1}}\) near the surface, with κr increasing to \(\sim 10^{7}~{\rm cm^{2}\,s^{-1}}\) and η decreasing to \(\sim 10^{3}~{\rm cm^{2}\,s^{-1}}\) in the tachocline. These values imply low Prandtl and magnetic Prandtl numbers: \({\rm Pr} = \nu/\kappa_{\rm r} \sim 10^{-3}{-}10^{-6}\) and \({\rm Pm} = \nu/\eta \sim 10^{-5}{-}10^{-6}\). The corresponding thermal and magnetic dissipation scales are then several meters to several kilometers.
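These estimates are easy to reproduce. The short script below evaluates the molecular viscosity, Reynolds number, and Kolmogorov dissipation scale from the expressions above; the temperature, density, and near-surface diffusivities are rough upper-convection-zone values assumed for illustration rather than taken from a structure model.

```python
# Back-of-the-envelope estimates from the expressions above (cgs units).
T, rho = 8.0e5, 0.05               # assumed upper-convection-zone temperature and density
nu = 1.2e-16 * T**2.5 / rho        # molecular viscosity for ionized hydrogen (cm^2/s)

U, L = 1.0e4, 2.0e10               # 100 m/s giant cells on 200 Mm scales
Re = U * L / nu                    # Reynolds number
d_v = L * Re**(-0.75)              # Kolmogorov viscous dissipation scale (cm)

kappa_r, eta = 1.0e5, 1.0e5        # near-surface radiative and magnetic diffusivities (text values)
print(f"nu ~ {nu:.1f} cm^2/s   Re ~ {Re:.1e}   d_v ~ {d_v:.1f} cm")
print(f"Pr ~ {nu / kappa_r:.0e}   Pm ~ {nu / eta:.0e}")
```

The output reproduces the orders of magnitude quoted in the text: a Reynolds number of order \(10^{14}\), a viscous dissipation scale of order a centimeter, and Prandtl numbers far below unity.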
If motions in the Sun were self-similar then the large dynamical range might not be a problem (see Section 7.2). Although this may be a good approximation for the smallest scales, it does not apply throughout because qualitatively different dynamics occur over a wide range of scales in the solar interior. On the largest scales ∼ 1000 Mm, we have differential rotation and meridional circulation which require the full spherical geometry to be investigated in detail. In the solar surface layers, the strong stratification coupled with ionization and radiation effects drives much smaller-scale motions including granulation (∼ 1 Mm) and supergranulation (∼ 30 Mm). Relatively small-scale motions are also driven by the strong rotational shear and the stiff transition from subadiabatic to superadiabatic stratification at the base of the convection zone, where the region of convective overshoot is thought to be less than 10 Mm thick (Sections 3.6 and 8). In between, in the bulk of the convection zone, we have so-called giant cells (Section 3.5) which likely occupy a wide dynamic range from hundreds of Mm where most of the buoyancy driving occurs down to, at least, supergranulation scales (Section 7.1). The coupling between the bulk of the convection zone and the distinct dynamics occurring in the upper and lower interface regions and beyond is a challenging problem which remains poorly understood (Section 7.3).
The range of temporal scales which characterize solar interior dynamics is every bit as daunting as the range of spatial scales. Granulation evolves over the course of a few minutes, which is comparable to the oscillation frequency of acoustic waves (∼ 5 min). Supergranulation timescales in the surface layers and gravity wave periods in the radiative interior are both somewhat longer — about one day and several hours, respectively. Turnover timescales of giant cells are thought to be comparable to the rotation period of about a month, but substantial evolution likely occurs over the course of days and weeks (Section 6.2). These giant cells likely play a crucial role in the 22-year solar activity cycle (Section 3.8), which must be the ultimate target of any comprehensive dynamical model of the solar interior. Variations of this activity cycle such as the well-known Maunder minimum are known to occur on timescales of centuries or millennia (e.g., Usoskin and Mursula, 2003; Charbonneau, 2005). Meanwhile, thermal relaxation timescales are hundreds of millennia (Section 4.2) and spin-down of the Sun due to magnetic braking and angular momentum loss in the solar wind occurs on still longer timescales — millions to billions of years!
From a modeling perspective, the vast dynamic range of spatial and temporal scales is the most challenging aspect of solar interior dynamics; no single model can hope to capture all the relevant processes. Some approximations must be made.
5.2 Numerical simulations
High-resolution numerical simulations provide a powerful means by which to investigate the diverse and complex dynamics occurring in the solar interior. They have as their basis the fundamental equations of mass, energy, and momentum conservation in a magnetized or neutral fluid and explicitly resolve nonlinear interactions over a wide range of spatial scales. As such, they can capture dynamical processes which lie outside the scope of other modeling approaches and they have therefore become an essential tool in solar physics and throughout turbulence research (e.g., Pope, 2000).
Global-scale phenomena such as differential rotation and the solar activity cycle must ultimately be studied using global models. However, in light of the formidable computational challenges highlighted in Section 5.1, much progress can still be made by considering local Cartesian domains intended to represent a small subvolume of the solar envelope. The results, limitations, and promise of global convection simulations will be reviewed at length in Sections 6 and 7.
Although many high-resolution local simulations of solar convection focus on dynamics in the surface layers such as granulation and its interaction with magnetic fields (Weiss et al., 1996, 2002; Stein and Nordlund, 1998, 2000; Hurlburt et al., 2002; Vögler et al., 2005; Rincon et al., 2005), others are concerned with more fundamental fluid dynamical processes which occur throughout the convection zone. These models have been based either on the fully compressible fluid equations (Cattaneo et al., 1991; Brummell et al., 1996, 1998; Brandenburg et al., 1996; Stein and Nordlund, 1998; Porter and Woodward, 2000; Tobias et al., 2001; Brummell et al., 2002b; Ziegler and Rüdiger, 2003) or on the Boussinesq approximation where the compressibility of the fluid is neglected outside of the buoyancy driving (Julien et al., 1996a, b; Weiss et al., 1996, 2002; Cattaneo, 1999; Cattaneo et al., 2003). This is in contrast to recent global models which are based on the anelastic equations described in Appendix A.2. Anelastic models have been developed in local domains but these have thus far focused mainly on the dynamics of magnetic flux structures rather than convection (reviewed by Fan, 2004).
With a few exceptions (e.g., Porter and Woodward, 2000), most local models employ spectral methods for the horizontal dimensions which are treated as periodic. The Cartesian geometry permits the use of fast Fourier transforms (FFTs) which are more computationally efficient than the Legendre transforms necessary for the spherical harmonic algorithm currently used in global simulations. Because of this greater efficiency and the simplified geometry, local models can generally achieve somewhat higher resolution and more turbulent parameter regimes than global simulations and are therefore well equipped to study the fundamental coupling between turbulent convection, rotation, and magnetic fields.
Local simulations were the first to demonstrate the granulation-like character of turbulent compressible convection; broad, relatively weak, relatively laminar upflows surrounded by a network of strong turbulent downflow lanes and plumes where vorticity and magnetic fields are highly concentrated. These strong vortical downflow plumes were identified as the dominant structures of the flow which could remain coherent over multiple density scale heights. Although Boussinesq simulations are symmetric about the mid-plane, they also exhibit an interconnected network of lanes and plumes flowing away from the boundaries which resembles granulation near the top of the layer (e.g., Cattaneo et al., 2003).
Brummell et al. (1996, 1998) found that in the presence of rotation, turbulent plumes tend to align with the rotation axis, altering the Reynolds stress relative to more laminar flows. Quasi-2D vortex interactions among plumes, enhanced by rotation, alter their entrainment and transport properties (Julien et al., 1996a, 1999; Brummell et al., 1996). In particular, vortex interactions can lead to enhanced horizontal mixing and a decorrelation of the temperature and vertical velocity in a plume, reducing the buoyancy driving. The resulting decrease in the convective enthalpy and kinetic energy flux must be compensated by thermal diffusion, leading to a larger superadiabatic entropy gradient in the convection zone relative to comparable non-rotating flows. The Boussinesq simulations by Julien et al. (1996a, 1999) possess both upward and downward plumes which dominate the convective heat flux even though they have a small filling factor. However, downward plumes dominate in compressible flows with a substantial density stratification and the resulting downward kinetic energy flux can nearly balance the upward enthalpy flux such that the plumes contribute little to the net vertical energy transport (Cattaneo et al., 1991). This asymmetry between upflows and downflows also leads to a net downward pumping of magnetic fields from the convection zone to the stably-stratified radiative interior, a process which has also been investigated in detail with local simulations (Brandenburg et al., 1996; Tobias et al., 2001; Dorch and Nordlund, 2001; Ziegler and Rüdiger, 2003).
The highest-resolution simulations of solar convection to date have achieved roughly \(1000^3\) spatial grid points. Thus, even the most ambitious models can only capture a fraction of the vast dynamic range which characterizes solar interior dynamics (Section 5.1). For this reason, all simulations of solar convection should be viewed as large-eddy simulations (LES) in which unresolved subgrid-scale (SGS) processes must be parameterized or otherwise modeled (Section 7.2). Most current models simply treat unresolved motions as an effective turbulent diffusion of momentum, heat, and magnetic fields which is many orders of magnitude larger than the molecular diffusion. Thus, such simulations may also be regarded as direct numerical simulations (DNS) of a hypothetical physical system which is not the Sun.
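To put that resolution in perspective, the following one-liner estimates the raw size of a single snapshot at \(1000^3\) grid points; the assumption of eight double-precision variables per grid point is an illustrative choice, not a statement about any particular code.

```python
npoints, nvars, nbytes = 1000**3, 8, 8    # grid points, variables per point (assumed), bytes per value
print(f"one snapshot ~ {npoints * nvars * nbytes / 1e9:.0f} GB")   # ~64 GB
```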
5.3 Reduced models
High-resolution numerical simulations have become invaluable research tools but they do have their limitations. Because of the computational expense, it is difficult to comprehensively explore the sensitivity of the solutions to parameter values, boundary conditions, and the influence of dynamics which are unresolved or otherwise beyond the scope of the model. Furthermore, it is difficult to investigate relatively slow dynamics such as long-term modulations of the solar activity cycle or the spin-down of a star over the course of its main-sequence lifetime. For these and other purposes, a variety of reduced models have been devised, which are based on some approximated form of the full 3D equations of motion.
The most common approach is to average the equations of motion, generally over longitude, and then to introduce parameterizations for the nonlinear advection terms, including the Reynolds stress, the convective energy fluxes [\({\cal F}^{{\rm EN}}\) and \({\cal F}^{{\rm KE}}\); see Equation (2)], and the turbulent emf (Section 4.5). These parameterizations may themselves be nonlinear and they may introduce additional variables and additional prognostic equations, but they are designed to be more analytically or computationally tractable than the full, 3D equations of motion. The reduced equations are then solved to obtain the structure and evolution of the mean fields which are the quantities of interest. In a solar physics context, this approach is often referred to as mean-field hydrodynamics or mean-field dynamo modeling but it is closely related to what in the turbulence community is called Reynolds-averaged Navier-Stokes (RANS) modeling.
As a simple example of how a mean-field model may work in practice, we consider Equation (5) which is the evolution equation for the differential rotation. In a mean-field model, we may wish to approximate the Reynolds stress in terms of a turbulent viscosity operating on the mean flow (cf. Equation (73)) and a Λ-effect (Rüdiger, 1989; Rüdiger and Hollerbach, 2004):
$${{\bf{F}}^{{\rm{RS}}}}\sim\bar \rho \left({{\lambda ^2}{\nu _{\rm{V}}}{{\partial \Omega} \over {\partial r}} + {\Lambda _{\rm{V}}}{\Omega _0}\sin \theta} \right)\hat r + \left({{\lambda ^2}{{{\nu _{\rm{H}}}} \over r}{{\partial \Omega} \over {\partial \theta}} + {\Lambda _{\rm{H}}}{\Omega _0}\cos \theta} \right)\hat \theta.$$
The turbulent viscosity represents diffusive mixing of momentum by turbulent motions and is usually justified using mixing-length arguments. It is in general anisotropic (νV ≠ νH) and inhomogeneous (νV = νV(r, θ), νH = νH(r, θ)) due to the influence of rotation and stratification. The Λ terms are non-diffusive source terms which are intended to represent systematic velocity correlations induced by the Coriolis force. The coefficients ΛV and ΛH may also depend on the latitude, radius, and rotation rate. Many recent models include quenching mechanisms so the Λ coefficients remain bounded as the rotation rate or the magnetic field strength becomes large (e.g., Rüdiger et al., 1998).
If one specifies the coefficients νV, νH, ΛV, and ΛH and also the meridional circulation, then Equation (5) may be solved numerically to obtain the equilibrium rotation profile (neglecting the Lorentz force). A more self-consistent approach would be to solve the angular momentum equation together with the longitudinally-averaged meridional momentum and thermal energy equations to obtain the full mean flow and thermodynamic fields. In order to do this, similar parameterizations must be introduced to represent the meridional Reynolds stress and the convective heat flux.
Although an anisotropic viscosity alone can induce mean flows, the differential rotation in many mean-field models is driven mainly by either the Λ-effect or by latitudinal variations in the convective heat flux which may drive a thermal wind (Section 4.3.2). The importance of the latter effect in particular has recently been emphasized by Kitchatinov and Rüdiger (1995) and Durney (1999).
As an example of a multi-equation RANS approach, we consider the k-∊ model which is commonly used in industrial applications (e.g., Pope, 2000; Durbin and Pettersson Reif, 2001). Here the Reynolds stress is expressed in terms of an isotropic turbulent viscosity which is proportional to \(E_{{\rm k}}^{2}/\epsilon\) where Ek is the kinetic energy of the fluctuating velocity field and ∊ is the energy dissipation rate, which is assumed to be scale-invariant within a self-similar inertial range. This expression may be justified using dimensional arguments for homogeneous, isotropic, incompressible flow at high Reynolds numbers. Diagnostic equations for Ek and ∊ may then be derived from the fluctuating flow equations or from phenomenological arguments. These equations are then solved simultaneously along with the mean-field equations.
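As a minimal illustration of this closure, the turbulent viscosity in the standard engineering formulation is \(\nu_t = C_\mu E_{\rm k}^{2}/\epsilon\) with the conventional calibration \(C_\mu \approx 0.09\); the numerical values below are arbitrary placeholders, not solar values.

```python
C_mu = 0.09                       # standard k-epsilon calibration constant
E_k, eps = 0.5, 0.1               # turbulent kinetic energy and dissipation rate (arbitrary units)
nu_t = C_mu * E_k**2 / eps        # eddy viscosity used in place of the molecular value
print("turbulent viscosity nu_t =", nu_t)
```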
Similar multi-equation approaches may be followed in the Sun, but they must be somewhat more sophisticated in order to take into account rotation, stratification, shear, and, if they are ambitious enough, magnetic fields. Canuto et al. (1994) have developed a Reynolds stress model based on a hierarchy of equations obtained by taking successive moments of the compressible Navier-Stokes equations and then introducing analytic closures for the highest-order moments. A multi-equation model for the convective energy flux has been developed by Canuto and Dubovikov (1998) and has been used by Marik and Petrovay (2002) to investigate the structure of the overshoot region.
Mean-field hydrodynamics in a solar context has been thoroughly reviewed by Rüdiger (1989), Canuto and Christensen-Dalsgaard (1998), and Rüdiger and Hollerbach (2004). More general reviews of turbulence modeling are given by Cambon and Scott (1999), Pope (2000), Durbin and Pettersson Reif (2001), and Hanjalić (2002).
Mean-field dynamo models are distinct from hydrodynamic models in that many of them are kinematic, based only on the mean induction equation, with a specified mean flow field and parameterizations introduced for the turbulent emf (Section 4.5). Much recent attention has focused on flux-transport dynamo models in which the meridional circulation plays a key role in setting the period of the activity cycle and in establishing emergence patterns of magnetic flux such as the butterfly diagram (Choudhuri et al., 1995; Durney, 1995; Dikpati and Charbonneau, 1999; Dikpati and Gilman, 2001a; Charbonneau, 2005). The literature on mean-field solar dynamo models is vast and we make no attempt to review it here. The reader is referred to Charbonneau (2005) and to the other references given in Section 4.5.
Many dynamo models have been developed which do consider the feedback of magnetic fields on the mean flow, often focusing on temporal variations of the differential rotation (Kitchatinov et al., 1999; Durney, 2000b; Covas et al., 2001, 2004). These models have shown that the torsional oscillations in particular (Section 3.3) are likely due to the action of the Lorentz force from the axisymmetric dynamo-generated field in relation to the activity cycle. This was first suggested by Yoshimura (1981) and Schüssler (1981) soon after the torsional oscillations were discovered.
An alternative to (or in some cases a variation of) mean-field models are phenomenological approaches which are motivated by observations, numerical simulations, or laboratory experiments. Chief among these are the various models which describe solar convection as an ensemble of turbulent plumes (Schmitt et al., 1984; Rieutord and Zahn, 1995; Rast, 2003; Rempel, 2004) or eddies (Kumar et al., 1995). Another type of phenomenological model has been proposed by Longcope et al. (2003) who consider a plasma permeated with thin flux tubes which exert a visco-elastic drag on the mean flow (see also Parker, 1985).
Although they can provide valuable insight, the main disadvantage of reduced models of any kind is that it is difficult to verify whether the parameterizations and approximations introduced are reliable representations of the underlying dynamics. The overwhelming majority of reduced models may be classified as mean-field models and of these, nearly all assume scale separation in space and/or time. There is little empirical or numerical evidence that such scale separation is valid for solar convection. Furthermore, some reduced models are not completely self-consistent. For example, the well-known α-effect parameterization commonly used in mean-field dynamo modeling is based in part on the linearity of the induction equation in B (see Section 4.5). This argument is only strictly valid if the velocity field is independent of B which cannot be the case in any real-world, sustained dynamo where the Lorentz force must react back on the flow to curb unlimited field amplification. Even so, mean-field dynamo models are quite successful at reproducing many features of the solar activity cycle, a result which might provide clues into the nature of the dynamo (see Section 6.5).
Mean-field hydrodynamics is built on a more questionable theoretical foundation than mean-field dynamo modeling. The turbulent viscosity formalism in particular is known to be inaccurate even for the simplest turbulent flows where momentum transport is often not directed down large-scale velocity gradients (e.g., Pope, 2000). Although experimental verification is difficult in a solar context, some testing and calibration of reduced models can be done by comparing them to solar and stellar observations and numerical simulations (e.g., Kupka, 1999).
In principle, there is not a large conceptual gap between mean-field/RANS models and large-eddy simulations (LES); the difference lies mainly in the nature and scale of the averaging. In practice, however, there is usually a substantial gap because mean-field models are generally 2D or much lower resolution. Still, some of the parameterizations and procedures developed for reduced approaches could be incorporated into a large-eddy simulation as a subgrid-scale model (e.g., Canuto, 2000). This will be discussed further in Section 7.2.
5.4 Thin-shell approximations for the tachocline
Another type of reduced model is designed specifically for the lower portion of the solar tachocline where the strong stable stratification inhibits vertical motions, making the dynamics quasi-two-dimensional. Here the equations of motion may be simplified by considering the thin-shell limit \(\delta = \Delta_t/r_t \ll 1\) where \(\Delta_t\) is the thickness of the tachocline layer and \(r_t\) is its radial location. Helioseismic inversions imply that δ is of order 0.05 or less (Section 3.2).
The most extreme form of the thin-shell limit is to neglect vertical motions, vertical magnetic fields, and vertical gradients entirely, leaving the 2D equations of magnetohydrodynamics (MHD) in latitude and longitude. Such models have recently been used to investigate linear MHD shear instabilities in the tachocline and their subsequent nonlinear evolution (see Section 8.2).
Some degree of vertical variation can be taken into account without greatly increasing the mathematical complexity of the problem by treating the upper boundary of the layer as a free surface. In this case one can apply the so-called shallow-water (SW) equations which are commonly used in meteorology and oceanography (e.g., Pedlosky, 1987). Gilman (2000b) has generalized the SW system to include magnetic fields in order to model the stably-stratified portion of the solar tachocline. The upper boundary of this layer is the solar convection zone which is nearly adiabatically-stratified and which therefore should offer little buoyant resistance to surface deformations. This is the rationale behind the SW approach in a tachocline context.
In the SW approximation, motions are assumed to be incompressible and the vertical momentum equation reduces to magneto-hydrostatic balance. Horizontal velocities and magnetic fields are assumed to be independent of height z but unlike the 2D approach, they can possess a horizontal divergence which gives rise to vertical flows and fields. Vertical motions do not overturn; rather, they deform the outer surface. Integrating the magneto-hydrostatic equation over depth gives a direct relationship between the total pressure (gas plus magnetic) and the height of the layer. Thus, the complete SW system consists of the 2D horizontal momentum and induction equations together with another evolution equation for the layer height and divergence-free conditions for the velocity and magnetic fields.
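For orientation, the familiar hydrodynamic shallow-water system that Gilman's MHD version generalizes takes the textbook form (e.g., Pedlosky, 1987), with h the layer thickness, g the (reduced) gravity, and f the Coriolis parameter; the induction equation and magnetic tension terms of the MHD generalization are not reproduced here:

$$\frac{D{\bf{v}}_{\rm{H}}}{Dt} + f\,\hat z\times{\bf{v}}_{\rm{H}} = -g\,{\bf{\nabla}}_{\rm{H}}h, \qquad \frac{\partial h}{\partial t} + {\bf{\nabla}}_{\rm{H}}\cdot\left(h\,{\bf{v}}_{\rm{H}}\right) = 0.$$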
The MHD SW equations conserve energy, mass, momentum, magnetic flux, and other quantities known as Casimir functionals (Dellar, 2002). They also support a variety of wave modes including Alfvén waves and MHD analogues of surface gravity waves (Schecter et al., 2001). Dikpati and Gilman (2001b) have used the shallow water system to investigate dynamical equilibria in the solar tachocline between pressure gradients and the magnetic tension force associated with an axisymmetric ring of toroidal flux. The poleward tension force is balanced by an equatorward pressure gradient supplied by a buildup of mass at the poles, yielding a prolate tachocline structure as suggested by helioseismic inversions (Section 3.2). Rempel and Dikpati (2003) showed that the required prolateness is reduced if the flux ring contains a zonal jet which helps balance the magnetic tension through the Coriolis force. They also showed that the SW treatment of this problem is analogous to one based on the axisymmetric MHD equations in which the latitudinal pressure gradients are supplied by deformations of the isentropic surfaces. The MHD SW equations have also been used to investigate the linear stability of the latitudinal differential rotation in the tachocline (see Section 8.2).
Another approach which has its roots in meteorology and oceanography is to explicitly take the thin-shell limit of the governing equations in a stably-stratified fluid layer, retaining the full height dependence of all flows and fields. This yields what geophysicists call the hydrostatic primitive equations (HPE) which have formed the basis of climate and ocean models for decades (e.g., Pedlosky, 1987; Salby, 1996). An MHD generalization of the HPE system has recently been developed by Miesch and Gilman (2004). This thin-shell system preserves the conservation properties of the full 3D MHD equations (energy, mass, momentum, magnetic helicity) and is dynamically rich enough to incorporate vertical shear, internal gravity waves, and stratified MHD turbulence. Yet, it is more computationally efficient and analytically accessible than the full 3D equations. For example, separation of variables in the thin-shell system has been exploited to obtain analytic results on the penetration of meridional circulation below the solar convection zone (Gilman and Miesch, 2004) and MHD shear instabilities in the tachocline (see Section 8.2).
A limitation of both the SW and the thin-shell systems is that they do not incorporate magnetic buoyancy which requires a complete vertical momentum equation. Other approximations which have been used to simplify the equations of motion in the tachocline and radiative interior include geostrophic balance and axisymmetry. Some of these approaches will be discussed in Section 8.
6 What Do Global Simulations Tell Us about the Convection Zone?
In this and the following section we will focus on global-scale simulations of solar convection and we review what insights they have provided into solar interior dynamics and where they are in need of improvement. We will begin by placing these models in a historical context and by saying a few words about the computational approach.
6.1 Historical perspective
The most conceptually straightforward approach to studying global-scale solar convection is to solve the nonlinear, 3D equations of motion in a rotating spherical shell of fluid heated from below and cooled from above. The first numerical models to do so were developed by Gilman (1977, 1978, 1983), Gilman and Miller (1981, 1986), and Glatzmaier (1984, 1985a, b, 1987). The convection structure was dominated by traveling, columnar convection cells with a north-south alignment and a periodic longitudinal structure (m ∼ 10), similar to the preferred convection modes predicted by linear theory (Busse, 1970; Gilman, 1975). These became known as banana cells because of their elongated appearance, sheared into a crescent shape by the differential rotation they established. These pioneering studies yielded great insight into the nonlinear interaction between convection, rotation, and magnetic fields, but they had limited spatial resolution and were therefore restricted to relatively laminar flows, far from the highly turbulent parameter regimes thought to exist in the solar interior (see Section 5.1).
In the two decades since, many more simulations of convection in rotating spherical shells have appeared, but most have been concerned with physical conditions which are characteristic of the Earth's outer core and other planetary interiors (e.g., Sun and Schubert, 1995; Tilgner and Busse, 1997; Kageyama and Sato, 1997; Christensen et al., 1999; Roberts and Glatzmaier, 2000; Zhang and Schubert, 2000; Ishihara and Kida, 2002; Busse, 2002; Glatzmaier, 2002). Relative to the Sun, the Earth is rapidly rotating (smaller Rossby number), weakly compressible (smaller density contrast) and highly magnetic (strong Lorentz force). Furthermore, the geometry of the convective shell is somewhat different and physical effects such as compositional gradients and radioactivity play an important role.
In order to revisit solar convection with the latest generation of scalable parallel supercomputers, Clune et al. (1999) developed a numerical model which is now known as the Anelastic Spherical Harmonic (ASH) code. The algorithm is similar to that described by Glatzmaier (1984) and solves the 3D anelastic equations described in Appendix A.2 using a pseudospectral method with spherical harmonic and Chebyshev basis functions. Recent ASH simulations have achieved much higher resolution and consequently more turbulent parameter regimes than the pioneering studies by Gilman and Glatzmaier referred to above. In the remainder of this section, we will focus on results obtained with the ASH code. For a description of the numerical method see Clune et al. (1999) and Brun et al. (2004). Further details on the scientific results have been reported by Miesch et al. (2000); Elliott et al. (2000); Brun and Toomre (2002); DeRosa et al. (2002) and Brun et al. (2004).
The ASH code is dimensional and uses realistic values for the solar radius, luminosity, and mean rotation rate. The reference state is based on 1D solar structure models. Since global simulations cannot capture the complex dynamics occurring in the near-surface layers (see Section 7.3), the upper boundary of the computational domain is generally placed below the photosphere, at 0.96–0.98R⊙. For computational efficiency, the lower boundary is often placed at the base of the convection zone (see Section 7.3) but some simulations have included penetration into the radiative interior (Miesch et al., 2000).
6.2 Convection structure
Figure 9 illustrates the variety of convective patterns which have been found in high-resolution simulations of global solar convection. Three simulations are shown, including Case M3 of Brun et al. (2004), Case F of Brun et al. (2005) and Case D2 of DeRosa et al. (2002). The primary difference between Cases M3 and F is that the latter is less dissipative and therefore more turbulent (higher Rayleigh and Reynolds numbers). Case D2 is comparable to Case F but with a thin-shell geometry; the lower boundary was set at r1 = 0.92R⊙ as opposed to r1 = 0.62R⊙. Cases F and D2 both have an upper boundary at r2 = 0.98R⊙, somewhat closer to the solar photosphere than Case M3 (r2 = 0.96R⊙). Case M3 is the only one of the three which includes magnetism, although this does not have a substantial influence on the convective patterns shown in Figure 9 (Brun et al., 2004).
Figure 9: The radial velocity near the top of the simulation domain is shown for Case M3 (Brun et al., 2004), Case F (Brun et al., 2005), and Case D2 (DeRosa et al., 2002). Bright and dark tones denote upflow and downflow as indicated by the color tables. Orthographic projections are shown with the north pole tilted 35° toward the observer. The equator is indicated with a solid line. Magnified areas shown in the lower panels correspond to square 45° patches which extend from latitudes of 10° N−55° N.
At intermediate Rayleigh (and Reynolds) numbers, there is a marked contrast between the convective structure at low and high latitudes (Figure 9, panel a). Near the equator, the convection is dominated by extended downflow lanes oriented north-south which propagate in longitude faster than the local differential rotation (Miesch et al., 2000). These are reminiscent of the banana cells in earlier more laminar simulations (see Section 6.1) but they are not strictly periodic in longitude, they extend only to mid-latitudes, and they are asymmetric with respect to upflow and downflow due to the density stratification (cf. Section 5.2). Near the poles the convection patterns are more isotropic and homogeneous and the characteristic spatial scales are somewhat smaller.
This variation in convective patterns arises from the influence of rotation and some insight into its origin can be gained from linear theory (Busse, 1970; Gilman, 1975; Busse and Cuong, 1977). In order to minimize the stabilizing influence of the Coriolis force, convection at low latitudes tends to favor flows which are perpendicular to the rotation axis. If the rotation is rapid, columnar convection cells are preferred which align with the rotation axis and propagate in a prograde direction due to their tendency to conserve angular momentum (or potential vorticity) under the influence of the spherical geometry and density stratification; in this sense they may be regarded as thermal Rossby waves (Glatzmaier and Gilman, 1981; Busse, 2002). At high latitudes, inside the tangent cylinder, overturning motions can no longer remain perpendicular to Ω0, resulting in more isotropic cells with smaller horizontal scales.
In more turbulent parameter regimes (higher Rayleigh and Reynolds numbers), the convection near the top of the domain exhibits a more granulation-like character across the shell as shown in panel b of Figure 9. As in simulations of turbulent compressible convection in Cartesian domains (see Section 5.2), the convection structure is dominated by an intricate, interconnected network of downflow lanes amidst broader, weaker upflows. Although the patterns appear relatively homogeneous and isotropic with little indication of banana cells, broad upwellings and extended north-south lanes still occur at low latitudes within the more intricate downflow network. These extended downflow lanes generally penetrate deeper into the convection zone than the smaller-scale network patterns (see Figure 12) and play an important role in maintaining the differential rotation (see Section 6.3). Horizontal rolls analogous to north-south downflow lanes are present even in the most turbulent Cartesian simulations of compressible convection when the rotation vector is made horizontal in order to simulate the equatorial regions (Brummell et al., 2002b).
Although these convective patterns are reminiscent of granulation or supergranulation, their scale is much larger. By eye, the predominant convective cells in panel b of Figure 9 appear to span roughly 10 angular degrees, which corresponds to a horizontal scale of 120 Mm. More localized, swirling structures are also evident near the interstices of the downflow network at midlatitudes. The power spectrum of the radial velocity field peaks at spherical harmonic wavenumbers of ℓ ∼ 50–60, which corresponds to ∼ 80 Mm. Recall that the characteristic scales of granulation and super-granulation are about 1–2 Mm and 30 Mm, respectively (Section 2.3).
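To make these scale conversions easy to reproduce, the following sketch (plain Python, not part of any simulation code) converts a spherical harmonic degree or an angular extent into a horizontal scale using the rough relation λ ≈ 2πr/ℓ; the 0.98R⊙ sampling radius is an assumption based on the shell surfaces shown in Figure 9, and some authors use λ ≈ 2πr/√(ℓ(ℓ+1)) instead.

import numpy as np

R_SUN_MM = 696.0      # solar radius in Mm
R_FRAC = 0.98         # fractional radius of the sampled surface (an assumed value)

def degree_to_scale(ell, r_frac=R_FRAC):
    """Horizontal wavelength in Mm for spherical harmonic degree ell, using lambda ~ 2*pi*r/ell."""
    return 2.0 * np.pi * r_frac * R_SUN_MM / ell

print(degree_to_scale(55))                                   # ell ~ 50-60  ->  roughly 80 Mm
print((10.0 / 360.0) * 2.0 * np.pi * R_FRAC * R_SUN_MM)      # a 10-degree cell -> roughly 120 Mm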
Convective motions at supergranular scales have been reported in global simulations by DeRosa et al. (2002) who focused on the upper regions of the solar convection zone. Higher spatial resolution was achieved by limiting the simulation domain to radii between 0.92–0.98R⊙ and by imposing a four-fold periodicity in longitude. The convection structure in one of these simulations is illustrated in panel c of Figure 9. The pattern exhibits a hierarchy of scales, from supergranular-scale mottling to a network of larger cells and extended north-south downflow lanes more comparable to the deep-shell simulations (cf. Figure 9, panel b). Although these results are provocative, it is premature to identify this small-scale convection pattern too closely with supergranulation on the Sun. Solar supergranulation may involve dynamics which are not captured in these global simulations such as ionization effects or self-organization processes involving smaller-scale granules (Rast, 2003). On the other hand, although simulations of granulation with large aspect ratios exhibit structure on mesogranule scales (∼ 5 Mm), they have not yet achieved larger-scale patterns so the origin of supergranulation remains unclear (Rincon et al., 2005; see also Simon and Weiss, 1991).
In turbulent parameter regimes, the downflow network evolves rapidly, changing substantially over the course of a few days (recall that the rotation period is about a month). This is demonstrated in Figure 10 which follows the radial velocity field near the top of the convection zone in Case F. Advection and distortion of the downflow network by the differential rotation is evident, with low-latitude patterns moving eastward and high-latitude patterns moving westward relative to the rotating coordinate system. Downflow lanes continually merge and re-form as upwellings diverge and fragment. Particularly at high and mid-latitudes, numerous localized vortices appear and disappear near the interstices of the downflow network, often forming new upwellings via the centrifugal siphoning of fluid from below (Brandenburg et al., 1996; Brummell et al., 1996; Miesch, 2000). Such vortices are fed by converging horizontal flows which tend to conserve their angular momentum, spinning up in a cyclonic sense due to the Coriolis force. This results in intense, intermittent downflow plumes spinning with cyclonic vorticity.
Figure 10: Still from a movie showing the temporal evolution of the radial velocity near the top of the shell (r = 0.98R⊙) in Case F, shown in an orthographic projection as in Figure 9. The movie covers a time span of 7 days. (For video see appendix)
These vortical downflow plumes appear as cool spots in the temperature field as shown in panel a of Figure 11. Global temperature variations are also apparent, with equatorial and polar regions a few K warmer than mid-latitudes. The local maxima at the poles are often more pronounced in the entropy field than the temperature field, which has implications for the thermal wind component of the differential rotation (Section 6.3). The downflow network is also faintly visible in the temperature field of panel a in Figure 11; in many simulations it leaves a more noticeable imprint (e.g., Thompson et al., 2003, Figure 13).
Figure 11: The temperature (a), radial vorticity (b), and horizontal divergence (c) near the top of the convection zone in Case F. The time instance and projection are as in Figure 9.
The cyclonic nature of the downflow lanes and plumes is evident in the radial vorticity field, shown in panel b of Figure 11. This pattern stands out amid a background of weaker anti-cyclonic vorticity associated with diverging upflows. As expected, the horizontal divergence field, shown in panel c of Figure 11, correlates well with the vertical velocity field shown in panel b of Figure 9. The relative magnitudes of the vortical and divergent components of the horizontal velocity field near the top of the convection zone can potentially be a point of contact between numerical simulations and helioseismic observations (Section 3.5). In simulations, the two are generally comparable (the rms values of the vertical vorticity and horizontal divergence fields shown in Figure 11 are 1.6 × 10−5 s−1 and 1.5 × 10−5 s−1, respectively).
Deeper in the convection zone, the flow structure changes dramatically as illustrated in Figure 12. Only the strongest downflow plumes and lanes in the near-surface network penetrate to the mid convection zone and the network loses its connectivity. Cool, vortical, intermittent plumes dominate but coherent north-south downflow lanes still persist at low latitudes. In the near-surface layers, the enstrophy (vorticity squared) is dominated by the intense cyclonic vertical vorticity found in the downflow network (Figure 11, panel b). In the mid convection zone, enstrophy is still concentrated in downflows but is now dominated by horizontal entrainment vortices, forming rolls and 'smoke rings' near the periphery of lanes and plumes (Figure 12, panel c).
Figure 12: The radial velocity (a), temperature (b), and enstrophy (c) are shown for Case F in the mid convection zone. The time instance and projection are as in Figure 9.
6.3 Differential rotation
The helioseismic and surface observations of the solar differential rotation reviewed in Section 3 present several compelling challenges to theoretical and numerical modelers:
1. a monotonic decrease of angular velocity with latitude,
2. an angular velocity contrast of about 20% (∼ 90 nHz) between the equator and latitudes of ±60°,
3. nearly radial angular velocity contours at mid-latitudes throughout the bulk of the convection zone,
4. narrow layers of strong vertical shear in the angular velocity near the top and bottom of the convection zone,
5. periodic and non-periodic temporal variations.
So how are we doing? Results from a recent simulation are shown in Figure 13. On the positive side, the angular velocity exhibits a realistic latitudinal variation and contrast (Challenges 1 and 2), with little radial variation above mid-latitudes (Challenge 3). On the negative side, the low-latitude angular velocity contours are somewhat more cylindrical than suggested by helioseismology, with more radial shear. Furthermore, at present there is little tendency for simulations such as these to form rotational shear layers near the top and bottom of the convection zone (Challenge 4). Although these simulations do exhibit non-periodic angular velocity fluctuations of about the right amplitude relative to helioseismic inversions (a few percent; see Miesch, 2000; Brun and Toomre, 2002), there is currently little evidence for systematic behavior such as torsional oscillations (Challenge 5). We will now proceed to discuss the implications of these results in a little more detail.
Figure 13: The angular velocity in Case M3 (Brun et al., 2004) is shown averaged over longitude and time, both as a 2D profile (a) and as a function of radius at selected latitudes (b). Compare with Figure 1 (from Brun et al., 2004).
Figure 14 illustrates how the differential rotation in Case M3 is maintained in terms of the angular momentum balance expressed by Equation (5). The Reynolds stress (RS) moves angular momentum outward and equatorward, maintaining the differential rotation against viscous dissipation (VD). The advection of angular momentum by the meridional circulation (MC) also plays an important role, enhancing the outward transport by the Reynolds stress but opposing its latitudinal transport, moving angular momentum toward the poles. As might be expected, magnetic tension tends to suppress the rotational shear in both radius and latitude, but at least in this simulation, the Maxwell stress (MS) is much more effective at this than the mean poloidal field (MT) (see Section 6.5).
Figure 14: The angular momentum fluxes defined in Appendix A.4, Equations (69)–(73), are plotted for Case M3 as a function of radius, integrated over horizontal surfaces (a), and as a function of latitude, integrated over conical (r, ϕ) surfaces (b). All data are averaged over time. Linestyles denote different components as indicated and solid lines denote the sum of all components. Fluxes are in cgs units (g s−1), normalized by \(10^{15}r_{2}^{2}\), where r2 is the outer radius of the shell.
Even in the most turbulent parameter regimes, a persistent feature of global-scale simulations of rotating convection has been the presence of extended downflow lanes at low latitudes aligned in a north-south orientation (see Section 6.2). Such flow structures naturally give rise to prograde equatorial differential rotation as demonstrated in panel a of Figure 15. The Coriolis force tends to divert eastward (prograde) flows toward the equator and westward (retrograde) flows toward the poles, leading to positive 〈υ′θυ′ϕ〉 correlations which transport angular momentum toward the equator via the Reynolds stress [see Equation (70)]. This is reflected by the Reynolds stress contribution in panel b of Figure 14, which is efficient enough to maintain the differential rotation against meridional circulation, magnetic tension, and viscous diffusion. Similar Coriolis-induced correlations also produce radially outward transport by the Reynolds stress, but these are generally less efficient (Figure 14, panel a).
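A minimal sketch of how such velocity correlations translate into latitudinal angular momentum transport is given below. The actual flux is defined by Equation (70) in Appendix A.4, which is not reproduced here, so the expression ρ r sinθ 〈υ′θυ′ϕ〉 and the toy flow used to check the sign are illustrative assumptions rather than diagnostics taken from the code.

import numpy as np

def latitudinal_rs_flux(v_theta, v_phi, rho, r, theta):
    """Schematic latitudinal angular momentum flux from the Reynolds stress,
    F ~ rho * r * sin(theta) * <v_theta' v_phi'>, where <.> is a longitudinal
    average and primes denote departures from it (colatitude convention,
    v_theta > 0 pointing southward)."""
    vt = v_theta - v_theta.mean()
    vp = v_phi - v_phi.mean()
    return rho * r * np.sin(theta) * np.mean(vt * vp)

# Toy check of the sign argument in the text: prograde flow diverted equatorward
# and retrograde flow diverted poleward (northern hemisphere) give a positive
# correlation, i.e., equatorward transport.
phi = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
v_phi = np.cos(phi)              # alternating prograde/retrograde zonal flow
v_theta = 0.5 * np.cos(phi)      # correlated equatorward (southward) deflection
print(latitudinal_rs_flux(v_theta, v_phi, rho=1.0, r=1.0, theta=np.pi / 4) > 0)   # True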
Figure 15: (a) Schematic diagram showing the influence of the Coriolis force on horizontal motions which converge into a north-south aligned downflow lane (vertical black line). Eastward and westward flows (red) are diverted toward the south and north, respectively (blue) (cf. Gilman, 1986). (b) Schematic diagram illustrating the dynamics of downflow plumes (after Miesch et al., 2000). In the upper convection zone, horizontal flows converge into the plume, acquiring cyclonic vorticity due to the influence of the Coriolis force (red). Near the base of the convection zone (black line), plumes are decelerated by negative buoyancy and diverge, acquiring anti-cyclonic vorticity (blue). Their remaining horizontal momentum is predominantly equatorward (see text).
Of the challenges listed at the beginning of this section, the first has been particularly difficult. Many simulations of rotating convection in spherical shells exhibit a polar vortex: prograde rotation in the polar regions which arises due to the tendency for flows to conserve angular momentum as they approach the rotation axis. Axisymmetric meridional circulations, in particular, tend to efficiently spin up the poles (see Section 4.3) as reflected by their poleward contribution in panel b of Figure 14. The Reynolds stress must oppose this tendency in order to produce a monotonic decrease in angular velocity with latitude as is apparently the case in the Sun. This is more easily accomplished if the circulation does not extend all the way to the poles. Indeed, a common feature of those simulations which exhibit slow polar rotation, such as Case M3, is the absence of a single-celled meridional circulation which extends from low to high latitudes (Brun and Toomre, 2002). This may have important implications for solar dynamo models (see Section 6.4).
Thus, a polar vortex can be avoided if the meridional circulation is confined mainly to low and mid-latitudes. This will be discussed further in Section 6.4. Alternatively, if the north-south downflow lanes which are primarily responsible for equatorward angular momentum transport were to extend to higher latitudes, they may help spin down the poles. This occurs if the convection zone is made deeper, moving the tangent cylinder closer to the rotation axis (Gilman, 1979; Glatzmaier, 1987). Although this may not be very relevant for the Sun (the convection zone base is reasonably well established from helioseismic inversions, see Section 3.6), it may have implications for less massive stars which have deeper convective envelopes.
An additional complication to the problem of polar spin-up occurs when the convection is allowed to penetrate into an underlying stable region, as demonstrated in panel b of Figure 15. In turbulent parameter regimes, the convection is dominated by downflow plumes and lanes which acquire cyclonic vorticity in the upper convection zone due to the tendency for converging horizontal flows to conserve their angular momentum (see Section 6.2). As these plumes move deeper into the convection zone, they may converge further due to the density stratification and thus spin up even more (although this convergence may be partially suppressed by entrainment, which has a spreading effect). When the plumes reach the overshoot region, they are decelerated by buoyancy and mass is spread out horizontally and redirected into upflows. The Coriolis force acting on these diverging downflows induces anticyclonic vorticity, leading to a sign reversal of the helicity (Miesch et al., 2000).
These downflow plumes are not purely radial. Rather, the influence of the Coriolis force tends to orient them toward the rotation axis in a process known as turbulent alignment (Brummell et al., 1996). Thus, when buoyancy removes the plumes' vertical momentum in the overshoot region, they have a residual horizontal momentum which diverts them toward the equator. The combination of anticyclonic vorticity and equatorward circulation gives rise to a convergence of angular momentum flux from the Reynolds stress, \(F_{\rm RS}\), at high latitudes, which tends to spin up the poles. In other words, angular momentum transport in the overshoot region is generally poleward. The meridional circulation component, \(F_{\rm MC}\), enhances this poleward transport. As a result, sufficiently turbulent global-scale simulations of solar convection which include convective penetration tend to exhibit relatively fast polar rotation (Miesch et al., 2000, 2004). Thus, the slow polar rotation in the Sun remains somewhat enigmatic, although some non-penetrative simulations like Case M3 do a reasonably good job. One possibility is that the transition from sub-adiabatic to super-adiabatic stratification in the penetrative convective simulations is not yet sharp enough (see Section 7.1).
The second challenge listed above has been less problematic; many simulations exhibit an angular velocity contrast between the equator and higher latitudes of about the right amplitude relative to the Sun (∼ 20–30%). However, the third challenge, that of nearly radial angular velocity contours, has proven every bit as difficult as the first. As discussed in Section 4.3, there are two ways to break the tendency for cylindrical angular velocity contours: the Reynolds stress (i.e., the effective Rossby number is not small), and baroclinic driving (latitudinal entropy gradients), which can establish a thermal wind.
Figure 16 illustrates the relative importance of these two contributions in a simulation which exhibits a solar-like rotation profile (Challenges 1–3 are nearly met). This is Case AB of Brun and Toomre (2002), which is a close relative of Case M3 but is non-magnetic. Frames (a) and (b) illustrate the mean zonal velocity and its gradient along the rotation axis. If the differential rotation were in thermal wind balance, then this axial gradient (Figure 16, panel b) would be equal to the baroclinic term on the left-hand-side of Equation (11), which is shown in panel c of Figure 16. The departure from thermal wind balance is demonstrated in panel d of Figure 16.
Figure 16: The following results are shown for Case AB, averaged over longitude and time (from Brun and Toomre, 2002). (a) The mean zonal velocity 〈υϕ〉, (b) the zonal velocity gradient parallel to the rotation axis, Ω0·∇〈υϕ〉, (c) the baroclinic contribution to Ω0·∇〈υϕ〉 as defined by Equation (11), and (d) the remainder after subtracting profile (c) from profile (b). The color bar on the left refers to frame (a) and the color bar on the right to frames (b)–(d).
The conclusion to be drawn from Figure 16 is that the non-cylindrical component of the angular velocity profile satisfies thermal wind balance in the lower convection zone, but not in the upper convection zone. There the Reynolds stress is responsible for the axial angular velocity gradients. Thus, simulations which come closest to meeting Challenge 3 above do so both by redistributing angular momentum via the Reynolds stress and by establishing latitudinal entropy gradients via anisotropic convective heat transport.
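The axial gradient plotted in panel b of Figure 16 is a purely geometrical quantity and can be evaluated from any longitudinally averaged zonal flow; a sketch is given below. Only the left-hand side of the balance is illustrated (the baroclinic term of Equation (11) is not reproduced here), the regular (radius, colatitude) grid is an assumption, and the constant factor involving Ω0 that appears in the figure label is omitted.

import numpy as np

def axial_gradient(v_phi, r, theta):
    """Gradient of the mean zonal velocity along the rotation axis,
    d/dz = cos(theta) d/dr - (sin(theta)/r) d/dtheta,
    for v_phi(r, theta) on a regular (radius, colatitude) grid."""
    dv_dr = np.gradient(v_phi, r, axis=0)
    dv_dth = np.gradient(v_phi, theta, axis=1)
    return (np.cos(theta)[None, :] * dv_dr
            - np.sin(theta)[None, :] / r[:, None] * dv_dth)

# Sanity check: a strictly cylindrical profile (any function of r*sin(theta) alone)
# has essentially zero axial shear, which is the tendency the baroclinic term must break.
r = np.linspace(0.7, 1.0, 64)
theta = np.linspace(0.05, np.pi - 0.05, 128)
lam = r[:, None] * np.sin(theta)[None, :]
v_cyl = lam * (1.0 + 0.1 * lam**2)
print(np.abs(axial_gradient(v_cyl, r, theta)).max())   # close to zero (finite-difference error only)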
We emphasize that the ASH code was not tuned in any way to achieve the results shown in Figure 13 and elsewhere. The simulations typically begin from uniform rotation or from previous simulations with different parameter values. Boundary conditions are generally stress-free, so angular momentum is conserved, and either uniform-flux or uniform-entropy, so a thermal wind is not artificially driven. The subgrid-scale models are purely diffusive. Mean flows and thermal gradients are established solely via momentum and entropy transport by turbulent convection under the influence of rotation. Still, some parameter regimes and boundary conditions do marginally better than others. Low Prandtl numbers (∼ 0.25) tend to produce the most solar-like angular velocity contrasts (Challenge 2) and tend to avoid large-scale meridional circulations which can spin up the poles (Challenge 1). Fixing the heat flux at the boundaries rather than the entropy is more conducive to establishing latitudinal entropy gradients which can help to break the tendency for cylindrical rotational profiles, as discussed above (Challenge 3). Although recent results show a substantial improvement over the early, relatively low-resolution simulations by Gilman and Glatzmaier (see Section 6.1), higher Reynolds and Rayleigh numbers do not necessarily yield more solar-like profiles. This may be because the north-south downflow lanes in more turbulent simulations are more confined to lower latitudes than in laminar simulations. Since these structures are primarily responsible for equatorward angular momentum transport as discussed above, the net result is often relatively fast polar rotation and a reduced angular velocity contrast. For further elaboration see Miesch (2000), Elliott et al. (2000), and Brun and Toomre (2002). See also Section 7 where possible resolutions to these issues are discussed.
Global convection simulations are only beginning to address the complicated issues surrounding challenge number 4 above, regarding rotational shear layers. Still, some progress has been made in understanding the speedup of angular velocity below the photosphere, inferred from helioseismic inversions and previously from tracer measurements (see Section 3.1). A plausible origin for this layer is in the tendency for the more vigorous convection in the solar surface layers to conserve angular momentum, spinning up as it approaches the rotation axis. This was first suggested by Foukal and Jokipii (1975) and has generally been borne out in convection simulations by Gilman and Foukal (1979) and more recently by DeRosa et al. (2002). However, these simulations were confined to the upper convection zone; deep shell simulations thus far show little tendency to form near-surface shear layers. Global convection simulations also have yet to form strong shear layers near the base of the convection zone which are comparable in structure to the solar tachocline. This may be because the viscous diffusion is too large and the spatial resolution is insufficient to capture small-scale dynamics occurring in the overshoot region (Section 7). Furthermore, the simulations may have insufficient temporal duration to capture the possibly long-term dynamics which drive the radiative interior toward uniform rotation (see Section 8.5).
Global simulations do not yet exhibit periodic temporal variations such as the solar torsional oscillations discussed in Section 3.3. However, similar torsional oscillations do arise naturally in mean-field dynamo models when the back reaction of the Lorentz force on the differential rotation is taken into account (Yoshimura, 1981; Schüssler, 1981; Kitchatinov et al., 1999; Durney, 2000b; Covas et al., 2001, 2004; Bushby and Mason, 2004). An alternative possibility was recently proposed by Spruit (2003) who argues that torsional oscillations may be a surface phenomenon which arises as a geostrophic flow response to thermally-induced latitudinal pressure gradients associated with belts of magnetic activity. Shorter-period tachocline oscillations may arise from the spatiotemporal fragmentation of torsional oscillations (Covas et al., 2001, 2004) or from the interaction of gravity waves with differential rotation (Section 8.4). Oscillatory shear instabilities may also play a role (Section 8.2).
As a final comment to close this section, we note that the five challenges posed here are in all likelihood intimately connected. Since the radiative interior possesses much more mechanical and thermal inertia than the convective envelope, the differential rotation in the convection zone may be sensitive to the complex dynamics occurring in the tachocline. In other words, we may not fully understand the rotation profile in the convection zone until we get the tachocline right. A realistic tachocline is probably also a prerequisite to achieving the solar-like dynamo cycles and wave-mean flow interactions which appear to be responsible for torsional and tachocline oscillations. These issues will be discussed further in Section 7.3.
6.4 Meridional circulation
In numerical simulations, as in the Sun, the meridional circulation is weak relative to the differential rotation. The kinetic energy is typically smaller by about two orders of magnitude. Sample profiles are illustrated in Figure 17 for Case M3 and Case P, which is a continuation of Case TUR of Miesch et al. (2000), with increased resolution and lower dissipation.
Figure 17: Streamlines are shown for the mean meridional mass flux in Case M3 (a) and Case P (b), as defined by the streamfunction Ψ in Equation (13). Red/orange tones and black contours denote clockwise circulation whereas blue tones and green contours denote counter-clockwise circulations. The right frames show the corresponding latitudinal velocity (positive southward) near the top (c) and bottom (d) of the convection zone for each simulation (represented by blue and red lines, respectively). All results are averaged over longitude and time (60 days for Case M3 and 72 days for Case P).
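As a rough guide to how such streamline plots are constructed, the sketch below builds a mass flux streamfunction from the longitudinally averaged radial mass flux. The actual definition is Equation (13), which is not reproduced here, so the convention 〈ρυr〉 = (r² sinθ)⁻¹ ∂Ψ/∂θ used below is an assumption about the sign and normalization.

import numpy as np
from scipy.integrate import cumulative_trapezoid

def mass_flux_streamfunction(rho_vr, r, theta):
    """Streamfunction Psi(r, theta) for the longitudinally averaged mass flux,
    assuming the convention  <rho v_r> = (1/(r^2 sin(theta))) dPsi/dtheta,
    so that  Psi(r, theta) = integral_0^theta <rho v_r> r^2 sin(theta') dtheta'.
    rho_vr: 2D array of <rho v_r> on a regular (radius, colatitude) grid."""
    integrand = rho_vr * (r[:, None] ** 2) * np.sin(theta)[None, :]
    return cumulative_trapezoid(integrand, theta, axis=1, initial=0.0)

# Contours of Psi then trace streamlines analogous to panels a and b of Figure 17.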
The first thing to note about the profiles shown in Figure 17 is that they are much more spatially complex than is assumed in many kinematic dynamo models and other applications. Multiple cells are present in both latitude and radius, and flow patterns are generally not symmetric about the equator. The temporal dependence is equally complex, exhibiting large fluctuations on timescales of weeks and months, as shown in Figure 18. This spatial and temporal complexity can be attributed to the turbulent nature of the convection and to the sensitivity of the meridional circulation to small variations in the differential rotation and Reynolds stress, as will be discussed further later in this section.
Figure 18: Still from a movie showing streamlines for the longitudinally-averaged mass flux in Case M3 evolving over the course of 60 days. Contours are indicated as in panel a of Figure 17, which represents a temporal average of this sequence of images. The inset illustrates the mean latitudinal velocity 〈υθ〉 near the top of the domain (r = 0.96R⊙) as in the temporal average of panel c in Figure 17. (For video see appendix)
Although the spatial and temporal fluctuations are generally chaotic, systematic patterns emerge when the circulation profiles are averaged over several months. In the equatorial plane, the circulation in the upper convection zone is typically outward, giving rise to poleward flow at low latitudes near the surface. This can be seen for Case M3 in panel a of Figure 17 and panel c of Figure 17. This outward flow arises primarily as a result of the centrifugal force acting on the prograde differential rotation at low latitudes.
Another systematic trend which is robust in simulations of penetrative convection is a persistent equatorward circulation in the overshoot region of a few m s−1 (Figure 17, panel d). This can be attributed to the turbulent alignment of downflow plumes as illustrated in panel b of Figure 15 and as discussed in Section 4.3. In turbulent parameter regimes, convective overshoot is dominated by helical downflow plumes which are tilted toward the rotation axis with respect to the vertical. When these plumes reach the overshoot region, negative buoyancy removes their vertical momentum but an equatorward latitudinal momentum remains. This equatorward circulation does not occur in more laminar simulations which do not exhibit turbulent plumes (Miesch et al., 2000).
Near the poles, simulations generally exhibit several circulation cells which span about 10°–15° in latitude and extend from the top of the convection zone to the bottom. The sense of the circulation can vary with time and may or may not be the same in the northern and southern hemispheres. Without exception, simulations which exhibit solar-like differential rotation profiles have such localized circulation cells near the poles. Since axisymmetric circulations tend to conserve angular momentum, a single, global cell extending from low to high latitudes would tend to spin up the poles, driving a polar vortex which is inconsistent with helioseismic inversions (see Section 6.3).
How do these simulation results compare with what we know about the meridional circulation in the Sun? Our knowledge of the solar circulation is currently limited to the uppermost regions of the convection zone (see Section 3.4). There the circulation is generally poleward, although it does fluctuate substantially and is not in general symmetric about the equator. Some of these fluctuations appear to be associated with magnetic activity and exhibit a systematic equatorward propagation over the course of the solar activity cycle in conjunction with torsional oscillations (Snodgrass and Dailey, 1996; Beck et al., 2002; Zhao and Kosovichev, 2004). Fluctuations of comparable amplitude occur in simulations both with and without magnetic fields, but they do not exhibit such systematic latitudinal propagation.
The poleward circulation in the Sun is about the same amplitude as in simulations, ∼ 20 ms−1, but it extends to higher latitudes. Doppler measurements and local helioseismic inversions indicate poleward flow in the solar surface layers up to latitudes of at least 60°. By comparison, the poleward flow near the outer boundary in simulations generally only extends to latitudes of about 30°–50° (Figure 17, panel c). Little is currently known about circulation patterns in the polar regions of the Sun but surface tracer measurements do show some hints of flow reversals at latitudes above 60° (Komm et al., 1993; Snodgrass and Dailey, 1996; Latushko, 1996). Multiple-cell structure in the polar regions such as that seen in simulations has not yet been unambiguously found in surface measurements or helioseismic inversions but it cannot be ruled out.
Further insight into the maintenance of meridional circulation in global convection simulations can be obtained by considering the balance Equation (15). If we apply a Legendre transform to this equation, we obtain an evolution equation for the spectral coefficients of the mass flux vorticity, \(\tilde{\varpi}\). We may then multiply by \(\tilde{\varpi}\) and integrate over radius to obtain
$${{\partial {\cal W}(\ell)} \over {\partial t}} = {\rm{RS}}(\ell) + {\rm{AD}}(\ell) + {\rm{BF}}(\ell) + {\rm{VD}}(\ell),$$
where \({\cal W}(\ell)=\tilde{\varpi}^{2}/2\) is the mass flux enstrophy spectrum associated with the circulation and the terms on the right-hand-side reflect contributions from the Reynolds stress, axisymmetric advection, the buoyancy force, and viscous diffusion (see Appendix A.5). The spectrum \({\cal W}(\ell)\) is shown in Figure 19, frames (a) and (b), along with the corresponding spectrum for the streamfunction, Ψ, defined in Equation (13).
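To illustrate the kind of projection involved, the sketch below computes a Legendre power spectrum for an axisymmetric profile at a single radius. The ASH normalization of \({\cal W}(\ell)\), the density weighting, and the radial integration are not reproduced here, so the normalization and the test profile below are illustrative assumptions.

import numpy as np
from scipy.special import eval_legendre

def legendre_power(f, theta, lmax=100):
    """Power in each Legendre degree ell for an axisymmetric profile f(theta)
    sampled on colatitudes theta (a single-radius stand-in for W(ell))."""
    x, w = np.cos(theta), np.sin(theta)
    power = np.empty(lmax + 1)
    for ell in range(lmax + 1):
        P = eval_legendre(ell, x)
        coeff = 0.5 * (2 * ell + 1) * np.trapz(f * P * w, theta)   # projection onto P_ell
        power[ell] = 0.5 * coeff**2                                # enstrophy-like measure
    return power

theta = np.linspace(0.01, np.pi - 0.01, 512)
spec = legendre_power(np.cos(theta) * np.sin(theta)**2, theta)     # any axisymmetric test profile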
Figure 19: (a) Power spectra are shown for the mass flux vorticity ϖ (red) and the streamfunction Ψ (blue) for Case P, averaged over radius and time [see Equations (12) and (13)]. The former curve (red) is equivalent to \({\cal W}\) in Equation (24). Spectra are normalized such that they sum to unity. Exponential fits to each curve are also shown for comparison. Frame (b) exhibits the same curves as in frame (a) but with a linear vertical axis and a logarithmic horizontal axis. Frame (c) shows the relative contributions of the maintenance terms in Equation (24), using the same normalization as for \({\cal W}\) in frames (a) and (b). In frame (d), the Reynolds stress contribution, represented by the blue curve in (c), is decomposed into contributions from radial advection, radial tipping, and latitudinal transport as described in the text. The plots in (b)–(d) extend only to ℓ = 100 as contributions beyond this point are negligible.
The density-weighted enstrophy spectrum, \({\cal W}(\ell)\), decays roughly exponentially with the spherical harmonic degree, ℓ, with an e-folding scale of ℓϖ ∼ 31. The streamfunction spectrum is steeper, with an e-folding scale of ℓΨ ∼ 22 over the range shown in panel a of Figure 19. However, it is not as well approximated by an exponential distribution, being somewhat more intermittent. As is most evident in panel b of Figure 19, most of the power in both ϖ and Ψ is concentrated at large scales, ℓ ≤ 20, and in odd values of ℓ. Odd ℓ values correspond to ϖ, Ψ, and 〈υθ〉 profiles which are antisymmetric about the equator and 〈υr〉 profiles which are symmetric. For example, a single large-scale circulation cell per hemisphere with upflow at the equator and downflow at the poles is generally dominated by the ℓ = 1 and ℓ = 3 components of ϖ and Ψ (depending on the latitude at which it turns over).
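The e-folding degrees quoted above come from exponential fits to these spectra. A minimal, self-contained version of such a fit (on a synthetic spectrum, since the simulation data are not reproduced here) looks like the following.

import numpy as np

ell = np.arange(1, 101)
W = np.exp(-ell / 31.0)                    # synthetic spectrum with an e-folding degree of 31
slope, _ = np.polyfit(ell, np.log(W), 1)   # log-linear fit over the plotted range
print(-1.0 / slope)                        # recovers ~31, cf. the value of ell_varpi quoted above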
The maintenance terms on the right-hand-side of Equation (24) are shown in panel c of Figure 19. The sum of all contributions is nearly zero, indicating a statistically steady state. Although the GAD and GBF terms dominate the total flux represented in Equation (16), they are largely offset by pressure gradients. Large-scale circulations (ℓ = 1–3) are driven by the Reynolds stress (RS) and the residual buoyancy force (BF), which are balanced by axisymmetric advection (AD) and viscous diffusion (VD). On intermediate scales, 4 < ℓ < 10, axisymmetric advection is the primary driving mechanism and the Reynolds stress plays an inhibiting role.
It is instructive to further decompose the Reynolds stress contribution in order to clarify which processes are most relevant. According to Equation (76) in Appendix A.4, the radial component of the Reynolds stress includes contributions from vorticity advection, ∝ 〈υ′rω′ϕ〉, and vortex tipping, ∝ 〈υ′ϕω′r〉. These contributions are plotted separately in panel d of Figure 19 along with that due to the latitudinal component of the Reynolds stress. This figure indicates that the radial tipping term is most important, followed closely by the radial advection term. The latitudinal Reynolds stress is less significant.
Given the important role of the turbulent Reynolds stress, it is perhaps no surprise that the circulation patterns are complex. If the solar meridional circulation is as spatially and temporally variable as the simulations suggest, then this has important implications for kinematic dynamo models. It may pose problems for flux-transport dynamo models in particular which rely on a steady large-scale circulation component to set the period and other aspects of the magnetic activity cycle (Choudhuri et al., 1995; Durney, 1995; Dikpati and Charbonneau, 1999; Dikpati and Gilman, 2001a; Charbonneau, 2005). On the other hand, the success of flux-transport dynamo models in reproducing many features of the solar cycle may point to some shortcomings of global convection simulations. The maintenance of differential rotation in the solar convection zone is subtle, involving small imbalances among relatively large forces. Simulations may be sensitive to dynamics which are not sufficiently resolved or otherwise missing from the model. Still, in light of this delicate balance, it would be surprising if the solar meridional circulation did not fluctuate substantially in space and time.
One feature that global convection simulations and flux-transport dynamo models have in common is an equatorward circulation in the overshoot region. Hathaway et al. (2003) argue that the observed drift speeds of sunspots as a function of latitude support the presence of such a flow. Some flux-transport models require that this equatorial circulation extend even below the overshoot region (Nandy and Choudhuri, 2002). However, any circulation which is driven in the convection zone is unlikely to penetrate deeper than r ∼ 0.7R⊙ due to the strongly limiting influence of buoyancy and rotation (Gilman and Miesch, 2004). Secondary circulations may be driven by waves and turbulence in the radiative interior but these are likely to be much weaker than those in the convection zone (see Section 8).
6.5 Dynamo processes
The solar dynamo involves an intricate interplay of complex processes occurring over a wide range of spatial and temporal scales (see Section 4.5 and Section 5.1). Consequently, global convection simulations are a long way from making detailed comparisons with photospheric and coronal observations of magnetic activity. Still, they have provided important insight into several key elements of the global dynamo, particularly field generation in the convection zone (processes 0–1 in Figure 8).
Simulations of thermal convection in rotating spherical shells have produced many examples of sustained dynamo action (Gilman and Miller, 1981; Gilman, 1983; Glatzmaier, 1984, 1985a, b; Kageyama and Sato, 1997; Christensen et al., 1999; Roberts and Glatzmaier, 2000; Zhang and Schubert, 2000; Ishihara and Kida, 2002; Busse, 2002; Glatzmaier, 2002; Brun et al., 2004). Most of these are concerned with relatively laminar flows or parameter regimes which are more characteristic of the Earth's core than the solar interior. Simulations of turbulent convection and dynamo action in more solar-like parameter regimes have recently been investigated by Brun et al. (2004). Results are illustrated in Figures 20 and 21.
Figure 20: The radial velocity υr (a), the radial magnetic field Br (b), and the toroidal magnetic field Bϕ (c) are shown near the top of the computational domain (r = 0.95R⊙) for Case M3 of Brun et al. (2004). White and yellow tones denote outward flow (a), outward field (b), and eastward field (c) as indicated by the color tables.
As in Cartesian simulations of MHD convection (e.g., Brandenburg et al., 1996; Cattaneo et al., 2003), radial field near the top of the computational domain is swept into downflow lanes by horizontally converging flows. The field distribution is intermittent and confined primarily to the downflow network (Figure 20, panels a and b). Field within the network is of mixed polarity and is wrapped up by cyclonic vorticity, generating large gradients which promote magnetic reconnection. Magnetic helicity is generated locally but it is of mixed sign and no clear global patterns emerge (cf. Section 3.8). A potential-field extrapolation of the radial field at the upper boundary exhibits a complex topology, with interconnected loops spanning a wide range of spatial scales (Figure 21, panel a). This may be compared with photospheric extrapolations which are similarly complex (see Figure 4).
Figure 21: (a) Potential-field extrapolation of the radial magnetic field Br at the outer boundary of Case M3. White lines represent closed loops while green and magenta lines indicate field which is outward and inward, respectively, at 2.5R⊙, the boundary of the extrapolation domain. (b) Volume rendering of the toroidal field Bϕ of Case M3 in a narrow latitude band centered at the equator. The equatorial plane is tilted slightly with respect to the line of sight. Typical field amplitudes are 1000 and 3000 G in frames (a) and (b), respectively (from Brun et al., 2004).
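For reference, a potential-field extrapolation of this kind generally amounts to solving Laplace's equation for a scalar potential between the simulation's outer boundary and an outer sphere where the field is forced to be radial. Whether the extrapolation shown here uses exactly this source-surface prescription is an assumption, but a standard form is

$$\mathbf{B}=-\nabla\Phi,\qquad \Phi(r,\theta,\phi)=\sum_{\ell\ge 1}\sum_{m=-\ell}^{\ell}\left[a_{\ell m}\left(\frac{r}{r_{2}}\right)^{\ell}+b_{\ell m}\left(\frac{r}{r_{2}}\right)^{-(\ell+1)}\right]Y_{\ell m}(\theta,\phi),$$

with the coefficients \(a_{\ell m}\) and \(b_{\ell m}\) fixed by matching \(B_{r}=-\partial\Phi/\partial r\) to the simulated radial field at the outer boundary \(r_{2}\) and by requiring \(\Phi=0\) (purely radial field) at the outer sphere, here 2.5R⊙.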
Toroidal fields are somewhat less intermittent and peak in the horizontally-diverging regions between downflow lanes (Figure 20, panel c). These regions tend to be broadest at low latitudes where much of the toroidal field energy is concentrated. Differential rotation stretches fields into toroidal ribbons (Figure 21, panel b) which generally reach higher amplitudes (∼ 3000 G) than poloidal fields (∼ 1000 G). The energy in the mean (axisymmetric) toroidal field exceeds that in the mean poloidal field by about a factor of three, indicating that an Ω-effect is operating (cf. Section 4.5). However, the magnetic energy in the fluctuating (non-axisymmetric) poloidal and toroidal field components is comparable.
In light of the complex topologies evident in Figures 20 and 21 it is no surprise that the axisymmetric field components are relatively small. Fluctuating fields account for 98% of the total magnetic energy in Case M3. Furthermore, there is no clear separation of spatial or temporal scales and nonlinear correlations between fluctuating field components are not small in any sense, calling into question many of the assumptions often used in mean-field dynamo theory (see Section 4.5).
Magnetic fields on the Sun are also complex but they exhibit striking regularities, most notably those associated with the 22-year activity cycle (see Section 3.8). Furthermore, the axisymmetric component of the poloidal field on the Sun is predominantly dipolar, at least during solar minimum. This degree of order amid complexity has not yet been achieved with global simulations. Case M3, for example, does not exhibit cyclic behavior and the mean poloidal field involves dipolar, quadrupolar, and higher-order components (see Brun et al., 2004). Cyclic dipolar dynamos have however been achieved in other parameter regimes. A key element appears to be a strong differential rotation. When the kinetic energy of the differential rotation exceeds that of the convection, cyclic dynamos are more likely. This is a conclusion reached over two decades ago by Gilman (1983) and has generally been borne out in the later work cited at the beginning of this section. Furthermore, many cyclic dynamos operate in the strong field regime in which the magnetic energy exceeds the convection kinetic energy by an order of magnitude or more (e.g., Christensen et al., 1999; Zhang and Schubert, 2000; Ishihara and Kida, 2002). This is appropriate for planetary interiors but not for the Sun, where the rotational influence is much weaker. In Case M3, the kinetic energy in the differential rotation and in the convection are roughly equal while the magnetic energy is about an order of magnitude less.
The importance of a strong differential rotation in achieving cyclic behavior is consistent with mean-field theory where it is known that cycles are more readily achieved with α-Ω dynamos than α2 dynamos (Ossendrijver, 2003). If meridional circulation is neglected, the cycle period is determined by the magnitude of α, which in turn is proportional to the kinetic helicity to a first approximation (Section 4.5). To the author's knowledge, all numerical simulations of thermal convection in rotating spherical shells which have achieved sustained, cyclic, dipolar dynamos (e.g., Gilman, 1983; Glatzmaier, 1985a, b; Kageyama and Sato, 1997; Zhang and Schubert, 2000; Ishihara and Kida, 2002; Busse, 2002) are dominated by so-called banana cells (see Section 6.1). This is either because of low resolution and correspondingly low Reynolds numbers or because of the strong rotational influence characteristic of planetary applications, or both. The Coriolis force acting on these relatively laminar flows induces kinetic helicity which in turn produces efficient poloidal field regeneration via the α-effect. An unrealistically large effective value of α was identified as a possible reason why early solar dynamo simulations produced cycle periods of only 1–10 yr, significantly less than the 22-year period of the solar activity cycle (Gilman, 1983; Glatzmaier, 1985a). In more turbulent parameter regimes, nonlinear correlations are likely to be reduced, implying a smaller α. Thus, if cyclic, dipolar dynamos can be achieved in such parameter regimes, there is reason to believe that their periods may be more comparable to that of the Sun. However, this remains to be seen.
Another difficulty exhibited by the early solar dynamo simulations of Gilman (1983) and Glatzmaier (1985b) is that the toroidal field tended to migrate poleward over the course of a cycle rather than equatorward as in the Sun (Section 3.8). This was attributed to the sign of the kinetic helicity, which determines the propagation direction of dynamo waves in mean-field theory (e.g., Ossendrijver, 2003; Charbonneau, 2005). However, it is well known that the sign of the kinetic helicity in rotating convection simulations reverses near the base of the convection zone (Sections 6.2 and 6.3). For this and other reasons (mainly having to do with the storage and amplification of toroidal flux), it has been argued that the lower convection zone and, in particular, the tachocline likely play a key role in the solar dynamo (e.g., Weiss, 1994; Ossendrijver, 2003).
Global convection simulations have not yet achieved a rotational transition region comparable to the solar tachocline. A realistic modeling effort would require very high spatial resolution (see Section 5.1) and may involve long-term processes which would be difficult to capture in a 3D simulation (see Section 8.5). However, a tachocline can be incorporated in a global model in an approximate way, for example by imposing solid body rotation in the interior via boundary conditions or body forces. This is the frontier for global, 3D, solar dynamo simulations.
The presence of a tachocline in a global simulation may promote more regular, cyclic behavior by providing a reservoir for field storage and a mechanism for field amplification, possibly up to super-equipartition values as is thought to occur in the Sun (e.g., Fisher et al., 2000; Fan, 2004). Coupling to the radiative interior may also act to regularize the dynamo by providing thermal, mechanical, and electromagnetic inertia. For example, one of the important contributions of global convection simulations to geodynamo theory over the past decade has been the realization that an electrically conducting core adds stability to the dipolar dynamo, preventing overly sporadic and frequent reversals (e.g., Glatzmaier, 2002). Furthermore, the regularity of the solar cycle suggests that essentially linear processes such as dynamo waves may prevail over more chaotic turbulent processes, meaning the relatively quiescent tachocline may set the rhythm of the solar dynamo.
6.6 Comparisons with mean-field theory
Global convection simulations have provided much insight into solar interior dynamics in general and the maintenance of mean flows and fields in particular. However, in light of their limitations (Section 7), it is prudent to also consider alternative modeling approaches. Mean-field models seek to reproduce the structure and evolution of large-scale flows and fields in the Sun using turbulence models or other physical parameterizations for the Reynolds stress, convective heat flux, Maxwell stress, and turbulent emf. The motivation and methodology behind mean-field models was discussed briefly in Section 5.3. Here we review some of the results and insights gained from mean-field modeling and compare them with global convection simulations.
This section is not intended as a comprehensive review. For much more discussion of mean-field hydrodynamics see Rüdiger (1989), Canuto and Christensen-Dalsgaard (1998), and Rüdiger and Hollerbach (2004). For a review of mean-field dynamo models see Ossendrijver (2003), Rüdiger and Hollerbach (2004), and Charbonneau (2005).
6.6.1 Mean-field hydrodynamics
In mean-field hydrodynamics, the components of the Reynolds stress which transfer angular momentum are typically represented in terms of a diffusive contribution, characterized by an anisotropic turbulent viscosity νt, and a non-diffusive contribution known as the Λ-effect [Equation (23)]. These two contributions are generally comparable in amplitude so the relative strength of the Coriolis force and the Reynolds stress may be quantified by the Taylor number based on the turbulent viscosity: \({T_{\rm{a}}} = {(2{\Omega _0}R_ \odot ^2/{\nu _{\rm{t}}})^2}\). This is in effect the inverse square of the Rossby number defined in Equation (9).
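To give a feeling for the numbers involved, the sketch below evaluates this turbulent Taylor number for an assumed mixing-length-like viscosity. The values of Ω0 and νt, and the identification Ro = Ta^(-1/2), are illustrative assumptions (Equation (9) is not reproduced here), not values taken from any particular mean-field model.

import numpy as np

OMEGA_0 = 2.6e-6       # mean solar rotation rate in rad/s (~414 nHz; assumed)
R_SUN = 6.96e10        # solar radius in cm
NU_T = 1.0e13          # turbulent viscosity in cm^2/s (assumed mixing-length-like value)

Ta = (2.0 * OMEGA_0 * R_SUN**2 / NU_T) ** 2    # Taylor number based on nu_t
Ro = Ta ** -0.5                                # corresponding Rossby number
print(f"Ta ~ {Ta:.1e}, Ro ~ {Ro:.1e}")         # Ta ~ 6e6, Ro ~ 4e-4 for these values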
Many early mean-field models of the solar internal rotation relied only on Reynolds stress parameterizations, with the meridional circulation specified or neglected altogether. An example is the model of Küker et al. (1993) who obtained solar-like rotation profiles using the Λ-effect theory of Kitchatinov and Rüdiger (1993). However, their solutions were inconsistent with the thermal wind balance Equation (11) because they did not take into account the Coriolis-induced circulations which would be driven by the rotation profiles they achieved. At the large Taylor numbers required by their model, Equation (11) implies cylindrical rotation profiles in the absence of baroclinic effects. Other estimates for the amplitude of the turbulent viscosity have similar implications (Rüdiger, 1989; Durney, 1999; Rüdiger and Hollerbach, 2004). This is the "Taylor number puzzle" discussed by Kitchatinov and Rüdiger (1995) and Rüdiger and Hollerbach (2004).
As in global convection simulations, cylindrical rotation profiles can be avoided in two ways, either the Reynolds stress must be substantial (implying smaller Taylor numbers) or latitudinal entropy gradients must be established which maintain a thermal wind differential rotation. Most mean-field models now rely on the latter to achieve solar-like rotation profiles (Kitchatinov and Rüdiger, 1995; Durney, 1999; Küker and Stix, 2001; Rüdiger and Hollerbach, 2004; Rempel, 2005). These models are typically based on anisotropic parameterizations for the convective heat flux obtained from mixing-length theory, modified to account for the influence of rotation. An exception is the mean-field model developed by Rempel (2005) in which the required entropy perturbations originate from thermal wind balance in the tachocline and spread upward into the convection zone without the need for an anisotropic parameterization of the thermal diffusivity (this model is discussed further in Section 7.3). In mean-field models which incorporate convective heat transport, the meridional circulation is usually solved for together with the angular velocity and entropy profiles.
The meridional circulation is generally more sensitive to the parameterizations than the differential rotation (Küker and Stix, 2001). This is again consistent with global simulations where the meridional circulation is maintained by a delicate balance of forces (Section 6.4). For moderate values of the mixing length parameter and for solar-like rotation rates, Küker and Stix (2001) find that the circulation has two cells in radius per hemisphere, with equatorward circulation at the top and bottom of the convection zone and poleward circulation in between. The equatorward surface flow is inconsistent with photospheric measurements and helioseismic inversions (Section 3.4), but multiple-cell structure in depth is also exhibited by global convection simulations (Section 6.4).
The equatorward surface circulation in the Küker and Stix (2001) model can be attributed to the Reynolds stress parameterization. The Kitchatinov and Rüdiger (1993) theory of the Λ-effect yields an outward angular momentum flux near the surface. If the meridional circulation is to balance this outward Reynolds stress as expressed by Equation (8) and if the angular velocity profile is to be solar-like, then the circulation must be equatorward. Conversely, an inward angular momentum flux by the Reynolds stress near the surface implies a poleward circulation. If the Reynolds stress parameterization exhibits inward angular momentum transport near the surface, not only can it produce a poleward circulation as suggested by observations, but it may also establish a subsurface increase in angular velocity analogous to the near-surface shear layer found in helioseismic inversions (Section 3.1). This is indeed the case for the mean-field model developed by Rempel (2005). Rempel's model provides important insight into how the solar differential rotation may be maintained but there is currently little physical justification for the Reynolds stress parameterizations which best match observational data.
In global convection simulations, the angular momentum transport by Reynolds stresses is typically outward as shown in Figure 14, although it is nearly balanced by inward viscous diffusion. As higher Reynolds numbers are achieved and the viscous diffusion is reduced, the Reynolds stress and meridional circulation must adjust accordingly if they are to maintain a solar-like differential rotation as well as a poleward surface circulation.
6.6.2 Solar dynamo theory
No review of solar interior dynamics would be complete without some mention of the thriving field of mean-field solar dynamo modeling. These models seek to reproduce observational manifestations of the solar activity cycle, including the butterfly diagram (Section 3.8), the propagation and phase relationship between axisymmetric poloidal and toroidal fields, and long-term or sporadic cycle variations such as the Maunder minimum. Solar dynamo models are generally quite successful in this regard and have provided much insight into the origin of cyclic magnetic activity on the Sun.
Despite this success, there is still much uncertainty with regard to the primary physical mechanisms responsible for regenerating poloidal field from toroidal field (the α-effect) and with regard to the role (or lack thereof) of the meridional circulation (Mestel, 1999; Ossendrijver, 2003; Charbonneau, 2005). Furthermore, the theoretical foundation of many solar dynamo models remains questionable. Global simulations can be used to help validate mean-field theory although they do not yet possess the resolution or physical conditions to explicitly capture many of the processes which are currently parameterized in dynamo models. Examples include flux-tube instabilities in the tachocline and magnetic diffusion in the solar surface layers due to the decay of active regions. The latter is a fundamental component of Babcock-Leighton dynamo models (Mestel, 1999; Charbonneau, 2005). Global MHD simulations have not yet achieved a shear layer at the bottom of the convection zone comparable to the solar tachocline so they cannot currently be used to validate or motivate interface dynamo models in which the toroidal and poloidal field generation occurs in spatially separated regions. However, this will soon change as global convection simulations incorporate a tachocline either self-consistently or by imposed forcing (Section 7.3).
Many of the approximations commonly used in mean-field dynamo theory are not justified by global convection simulations. In particular, there is no clear scale separation in space or time so there is no guarantee that series expansions such as that in Equation (22) will converge (Ossendrijver, 2003). Furthermore, the amplitude of the fluctuating magnetic fields exceeds that of the mean fields and the non-axisymmetric analogue of the turbulent emf v × B − 〈v × B〉 is not small relative to the other terms in the fluctuating induction equation, calling into question the first-order smoothing approximation which is implicit in most mean-field models (Moffatt, 1978; Ossendrijver, 2003).
Although their justification formally breaks down, mean-field models may still be used to interpret some aspects of global MHD convection simulations. The highest resolution achieved to date in such simulations is represented by Case M3, discussed in Section 6.5. Here the toroidal field regeneration due to differential rotation is comparable to that due to the turbulent emf. Thus, Case M3 might be classified as an α²-Ω dynamo in the terminology of mean-field theory. This might help to explain its non-cyclic behavior. In mean-field theory, α-Ω dynamos are generally more likely to yield cyclic, dipolar solutions than α² or α²-Ω dynamos (Charbonneau and MacGregor, 2001; Rüdiger et al., 2003; Rüdiger and Hollerbach, 2004). In the more laminar MHD convection simulations by Gilman (1983) and Glatzmaier (1985a), differential rotation played a larger role, the dynamo was more akin to the α-Ω type, and cyclic, dipolar solutions were found. Moreover, the poleward propagation of magnetic flux in these simulations over the course of a cycle is consistent with the Parker-Yoshimura sign rule of mean-field theory (Charbonneau, 2005).
Numerical simulations of MHD convection can be used not only to evaluate mean-field models but also to calibrate them by providing estimates for model parameters such as α and ηt. Furthermore, simulations can provide important insight into nonlinear saturation mechanisms which are often parameterized in mean-field models as quenching of α, ηt, and Λ. Such efforts have proliferated in recent years (reviewed by Ossendrijver, 2003; Brandenburg and Subramanian, 2004; Rüdiger and Hollerbach, 2004), although most of this work has focused on Cartesian geometries. Further progress in this area promises to improve our understanding of dynamo processes and to improve the reliability of solar and stellar dynamo models.
7 How Can We Do Better? (With Global Simulations)
Global convection simulations (reviewed in Section 6) have provided unparalleled insight into solar interior dynamics and have played an essential role in interpreting helioseismic measurements. Still, many open questions remain. In this section we will discuss how global models can be improved.
7.1 Resolution
In analogy to the shopkeeper's well-known mantra (location, location, location), one could argue that the three most important factors in improving global-scale solar convection models are resolution, resolution, and resolution.
Global convection simulations are generally well-resolved in the sense that the kinetic, thermal, and magnetic energy spectra peak at relatively large scales (ℓ ∼ 10–50) and their amplitude falls off by at least 2–3 orders of magnitude before reaching the grid scale. Furthermore, the results converge in the sense that a higher-resolution simulation with the same parameters will give statistically the same results. However, simulations remain far from solar parameter regimes (Section 5.1). Thus, as the resolution is increased, parameters are generally not held constant.
In particular, higher resolution allows for higher Reynolds, magnetic Reynolds, and Peclet numbers, Re, Rm, and Pe, which quantify the relative importance of advection and diffusion. As these ratios are increased, the flow generally becomes more turbulent and the convective patterns and transport properties may change. For example, as the Reynolds number is increased, the downflow lanes and plumes which currently dominate simulations may alter their entrainment properties or even destabilize completely (e.g., Rast, 1998). New convective modes may become unstable, characterized by smaller spatial scales and rapid time variability (Zhang and Schubert, 2000). Nonlinear processes such as tachocline shear instabilities may only occur at sufficiently high Reynolds numbers (Section 8.2). Furthermore, the structure of the overshoot region may be sensitive to the Peclet number (Section 8.1).
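To make these ratios concrete, the short sketch below evaluates Re = UL/ν, Pe = UL/κ, and Rm = UL/η for rough, order-of-magnitude inputs. The velocity and length scales, the microscopic diffusivities, and the eddy diffusivities used here are illustrative assumptions rather than values from any particular model or simulation.

```python
# Order-of-magnitude sketch of the nondimensional ratios discussed above.
# All input values (cgs units) are illustrative assumptions, not simulation results.

def nondimensional_numbers(U, L, nu, kappa, eta):
    """Return Reynolds, Peclet, and magnetic Reynolds numbers."""
    return {"Re": U * L / nu, "Pe": U * L / kappa, "Rm": U * L / eta}

# Rough mid-convection-zone scales: U ~ 100 m/s, L ~ 100 Mm, with assumed
# microscopic diffusivities far below the effective (eddy) values that a
# global simulation must adopt.
solar_microscopic = nondimensional_numbers(U=1.0e4, L=1.0e10,
                                           nu=1.0, kappa=1.0e7, eta=1.0e4)
simulation_eddy = nondimensional_numbers(U=1.0e4, L=1.0e10,
                                         nu=1.0e12, kappa=1.0e13, eta=1.0e12)

print("microscopic-based:", solar_microscopic)   # enormous, far beyond reach
print("eddy-based       :", simulation_eddy)     # O(10-10^2), typical of simulations
```

The contrast between the two sets of numbers is the point: simulations operate with effective diffusivities many orders of magnitude above the microscopic values, so increasing the resolution is largely a matter of how far these effective diffusivities can be reduced.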
The hope and expectation is that these changes only occur up to a point. If enough of the global dynamics is explicitly resolved, smaller-scale dynamics may be reliably treated as an effective diffusion or in terms of a more elaborate sub-grid-scale model (Section 7.2). The question is: how much resolution is enough? At the very minimum, simulations must resolve the energy-containing scales. This has already been accomplished; as Re, Rm, and Pe are further increased, the peaks in the energy spectra will probably not shift significantly. However, spectra only provide part of the story.
Most researchers would agree that the most significant advances in turbulence research over the past few decades have been concerned with coherent structures which arise from self-organization processes such as the selective dissipation of ideal invariants (Cantwell, 1981; Hasegawa, 1985; Lesieur, 1997; Branover et al., 1999). Although such structures may occupy a small volume and possess relatively little energy, they often dominate the transport in inhomogeneous and anisotropic turbulent flows. Symmetry breaking induced by rotation, stratification, and magnetic fields can all give rise to self-organization in the solar convection zone.
A goal for solar convection simulations is therefore to resolve all scales which are significantly influenced by rotation and stratification (i.e., buoyancy) in order to capture such self-organization processes14. Smaller-scale motions may then behave more like isotropic, homogeneous turbulence which is generally diffusive in nature. This goal may be achievable throughout most of the convection zone. However, magnetic fields will be present everywhere above the magnetic dissipation scale which, at several kilometers, is well beyond the resolution of simulations (Section 5.1). Furthermore, buoyancy effects remain important even at the smallest resolvable scales near the photosphere and overshoot region. Thus, subgrid-scale models must be developed which can reliably take into account the effects of magnetism and buoyancy, which may be non-diffusive (Section 7.2).
In any case, it is clear that global simulations are not yet in a regime in which the results are insensitive to viscous, thermal, and magnetic dissipation and, consequently, to resolution. Convective patterns and mean flows still depend to some extent on the effective values of Re, Pe, and Rm (Miesch et al., 2000; Miesch, 2000; Elliott et al., 2000; Brun and Toomre, 2002; Brun et al., 2004).
The transition regions which couple the convection zone to the radiative interior below and the solar atmosphere above are particularly challenging to resolve in global simulations (see Sections 5.1 and 7.3). Granulation in the surface layers will likely remain outside the scope of global models for some time, as will a realistic depiction of penetrative convection and wave dynamics in the overshoot region and tachocline (Section 8). The effect of these transition regions on global-scale dynamics can however be explored in global simulations with the help of appropriate boundary conditions and subgrid-scale models (Sections 7.2 and 7.3).
Improved numerical methods with enhanced resolution near the boundaries and better parallel efficiency may help to mitigate some of the limitations of global simulations in the coming years. Particularly promising in this respect are finite element and finite volume methods which require less inter-processor communication than spectral methods and which, primarily for this reason15, are becoming more common in atmospheric and oceanic applications (e.g., Lin and Rood, 1997; Marshall et al., 1997; Stuhne and Peltier, 1999; Fournier et al., 2004).
Solar convection simulations must always push the limits of available high-performance computing platforms to achieve ever higher spatial resolution. However, the highest-resolution simulations achievable on a given platform are computationally intensive. Not only do they require more calculations per iteration, but they must take smaller time steps to meet CFL stability conditions, implying more iterations for a particular simulation interval. Thus, it is impractical to run the highest-resolution simulations for the long durations necessary to adequately assess sustained dynamo action or to explore dynamics spanning several solar activity cycles. For such investigations, intermediate-resolution simulations will always remain important. Here again we may be guided by geophysical applications where high-resolution development models may be used to verify and calibrate lower-resolution application models (e.g., Williamson, 2002).
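The cost argument can be quantified with a simple scaling estimate: halving the grid spacing in all three dimensions multiplies the number of grid points by eight and halves the CFL-limited time step, so the cost of covering a fixed physical interval grows roughly as the fourth power of the refinement factor. The sketch below illustrates this with an assumed, purely illustrative baseline cost.

```python
# Rough cost scaling for an explicit (CFL-limited) global simulation.
# The baseline cost is an illustrative assumption only.

def relative_cost(refinement):
    """Cost multiplier when the grid spacing is reduced by 'refinement' in all
    three dimensions: grid points scale as refinement**3 and the CFL time step
    scales as 1/refinement, so iterations scale as refinement."""
    return refinement**3 * refinement

baseline_core_hours = 1.0e5   # assumed cost of a reference run over one activity cycle
for refinement in (1, 2, 4, 8):
    cost = baseline_core_hours * relative_cost(refinement)
    print(f"{refinement}x resolution -> ~{cost:.1e} core-hours")
```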
The continued importance of intermediate-resolution simulations further emphasizes the need for reliable subgrid-scale models to account for motions which are not resolved. These will be discussed further in the next section (Section 7.2).
7.2 Subgrid-scale modeling
For as long as we can reasonably speculate, even the most ambitious global simulations will only resolve a small fraction of the dynamical scales which are active in the solar interior. Thus, some type of model is necessary to account for the influence of motions on scales smaller than the grid spacing.
Current subgrid-scale (SGS) models assume that this influence is merely diffusive in nature, acting as an effective scalar viscosity, thermal diffusivity, and magnetic diffusivity which are many orders of magnitude larger than the corresponding molecular values. These scalar coefficients are allowed to vary with depth and are often assumed to be proportional to \(\bar{\rho}^{-1/2}\), as suggested by mixing-length arguments. Such parameterizations are very crude and do not accurately represent the complex dynamics known to occur in rotating, stratified, magnetized flows (e.g., Sections 7.1 and 8). More realistic models are necessary in order to make substantial further progress in global simulations.
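A minimal sketch of such a depth-dependent scalar coefficient is given below, assuming a placeholder density stratification and an assumed normalization at the top of the shell; neither is taken from a solar model, and the point is only the \(\bar{\rho}^{-1/2}\) scaling.

```python
import numpy as np

# Minimal sketch of a depth-dependent scalar eddy viscosity nu(r) ~ rho(r)**-0.5,
# normalized to an assumed value nu_top at the top of the shell.  The density
# profile below is a crude placeholder, not a solar model.

r = np.linspace(0.72, 0.97, 100)            # fractional radius across the shell
rho = 0.2 * np.exp(-(r - 0.72) / 0.08)      # placeholder density stratification (g/cm^3)

nu_top = 1.0e12                             # assumed eddy viscosity at r = 0.97 (cm^2/s)
nu = nu_top * (rho / rho[-1])**-0.5         # rho**-1/2 scaling, pinned at the top

print(f"nu ranges from {nu.min():.2e} (bottom) to {nu.max():.2e} (top) cm^2/s")
```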
The primary objectives of a subgrid-scale model may be outlined as follows:
1. to reduce the influence of dissipation on the largest scales,
2. to reliably account for cascade processes,
3. to model processes which are completely unresolved,
4. to minimize the number of free parameters.
We now proceed to elaborate on these objectives.
The extremely high Reynolds numbers characteristic of the solar convection zone suggest that global-scale motions must be essentially inviscid (Section 5.1). Thermal and magnetic diffusion are similarly expected to be insignificant on large scales. This is not the case for current global simulations in which diffusive transport still makes a substantial contribution to the net momentum and energy balance (e.g., Figure 14) and still influences the generation and evolution of the magnetic field. Thus, the first goal of any successful SGS model must be to reduce the influence of this artificial dissipation.
In a spectral model, the most straightforward way to accomplish this is by imposing hyperdiffusion, wherein the Laplacian diffusion operator is replaced by or supplemented with a higher-order equivalent (e.g., ∇⁴ or ∇⁸). Thus, the effective diffusion on the largest scales can be greatly reduced while maintaining an efficient dissipation on the smallest scales, preventing a buildup of energy which would otherwise cause numerical instability.
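The idea can be illustrated by comparing, in spectral space, the damping rate of ordinary Laplacian diffusion with that of an eighth-order hyperdiffusion whose coefficient is matched at the truncation wavenumber. The normalizations in the sketch below are arbitrary; it only shows how hyperdiffusion concentrates dissipation near the grid scale while leaving the largest scales nearly untouched.

```python
import numpy as np

# Compare spectral damping rates of Laplacian diffusion (nu2 * k**2) and
# eighth-order hyperdiffusion (nu8 * k**8), with coefficients chosen so that
# the two give the same damping rate at the truncation wavenumber k_max.
# All normalizations are arbitrary; this is only an illustration.

k = np.arange(1, 171)              # spherical harmonic degree as a proxy for wavenumber
k_max = k[-1]

nu2 = 1.0                          # arbitrary Laplacian coefficient
nu8 = nu2 * k_max**2 / k_max**8    # match the damping rates at k_max

laplacian_rate = nu2 * k**2
hyper_rate = nu8 * k**8

print("damping ratio (hyper/Laplacian) at k = 10 :", hyper_rate[9] / laplacian_rate[9])
print("damping ratio (hyper/Laplacian) at k_max  :", hyper_rate[-1] / laplacian_rate[-1])
```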
Although hyperdiffusion has benefits, it also has drawbacks. It is a practical construct with little physical justification. Furthermore, higher-order radial derivatives require additional boundary conditions in order to make the problem well-posed, placing artificial constraints on the allowable solutions. Such constraints can be avoided if hyperdiffusion is only implemented on horizontal surfaces while keeping the radial diffusion second-order, an approach which has been used in geodynamo simulations (e.g., Glatzmaier, 2002). However, this introduces an unphysical and largely arbitrary anisotropy into the SGS transport. Hyperviscosity can also introduce spurious overshoot near sharp gradients (related to Gibbs ringing) and may have an adverse effect on dynamo simulations, fundamentally altering the field generation process (Zhang and Schubert, 2000; Busse, 2000). It is therefore important to consider alternatives.
Turbulent flows generally exhibit cascade processes, characterized by a self-similar (scale invariant) exchange of energy or some other ideal invariant between adjacent spectral modes. The most familiar example is the forward cascade of kinetic energy which occurs within the classical inertial range of 3D, homogeneous, isotropic, incompressible turbulence (e.g., Lesieur, 1997; Pope, 2000). Rotation, stratification, and magnetism can also give rise to forward and inverse cascades (e.g., Section 8.3). By narrowing the viscous dissipation range, hyperdiffusion can extend these cascade ranges and thereby better capture the essential dynamics of the largest scales. However, the dynamics within the dissipation range is not accurately represented. A better representation of the resolved flow on all scales might be achieved by assuming from the outset that it will be self-similar on scales comparable to the grid-spacing.
A variety of self-similarity methods have been developed, as reviewed by Meneveau and Katz (2000). These are all based on the Large-Eddy Simulation (LES) framework whereby a low-pass filter is applied to the equations of motion (e.g., Mason, 1994; Pope, 2000). One approach, known as a dynamic SGS model, is based on the Germano identity, which relates the turbulent stress tensor, \(\tau_{ij}\), between two self-similar scales as follows (Germano et al., 1991):
$$\langle \bar{v}_{i} \rangle \langle \bar{v}_{j} \rangle - \langle \bar{v}_{i} \bar{v}_{j} \rangle = \tau_{ij}^{\prime} - \langle \tau_{ij} \rangle,$$
$$\tau_{ij} = \bar{v}_{i} \bar{v}_{j} - \overline{v_{i} v_{j}},$$
$$\tau_{ij}^{\prime} = \langle \bar{v}_{i} \rangle \langle \bar{v}_{j} \rangle - \langle \overline{v_{i} v_{j}} \rangle.$$
In these equations, overbars and brackets denote two spatial filtering operations, characterized by two different cutoff wavenumbers, k1 and k2. The first, k1, corresponds to the grid scale and the associated velocity field \(\overline{v_{i}}\) may be regarded as the resolved flow in the simulation. The second filter is applied at a larger scale, typically chosen such that k1 = 2k2. The tensors \(\tau_{ij}\) arise when filters are applied to the Navier-Stokes equations and are often referred to as the Leonard stress.
The left-hand side of Equation (25) can be evaluated directly from the resolved velocity field. However, the right-hand side involves the unknown correlations \(\overline{v_{i}v_{j}}\) which must be modeled (this is essentially the Reynolds stress). If some parametric form is assumed for \(\tau_{ij}\), Equation (25) may then be used to compute the parameters. For example, if the turbulent transport is assumed to be diffusive, then \(\tau_{ij}=2\nu_{t}e_{ij}\), where νt is the turbulent viscosity and \(e_{ij}\) is the strain rate tensor. Equation (25) can then be used to derive νt as a function of space and time. More commonly, νt itself is assumed to be proportional to the magnitude of \(e_{ij}\) as originally proposed by Smagorinsky (Smagorinsky, 1963; see also Pope, 2000). Equation (25) then yields the proportionality constant (Lesieur and Métais, 1996; Meneveau and Katz, 2000). The only remaining parameter is the ratio of filter scales k1/k2, meeting objective 4 above.
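The sketch below illustrates the dynamic procedure in a deliberately simplified form: a single stress component on a 2D periodic grid, top-hat filters, and a Smagorinsky-like closure without the strain-rate magnitude factor. The fields, filter widths, and closure are illustrative assumptions and do not correspond to the formulation used in any particular solar code; the point is only the sequence of steps: compute the resolved (Leonard) stress from the resolved field, evaluate the assumed closure at both filter levels, and fit the single model coefficient by least squares.

```python
import numpy as np

# Schematic sketch of the dynamic (Germano-identity) procedure on a 2D periodic
# field.  Deliberately simplified: one stress component, top-hat filters, and a
# Smagorinsky-like closure without the strain-rate magnitude factor.

def box_filter(f, half_width):
    """Periodic top-hat filter over (2*half_width + 1) points in each direction."""
    shifts = list(range(-half_width, half_width + 1))
    g = sum(np.roll(f, s, axis=0) for s in shifts) / len(shifts)
    return sum(np.roll(g, s, axis=1) for s in shifts) / len(shifts)

def strain_uv(u, v):
    """Off-diagonal strain-rate component from centered differences (unit spacing)."""
    du_dy = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
    dv_dx = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / 2.0
    return 0.5 * (du_dy + dv_dx)

rng = np.random.default_rng(0)
n = 64
u = rng.standard_normal((n, n))      # stand-in "resolved" velocity components
v = rng.standard_normal((n, n))

h1, h2 = 1, 2                        # grid-filter and test-filter half-widths (k1 = 2*k2)
d1, d2 = 2 * h1 + 1, 2 * h2 + 1      # corresponding filter widths

# Resolved ("Leonard") stress, computable directly from the resolved field:
L_uv = box_filter(u, h2) * box_filter(v, h2) - box_filter(u * v, h2)

# Difference of the assumed closure evaluated at the two filter levels:
M_uv = (d2**2 * strain_uv(box_filter(u, h2), box_filter(v, h2))
        - d1**2 * box_filter(strain_uv(u, v), h2))

# Least-squares estimate of the (single, scalar) model coefficient:
C = np.sum(L_uv * M_uv) / np.sum(M_uv * M_uv)
print("dynamically estimated model coefficient:", C)
```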
Self-similarity models such as these may in principle be applied separately for velocity, thermal, and magnetic fields and they rank among the most promising SGS approaches for solar applications (other promising strategies are reviewed by Lesieur and Métais (1996) and Foias et al. (2001)). However, they do not capture nonlocal spectral transfer between large and small scales. Furthermore, they do not account for distinct small-scale dynamics such as granulation which are entirely unresolved, possessing local energy maxima on scales below the grid resolution. For this, separate models must be developed as outlined in objective 3 above. Such models may be based on local-area simulations or on parameterizations and procedures developed in the context of mean-field theory (Section 5.3). In this respect, global solar convection and dynamo simulations may ultimately resemble global circulation models (GCMs) for the Earth's atmosphere, where unresolved processes are parameterized and where a hierarchy of modeling efforts (macroscale, mesoscale, and microscale) may be used to devise more reliable parameterizations (e.g., Beniston, 1998).
The most straightforward way to evaluate whether an SGS model is reliable and robust is to compare simulations with different resolutions. An intermediate-resolution simulation which incorporates the SGS model should be able to reproduce results from a higher-resolution simulation with only Laplacian diffusion. Furthermore, the LES/SGS model should eventually converge on a statistically equivalent solution as the resolution is increased. Of course, these checks will only work if the assumptions of the model are met. For example, an SGS model which relies on scale invariance will only converge if the cutoff wavenumber corresponding to the grid spacing is well within the inertial range (or some equivalent cascade range). Furthermore, as the resolution is increased, the parameterizations for previously unresolved processes (objective 3) may need to be revised as their characteristic scales begin to overlap with the dynamical range captured by the simulation. This is occurring now in GCMs where increasing the resolution does not necessarily lead to better forecasts (e.g., Williamson, 2002).
Large-eddy simulations with subgrid-scale modeling generally perform well in fundamental turbulence applications (Mason, 1994; Meneveau and Katz, 2000; Pope, 2000). Results have been promising enough that the approach has become standard in engineering and atmospheric applications. It remains to be seen how reliable they will be for solar interior dynamics. Magnetism in particular poses difficult challenges for SGS modeling which have not yet been fully addressed. Rotation and stratification (i.e., buoyancy) must also be incorporated into a realistic model. Furthermore, LES/SGS approaches can run into problems near boundaries where the characteristic scales of the flow can decrease dramatically and where qualitatively different dynamics can occur (Mason, 1994). This is certainly an issue in solar applications where the boundaries of the convection zone are likely to be highly complex (Section 7.3). Still, the prospects are good that improved SGS modeling may lead to substantial advances in global solar convection simulations in the near future.
7.3 Boundary influences
Most global solar convection simulations assume that the upper and lower boundaries of the computational domain are stress-free and impenetrable. Convection is driven by imposing a heat flux on the lower boundary and either a constant heat flux or a constant entropy on the upper boundary. Magnetic boundary conditions are generally either perfectly conducting or matching to an external potential field. All of these conditions are gross simplifications of the complex dynamics which actually couple the solar convection zone to the extended atmosphere above and the radiative interior below.
Although the uppermost layers of the convection zone account for only a small fraction of its total mass, the precipitous drop in entropy near the surface produces strong buoyancy driving. This, when coupled with radiative transfer and ionization effects, maintains granulation and supergranulation. These motions do not penetrate far below the photosphere but stochastic forcing from ensembles of plumes may have a subtle influence on the deeper convective zone. In particular, coupling between supergranulation, mesogranulation, and deeper convective motions may have some bearing on the near-surface shear layer seen in helioseismic rotational inversions (Section 3.1). Furthermore, magnetic flux dispersion by near-surface convective motions might contribute to global polarity reversals as in Babcock-Leighton dynamo models (e.g., Ossendrijver, 2003; Charbonneau, 2005).
Coupling between the convective envelope and the corona can occur through magnetic torques and mass exchange via the solar wind. Such processes generally occur on timescales much longer than the solar activity cycle. However, magnetic helicity flux through the solar surface may play an important role both in the operation of the dynamo (Blackman and Brandenburg, 2003) and in determining the global configuration of the coronal field (Low, 2001; Zhang and Low, 2001, 2003). Understanding the complex process of flux emergence is also essential for interpreting photospheric and coronal observations (e.g., Fan, 2004).
Perhaps an even more important factor in improving global simulations is a more realistic treatment of the complex dynamics occurring at the base of the convection zone, where the solar envelope couples to the radiative interior through the overshoot region and tachocline. This transition region is thought to play a critical role in the solar dynamo (Section 4.5) so it must be represented with some fidelity if global simulations are ever to make meaningful contact with observations of magnetic activity.
The primary difficulty in capturing the dynamics of the overshoot region and tachocline in a global simulation lies in their thin extent (Section 3.2, Section 3.6). As in the near-surface layers, relatively small-scale processes occur which are difficult to resolve (Section 8). The small grid spacing needed to capture them sets corresponding limits on the time step required for numerical stability, further adding to the computational expense. Such restrictions are overcome in current models by either placing the boundary of the computational domain at the base of the convection zone (no penetration) or by artificially decreasing the subadiabatic stratification in the interior, thereby extending the overshoot region.
Nevertheless, global simulations can potentially capture many aspects of the solar dynamo, including turbulent pumping of fields into the overshoot region and amplification by differential rotation in the tachocline (cf. Figure 8). The subsequent formation and rise of flux tubes by buoyancy instabilities may require much higher resolution to reliably model, but it should be present to some degree in global simulations which possess a tachocline.
Establishing and maintaining a tachocline in a global convection simulation is a challenge in itself, since it may involve processes which occur on timescales much longer than the solar activity cycle (Section 8.5). It also requires minimal vertical diffusion to prevent artificial spreading (high Re, Pe; cf. Section 7.1). Global simulations have only begun to explore penetrative convection in detail and have not yet achieved a rotational shear layer comparable to the tachocline16.
The tachocline region is not only important from a dynamo perspective; it also mediates angular momentum transport between the convective envelope and the radiative interior. This may occur through magnetic coupling or through penetrative convection and gravity waves (Section 8.4). Helioseismic inversions suggest that this transport must be relatively efficient, since the mean rotation rates of the convection zone and radiative interior are comparable (Section 3.1).
Angular momentum exchange between the convective envelope and the deep interior plays an important role in the rotational history of the Sun over evolutionary timescales (Charbonneau and MacGregor, 1993). However, it has little bearing on the differential rotation profile of the envelope, which is maintained on shorter timescales (Section 4.3). Still, there are several reasons to believe that the differential rotation profile may be sensitive to dynamics near the base of the convection zone.
Turbulent penetrative convection tends to produce poleward angular momentum transport in the overshoot region due to the rotational alignment of downflow plumes (Section 6.3). Since the overshoot region is artificially deep in simulations, we may be overestimating this transport (Miesch, 2005). More generally, poleward angular momentum transport in the tachocline by instabilities and turbulence may balance equatorward transport in the convection zone, giving rise to a global angular momentum cycle which would ultimately determine the equilibrium rotation profile (Gilman et al., 1989).
Thermal effects may also play an important role. The differential rotation in the lower convection zone is probably in thermal wind balance, maintained by latitudinal entropy gradients (Sections 4.3 and 6.3). The radiative interior provides a large thermal reservoir which can influence this balance depending on how it is tapped by penetrative convection. An example of how this may occur has been described by Rempel (2005) in the context of a mean-field model.
In Rempel's model, differential rotation in the convection zone is maintained by a Λ-effect (Section 5.3) and a meridional circulation which is solved for together with the angular velocity and thermal structure by means of the axisymmetric momentum, energy, and continuity equations. Uniform rotation is imposed on the lower boundary and the system is evolved until a steady state is reached. The competition between the Λ-effect and the lower boundary condition quickly establishes an artificial 'tachocline'; i.e., a large vertical angular velocity gradient near the base of the convection zone. This, in turn, sets up latitudinal entropy gradients in accord with thermal wind balance. These entropy gradients are then transmitted upward into the envelope by convective motions, here treated as an effective thermal diffusion. The net result is a solar-like differential rotation profile in which departures from cylindrical alignment are maintained by latitudinal entropy gradients originating in the tachocline. The model involves many crude simplifications but it illustrates how thermal coupling between the convection zone and radiative interior may influence the differential rotation profile.
More generally, since the convection zone is nearly adiabatic, even small entropy variations originating in the strongly subadiabatic radiative interior may be significant. The depth to which penetrative convection mixes entropy with the interior and the efficiency by which energy is transported through the surface together determine the entropy content, or in other words, the adiabat of the convection zone. This is one more reason why a realistic modeling of these boundary regions may be important for global simulations.
Many other phenomena will require detailed modeling of the upper and lower boundaries of the convection zone to be fully accounted for. An example is light element depletion in the Sun and other late-type stars, which may arise from chemical transport by gravity waves (Section 8.4). Momentum transport by gravity waves may account for the tachocline oscillations found in helioseismic inversions (Section 3.3). Incorporating all of these influences into global convection simulations is a tall order, but it must eventually be done to some degree if we are ever to have a reasonably complete and integrated model of solar interior dynamics. Meeting these challenges will require a combination of increased resolution near the boundaries, sophisticated SGS models, and carefully chosen boundary conditions.
8 Dynamics of the Tachocline and Overshoot Region
Although the distinction is sometimes blurred, the tachocline and the overshoot region are empirically two very different things. Whereas the tachocline is defined helioseismically from rotational inversions (Section 3.2), the overshoot region is usually defined in terms of the mean stratification and must be probed instead with structural inversions (Section 3.6). The tachocline encompasses the overshoot region but appears to be wider; whereas the upper tachocline may extend substantially into the convective envelope at high latitudes, the lower tachocline lies below the overshoot region at all latitudes (Section 3.2). What the two have in common is that they are both thin — according to current estimates, the tachocline extends roughly a few percent of the solar radius while the overshoot region occupies less than one percent. Thus, local-area and thin-shell models are particularly useful here (Section 5).
8.1 Convective penetration
Due to its wide applicability in astronomy and geophysics, there is a large body of literature on convective penetration. Much of this work, particularly in a solar context, is concerned with the structure of the overshoot region and how the penetration depth varies with the vigor of the convection and the stiffness of the transition from subadiabatic to superadiabatic stratification.
Figure 22 illustrates the structure of the overshoot region at the base of the solar envelope as suggested by Zahn (1991). In the convection zone, the radial entropy gradient, \(d\overline{S}/dr\), is negative but nearly adiabatic due to the efficient mixing of entropy by turbulent convection. The convective enthalpy flux is positive (outward) and the radiative heat flux, normalized by the total flux L⊙/4πr², is less than unity. To a good approximation, the normalized convective enthalpy flux and radiative heat flux sum to unity, with smaller contributions from the other terms in Equation (3).
Figure 22: A schematic diagram illustrating the radial entropy gradient, \(d\overline{S}/dr\), the convective enthalpy flux, \({\cal F}^{\rm EN}\), and the radiative heat flux \({\cal F}^{\rm RD}\) near the base of the convection zone (see Equation (3) and Appendix A.3). Each quantity is plotted on a horizontal axis (increasing toward the right) as a function of radius (vertical axis). The radiative flux is normalized with respect to the total solar flux, L⊙/4πr². Four regimes are indicated as discussed in the text (after Zahn, 1991).
In many theoretical studies, the base of the convection zone is defined as the point where \(d\overline{S}/dr\) changes sign and becomes positive (subadiabatic). The inertia of convective downflows takes them beyond this point into the stably-stratified interior. Here the enthalpy flux becomes negative (inward) and the outward radiative flux must increase to compensate. Downward motions will be quickly decelerated by buoyancy but the turbulent mixing may still be efficient enough to establish a nearly adiabatic penetration region where \(d\overline{S}/dr\geq 0\). Eventually, downflows will be decelerated enough such that their effective Péclet number, Pe = UL/κ, becomes small and turbulent mixing becomes inefficient relative to thermal diffusion. This occurs in a thin thermal adjustment layer where the enthalpy flux falls to zero and the stratification becomes strongly subadiabatic. Deeper in the interior, the radiative heat flux carries the entire solar luminosity.
Sometimes a distinction is made between convective overshoot and convective penetration. The former is used to describe any convective motions which are carried into a region of stable stratification by their own inertia. By contrast, the latter term often has a more specific meaning, implying that the convection is efficient enough to establish a nearly adiabatic penetration region as indicated in Figure 22. From the perspective of solar structure modeling and helioseismic probing, it is often more convenient to define the base of the convection zone as the bottom of the well-mixed, nearly adiabatic penetration region rather than where the entropy gradient changes sign.
The presence of a nearly adiabatic penetration region in the Sun is currently a matter of some debate. Although many early models and relatively low-resolution 2D and 3D simulations produced a true penetration region where \(d\overline{S}/dr\geq 0\) (reviewed by Brummell et al., 2002b; Rempel, 2004), recent high-resolution simulations of turbulent penetrative convection by Brummell et al. (2002b) exhibited only strongly subadiabatic overshoot. They attributed the absence of a nearly adiabatic penetration region to the small filling factor of downflow plumes, which dominate the flow field in turbulent parameter regimes (see Section 5.2). However, reduced models based on the dynamics of intermittent plumes suggest that such numerical simulations may exhibit more adiabatic penetration if they could achieve more solar-like parameter regimes (Zahn, 1991; Rempel, 2004). In particular, higher Péclet numbers and a lower imposed heat flux may modify the balance between advective and diffusive heat transport enough to produce a nearly adiabatic stratification.
Another challenge to numerical simulations of penetrative convection is achieving a high stiffness parameter St, which is a measure of the subadiabatic stratification in the stable zone relative to the superadiabatic stratification in the convection zone. In the Sun this ratio is roughly 10⁵ whereas simulations consider values of at most 10–100. Thus, the depth of penetration, Δp, in simulations is artificially high and much work has focused on establishing scaling relations between Δp and St in order to extrapolate the results to solar conditions. Analytic estimates by Hurlburt et al. (1994) suggest that the extent of the nearly adiabatic penetration region, if present, scales as \(S_{\rm t}^{-1}\) whereas the depth of the thermal adjustment layer scales as \(S_{\rm t}^{-1/4}\). Numerical simulations are generally consistent with these scaling estimates (Hurlburt et al., 1994; Singh et al., 1995; Brummell et al., 2002b). However, Rogers and Glatzmaier (2005a) have recently achieved stiffness values of over 500 in high-resolution simulations of 2D penetrative convection and they find a much shallower scaling law, \(\Delta_{\rm p}\sim S_{\rm t}^{-0.04}\) for \(S_{\rm t}\geq 10\). When extrapolated to solar conditions, most simulations and models imply penetration depths ranging from about 0.01–1 pressure scale heights HP, implying a Δp of a few percent of the solar radius or less (see, e.g., Rempel, 2004; Stix, 2002). By comparison, upper limits from helioseismology suggest that the overshoot region is no more than about 0.05HP, which is less than 0.01R⊙ (Section 3.6). Helioseismic inversions can also set limits on how abruptly the entropy gradient changes at the base of the convection zone, ruling out a very thin thermal adjustment layer (Monteiro et al., 1994; Basu et al., 1994; Roxburgh and Vorontsov, 1994).
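The sensitivity of such extrapolations to the assumed scaling law can be seen with a few lines of arithmetic. In the sketch below the reference penetration depth at St = 10 is an arbitrary placeholder, not a measured value; only the relative behavior of the three scalings matters.

```python
# Illustrative extrapolation of the penetration depth Delta_p to solar stiffness.
# The reference depth at St = 10 is an assumed placeholder, not a measured value.

St_ref, St_sun = 10.0, 1.0e5
delta_ref = 0.5          # assumed Delta_p (in pressure scale heights) at St = 10

for label, exponent in [("nearly adiabatic layer, St^-1", -1.0),
                        ("thermal adjustment layer, St^-1/4", -0.25),
                        ("Rogers & Glatzmaier 2D, St^-0.04", -0.04)]:
    delta_sun = delta_ref * (St_sun / St_ref)**exponent
    print(f"{label:35s} -> Delta_p ~ {delta_sun:.2e} H_P at St = 1e5")
```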
Brummell et al. (2002b) also considered the variation of the penetration depth with rotation and latitude, under the f-plane approximation. They found that rotation generally has a stabilizing effect because plumes are tilted away from the vertical by turbulent alignment and weakened by vortex interactions. Similar results were also reported by Julien et al. (1996a, 1999); see Section 5.2. The penetration depth was greatest at the equator and poles, and least at mid-latitudes. The smaller penetration at mid-latitudes relative to high latitudes was attributed to turbulent alignment because tilted plumes have less downward momentum. The enhanced penetration at low latitudes was attributed to the formation of horizontal convective rolls which are analogous to the north-south aligned downflow lanes typically seen in global convection simulations (Section 6.2). Global simulations of penetrative convection by Miesch et al. (2000) do indeed exhibit deeper penetration at the equator, but there is less evidence for enhanced penetration at the poles in turbulent parameter regimes. However, the simulations by Miesch et al. (2000) used a realistic value for the solar luminosity so it was impractical to cover a full thermal equilibration timescale (∼ 10⁵ yr; see Section 5.1). Thus, any conclusions made about the detailed structure of the overshoot region must be regarded as tentative.
Investigating convective penetration with global models remains an important challenge for the near future. Although global models can say little about the thermal structure of the overshoot region at present, they have already produced provocative and robust results regarding its dynamics. In particular, they have indicated that penetrative convection in the Sun is likely to induce equatorward meridional circulation and poleward angular momentum transport in the overshoot region (see Sections 6.3 and 6.4).
Another aspect of penetrative convection which has important implications for solar interior dynamics is the generation of gravity waves. Figure 23 illustrates wave excitation in simulations of penetrative convection by Rogers and Glatzmaier (2005b). The geometry is a 2D circular annulus with the inner boundary placed very near the origin to minimize spurious wave reflection. Gravity waves appear as rings of vorticity in the stable zone propagating outward. This outward phase velocity implies an inward group velocity, and is therefore consistent with wave generation at the base of the convection zone (see Appendix A.7).
Figure 23 (still from a movie): The vorticity field is shown in a simulation of penetrative convection in a circular annulus (from Rogers and Glatzmaier, 2005b; courtesy T. Rogers). (For video see appendix)
Although gravity waves are present in all simulations of penetrative convection, little is known about the details of wave excitation in the Sun. Unless steps are taken to avoid it, numerical simulations generally suffer from wave reflection at the lower boundary and imposed horizontal periodicities which can substantially alter the spectra, energetics, and transport properties of the waves. Furthermore, obtaining a reliable estimate of gravity wave amplitudes and spectra in a high-resolution simulation of penetrative convection is not a trivial undertaking (e.g., Dintrans and Brandenburg, 2004). The most straightforward method is based on spectral transforms of the velocity or density field in space and time, but this can be unwieldy in a 3D simulation because it requires storing a substantial volume of data at a high temporal cadence and a long enough duration to achieve stable statistics. To date, most investigations of gravity wave excitation in simulations of penetrative convection have been restricted to 2D flows (Hurlburt et al., 1986; Andersen, 1996; Dintrans et al., 2003; Kiraga et al., 2003; Rogers and Glatzmaier, 2005a, b). Theoretical estimates of wave excitation are sensitive to assumptions made about the structure of the convection which are difficult to justify (Goldreich and Kumar, 1990; Fritts et al., 1998; Kumar et al., 1999).
Despite this uncertainty, some general comments can be made. We expect that the gravity wave spectra will peak at spatial and temporal frequencies which correspond to the characteristic scales of the convection which drives them. These are currently uncertain but may be estimated from numerical simulations (Section 6.2). Modes with very small wavelengths (≤ 1 Mm) will be efficiently dissipated by thermal diffusion while modes with horizontal phase velocities comparable to the local differential rotation will be filtered out by critical level absorption and radiative diffusion (Fritts et al., 1998; Kumar et al., 1999; Talon et al., 2002). If the motions are indeed gravity waves, their frequencies will be bounded from above by the Brunt-Väisälä frequency, N, which corresponds to a period of a few hours in the solar interior. However, since the Sun is rotating and magnetized, we might expect a wide variety of waves to be generated by penetrative convection, including inertial gravity waves, Rossby waves, and Alfvén waves. Characteristic velocity amplitudes will vary substantially with radius but may be ∼ 1–10 m s−1 near the overshoot region based on estimates for the vertical velocity in downward plumes, which may reach 100 m s−1, and a moderate conversion efficiency.
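Two of the numbers quoted here follow from simple estimates, sketched below with assumed values for the buoyancy frequency, plume speed, and conversion factor.

```python
import math

# Back-of-the-envelope estimates for the gravity-wave properties quoted above.
# The input values are assumptions chosen for illustration.

N = 1.0e-3                       # Brunt-Vaisala frequency in the deep interior (rad/s)
buoyancy_period_hours = 2.0 * math.pi / N / 3600.0
print(f"minimum buoyancy period ~ {buoyancy_period_hours:.1f} hours")   # of order a few hours

plume_speed = 100.0              # assumed downflow speed near the overshoot region (m/s)
for factor in (0.01, 0.1):       # assumed velocity conversion factors
    wave_speed = plume_speed * factor
    print(f"conversion factor {factor:4.2f} -> wave velocity ~ {wave_speed:.0f} m/s")
```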
No discussion of penetrative convection would be complete without some mention of transport processes. It is well established that turbulent penetrative convection can efficiently pump magnetic fields out of the convection zone into the overshoot region, and possibly deeper (Brandenburg et al., 1996; Tobias et al., 1998, 2001; Dorch and Nordlund, 2001; Ziegler and Rüdiger, 2003). This is thought to play an integral role in the solar dynamo by continually supplying the tachocline with disordered field which can then be organized and amplified by rotational shear (Section 4.5). Transport of chemical tracers by penetrative convection and the waves it generates has important implications for solar structure models and spectroscopic measurements of stellar compositional abundances (Montalbán, 1994; Schatzman, 1996; Hurlburt et al., 1994; Pinsonneault, 1997; Fritts et al., 1998; Brummell et al., 2002b; Ziegler and Rüdiger, 2003). Furthermore, angular momentum transport by gravity waves has important implications for understanding the structure and evolution of the solar internal rotation profile, as we will discuss further in Sections 8.4 and 8.5.
We emphasize that convective penetration in the Sun is a very intermittent process, dominated by extreme, impulsive events: particularly strong plumes or ensembles of plumes which penetrate deeper than average and then quickly lose coherence. A jackhammer is a better analogy than a drill. Thus, the transport of magnetic fields, chemical tracers, and momentum is generally deeper than might be expected from average measures such as the mean stratification or the mean kinetic energy density (e.g., Brummell et al., 2002b).
8.2 Instabilities
Penetrative convection occupies only the upper portion of the tachocline, if it overlaps at all (see Section 3.2). The lower portion of the tachocline is convectively stable. However, a variety of other instabilities are likely to occur, driven by shear, buoyancy, and magnetism.
Shear instabilities have been well studied for many years in light of their important geophysical and engineering applications. Undular perturbations in the direction of the mean velocity gradient grow by extracting kinetic energy from the shear flow, eventually overturning and spreading into a turbulent mixing layer. If the shear is vertical, such perturbations are suppressed by sub-adiabatic stratification to a degree which may be quantified by the Richardson number, Ri = (N/|dU/dz|)². If Ri ≥ 0.25, the vertical shear is hydrodynamically stable17.
For the lower tachocline, N ∼ 10⁻³ s⁻¹ and the shear S = |dU/dz| ∼ 10⁻⁶ s⁻¹ (see Section 3.2), implying very large Richardson numbers, Ri ∼ 10⁶. Vertical shear instabilities should therefore be strongly suppressed. In the overshoot region, N, and therefore Ri, is much smaller, approaching zero at the base of the convection zone. Taking into account the destabilizing influence of thermal diffusion, Schatzman et al. (2000) investigated this problem and concluded that the vertical shear may be hydrodynamically unstable near the base of the convection zone at r = 0.713R⊙, but that this region of instability is confined to low latitudes and does not extend deeper than r ∼ 0.695R⊙. Note that this is a global constraint; stably-stratified flows may still exhibit intermittent turbulence18 even if Ri ≫ 1 due to wave breaking and horizontal layering which can drive the local Richardson number below 0.25 (Anders Pettersson Reif et al., 2002; Fritts et al., 2003; Petrovay, 2003; Hanazaki and Hunt, 2004). Note also that magnetism and baroclinicity may act to destabilize the vertical shear. We will return to this issue toward the end of this section.
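The Richardson number quoted above follows directly from these estimates, as the short calculation below confirms (input values as assumed in the text).

```python
# Richardson number estimate for the lower tachocline, using the
# representative values quoted in the text.

N = 1.0e-3        # buoyancy (Brunt-Vaisala) frequency, s^-1
shear = 1.0e-6    # vertical shear |dU/dz|, s^-1

Ri = (N / shear)**2
print(f"Ri = {Ri:.0e}")                    # ~1e6, far above the 0.25 threshold
print("linearly stable:", Ri >= 0.25)
```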
Although the angular velocity gradient in the tachocline is mainly vertical (Section 3.2), stratification does little to suppress horizontal shear instabilities so we might expect that the latitudinal component of the differential rotation is more likely to be unstable. In the absence of magnetic fields, the latitudinal differential rotation will be linearly unstable if the corresponding latitudinal potential vorticity gradient (see Appendix A.6) changes sign somewhere in the domain of interest. This is a variation of Fjortoft's criterion for a stably-stratified flow, which is in turn related to Rayleigh's well-known inflexion-point criterion (e.g., Knobloch and Spruit, 1982; Vallis, 2005). Nonlinear stability is another matter; a shear flow which is linearly stable may still be unstable to finite-amplitude perturbations, particularly at high Reynolds numbers (a familiar example is pipe flow; see Drazin and Reid (1981); Tritton (1988); Richard and Zahn (1999)).
In light of the extremely large Reynolds numbers in the solar interior (Section 5.1), Zahn (1992, 1994) has argued that the latitudinal differential rotation should be hydrodynamically unstable to finite-amplitude perturbations. If efficient enough, this nonlinear instability may suppress the latitudinal shear entirely, leading to a state of shellular rotation in which angular velocity is independent of latitude. However, due to the possibly insurmountable difficulties of a complete nonlinear stability analysis, these are mainly empirical arguments based on analogies with laboratory flows. Linear analyses indicate that the latitudinal differential rotation in the tachocline is marginally stable to 2D (latitude/longitude) hydrodynamic perturbations (Charbonneau et al., 1999b) and perhaps only weakly unstable to 3D perturbations near the base of the convection zone (Dikpati and Gilman, 2001c; Cally, 2003). Furthermore, these linear instabilities saturate readily, mixing potential vorticity only enough to smooth local extrema and thus stabilize the flow (Garaud, 2001). It appears then that linear hydrodynamic instabilities, even if they occur, are far too weak to establish uniform rotation on horizontal surfaces. However, the addition of even a weak magnetic field profoundly changes everything.
In a series of papers, Gilman and collaborators have shown that the combination of latitudinal differential rotation and a toroidal field in the tachocline is linearly unstable to 2D perturbations for a wide range of field amplitudes and configurations, from broad distributions which occupy an entire hemisphere to localized bands of flux which span only a few degrees of latitude (Gilman and Fox, 1997, 1999a, b; Dikpati and Gilman, 1999; Gilman and Dikpati, 2000). Possible modes of instability for a toroidal band are illustrated in Figure 24. Similar modes of instability also occur for broad fields.
Figure 24: Schematic illustration of (a) m = 0, (b) m = 1, and (c) m = 2 instabilities for a toroidal band of flux on a 2D spherical surface in the presence of a latitudinal differential rotation (from Dikpati et al., 2004).
A band of toroidal flux will experience a magnetic tension force which will tend to make it contract and move toward the poles (Figure 24, panel a). This is the poleward slip instability first studied using the thin flux tube approximation (Spruit and van Ballegooijen, 1982). In perfectly conducting, 2D, incompressible flow this axisymmetric mode (longitudinal wavenumber m = 0) is excluded because of mass conservation; the ring cannot push fluid uniformly poleward. However, the ring can tip as shown in panel b of Figure 24. This is the m = 1 tipping instability and it generally has the largest growth rate for solar parameter regimes, with timescales of order a few months19. Higher-wavenumber instabilities may also occur for weak fields (≤ 10⁴ G) which deform the ring as shown in panel c of Figure 24.
Unstable modes grow by extracting energy from the differential rotation or from the magnetic energy of the initial toroidal field, the latter of which becomes significant only for strong fields. The nonlinear saturation and evolution of these 2D instabilities was investigated by Cally (2001) and Cally et al. (2003). It was found that for broad fields, the tipping instability could lead to several different behaviors depending on the relative phases of the northern and southern hemispheres. If they tip out of phase, this leads to a clam-shell instability in which field lines spread out on one side of the shell and reconnect on the other, eventually achieving a poloidal configuration. If the tipping occurs in phase, oscillatory solutions are possible in which field lines remain parallel and no reconnection occurs. The clam-shell instability does not occur for banded field profiles, but bands do tip, eventually equilibrating at a tilt angle which increases with the latitude of the initial band (high-latitude bands tip more).
There is little evidence for clam-shell patterns and highly tilted toroidal field bands in the Sun so it is interesting to explore possible mechanisms which may suppress or alter these instabilities. One possibility is that the instabilities may not be as efficient for the more complex toroidal field profiles which are likely to exist in the Sun. Cally et al. (2003) found one mixed profile in particular with low-latitude toroidal bands superposed on a broad field which did not exhibit a clam-shell instability. Another suppression mechanism may arise from the coupling of adjacent horizontal layers by turbulent mixing. This was recently incorporated into the 2D calculations of Dikpati et al. (2004) as an effective kinetic and magnetic drag. Results indicated that the clamshell instability was indeed suppressed for large magnetic drag in particular, but that the tipping instabilities for toroidal bands still equilibrated at tilt angles comparable to the nondiffusive cases.
An efficient mechanism for suppressing the poleward slip instability as well as the tipping instability of a toroidal band arises if the band possesses a coincident prograde zonal jet which provides a gyroscopic inertia (Rempel et al., 2000; Dikpati et al., 2003). Such a jet could be established by conservation of angular momentum in a band which begins to slip poleward and is then stabilized. The resulting centrifugal force can fully or partially balance the latitudinal component of the magnetic tension force in an equilibrium state, with the remaining contribution coming from pressure gradients. Jet formation is indeed observed in nonlinear simulations and contributes to a net flattening of the differential rotation profile (Cally et al., 2003). This flattening is achieved mainly by the Maxwell stress, which transports angular momentum poleward as a result of shear-induced correlations, 〈B′θB′ϕ〉.
Subsequent work has shown that similar instabilities also occur in quasi-2D systems under the shallow-water (SW) and thin-shell approximations discussed in Section 5.4 (Dikpati and Gilman, 2001c; Gilman and Dikpati, 2002; Dikpati et al., 2003; Cally, 2003; Gilman et al., 2004). Results again indicate that the tachocline differential rotation is in general unstable and that the m = 1 tipping instability is typically the dominant mode for hydromagnetic perturbations. An additional hydrodynamic mode is also present which may be unstable throughout much of the tachocline even in the absence of magnetic fields (Dikpati and Gilman, 2001c). Although formally allowed, the m = 0 poleward slip instability of a toroidal flux band is suppressed by a restoring pressure force which arises as mass is pushed toward the poles, tending to deform the upper boundary into a prolate shape (Dikpati and Gilman, 2001b).
Growth rates for the m = 1 and m = 2 SW modes of a toroidal band are shown in Figure 25 for parameter values characteristic of the overshoot region and lower tachocline (G is the reduced gravity and s is the fractional angular velocity contrast between the equator and pole). In the overshoot region, weak bands (≤ 10⁴ G) are unstable at all latitudes. For stronger fields, mid-latitude bands are stabilized by a zonal jet but bands at low and high latitudes remain unstable. In the radiative zone, bands at nearly all field strengths considered are stable at low latitudes but unstable at higher latitudes. Strong bands at all latitudes are stabilized by a zonal jet but weak mid-latitude bands remain unstable.
Figure 25: Growth rates for magnetic shear instabilities are plotted as a function of the initial latitude (vertical axes) and field strength (horizontal axes) of a toroidal band. Shaded areas indicate instability (growth rates for one or more modes > 0.0025). The left and right columns correspond to parameter regimes characteristic of the overshoot region and lower tachocline, respectively. The lower plots represent cases in which a zonal jet contributes to the initial force balance as discussed in the text. Cases represented in the upper plots have no such jet. Contour lines represent m = 1 and m = 2 symmetric (S) and antisymmetric (A) modes as indicated. The nondimensional model is normalized such that a growth rate of 0.01 corresponds to an e-folding growth time of 1 year. The parameter s is the fractional angular velocity contrast between equator and pole and, in our notation, the reduced gravity \(G=(d\overline{S}/dr)(g(r_{\rm t})\delta^{2})/(2C_{P}\Omega_{0}^{2})\) (from Dikpati et al., 2003).
Using a thin-shell model, Cally (2003) has identified a polar twist instability in which high-latitude toroidal loops lift and twist out of the horizontal plane. This is a different type of m = 1 instability which does not occur in 2D systems and which can exhibit large growth rates (e-folding timescales of months). However, the polar twist instability only operates at high field strengths (≥ 10⁵ G) and large vertical wavenumbers where it may be suppressed by turbulent diffusion. Furthermore, a poloidal field component may stabilize toroidal flux structures near the poles by essentially forming a twisted tube aligned with the rotation axis.
The magneto-shear instabilities studied by Gilman, Fox, Dikpati, and Cally are concerned with the joint instability of latitudinal differential rotation and strong toroidal fields which are thought to exist in the solar tachocline. They are likely related to the toroidal field instabilities described by Tayler (1973), Acheson (1978), and Spruit (1999) but a precise link has not yet been established. Other classes of hydrodynamic and magnetohydrodynamic (MHD) shear instabilities are also likely to operate in the tachocline and radiative interior. Notable among these is the magneto-rotational instability (MRI) described by Velikhov (1959), Chandrasekhar (1961) and Balbus and Hawley (1991) and applied to stellar interiors by Balbus and Hawley (1994). This instability is thought to generate vigorous turbulence in accretion disks which plays an essential role in the global angular momentum balance (Balbus and Hawley, 1998).
Unlike the quasi-2D instabilities studied by Gilman, Fox & Dikpati, the MRI operates mainly on relatively weak poloidal fields which tether axisymmetric rings of fluid to a particular point in the meridional plane. When these rings are perturbed, magnetic tension tends to resist shearing by the differential rotation. If the angular velocity decreases outward from the rotation axis (∂Ω/∂λ < 0), the resulting torques act to amplify the perturbations, leading to instability. When applied to the radiative interior of the Sun, Balbus and Hawley (1994) found that the instability was mainly confined to horizontal surfaces by the subadiabatic stratification, producing equatorward angular momentum transport which tends to drive the system toward shellular rotation. Toroidal fields are also subject to the MRI as long as the perturbations allow for a poloidal component. However, the MRI cannot occur in strictly 2D spherical shells, so it is distinct from the Gilman-Fox-Dikpati instabilities even in the toroidal field case. Furthermore, the MRI does not operate in regions where ∂Ω/∂λ > 0 or in the equatorial plane where buoyancy resists motions perpendicular to the rotation axis. The MRI criterion ∂Ω/∂λ < 0 is more limiting than its hydrodynamic analogue, the Rayleigh instability criterion, which states that a differential rotation profile is unstable if the specific angular momentum decreases outward: \(\partial{\cal L}/\partial\lambda < 0\) (e.g., Knobloch and Spruit, 1982).
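The difference between the two criteria can be made explicit with a model rotation profile \(\Omega\propto\lambda^{-q}\): the (ideal, weak-field) MRI requires only that Ω decrease outward (q > 0), whereas the Rayleigh criterion requires the specific angular momentum \({\cal L}=\Omega\lambda^{2}\propto\lambda^{2-q}\) to decrease outward (q > 2). The sketch below simply evaluates both conditions for a few values of q; it ignores stratification, which in the radiative interior largely confines the instability to horizontal surfaces.

```python
# Compare the (ideal, weak-field) MRI criterion with the hydrodynamic Rayleigh
# criterion for a model rotation profile Omega ~ lambda**(-q).  Purely
# illustrative; stratification and diffusion are ignored.

def stability(q):
    mri_unstable = q > 0.0          # dOmega/dlambda < 0
    rayleigh_unstable = q > 2.0     # d(Omega * lambda**2)/dlambda < 0
    return mri_unstable, rayleigh_unstable

for q in (-0.5, 0.5, 1.5, 2.5):
    mri, rayleigh = stability(q)
    print(f"q = {q:4.1f}:  MRI unstable = {mri},  Rayleigh unstable = {rayleigh}")
```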
As we have discussed, buoyancy in the subadiabatic radiative interior generally has a stabilizing influence on vertical shear, but it can also have a destabilizing effect in the presence of rotation and magnetic fields. Rotation can induce baroclinicity, which refers to a state in which isosurfaces of constant density and pressure do not coincide. Fluid particles can tap the gravitational potential energy in such a configuration if they are allowed to move horizontally as well as vertically, in effect circumventing the Schwarzschild criterion for convective stability, which applies only to vertical gradients. If a vertical shear is in thermal wind balance as is likely in the lower tachocline (Section 4.3.2), it may be subject to baroclinic instabilities. Such instabilities represent the main driver of weather systems on the Earth despite the large atmospheric Richardson numbers which suggest that the vertical shear would be stable in the absence of baroclinic effects (e.g., Vallis, 2005). Baroclinic instability in a stellar context was studied by Spruit and Knobloch (1984) who concluded that it is probably only significant very near the base of the convection zone where the stratification is relatively weak and where more standard shear instabilities may also occur. However, this work predated the discovery of the tachocline and should perhaps be revisited.
Cally (2000) has argued that a strong uniform toroidal field can further stabilize the vertical shear in a stably-stratified medium. However, if the field strength decreases with height then the fluid is top-heavy and is susceptible to magnetic buoyancy instabilities. Such instabilities likely play an essential role in tachocline dynamics but they have been comprehensively reviewed elsewhere in these volumes by Fan (2004), so we will not address them again here. We merely note that although shear can inhibit magnetic buoyancy instabilities (Tobias and Hughes, 2004), it can also induce them by forming concentrated magnetic structures (Brummell et al., 2002a; Cline et al., 2003a,b).
The presence of small but finite thermal, magnetic, and viscous diffusivities can also induce secular instabilities such as the Goldreich-Schubert-Fricke (GSF) instability (Knobloch and Spruit, 1982; Menou et al., 2004). These generally operate either on small spatial scales or on long temporal scales so they have little bearing on global-scale dynamics which occur over the course of a solar activity cycle. However, they may play a role in tachocline confinement (Section 8.5). Secular instabilities and rotational shear instabilities may also be important for chemical mixing in the radiative interior and light-element depletion in the solar envelope (Zahn, 1994; Pinsonneault, 1997; Barnes et al., 1999; Mathis and Zahn, 2004).
8.3 Rotating, stratified turbulence
Penetrative convection and instabilities will induce motions in the lower, stably-stratified portion of the tachocline which will, in general, undergo further nonlinear interactions. The resulting dynamics are likely to be turbulent in nature as a result of the low molecular dissipation. Shear instabilities and gravity wave breaking, in particular, can generate vigorous turbulence (e.g., Townsend, 1976; Tritton, 1988; Staquet and Sommeria, 2002).
Turbulence in the lower tachocline will be highly anisotropic due to the strong stable stratification and the large rotational influence. Both effects tend to make the dynamics quasi-2D, but in very different ways. The rotational influence will induce vertical coherence, organizing the flow into vortex columns aligned with the rotation vector (e.g., Bartello et al., 1994; Cambon et al., 1997). This is another manifestation of the Taylor-Proudman theorem which was also discussed in Section 4.3.2. Meanwhile, stable stratification inhibits vertical flows and tends to decouple horizontal layers, favoring pancake-like vortices with large vertical shear (e.g., Métais and Herring, 1989; Riley and Lelong, 2000; Godoy-Diana et al., 2004). The relative influence of these two competing effects can be gauged by the Rossby deformation radius, LD, defined by
$${L_{\rm{D}}} = {{N{\Delta _{\rm{t}}}} \over {2{\Omega _0}}} = {{{\rm{Ro}}} \over {{\rm{Fr}}}}{r_{\rm{t}}},$$
where N is the Brunt-Väisälä frequency. The Rossby number and Froude number are defined as Ro = U/(2Ω0rt) and Fr = U/(NΔt) where U is a characteristic velocity scale. For motions on scales ≤ LD, stratification breaks the vertical coherence induced by rotation (Dritschel et al., 1999). In the lower tachocline LD ∼ 5R⊙ so stratification dominates but LD approaches zero at the base of the convection zone.
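As a rough numerical check (an added illustration; the layer thickness Δt ≈ 0.03 R⊙ and N ≈ 10⁻³ s⁻¹ are assumed representative values), the deformation radius can be evaluated directly:

```python
# Rossby deformation radius L_D = N * Delta_t / (2 * Omega_0) for assumed
# lower-tachocline values; all inputs are representative order-of-magnitude numbers.
R_sun   = 6.96e10          # solar radius [cm]
Omega0  = 2.7e-6           # mean solar rotation rate [1/s]
N       = 1.0e-3           # Brunt-Vaisala frequency in the lower tachocline [1/s]
Delta_t = 0.03 * R_sun     # assumed tachocline thickness [cm]

L_D = N * Delta_t / (2.0 * Omega0)
print(f"L_D ~ {L_D:.1e} cm ~ {L_D / R_sun:.1f} R_sun")
# Gives L_D of about 5-6 R_sun, so stratification dominates in the lower
# tachocline; near the base of the convection zone N (and hence L_D) -> 0 and
# rotation takes over.
```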
Two-dimensional turbulence has been studied extensively both theoretically and numerically. It is now well known that nonlinear interactions involving triads of wavevectors in 2D turbulence conserve enstrophy (vorticity squared) as well as energy, and that this gives rise to an inverse cascade of energy from small to large scales (e.g., Lesieur, 1997; Pope, 2000)20. This is in stark contrast to 3D turbulence which exhibits a forward cascade of energy from large to small scales where dissipation occurs. The inverse cascade is manifested as small vortices interact and coalesce into larger vortices.
The inverse cascade in 2D turbulence will proceed to the largest scales unless some mechanism suppresses it, such as surface drag in the oceans and atmosphere. Another mechanism for halting the inverse cascade which is more relevant for solar applications occurs in geometries which admit Rossby waves such as rotating spherical shells or β-planes. If the rotation is rapid enough, patches of vorticity can propagate as Rossby wave packets and disperse before they coalesce. Since the phase speed of a Rossby wave increases with the wavelength (see Appendix A.6), this occurs only for wavenumbers below a critical value kβ, often referred to as the Rhines wavenumber after Rhines (1975). At scales above \(k_{\beta}^{-1}\), the flow has a Rossby-wave character and at scales below \(k_{\beta}^{-1}\), it has the character of 2D turbulence.
The most notable thing about the arrest of the inverse cascade by Rossby wave dispersion is that it is anisotropic (Rhines, 1975; Vallis and Maltrud, 1993). Low latitudinal wavenumbers are suppressed, but the cascade can proceed to low longitudinal wavenumbers21. This tends to produce banded zonal flows as observed in the jovian planets (Yoden and Yamada, 1993; Nozawa and Yoden, 1997; Huang and Robinson, 1998; Danilov and Gurarie, 2004). Similar processes also occur in shallow-water and two-layer systems, in both freely decaying and forced configurations (Panetta, 1993; Rhines, 1994; Cho and Polvani, 1996a, b; Peltier and Stuhne, 2002; Kitamura and Matsuda, 2004). The number of bands, or jets, is roughly given by \({\rm Ro}^{-1/2}\). Taking U ∼ 10 m s⁻¹ yields Ro ∼ 0.004 in the solar tachocline, which implies as many as 15 jets.
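The jet estimate quoted above follows directly from the Rhines-scale argument; a minimal sketch (added here, using the characteristic values quoted in the text) is:

```python
# Rossby number Ro = U / (2 * Omega_0 * r_t) and the rough jet count Ro**(-1/2)
# implied by the Rhines-scale argument, using the values quoted in the text.
R_sun  = 6.96e10               # solar radius [cm]
r_t    = 0.7 * R_sun           # tachocline radius [cm]
Omega0 = 2.7e-6                # mean rotation rate [1/s]
U      = 1.0e3                 # characteristic velocity, 10 m/s in cm/s

Ro = U / (2.0 * Omega0 * r_t)
print(f"Ro ~ {Ro:.1e}, implying roughly {Ro**-0.5:.0f} zonal jets")
# Ro ~ 0.004, i.e. of order 15 jets, as stated in the text.
```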
Does a quasi-2D inverse cascade occur in real 3D flows? It does in the so-called quasi-geostrophic limit first studied by Charney (1971). He showed that in the limit of strong stratification and rapid rotation (Fr² ≪ Ro ≪ 1), nonlinear interactions conserve potential enstrophy (potential vorticity squared; see Appendix A.6) as well as energy, again giving rise to an inverse cascade of energy (see also Salmon, 1978; Vallis, 2005). This has been demonstrated in 3D simulations by Métais et al. (1996). However, the quasi-geostrophic limit does not strictly apply to global-scale motions in spherical shells. It is plausible that similar dynamics occur in spherical systems but this has not yet been rigorously demonstrated.
Thus far in our discussion we have neglected magnetic fields, which can have a profound influence on self-organization processes in turbulent fluids. MHD turbulence does not conserve enstrophy even in the 2D limit, so there is nothing to inhibit a forward cascade of kinetic energy (e.g., Biskamp, 1993; Kim and Dubrulle, 2002). An inverse cascade does occur, but it involves a different ideal invariant, namely magnetic helicity (or in 2D, the magnetic potential). Thus, the physical mechanisms described above which can create banded zonal flows probably do not operate globally in the tachocline, although related dynamics likely occur in relatively field-free regions. Self-organization in MHD turbulence generally proceeds by creating large-scale magnetic structures which can then feed back on mean flows through the Maxwell stress.
Another important factor in a tachocline context is the presence of rotational shear imposed by large-scale stresses from the overlying convective envelope. If the turbulence is itself driven by instabilities of this rotational shear, one may expect it to have a diffusive influence, extracting energy from the shear flow by reducing its amplitude. One may also expect a diffusive behavior if the turbulence is small-scale, isotropic, and homogeneous across horizontal surfaces; in other words, if there is a scale separation such that turbulent mixing is local. Alternatively, if the flow is dominated by waves, one might expect non-local transport which is in general non-diffusive (e.g., McIntyre, 1998, 2003).
The influence of an imposed differential rotation on 2D turbulence in a β-plane was studied by Shepherd (1987). He found that the shearing of vortices by the differential rotation substantially altered the nonlinear transfer rates among spectral modes. In forced-dissipative simulations, small-scale turbulent motions tended to extract energy from the mean shear but the shear-induced Reynolds stress from the larger-scale wave field (k ≤ kβ) tended to amplify the mean flow. The net transfer between the mean flow and the fluctuations about it depended sensitively on the parameters of the problem. Shepherd concluded that this complex interaction could not be modeled with a simple linear parameterization, diffusive or otherwise. More recent simulations by Williams (2003) in 2D spherical shells have also shown that the interaction between Rossby wave turbulence and horizontal shear flows can act either to suppress or enhance the shear, depending on the particular details of the problem.
Research into the interaction between a shear flow and 3D, stably-stratified turbulence has focused mainly on the case of non-rotating Cartesian domains with vertical shear. Here an important parameter is the Richardson number Ri = (N/S)², where S is the mean shear (cf. Section 8.2). At small Ri (shear-dominated), the turbulent transport of momentum and buoyancy tends to be down-gradient (diffusive) but at large Ri (buoyancy-dominated), turbulent transport is generally oscillatory and can be persistently counter-gradient (Holt et al., 1992; Galmiche et al., 2002; Jacobitz, 2002). These studies are based on numerical simulations of freely-evolving (decaying) turbulence with homogeneous and isotropic initial conditions and an imposed shear. An effective time-dependent viscosity and diffusivity can be defined based on the instantaneous turbulent fluxes and the mean gradients as shown in Figure 26. Counter-gradient transport is manifested as a negative turbulent viscosity after about 2.5 shear timescales in the strongly-stratified simulation (Figure 26, panel b). Although oscillatory counter-gradient transport is a robust result of these numerical experiments, it may be a consequence of how they are set up: turbulent fluctuations are sheared by a mean flow which is switched on at some arbitrary time. An analysis in terms of rapid distortion theory by Hanazaki and Hunt (2004) suggests that the counter-gradient fluxes become very weak as Ri → ∞ and may be absent altogether in statistically steady flows.
Results are shown from simulations of freely-evolving stably-stratified turbulence with imposed shear. The effective turbulent viscosity νt (black lines) and turbulent thermal diffusivity κt (red lines) are plotted as a function of time for simulations with vertical shear (solid lines) and horizontal shear (dashed lines). The time is normalized with respect to the shear rate, |∇U|⁻¹, and νt and κt are normalized with respect to the molecular values. Frames (a) and (b) correspond respectively to moderately stratified (Ri = 0.2) and strongly stratified (Ri = 2) cases (from Jacobitz, 2002).
Jacobitz (2002) also considered the case of horizontal shear in a vertically stratified domain. In this case the turbulent transport was generally down-gradient (diffusive) even for strong stratification (Figure 26). Similar conclusions were also reached by Miesch (2003) who found down-gradient horizontal angular momentum transport and counter-gradient vertical transport in simulations of rotating, stably-stratified turbulence in thin spherical shells with random external forcing.
Counter-gradient transport in stably-stratified flows is often associated with the presence of waves (although this is not the only mechanism; e.g., Holt et al., 1992; Galmiche and Hunt, 2002). Waves carry pseudo-momentum which is conserved until they dissipate, giving rise to long-range transport as described in Section 8.4.
Magnetic fields generally tend to induce down-gradient momentum transport in turbulent shear flows by suppressing upscale kinetic energy transfer (cf. inverse cascades) and by imposing rigidity via magnetic tension. However, the transport efficiency can be reduced due to the partial offset of Reynolds and Maxwell stresses, which often have opposite senses (e.g., Kim et al., 2001). Magnetic fields can also suppress turbulent magnetic diffusion (Cattaneo and Vainshtein, 1991; Yousef et al., 2003). Still, magnetism can also have non-diffusive effects. For example, the balance between the Lorentz and Coriolis forces in toroidal field bands can induce zonal jets (see Section 8.2).
In summary, turbulent transport and self-organization in the tachocline are complex and not well understood. A variety of processes can act to establish or to suppress mean flows. Which of these prevail will depend on the subtleties of how the tachocline couples to the convection zone and radiative interior, a topic which will likely occupy researchers for many years to come.
8.4 Internal waves
Waves are ubiquitous in rotating, stratified flows. In the tachocline, they may be driven by penetrative convection (Section 8.1), shear, or instabilities (Section 8.2). Restoring forces may be provided by buoyancy (gravity waves), the Coriolis force (Rossby and other inertial waves; see Appendix A.6), magnetic tension (Alfvén waves), or some combination of the three22. We will refer to these modes collectively as internal waves.
Linear, non-dissipative waves cannot redistribute momentum in a time-averaged sense. However, waves can redistribute momentum if they dissipate by wave breaking or by thermal or viscous diffusion. Thus, waves induce a momentum transport from regions of excitation to regions of dissipation which is, in general, long-range (non-local) and can be counter-gradient (non-diffusive). There are multiple examples of wave-driven flows in the Earth's atmosphere where such non-local momentum transport is reasonably well-established (McIntyre, 1998; Shepherd, 2000; Baldwin et al., 2001).
Due to its buoyant nature, penetrative convection is particularly efficient at exciting gravity waves. These are, in general, influenced by the Coriolis force (i.e., they are inertial gravity waves) but if their period is close to the buoyancy period (N⁻¹) of a few hours then rotation may be neglected. For illustration, we consider a Cartesian domain defined such that \(\hat{x}\) and \(\hat{y}\) are the local longitude and latitude coordinates and \(\hat{z}\) is the height (antiparallel to g). Of particular interest in a tachocline context is the interaction of gravity waves with a vertical shear. The dispersion relation for small-wavelength internal gravity waves in a vertically-sheared zonal flow, \(U_{0}(z)\hat{x}\), is
$$\sigma - {k_x}{U_0} = N\cos \psi,$$
where σ is the frequency, kx is the component of the wave vector in the direction of the shear, and ψ is the angle it makes with the horizontal (see Appendix A.7). The direction of phase propagation is given by the angle ψ but in a stationary medium (U0 = 0), the group velocity is perpendicular to the phase velocity (Appendix A.7). The highest-frequency waves have σ = N and have a horizontal phase velocity (ψ = 0° or 180°).
The intrinsic frequency of the wave, σ, is set by the wave generation process, for example the timescale which characterizes penetrative convection. As the wave propagates vertically, this frequency is Doppler shifted by the background flow, U0. For illustration, we will assume U0 > 0. If the zonal phase speed of the wave is parallel to the mean flow (σkx > 0), the wave may encounter a critical layer where the Doppler-shifted frequency σ − kxU0 approaches zero. The resulting dynamics are illustrated in panel a of Figure 27. In a solar context, the vertical coordinate z may be regarded as increasing downward, with z = 0 at the base of the convection zone.
Resulting dynamics when an internal gravity wave encounters (a) a critical layer zc and (b) a trapping plane yt, indicated by dashed lines (see text). Curved lines represent ray paths while thin and bold arrows indicate the wavevector k and the group velocity including advection by the background flow \({\bf c}_{g}^{\prime}={\bf c}_{g}+U_{0}\hat{x}\), where \({\bf c}_{g}=\partial\sigma/\partial{\bf k}\). Ray paths are everywhere parallel to \({\bf c}_{g}^{\prime}\). In (a) the zonal velocity gradient is vertical, U0(z), and the perspective shows a longitude-depth (x, z) plane. Two ray paths are shown. As each wave asymptotically approaches zc the vertical wavenumber increases and the group velocity becomes parallel to \(\hat{x}\). In (b) the zonal velocity gradient is latitudinal, U0(y), and the perspective shows a horizontal (x, y) plane. The k and \({\bf c}_{g}^{\prime}\) vectors are shown at several points along a single ray path. As yt is approached, \({\bf c}_{g}^{\prime}\) again becomes parallel to \(\hat{x}\) (from Staquet and Sommeria (2002); see also Staquet and Huerre (2002)).
As the wave approaches the critical layer zc, its vertical wavenumber increases and its group velocity slows, making it more susceptible to viscous and thermal diffusion (see Appendix A.7). If it is not dissipated first by diffusion, the wave will increase in amplitude and eventually break before encountering the critical layer. Thus, there is generally a transfer of momentum from waves to the mean flow near a critical layer, a phenomenon which is often referred to as critical layer absorption.
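The essence of this filtering can be illustrated with a small numerical sketch (added here for illustration; the flow profile, wavelength, and frequency are assumed values, not solar measurements) that locates the critical layer for a prograde wave and shows that the corresponding retrograde wave never encounters one:

```python
import numpy as np

# Locate the critical layer where the Doppler-shifted frequency sigma - kx*U0(z)
# vanishes, for an assumed prograde zonal flow that increases with depth below
# the wave source.  All numbers are illustrative.
H     = 1.0e9                           # layer thickness [cm]; z increases downward
z     = np.linspace(0.0, H, 2000)
U0    = 5.0e3 * (z / H)                 # prograde flow: 0 at the source, 50 m/s at z = H
kx    = 2.0 * np.pi / 2.0e9             # zonal wavenumber (20 Mm wavelength) [1/cm]
sigma = 1.0e-5                          # wave frequency at the source [1/s]

for sign, label in ((+1, "prograde"), (-1, "retrograde")):
    doppler = sigma - sign * kx * U0    # intrinsic frequency seen by the wave
    hit = doppler <= 0.0
    if hit.any():
        print(f"{label} wave: critical layer at z ~ {z[hit.argmax()]:.1e} cm")
    else:
        print(f"{label} wave: no critical layer, propagates through the shear")
```

In this example the prograde wave is absorbed roughly two-thirds of the way through the layer while the retrograde wave passes through; this asymmetry is the basis of the momentum deposition described above.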
Similar dynamics can also occur in the presence of a horizontal shear, as illustrated in panel b of Figure 27. In this case we have a zonal flow which depends on y, the local latitudinal coordinate, \(U_{0}(y)\hat{x}\). If a wave propagates horizontally against the mean flow, it may encounter a trapping plane at yt where the Doppler-shifted frequency approaches the Brunt-Väisälä frequency, N. The horizontal group velocity again approaches the mean flow speed, and the latitudinal wavenumber ky increases without limit according to WKB theory. The wave will again break or dissipate by thermal or viscous diffusion before yt is reached, inducing a net momentum flux from the source region of the waves to the vicinity of the trapping plane. The nonlinear breaking of internal gravity waves near a trapping plane and the associated mass and momentum transport has recently been modeled numerically by Staquet and Huerre (2002).
In the Sun, waves are unlikely to dissipate solely by critical layer absorption (or the analogous process near a trapping plane). Rather, they dissipate mainly by radiative diffusion. Still, the processes discussed above give some insight into the resulting momentum transport. In the presence of a prograde zonal flow with vertical shear, a prograde wave will have a lower vertical group velocity and a higher vertical wave number than a retrograde wave. Thus, the prograde wave will be more readily dissipated by thermal diffusion even if it does not encounter a critical layer. The net result is a convergence of prograde momentum which acts to accelerate the mean flow. As the zonal velocity increases, Doppler shifts are amplified and waves travel shorter distances before they are dissipated. The region of convergence moves upward (toward lower z) while lower layers (higher z) decelerate again as a result of the reduced wave flux. In this way, oscillating zonal flows can be established which are analogous to the Quasi-Biennial Oscillation (QBO) in the Earth's stratosphere (Baldwin et al., 2001).
Wave-driven flows such as these in the solar tachocline have been studied by several authors (Fritts et al., 1998; Kumar et al., 1999; Kim and MacGregor, 2001, 2003; Talon et al., 2002). Kim and MacGregor (2001, 2003), in particular, considered a simple 1D model for a zonal flow with vertical shear U0 (z), in which momentum transport by radiatively-damped gravity waves is offset by viscous diffusion. Two waves were included in the model, prograde and retrograde, with horizontal velocities parallel and anti-parallel to the mean flow, respectively. As the turbulent viscosity was decreased, the temporal response of the resulting zonal flow underwent a transition from stationary to periodic, to quasi-periodic, and eventually to chaotic. A periodic solution is illustrated in Figure 28. When only a single wave was included in the presence of a background shear, the solutions were stationary and tended to produce counter-gradient angular momentum transport, accelerating the mean flow.
An oscillating zonal flow driven by gravity waves is shown based on the two-wave model described by Kim and MacGregor (2001) and MacGregor (2003). The left column illustrates the zonal velocity u as a function of height z at several instants in the evolution, with time increasing downward as indicated. The right column illustrates the corresponding rate of change of u induced by prograde waves (solid lines), retrograde waves (dashed lines), and viscous dissipation (dotted lines). All quantities are normalized with respect to a characteristic velocity and vertical length scales u0 and H0. As time proceeds, waves propagating with the same sense as u accelerate the flow in such a way that velocity extrema shift upward (toward the right) while new extrema appear deeper down. Vertical dotted lines in the left column are included as a reference point to illustrate the phase of the oscillation (courtesy K. MacGregor).
The selective dissipation of waves with horizontal phase speeds parallel to the mean zonal flow acts as a filtering mechanism, removing these modes from the wave field. This filtering is latitude-dependent, since the radial angular velocity gradient in the tachocline varies from positive values at the equator to negative values at the poles (Section 3.1). Fritts et al. (1998) argue that the momentum redistribution resulting from this inhomogeneous wave filtering will establish a residual meridional circulation which may have implications for chemical transport and the low abundance of Lithium in the solar envelope relative to cosmic abundances. Chemical transport by gravity waves has also been studied by other authors from the perspective of light-element depletion in stars, and is often parameterized in terms of an effective diffusion (Montalbán, 1994; Schatzman, 1996; Pinsonneault, 1997).
Waves which are not filtered out by shear or other processes in the tachocline will propagate deeper into the solar interior. Eventually, these waves too will dissipate, resulting in an exchange of angular momentum between the convective envelope and the radiative interior. In a steady state the net transport must vanish but over evolutionary timescales the Sun is not steady. Rather, the solar envelope is continually losing angular momentum via the solar wind. In this situation, Talon et al. (2002) argue that gravity waves will systematically extract angular momentum from the radiative interior over the lifetime of the Sun. The resulting coupling between the convection zone and radiative interior may help to explain why the mean rotation rate of these two regions is comparable (Section 3.1).
The dynamical influence of a toroidal magnetic field on gravity wave propagation is similar in some ways to that of a zonal flow. Here a magnetic critical layer exists where the horizontal group velocity of the wave approaches the Alfvén speed relative to the mean flow (Barnes et al., 1998; McKenzie and Axford, 2000; MacGregor, 2003). This is analogous to a hydrodynamic critical layer in that the vertical wavenumber increases without bound but the dynamics in the vicinity of the critical layer can be notably different. The presence of a toroidal field significantly limits the range of wavenumbers which can propagate without becoming evanescent. The Doppler-shifted frequency no longer vanishes in the critical layer; rather, it approaches ±kxυA, where υA is the Alfvén speed. If the field is strong, waves are Alfvénic in nature and propagate along the field lines. Gravity waves may therefore be absorbed by the critical layer (dissipated) or they may be converted to Alfvén modes which propagate horizontally. Such filtering by strong toroidal fields in the tachocline may greatly enhance the shear filtering described above (Kim and MacGregor, 2003).
Shear and magnetic fields not only filter waves by selective dissipation, but they can also reflect waves. In some cases, over-reflection can occur wherein there is a net transfer of energy from the field or shear to the waves. This can increase the amplitude of a wave to the point of nonlinear breaking. Since gravity waves are evanescent in the convection zone, wave reflection by angular velocity shear and toroidal fields in the lower tachocline may essentially create a waveguide, channeling gravity and Alfvén waves into a narrow horizontal layer, where they eventually dissipate by wave breaking or radiative diffusion (MacGregor, 2003).
8.5 Tachocline confinement
One of the most compelling questions about the tachocline is: Why is it so thin? The transition from a ∼ 30% latitudinal variation of angular velocity in the convection zone to nearly uniform rotation in the radiative interior occurs over roughly 5% or less of the solar radius (Section 3.2).
The issue is best illustrated by considering one of the pioneering papers on the subject: The very paper which coined the term tachocline. Soon after the first helioseismic indications that a rotational shear layer exists near the base of the convection zone, Spiegel and Zahn (1992) considered the problem from the perspective of an axisymmetric spin-down scenario. They considered a spherical volume of fluid in hydrostatic and geostrophic balance subject to an imposed latitudinal differential rotation on the upper boundary. This was intended to represent the radiative solar interior under the influence of wind stress from the convective envelope. A meridional circulation was quickly established in which the advective heat flux was balanced by radiative diffusion. If momentum transport by unresolved turbulent motions was neglected, they found that this circulation steadily spread into the radiative interior, redistributing angular momentum away from uniform rotation on a timescale of several billion years. If such a radiative spreading had occurred over the lifetime of the Sun, the differential rotation of the envelope would have spread deep into the solar interior, in marked contrast to the nearly uniform rotation inferred from helioseismic inversions (see Section 3.1). Further numerical calculations were later performed by Elliott (1997), confirming these results.
Thus, the question of why the tachocline is so thin is equivalent to asking what can stop this radiative spreading. Or from a somewhat different perspective, one may instead ask: What process or processes can maintain uniform rotation in the radiative interior, in spite of stresses exerted by the convection zone?
Spiegel and Zahn (1992) were the first to suggest a mechanism. They argued that turbulence arising from nonlinear shear instabilities would mix angular momentum in such a way that horizontal transport would be much more efficient than vertical transport, and would therefore drive the radiative interior toward shellular rotation (see Section 8.2). They modeled this turbulent transport as an anisotropic viscosity in which the horizontal component greatly exceeded the vertical component. Their calculations and subsequent calculations by Elliott (1997) demonstrated that this anisotropic transport could effectively halt the radiative spreading, producing an equilibrium profile in which the width of the tachocline, Δt, is given by
$${{{\Delta _{\rm{t}}}} \over {{r_{\rm{t}}}}}\sim{\left({{\Omega \over N}} \right)^{1/2}}{\left({{{{\kappa _r}} \over {{\nu _{\rm{H}}}}}} \right)^{1/4}},$$
where rt ∼ 0.7R⊙ is the tachocline location and νH is the horizontal turbulent viscosity. In the solar tachocline, Ω ∼ 2.7 × 10⁻⁶ s⁻¹, N ∼ 10⁻³ s⁻¹, and κr ∼ 10⁷ cm² s⁻¹. This implies that a turbulent viscosity as low as νH ∼ 3 × 10⁶ cm² s⁻¹ would be sufficient to confine the tachocline to about 5% of the solar radius, consistent with helioseismic inversions (Section 3.2). The figure cited by Elliott (1997) is about an order of magnitude less.
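Evaluating this scaling with the numbers just quoted (a simple check added here, not a new result) reproduces the few-percent width:

```python
# Spiegel & Zahn (1992) scaling, Equation (30), evaluated with the values quoted
# in the text.
Omega   = 2.7e-6      # rotation rate [1/s]
N       = 1.0e-3      # buoyancy frequency [1/s]
kappa_r = 1.0e7       # radiative diffusivity [cm^2/s]
nu_H    = 3.0e6       # horizontal turbulent viscosity [cm^2/s]

ratio = (Omega / N)**0.5 * (kappa_r / nu_H)**0.25     # Delta_t / r_t
print(f"Delta_t / r_t ~ {ratio:.3f}")
print(f"Delta_t / R_sun ~ {0.7 * ratio:.3f}")
# Delta_t/r_t ~ 0.07, i.e. Delta_t ~ 5% of the solar radius (with r_t ~ 0.7 R_sun),
# consistent with the statement in the text.
```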
Although Spiegel and Zahn (1992) identified nonlinear hydrodynamic shear instabilities in particular, other mechanisms may produce a similar confinement, provided they induce down-gradient horizontal angular momentum transport. Or, in other words, provided they act as a positive anisotropic turbulent viscosity with νH ≫ νV. One such alternative mechanism may be provided by the 2D and shallow-water magneto-shear instabilities studied by Gilman, Fox, Dikpati, and Cally which generally transport angular momentum poleward via the Maxwell stress (see Section 8.2). Another possible mechanism might be stratified turbulence induced by penetrative convection (Miesch, 2001, 2003).
These mechanisms may help to explain why the latitudinal differential rotation of the convective envelope does not spread inward, but they do little to explain why the radiative interior as a whole is rotating uniformly. Stratified, rotating turbulence near the base of the convection zone may produce down-gradient angular momentum transport in latitude but this is by no means certain and in any case, the vertical transport is likely to be counter-gradient (see Section 8.3). Deeper in the interior, angular momentum transport by gravity waves would also tend to enhance shear rather than suppress it due to the selective dissipation of prograde and retrograde modes (see Section 8.4). These points have been made repeatedly by McIntyre and others (McIntyre, 1994, 1998, 2003; Gough and McIntyre, 1998; Ringot, 1998). Gravity waves may still play a role in tachocline confinement, but only if there is some additional mechanism such as shear turbulence to provide an effective viscous diffusion (Talon et al., 2002)23. Hydrodynamic instabilities alone appear to be too inefficient to maintain uniform rotation (Spruit, 1999; Garaud, 2001; Mathis and Zahn, 2004).
The difficulties in producing diffusive angular momentum transport in rotating, stably-stratified flows by purely hydrodynamical means has led some to suggest that magnetic fields are necessary in order to maintain uniform rotation in the radiative interior (Rüdiger and Kitchatinov, 1997; Gough and McIntyre, 1998). Such fields may arise as a remnant, or fossil, left over from the gravitational collapse of the protostellar cloud from which the Sun formed. An axisymmetric poloidal field will resist differential rotation via magnetic tension. The resulting torques will tend to establish uniform rotation along magnetic field lines on an Alfvénic timescale, a result which is known as Ferraro's theorem (Cowling, 1957; Mestel and Weiss, 1987; MacGregor and Charbonneau, 1999). Turbulence induced by instabilities may then couple adjacent field lines. For example, as the solar wind spins down the convective envelope, angular velocity profiles may be established which decrease outward with cylindrical radius, ∂Ω/∂λ < 0. These would then be subject to magneto-rotational instabilities (Section 8.2) which, together with the torques implied by Ferraro's theorem, could establish uniform rotation throughout the radiative interior. Diffusive instabilities may also play a role (Menou et al., 2004).
According to Ferraro's theorem, the fossil field must be confined entirely to the radiative interior in order to maintain uniform rotation. If poloidal field lines were to extend into the convective envelope, then at least some fraction of the differential rotation there would be transmitted into the interior, which would be inconsistent with helioseismic inversions. This expectation is borne out by numerical calculations (MacGregor and Charbonneau, 1999).
If the fossil field is confined to the radiative interior and meridional circulation is neglected, the tachocline which develops is essentially a classical Hartmann layer in which magnetic tension balances viscous diffusion (Rüdiger and Kitchatinov, 1997; MacGregor and Charbonneau, 1999). In this case the tachocline width is given by
$${{{\Delta _{\rm{t}}}} \over {{r_{\rm{t}}}}}\sim{\left({{{4\pi \rho} \over {r_{\rm{t}}^2}}{{\nu \eta} \over {B_0^2}}} \right)^{1/4}}\sim 5 \times {10^{- 5}}B_0^{- 1/2},$$
where B0 is the poloidal field strength at rt. The final equality in Equation (31) is derived using ρ ∼ 0.2 g cm⁻³ and molecular values for the diffusivities, ν ∼ 5 cm² s⁻¹ and η ∼ 10³ cm² s⁻¹. A field strength of B0 ≥ 10⁻⁶ G would confine the tachocline to less than 4% of the solar radius, well within helioseismic limits. If there is enough vertical mixing to act as a turbulent viscosity and diffusivity, a larger magnetic field would be needed.
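A quick evaluation of Equation (31) with these molecular values (added as an illustration) recovers both the quoted coefficient and the minimum confining field:

```python
import numpy as np

# Hartmann-layer width, Equation (31), with the molecular diffusivities quoted in
# the text, and the poloidal field needed to keep Delta_t/r_t below 4%.
R_sun = 6.96e10
r_t   = 0.7 * R_sun        # tachocline radius [cm]
rho   = 0.2                # density [g/cm^3]
nu    = 5.0                # molecular viscosity [cm^2/s]
eta   = 1.0e3              # molecular magnetic diffusivity [cm^2/s]

coeff = (4.0 * np.pi * rho * nu * eta / r_t**2)**0.25   # Delta_t/r_t at B0 = 1 G
print(f"Delta_t/r_t ~ {coeff:.1e} * B0**(-1/2)   (compare ~5e-5 in Equation 31)")

B0_min = (coeff / 0.04)**2                              # field giving Delta_t/r_t = 0.04
print(f"B0 needed for Delta_t/r_t < 0.04: ~{B0_min:.1e} G")
```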
In all likelihood, there will be a significant meridional circulation in the tachocline. In the Spiegel and Zahn (1992) scenario discussed above, for example, the differential rotation spreads not by viscous diffusion but by advection due to a radiatively-driven circulation. In this case, MacGregor and Charbonneau (1999) estimate that a field of B0 ∼ 2 × 10−4 G would be required for confinement, about two orders of magnitude larger than the viscous estimate implied by Equation (31).
Meridional circulation also plays an essential role in the tachocline model proposed by Gough and McIntyre (1998). Here the circulation is driven by the Reynolds stress in the convection zone through what may be called gyroscopic pumping (McIntyre, 1998). Consider an axisymmetric ring of fluid. If the ring is subject to a prograde longitudinal force it will tend to drift away from the rotation axis due to the Coriolis force. If the force is retrograde, the ring will drift inward. In the solar convection zone, the Reynolds stress acts to accelerate the equator relative to the poles, which would tend to establish a global circulation.
Further insight into how this operates can be obtained by considering the angular momentum balance expressed by Equation (8)
$${\bf{\nabla}} \cdot {{\bf{F}}^{{\rm{MC}}}} = \bar \rho \langle {{{\bf{v}}_{\rm{M}}}} \rangle \cdot {\bf{\nabla}} {\cal L} = - {\bf{\nabla}} \cdot {{\bf{F}}^{{\rm{RS}}}},$$
where we have also used Equation (6). The Reynolds stress produces a flux convergence and divergence at low and high latitudes respectively. By Equation (32), this induces a meridional circulation across lines of constant specific angular momentum, \({\cal L}=\lambda^{2}\Omega\). In the Sun, \({\bf{\nabla }}{\cal L}\) is approximately perpendicular to the rotation axis and directed outward (Figure 6, panel b), so Equation (32) implies a flux divergence at mid-latitudes in the convection zone. Below the convection zone the Reynolds stress is neglected and the circulation follows surfaces of constant \({\cal L}\).
In the Gough & McIntyre model, this gyroscopic circulation is prevented from burrowing deep into the radiative interior by a fossil poloidal field as illustrated in Figure 29. The tachocline itself is non-magnetic but there exists a thin boundary layer at its base, called the tachopause, where the circulation is diverted horizontally by the interior field. This gives rise to a horizontal convergence and an associated upwelling at latitudes of about 30°, where the radial shear across the tachocline vanishes. The tachopause occupies only a few percent of the total tachocline width which is given by
$${{{\Delta _{\rm{t}}}} \over {{r_{\rm{t}}}}}\sim 3 \times {10^{- 2}}B_0^{- 1/9}.$$
This suggests field strengths of ∼ 0.1–1 G but a range of values is consistent with helioseismic inversions because Δt is relatively insensitive to B0.
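The weak sensitivity to B0 is easy to see by tabulating Equation (33) over several decades of field strength (an illustrative calculation added here):

```python
# Gough & McIntyre (1998) scaling, Equation (33): Delta_t/r_t ~ 3e-2 * B0**(-1/9).
# The 1/9 exponent makes the width nearly independent of the interior field strength.
for B0 in (0.01, 0.1, 1.0, 10.0):                     # poloidal field strength [G]
    print(f"B0 = {B0:5.2f} G  ->  Delta_t/r_t ~ {3.0e-2 * B0**(-1.0 / 9.0):.3f}")
```

Over three decades in B0 the predicted width only varies between roughly 2% and 5% of rt, which is why helioseismic measurements of the tachocline thickness constrain the interior field strength so weakly.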
Schematic diagram from Gough and McIntyre (1998) (http://www.nature.com), illustrating the proposed tachocline structure. A meridional circulation (black lines) is driven by gyroscopic pumping in the convective envelope (orange) and penetrates into the tachocline (green) along lines of constant specific angular momentum \({\cal L}\). A poloidal magnetic field (red lines) in the radiative interior (blue) halts the downward spread of this circulation in a thin boundary layer called the tachopause. In upwelling regions, the field structure is uncertain (dotted lines). The width of the tachocline is exaggerated in this perspective.
The dynamical balance in the tachopause not only keeps the circulation from spreading inward, but it also keeps the fossil field confined to the radiative interior. This can only occur in downwelling regions; upwelling regions are likely to be more complex and may alter this simple picture. In the most recent incarnation of the Gough & McIntyre model (McIntyre, private communication), some of the magnetic field lines in upwelling regions follow the circulation streamlines into the convection zone. Regions in which the angular velocity decreases outward (dΩ/dλ < 0) would then be subject to magneto-rotational instabilities (MRI; see Section 8.2) which would alter the local tachocline structure, still maintaining thermal wind balance.
Although magnetic confinement models are compelling, there are many aspects which need further verification and clarification. Among these is the configuration of the fossil field. Axisymmetric poloidal fields are likely to be unstable over evolutionary timescales so any fossil field which may exist in the solar interior today is probably of mixed poloidal and toroidal topology (Mestel and Weiss, 1987; Spruit, 1999). This has been incorporated into the Gough & McIntyre model, but still only in a schematic way. Another open question is whether a circulation which is driven in the convection zone can overcome the stiff subadiabatic stratification in the lower tachocline and penetrate all the way to the tachopause (Gilman and Miesch, 2004).
Some aspects of the Gough & McIntyre model have been investigated numerically by Garaud (2002) who solved the axisymmetric MHD equations under the Boussinesq approximation. The circulation in Garaud's model was driven by Ekman pumping and bore little resemblance to the baroclinic circulations considered by either Gough and McIntyre (1998) or Spiegel and Zahn (1992). Nevertheless, the results did demonstrate that a circulation is capable of confining a poloidal field largely to the radiative interior. Furthermore, the field was able to establish nearly uniform rotation in the interior over an intermediate range of field strengths.
A common feature in nearly all magnetic confinement models is the presence of a polar pit. This is a region near the magnetic poles where the poloidal field is primarily radial and therefore cannot confine the tachocline. Here the meridional circulation and consequently the differential rotation spreads much deeper into the radiative interior. This could in principle be probed with helioseismology, although the low sensitivity of frequency splittings to angular velocity variations near the rotation axis would make it difficult to detect. Currently there is little helioseismic evidence either supporting or refuting the presence of a polar pit.
An alternative to tachocline confinement by a weak fossil field in the radiative interior is tachocline confinement by a strong dynamo field originating in the convection zone. This possibility has been explored by Forgács-Dajka and Petrovay (2002) and Forgács-Dajka (2004) who consider a thin, axisymmetric shell of fluid under the anelastic approximation. A latitudinal differential rotation is imposed on the upper boundary along with an oscillatory poloidal field intended to represent dynamo processes in the convective envelope. The characteristic penetration depth of the field is the electromagnetic skin depth for a conductor, \(\sqrt{2\eta_{\rm t}/\omega_{\rm c}}\), where ηt is a turbulent diffusivity and ωc is the frequency of the oscillation. If the turbulent diffusivity is large enough (∼ 10¹⁰ cm² s⁻¹) and if the imposed field is strong enough (∼ 10³ G), then the field can penetrate deep enough to suppress the spread of differential rotation into the interior.
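As a rough check of this mechanism (an added illustration using the parameter values just quoted), the skin depth for a field oscillating with the 22-yr magnetic cycle is:

```python
import numpy as np

# Electromagnetic skin depth sqrt(2 * eta_t / omega_c) for a poloidal field
# oscillating with the 22-yr magnetic cycle, using the turbulent diffusivity
# quoted in the text.
eta_t   = 1.0e10                   # turbulent magnetic diffusivity [cm^2/s]
P_cycle = 22.0 * 3.156e7           # 22-yr cycle period [s]
omega_c = 2.0 * np.pi / P_cycle    # oscillation frequency [1/s]
R_sun   = 6.96e10

d_skin = np.sqrt(2.0 * eta_t / omega_c)
print(f"skin depth ~ {d_skin:.1e} cm ~ {d_skin / R_sun:.3f} R_sun")
# ~0.02 R_sun, comparable to the helioseismic tachocline thickness, which is why
# such a large eta_t is needed for the oscillating field to penetrate the layer.
```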
It is an open question how the relatively weak Lorentz force and circulations associated with magnetic confinement by a fossil field may coexist with and couple to the much stronger forces and motions which exist in the convection zone. In this context, a distinction is often made between fast tachocline dynamics which occur on timescales of weeks to decades and slow tachocline dynamics which occur on much longer timescales (e.g., Gilman, 2000a). Nearly all of the processes discussed in Sections 8.1–8.4 fall under the category of fast dynamics. Although they involve relatively weak circulations, the tachocline models of Forgács-Dajka and Petrovay (2002) and Forgács-Dajka (2004) may also be classified as fast because they require efficient turbulent mixing to operate and because they are concerned with dynamo-generated fields with an oscillation period of 22 years. The remaining magnetic confinement models discussed in this section represent slow dynamics. For example, the overturning time scale for the tachocline circulation in the Gough & McIntyre model is of order a million years. Fast dynamics are likely to dominate in the upper tachocline which probably overlaps with the convection zone and overshoot region. However, slow dynamics may be ultimately responsible for the nearly uniform rotation of the radiative interior and may therefore determine the lower boundary of the tachocline.
9 Conclusion: Making Sense of the Observations
We are entering an exciting new age in our exploration of the solar interior. The continuous monitoring of global solar oscillations with high-resolution helioseismic instruments is still a relatively recent endeavor, covering roughly half of a (22-yr) solar activity cycle. Further monitoring will improve our understanding of rotational and structural variations and may reveal new patterns. Local helioseismology (Section 2.2) is even younger, and as the field continues to mature it promises ever greater insights into convective flows, magnetic activity, and global circulations below the photosphere. These helioseismic investigations together with increasingly powerful computing resources are fostering progressively more sophisticated and realistic numerical and theoretical models of solar interior dynamics.
Where does this interplay between models and observations now stand? Meaningful comparisons between the convective patterns found in global, 3D simulations and those thought to exist in the upper solar convection zone are now becoming feasible. Solar sub-surface weather (SSW) maps obtained from local helioseismology and large-scale structures inferred from the correlation tracking of surface features both reveal evolving patterns comparable to those seen in global simulations (Section 3.5). Further investigations are required to strengthen this connection and to understand where it currently appears to break down, most notably in flows associated with active regions.
Still, the main point of contact between global convection simulations and solar observations remains the internal rotation profile. There is little doubt that convection drives differential rotation in the solar envelope. Even a cursory look at the angular velocity profile inferred from helioseismology (Figure 1) clearly reveals a profound difference between the dynamics in the convection zone and in the stably-stratified radiative zone below. This, together with sound speed inversions, provides a dramatic validation of solar structure theory as a whole, although there are still discrepancies which must be understood, particularly in light of new elemental abundance determinations. The question is: How does convection redistribute angular momentum in such a systematic way?
Global simulations suggest that the solar differential rotation is maintained both through the Reynolds stress and through inhomogeneous convective heat transport, the latter of which can establish a thermal wind (Sections 4.3 and 6.3). Whereas the Reynolds stress dominates in the upper convection zone, the differential rotation in the lower convection zone is nearly in thermal wind balance. A realistic model must therefore take into account both momentum and heat transport by turbulent convection under the influence of rotation, stratification, and magnetism.
The global redistribution of angular momentum by the Reynolds stress is dominated by extended downflow lanes which are oriented north-south and which are confined primarily to low latitudes (Section 6.3). These exist amid a more intricate, evolving downflow network which becomes more isotropic at high latitudes and which fragments into an ensemble of disconnected and intermittent plumes at deeper layers (Figures 9 and 12). There is a possibility that these convective patterns (and their associated transport properties) may change as the resolution is further increased and the parameters achieve more solar-like conditions (Section 7.1). However, observations of granulation in the solar photosphere demonstrate that such patterns can persist in solar parameter regimes. Furthermore, the close correspondence between observations and simulations of granulation suggests that the essential dynamics of solar convection can indeed be captured using a large-eddy simulation approach (e.g., Stein and Nordlund, 1998, 2000; Keller et al., 2004; Vögler et al., 2005; Rincon et al., 2005). The rotation profiles in global simulations are in good agreement with helioseismic inversions in their gross features, if not in their finer details. A conspicuous shortcoming of current simulations is the absence of a self-consistently maintained tachocline. This can likely be attributed to insufficient spatial resolution and temporal duration to accurately capture the wide range of processes which may be occurring near the base of the convection zone (Section 8).
Another difficulty in many (but not all) simulations is a tendency to spin up the poles, producing a high-latitude prograde polar vortex which is not found in helioseismic inversions (Section 6.3). This is mainly due to axisymmetric circulations which tend to conserve their angular momentum; simulations which do not produce a polar vortex exhibit meridional circulation patterns which are confined to low latitudes. These results suggest that the meridional circulation in the solar envelope may not extend all the way to the poles.
The meridional circulation is driven by small differences between relatively large forces which are nearly in balance. This leads to large spatial and temporal variations in numerical simulations and possibly also in the Sun (Section 6.4). Near the surface, simulations typically exhibit poleward circulations at low latitudes in rough agreement with photospheric measurements and helioseismic inversions, although the latitudinal extent of these circulations is generally less in the simulations. Near the base of the convection zone, penetrative convection simulations yield equatorward circulation as is assumed in flux-transport dynamo models (Section 6.4). This equatorward circulation arises from the rotational alignment of downflow plumes, which also produces poleward angular momentum transport in the overshoot region (Section 6.3).
It is particularly important to understand dynamics near the base of the convection zone from the perspective of dynamo theory. Meaningful comparisons between global convection simulations and observations of magnetic activity will only be possible if the simulations incorporate tachocline dynamics to some degree, either by resolving the relevant processes or by parameterizing them. Improved dynamo simulations are necessary to better understand fundamental elements of the solar activity cycle such as the butterfly diagram as well as more subtle aspects such as chirality patterns (Section 3.8). An accurate representation of magnetic activity may also be a prerequisite to reproducing flow patterns such as torsional oscillations which appear to be driven by the Lorentz force associated with the dynamo-generated field (Yoshimura, 1981; Schüssler, 1981; Kitchatinov et al., 1999; Durney, 2000b; Covas et al., 2001, 2004; Bushby and Mason, 2004). Capturing such processes in a 3D global convection simulation represents one of the most challenging and important frontiers of solar modeling.
The further exploration of tachocline dynamics is in itself a diverse and fascinating frontier which will be the focus of many theoretical, computational, and observational efforts in the coming years. The structure of the tachocline and its coupling to the convective envelope and radiative interior involves an intricate interplay between penetrative convection, instabilities, stably-stratified turbulence, and waves in the presence of rotational shear and magnetism.
The most compelling aspect of the tachocline, namely its thinness, can probably be attributed at least in part to magnetic fields. A fossil field permeating the radiative interior is currently the leading explanation for the nearly uniform rotation in this region inferred from helioseismology (Section 8.5). However, magnetic confinement models are still rather schematic and much more theoretical and numerical work is needed to verify and clarify the proposed mechanisms. Furthermore, relatively 'fast' dynamics likely dominate in the upper tachocline where penetrative convection, instabilities, waves, and turbulence redistribute momentum and energy on timescales of months to years (fossil-field confinement models operate on timescales of ∼ 10⁶ yr).
The depth and location of the tachocline clearly vary with latitude but the base of the convection zone and overshoot region apparently do not (Section 3.6). This implies that tachocline structure is not governed solely by penetrative convection and, furthermore, that instabilities and turbulence in the lower tachocline do not produce enough vertical mixing to substantially alter the background stratification. The prolate structure of the tachocline may be a result of latitudinal pressure gradients induced by the strong toroidal fields which are thought to exist at low and mid-latitudes over the course of the solar cycle (Dikpati and Gilman, 2001b). Magnetic confinement models also exhibit larger tachocline depths at high latitudes due to the assumed dipolar structure of the poloidal field (the polar pit; see Section 8.5). However, it is unclear from these latter models why the tachocline may be prolate.
Temporal variations provide another means by which to investigate tachocline dynamics. In particular, helioseismic inversions have revealed a 1.3-year oscillation in the angular velocity, which appears to straddle the base of the convection zone at low latitudes (Section 3.3). This may arise from the interaction of gravity waves and shear (Section 8.4). Alternatively, it may be a manifestation of the MHD shear instabilities discussed in Section 8.2, which generally have an oscillatory component. A third possibility is that the tachocline oscillations arise from spatiotemporal fragmentation of the longer-period torsional oscillations (Covas et al., 2001, 2004).
Distinguishing between these alternatives will require more detailed probing of tachocline structure. For example, the joint instability of a banded toroidal field and latitudinal differential rotation predicts the presence of a prograde jet which provides gyroscopic stabilization against the tipping of the band (Section 8.2). The search for such jets in helioseismic inversions is going on now and has produced a few possible candidates (Christensen-Dalsgaard et al., 2004).
The mere presence of a zonal jet in the tachocline does not necessarily indicate the gyroscopic stabilization of a toroidal band. Self-organization processes in rotating, stratified turbulence tend to produce banded zonal flows even in the absence of magnetic fields (Section 8.3). However, the sense of the jet can provide clues as to its origin. For example, gyroscopic stabilization requires a prograde jet whereas a breaking Rossby wave will produce a retrograde jet (e.g., McIntyre, 1998).
Probing the interior of a star is not easy, but we are making progress. Ambitious observing programs and modeling efforts promise more excitement in the near future.
Solar neutrinos are an exception to this maxim. Neutrinos which are generated in the core of the Sun can propagate unhindered through the solar interior and ultimately be detected on Earth. Such measurements provide important constraints on models of solar structure and evolution and they have some potential for probing magnetic fields near the base of the convection zone (Sturrock and Weber, 2002).
Gravity waves also exist in the Sun and are of potential importance in helioseismology (Christensen-Dalsgaard, 2002). However, they are confined to the deep radiative interior so they are much more difficult to observe and have not yet been unambiguously detected.
The Sun is a slow rotator in the sense that the centrifugal force is many orders of magnitude smaller than the gravitational force. Still, large-scale motions in the deep convection zone may be slow enough that the Coriolis force dominates over the inertial force in the rotating frame, which implies small Rossby numbers.
The cylinder which is aligned with the rotation axis and tangent to the base of the convection zone.
Counter-clockwise in the northern hemisphere and clockwise in the southern hemisphere.
Helioseismic measurements do not indicate a polar spin-up; on the contrary, they suggest the pole rotates even slower than expected based on a smooth extrapolation from lower latitudes (Section 3.1). However, inversions become unreliable near the pole so the angular velocity profile at the highest latitudes remains somewhat uncertain.
Note that Ω0·∇⟨υϕ⟩ = λΩ0·∇Ω, so the baroclinic contribution to the zonal velocity gradient, plotted in panel c of Figure 16, is obtained by multiplying Equation (11) by λ/Ω0.
At the top of the shell in case TUR, νt = 3 × 10¹² cm² s⁻¹ and κt = 3 × 10¹³ cm² s⁻¹ whereas for case P, νt = 2.5 × 10¹² cm² s⁻¹ and κt = 1 × 10¹³ cm² s⁻¹. In both cases, ν and κ vary with depth in proportion to \(\bar{\rho}^{-1/2}\). The resolution (Nθ, Nϕ, NR) is (256, 512, 98) and (512, 1024, 98) in Cases TUR and P, respectively.
Case P does not exhibit this tendency over the time interval shown in Figure 17 but it is present over other averaging intervals and in its progenitor, case TUR; see Figures 16 and 17 of Miesch et al. (2000).
Back-reaction of the dynamo-generated field on the flow via the Lorentz force does not significantly change the convective patterns in case M3 but it does tend to suppress the differential rotation; see Section 6.3.
More laminar simulations do exhibit global patterns, with positive and negative current helicity in the northern and southern hemispheres, respectively (Gilman, 1983; Glatzmaier, 1985a).
We use the term α-effect here in the general sense of a process which converts toroidal field energy to poloidal field energy, without necessarily implying quasi-linearity; see Section 4.5.
In mean-field parlance, the lack of scale separation in time implies that the Strouhal number is not necessarily small. The Strouhal number is the correlation time scale of the velocity fluctuations divided by the advective time scale of the mean flow.
The turnover frequency of a convective eddy is just its vorticity, ω. Since the vorticity spectrum generally increases toward smaller scales (positive slope versus wavenumber), there will come a point where the effective Rossby and Froude numbers, ω/2Ω0 and ω/N become greater than unity (N is the Brunt-Väisälä frequency).
Another motivation for many of these models is to improve the accuracy and conservation properties of the nonlinear advection terms. Furthermore, the scaling of the computational workload in finite element and finite volume models with increasing resolution, N, is much better than in spectral models, where Legendre transforms, which scale as N², eventually dominate.
Of course, a tachocline can be imposed in a simulation through boundary conditions or body forces.
This may be derived on energetic grounds (see, e.g., Drazin and Reid, 1981; Tritton, 1988).
…as anyone who flies in airplanes regularly can attest to.
In some parameter regimes, the largest growth rates occur for m > 1 modes but even here nonlinear calculations by Cally et al. (2003) indicate that the m = 1 tipping instability eventually dominates, at least for field strengths ≥ 50 kG.
On spherical surfaces, nonlinear interactions are no longer restricted to wavevector triads but inverse cascades still occur.
This is a simplification. The formation and maintenance of banded zonal flows in forced-dissipative flows may involve non-local spectral transfer or wave interactions which cannot be classified as cascade processes. However, the point is the same; nonlinear spectral transfer, be it local or non-local, can occur freely at low longitudinal wavenumbers but is suppressed at low latitudinal wavenumbers. For a thorough discussion see Rhines (1975, 1994); Vallis and Maltrud (1993); Huang and Robinson (1998) and Vallis (2005). Alternatively, the formation of banded zonal flows can be viewed from the perspective of local potential vorticity mixing coupled with wave-induced momentum transport (McIntyre, 2003).
Acoustic waves are essential diagnostic tools of the solar interior, but they are neither generated near the base of the convection zone nor do they play a significant dynamical role because of the low Mach numbers thought to characterize the rotational shear and the convective motions.
Earlier attempts to account for the uniform rotation of the radiative interior by gravity waves (e.g., Kumar and Quataert, 1997; Zahn et al., 1997; Talon and Zahn, 1998) did not properly account for wave-induced angular momentum transport.
I have benefited from discussions with many of my colleagues on various aspects of this paper and I am indebted to all of them. Special thanks go to (alphabetically) Nic Brummell, A. Sacha Brun, Marc DeRosa, Yuhong Fan, Peter Gilman, Keith MacGregor, Nagi Mansour, Michael McIntyre, Ake Nordlund, Matthias Rempel, Robert Stein, Juri Toomre, and Joe Werne. I also thank many of my colleagues for providing figures, including Nic Brummell, A. Sacha Brun, Jørgen Christensen-Dalsgaard, Mausumi Dikpati, Deborah Haber, David Hathaway, Frank Jacobitz, Michael McIntyre, Tamara Rogers, and Michael Thompson. I am also grateful to Nigel Weiss, Robert Rosner, and David Hughes for organizing the program on MHD of Stellar Interiors at the Isaac Newton Institute of the University of Cambridge, held in the autumn of 2004. This review owes much to the stimulating presentations and discussions I enjoyed during my two visits to the program. Finally, I thank Jean-Paul Zahn, Nigel Weiss, and Tom Bogdan for reviewing this manuscript and offering many helpful comments which have improved its content and presentation. This work was partially supported by NASA through award numbers W-10,177 and W-10,175.
41116_2015_1_MOESM1_ESM.mov (28.4 MB): Movie showing the temporal evolution of the radial velocity near the top of the shell (r = 0.98R⊙) in Case F, displayed in an orthographic projection as in Figure 9. The movie covers a time span of 7 days.
41116_2015_1_MOESM2_ESM.mov (6.9 MB): Movie showing streamlines of the longitudinally-averaged mass flux in Case M3 evolving over the course of 60 days. Contours are indicated as in panel a of Figure 17, which represents a temporal average of this sequence of images. The inset illustrates the mean latitudinal velocity \(\langle v_\theta \rangle\) near the top of the domain (r = 0.96R⊙), as in the temporal average of panel c in Figure 17.
mpg-Movie (3.9 MB): The vorticity field in a simulation of penetrative convection in a circular annulus (from Rogers and Glatzmaier, 2005b) (courtesy T. Rogers).
Acheson, D.J., 1978, "On the Instability of Toroidal Magnetic Fields and Differential Rotation in Stars", Philos. Trans. R. Soc. London, Ser. A, 289, 459–500. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1978RSPTA.289..459A.
Altrock, R.C., Canfield, R.C., 1972, "Observations of Photospheric Pole-Equator Temperature Differences", Solar Phys., 23, 257–264. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1972SoPh...23..257A.ADSCrossRefGoogle Scholar
Altschuler, M.D., Newkirk, G., 1969, "Magnetic Fields and the Structure of the Solar Corona", Solar Phys., 9, 131–149. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1969SoPh....9..131A.ADSCrossRefGoogle Scholar
Anders Pettersson Reif, B., Werne, J., Andreassen, Ø., Meyer, C., Davis-Mansour, M., 2002, "Entrainment-zone restratification and flow structures in stratified shear turbulence", in Studying Turbulence Using Numerical Simulation Databases — IX: Proceedings of the 2002 Summer Program, (Eds.) P. Bradshaw, P. Moin, N. Mansour, pp. 245–256, (Center for Turbulence Research, Stanford, U.S.A., 2002). Related online version (cited on 15 March 2005): http://ctr.stanford.edu/SP02.html.Google Scholar
Andersen, B.N., 1996, "Theoretical Amplitudes of Solar g-Modes", Astron. Astrophys., 312, 610–614. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1996A&A...312..610AADSGoogle Scholar
Andrews, D.G., Holton, J.R., Leovy, C.B., 1987, Middle Atmosphere Dynamics, vol. 40 of International Geophysics Series, Academic Press, Orlando, U.S.A.Google Scholar
Antia, H.M., Chitre, S.M., Thompson, M.J., 2000, "The Sun's Acoustic Asphericity and Magnetic Fields in the Solar Convection Zone", Astron. Astrophys., 360, 335–344. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000A&A...360..335A.ADSGoogle Scholar
Antia, H.M., Basu, S., Hill, F., Howe, R., Komm, R.W., Schou, J., 2001, "Solar-Cycle Variation of the Sound-Speed Asphericity from GONG and MDI Data 1995–2000", Mon. Not. R. Astron. Soc., 327, 1029–1040. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001MNRAS.327.1029AADSCrossRefGoogle Scholar
Antia, H.M., Chitre, S.M., Thompson, M.J., 2003, "On Variation of the Latitudinal Structure of the Solar Convection Zone", Astron. Astrophys., 399, 329–336. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003A&A...399..329A.ADSCrossRefGoogle Scholar
Aschwanden, M.J., Poland, A.I., Rabin, D.M., 2001, "The New Solar Corona", Annu. Rev. Astron. Astrophys., 39, 175–210. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001ARA&A..39..175A.ADSCrossRefGoogle Scholar
Asplund, M., Grevesse, N., Sauval, A.J., Allende Prieto, C., Blomme, R., 2005, "Line Formation in Solar Granulation. VI. [CI], CI, CH, and C2 Lines and the Photospheric C Abundance", Astron. Astrophys., 431, 693–705. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2005A&A...431..693A.ADSCrossRefGoogle Scholar
Bahcall, J.N., Basu, S., Pinsonneault, M., Serenelli, A.M., 2005, "Helioseismological Implications of Recent Solar Abundance Determinations", Astrophys. J., 618, 1049–1056. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2005ApJ...618.1049B.ADSCrossRefGoogle Scholar
Balbus, S.A., Hawley, J.F., 1991, "A Powerful Local Shear Instability in Weakly Magnetized Disks. I. Linear Analysis", Astrophys. J., 376, 214–222. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1991ApJ...376..214B.ADSCrossRefGoogle Scholar
Balbus, S.A., Hawley, J.F., 1994, "The Stability of Differentially Rotating, Weakly-Magnetized Stellar Radiative Zones", Mon. Not. R. Astron. Soc., 266, 769–774. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1994MNRAS.266..769B.ADSCrossRefGoogle Scholar
Balbus, S.A., Hawley, J.F., 1998, "Instability, Turbulence, and Enhanced Transport in Accretion Disks", Rev. Mod. Phys., 70, 1–53. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1998RvMP...70....1B.ADSCrossRefGoogle Scholar
Baldwin, M.P., Gray, L.J., Dunkerton, T.J., Hamilton, K., Haynes, P.H., Randel, W.J., Holton, J.R., Alexander, M.J., Hirota, I., Horinouchi, T., Jones, D.B.A., Kinnersley, J.S., Marquardt, C., Sato, K., Takahashi, M., 2001, "The Quasi-Biennial Oscillation", Rev. Geophys., 39, 179–229. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001RvGeo..39..179B.ADSCrossRefGoogle Scholar
Barnes, G., MacGregor, K.B., Charbonneau, P., 1998, "Gravity Waves in a Magnetized Shear Layer", Astrophys. J. Lett., 498, L169–L172. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1998ApJ...498L.169B.ADSCrossRefGoogle Scholar
Barnes, G., Charbonneau, P., MacGregor, K.B., 1999, "Angular Momentum Transport in Magnetized Stellar Radiative Zones. III. The Solar Light-Element Abundances", Astrophys. J., 511, 466–480. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...511..466B.ADSCrossRefGoogle Scholar
Bartello, P., Métais, O., Lesieur, M., 1994, "Coherent Structures in Rotating 3-Dimensional Turbulence", J. Fluid Mech., 273, 1–29. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1994JFM...273....1B.ADSMathSciNetCrossRefGoogle Scholar
Basu, S., 1997, "Seismology of the Base of the Solar Convection Zone", Mon. Not. R. Astron. Soc., 288, 572–584. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1997MNRAS.288..572B.ADSCrossRefGoogle Scholar
Basu, S., Antia, H.M., 2001, "A Study of Possible Temporal and Latitudinal Variations in the Properties of the Solar Tachocline", Mon. Not. R. Astron. Soc., 324, 498–508. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001MNRAS.324..498B.ADSCrossRefGoogle Scholar
Basu, S., Antia, H.M., 2003, "Changes in Solar Dynamics from 1995 to 2002", Astrophys. J., 585, 553–565. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003ApJ...585..553B.ADSCrossRefGoogle Scholar
Basu, S., Antia, H.M., Narasimha, D., 1994, "Helioseismic Measurement of the Extent of Overshoot Below the Solar Convection Zone", Mon. Not. R. Astron. Soc., 267, 209–224. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1994MNRAS.267..209B.ADSCrossRefGoogle Scholar
Basu, S., Däppen, W., Nayfonov, A., 1999, "Helioseismic Analysis of the Hydrogen Partition Function in the Solar Interior", Astrophys. J., 518, 985–993. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...518..985B.
Beck, J.G., 2000, "A comparison of differential rotation measurements", Solar Phys., 191, 47–70. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000SoPh..191...47B.ADSCrossRefGoogle Scholar
Beck, J.G., Duvall, T.L., Scherrer, P.H., 1998, "Long-Lived Giant Cells Detected at the Surface of the Sun", Nature, 394, 653–655. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1998Natur.394..653B.ADSCrossRefGoogle Scholar
Beck, J.G., Gizon, L., Duvall, T.L., 2002, "A New Component of Solar Dynamics: North-South Diverging Flows Migrating Toward the Equator with an 11-Year Period", Astrophys. J. Lett., 575, L47–L50. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002ApJ...575L..47B.ADSCrossRefGoogle Scholar
Beniston, M., 1998, From Turbulence to Climate: Numerical Investigations of the Atmosphere with a Hierarchy of Models, Springer, Berlin, Germany; New York, U.S.A.CrossRefGoogle Scholar
Biskamp, D., 1993, Nonlinear Magnetohydrodynamics, Cambridge University Press, Cambridge, U.K.; New York, U.S.A.CrossRefGoogle Scholar
Blackman, E.G., Brandenburg, A., 2003, "Doubly Helical Coronal Ejections From Dynamos and Their Role in Sustaining the Solar Cycle", Astrophys. J. Lett., 584, L99–L102. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003ApJ...584L..99B.ADSCrossRefGoogle Scholar
Bogart, R.S., 1982, "Recurrence of Solar Activity: Evidence for Active Longitudes", Solar Phys., 76, 155–165. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1982SoPh...76..155B.ADSCrossRefGoogle Scholar
Brandenburg, A., Subramanian, K., 2004, "Astrophysical Magnetic Fields and Nonlinear Dynamo Theory", Phys. Rep. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004astro.ph..5052B.
Brandenburg, A., Jennings, R.L., Nordlund, A., Rieutord, M., Stein, R.F., Tuominen, I., 1996, "Magnetic Structures in a Dynamo Simulation", J. Fluid Mech., 306, 325–352. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1996JFM...306..325B.ADSMathSciNetzbMATHCrossRefGoogle Scholar
Branover, H., Eidelman, A., Golbraikh, E., Moiseev, S., 1999, Turbulence and Structures: Chaos, Fluctuations, and Helical Self-Organization in Nature and the Laboratory, Academic Press, San Diego, U.S.A.Google Scholar
Braun, D.C., Fan, Y., 1998, "Helioseismic Measurements of the Subsurface Meridional Flow", Astrophys. J. Lett., 508, L105–L108. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1998ApJ...508L.105B.ADSCrossRefGoogle Scholar
Braun, D.C., Lindsey, C., 2000, "Helioseismic Holography of Active-Region Subphotospheres", Solar Phys., 192, 285–305. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000SoPh..192..285B.ADSCrossRefGoogle Scholar
Braun, D.C., Lindsey, C., 2001, "Seismic Imaging of the Far Hemisphere of the Sun", Astrophys. J. Lett., 560, L189–L192. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001ApJ...560L.189B.ADSCrossRefGoogle Scholar
Brouwer, M.P., Zwaan, C., 1990, "Sunspot Nests as Traced by Cluster Analysis", Solar Phys., 129, 221–246. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1990SoPh..129..221B.ADSCrossRefGoogle Scholar
Brummell, N.H., Hurlburt, N.E., Toomre, J., 1996, "Turbulent Compressible Convection with Rotation: I. Flow Structure and Evolution", Astrophys. J., 473, 494–513. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1996ApJ...473..494B.ADSCrossRefGoogle Scholar
Brummell, N.H., Hurlburt, N.E., Toomre, J., 1998, "Turbulent Compressible Convection with Rotation: II. Mean Flows and Differential Rotation", Astrophys. J., 493, 955–969. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1998ApJ...493..955B.ADSCrossRefGoogle Scholar
Brummell, N.H., Cline, K.S., Cattaneo, F., 2002a, "Formation of Buoyant Magnetic Structures by a Localized Velocity Shear", Mon. Not. R. Astron. Soc., 329, L73–L76. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002MNRAS.329L..73B.ADSCrossRefGoogle Scholar
Brummell, N.H., Clune, T.L., Toomre, J., 2002b, "Penetration and Overshooting in Turbulent Compressible Convection", Astrophys. J., 570, 825–854. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002ApJ...570..825B.ADSCrossRefGoogle Scholar
Brun, A.S., Toomre, J., 2002, "Turbulent Convection under the Influence of Rotation: Sustaining a Strong Differential Rotation", Astrophys. J., 570, 865–885. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002ApJ...570..865B.ADSCrossRefGoogle Scholar
Brun, A.S., Miesch, M.S., Toomre, J., 2004, "Global-Scale Turbulent Convection and Magnetic Dynamo Action in the Solar Envelope", Astrophys. J., 614, 1073–1098. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004ApJ...614.1073B.ADSCrossRefGoogle Scholar
Brun, A.S., Miesch, M.S., Toomre, J., 2005, in preparation.
Bumba, V., Howard, R., 1965, "Large-Scale Distribution of Solar Magnetic Fields", Astrophys. J., 141, 1502–1512. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1965ApJ...141.1502B.ADSCrossRefGoogle Scholar
Bushby, P., Mason, J., 2004, "Understanding the Solar Dynamo", Astron. Geophys., 45, 7–13. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004A&G....45d...7B.CrossRefGoogle Scholar
Busse, F.H., 1970, "Thermal Instabilities in Rapidly Rotating Systems", J. Fluid Mech., 44, 441–460. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1970JFM....44..441B.ADSzbMATHCrossRefGoogle Scholar
Busse, F.H., 2000, "Homogeneous Dynamos in Planetary Cores and in the Laboratory", Annu. Rev. Fluid Mech., 32, 383–408. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000AnRFM..32..383B.
Busse, F.H., 2002, "Convective Flows in Rapidly Rotating Spheres and their Dynamo Action", Phys. Fluids, 14, 1301–1314. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002PhFl...14.1301B.ADSMathSciNetzbMATHCrossRefGoogle Scholar
Busse, F.H., Cuong, P.G., 1977, "Convection in Rapidly Rotating Spherical Fluid Shells", Geophys. Astrophys. Fluid Dyn., 8, 17–44. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1977GApFD...8...17B.ADSzbMATHCrossRefGoogle Scholar
Cally, P.S., 2000, "A Sufficient Condition for Instability in a Sheared Incompressible Magnetofluid", Solar Phys., 194, 189–196. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000SoPh..194..189C.ADSCrossRefGoogle Scholar
Cally, P.S., 2001, "Nonlinear Evolution of 2D Tachocline Instabilities", Solar Phys., 199, 231–249. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001SoPh..199..231C.
Cally, P.S., 2003, "Three-dimensional magneto-shear instabilities in the solar tachocline", Mon. Not. R. Astron. Soc., 339, 957–972. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003MNRAS.339..957C.
Cally, P.S., Dikpati, M., Gilman, P.A., 2003, "Clamshell and Tipping Instabilities in a Two-Dimensional Magnetohydrodynamic Tachocline", Astrophys. J., 582, 1190–1205. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003ApJ...582.1190C.ADSCrossRefGoogle Scholar
Cambon, C., Scott, J.F., 1999, "Linear and Nonlinear Models of Anisotropic Turbulence", Annu. Rev. Fluid Mech., 31, 1–53. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999AnRFM..31....1C.ADSMathSciNetCrossRefGoogle Scholar
Cambon, C., Mansour, N.N., Godeferd, F.S., 1997, "Energy Transfer in Rotating Turbulence", J. Fluid Mech., 337, 303–332. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1997JFM...337..303C.ADSMathSciNetzbMATHCrossRefGoogle Scholar
Cantwell, B.J., 1981, "Organized Motion in Turbulent Flow", Annu. Rev. Fluid Mech., 13, 457–515. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1981AnRFM..13..457C.ADSCrossRefGoogle Scholar
Canuto, V.M., 2000, "The Physics of Subgrid Scales in Numerical Simulations of Stellar Convection: Are They Dissipative, Advective, or Diffusive?", Astrophys. J. Lett., 541, L79–L82. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000ApJ...541L..79C.ADSCrossRefGoogle Scholar
Canuto, V.M., Christensen-Dalsgaard, J., 1998, "Turbulence in Astrophysics: Stars", Annu. Rev. Fluid Mech., 30, 167–198. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1998AnRFM..30..167C.ADSMathSciNetCrossRefGoogle Scholar
Canuto, V.M., Dubovikov, M., 1998, "Stellar Turbulent Convection. I. Theory", Astrophys. J., 493, 834–847. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1998ApJ...493..834C.ADSCrossRefGoogle Scholar
Canuto, V.M., Minotti, F.O., Schilling, O., 1994, "Differential Rotation and Turbulent Convection: A New Reynolds Stress Model and Comparison with Solar Data", Astrophys. J., 425, 303–325. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1994ApJ...425..303C.ADSCrossRefGoogle Scholar
Carrington, R.C., 1863, Observations of the Spots on the Sun, Williams and Norgate, London, U.K.Google Scholar
Cattaneo, F., 1999, "On the Origin of Magnetic Fields in the Quiet Photosphere", Astrophys. J. Lett., 515, L39–L42. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...515L..39C.ADSCrossRefGoogle Scholar
Cattaneo, F., Vainshtein, S.I., 1991, "Suppression of Turbulent Transport by a Weak Magnetic Field", Astrophys. J. Lett., 376, L21–L24. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1991ApJ...376L..21C.ADSCrossRefGoogle Scholar
Cattaneo, F., Brummell, N.H., Toomre, J., Malagoli, A., Hurlburt, N.E., 1991, "Turbulent Compressible Convection", Astrophys. J., 370, 282–294. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1991ApJ...370..282C.ADSCrossRefGoogle Scholar
Cattaneo, F., Emonet, T., Weiss, N.O., 2003, "On the Interaction Between Convection and Magnetic Fields", Astrophys. J., 588, 1183–1198. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003ApJ...588.1183C.ADSCrossRefGoogle Scholar
Chae, J., 2000, "The Magnetic Helicity Sign of Filament Chirality", Astrophys. J. Lett., 540, L115–L118. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000ApJ...540L.115C.ADSCrossRefGoogle Scholar
Chandrasekhar, S., 1961, Hydrodynamic and Hydromagnetic Stability, Clarendon Press, Oxford, U.K.zbMATHGoogle Scholar
Charbonneau, P., 2005, "Dynamo Models of the Solar Cycle", Living Rev. Solar Phys., 2. URL (cited on 15 March 2005): http://solarphysics.livingreviews.org. In preparation.
Charbonneau, P., MacGregor, K.B., 1993, "Angular Momentum Transport in Magnetized Stellar Radiative Zones. II. The Solar Spin-Down", Astrophys. J., 417, 762–780. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1993ApJ...417..762C.ADSCrossRefGoogle Scholar
Charbonneau, P., MacGregor, K.B., 2001, "Magnetic Fields in Massive Stars. I. Dynamo Models", Astrophys. J., 559, 1094–1107. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001ApJ...559.1094C.ADSCrossRefGoogle Scholar
Charbonneau, P., Christensen-Dalsgaard, J., Henning, R., Larsen, R.M., Schou, J., Thompson, M.J., Tomczyk, S., 1999a, "Helioseismic Constraints on the Structure of the Solar Tachocline", Astrophys. J., 527, 445–460. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...527..445C.ADSCrossRefGoogle Scholar
Charbonneau, P., Dikpati, M., Gilman, P.A., 1999b, "Stability of the Solar Latitudinal Differential Rotation Inferred from Helioseismic Data", Astrophys. J., 526, 523–537. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...526..523C.ADSCrossRefGoogle Scholar
Charney, J.G., 1971, "Geostrophic Turbulence", J. Atmos. Sci., 28, 1087–1095. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1971JAtS...28.1087C.ADSCrossRefGoogle Scholar
Childress, S., Gilbert, A.D., 1995, Stretch, Twist, Fold: The Fast Dynamo, vol. m37 of Lecture Notes in Physics, Springer, Berlin, Germany; New York, U.S.A.zbMATHGoogle Scholar
Cho, J.Y.-K., Polvani, L.M., 1996a, "The Emergence of Jets and Vortices in Freely Evolving, Shallow-Water Turbulence on a Sphere", Phys. Fluids, 8, 1531–1552. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1996PhFl....8.1531C.ADSzbMATHCrossRefGoogle Scholar
Cho, J.Y.-K., Polvani, L.M., 1996b, "The Morphogenesis of Bands and Zonal Winds in the Atmospheres on the Giant Outer Planets", Science, 273, 335–337. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1996Sci...273..335Y.ADSCrossRefGoogle Scholar
Chou, D.-Y., Dai, E.-C., 2001, "Solar Cycle Variations of Subsurface Meridional Flows in the Sun", Astrophys. J. Lett., 559, L175–L178. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001ApJ...559L.175C.ADSCrossRefGoogle Scholar
Choudhuri, A.R., Schüssler, M., Dikpati, M., 1995, "The Solar Dynamo with Meridional Circulation", Astron. Astrophys., 303, L29–L32. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1995A&A...303L..29C.
Christensen, U., Olson, P., Glatzmaier, G.A., 1999, "Numerical Modeling of the Geodynamo: A Systematic Parameter Study", Geophys. J. Int., 138, 393–409. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999GeoJI.138..393C.ADSCrossRefGoogle Scholar
Christensen-Dalsgaard, J., 2002, "Helioseismology", Rev. Mod. Phys., 74, 1073–1129. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002RvMP...74.1073CADSCrossRefGoogle Scholar
Christensen-Dalsgaard, J., Däppen, W., 1992, "Solar Oscillations and the Equation of State", Astron. Astrophys. Rev., 4, 267–361. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1992A&ARv...4..267C.ADSCrossRefGoogle Scholar
Christensen-Dalsgaard, J., Gough, D.O., Thompson, M.J., 1991, "The Depth of the Solar Convection Zone", Astrophys. J., 378, 413–437. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1991ApJ...378..413C.ADSCrossRefGoogle Scholar
Christensen-Dalsgaard, J., Corbard, T., Dikpati, M., Gilman, P.A., Thompson, M.J., 2004, "Detection of Jets and Associated Toroidal Fields in the Solar Tachocline", in Helio- and Asteroseismology: Towards a Golden Future, (Ed.) S. Basu, vol. SP-559 of ESA Conference Proceedings, pp. 376–380, (ESA Publications Division, Noordwijk, Netherlands, 2004). Proceedings of the SOHO 14/GONG 2004 Workshop, New Haven, USA, July 12–16 2004.Google Scholar
Christensen-Dalsgaard, J. et al., 1996, "The Current State of Solar Modeling", Science, 272, 1286–1292. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1996Sci...272.1286C.ADSCrossRefGoogle Scholar
Christensen-Dalsgaard, J. et al., 2003, "Solar models", personal homepage, University of Aarhus. URL (cited on 15 March 2005): http://astro.phys.au.dk/~jcd/solar_models/.
Cline, K.S., Brummell, N.H., Cattaneo, F., 2003a, "On the Formation of Magnetic Structures by the Combined Action of Velocity Shear and Magnetic Buoyancy", Astrophys. J., 588, 630–644. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003ApJ...588..630C.ADSCrossRefGoogle Scholar
Cline, K.S., Brummell, N.H., Cattaneo, F., 2003b, "Dynamo Action Driven by Shear and Magnetic Buoyancy", Astrophys. J., 599, 1449–1468. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003ApJ...599.1449C.ADSCrossRefGoogle Scholar
Clune, T.C., Elliott, J.R., Miesch, M.S., Toomre, J., Glatzmaier, G.A., 1999, "Computational Aspects of a Code to Study Rotating Turbulent Convection in Spherical Shells", Parallel Comput., 25, 361–380.zbMATHCrossRefGoogle Scholar
Corbard, T., Thompson, M.J., 2002, "The Subsurface Radial Gradient of Solar Angular Velocity from MDI f-mode Observations", Solar Phys., 205, 211–229. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002SoPh..205..211C.ADSCrossRefGoogle Scholar
Corbard, T., Blanc-Feraud, L., Berthomieu, G., Provost, J., 1999, "Nonlinear Regularization for Helioseismic Inversions", Astron. Astrophys., 344, 696–708. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999A&A...344..696C.ADSGoogle Scholar
Covas, E., Tavakol, R., Moss, D., 2001, "Dynamical Variations of the Differential Rotation in the Solar Convection Zone", Astron. Astrophys., 371, 718–730. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001A&A...371..718C.ADSzbMATHCrossRefGoogle Scholar
Covas, E., Moss, D., Tavakol, R., 2004, "The Influence of Density Stratification and Multiple Nonlinearities on Solar Torsional Oscillations", Astron. Astrophys., 416, 775–782. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004A&A...416..775C.ADSCrossRefGoogle Scholar
Cowling, T.G., 1957, Magnetohydrodynamics, vol. 4 of Interscience Tracts on Physics and Astronomy, Interscience, New York, U.S.A.zbMATHGoogle Scholar
Danilov, S., Gurarie, D., 2004, "Scaling, Spectra, and Zonal Jets in Beta-Plane Turbulence", Phys. Fluids, 16, 2592–2603. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004PhFl...16.2592D.ADSMathSciNetzbMATHCrossRefGoogle Scholar
Dellar, P.J., 2002, "Hamiltonian and Symmetric Hyperbolic Structures of Shallow Water Magnetohydrodynamics", Phys. Plasmas, 9, 1130–1136. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002PhPl....9.1130D.ADSMathSciNetCrossRefGoogle Scholar
DeRosa, M.L., Toomre, J., 2004, "Evolution of Solar Supergranulation", Astrophys. J., 616, 1242–1260. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004ApJ...616.1242D.ADSCrossRefGoogle Scholar
DeRosa, M.L., Gilman, P.A., Toomre, J., 2002, "Solar Multiscale Convection and Rotation Gradients Studied in Shallow Spherical Shells", Astrophys. J., 581, 1356–1374. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002ApJ...581.1356D.ADSCrossRefGoogle Scholar
DeToma, G., White, O.R., Harvey, K.L., 2000, "A Picture of Solar Minimum and the Onset of Solar Cycle 23. 1. Global Magnetic Field Evolution", Astrophys. J., 529, 1101–1114. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000ApJ...529.1101D.ADSCrossRefGoogle Scholar
Dikpati, M., Charbonneau, P., 1999, "A Babcock-Leighton Flux Transport Dynamo with SolarLike Differential Rotation", Astrophys. J., 518, 508–520. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...518..508D.ADSCrossRefGoogle Scholar
Dikpati, M., Gilman, P.A., 1999, "Joint Instability of Latitudinal Differential Rotation and Concentrated Toroidal Fields Below the Solar Convection Zone", Astrophys. J., 512, 417–441. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...512..417D.ADSCrossRefGoogle Scholar
Dikpati, M., Gilman, P.A., 2001a, "Flux-Transport Dynamos with α-effect from Global Instability of Tachocline Differential Rotation: A Solution for Magnetic Parity Selection in the Sun", Astrophys. J., 559, 428–442. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001ApJ...559..428D.ADSCrossRefGoogle Scholar
Dikpati, M., Gilman, P.A., 2001b, "Prolateness of the Solar Tachocline Inferred from Latitudinal Force Balance in a Magnetohydrodynamic Shallow-Water Model", Astrophys. J., 552, 348–353. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001ApJ...552..348D.ADSCrossRefGoogle Scholar
Dikpati, M., Gilman, P.A., 2001c, "Analysis of Hydrodynamic Stability of Solar Tachocline Using a Shallow-Water Model", Astrophys. J., 551, 536–564. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001ApJ...551..536D.ADSCrossRefGoogle Scholar
Dikpati, M., Gilman, P.A., Rempel, M., 2003, "Stability Analysis of Tachocline Latitudinal Differential Rotation and Coexisting Toroidal Band Using a Shallow-Water Model", Astrophys. J., 596, 680–697. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003ApJ...596..680D.ADSCrossRefGoogle Scholar
Dikpati, M., Cally, P.S., Gilman, P.A., 2004, "Linear Analysis and Nonlinear Evolution of Two-Dimensional Global Magnetohydrodynamic Instabilities in a Diffusive Tachocline", Astrophys. J., 610, 597–615. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004ApJ...610..597D.ADSCrossRefGoogle Scholar
Dintrans, B., Brandenburg, A., 2004, "Identification of Gravity Waves in Hydrodynamical Simulations", Astron. Astrophys., 421, 775–782. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004A&A...421..775D.ADSCrossRefGoogle Scholar
Dintrans, B., Brandenburg, A., Nordlund, Å., Stein, R.F., 2003, "Stochastic Excitation of Gravity Waves by Overshooting Convection in Solar-Type Stars", Astrophys. Space Sci., 284, 237–240. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003Ap&SS.284..237D.ADSzbMATHCrossRefGoogle Scholar
Dorch, S.B.F., Nordlund, Å., 2001, "On the Transport of Magnetic Fields by Solar-Like Stratified Convection", Astron. Astrophys., 365, 562–570. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001A&A...365..562D.ADSCrossRefGoogle Scholar
Drazin, P.G., Reid, W.H., 1981, Hydrodynamic Stability, Cambridge Monographs on Mechanics and Applied Mathematics, Cambridge University Press, Cambridge, U.K.; New York, U.S.A.zbMATHGoogle Scholar
Dritschel, D.G., de la Torre Juárez, M., Ambaum, M.H.P., 1999, "The Three-Dimensional Vortical Nature of Atmospheric and Oceanic Flows", Phys. Fluids, 11, 1512–1520. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999PhFl...11.1512D.ADSMathSciNetzbMATHCrossRefGoogle Scholar
Durbin, P.A., Pettersson Reif, B.A., 2001, Statistical Theory and Modeling for Turbulent Flows, John Wiley & Sons, Chichester, U.K.; New York, U.S.A.zbMATHGoogle Scholar
Durney, B.R., 1995, "On a Babcock-Leighton Dynamo Model with a Deep-Seated Generating Layer for the Toroidal Magnetic Field", Solar Phys., 160, 213–235. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1995SoPh..160..213D.ADSCrossRefGoogle Scholar
Durney, B.R., 1999, "The Taylor-Proudman Balance and the Solar Rotation Data", Astrophys. J., 511, 945–957. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...511..945D.ADSCrossRefGoogle Scholar
Durney, B.R., 2000a, "Meridional Motions and the Angular Momentum Balance in the Solar Convection Zone", Astrophys. J., 528, 486–492. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000ApJ...528..486D.ADSCrossRefGoogle Scholar
Durney, B.R., 2000b, "On the Torsional Oscillations in Babcock-Leighton Solar Dynamo Models", Solar Phys., 196, 1–18. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000SoPh..196....1D.ADSCrossRefGoogle Scholar
Dziembowski, W.A., Goode, P.R., Kosovichev, A.G., Schou, J., 2000, "Signatures of the Rise of Cycle 23", Astrophys. J., 537, 1026–1038. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000ApJ...537.1026D.ADSCrossRefGoogle Scholar
Eddy, J.A., Gilman, P.A., Trotter, D.E., 1977, "Anomalous Solar Rotation in the Early 17th Century", Science, 198, 824–829. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1977Sci...198..824E.ADSCrossRefGoogle Scholar
Elliott, J.R., 1997, "Aspects of the Solar Tachocline", Astron. Astrophys., 327, 1222–1229. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1997A&A...327.1222E.ADSGoogle Scholar
Elliott, J.R., Gough, D.O., 1999, "Calibration of the Thickness of the Solar Tachocline", Astrophys. J., 516, 475–481. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...516..475E.ADSCrossRefGoogle Scholar
Elliott, J.R., Miesch, M.S., Toomre, J., 2000, "Turbulent Solar Convection and its Coupling with Rotation: The Effect of Prandtl Number and Thermal Boundary Conditions on the Resulting Differential Rotation", Astrophys. J., 533, 546–556. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000ApJ...533..546E.ADSCrossRefGoogle Scholar
Fan, Y., 2004, "Magnetic Fields in the Solar Convection Zone", Living Rev. Solar Phys., 1, lrsp-2004-1. URL (cited on 15 March 2005): http://www.livingreviews.org/lrsp-2004-1.
Fisher, G.H., Fan, Y., Longcope, D.W., Linton, M.G., Pevtsov, A.A., 2000, "The Solar Dynamo and Emerging Flux", Solar Phys., 192, 119–139. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000SoPh..192..119F.ADSCrossRefGoogle Scholar
Foias, C., Holm, D.D., Titi, E.S., 2001, "The Navier-Stokes-Alpha Model of Fluid Turbulence", Physica D, 152, 505–519. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001PhyD..152..505F.ADSMathSciNetzbMATHCrossRefGoogle Scholar
Forgács-Dajka, E., 2004, "Dynamics of the Fast Solar Tachocline. II. Migrating Field", Astron. Astrophys., 413, 1143–1151. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004A&A...413.1143F.ADSCrossRefGoogle Scholar
Forgács-Dajka, E., Petrovay, K., 2002, "Dynamics of the Fast Solar Tachocline. I. Dipolar Field", Astron. Astrophys., 389, 629–640. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002A&A...389..629F.ADSCrossRefGoogle Scholar
Foukal, P., Jokipii, J.R., 1975, "On the Rotation of Gas and Magnetic Fields at the Solar Photosphere", Astrophys. J. Lett., 199, L71–L73. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1975ApJ...199L..71F.ADSCrossRefGoogle Scholar
Fournier, A., Taylor, M.A., Tribbia, J.J., 2004, "The Spectral Element Atmosphere Model (SEAM): High-Resolution Parallel Computation and Localized Resolution of Regional Dynamics", Mon. Weather Rev., 132, 726–748.ADSCrossRefGoogle Scholar
Fritts, D.C., Vadas, S.L., Andreassen, Ø., 1998, "Gravity Wave Excitation and Momentum Transport in the Solar Interior: Implications for a Residual Circulation and Lithium Depletion", Astron. Astrophys., 333, 343–361. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1998A&A...333..343F.ADSGoogle Scholar
Fritts, D.C., Bizon, C., Werne, J., Meyer, C., 2003, "Layering Accompanying Turbulence Generation due to Shear Instability and Gravity-Wave Breaking", J. Geophys. Res., 108, 8452. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003JGRD.108hPMR20F.
Gailitis, A., Lielausis, O., Platacis, E., Gerbeth, G., Stefani, F., 2002, "Colloquium: Laboratory Experiments on Hydromagnetic Dynamos", Rev. Mod. Phys., 74, 973–990. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002RvMP...74..973G.ADSCrossRefGoogle Scholar
Galmiche, M., Hunt, J.C.R., 2002, "The Formation of Shear and Density Layers in Stably Stratified Turbulent Flows: Linear Processes", J. Fluid Mech., 455, 243–262. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002JFM...455..243G.ADSMathSciNetzbMATHGoogle Scholar
Galmiche, M., Thual, O., Bonneton, P., 2002, "Direct Numerical Simulation of Turbulence-Mean Field Interactions in a Stably-Stratified Fluid", J. Fluid Mech., 455, 213–242. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002JFM...455..213G.ADSzbMATHGoogle Scholar
Garaud, P., 2001, "Latitudinal Shear Instability in the Solar Tachocline", Mon. Not. R. Astron. Soc., 324, 68–76. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2001MNRAS.324...68G.
Garaud, P., 2002, "Dynamics of the Solar Tachocline — I. An Incompressible Study", Mon. Not. R. Astron. Soc., 329, 1–17. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002MNRAS.329....1G.ADSCrossRefGoogle Scholar
Germano, M., Piomelli, U., Moin, P., Cabot, W.H., 1991, "A dynamic subgrid-scale eddy viscosity model", Phys. Fluids A, 3, 1760–1765. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1991PhFl....3.1760G.ADSzbMATHCrossRefGoogle Scholar
Gibson, S.E., Biesecker, D., Guhathakurta, M., Hoeksema, J.T., Lazarus, A.J., Linker, J., Mikic, Z., Pisanko, Y., Riley, P., Steinberg, J., Strachan, L., Szabo, A., Thompson, B.J., Zhao, X.P., 1999, "The Three-Dimensional Coronal Magnetic Field During Whole Sun Month", Astrophys. J., 520, 871–879. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...520..871G.ADSCrossRefGoogle Scholar
Giles, P.M., Duvall, T.L., Scherrer, P.H., Bogart, R.S., 1997, "A Subsurface Flow of Material From the Sun's Equator to its Poles", Nature, 390, 52–54. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1997Natur.390...52G.ADSCrossRefGoogle Scholar
Gilman, P.A., 1974, "Solar Rotation", Annu. Rev. Astron. Astrophys., 12, 47–70. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1974ARA&A..12...47G.ADSCrossRefGoogle Scholar
Gilman, P.A., 1975, "Linear Simulations of Boussinesq Convection in a Deep Rotating Spherical Shell", J. Atmos. Sci., 32, 1331–1352. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1975JAtS...32.1331G.ADSCrossRefGoogle Scholar
Gilman, P.A., 1977, "Nonlinear Dynamics of Boussinesq Convection in a Deep Rotating Spherical Shell I", Geophys. Astrophys. Fluid Dyn., 8, 93–135. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1977GApFD...8...93G.ADSzbMATHCrossRefGoogle Scholar
Gilman, P.A., 1978, "Nonlinear Dynamics of Boussinesq Convection in a Deep Rotating Spherical Shell II: Effects of Temperature Boundary Conditions", Geophys. Astrophys. Fluid Dyn., 11, 157–179. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1978GApFD..11..157G.ADSzbMATHCrossRefGoogle Scholar
Gilman, P.A., 1979, "Model Calculations Concerning Rotation at High Solar Latitudes and the Depth of the Solar Convection Zone", Astrophys. J., 231, 284–292. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1979ApJ...231..284G.ADSCrossRefGoogle Scholar
Gilman, P.A., 1983, "Dynamically Consistent Nonlinear Dynamos Driven by Convection in a Rotating Spherical Shell II. Dynamos with Cycles and Strong Feedbacks", Astrophys. J. Suppl. Ser., 53, 243–268. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1983ApJS...53..243G.ADSCrossRefGoogle Scholar
Gilman, P.A., 1986, "The Solar Dynamo: Observations and Theories of Solar Convection, Global Circulation, and Magnetic Fields", in Physics of the Sun, Vol. 1: The Solar Interior, (Eds.) P. Sturrock, T. Holzer, D. Mihalas, R. Ulrich, vol. 1, pp. 95–160, D. Reidel, Dordrecht, Netherlands; Boston, U.S.A.CrossRefGoogle Scholar
Gilman, P.A., 2000a, "Fluid Dynamics and MHD of the Solar Convection Zone and Tachocline: Current Understanding and Unsolved Problems", Solar Phys., 192, 27–48. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000SoPh..192...27G.ADSCrossRefGoogle Scholar
Gilman, P.A., 2000b, "Magnetohydrodynamic Shallow Water Equations for the Solar Tachocline", Astrophys. J. Lett., 544, L79–L82. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000ApJ...544L..79G.ADSCrossRefGoogle Scholar
Gilman, P.A., Dikpati, M., 2000, "Joint Instability of Latitudinal Differential Rotation and Concentrated Toroidal Fields Below the Solar Convection Zone. II. Instability of Narrow Bands at All Latitudes", Astrophys. J., 528, 552–572. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2000ApJ...528..552G.ADSCrossRefGoogle Scholar
Gilman, P.A., Dikpati, M., 2002, "Analysis of Instability of Latitudinal Differential Rotation and Toroidal Field in the Solar Tachocline using a Magnetohydrodynamic Shallow-Water Model. I. Instability for Broad Toroidal Field Profiles", Astrophys. J., 576, 1031–1047. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002ApJ...576.1031G.ADSCrossRefGoogle Scholar
Gilman, P.A., Foukal, P.V., 1979, "Angular Velocity Gradients in the Solar Convection Zone", Astrophys. J., 229, 1179–1185. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1979ApJ...229.1179G.ADSCrossRefGoogle Scholar
Gilman, P.A., Fox, P.A., 1997, "Joint Instability of Latitudinal Differential Rotation and Toroidal Magnetic Fields Below the Solar Convection Zone", Astrophys. J., 484, 439–454. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1997ApJ...484..439G.ADSCrossRefGoogle Scholar
Gilman, P.A., Fox, P.A., 1999a, "Joint Instability of Latitudinal Differential Rotation and Toroidal Magnetic Fields Below the Solar Convection Zone. II. Instability for Toroidal Fields that have a Node Between the Equator and Pole", Astrophys. J., 510, 1018–1044. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...510.1018G.ADSCrossRefGoogle Scholar
Gilman, P.A., Fox, P.A., 1999b, "Joint Instability of Latitudinal Differential Rotation and Toroidal Magnetic Fields Below the Solar Convection Zone. III. Unstable Disturbance Phenomenology and the Solar Cycle", Astrophys. J., 522, 1167–1189. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1999ApJ...522.1167G.ADSCrossRefGoogle Scholar
Gilman, P.A., Glatzmaier, G.A., 1981, "Compressible Convection in a Rotating Spherical Shell. I. Anelastic Equations", Astrophys. J. Suppl. Ser., 45, 335–349. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1981ApJS...45..335G.ADSMathSciNetCrossRefGoogle Scholar
Gilman, P.A., Miesch, M.S., 2004, "Limits to Penetration of Meridional Circulation Below the Solar Convection Zone", Astrophys. J., 611, 568–574. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004ApJ...611..568G.ADSCrossRefGoogle Scholar
Gilman, P.A., Miller, J., 1981, "Dynamically Consistent Nonlinear Dynamos Driven by Convection in a Rotating Spherical Shell", Astrophys. J. Suppl. Ser., 46, 211–238. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1981ApJS...46..211G.ADSCrossRefGoogle Scholar
Gilman, P.A., Miller, J., 1986, "Nonlinear Convection of a Compressible Fluid in a Rotating Spherical Shell", Astrophys. J. Suppl. Ser., 61, 585–608. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1986ApJS...61..585G.ADSCrossRefGoogle Scholar
Gilman, P.A., Morrow, C.A., DeLuca, E.E., 1989, "Angular Momentum Transport and Dynamo Action in the Sun: Implications of Recent Oscillation Measurements", Astrophys. J., 338, 528–537. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1989ApJ...338..528G.ADSCrossRefGoogle Scholar
Gilman, P.A., Dikpati, M., Miesch, M.S., 2004, "Global MHD Instabilities in a Thin-Shell Model of the Solar Tachocline", in Helio- and Asteroseismology: Towards a Golden Future, (Ed.) S. Basu, vol. SP-559 of ESA Conference Proceedings, pp. 440–443, (ESA Publications Division, Noordwijk, Netherlands, 2004). Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004soho...14..440G. Proceedings of the SOHO 14/GONG 2004 Workshop, New Haven, USA, July 12–16 2004.Google Scholar
Gizon, L., Birch, A., 2005, "Local Helioseismology", Living Rev. Solar Phys., 2. URL (cited on 15 March 2005): http://solarphysics.livingreviews.org. In preparation.
Gizon, L., Duvall, T.L., Schou, J., 2003, "Wave-like Properties of Solar Supergranulation", Nature, 421, 43–44. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2003Natur.421...43G.ADSCrossRefGoogle Scholar
Glatzmaier, G.A., 1984, "Numerical Simulations of Stellar Convective Dynamos. I. The Model and Method", J. Comput. Phys., 55, 461–484. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1984JCoPh..55..461G.ADSCrossRefGoogle Scholar
Glatzmaier, G.A., 1985a, "Numerical Simulations of Stellar Convective Dynamos. II. Field Propagation in the Convection Zone", Astrophys. J., 291, 300–307. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1985ApJ...291..300G.
Glatzmaier, G.A., 1985b, "Numerical Simulations of Stellar Convective Dynamos. III. At the Base of the Convection Zone", Geophys. Astrophys. Fluid Dyn., 31, 137–150. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1985GApFD..31..137G.ADSCrossRefGoogle Scholar
Glatzmaier, G.A., 1987, "A Review of What Numerical Simulations Tell Us About the Internal Rotation of the Sun", in The Internal Solar Angular Velocity: Theory, Observations, and Relationship to Solar Magnetic Fields, (Eds.) B. Durney, S. Sofia, vol. 137 of Astrophysics and Space Science Library, pp. 263–274, (D. Reidel, Dordrecht, Netherlands; Boston, U.S.A., 1987). Proceedings of the 8th National Solar Observatory Summer Symposium, held in Sunspot, New Mexico, August 11–14, 1986.CrossRefGoogle Scholar
Glatzmaier, G.A., 2002, "Geodynamo Simulations — How Realistic are They?", Annu. Rev. Earth Planet. Sci., 30, 237–257. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002AREPS..30..237G.ADSCrossRefGoogle Scholar
Glatzmaier, G.A., Gilman, P.A., 1981, "Compressible Convection in a Rotating Spherical Shell. IV. Effects of Viscosity, Conductivity, Boundary Conditions, and Zone Depth", Astrophys. J. Suppl. Ser., 47, 103–116. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1981ApJS...47..103G.ADSCrossRefGoogle Scholar
Godoy-Diana, R., Chomaz, J.-M., Billant, P., 2004, "Vertical Length Scale Selection for Pancake Vortices in Strongly Stratified Viscous Fluids", J. Fluid Mech., 504, 229–238. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004JFM...504..229G.ADSMathSciNetzbMATHCrossRefGoogle Scholar
Goldreich, P., Kumar, P., 1990, "Wave Generation by Turbulent Convection", Astrophys. J., 363, 694–704. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1990ApJ...363..694G.ADSCrossRefGoogle Scholar
Gough, D.O., 1969, "The Anelastic Approximation for Thermal Convection", J. Atmos. Sci., 26, 448–456. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1969JAtS...26..448G.ADSCrossRefGoogle Scholar
Gough, D.O., McIntyre, M.E., 1998, "Inevitability of a Magnetic Field in the Sun's Radiative Interior", Nature, 394, 755–757. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1998Natur.394..755G.ADSCrossRefGoogle Scholar
Gough, D.O., Toomre, J., 1991, "Seismic Observations of the Solar Interior", Annu. Rev. Astron. Astrophys., 29, 627–684. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1991ARA&A..29..627G.ADSCrossRefGoogle Scholar
Gough, D.O. et al., 1996, "The Seismic Structure of the Sun", Science, 272, 1296–1300. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1996Sci...272.1296G.ADSCrossRefGoogle Scholar
Haber, D.A., Hindman, B.W., Toomre, J., Bogart, R.S., Larsen, R.M., Hill, F., 2002, "Evolving Submerged Meridional Circulation Cells Within the Upper Convection Zone Revealed by Ring-Diagram Analysis", Astrophys. J., 570, 855–864. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002ApJ...570..855H.ADSCrossRefGoogle Scholar
Hanazaki, H., Hunt, J.C.R., 2004, "Structure of Unsteady Stably Stratified Turbulence with Mean Shear", J. Fluid Mech., 507, 1–42. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2004JFM...507....1H.ADSMathSciNetzbMATHCrossRefGoogle Scholar
Hanjalić, K., 2002, "One-Point Closure Models for Buoyancy-Driven Turbulent Flows", Annu. Rev. Fluid Mech., 34, 321–347. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?2002AnRFM..34..321H.ADSMathSciNetzbMATHCrossRefGoogle Scholar
Hansen, C.J., Kawaler, S.D., 1994, Stellar Interiors: Physical Principles, Structure, and Evolution, Astronomy and Astrophysics Library, Springer, New York, U.S.A.
Hart, J.E., Glatzmaier, G.A., Toomre, J., 1986, "Space-Laboratory and Numerical Simulations of Thermal Convection in a Rotating Hemispherical Shell with Radial Gravity", J. Fluid Mech., 173, 519–544. Related online version (cited on 15 March 2005): http://adsabs.harvard.edu/cgi-bin/bib_query?1986JFM...173..519H.ADSCrossRefGoogle Scholar
Unit 1: Place Value with Decimals
Students build upon their understanding of the place value system by extending its patterns to decimals, and continue to read, write, compare, and round numbers, including decimals, in various forms.
In the first unit of Grade 5, students will build on their understanding of the structure of the place value system from Grade 4 (MP.7) by extending that understanding to decimals. By the end of the unit, students will have a deep understanding of the base-ten structure of our number system, as well as how to read, write, compare, and round those numbers.
In Grade 4, students developed the understanding that a digit in any place represents ten times as much as it represents in the place to its right (4.NBT.1). With this deepened understanding of the place value system, students read and wrote multi-digit whole numbers in various forms, compared them, and rounded them (4.NBT.2—3).
Thus, Unit 1 starts off with reinforcing some of this place value understanding of multi-digit whole numbers to 1 million, building up to that number by multiplying 10 by itself repeatedly. After this repeated multiplication, students are introduced to exponents to denote powers of 10. Then, students review the relationship in a whole number between a place value and the place to its left (4.NBT.1) and learn about the reciprocal relationship of a place value and the place to its right (5.NBT.1). Students also extend their work from Grade 4 on multiplying whole numbers by 10 to multiplying and dividing them by powers of 10 (5.NBT.2). After extensive practice with whole numbers, students then divide by 10 repeatedly to extend their place value system in the other direction, to decimals. They then apply these rules and perform these operations with powers of 10 to decimal numbers. Lastly, after deepening their understanding of the base-ten structure of our place value system, students read, write, compare, and round numbers in various forms (5.NBT.3—4).
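For example (with illustrative numbers, not drawn from the unit's own lessons): since $${10^3 = 10 \times 10 \times 10}$$ is 1,000, multiplying by $${10^3}$$ shifts every digit three places to the left, as in 42 × 1,000 = 42,000, while dividing by $${10^3}$$ shifts every digit three places to the right, as in 42 ÷ 1,000 = 0.042. This digit-shifting pattern is exactly what students are asked to explain in 5.NBT.2.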
As mentioned earlier, students will look for and make use of structure throughout the unit (MP.7). Students will also have an opportunity to look for and express regularity in repeated reasoning (MP.8), such as "when students explain patterns in the number of zeros of the product when multiplying a number by powers of 10 (5.NBT.2)" (PARCC Model Content Frameworks, p. 24).
This content represents the culmination of many years' worth of work to deeply understand the structure of our place value system, starting all the way back in Kindergarten with the understanding of teen numbers as "10 ones and some ones" (K.NBT.1). Moving forward, students will rely on this knowledge later in the Grade 5 year to multiply and divide whole numbers (5.NBT.5—6) and perform all four operations with decimals (5.NBT.7). Students will also use their introduction to exponents to evaluate more complex expressions involving them (6.EE.1). Perhaps the most obvious future grade-level connection exists in Grade 8, when students will represent very large and very small numbers using scientific notation and perform operations on numbers written in scientific notation (8.EE.3—4). Thus, this unit represents an important conclusion to the underlying structure of our number system and opens the door to more complex mathematics with very large and very small numbers.
Pacing: 16 instructional days (13 lessons, 2 flex days, 1 assessment day)
For guidance on adjusting the pacing for the 2020-2021 school year due to school closures, see our 5th Grade Scope and Sequence Recommended Adjustments.
The unit assessment accompanies Unit 1 and should be given on the suggested assessment day or after completing the unit.
Intellectual Prep for All Units
Read and annotate "Unit Summary" and "Essential Understandings" portion of the unit plan.
Do all the Target Tasks and annotate them with the "Unit Summary" and "Essential Understandings" in mind.
Take the unit assessment.
Essential Understandings
A digit in any place represents 10 times as much as it represents in the place to its right and $${\frac{1}{10}}$$ of what it represents in the place to its left.
Multiplying a number by 10 repeatedly (or by a power of 10) results in the digits shifting to the left. The digits shift as many places as there are factors of 10.
Dividing a number by 10 repeatedly (or by a power of 10) results in the digits shifting to the right. The digits shift as many places as there are factors of 10.
Comparing numbers written in standard form uses the understanding that one of any unit is greater than any amount of a smaller unit. Thus, the largest place value in each number contains the most relevant information when comparing numbers. If both numbers have the same number of largest units, the next largest place value should be attended to, and so on, until one digit is greater than another in the same unit.
When rounding a number, the goal is to approximate it by the closest number with no units of smaller value (e.g., 4.56 rounded to the nearest tenth is 4.6, and to the nearest whole is 5).
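To make these understandings concrete, consider one added illustration (the numbers are illustrative, not taken from the unit): in 555.55, each digit 5 represents ten times as much as the 5 to its right and $${\frac{1}{10}}$$ of the 5 to its left (500, 50, 5, 0.5, 0.05). When comparing 0.61 and 0.099, the tenths place decides the comparison: 6 tenths is greater than 0 tenths, so $${0.61 > 0.099}$$ even though 0.099 has more digits. Likewise, rounding 4.568 to the nearest hundredth gives 4.57, because 4.568 is closer to 4.57 than to 4.56.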
Related Teacher Tools:
5th Grade Vocabulary Glossary
Unit Materials, Representations and Tools
Tape or staplers
paper hundreds flats (20 count)
Millions Place Value Chart
Thousandths Place Value Chart
base ten blocks
Topic A: Place Value with Whole Numbers
5.NBT.A.1
Build whole numbers to 1 million by multiplying by 10 repeatedly.
Use whole-number exponents to denote powers of 10. Explain patterns in the number of zeros when multiplying any power of 10 by any other power of 10.
Explain patterns in the number of zeros of the product when multiplying a whole number by 10. Recognize that in a multi-digit whole number, a digit in any place represents 10 times as much as it represents in the place to its right.
Explain patterns in the number of zeros of the product when multiplying a whole number by powers of 10.
Explain patterns in the number of zeros of the quotient when dividing a whole number by 10. Recognize that in a multi-digit whole number, a digit in any place represents $${\frac{1}{10}}$$ as much as it represents in the place to its left.
Explain patterns in the number of zeros of the quotient when dividing a whole number by powers of 10.
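A quick illustration of the Topic A patterns (an added example rather than a lesson task): multiplying 60 by $${10^2}$$ gives 6,000, shifting the digits two places to the left and appending two additional zeros, while dividing 6,000 by $${10^2}$$ gives 60, shifting the digits two places back to the right.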
Topic B: Place Value with Decimals
Build decimal numbers to thousandths by dividing by 10 repeatedly.
Explain patterns in the placement of the decimal point when a decimal is multiplied by any power of 10. Recognize that in a multi-digit decimal, a digit in any place represents 10 times as much as it represents in the place to its right.
Explain patterns in the placement of the decimal point when a decimal is divided by a power of 10. Recognize that in a multi-digit decimal, a digit in any place represents $${\frac{1}{10}}$$ as much as it represents in the place to its left.
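A quick illustration of the Topic B patterns (an added example rather than a lesson task): multiplying 3.4 by $${10^2}$$ gives 340, with the decimal point moving two places to the right, while dividing 3.4 by $${10^2}$$ gives 0.034, with the decimal point moving two places to the left.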
Topic C: Reading, Writing, Comparing, and Rounding Decimals
5.NBT.A.3.A
Read and write decimals to thousandths using base-ten numerals, number names, and expanded form.
5.NBT.A.3.B
Compare multi-digit decimals to the thousandths based on meanings of the digits using $${>}$$, $${<}$$, or $$=$$ to record the comparison.
Use place value understanding to round decimals to the nearest whole.
Use place value understanding to round decimals to any place.
Number and Operations in Base Ten
5.NBT.A.1 — Recognize that in a multi-digit number, a digit in one place represents 10 times as much as it represents in the place to its right and 1/10 of what it represents in the place to its left.
5.NBT.A.2 — Explain patterns in the number of zeros of the product when multiplying a number by powers of 10, and explain patterns in the placement of the decimal point when a decimal is multiplied or divided by a power of 10. Use whole-number exponents to denote powers of 10.
5.NBT.A.3 — Read, write, and compare decimals to thousandths.
5.NBT.A.3.A — Read and write decimals to thousandths using base-ten numerals, number names, and expanded form, e.g., 347.392 = 3 × 100 + 4 × 10 + 7 × 1 + 3 × (1/10) + 9 × (1/100) + 2 × (1/1000).
5.NBT.A.3.B — Compare two decimals to thousandths based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons.
5.NBT.A.4 — Use place value understanding to round decimals to any place.
Foundational Standards
3.NBT.A.1 — Use place value understanding to round whole numbers to the nearest 10 or 100.
4.NBT.A.1 — Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right. For example, recognize that 700 ÷ 70 = 10 by applying concepts of place value and division.
4.NBT.A.2 — Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. Compare two multi-digit numbers based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons.
4.NBT.A.3 — Use place value understanding to round multi-digit whole numbers to any place.
Number and Operations—Fractions
4.NF.B.4.B — Understand a multiple of a/b as a multiple of 1/b, and use this understanding to multiply a fraction by a whole number. For example, use a visual fraction model to express 3 × (2/5) as 6 × (1/5), recognizing this product as 6/5. (In general, n × (a/b) = (n × a)/b.)
4.NF.C.5 — Express a fraction with denominator 10 as an equivalent fraction with denominator 100, and use this technique to add two fractions with respective denominators 10 and 100. Students who can generate equivalent fractions can develop strategies for adding fractions with unlike denominators in general. But addition and subtraction with unlike denominators in general is not a requirement at this grade. For example, express 3/10 as 30/100, and add 3/10 + 4/100 = 34/100.
4.NF.C.6 — Use decimal notation for fractions with denominators 10 or 100. For example, rewrite 0.62 as 62/100; describe a length as 0.62 meters; locate 0.62 on a number line diagram.
4.NF.C.7 — Compare two decimals to hundredths by reasoning about their size. Recognize that comparisons are valid only when the two decimals refer to the same whole. Record the results of comparisons with the symbols >, =, or <, and justify the conclusions, e.g., by using a visual model.
Future Standards
6.EE.A.1 — Write and evaluate numerical expressions involving whole-number exponents.
8.EE.A.3 — Use numbers expressed in the form of a single digit times an integer power of 10 to estimate very large or very small quantities, and to express how many times as much one is than the other. For example, estimate the population of the United States as 3 × 10⁸ and the population of the world as 7 × 10⁹, and determine that the world population is more than 20 times larger.
5.NBT.B.5 — Fluently multiply multi-digit whole numbers using the standard algorithm.
5.NBT.B.6 — Find whole-number quotients of whole numbers with up to four-digit dividends and two-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models.
5.NBT.B.7 — Add, subtract, multiply, and divide decimals to hundredths, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used.
Standards for Mathematical Practice
CCSS.MATH.PRACTICE.MP1 — Make sense of problems and persevere in solving them.
CCSS.MATH.PRACTICE.MP2 — Reason abstractly and quantitatively.
CCSS.MATH.PRACTICE.MP3 — Construct viable arguments and critique the reasoning of others.
CCSS.MATH.PRACTICE.MP4 — Model with mathematics.
CCSS.MATH.PRACTICE.MP5 — Use appropriate tools strategically.
CCSS.MATH.PRACTICE.MP6 — Attend to precision.
CCSS.MATH.PRACTICE.MP7 — Look for and make use of structure.
CCSS.MATH.PRACTICE.MP8 — Look for and express regularity in repeated reasoning. | CommonCrawl |
Immobilization of transaminase from Bacillus licheniformis on copper phosphate nanoflowers and its potential application in the kinetic resolution of RS-α-methyl benzyl amine
Shraddha Lambhiya1 na1,
Gopal Patel1,3 na1 &
Uttam Chand Banerjee ORCID: orcid.org/0000-0002-7363-40421,2
This study reports the isolation and partial purification of transaminase from a wild strain of Bacillus licheniformis. The semi-purified transaminase was immobilized on copper phosphate nanoflowers (NFs) synthesized through a sonochemical method and explored as a nanobiocatalyst. The conditions for the synthesis of transaminase NFs [TA@Cu3(PO4)2NF] were optimized. The synthesized NFs showed a protein loading of 60 ± 5% and an activity yield of 70 ± 5%. The surface morphology of the synthesized hybrid NFs was examined by scanning electron microscopy (SEM) and transmission electron microscopy (TEM), which revealed an average size of about 1 ± 0.5 μm. Fourier-transform infrared (FTIR) spectroscopy was used to confirm the presence of the enzyme inside the immobilized matrix. In addition, circular dichroism and fluorescence spectroscopy confirmed the integrity of the secondary and tertiary structures of the protein in the immobilized material. The transaminase hybrid NFs exhibited enhanced kinetic properties and stability over the free enzyme and showed high reusability. Furthermore, the potential application of the immobilized transaminase hybrid NFs was demonstrated in the resolution of racemic α-methyl benzylamine.
Biocatalysts are extensively used in biotransformation applications, particularly in the environmental and industrial sectors. Cell-free biocatalysts are expected to be used increasingly in biotransformation reactions, because they offer greater substrate specificity and reaction rates, higher tolerance towards elevated substrate concentrations, and easier product separation (Rollin et al. 2013). Enantiomerically pure amines and α/β-amino acids play a crucial role in living organisms, and also serve as intermediates or final products in the agrochemical, chemical, and pharmaceutical industries (Schätzle et al. 2011). The biocatalytic synthesis of chiral amine compounds is therefore an efficient and cost-effective approach, and an attractive alternative to conventional chemical methods (Paetzold and Bäckvall 2005). Among the different biocatalysts, transaminase (TA) has recently received great attention as a promising catalyst, owing to its ability to produce a wide range of optically pure amines and unnatural amino acids (Schätzle et al. 2011; Mathew and Yun 2012; Shin et al. 2013, 2015; Päiviö and Kanerva 2013; Paul et al. 2014). Transaminases (TAs) catalyse the transfer of an amino group from an amino donor to an acceptor, employing either kinetic resolution or asymmetric synthesis (Höhne and Bornscheuer 2012; Nestl et al. 2014). The amino group transfer is mediated by a vitamin B6-based cofactor, pyridoxal 5′-phosphate (PLP), reversibly bound to a catalytic lysine of the enzyme via an imine bond, which assists the reaction by acting as a transient "custodian" of the amino group (Höhne and Bornscheuer 2012; Homaei et al. 2013; Guo and Berglund 2017). However, the extensive industrial application of biocatalysts, despite their desirable characteristics, is often hindered by limited operational stability—for example, degradation of their molecular structure at higher temperatures, at acidic or basic pH, in the presence of organic solvents, and during long-term storage—and by their cumbersome recovery and re-use, which strictly limits their use. In recent times, numerous immobilization techniques have been used to overcome these problems (Homaei et al. 2013; Ahmad and Sardar 2015). At present, various efficient methods are being used for the immobilization of enzymes, such as adsorption, covalent binding, entrapment, and cross-linking (Lei et al. 2006; Sheldon 2007; Brady and Jordaan 2009; Wang et al. 2014; Altinkaynak et al. 2016). Nanobiocatalysis is an emerging field that synergistically fuses nanotechnology with biocatalysis, and offers advantages such as large surface-area-to-mass ratios, control over size on the nanometer scale, a broad range of functionalities, and other attractive electronic, optical, magnetic, and catalytic properties (Kim et al. 2008; Jariwala et al. 2013; Misson et al. 2015; Lin et al. 2016a; Mansouri et al. 2017; Pakapongpan and Poo-arporn 2017). Within the past few decades, various novel approaches, such as single-enzyme nanoparticles, metal–organic frameworks (Chen et al. 2017; Liu et al. 2021; Luan et al. 2021), silica nanocarriers (Du et al. 2013), polymer nanocarriers (Lin et al. 2012), cross-linked enzyme aggregates (Kartal 2016; Care et al. 2017), and enzyme-loaded hybrid organic–inorganic nanostructures (Kharisov 2008), have been reported for the structural and functional modification of enzymes.
Hybrid organic–inorganic NFs are a promising breakthrough in enzyme immobilization; they exhibit enhanced enzymatic activity and stability compared with free enzymes, which may be attributed to the confinement of the enzyme in the core of the NFs and to their high surface area (Ge et al. 2012; Lee et al. 2015; Li et al. 2021). The combined functionalities of the protein and the inorganic material of the hybrid NFs enable their application in biosensors (Gao et al. 2020; Zhu et al. 2017, 2018), biofuel cells (Maleki et al. 2019), and biocatalysis (Rai et al. 2018). A drawback of protein–inorganic hybrid NFs is that they are conventionally synthesized by an incubation method (the reaction mixture is kept for about three days at room temperature), which significantly limits their use in practical applications. Nanoflower synthesis through an ultrafast sonochemical method overcomes this limitation of the conventional method (Batule et al. 2015; Dwivedee et al. 2018).
The present study reports the isolation and purification of transaminase (EC 2.6.1.B16) from Bacillus licheniformis, along with the synthesis of its hybrid enzyme–inorganic nanoflowers [TA@Cu3(PO4)2NF]. The morphology and enzyme activity of the transaminase hybrid nanoflowers were modulated by the duration of ultrasonic treatment, the sonication power, the enzyme/metal salt concentration, and the buffer pH. The surface morphology of the hybrid NFs was characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier-transform infrared (FTIR) spectroscopy, circular dichroism (CD), and fluorescence spectroscopy. The transaminase hybrid NFs exhibit enhanced kinetic properties and stability over the free enzyme and show high reusability. Furthermore, the potential application of the immobilized transaminase hybrid NFs is demonstrated in the kinetic resolution of racemic α-methyl benzylamine.
Materials and method
Bacillus licheniformis MTCC 429 was procured from Institute of Microbial Technology, Chandigarh, India. CaCl2.2H2O, CoCl2.6H2O, CuSO4.5H2O, MnSO4.H2O, benzyl amine, (S)-α-methyl benzyl amine (S-α-MBA), (RS)-α-methyl benzyl amine, pyruvic acid, pyridoxal-5′-phosphate (PLP), phenyl methyl sulfonyl fluoride (PMSF), and methyl tertiary butyl ether (MTBE) were purchased from Sigma Aldrich. Yeast extract, meat extract, K2HPO4, MgSO4, glutamic acid were obtained from HiMedia.
Production and purification of transaminase
An inoculum (2% v/v) of B. licheniformis was grown in the production medium containing galactose (5 g/L), yeast extract (15 g/L), meat extract (15 g/L), K2HPO4 (4 g/L), MgSO4 (0.2 g/L), and glutamic acid (1 g/L); the pH was adjusted to 6. The fermentation was carried out at 37 °C for 28 h and 150 RPM. At the end of fermentation, the cells were harvested by centrifugation (7000 RPM for 20 min at 4 °C) and washed three times with 50 mM Tris–HCl buffer (pH 8).
The wet cells (2 g) were suspended in 10 mL of 50 mM Tris–HCl buffer (pH 8) containing 20 µM PLP and 1.0 mM PMSF. Subsequently, the cells were disrupted by probe sonication for 10 min at 4 °C. The sonicated cell suspension was centrifuged at 10,000 RPM for 30 min at 4 °C, and the cell-free lysate (crude enzyme solution) was collected and stored at 4 °C. Transaminase was then partially purified on a Macro-Prep High Q strong anion-exchange column pre-equilibrated with 50 mM Tris buffer (pH 8) containing 10 µM PLP (Patil et al. 2017b). The cell-free extract was loaded onto the column and unbound protein was washed out with the same buffer until no protein was detected. Protein was eluted with a stepwise NaCl gradient (0.075 M, 0.1 M, 0.15 M, 0.2 M, 0.25 M, 0.3 M, 0.4 M, and 1 M) in 50 mM Tris buffer (pH 8). The protein elution pattern was determined by the Bradford assay (Bradford 1976), and the elution of transaminase was followed by measuring the transaminase activity of all the collected fractions. The purification profile of the enzyme was confirmed by reducing SDS-PAGE (12% polyacrylamide gel) and Coomassie blue staining. The transaminase-active fractions were pooled, concentrated, and washed three times with phosphate-buffered saline (pH 7.4) using a MILLIPORE® Centricon tube (30,000 MWCO) (Patil et al. 2017a).
Activity measurement
Free and immobilized transaminase activities were measured by the copper sulphate–methanol assay (Hwang and Kim 2004). The staining solution was prepared by dissolving 300 mg copper sulphate in 0.5 mL water, followed by the addition of 30 mL methanol. For the assay, benzyl amine (200 µL, 200 mM) was used as the amino donor and pyruvic acid (200 µL, 100 mM) as the amino acceptor, in the presence of pyridoxal 5′-phosphate (100 µL, 0.5 mM) as the cofactor of transaminase, in phosphate buffer (300 µL, 50 mM, pH 7). Enzyme solution (200 µL) was added to the above mixture and incubated at 37 °C for 10 min. The reaction mixture was then cooled at room temperature for 10 min and 200 µL of staining solution was added. It was further centrifuged to remove the precipitate, and the UV absorbance of the blue-coloured supernatant was measured at 650 nm. One unit (U) of transaminase is defined as the amount of enzyme that releases 1 µmol of L-alanine per minute under the assay conditions (Du et al. 2013). In this reaction, the alanine formed gives a blue complex with the Cu2+ ion, with maximum absorbance at 650 nm.
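As a rough illustration of the unit definition given above, the sketch below converts an amount of alanine formed into volumetric activity; the function name and the example numbers are ours and are not taken from the study.

```python
# Illustrative only: 1 U = 1 µmol of L-alanine released per minute
# under the assay conditions described above.

def activity_u_per_ml(alanine_umol, reaction_time_min, enzyme_volume_ml):
    """Volumetric transaminase activity (U/mL) from the alanine formed."""
    return alanine_umol / (reaction_time_min * enzyme_volume_ml)

# Made-up example: 2.6 µmol alanine formed in 10 min by 0.2 mL of enzyme solution
print(activity_u_per_ml(2.6, 10, 0.2))  # 1.3 U/mL
```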
Synthesis of copper phosphate nanoflower with transaminase [TA@Cu3(PO4)2NF]
The transaminase hybrid nanoflowers were synthesized by mixing 3 mL phosphate-buffered saline (pH 7.4) containing 0.25 mg/mL enzyme with 20 µL CuSO4.5H2O in water (120 mM) and mixing vigorously with a vortex mixer. The mixture was then sonicated for a defined time period in a bath sonicator (Power-sonic 505) at room temperature and 40 kHz frequency. After sonication, the transaminase hybrid nanoflowers were centrifuged (3500 RPM, 20 min, 4 °C) and washed twice with phosphate-buffered saline (pH 7.4). The enzyme formed a complex with copper ions, which provided nucleation sites for the growth of primary crystals of copper phosphate. The interaction of transaminase with copper ions leads to the growth of flower-like particles with nanoscale structures (Ge et al. 2012). Previous studies also revealed that this type of hybrid nanoflower improves enzyme stability and activity compared with the free enzyme (Lee et al. 2015; Zhao et al. 2021).
Optimization of immobilization parameters for the synthesis of TA@Cu3(PO4)2NF
Effect of ultrasonic treatment time
The duration of ultrasonic treatment greatly influences the proper formation of the nanoflowers and the encapsulation of the enzyme (Soni et al. 2018). Here we studied the effect of sonication time (5, 10, 15, 20, 25, and 30 min) on NF synthesis by sonicating the reaction mixture of phosphate-buffered saline (pH 7.4) containing 0.25 mg/mL enzyme with 20 µL CuSO4.5H2O (120 mM) in a bath sonicator. The other conditions were kept constant as follows: medium sonication power (170 W), phosphate-buffered saline at pH 7.4, and treatment frequency 40 kHz.
Screening of metal salts
Salts play an important role in the synthesis of NFs. Initially, the specified metal salts (CaCl2.2H2O, CoCl2.6H2O, CuSO4.5H2O, and MnSO4.H2O) were screened individually at a concentration of 120 mM for the synthesis of NFs with the maximum transaminase entrapment and optimum size and shape (Dwivedee et al. 2018). The other conditions were kept constant as follows: medium sonication power (170 W), phosphate-buffered saline (pH 7.4) containing 0.25 mg/mL enzyme, and treatment time 20 min.
Effect of ultra-sonication power
The power of sonication is also one of the main factors that play a significant role in the synthesis of nanoflowers; sometimes a low power is not enough to initiate NF synthesis, while a higher power may disturb the shape and size of the nanoflowers (Dwivedee et al. 2018). The effects of different sonication powers [high (200 W), medium (170 W), and low (140 W)] were therefore studied individually on the synthesis of nanoflowers in a reaction solution comprising enzyme (0.25 mg/mL in PBS, pH 7.4) and metal salt (CuSO4.5H2O, 120 mM), sonicated for 20 min in a bath sonicator.
Optimization of enzyme/salt concentration ratio
Furthermore, the effect of enzyme and salt concentrations on NF formation was studied by sonicating 3 mL PBS (pH 7.4) containing different concentrations of enzyme (0.2 mg/mL, 0.25 mg/mL, and 0.3 mg/mL) and copper salt (0.66 mM, 0.8 mM, and 1 mM final concentration) at the optimized sonication power (170 W) and treatment time (20 min).
Optimization of buffer pH
Various studies have shown that buffer pH plays a crucial role in NF synthesis and influences the size and texture of the nanoflowers (Jung et al. 2009; Luo et al. 2017). Hence, the size of the NFs can be controlled by varying the buffer pH. The effect of pH on NF synthesis was studied at different pH values (3.4, 5.5, 7.4, 8, and 9), keeping all other optimized conditions constant.
Immobilization efficiency
The transaminase hybrid nanoflowers [TA@Cu3(PO4)2NF] were evaluated for their capacity to immobilize the enzyme through the transaminase activity, specific activity, protein loading yield, and activity yield of the immobilized enzyme (Neto et al. 2015), defined as follows:
$$\text{Specific activity (U/mg protein)} = \frac{\text{Activity of immobilized transaminase}}{\text{Amount of protein loaded}},$$
$$\text{Protein loading yield (\%)} = \frac{\text{Amount of protein loaded}}{\text{Amount of protein introduced}} \times 100\%,$$
$$\text{Activity yield (\%)} = \frac{\text{Specific activity of immobilized transaminase}}{\text{Specific activity of free transaminase}} \times 100\%.$$
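A minimal numerical sketch of the three expressions above is given below; the input values are placeholders chosen only to mirror the order of magnitude reported later, not measured data.

```python
# Placeholder numbers, for illustration of the definitions above only.
protein_introduced_mg = 1.05   # protein offered for immobilization
protein_loaded_mg = 0.63       # protein actually captured in the NFs
immobilized_activity_u = 4.0   # activity of the immobilized preparation
free_specific_activity = 9.03  # U/mg, free enzyme

specific_activity = immobilized_activity_u / protein_loaded_mg            # U/mg protein
protein_loading_yield = 100.0 * protein_loaded_mg / protein_introduced_mg
activity_yield = 100.0 * specific_activity / free_specific_activity

print(round(protein_loading_yield, 1))  # 60.0 %
print(round(activity_yield, 1))         # ~70 %
```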
Characterization of transaminase nanoflower
The surface morphology of the hybrid nanoflowers was characterized by scanning electron microscopy (SEM, Hitachi S3400N) and transmission electron microscopy (TEM, FEI Tecnai™). Fourier-transform infrared (FTIR) spectroscopy was used to examine the functional groups of the chemical compounds; spectra were collected on a Perkin Elmer FTIR spectrometer with an ATR synthesis-monitoring system in the 4000–650 cm−1 infrared region. Changes in the secondary structure of the enzyme were investigated using circular dichroism (CD) spectroscopy; CD spectra were recorded on a JASCO J-810 instrument at 25 °C. Spectra Manager software was used to analyse the fraction ratios of the protein secondary structures. Changes in the tertiary structure were determined using fluorescence spectroscopy (Perkin Elmer, LS-50B). The fluorescence spectra were scanned over the 300–450 nm emission range at an excitation wavelength (λex) of 280 nm (Soni et al. 2018; Dwivedee et al. 2018).
Application of TA@Cu3(PO4)2NF in the resolution of (RS)-α-methyl benzyl amine
The reusability of the hybrid transaminase nanoflowers was studied in the kinetic resolution of (RS)-α-methyl benzyl amine. The reaction was carried out with 300 µL α-methyl benzyl amine (500 mM) as the amino donor, 200 µL pyruvic acid (100 mM) as the amino acceptor, 200 µL transaminase hybrid nanoflower suspension (0.5 mg/mL enzyme concentration), 100 µL PLP (0.5 mM) as the cofactor of the enzyme, and 200 µL sodium phosphate buffer (50 mM, pH 7.4) at 37 °C and 150 RPM. The reaction was terminated after 6 h; the mixture was then centrifuged (3500 RPM, 20 min, 4 °C) and the nanoflowers were collected. The residual α-methyl benzyl amine was extracted from the supernatant in MTBE, dried, resuspended in isopropanol, and analysed using chiral HPLC (Chiralcel OD-H column, 0.5 mL/min flow rate, hexane:2-propanol::90:10, 210 nm UV detection, and 25 °C column temperature). The collected TA@Cu3(PO4)2NF were washed with phosphate buffer (50 mM, pH 7.4) and used in the successive cycle. The activity of the enzyme in the first cycle of the kinetic resolution of (RS)-α-methyl benzyl amine was taken as 100%.
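The conversion and enantiomeric excess reported in the Results can be obtained from chiral-HPLC peak areas with a calculation like the one sketched below; the peak areas are invented for illustration and the formulas assume equal detector response for both enantiomers.

```python
# Illustrative calculation from chiral-HPLC peak areas (invented numbers).

def ee_percent(area_major, area_minor):
    """Enantiomeric excess of the residual amine."""
    return 100.0 * (area_major - area_minor) / (area_major + area_minor)

def conversion_percent(area_r_before, area_r_after):
    """Conversion of the racemate when only the (R)-enantiomer is consumed
    (maximum possible is 50%); assumes equal detector response."""
    return 50.0 * (1.0 - area_r_after / area_r_before)

print(round(ee_percent(1000.0, 0.75), 2))        # ~99.85 % ee of the remaining (S)-amine
print(round(conversion_percent(500.0, 1.0), 2))  # ~49.9 % conversion of the racemate
```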
Purification of transaminase
The total protein was measured in all the eluted fractions using the Bradford assay, which gave the protein elution pattern (Fig. 1A). The specific activity of the partially purified transaminase increased from 0.31 U/mg (cell-free extract) to 9.03 U/mg (Macro-Prep High Q active fraction), corresponding to a 29-fold purification in a single step. The representative results of the purification procedure are given in Table 1. The significant increase in TA activity obtained with the Macro-Prep High Q purification step could be a result of the removal of an inhibitory substance present in the cell-free extract. The appearance of a prominent band around 40 kDa in the 0.1 M NaCl elution fraction, which had the highest transaminase activity (13 U/mL), demonstrated the presence of partially purified transaminase in this fraction (Fig. 1B).
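The fold purification quoted above follows directly from the ratio of specific activities; a one-line check using the values stated in the text is shown below.

```python
crude_sa = 0.31     # U/mg, cell-free extract (value stated above)
purified_sa = 9.03  # U/mg, Macro-Prep High Q active fraction (value stated above)

print(round(purified_sa / crude_sa, 1))  # 29.1 -> the ~29-fold purification reported
```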
A Chromatogram of transaminase purification in anion exchange column; B SDS-PAGE analysis of different elution fractions of anion exchange chromatography*; C SEM images of sonication time optimization study on nanoflower synthesis, a 5 min, b 10 min, c 15 min, d 20 min, e 25 min, and f 30 min; D TA activity of hNFs synthesized at different sonication time intervals. *Proteins were separated on a 12% polyacrylamide gel in the presence of 1% SDS. Here, lane 1 represents marker proteins (molecular mass 203, 124, 80, 49.1, 34.8, 28.9, 20.6, 7.1 kDa), lane 2 depicts unbound fraction, lane 3 depicts 0.075 M NaCl elution fraction, lane 4 depicts 0.1 M NaCl elution fraction, lane 5 depicts 0.15 M NaCl elution fraction, lane 6 depicts 0.2 M NaCl elution fraction, lane 7 depicts 0.25 M NaCl elution fraction, lane 8 depicts 0.3 M NaCl elution fraction, lane 9 depicts 0.4 M NaCl elution fraction, lane 10 depicts 1.0 M NaCl elution fraction, and lane 11 depicts cell lysate
Table 1 Purification scheme of TA from B. licheniformis
Synthesis of transaminase nanoflowers [TA@Cu3(PO4)2NF]
Synthesis of hybrid NFs through sonication hastened the process compared with the conventional incubation method. The sonochemical synthesis of the nanoflowers comprised three steps: (i) nucleation and formation of primary crystals; (ii) growth of the crystals; and (iii) formation of the nanoflower assembly by incorporation of metal salts and proteins (Dwivedee et al. 2018; Ge et al. 2012; Lee et al. 2015). To synthesize the protein–inorganic hybrid NFs through sonication, an aqueous PBS solution containing copper(II) sulfate and enzyme was sonicated for a defined time and then centrifuged at 3500 RPM for 20 min, which yielded blue-coloured precipitates. Sonication parameters such as sonication time, ultrasonication power, buffer pH, and enzyme–metal salt concentration were optimized to obtain robust and effective transaminase NFs. Ultrasonication treatment is an innovative method of nanoflower synthesis that considerably decreases the synthesis time: conventional methods take about three days to complete the reaction, whereas the same reaction is completed within 10–20 min by ultrasound treatment (Chung et al. 2018). Here, the sonication method might have permitted the copper phosphate to rapidly complete the self-assembly process by consistently providing high energy to the structure (Ge et al. 2012; Lee et al. 2015; Zhao et al. 2021).
A reaction mixture containing 0.25 mg/mL enzyme with 20 µL CuSO4.5H2O (120 mM) was sonicated in a bath sonicator to investigate the effect of sonication time. The surface morphology and growth steps of the nanoflowers at different time intervals were observed by SEM (Fig. 1C). Sonication of the reaction mixture for 5 min resulted in spherical precipitates of enzyme–metal ion complexes (Fig. 1Ca). At this early stage of growth, only a few protein (transaminase) molecules formed complexes with Cu+2, predominantly through the coordination of amide groups in the protein backbone (Ge et al. 2012; Hua et al. 2016; Lin et al. 2016b). These complexes provided the locations for nucleation of the primary crystals. Petal formation was first observed after 10 min of sonication of the reaction mixture, owing to the successive growth of protein–Cu+2 crystals into large agglomerates (Fig. 1Cb). Nanoflower formation through the anisotropic growth of aggregates of protein nanopetals and primary crystals was observed at 15 min of sonication (Fig. 1Cc). Further sonication of the reaction mixture up to 20 min produced fully bloomed flowers (Fig. 1Cd). The assembly steps for the NFs are as follows: the protein induces the nucleation of the Cu3(PO4)2 crystals, forms the scaffold for the petals, and serves as a 'glue' binding the petals together (Ge et al. 2012; Lin et al. 2016b). Prolonged sonication for 25 min led to distortion of the nanoflower assembly into individual petals (Fig. 1Ce). Sonication of the reaction mixture beyond 25 min resulted in complete disruption of the nanopetal morphology (Fig. 1Cf). In addition to the morphological study, transaminase activity was measured in the samples sonicated for different times. Figure 1D clearly reveals that the enzyme activity increased continuously from 5 to 20 min and was maximal in the 20-min sonicated sample, in which the NFs were completely formed. Beyond 20 min of treatment, the enzyme activity decreased again, possibly because of distortion of the nanoflower assembly into individual petals, as shown in Fig. 1Ce, f (Wang et al. 2014). Based on the observations of nanoflower morphology and the enzyme activity of NFs prepared at different sonication times, 20 min was chosen as the optimum time for nanoflower synthesis.
The morphology of organic–inorganic hybrid nanomaterials and their activities solely depend on the type of metal salts (inorganic components) and their complex formations with the protein (organic substance) (Dwivedee et al. 2018). Different metal salts (CaCl2.2H2O, CoCl2.6H2O, CuSO4.5H2O, and MnSO4.H2O) were evaluated individually for nanomaterial formation and their effect on the TA activity (Fig. 2A, B). Sonication of the reaction mixture with CaCl2.2H2O (Fig. 2Aa) and MnSO4.H2O (Fig. 2Ad) showed no nanoflower formation, but the MnSO4.H2O precipitates had enzyme activity (Fig. 2B). CoCl2.6H2O (Fig. 2Ab) and CuSO4.5H2O (Fig. 2Ac) metal salts formed a flower-like morphology, but only the CuSO4.5H2O NFs exhibited higher enzyme activity and a uniform assembly of NFs (Fig. 2A, B). The results clearly revealed that CuSO4.5H2O was the most appropriate salt for the synthesis of NFs, with the highest transaminase activity, and these results were consistent with previous studies on nanoflower synthesis (Ge et al. 2012).
A SEM images of sonication-mediated transaminase nanoflower synthesis with different metal salt in 120 mM concentration, a CaCl2.2H2O, b CoCl2.6H2O, c CuSO4.5H2O, d MnSO4.H2O, and c1 magnified image of c. B TA activity of hybrid nanoflower synthesis with different metal salts. C SEM images of hybrid nanoflowers synthesized at different power level, a low, b medium, and c high. D TA activity of nanoflower synthesis at different power level
Effect of the power of ultra-sonication
Sonication of a reaction mixture comprising 120 mM copper(II) sulfate and 0.25 mg/mL enzyme was performed in an ordinary bath sonicator (frequency 40 kHz) at different sonic power levels (low-1, medium-2, and high-3) for 20 min (Fig. 2C). SEM analysis of the samples showed that the lower sonication power (140 W) was not sufficient for nanoflower formation (Fig. 2Ca) and the higher sonication power (200 W) caused distortion of the nanoflower assembly (Fig. 2Cc), whereas the medium sonication power (170 W) resulted in the formation of well-structured NFs (Fig. 2Cb). Figure 2D shows the enzyme activity at the different sonication power levels, with 170 W (medium) giving the maximum enzyme activity. These results were consistent with earlier studies on nanoflower synthesis (Dwivedee et al. 2018).
Optimization of the enzyme/salt concentration ratio
The effects of different concentrations of metal salt and enzyme on the morphology of the transaminase NFs and on the enzyme activity are depicted in Fig. 3A and B, respectively. The morphology of the NFs varied distinctly between 0.9 and 6 µm under the reaction conditions (Fig. 3A). SEM analysis of the surface morphology showed that a low enzyme concentration (0.2 mg/mL) with a low metal salt concentration (0.66 mM) formed nanoflowers (Fig. 3Aa). The nitrogen atoms in the amide groups of the protein backbone form complexes with Cu+2; nucleation and growth of the primary crystals originate at these sites to form the separate petals (Ge et al. 2012). Unorganized large petals without clear NFs were observed at a low enzyme concentration (0.2 mg/mL) and a high salt concentration (1.0 mM), presumably reflecting fewer nucleation sites for nanoflower growth at a high salt concentration (Fig. 3Ac). A moderate enzyme concentration (0.25 mg/mL) formed a good nanoflower arrangement with the higher salt concentration (1.0 mM) (Fig. 3Af), whereas the moderate metal salt concentration (0.8 mM) (Fig. 3Ae) showed the highest enzyme activity. A higher enzyme concentration (0.35 mg/mL) with increasing metal salt concentration led to a decrease in nanoflower size from 3 to 1 µm in diameter. The higher enzyme concentration (0.35 mg/mL) with the higher metal salt concentration [1.0 mM copper(II) sulphate] resulted in small-sized NFs with higher enzyme activity and protein loading (Fig. 3Ai). Therefore, an enzyme loading of 0.35 mg/mL and a metal salt concentration of 1.0 mM were chosen for the synthesis of the hybrid NFs. The interactive effect of the enzyme and salt concentrations is shown in a 3D plot (Fig. 3B), which revealed a higher transaminase activity at higher concentrations of both.
A SEM images of hybrid nanoflower [TA@Cu3(PO4)2NF] synthesis with various enzyme (mg/mL)/salt (mM) concentration, a 0.2/0.66, b 0.2/0.8, c 0.2/1.0, d 0.25/0.66, e 0.25/0.8, f 0.25/1.0, g 0.35/0.66, h 0.35/0.8, i 0.35/1.0, a1 magnified image of a, b1 magnified image b, c1 magnified image of c, f1 magnified image of f, g1 magnified image of g, and h1 magnified image of h. B TA activity of TA@Cu3(PO4)2NF with different enzyme/salt concentration
The effect of pH (ranging from 3.4 to 9.0) on nanoflower synthesis is shown in Fig. 4A, B. Nanoflower formation did not occur at the acidic pH of 3.4, consistent with the negligible enzyme activity observed (Fig. 4B). Increasing the pH to 5.5 led to a drastic increase in the TA activity (3.29 U/mL, Fig. 4B). However, the reaction mixture at pH 5.5 displayed a poor NF morphology (Fig. 4Aa), probably because H2PO4−, the major anion under acidic conditions, is less readily converted into the Cu3(PO4)2 crystal needed for nanoflower synthesis than HPO42−, the major anion at neutral or alkaline pH (Jung et al. 2009; Luo et al. 2017). A significant improvement in the formation of NFs occurred at pH 7.4 (Fig. 4Ab), although the TA activity (2.22 U/mL) was lower than that at pH 5.5 (Fig. 4B). A further increase in pH to 8 and 9 led to a reduction in the TA activity. Thus, a buffer pH of 7.4 was used as the optimized pH in subsequent experiments.
A SEM images of transaminase hybrid nanoflowers synthesized at, a pH 5.5, b pH 7.4. B TA activity of TA@Cu3(PO4)2NF synthesis at different pH. C Characterization of hybrid NFs synthesized under optimum condition, a, b SEM images of TA@Cu3(PO4)2NF, c TEM image of TA@Cu3(PO4)2NF, d magnified TEM image of c, e magnified TEM image of d
Characterization of TA@Cu3(PO4)2NF
Based on the above studies, the optimum values for the sonication-mediated synthesis of TA@Cu3(PO4)2NF were as follows: duration of ultrasonication treatment, 20 min; metal salt, CuSO4.5H2O; sonication power, 170 W (medium level); buffer pH, 7.4; copper(II) sulfate, 150 mM; and enzyme, 0.35 mg/mL. TA@Cu3(PO4)2NF synthesized under the optimum conditions showed a protein loading of 60 ± 5% and an activity yield of 70 ± 5%. The SEM images of TA@Cu3(PO4)2NF clearly revealed a flower-like structure (Fig. 4Ca, b) with an average size of 1 ± 0.5 μm. However, the average size of the NFs in the TEM analysis was comparatively low, around 0.7 µm (Fig. 4Cc). The TEM images in Fig. 4Cd, e showed multiple petals, confirming the synthesis of NFs by the aggregation of nanosized petals.
Fourier-transform infrared spectroscopy
The FTIR spectra of the immobilized matrix and of the free and immobilized enzyme were scanned in the region of 650–4000 cm−1 to confirm the presence of transaminase in the NFs (Fig. 5A). The immobilized matrix displayed peaks at 1154 cm−1, 1046 cm−1, and 989 cm−1, recognized as the Cu–OH bending vibration and the asymmetric and symmetric stretching vibrations of PO43−, respectively (He et al. 2015). The amide I and II bands of the transaminase were observed at 1637 cm−1, derived mainly from the C=O stretching vibrations of the peptide linkages, and at 1537 cm−1, attributed primarily to the in-plane N–H bending vibration and C–N stretching vibration of the transaminase (Wu et al. 2017). TA@Cu3(PO4)2NF exhibited the peaks of both the immobilized matrix and the free enzyme, validating the presence of protein in the NFs. The hybrid nanoflower spectrum did not show any new absorption peaks or significant peak shifts in comparison with the immobilized matrix and the free enzyme. These results indicated that the enzyme immobilization occurred through self-assembly in the hybrid NFs, rather than through covalent conjugation.
A Chromatogram of (RS)-α-MBA; B chromatogram of (S)-α-MBA; C chromatogram for catalytic conversion of (R)-α-MBA into acetophenone at different time periods
Circular dichroism spectroscopy
Circular dichroism spectroscopy in the far-UV wavelength range (260–170 nm) covers peptide-bond absorption and can be used to characterize and quantify secondary structural features such as α-helix, β-strand, and unordered structure (Miles and Wallace 2015). The reaction mixture without the enzyme (immobilized matrix) made no contribution to the CD spectrum. Figure 5B depicts the CD spectra of TA@Cu3(PO4)2 and of the free transaminase. Table 2 shows that there were no significant changes in the secondary structure fraction ratios of the immobilized enzyme compared with the free transaminase. These results demonstrated the preservation of the secondary structure of the enzyme inside the immobilized matrix.
Table 2 Secondary structure fraction ratio of free and immobilized enzyme
Fluorescence study
The fluorescence spectrum of the free enzyme solution exhibited an emission maximum (λem) at 332 nm, with an intensity of 248.69 arbitrary units, and TA@Cu3(PO4)2NF exhibited an emission maximum at 330 nm, with an intensity of 172.75 arbitrary units (Fig. 5C). TA@Cu3(PO4)2NF did not show any notable λem shift, indicating preservation of the tertiary structure of the enzyme inside the immobilized matrix. The quenching effect observed is attributed to the Cu(II) metal ion (Plotnikova et al. 2016).
Reusability is a significant factor in the industrial application of enzymes. The reusability of the developed hybrid NFs [TA@Cu3(PO4)2NF] was studied through the kinetic resolution of (RS)-α-methyl benzyl amine (Fig. 6A; Table 3). In the kinetic resolution of the racemic mixture of α-MBA by TA, (R)-α-MBA was deaminated while (S)-α-MBA remained without any structural changes (Fig. 6B; Table 4). The enantioselective catalytic conversion of (R)-α-MBA into acetophenone at different time points (3.5, 5.5, and 6.0 h) is summarized in Table 5. The enzymatic resolution by TA@Cu3(PO4)2NF resulted in 49.93% conversion of (R)-α-MBA to acetophenone with 99.85% enantiomeric excess after 6.0 h of reaction (Fig. 6C). TA@Cu3(PO4)2NF was used in four consecutive cycles for the catalytic conversion of (R)-α-MBA into acetophenone (Fig. 6D). As shown in Fig. 6D, in the initial two cycles the conversion of substrate into product was more than 47%, and it later decreased to 18%. The decrease in substrate conversion might be due to loss of the nanoflower preparation during the recycling or washing process, or to leaching of the enzyme from the nano-preparation; nevertheless, up to 37% of the relative activity was retained after four cycles of reuse. The reusability could be improved by adding a stabilizer during nanoflower synthesis, which is known to reduce leaching of enzymes, and by careful washing of the reaction mixture during recycling. These results are consistent with previous studies in which other researchers immobilized different enzymes for the kinetic resolution of racemic mixtures (Rai et al. 2018; Soni et al. 2018; Dwivedee et al. 2018).
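The relative activity retained per reuse cycle is simply each cycle's conversion normalized to the first cycle; in the sketch below the first and last values follow the text, while the intermediate cycle values are assumed for illustration.

```python
# Per-cycle conversions (%): first and last follow the text; cycles 2-3 are assumed.
conversions = [49.9, 47.5, 30.0, 18.5]

relative_activity = [100.0 * c / conversions[0] for c in conversions]
print([round(r, 1) for r in relative_activity])  # last entry ~37 %, as reported
```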
A FTIR spectra overlay of free enzyme, immobilized enzyme [TA@Cu3(PO4)2NF], and immobilized matrix. B CD spectra overlay of free enzyme, TA@Cu3(PO4)2NF, sonicated enzyme, and immobilized matrix. C The overlay of fluorescence spectra: free enzyme and TA@Cu3(PO4)2NF. D reusability study of TA@Cu3(PO4)2NF
Table 3 HPLC data of (RS)-α-methyl benzyl amine before catalysed by (R)-specific transaminase
Table 4 HPLC data of (RS)-α-methyl benzyl amine after catalysed by (R)-specific transaminase
Table 5 Catalytic conversion of (R)-α-MBA into acetophenone at different time periods
In this study, we established a novel method for the synthesis of transaminase–copper phosphate nanoflowers. The transaminase–copper hybrid nanoflowers were synthesized simply by sonication for 20 min at room temperature. The method produced a hierarchically designed, flower-like morphology with improved stability and enzyme activity. The effects of all the reaction parameters (sonication time, sonication power, buffer pH, and enzyme and metal salt concentrations) on the morphology of the NFs and on the TA activity were systematically investigated and optimized. The resultant hybrid NFs exhibited reusability of up to four cycles, with retention of 37% of the activity. Additionally, TA@Cu3(PO4)2NF was applied in the kinetic resolution of (RS)-α-methyl benzyl amine. Moreover, the developed method could be applied to promptly synthesize NFs for numerous applications in enzyme catalysis, biofuel cells, and biosensors, and should broaden the exploitation of NFs in the various fields of biotechnology.
All data generated or analysed during this study are included in this published article.
Ahmad R, Sardar M (2015) Enzyme immobilization: an overview on nanoparticles as immobilization matrix. Biochem Anal Biochem 04:1–8. https://doi.org/10.4172/2161-1009.1000178
Altinkaynak C, Tavlasoglu S, Özdemir N, Ocsoy I (2016) A new generation approach in enzyme immobilization: organic-inorganic hybrid nanoflowers with enhanced catalytic activity and stability. Enzyme Microb Technol 93–94:105–112. https://doi.org/10.1016/j.enzmictec.2016.06.011
Batule BS, Park KS, Il KM, Park HG (2015) Ultrafast sonochemical synthesis of protein-inorganic nanoflowers. Int J Nanomed 10:137–142. https://doi.org/10.2147/IJN.S90274
Bradford MM (1976) A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem 72:248–254. https://doi.org/10.1016/0003-2697(76)90527-3
Brady D, Jordaan J (2009) Advances in enzyme immobilisation. Biotechnol Lett 31:1639–1650. https://doi.org/10.1007/s10529-009-0076-4
Care A, Petroll K, Gibson ESY et al (2017) Solid-binding peptides for immobilisation of thermostable enzymes to hydrolyse biomass polysaccharides. Biotechnol Biofuels. https://doi.org/10.1186/s13068-017-0715-2
Chen Y, Hong S, Fu CW et al (2017) Investigation of the mesoporous metal-organic framework as a new platform to study the transport phenomena of biomolecules. ACS Appl Mater Interfaces 9:10874–10881. https://doi.org/10.1021/acsami.7b00588
Chung M, Nguyen TL, Tran TQN, Yoon HH et al (2018) Ultrarapid sonochemical synthesis of enzyme-incorporated copper nanoflowers and their application to mediatorless glucose biofuel cell. Appl Surf Sci 429:203–209
Du X, Shi B, Liang J et al (2013) Developing functionalized dendrimer-like silica nanoparticles with hierarchical pores as advanced delivery nanocarriers. Adv Mater. https://doi.org/10.1002/adma.201302189
Dwivedee BP, Soni S, Laha JK, Banerjee UC (2018) Self assembly through sonication: an expeditious and green approach for the synthesis of organic-inorganic hybrid nanopetals and their application as biocatalyst. ChemNanoMat 4:670–681. https://doi.org/10.1002/cnma.201800110
Gao L, Wang Z, Liu Y et al (2020) Co-immobilization of metal and enzyme into hydrophobic nanopores for highly improved chemoenzymatic asymmetric synthesis. ChemCommun 56:13547–13550
Ge J, Lei J, Zare RN (2012) Protein-inorganic hybrid nanoflowers. Nat Nanotechnol 7:428–432. https://doi.org/10.1038/nnano.2012.80
Guo F, Berglund P (2017) Transaminase biocatalysis: optimization and application. Green Chem 19:333–360. https://doi.org/10.1039/C6GC02328B
He G, Hu W, Li CM (2015) Spontaneous interfacial reaction between metallic copper and PBS to form cupric phosphate nanoflower and its enzyme hybrid with enhanced activity. Colloids Surfaces B Biointerfaces 135:613–618. https://doi.org/10.1016/j.colsurfb.2015.08.030
Höhne M, Bornscheuer UT (2012) Application of transaminases. Enzym Catal Org Synth Third Ed 2:779–820. https://doi.org/10.1002/9783527639861.ch19
Homaei AA, Sariri R, Vianello F, Stevanato R (2013) Enzyme immobilization: an update. J Chem Biol 6:185–205. https://doi.org/10.1007/s12154-013-0102-9
Hua X, Xing Y, Zhang X (2016) Controlled synthesis of an enzyme-inorganic crystal composite assembled into a 3D structure with ultrahigh enzymatic activity. RSC Adv 6:46278–46281. https://doi.org/10.1039/c6ra04664a
Hwang BY, Kim BG (2004) High-throughput screening method for the identification of active and enantioselective ω-transaminases. Enzyme Microb Technol 34:429–436. https://doi.org/10.1016/j.enzmictec.2003.11.019
Jariwala D, Sangwan VK, Lauhon LJ et al (2013) Carbon nanomaterials for electronics, optoelectronics, photovoltaics, and sensing. Chem Soc Rev 42:2824–2860. https://doi.org/10.1039/c2cs35335k
Jung SH, Oh E, Lim H et al (2009) Shape-selective fabrication of zinc phosphate hexagonal bipyramids via a disodium phosphate-assisted sonochemical route. Cryst Growth Des 9:3544–3547. https://doi.org/10.1021/cg900287h
Kartal F (2016) Enhanced esterification activity through interfacial activation and cross-linked immobilization mechanism of Rhizopus oryzae lipase in a nonaqueous medium. Biotechnol Prog 32:899–904. https://doi.org/10.1002/btpr.2288
Kharisov B (2008) A review for synthesis of nanoflowers. Recent Pat Nanotechnol 2:190–200. https://doi.org/10.2174/187221008786369651
Kim J, Grate JW, Wang P (2008) Nanobiocatalysis and its potential applications. Trends Biotechnol 26:639–646. https://doi.org/10.1016/j.tibtech.2008.07.009
Lee SW, Cheon SA, Il KM, Park TJ (2015) Organic-inorganic hybrid nanoflowers: types, characteristics, and future prospects. J Nanobiotechnol 13:1–10
Lei C, Shin Y, Magnuson JK et al (2006) Characterization of functionalized nanoporous supports for protein confinement. Nanotechnology 17:5531–5538. https://doi.org/10.1088/0957-4484/17/22/001
Li Y, Luan P, Zhao L et al (2021) Purification and immobilization of His-tagged organophosphohydrolase on yolk−shell Co/C@SiO2@Ni/C nanoparticles for cascade degradation and detection of organophosphates. Biochem Eng J 167:107895
Lin M, Lu D, Zhu J et al (2012) Magnetic enzyme nanogel (MENG): a universal synthetic route for biocatalysts. ChemComm 48:3315–3317. https://doi.org/10.1039/c2cc30189j
Lin Y, Chen Z, Liu XY (2016a) Using inorganic nanomaterials to endow biocatalytic systems with unique features. Trends Biotechnol 34:303–315. https://doi.org/10.1016/j.tibtech.2015.12.015
Lin Z, Xiao Y, Yin Y et al (2016b) Correction to facile synthesis of enzyme-inorganic hybrid nanoflowers and its application as a colorimetric platform for visual detection of hydrogen peroxide and phenol. ACS Appl Mater Interfaces 8:13180–13180. https://doi.org/10.1021/acsami.6b04715
Liu Y, Wang Z, Guo N et al (2021) Polydopamine-encapsulated dendritic organosilica nanoparticles as amphiphilic platforms for highly efficient heterogeneous catalysis in water. Chin J Chem 39(7):1975–1982
Luan P, Liu Y, Li Y et al (2021) Aqueous chemoenzymatic one-pot enantioselective synthesis of tertiary α-aryl cycloketones via Pd-catalyzed C–C formation and enzymatic C=C asymmetric hydrogenation. Green Chem 23:1960–1964
Luo YK, Song F, Wang XL, Wang YZ (2017) Pure copper phosphate nanostructures with controlled growth: a versatile support for enzyme immobilization. CrystEngComm 19:2996–3002. https://doi.org/10.1039/c7ce00466d
Maleki N, Kashanian S, Nazari M, Shahabadi N (2019) A novel and enhanced membrane-free performance of glucose/O2 biofuel cell integrated with biocompatible laccase nanoflower biocathode and glucose dehydrogenase bioanode. IEEE Sens J. https://doi.org/10.1109/JSEN.2019.2937814
Mansouri N, Babadi AA, Bagheri S, Hamid SBA (2017) Immobilization of glucose oxidase on 3D graphene thin film: novel glucose bioanalytical sensing platform. Int J Hydrogen Energy 42:1337–1343. https://doi.org/10.1016/j.ijhydene.2016.10.002
Mathew S, Yun H (2012) ω-Transaminases for the production of optically pure amines and unnatural amino acids. ACS Catal 2:993–1001. https://doi.org/10.1021/cs300116n
Miles AJ, Wallace BA (2015) Circular dichroism spectroscopy for protein characterization. Biophysical characterization of proteins in developing biopharmaceuticals. Elsevier, Amsterdam, pp 109–137
Misson M, Zhang H, Jin B (2015) Nanobiocatalyst advancements and bioprocessing applications. J R Soc Interface 12:20140891. https://doi.org/10.1098/rsif.2014.0891
Nestl BM, Hammer SC, Nebel BA, Hauer B (2014) New generation of biocatalysts for organic synthesis. Angew Chem Int Ed 53(12):3070–3095. https://doi.org/10.1002/anie.201302195
Neto W, Schürmann M, Panella L et al (2015) Immobilisation of ω-transaminase for industrial application: screening and characterisation of commercial ready to use enzyme carriers. J Mol Catal B Enzym 117:54–61. https://doi.org/10.1016/j.molcatb.2015.04.005
Paetzold J, Bäckvall JE (2005) Chemoenzymatic dynamic kinetic resolution of primary amines. J Am Chem Soc 127:17620–17621. https://doi.org/10.1021/ja056306t
Päiviö M, Kanerva LT (2013) Reusable ω-transaminase sol–gel catalyst for the preparation of amine enantiomers. Process Biochem 48:1488–1494. https://doi.org/10.1016/j.procbio.2013.07.021
Pakapongpan S, Poo-arporn RP (2017) Self-assembly of glucose oxidase on reduced graphene oxide-magnetic nanoparticles nanocomposite-based direct electrochemistry for reagentless glucose biosensor. Mater Sci Eng C 76:398–405. https://doi.org/10.1016/j.msec.2017.03.031
Patil MD, Dev MJ, Shinde AS et al (2017a) Surfactant-mediated permeabilization of Pseudomonas putida KT2440 and use of the immobilized permeabilized cells in biotransformation. Process Biochem 63:113–121. https://doi.org/10.1016/j.procbio.2017.08.002
Patil MD, Dev MJ, Tangadpalliwar S et al (2017b) Ultrasonic disruption of Pseudomonas putida for the release of arginine deiminase: kinetics and predictive models. Bioresour Technol 233:74–83. https://doi.org/10.1016/j.biortech.2017.02.074
Paul CE, Rodríguez-Mata M, Busto E et al (2014) Transaminases applied to the synthesis of high added-value enantiopure amines. Org Process Res Dev 18:788–792. https://doi.org/10.1021/op4003104
Plotnikova OA, Melnikov GV, Melnikov AG, Kovalenko AV (2016) Comparative studies of the effects of copper sulfate and zinc sulfate on serum albumins. Third International Symposium on Optics and Biophotonics and Seventh Finnish-Russian Photonics and Laser Symposium (PALS). International Society for Optics and Photonics, Bellingham, p 99170Z
Rai SK, Narnoliya LK, Sangwan RS, Yadav SK (2018) Self-assembled hybrid nanoflowers of manganese phosphate and l-arabinose isomerase: a stable and recyclable nanobiocatalyst for equilibrium level conversion of d-galactose to d-tagatose. ACS Sustain Chem Eng 6:6296–6304. https://doi.org/10.1021/acssuschemeng.8b00091
Rollin JA, Tam TK, Zhang YHP (2013) New biotechnology paradigm: cell-free biosystems for biomanufacturing. Green Chem 15:1708–1719. https://doi.org/10.1039/c3gc40625c
Schätzle S, Steffen-Munsberg F, Thontowi A et al (2011) Enzymatic asymmetric synthesis of enantiomerically pure aliphatic, aromatic and arylaliphatic amines with (R)-selective amine transaminases. Adv Synth Catal 353:2439–2445. https://doi.org/10.1002/adsc.201100435
Sheldon RA (2007) Cross-linked enzyme aggregates (CLEAs): stable and recyclable biocatalysts. Biochem Soc Trans 35:1583–1587
Shin G, Mathew S, Shon M et al (2013) One-pot one-step deracemization of amines using ω-transaminases. Chem Commun 49:8629–8631
Shin G, Mathew S, Yun H (2015) Kinetic resolution of amines by (R)-selective omega-transaminase from Mycobacterium vanbaalenii. J Ind Eng Chem 23:128–133. https://doi.org/10.1016/j.jiec.2014.08.003
Soni S, Dwivedee BP, Banerjee UC (2018) An ultrafast sonochemical strategy to synthesize lipase-manganese phosphate hybrid nanoflowers with promoted biocatalytic performance in the kinetic resolution of β-aryloxyalcohols. ChemNanoMat 4:1007–1020
Wang M, Bao WJ, Wang J et al (2014) A green approach to the synthesis of novel "Desert rose stone"-like nanobiocatalytic system with excellent enzyme activity and stability. Sci Rep 4:1–8. https://doi.org/10.1038/srep06606
Wu Z, Li H, Zhu X et al (2017) Using laccases in the nanoflower to synthesize viniferin. Catalysts 7:188. https://doi.org/10.3390/catal7060188
Zhao B, Zheng K, Liu C et al (2021) Bio-dissolution process and mechanism of copper phosphate hybrid nanoflowers by Pseudomonas aeruginosa and its bacteria-toxicity in life cycle. J Hazard Mater 419:126494
Zhu X, Huang J, Liu J et al (2017) A dual enzyme-inorganic hybrid nanoflower incorporated microfluidic paper-based analytic device (μPAD) biosensor for sensitive visualized detection of glucose. Nanoscale 9:5658–5663. https://doi.org/10.1039/c7nr00958e
Zhu J, Wen M, Wen W et al (2018) Recent progress in biosensors based on organic-inorganic hybrid nanoflowers. Biosens Bioelectron 120:175–187. https://doi.org/10.1016/j.bios.2018.08.058
GP and SL gratefully acknowledge the Department of Biotechnology (DBT), New Delhi, India, for providing the fellowship.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Shraddha Lambhiya and Gopal Patel contributed equally to this work
Department of Pharmaceutical Technology (Biotechnology), National Institute of Pharmaceutical Education and Research, Sector-67, S.A.S. Nagar, 160062, Punjab, India
Shraddha Lambhiya, Gopal Patel & Uttam Chand Banerjee
Departments of Biotechnology, Amity University, Sector 82A, IT City, International Airport Road, Mohali, 5300016, India
Uttam Chand Banerjee
Sagar Institute of Pharmacy and Technology, Gandhi Nagar Campus Opposite International Airport, Bhopal, 462036, MP, India
Gopal Patel
Shraddha Lambhiya
UCB contributed the idea and overall outline of the work. SL and GP curated and performed all the experiments and wrote the manuscript. All authors contributed to data analysis and proof-reading of the manuscript. All authors read and approved the final manuscript.
Correspondence to Uttam Chand Banerjee.
This article does not contain any studies with human participants or animals performed by any of the authors.
All authors declare that he/she has no competing interests.
Lambhiya, S., Patel, G. & Banerjee, U.C. Immobilization of transaminase from Bacillus licheniformis on copper phosphate nanoflowers and its potential application in the kinetic resolution of RS-α-methyl benzyl amine. Bioresour. Bioprocess. 8, 126 (2021). https://doi.org/10.1186/s40643-021-00474-3
Transaminase
Hybrid nanoflowers (NFs)
Nanobiocatalyst | CommonCrawl |
How does the hydrogen atom know which frequencies it can emit photons at?
At university, I was shown the Schrodinger Equation, and how to solve it, including in the $1/r$ potential, modelling the hydrogen atom.
And it was then asserted that the differences between the eigenvalues of the operator were the permitted frequencies of emitted and absorbed photons.
This calculation agrees with experimentally measured spectral lines, but why would we expect it to be true, even if we accept that the electron moves according to the Schrodinger equation?
After all, there's no particular reason for an electron to be in an eigenstate.
What would make people think it was anything more than a (very suggestive) coincidence?
quantum-mechanics schroedinger-equation hilbert-space atomic-physics hydrogen
John Lawrence Aspden
migrated from math.stackexchange.com Apr 6 '14 at 17:40
$\begingroup$ A basic "premise" of quantum mechanics is that a system can only be in discrete physical states, which requires a specific amount of energy (within some "uncertainty") to change from one state to another. [This is the interpretation of observing spectral lines, rather than all possible frequencies of light.] This was the challenge in formulating the theory, since "classical" mechanics assumes quantities related to physical systems can change continuously, so any amount of energy can change a system from one state to another. $\endgroup$ – colormegone Apr 6 '14 at 17:03
$\begingroup$ In quantum mechanics "state" often means the function $\psi$. This can be any normalizable function, not just Hamiltonian eigenfunction and it can change continuously. You explanation would work if allowed "states" would be discrete as in the old quantum theory of Bohr. $\endgroup$ – Ján Lalinský Apr 6 '14 at 21:11
This calculation agrees with experimentally measured spectral lines, but why would we expect it to be true, even if we accept that the electron moves according to the Schrodinger equation? After all, there's no particular reason for an electron to be in an eigenstate.
Good question! The function $\psi$ does not need to be Hamiltonian eigenfunction. Whatever the initial $\psi$ and whatever the method used to find future $\psi(t)$, the time-dependent Schroedinger equation $$ \partial_t \psi = \frac{1}{i\hbar}\hat{H}\psi $$ implies that the atom will radiate EM waves with spectrum sharply peaked at the frequencies given by the famous formula $$ \omega_{mn} = \frac{E_m-E_n}{\hbar}, $$
where $E_m$ are eigenvalues of the Hamiltonian $\hat{H}$ of the atom.
Here is why. The radiation frequency is given by the frequency of oscillation of the expected average electric moment of the atom
$$ \boldsymbol{\mu}(t) = \int\psi^*(\mathbf r,t) q\mathbf r\psi(\mathbf r,t) d^3\mathbf r $$ The time evolution of $\psi(\mathbf r,t)$ is determined by the Hamiltonian $\hat{H}$. The simplest way to find an approximate value of $\boldsymbol{\mu}(t)$ is to expand $\psi$ into eigenfunctions of $\hat{H}$, which depend on time as $e^{-i\frac{E_n t}{\hbar}}$. There will be many terms. Some are products of an eigenfunction with itself, and the contribution of these vanishes. Some are products of two different eigenfunctions. These latter terms depend on time as $e^{-i\frac{(E_n-E_m)t}{\hbar}}$ and make $\boldsymbol{\mu}$ oscillate at the frequency $(E_m-E_n)/\hbar$. Schroedinger explained the Ritz combination principle this way, without any quantum jumps or discrete allowed states; $\psi$ changes continuously in time. An imperfection of this theory is that the function oscillates indefinitely and is not damped; in other words, this theory does not account for spontaneous emission.
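For a quick numerical check (a minimal sketch, not part of the derivation above; it simply assumes the standard nonrelativistic hydrogen eigenvalues $E_n = -13.6\ \mathrm{eV}/n^2$), the difference formula reproduces the familiar Balmer wavelengths:

# Minimal sketch: spectral lines from differences of hydrogen eigenvalues.
# Assumes the standard result E_n = -13.6 eV / n^2 (not derived here).
E0_EV = 13.605693          # hydrogen ground-state binding energy, eV
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
C_NM_S = 2.99792458e17     # speed of light, nm/s

def energy(n):
    """Eigenvalue of the hydrogen Hamiltonian for principal quantum number n, in eV."""
    return -E0_EV / n ** 2

def wavelength_nm(n_upper, n_lower):
    """Wavelength of the emitted light, whose frequency is (E_upper - E_lower) / h."""
    nu = (energy(n_upper) - energy(n_lower)) / H_EV_S   # frequency in Hz
    return C_NM_S / nu

# Balmer series (transitions ending on n = 2): ~656, 486, 434, 410 nm
for n in range(3, 7):
    print(f"{n} -> 2: {wavelength_nm(n, 2):.1f} nm")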
Ján Lalinský
$\begingroup$ This looks like a very good answer! Thanks. I'm going to think about it for a bit and see what pictures I can make out of it. $\endgroup$ – John Lawrence Aspden Apr 6 '14 at 22:05
$\begingroup$ Yes, I think I see, so if we find any way to measure the electric moment, eigenvalue differences will appear in the values we get. And since the electric moment of the ground state is zero, and we expect the electric moment to be somehow connected to the field, getting a photon from the atom is effectively a measurement. Thank you very much! $\endgroup$ – John Lawrence Aspden Apr 10 '14 at 20:49
$\begingroup$ I think you're still being little bit confused. That's OK, this is rarely explained well in textbooks. The point of the above is that the average dipole moment of many atoms will oscillate in a complicated way but the oscillation will consist mainly of frequencies given by the difference formula. Oscillating electric moment is connected to radiation in a well-known way (from electromagnetic theory): the electric field of the system at great distance $r$ is given schematically by $E(r,t) \approx C\frac{\ddot{p}(t)}{r}$. $\endgroup$ – Ján Lalinský Apr 10 '14 at 21:37
$\begingroup$ The frequency of oscillation of the moment $p$ translates directly into the frequency of radiation. There are no photons in this explanation. $\endgroup$ – Ján Lalinský Apr 10 '14 at 21:38
$\begingroup$ @Lalinsky good answer. You are the only one who is remotely correct on this question. The other answers are all such nonsense. $\endgroup$ – Marty Green Dec 9 '14 at 13:38
The idea here is increasingly complex depending on how deep into modern physics you want to delve, but also key to understanding quantum mechanics. So, I'll give a bit deeper explanation than it seems you've seen, but there's plenty more.
It's understood that a photon acts both as a particle and a wave. As a particle it has an amount of energy associated with it, and as a wave it has a wavelength and frequency. These two values are directly related; you can know one from the other.
A good first thought experiment is to consider a particle in a hypothetical one-dimensional box. It can only bounce back and forth along one direction and within a finite distance. It will settle into any one of a number of quantized states that have a wavelength that "fits," as I'm guessing you understand from your studies.
Extend that idea to an electron, then, which is confined to "orbit" the atom. It is three dimensional and the forces involved are not infinite potential barriers, but the idea of the particle's wave settling into a frequency that "fits" still holds.
Now, when an atom absorbs or emits a photon, the energy is absorbed into or emitted by one of the quantized electrons, causing it to gain or lose energy equal to that of the photon. Since the electron can only have discrete amounts of energy, we can calculate the energy of the photons emitted!
$\begingroup$ As I understand it, the particle in the box doesn't settle into an eigenstate, it just continues obeying the schrodinger equation, and so it stays in whatever mix of states it starts in, modulo constant factors. $\endgroup$ – John Lawrence Aspden Apr 6 '14 at 17:51
$\begingroup$ Well, yes, but the particle achieves a local minimum in potential energy by settling into a wave that has a discrete number of half wavelengths within the boundaries of the box. $\endgroup$ – user140858 Apr 6 '14 at 17:56
$\begingroup$ I don't understand this. The particle in a box setup has the potential either 0 or infinity, and the particle is never in the infinity bit. $\endgroup$ – John Lawrence Aspden Apr 6 '14 at 18:00
$\begingroup$ Yes, that's true for the potential caused by the box. What I mean is that the particle interacts with itself, and that a wavelength that doesn't fit neatly actually interferes with itself. In other words, it creates a higher potential. In doing so, the particle adjusts itself to fit the box, if that makes any sense. $\endgroup$ – user140858 Apr 6 '14 at 18:17
$\begingroup$ user1140858, your idea of "particle adjusting to the box" is interesting but it is not a standard part of the theory. Such behaviour is in contradiction to the Schroedinger equation; once the function $\psi$ is a superposition of many Hamiltonian eigenfunctions, it will remain such. Another theory (equation?) is needed to explain such adjusting. Spontaneous emission of light is very closely related. $\endgroup$ – Ján Lalinský Apr 6 '14 at 21:15
Your puzzlement arises because you are putting the cart in-front of the horse. The cart is the theoretical model of quantum mechanics and the horse is the data. As your question is migrated from math.SE one can understand this orientation, which is dominant also here.
The whole theoretical package of Quantum Mechanics did not arrive by a seemingly holy inspiration (as some physical theories having to do with apples are said to have done), but was a slow accumulation of observations that forced physicists to think outside of the box of the mathematics used in classical mechanics and thermodynamics.
It started with the table of elements, the photoelectric effect, the black body radiation, the spectral lines in atomic spectra. All these could not be squeezed within the classical models. Bohr tried with his model.
The photoelectric effect forced thinking into light as particles, (once more , as Newton had proposed particles), the photons.
Then it was known and expected in classical electromagnetism that an accelerating electron would lose energy in the form of radiation into light (so photons enter into any radiation). This would be a continuous spectrum. Classical mechanics and classical electromagnetism could not produce the spectral lines, because by the classical equations the electron should fall on the nucleus emitting a continuous spectrum in the field of the protons, not the distinct spectral lines which were observed. So Bohr postulated that the electron stayed in orbits with specific energy and could only lose energy in photons (the classical expectation) in quantized steps. This explained the phenomena mathematically by fitting series to the spectral lines, but was not satisfactory because it gave no framework for the other observations listed above, of forced, quantized states for energy changes in the atomic micro framework.
I explained the particular reason, if it were not in a stable orbit there would not be spectral lines to be observed and we would not have atoms, and be here discussing this in the physical form we have.
The postulates of Quantum Mechanics imposed on the mathematical solution of the Schrodinger equation brought logic and a causal path to the random efforts for a theoretical framework, outside the box of classical theories. So the appropriation of the differential equation now called the "Schrodinger equation" to interpret the data was not a coincidence but a great leap of thinking outside the box of classical theories. By imposing the physical postulates on the interpretation of the solutions, the fortuitous fits of the Bohr model series could be understood as derived from a formal mathematical physical theory.
anna v
$\begingroup$ This is an excellent answer and it most directly answers the question on the level at which it was asked. $\endgroup$ – J... Apr 7 '14 at 8:58
$\begingroup$ @Anna v A very interesting answer, thanks a lot. To the OP: if you like the sound of this answer, you may find an excellent introduction of the mathematical description of quantum mechanics from the experimental principles done in a book by A. Connes, available on his website alainconnes.org/en/downloads.php (Non-Commutative geometry, chapter 1, note that the rest of the book is absolutely not understandable for me, but the first chapter is really nice, and gives a short and profound discussion :-) $\endgroup$ – FraSchelle Apr 9 '14 at 8:32
Conservation of energy.
If we measure the energy of an atom, we will always report an eigenvalue, because we are forcing it into an eigenstate (this is something like the quantum mechanical definition of measurement). Now suppose that we measure the energy of an atom twice, before and after it emits a photon. For conservation of energy to hold, the energy of the photon must be the difference of the two eigenvalues.
It may be that the atom is not in an eigenstate exactly when it emits the photon, but an emission with energy level not a difference of eigenvalues would produce apparent contradictions as soon as we attempted to measure the change in energy.
you-sir-33433
$\begingroup$ But we don't normally measure the energy of atoms before and after they emit photons. So why would they behave as if we did? $\endgroup$ – John Lawrence Aspden Apr 6 '14 at 17:57
$\begingroup$ @JohnLawrenceAspden This is much less strange than it would be if every atom remembered if it had ever had its energy measured, and anticipated whether it would ever be measured again. The point is that if we measured, certain changes would be nonsensical from the perspective of conservation of energy. Of course, this is an approximate picture (and, as Ján Lalinský points out, the answer is approximate as well). $\endgroup$ – you-sir-33433 Apr 6 '14 at 23:26
To have emission (or absorption) of photons you must have a Hamiltonian that includes those degrees of freedom also. If your system consists of (a) the electromagnetic field and (b) a hydrogen atom, you can specify the state with (a) for each frequency, the number of photons with that frequency and (b) the state of the hydrogen atom, in your favorite way, for example $1s$ or $2p$. You could write $\vert n_\omega=1, 1s\rangle$ for a state with 1 photon of frequency $\omega$ and the atom in the state 1s.
To calculate the probability for a transition between the states $\vert i\rangle$, meaning no photons and hydrogen atom in initial state $i$, and $\vert n_\omega =1, f\rangle$ where $f$ is some final state, you need to calculate an inner product like $$P = \langle n_\omega =1, f|O|i\rangle$$ where $O$ is some operator. The probability for the transition is then something proportional to $|P|^2$. The most significant contribution comes from the electric dipole moment operator and this is a standard calculation in textbooks. The result is that $P$ is proportional to $$P\propto \frac{\sin(t(\omega + \omega_f - \omega_i)/2)}{(\omega + \omega_f - \omega_i)/2}$$ where $\omega_f, \omega_i$ are related to the final and initial energies by $\hbar\omega_f = E_f$ and similarly for $i$, and $t$ is the elapsed time. Clearly $P$ can be non-zero even if energy isn't conserved.
However, in the limit $t \to \infty$, $|P|^2$ approaches something proportional to $t\delta(\omega + \omega_f - \omega_i)$ where the $\delta$ is a Dirac delta. This is where conservation of energy comes from. The statement that atoms can emit photons only at specific frequencies is false if taken literally, each spectral line comes with a natural width corresponding to that $P$ for finite $t$ is non-zero even away from $\Delta E = 0$.
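To see numerically how that factor enforces energy conservation, here is a minimal sketch (prefactors such as the dipole matrix element and field amplitude are omitted, and the detuning grid is arbitrary):

import numpy as np

# Envelope of the transition amplitude, P ∝ sin(t*Δ/2) / (Δ/2), with Δ = ω + ω_f − ω_i.
def envelope_sq(delta, t):
    half = delta * t / 2.0
    # |P|^2 up to constants; np.sinc(x) = sin(pi*x)/(pi*x), so this is (sin(half)/ (half/t))^2
    return (t * np.sinc(half / np.pi)) ** 2

detunings = [0.0, 0.5, 1.0, 2.0]   # Δ in arbitrary angular-frequency units
for t in (1.0, 10.0, 100.0):
    rel = [envelope_sq(d, t) / envelope_sq(0.0, t) for d in detunings]
    print(f"t = {t:6.1f}:", [f"{r:.4f}" for r in rel])
# As t grows, the weight piles up at Δ = 0, i.e. ω ≈ ω_i − ω_f: energy conservation
# emerges in the long-time limit, while for finite t the line has a natural width.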
You can find a detailed calculation of $P$ in any textbook on quantum mechanics. I learned from Townsend's A Modern Approach to Quantum Mechanics, but I think you will find this calculation in Sakurai's or Griffiths's books also.
Robin Ekman
$\begingroup$ This looks like a really interesting answer. I'm going to think about it for a bit. Thanks. $\endgroup$ – John Lawrence Aspden Apr 6 '14 at 19:42
Average Variable Cost Calculator
What is the average variable cost?
How to calculate the average variable cost?
The average variable cost formula
The average variable cost curve
Whether you are writing your business plan or already have a production process, you need to be familiar with the costs that arise from that process. One of these costs is the average variable cost, and you can calculate it straightforwardly with the help of our average variable cost calculator.
But why is this information important to us for our business at all?
Well, for a simple reason. A smaller production process also means lower variable costs, and vice versa. If we increase production, we must know that our expenses will also increase. In this way, we can calculate the extent to which we can increase output while making sure we have enough money to cover the costs incurred in the process.
Also, this calculation can help us stop the production process promptly or reduce costs in a certain period to avoid expenses that the company cannot bear.
Average variable costs are costs per unit of output. They appear only with the beginning of production and disappear with the cessation of production. With each change in production volume, the total variable costs also change, while the unit variable costs remain the same.
To calculate the average variable cost, you must first find out what your total variable cost is. The total variable costs directly depend on the number of goods produced in a given period. For example, the hourly cost of your employee in the production process or the cost of electricity required for each production facility is included in the total variable costs. To get the amount of the total variable costs, add up all the marginal costs for each of the production units.
Average variable costs are equal to the ratio of total variable costs to production volume. The formula used to calculate the average variable cost is:
AVC = \frac{VC}{Q}
VC – total variable costs
Q – the amount of production in a given period.
And here is what it looks like in the example:
Let's say we are engaged in the production of socks. Our production factory employs ten employees, and we know their schedule. They work on machines that use electricity exclusively. At the end of the production process, they pack everything neatly in ready-made packaging, then in large cardboard boxes and forward it to the transport center.
After we add up all the costs we have every month for this production facility, we get the amount of $ 20,000. Annually, that amount is $ 240,000. These are our total variable costs (VC).
In the production process, we produce 10,000 pairs of socks per month. In a year, we will reach the number of 120,000 pairs of socks. It is our annual production volume.
With the help of our calculator for calculating the average variable costs, you can easily get the information that the average variable cost for this example is:
AVC = \frac{240000}{120000}
AVC = 2
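The same arithmetic is easy to reproduce in a few lines of code (a sketch using the hypothetical sock-factory numbers from the example above):

# Minimal sketch of the AVC formula with the example numbers above.
def average_variable_cost(total_variable_cost, quantity):
    """AVC = VC / Q."""
    if quantity <= 0:
        raise ValueError("production volume must be positive")
    return total_variable_cost / quantity

vc_per_year = 240_000       # total variable costs, $ per year
pairs_per_year = 120_000    # production volume, pairs of socks per year
print(average_variable_cost(vc_per_year, pairs_per_year))   # -> 2.0 ($ per pair)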
If the product's price on the market is higher than the AVC of that product, we can say that the manufacturer successfully covers all variable costs and part of fixed costs. It is a sign that you can continue production at the planned pace. If the opposite scenario occurs, consider temporarily suspending planned production until the market situation improves.
A curve showing the growth or decrease of the average variable costs of a production plant can be used to represent them graphically.
The x-axis shows the number of goods produced, and the y-axis shows the average variable cost.
The most common result is a U-shaped cost curve. At the beginning of production, the curve falls, but as the end of the process nears, it starts to rise. The reason is simple. At the beginning of the production process, the costs increase because it is necessary to activate the entire production factory to get the product. As the end of production approaches, specific functions disappear, and costs decrease.
U-shaped average variable cost curve
Finally, it is good to know that AVC is directly correlated with marginal cost. Marginal cost represents the change in total cost that occurs with an additional unit of product. You can use the relative position of the marginal cost curve and the AVC curve to predict how the average variable cost will change.
If the marginal cost curve is below the average variable cost curve, the average variable cost should decrease; if it is above, the average variable cost will increase.
When the average variable cost is at its minimum, the marginal cost equals the average variable cost.
Methane potentials of wastewater generated from hydrothermal liquefaction of rice straw: focusing on the wastewater characteristics and microbial community compositions
Huihui Chen1,
Cheng Zhang1,
Yue Rao1,
Yuhang Jing1,
Gang Luo1 &
Shicheng Zhang1
Hydrothermal liquefaction (HTL) has been well studied for the bio-oil production from biomass. However, a large amount of wastewater with high organic content is also produced during the HTL process. Therefore, the present study investigated the methane potentials of hydrothermal liquefaction wastewater (HTLWW) obtained from HTL of rice straw at different temperatures (170–320 °C) and residence times (0.5–4 h). The characteristics (e.g., total organic content, organic species, molecular size distribution, etc.) of the HTLWW were studied, and at the same time, microbial community compositions involved in AD of HTLWW were analyzed.
The highest methane yield of 314 mL CH4/g COD was obtained from the sample 200 °C–0.5 h (HTL temperature at 200 °C for 0.5 h), while the lowest methane yield of 217 mL CH4/g COD was obtained from the sample 320 °C–0.5 h. These results were consistent with the higher amounts of hard biodegradable organics (furans, phenols, etc.) and lower amounts of easily biodegradable organics (sugars and volatile fatty acids) present in sample 320 °C–0.5 h compared to sample 200 °C–0.5 h. Size distribution analysis showed that sample 320 °C–0.5 h contained more organics with molecular size less than 1 kDa (79.5%) compared to sample 200 °C–0.5 h (66.2%). Further studies showed that hard biodegradable organics were present in the organics with molecular size higher than 1 kDa for sample 200 °C–0.5 h. In contrast, those organics were present in both the organics with molecular size higher and less than 1 kDa for sample 320 °C–0.5 h. Microbial community analysis showed that different microbial community compositions were established during the AD with different HTLWW samples due to the different organic compositions. For instance, Petrimonas, which could degrade sugars, had higher abundance in the AD of sample 200 °C–0.5 h (20%) compared to sample 320 °C–0.5 h (7%), which was consistent with the higher content of sugars in sample 200 °C–0.5 h. The genus Syntrophorhabdus could degrade phenols, and its enrichment in the AD of sample 320 °C–0.5 h might be related to the highest content of phenols in that HTLWW.
HTL parameters like temperature and residence time affected the biodegradability of HTLWW obtained from HTL of rice straw. More hard biodegradable organics were produced with the increase of HTL temperature. The microbial community compositions during the AD were also affected by the different organic compositions in HTLWW samples.
Due to the challenge of energy security, the demand for bioenergy (derived from biomass) has been growing very fast in recent decades. Biomass can be converted into fuels (e.g., bio-oil, methanol, ethanol, biodiesel, etc.) and valuable chemicals (e.g., xylose, phenols, etc.) by various physicochemical and biological methods [1, 2]. Hydrothermal liquefaction (HTL) is an option to generate renewable bio-oil from biomass. Since water works as the solvent in the HTL process, a large amount of wastewater with high concentrations of both organics and nutrients is produced [3, 4]. Previous studies mainly focused on the characterization and potential utilization of the bio-oil [5, 6], and little attention has been paid to the utilization of hydrothermal liquefaction wastewater (HTLWW), even though a significant fraction (20–50%) of the organic components in the biomass could enter the HTLWW [7, 8].
Inappropriate disposal of HTLWW would result in environmental pollution considering its high organic content. AD is a proven technique and is widely used in the treatment of organic wastes/wastewater [9]. A previous study investigated the methane potential of HTLWW obtained from HTL of algae [4], and it was reported that around 44–61% of the COD was removed and converted to biogas by AD. However, the HTLWW in the above study was not well characterized. Our recent publication showed that the 27-day methane yield of HTLWW obtained from HTL of straw can be enhanced by organic solvent extraction, and the HTLWW conversion efficiency was 53% [10]. A biohythane process [11] achieved energy and carbon recoveries of 79.0 and 67.7%, respectively, from HTL of cornstalk, but it was limited by low hydrogen fermentation performance. The long lag phase and even complete inhibition of methane production during AD of HTLWW obtained from HTL of swine manure were also reported [12]. The low conversion efficiencies were attributed to recalcitrant organics formed during the HTL, which were, however, not well characterized.
Lignocellulosic biomass, which is different from algae in both chemical composition and structure, is abundant in the world [13]. Its HTL under various conditions (temperatures and residence times) for either bio-oil or carbon production has been studied previously [14, 15]. Lignocellulosic biomass is mainly composed of cellulose, hemicellulose, and lignin [16]. It has been reported that cellulose and hemicellulose could be fully hydrolyzed within the range of 150–270 °C [17, 18], forming sugar compounds and furan derivatives including HMF and furfural. Lignin undergoes a two-phase reaction over a very broad temperature spectrum (>200 °C) [19]: a very fast reaction in which lignin fragments are released by breaking the bonds between lignin and sugar polymers (mainly hemicelluloses) into soluble fragments, and a slower reaction in which the fragments react with one another (and with sugar and/or sugar degradation products such as furfural [18]) to produce phenolic compounds [20]. However, the organics in the HTLWW were not well characterized. Especially, the changes of organic compositions in the HTLWW with the changes of HTL conditions remain to be elucidated, and they are closely related to the anaerobic biodegradability of HTLWW. Understanding the organics in the HTLWW would facilitate the selection of HTL conditions to avoid the formation of recalcitrant organics. Different technologies have been applied to characterize the organics in wastewater originating from different sources (landfill leachate, sludge hydrolysate, straw hydrolysate, etc.) [10, 21, 22]. For example, GC–MS was used to identify the main classes of molecular compositions, GC-FID was used to quantify the C2–C6 fatty acids, 3D-EEM fluorescence spectroscopy was used to identify fluorescent organics, and size distribution analysis was used to understand the physical characteristics of organics. The combination of the above-mentioned technologies can provide detailed information on the organics in wastewater from different aspects, but it has not yet been applied to HTLWW. In addition, the microorganisms which could degrade the organic compositions in HTLWW would be enriched in the AD. Therefore, the characterization of microorganisms involved in the AD of HTLWW obtained from various HTL conditions would improve our understanding of AD from a microbiological aspect.
Rice straw is a major source of lignocellulosic biomass. Annually, around 731 million tons of rice straw is produced by Asia alone [23]. Therefore, the present study investigated the methane potentials of HTLWW from rice straw obtained at different temperatures (170, 200, 230, 260, 290, and 320 °C). Besides, sample 200 °C had the highest sugar content (hemicellulose and cellulose hydrolysate), which was easy to be biodegraded. Therefore, we studied different residence times at 200 °C (0.5, 1, 2 and 4 h). The combination of UV, 3D-EEM, GC-FID, GC–MS, and size distribution analysis were used to characterize the HTLWW in order to reveal the correlation between organics and methane potentials. Microbial communities established in the AD with different HTLWW samples were also investigated by high-throughput sequencing of 16S rRNA genes.
Characteristics of HTLWW samples
COD and TOC were determined as they were two parameters that reflect the total organic content in wastewater. As shown in Table 1 and Additional file 1: Figure S1, the COD and TOC concentrations of HTLWW varied from 11.35 to 29.02 and 3.92 to 10.27 g/L, respectively, when the HTL temperatures varied from 170 to 320 °C. There were two peaks for COD and TOC values at 200 and 290 °C in the tested temperature range when the residence time was constant (0.5 h). The pH of all the HTLWW samples varied from 5.56 to 3.68, and it was due to the formation of organic acids, which was shown in the following part.
Table 1 pH, COD, TOC, and UV245 values of HTLWW samples under different HTL conditions
Methane potentials of HTLWW samples
Figure 1a shows the time courses of cumulative methane production from HTLWW obtained from different temperatures with the residence time 0.5 h. The rapid increase of methane production was observed after 12 days of acclimatization, and maximum methane production was achieved after around 21 days of digestion. The methane yields (Fig. 1b) at the end of the experiments were 302, 314, 258, 248, 251, and 217 mL CH4/g COD (calculated at STP conditions) for samples 170 °C–0.5 h, 200 °C–0.5 h, 230 °C–0.5 h, 260 °C–0.5 h, 290 °C–0.5 h, and 320 °C–0.5 h, respectively. ANOVA analysis showed that there was significant difference in methane yields for all the HTLWW samples (p < 0.05). The time courses of cumulative methane production from HTLWW obtained from HTL at 200 °C with varying residence times are shown in Fig. 2. The increase of residence time from 0.5 to 1, 2 and 4 h resulted in the decrease of methane yield from 315 mL CH4/g COD to around 250 mL CH4/g COD.
Methane yields of HTLWW obtained under different HTL temperatures: a time courses of methane production and b methane yields. The methane production from inoculum was subtracted for the calculation of the methane yields of different HTLWW samples
Methane yields of HTLWW obtained under different HTL residence times. The methane production from inoculum was subtracted for the calculation of the methane yields of different HTLWW samples
UV spectra analysis of HTLWW samples
Figure 3 shows the UV spectra of HTLWW samples obtained at various HTL conditions. All samples contained a single prominent peak at approximately 280 nm, shouldered at 320 and 250 nm. The peak values of absorbance at the range of 320–250 nm increased first when HTL temperature rose from 170 to 230 °C and then it decreased when the temperature was further increased to 320 °C. The absorbance of samples 170 °C–0.5 h, 290 °C–0.5 h, and 320 °C–0.5 h were similar, while sample 230 °C–0.5 h had the biggest absorbance. As Fig. 3b shows, the peak values of absorbance at the range of 320–250 nm was also affected by residence time, and the highest value was found for sample 200 °C–1 h at 280 nm.
UV–Vis spectra of the HTLWW: a spectra at different liquefaction temperatures; b spectra at different residence times when temperature was 200 °C
3D-EEM analysis of HTLWW samples
Samples 200 °C–0.5 h, 260 °C–0.5 h, 320 °C–0.5 h, and 200 °C–4 h were chosen for the analysis. Figure 4 shows fluorescent components and their relative concentrations of samples at 200, 260, and 320 °C. Fluorescent components were detected in all samples and increased when the temperature of HTL increased from 200 to 320 °C. Region of Ex/Em = 250–275/300–350 referred to accessible and easily biodegradable compounds such as fatty acids [24]. Fatty acids such as acetic acid (Ex/Em wavelength is 260/305) were verified to be produced even at lower temperature [25]. However, region Ex/Em = 280–325/380–425 nm was correlated to the hard biodegradable organics such as phenols like compounds [24, 25], which might be a notable reason for the low methane yield of sample 320 °C–4 h. The EEM spectra of samples 260 °C–0.5 h (Fig. 4b) and 200 °C–4 h (Fig. 4d) were similar which might indicate that the increase of either residence time or temperature of HTL could produce similar fluorescent compounds.
3D-EEM fluorescence spectrums of different HTLWW samples and (I) represents easily biodegradable compounds; (II) hard biodegradable organics such as phenols. a Sample 200 °C–0.5 h; b sample 260 °C–0.5 h; c sample 320 °C–0.5 h; d sample 200 °C–4 h
GC–MS analysis of HTLWW samples
Table 2 shows the organic species in HTLWW samples identified by GC–MS. The organics were classified into acetic acid, alcohols, furfurals, ketones, and phenols. There were almost no volatile organic substances detected by GC–MS except little amounts of furans and acetic acid when HTL temperature was 170 °C. Rice straw was converted to alcohols (pentanol, butanol, and hexanol) and furans (furfural, methylfurfural, and 5-hydroxymethylfurfural) firstly when the temperatures were in the range of 170–230 °C, and more ketones (butanone, cyclopentenone, and hexanedione) were produced when the HTL temperature was increased from 170 to 230 °C.
Table 2 Relative abundances of organic species identified by GC–MS in the HTLWW samples based on peak areas
Quantitative identification of typical compounds of HTLWW samples
Table 3 summarizes the concentrations of typical organic fatty acids and total sugars in COD values in different HTLWW samples. Increased fatty acid production was observed with the increase of HTL temperature or residence time, which was also consistent with the decreased pH values as shown in Table 1. The dominant organic compound was acetic acid with the highest concentration 140 mg/g COD value for sample 320 °C–0.5 h. The content of sugars in sample 200 °C–0.5 h was 454 mg/g COD, while it sharply decreased to 175 mg/g COD for sample 230 °C–0.5 h and 22 mg/g COD for sample 320 °C–0.5 h.
Table 3 Summary of typical organics in COD values of different HTLWW samples
Size distribution analysis of HTLWW samples
Organics with larger molecular weight need to be hydrolyzed before acidification and methanation, which is a relatively slow step [26] and therefore size distribution analysis of HTLWW might provide valuable information on the distribution of organics with different molecular sizes and their degradability. Size distribution of organics was for the first time used to characterize the HTLWW samples. Two samples (200 °C–0.5 h and 320 °C–0.5 h) were chosen since they had the highest difference in methane yields. Figure 5 and Table 4 show the COD values and their removal rates of samples 200 °C–0.5 h and 320 °C–0.5 h by membrane ultrafiltration with 100, 10, and 1 kDa membranes. It was obvious that organics with molecular size less than 1 kDa were dominant in the two HTLWW samples. Table 4 also shows that the organics with molecular size higher than 1 kDa were mainly distributed between 1 and 10 kDa. The methane yields of samples 200 °C–0.5 h and 320 °C–0.5 h after 1 kDa filtration were further evaluated in order to understand how the different molecular size of organics affected the methane yields of HTLWW, and the results are shown in Fig. 6. The methane yield of sample 200 °C–0.5 h ≤1 kDa was 345 mL CH4/g COD, which was close to the theoretical value (350 mL CH4/g COD), indicating this part of organics was easily biodegradable. Based on the methane yields of sample 200 °C–0.5 h and sample 200 °C–0.5 h ≤1 kDa, the methane yield of organics with molecular size higher than 1 kDa in sample 200 °C–0.5 h was calculated to be 253 mL CH4/g COD. For sample 320 °C–0.5 h, the methane yield of sample 320 °C–0.5 h ≤1 kDa was 239 mL CH4/g COD, and it was relative higher than that of sample 320 °C–0.5 h. However, the value was still much lower than the theoretical methane yield.
Percentages of molecular weight distributions of samples 200 °C–0.5 h 320 °C–0.5 h (in COD values)
Table 4 COD values and removal rates before and after membrane ultrafiltration with 100, 10, and 1 kDa
Methane yields of samples 200 °C–0.5 h and 320 °C–0.5 h before and after 1 KDa ultrafiltration
Microbial community compositions
As previously mentioned, the HTLWW contains various organics, and some of the organics were reported to be difficult to be biodegraded. The degradation of the organics to acetate and H2 by bacteria is crucial for methane production by methanogens. Therefore, it is necessary to understand the bacteria involved in the AD of HTLWW. As shown in Fig. 7a, all the samples were dominated by Proteobacteria, Firmicutes, and Bacteroidetes, which were commonly found in various biogas reactors [27,28,29]. The percentage of Proteobacteria in the sample 200 °C–4 h was 63%, which was obviously higher than that in the other samples (generally lower than 40%). It was further found that more than 70% of sequences in Proteobacteria were assigned to the genus Alcaligenes in sample 200 °C–4 h as shown in Fig. 7b. The abundance of genus Petrimonas, belonging to phylum Bacteroidetes, was 20% in sample 200 °C–0.5 h, which was much higher than that (<7%) in the other samples. The genus Syntrophorhabdus (phylum Proteobacteria), capable of degrading phenols to acetate in obligate syntrophic associations with hydrogenotrophic methanogens [30], was enriched only in the sample 320 °C–0.5 h. Another genus Sedimentibacter (phylum Firmicutes), which could also convert phenols, was found to be present in all the samples (2–5.5%) except control sample [31].
Classification of the sequences belonging to bacteria in different samples. a Phylum-level classification and b genus-level classification
The archaeal community was also analyzed as shown in Fig. 8. The order Methanosarcinales was dominant in most of the samples. It is known that the order Methanosarcinales mainly contains genus Methanosaeta and Methanosarcina [32]; however, only genus Methanosarcina was found in all the samples. The hydrogenotrophic order Methanomicrobiales was also found to be dominant in all the samples except control, and the highest abundance (52%) was observed in sample 260 °C–0.5 h, indicating hydrogenotrophic methanogenesis as the predominant pathway.
Classification of the sequences belonging to archaea in different samples. a Order-level classification and b genus-level classification
The energy recovery as methane from HTLWW obtained under different HTL conditions were calculated and shown Fig. 9. The recovered energy as methane from HTLWW through AD were in the range of 122.59 to 309.72 MJ per 100 kg of dry rice straw when HTL temperatures were increased from 170 to 320 °C and residence times were increased from 0.5 to 4 h at 200 °C, which equals to the energy recovery between 11.15 and 28.17% from rice straw.
Energy recoveries of methane in the HTLWW acquired under different conditions
The non-monotonic changes of COD and TOC values in Table 1 and Additional file 1: Figure S1 could result from the hydrolysis (decomposition) and repolymerization of different components in rice straw. A previous study showed that the TOC concentrations of HTLWW obtained from HTL of Pontianak tropical peat increased with the increase of temperature from 150 to 250 °C, but decreased in the range of 250–270 °C, possibly due to repolymerization [17]. Repolymerization reactions occurred along with this process and the products transferred into the solid phase [33]. The decrease of COD and TOC at 200 °C with increasing residence time might also be related to repolymerization. The percentages of organics transferred into HTLWW were also calculated and are shown in Table 1; they were in the range of 12–31%, which was lower than that (35–40%) for HTL of algae [34]. This could be mainly due to differences in feedstock characteristics, such as the higher protein and carbohydrate contents of the algae utilized for HTL.
In a previous study, it was reported that around 44–61% of the COD in HTLWW obtained from HTL of algae was converted to methane [4]. In addition, a methane yield of 180 mL CH4/g COD was obtained from the HTLWW obtained by HTL of sewage sludge [35]. The methane yields obtained in the present study were between 217 mL CH4/g COD and 314 mL CH4/g COD, which corresponded to 62–90% of the theoretical value (350 mL CH4/g COD). The conversion efficiencies of COD in the present study were higher than previously reported, which could be due to the different substrates for HTL; e.g., both algae and sewage sludge were rich in protein, while rice straw was mainly composed of carbohydrates. HTLWW obtained at 170 and 200 °C had relatively higher methane yields compared to the other conditions, indicating that a lower HTL temperature is crucial to achieving higher anaerobic biodegradability, as shown in Fig. 1. It was also demonstrated previously that the anaerobic biodegradability of HTLWW was lower when the temperature of hydrothermal processing increased, although the biomass for HTL was quite different [36]. The incomplete conversion indicates that recalcitrant organics were present in all HTLWW samples, especially the samples obtained at high HTL temperature (e.g., 320 °C). Therefore, characterization of the different HTLWW samples was conducted to understand how the HTL conditions affected the biodegradability of HTLWW.
All HTLWW samples had absorption peak in the range of 320–250 nm. It was reported that many degradation products of sugars (from hemicellulose and cellulose) and lignin had absorption peak in the range of 320–250 nm [37]. For the hydrolysates of sugars, the peaks at 278 and 284 nm corresponded to furfural and the mixture of furan and 5-hydroxymethylfurfural (HMF), respectively [37]. A rapid method for determination of furfural using UV spectroscopy also showed furfural compound had a peak absorbance at 276 nm [38]. Besides, UV254 provides an indication of the concentration of unsaturated bonds (double and triple) structures and aromatic ring matters, which was difficult to be biodegraded [39]. The UV254 to COD ratio values are listed in Table 1. It could be confirmed that the absorption at the range of 320–250 nm was mainly attributed to the sugar degradation products of unsaturated bonds instead of lignin. This is because sample 320 °C–0.5 h had higher phenols but the UV254/COD ratio of sample 320 °C–0.5 h was the lowest (lignin degradation products would be discussed in following GC–MS analysis section). It was also reported that sugars could be converted into furan derivatives, HMF, and furfural products at around 200 °C [17, 40]. Particularly, as Fig. 3b shows, sample 200 °C–1 h had a high absorption peak at 278 nm, and it also had relatively lower methane yield in Fig. 2. Nevertheless, the reason for lower methane yields of samples 320 °C–0.5 h and 200 °C–1 h was still not clear and other methods were needed to identify other organics to better understand the AD of HTLWW.
Comparison of the GC–MS and UV results further confirmed that sugar degradation products contributed to the absorption in the range of 320–250 nm. The highest amounts of phenols (phenol, ethyl-phenol, and methoxy-phenol) were detected in sample 320 °C–0.5 h. In particular, the highest content of furans was confirmed in the sample 200 °C–1 h, which was exactly consistent with the UV analysis. Besides, the content of furans in hydrolysates was used as a predictor for the toxicity of the hydrolysates from the HTL of hemicellulose [37], and this could be one of the reasons that the methane yield of sample 200 °C–1 h was much lower than that of sample 200 °C–0.5 h. For samples 320 °C–0.5 h and 200 °C–4 h, relatively higher concentrations of furans, ketones, and phenols might contribute to their lower methane yields compared to other samples [41]. Additional characterizations were still needed to directly explain the higher methane yields of samples such as 200 °C–0.5 h.
As Table 3 shows, 200 °C–0.5 h had the highest total sugars and short-chain organic acids content (524 mg/g COD) especially the sugars content (454 mg/g COD). It could be one reason for the highest methane yield obtained from sample 200 °C–0.5 h since sugars and short-chain organic fatty acids were relatively easier to be biodegraded by AD [42, 43]. It should be noted that sample 200 °C–2 h had higher amount of sugars and organic fatty acids (432 mg/g COD) compared to sample 170 °C–0.5 h (403 mg/g COD); however, it had significantly lower methane yield (259 mL CH4/g COD) compared to sample 170 °C–0.5 h (302 mL CH4/g COD), which indicated that other easily degradable organics might be present in 170 °C–0.5 h but not quantified in the present study. In addition, the relatively higher content of refractory compounds (furans, similar value to 200 °C–1 h Table 2) was also found in sample 200 °C–2 h as mentioned before, which could be one reason for the relatively lower methane yield.
A new vision from size fractionation of organics in HTLWW on AD was tested. Comparing with sample 200 °C–0.5 h, sample 320 °C–0.5 h contained more organics with molecular size less than 1 kDa (66.2% vs 79.5%). This indicated that there were more high molecular size compounds presented in sample 200 °C–0.5 h and higher temperature facilitated the decomposition of high molecular size compounds [44]. As Fig. 5 displays, lower methane yield of sample 200 °C–0.5 h compared to theoretical value was mainly attributed to the presence of refractory organics with molecular size higher than 1 kDa. It was obvious for sample 320 °C–0.5 h that the refractory organics were still present in the organics with molecular size lower than 1 kDa, which was different from the sample 200 °C–0.5 h. The result was consistent with GC–MS analysis since more phenols and ketones which were difficult for biodegradation were detected in sample 320 °C–0.5 h compared to sample 200 °C–0.5 h. The molecule weights of eighteen ketones and six phenols identified in Table 2 were around 200 Da whose molecular size was lower than 1 kDa and distributed in the phase with molecular size lower than 1 kDa. The methane yield of organics with molecular size higher than 1 kDa in sample 320 °C–0.5 h was also calculated and it was only 132 mL CH4/g COD, indicating the presence of refractory organics with molecular size higher than 1 kDa in sample 320 °C–0.5 h.
Although there were several studies focusing on the methane production from HTLWW originated from different biomasses [4, 35], the microbial community was seldom investigated. Only our previous study made such analysis and found Firmicutes was dominant in the microbial community during AD of HTLWW [10], which was consistent with the present study. It should be noted that Proteobacteria was also dominant in the samples of the present study. E.g, the relative abundance of Proteobacteria was 63% in the sample 200 °C–4 h and more than 70% of the Proteobacteria was assigned to the genus Alcaligenes as shown in Fig. 7b. Previous literature showed that the genus Alcaligenes is strictly aerobic and some strains are capable of anaerobic respiration in the presence of nitrate or nitrite [45, 46]. The relative abundance of Alcaligenes in the control sample was very low (2.2%), while it was dominant (between 25 and 50%) in samples 200 °C–0.5 h, 200 °C–4 h, and 260 °C–0.5 h. In addition, a previous study also detected the genus Alcaligenes in the AD of mixed-microalgal biomass [47]. Therefore, it seems that the genus Alcaligenes played an important role in the AD. However, the functional roles of genus Alcaligenes in AD needs further elucidation. It should be noted that the abundance of genus Alcaligenes in sample 320 °C–0.5 h was extremely low (around 1%). Tables 2 and 3 show that the concentrations of furfurals and sugars in samples 200 °C–0.5 h, 260 °C–0.5 h, and 200 °C–4 h were much higher than that in sample 320 °C–0.5 h, which might be related with the enrichment of the genus Alcaligenes. Besides, it has been reported the microorganisms in genus Petrimonas (belonging to phylum Bacteroidetes) could ferment sugars to produce acetate [48], and its abundance in sample 200 °C–0.5 h could be related with the higher sugar content (454 mg/g COD) compared with the other samples. The genus Syntrophorhabdus (phylum Proteobacteria) could be related with the much higher content of phenols in HTLWW at 320 °C–0.5 h compared to the other samples as shown in Table 2.
In addition, the abundance of genus Sedimentibacter decreased with the increase of HTL temperature. For example, the abundance of Sedimentibacter was 5.5% in sample 200 °C–0.5 h, while it decreased to 1.9% in sample 320 °C–0.5 h. The above results showed that some known bacteria relating with sugar and phenol degradations were enriched during the AD of HTLWW. Considering the complex organics present in HTLWW, more functional versatile bacteria should be enriched. There might be two reasons. On the one hand, the functional properties of the isolated bacteria might not be fully explored, such as the genus Alcaligenes as previously mentioned. On the other hand, there were relatively high percentages (between 14 and 50%) of unclassified sequences in genus level in the samples, which were unknown bacteria and remain to be investigated.
Methanosarcina was found in all the samples, while Methanosaeta was not for archaeal community as shown in Fig. 8. It could be due to the fact that Methanosaeta are strictly aceticlastic methanogens and are sensitive to the environmental conditions, while Methanosarcina can mediate both aceticlastic and hydrogenotrophic methanogenesis [49]. In all the HTLWW samples, various kinds of organics were present, some of which may be toxic to methanogens, and therefore only the metabolically versatile Methanosarcina could survive. Previous study also showed that hydrogenotrophic methanogens are more tolerant to the changes of environmental conditions compared to aceticlastic methanogens [50]. As mentioned before, hydrogenotrophic methanogens are important for the degradation of certain organics (phenols) by syntrophic associations with bacteria [30]. The above results clearly showed that Methanosarcina instead of Methanosaeta was dominant for the AD of HTLWW at various conditions.
The energy recoveries as methane from rice straw were between 11.15 and 28.17% (Fig. 9), and they were lower than that reported in a previous study where the energy recovery as high as 50% was obtained [36]. It could be due to the fact that relatively lower percentages of organic components were transferred to aqueous phase for HTL of rice straw in the present study (between 12.11 and 29.42% in COD value as shown in Table 1). However, in the above-mentioned study, food waste was used and the organics transferred into aqueous phase by HTL were in the range of 35 to 87%.
The combination of HTL and AD can achieve efficient utilization of biomass for biofuels production in the form of bio-oil and biogas. The present study clearly showed that the HTL conditions significantly affected the compositions of HTLWW, and thereby resulted in variation of microbial community compositions in AD and finally affected the methane potentials of HTLWW. The increase of HTL temperature (higher than 230 °C) and residence time (longer than 1 h) were not beneficial for biogas production from HTLWW, since hard degradable or even inhibitors like phenols, furan, and 5-hydroxymethylfurfural were produced. Therefore, very high HTL temperature and long residence time should be avoided for HTL of rice straw and similar lignocellulose biomass if HTLWW will be treated by AD for methane production. In addition, separation of the inhibitory compounds before AD might also be applied in order to increase the biogas production from HTLWW. For example, several methods are developed in order to separate furans, phenols, and ketones in our group [51, 52], which could not only get high value chemicals but also might improve the anaerobic degradability of HTLWW. Although rice straw was used in the present study, the results obtained might also be transferable to other HTLWW samples obtained from HTL of lignocellulose materials.
The present study showed that HTL temperature and residence time obviously affected the anaerobic biodegradability of HTLWW. The highest and lowest methane yields were 314 and 217 mL CH4/g COD obtained from sample 200 °C–0.5 h and sample 320 °C–0.5 h, respectively. The methane production potential was related to different contents of hard biodegradable organics (furans and phenols) and easily biodegradable organics (sugars and volatile fatty acids) in the samples. The study also showed that the organics with molecular size less than 1 kDa for sample 200 °C–0.5 h could be almost fully converted to methane (methane yield 345 mLCH4/g COD). However, organics with molecular size higher than 1 kDa for 200 °C–0.5 h contained recalcitrant organics and could not be fully converted as the methane yield showed 253 mLCH4/g COD. In addition, for sample 320 °C–0.5 h, the organics with molecular size both less (methane yield 249 mL CH4/g COD) and higher (methane yield 132 mLCH4/g COD) than 1 kDa had lower methane yields compared to those for sample 200 °C–0.5 h.
Further microbial community analysis showed that different microbial community compositions were established during the AD with different HTLWW samples, which was correlated with the different organic compositions. The higher Petrimonas abundance was consistent with the higher content of sugars in sample 200 °C–0.5 h and the enrichment of genus Syntrophorhabdus was related with the highest content of phenols in sample 320 °C–0.5 h. Besides, Methanosarcina instead of Methanosaeta was dominant for the AD of complicated HTLWW at various conditions.
Preparation of HTLWW
Rice straw with particle size ranging between 0.2 and 1.0 mm was used for HTL. The characteristics of the rice straw are shown in Additional file 1: Table S1. HTL of rice straw was performed in a 250-mL completely mixed stainless steel (316L) reactor (Yan Zheng experiment instrument Co., Ltd, Shanghai, China). The temperature of the reactor was controlled with a programmable temperature controller and a digital thermometer. In a typical run, 15 g of rice straw and 150 mL MilliQ water were loaded into the reactor to obtain a water/biomass ratio of 10:1 [53]. Then the reactor was sealed and heated to the desired temperature (170, 200, 230, 260, 290, and 320 °C) with a residence time of 0.5 h. In addition, different residence times (0.5, 1, 2, and 4 h) were also tested for the HTL at 200 °C. After reaching the desired residence time, the reactor was removed from the heater and quenched rapidly in a water bath to stop the reactions. The solid and liquid products were collected after depressurization, and the liquid products (HTLWW) were separated from the solid products by a vacuum Buchner funnel through 0.45-μm membranes. HTLWW samples were stored in a refrigerator at −20 °C for further utilization.
Methane potentials of HTLWW
Batch experiments were conducted to determine the methane potentials of HTLWW obtained from different HTL conditions. The experiments were conducted in 118-mL serum bottles with 60 mL working volume. 15 mL inoculum and 45 mL BA medium containing HTLWW and 5 g/L NaHCO3 were added to each bottle, and the initial COD value of all assays were controlled at 0.75 g/L by adding different amounts of HTLWW to the BA medium. pH values were adjusted to 7.5 by the addition of 2 M NaOH and 2 M HCl. All the bottles were flushed with N2 to remove oxygen before incubation, and then sealed with butyl rubber stoppers and aluminum screw caps. The bottles were placed in an incubator with temperature controlled at 37 °C. The inoculum used in the study was obtained from a lab-scale biogas reactor treating sewage sludge with TS 17.4 g/L, VS 12.9 g/L, and pH 7.2. All the experiments were conducted in triplicates. The bottles with only inoculum were used as control.
Size fractionation of HTLWW
Two HTLWW samples 200 °C–0.5 h and 320 °C–0.5 h were chosen based on their methane potentials (the maximum and minimum values) to determine the effects of size fractionation of organics in HTLWW on methane potentials. Size fractionation of organics in HTLWW was carried out in a dead-end batch ultrafiltration apparatus. The apparatus was consisted of 400-mL stirred ultrafiltration cell (model 8400, Amicon, Belford, MA), a nitrogen gas tank (pressure: 200 kPa), and membrane disks (Millipore, Billerica, MA) with diameter of 76 mm. The MW cutoffs of membrane disks used in the study were 1, 10, and 100 kDa, (PLAC, PLGC, and PLHK Millipore, Billerica, MA). HTLWW samples were filtered and collected, and then they were stored at 4 °C for further analysis and methane potential tests. Based on the results from size fractionation, the methane potentials of HTLWW after 1 kDa filtration were determined to understand how the size fractionation of organics affected the methane potential of different HTLWW samples.
Microbial community analysis
Four samples (200 °C–0.5 h, 260 °C–0.5 h, 320 °C–0.5 h and 200 °C–4 h) were collected when the cumulative methane production achieved maximum values in the batch experiments for methane potentials test. Sample 200 °C–0.5 h had the highest methane yield, while sample 320 °C–0.5 h had the lowest yield. Temperature 260 °C was the medium temperature of 200 and 320 °C and the methane yield value was medial. Besides, since the methane yields were not significantly changed at 200 °C when the HTL residence time was increased from 1 to 4 h, the sample 200 °C–4 h was chosen. Total genomic DNA was extracted from each sample using QIAamp DNA Stool Mini Kit (QIAGEN, 51504). The quantity and purity of the extracted DNA were checked by Nanodrop 2000 (Thermo Scientific, USA). PCR was then conducted with the universal primers 515 F (5′-GTGCCAGCMGCCGCGGTAA-3′) and 806 R (5-GGACTACHVGGGTWTCTAAT-3′) targeting both bacteria and archaea according to previous studies [49, 54]. The PCR products were purified, quantified, and used for barcoded libraries preparation and then sequenced on an Illumina Miseq platform according to the standard protocols. The obtained sequences were deposited in the NCBI sequence read archive database (Accession ID: SUB2302564). The low-quality sequences without exact matches to the forward and reverse primers, with length shorter than 100 bp, and containing any ambiguous base calls, were removed from the raw sequencing data by RDP tools (http://pyro.cme.msu.edu/). Chimeras were removed from the data using the Find Chimeras web tool (http://decipher.cee.wisc.edu/FindChimeras.html). The numbers of sequences after quality filtration from different samples are shown in Additional file 1: Table S2. The high-quality sequences were phylogenetically assigned to taxonomic classifications by RDP Classifier with a confidence threshold of 80%.
COD was measured according to APHA (APHA, 1995). Total organic carbon (TOC) was analyzed by a TOC analyzer (TOC-L CPH, Shimadzu, Japan). pH was measured using a pH meter (FE20, Mettler Toledo, Switzerland). UV spectra and UV254 were measured by a TU-1901 ultraviolet–visible spectrophotometer (General Analysis Instrument Co., Ltd., Beijing, China).
Three-dimensional excitation–emission matrix (EEM) fluorescence spectra were measured for excitation wavelengths of λex = 240–600 nm at 3-nm increments across an emission range of λem = 280–550 nm at 3-nm intervals with a fluorescence spectrophotometer (Horiba, Japan). The inner filter effect was minimized by diluting the samples until the absorbance at 254 nm was below 0.05 cm−1. Data were processed using the FL Toolbox v1.91 for MATLAB 7.0 and presented as an EEM. Before analysis, an EEM correction process was carried out, consisting of blank EEM subtraction, scatter line removal, application of excitation and emission correction factors, correction for inner filter effects, and normalization to Raman units.
GC–MS (Focus DSQ, Thermoelectron, America) was used to characterize the chemical compositions of HTLWW samples. Gas chromatography was performed on a 30-m HP-INNOWax quartz capillary column with 0.25 mm inner diameter (I.D.) and 0.25 μm film thickness, with an injection temperature of 250 °C. The column was initially held at 60 °C for 2 min, then heated to 250 °C and held for 10 min. Helium served as the carrier gas (1.0 mL/min). The NIST Mass Spectral Database (https://www.nist.gov/srd/nist-standard-reference-database-1a-v14) was used for compound identification.
The concentrations of volatile fatty acids were determined by GC (Shimadzu G2010) with a flame ionization detector, and the lactic acid concentrations were measured by high-performance liquid chromatography. Detailed information about the above analysis can be found in our previous studies [10].
Calculation of methane production efficiency
COD degradation efficiencies were calculated based on the measured methane volumes in the fermentation serum bottles and on the methane volume equivalent to the COD input (0.045 g COD). The percentage of methanogenesis was calculated using Eq. (1), where VSTP-CH4 is the cumulative methane production volume from external carbon sources; it is calculated by subtracting the average cumulative methane production of the triplicate control bottles from the average cumulative production of the bottles fed with HTLWW as the carbon source. V0-CH4 is the theoretical methane volume provided by the mass of organic matter in each trial. The maximum theoretical methanogenic potential was calculated as 350 mL of CH4 generated per gram of removed COD [55]. All methane yields are reported at standard temperature and pressure (STP) throughout the study.
$$\text{CH}_4\,(\%) = \frac{V_{\text{STP-CH}_4}}{V_{0\text{-CH}_4}} \times 100\%.$$
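As a minimal illustration of Eq. (1) (not part of the original methods; the function name and the example value below are ours, reusing only the 0.045 g COD input and the 350 mL CH4 per g COD factor stated above):

```python
def methanogenesis_percent(v_stp_ch4_ml, cod_fed_g=0.045, ch4_per_g_cod=350.0):
    """Percentage of the theoretical methane potential achieved (Eq. 1)."""
    v_theoretical_ml = cod_fed_g * ch4_per_g_cod  # V0-CH4 in mL at STP
    return v_stp_ch4_ml / v_theoretical_ml * 100.0

# e.g. a control-corrected cumulative production of 9.5 mL CH4
print(round(methanogenesis_percent(9.5), 1))  # ~60.3% of the theoretical maximum
```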
EEM:
excitation–emission matrix
HMF:
hydroxymethylfurfural
HTL:
hydrothermal liquefaction
HTLWW:
hydrothermal liquefaction wastewater
STP:
standard temperature and pressure
TOC:
total organic carbon
Goudriaan F, Peferoen D. Liquid fuels from biomass via a hydrothermal process. Chem Eng Sci. 1990;45(8):2729–34.
Knezevic D, van Swaaij W, Kersten S. Hydrothermal conversion of biomass. II. Conversion of wood, pyrolysis oil, and glucose in hot compressed water. Ind Eng Chem Res. 2009;49(1):104–12.
Panisko E, Wietsma T, Lemmon T, Albrecht K, Howe D. Characterization of the aqueous fractions from hydrotreatment and hydrothermal liquefaction of lignocellulosic feedstocks. Biomass Bioenergy. 2015;74:162–71.
Tommaso G, Chen WT, Li P, Schideman L, Zhang Y. Chemical characterization and anaerobic biodegradability of hydrothermal liquefaction aqueous products from mixed-culture wastewater algae. Bioresour Technol. 2015;178:139–46.
Garcia-Perez M, Chaala A, Pakdel H, Kretschmer D, Roy C. Characterization of bio-oils in chemical families. Biomass Bioenergy. 2007;31(4):222–42.
Xu C, Lad N. Production of heavy oils with high caloric values by direct liquefaction of woody biomass in sub/near-critical water. Energy Fuel. 2008;22(1):635–42.
Gai C, Zhang Y, Chen WT, Zhou Y, Schideman L, Zhang P, et al. Characterization of aqueous phase from the hydrothermal liquefaction of Chlorella pyrenoidosa. Bioresour Technol. 2015;184:328–35.
Villadsen SR, Dithmer L, Forsberg R, Becker J, Rudolf A, Iversen SB, et al. Development and application of chemical analysis methods for investigation of bio-oils and aqueous phase from hydrothermal liquefaction of biomass. Energy Fuel. 2012;26(11):6988–98.
Chan YJ, Chong MF, Law CL, Hassell D. A review on anaerobic–aerobic treatment of industrial and municipal wastewater. Chem Eng J. 2009;155(1):1–18.
Chen H, Wan J, Chen K, Gang L, Fan J, Clark J, et al. Biogas production from hydrothermal liquefaction wastewater (HTLWW): focusing on the microbial communities as revealed by high-throughput sequencing of full-length 16S rRNA genes. Water Res. 2016;106:98–107.
Si BC, Li JM, Zhu ZB, Zhang YH, Lu JW, Shen RX. Continuous production of biohythane from hydrothermal liquefied cornstalk biomass via two-stage high-rate anaerobic reactors. Biotechnol Biofuels. 2016;9(1):254.
Zhou Y, Schideman L, Zheng M, Martin-Ryals A, Li P, Tommaso G, et al. Anaerobic digestion of post-hydrothermal liquefaction wastewater for improved energy efficiency of hydrothermal bioenergy processes. Water Sci Technol. 2015;72(12):2139–47.
Monlau F, Sambusiti C, Barakat A, Quéméneur M, Trably E, Steyer JP, et al. Do furanic and phenolic compounds of lignocellulosic and algae biomass hydrolyzate inhibit anaerobic mixed cultures? A comprehensive review. Biotechnol Adv. 2014;32(5):934–51.
Jain A, Balasubramanian R, Srinivasan M. Hydrothermal conversion of biomass waste to activated carbon with high porosity: a review. Chem Eng J. 2016;283:789–805.
Singh R, Chaudhary K, Biswas B, Balagurumurthy B, Bhaskar T. Hydrothermal liquefaction of rice straw: effect of reaction environment. J Supercrit Fluid. 2015;104:70–5.
Younas R, Zhang S, Zhang L, Luo G, Chen K, Cao L, et al. Lactic acid production from rice straw in alkaline hydrothermal conditions in presence of NiO nanoplates. Catal Today. 2016;274:40–8.
Mursito AT, Hirajima T, Sasaki K, Kumagai S. The effect of hydrothermal dewatering of Pontianak tropical peat on organics in wastewater and gaseous products. Fuel. 2010;89(12):3934–42.
Garrote G, Dominguez H, Parajo J. Hydrothermal processing of lignocellulosic materials. Eur J Wood Wood Prod. 1999;57(3):191–202.
Tsubaki S, Iida H, Sakamoto M, Azuma JI. Microwave heating of tea residue yields polysaccharides, polyphenols, and plant biopolyester. J Agr Food Chem. 2008;56(23):11293–9.
Ruiz HA, Rodriguez-Jasso RM, Fernandes BD, Vicente AA, Teixeira JA. Hydrothermal processing, as an alternative for upgrading agriculture residues and marine biomass according to the biorefinery concept: a review. Renew Sust Energ Rev. 2013;21:35–51.
Campagna M, Çakmakcı M, Büşra Yaman F, Özkaya B. Molecular weight distribution of a full-scale landfill leachate treatment by membrane bioreactor and nanofiltration membrane. Waste Manag. 2013;33(4):866–70.
Eskicioglu C, Kennedy KJ, Droste RL. Characterization of soluble organic matter of waste activated sludge before and after thermal pretreatment. Water Res. 2006;40(20):3725–36.
Karimi K, Emtiazi G, Taherzadeh MJ. Ethanol production from dilute-acid pretreated rice straw by simultaneous saccharification and fermentation with Mucor indicus, Rhizopus oryzae, and Saccharomyces cerevisiae. Enzyme Microb Technol. 2006;40(1):138–44.
Sun J, Guo L, Li Q, Zhao Y, Gao M, She Z, et al. Three-dimensional fluorescence excitation–emission matrix (EEM) spectroscopy with regional integration analysis for assessing waste sludge hydrolysis at different pretreated temperatures. Environ Sci Poll Res. 2016;23(23):24061–7.
Heo J, Yoon Y, Kim D-H, Lee H, Lee D, Her N. A new fluorescence index with a fluorescence excitation-emission matrix for dissolved organic matter (DOM) characterization. Desalin Water Treat. 2015;57:1–13.
Mata-Alvarez J, Macé S, Llabrés P. Anaerobic digestion of organic solid wastes. An overview of research achievements and perspectives. Bioresour Technol. 2000;74(1):3–16.
Luo G, Fotidis IA, Angelidaki I. Comparative analysis of taxonomic, functional, and metabolic patterns of microbiomes from 14 full-scale biogas reactors by metagenomic sequencing and radioisotopic analysis. Biotechnol Biofuel. 2016;9:51.
Sundberg C, Al-Soud WA, Larsson M, Alm E, Yekta SS, Svensson BH, et al. 454 pyrosequencing analyses of bacterial and archaeal richness in 21 full-scale biogas digesters. FEMS Microbiol Ecol. 2013;85(3):612–26.
Treu L, Kougias PG, Campanaro S, Bassani I, Angelidaki I. Deeper insight into the structure of the anaerobic digestion microbial community; the biogas microbiome database is expanded with 157 new genomes. Bioresour Technol. 2016;216:260–6.
Qiu Y-L, Hanada S, Ohashi A, Harada H, Kamagata Y, Sekiguchi Y. Syntrophorhabdus aromaticivorans gen. nov., sp. nov., the first cultured anaerobe capable of degrading phenol to acetate in obligate syntrophic associations with a hydrogenotrophic methanogen. Appl Environ Microbiol. 2008;74(7):2051–8.
Levén L, Nyberg K, Schnürer A. Conversion of phenols during anaerobic digestion of organic solid waste—a review of important microorganisms and impact of temperature. J Environ Manag. 2012;95:S99–103.
Karakashev D, Batstone DJ, Angelidaki I. Influence of environmental conditions on methanogenic compositions in anaerobic biogas reactors. Appl Environ Microbiol. 2005;71(1):331–8.
Akhtar J, Amin NAS. A review on process conditions for optimum bio-oil yield in hydrothermal liquefaction of biomass. Renew Sust Energ Rev. 2011;15(3):1615–24.
Biller P, Ross AB, Skill S, Lea-Langton A, Balasundaram B, Hall C, et al. Nutrient recycling of aqueous phase for microalgae cultivation from the hydrothermal liquefaction process. Algal Res. 2012;1(1):70–6.
Wirth B, Reza T, Mumme J. Influence of digestion temperature and organic loading rate on the continuous anaerobic treatment of process liquor from hydrothermal carbonization of sewage sludge. Bioresour Technol. 2015;198:215–22.
Posmanik R, Labatut RA, Kim AH, Usack JG, Tester JW, Angenent LT. Coupling hydrothermal liquefaction and anaerobic digestion for energy valorization from model biomass feedstocks. Bioresour Technol. 2017;233:134–43.
Martinez A, Rodriguez ME, York SW, Preston JF, Ingram LO. Use of UV absorbance to monitor furans in dilute acid hydrolysates of biomass. Biotechnol Prog. 2000;16(4):637–41.
Zhang C, Chai XS, Luo XL, Fu SY, Zhan HY. Rapid method for determination of furfural and 5-hydroxymethyl furfural in pre-extraction stream of biomass using UV spectroscopy. Spectrosc Spectr Anal. 2010;30(1):247–50.
Stemann J, Putschew A, Ziegler F. Hydrothermal carbonization: process water characterization and effects of water recirculation. Bioresour Technol. 2013;143:139–46.
Chen W-T, Zhang Y, Zhang J, Yu G, Schideman LC, Zhang P, et al. Hydrothermal liquefaction of mixed-culture algal biomass from wastewater treatment system into bio-crude oil. Bioresour Technol. 2014;152:130–9.
Cheng J-R, Liu X-M, Chen Z-Y, Zhang Y-S, Zhang Y-H. A novel mesophilic anaerobic digestion system for biogas production and in situ methane enrichment from coconut shell pyroligneous. Appl Biochem Biotechnol. 2016;178(7):1303–14.
Demirel B, Scherer P. Production of methane from sugar beet silage without manure addition by a single-stage anaerobic digestion process. Biomass Bioenergy. 2008;32(3):203–9.
Hamidreza S, Waynej P, Syeda S. The effect of volatile fatty acids on the inactivation of Clostridium perfringens in anaerobic digestion. World J Microbiol Biotechnol. 2008;24(5):659–65.
Yamauchi K, Saka S. Characterization of oligosaccharides with MALDI-TOF/MS derived from Japanese beech cellulose as treated by hot-compressed water. Zero-Carbon Energy Kyoto 2010. New York: Springer; 2011. p. 95–99.
van Niel EWJ, Braber KJ, Robertson LA, Kuenen JG. Heterotrophic nitrification and aerobic denitrification in Alcaligenes faecalis strain TUD. Antonie Van Leeuwenhoek. 1992;62(3):231–7.
Lechner S, Conrad R. Detection in soil of aerobic hydrogen-oxidizing bacteria related to Alcaligenes eutrophus by PCR and hybridization assays targeting the gene of the membrane-bound (NiFe) hydrogenase. FEMS Microbiol Ecol. 1997;22(3):193–206.
Cho S, Park S, Seon J, Yu J, Lee T. Evaluation of thermal, ultrasonic and alkali pretreatments on mixed-microalgal biomass to enhance anaerobic methane production. Bioresour Technol. 2013;143:330–6.
Grabowski A, Tindall BJ, Bardin V, Blanchet D, Jeanthon C. Petrimonas sulfuriphila gen. nov., sp. nov., a mesophilic fermentative bacterium isolated from a biodegraded oil reservoir. Int J Syst Evol Microbiol. 2005;55(3):1113–21.
Lü F, Luo C, Shao L, He P. Biochar alleviates combined stress of ammonium and acids by firstly enriching Methanosaeta and then Methanosarcina. Water Res. 2016;90:34–43.
Demirel B, Scherer P. The roles of acetotrophic and hydrogenotrophic methanogens during anaerobic conversion of biomass to methane: a review. Rev Environ Sci Biol Technol. 2008;7(2):173–90.
Chandra R, Takeuchi H, Hasegawa T, Kumar R. Improving biodegradability and biogas production of wheat straw substrates using sodium hydroxide and hydrothermal pretreatments. Energy. 2012;43(1):273–82.
Yang X, Lyu H, Chen K, Zhu X, Zhang S, Chen J. Selective extraction of bio-oil from hydrothermal liquefaction of Salix psammophila by organic solvents with different polarities through multistep extraction separation. BioResources. 2014;9(3):5219–33.
Bates ST, Berg-Lyons D, Caporaso JG, Walters WA, Knight R, Fierer N. Examining the global distribution of dominant archaeal populations in soil. ISME J. 2011;5(5):908–17.
Angelidaki I, Sanders W. Assessment of the anaerobic biodegradability of macropollutants. Rev Environ Sci Biol Technol. 2004;3(2):117–29.
GL and SZ designed the experiment. HC, CZ, YR, and YJ carried out the experiment. HC performed the bioinformatics analysis and drafted the manuscript. All authors read and approved the final manuscript.
Availability of supporting data
Additional file 1: Tables S1 and S2, Figures S1 and S2, and the COD calculation methods are available in the supporting information.
All authors agreed to publication in Biotechnology for Biofuels.
The study was funded by the National Key Technology Support Program (2015BAD15B06), National Natural Science Foundation of China (51408133, 21577025), State Key Laboratory of Pollution Control and Resource Reuse Foundation (No. PCRRF16009), Yangfan project from Science and Technology Commission of Shanghai Municipality (14YF1400400), and Shanghai Talent Development Fund (201414).
Shanghai Key Laboratory of Atmospheric Particle Pollution and Prevention (LAP3), Department of Environmental Science and Engineering, Fudan University, Shanghai, 200433, China
Huihui Chen, Cheng Zhang, Yue Rao, Yuhang Jing, Gang Luo & Shicheng Zhang
Huihui Chen
Cheng Zhang
Yue Rao
Yuhang Jing
Gang Luo
Shicheng Zhang
Correspondence to Gang Luo or Shicheng Zhang.
Table S1. Characteristics of rice straw (the following line gives the standard error of each value); Table S2. Number of the high-quality sequences; Figure S1. COD, TOC and pH values of HTLWW samples under different HTL conditions; Figure S2. Comparison of methane production potentials of samples 200 °C–0.5 h, 260 °C–0.5 h and 200 °C–4 h.
Chen, H., Zhang, C., Rao, Y. et al. Methane potentials of wastewater generated from hydrothermal liquefaction of rice straw: focusing on the wastewater characteristics and microbial community compositions. Biotechnol Biofuels 10, 140 (2017). https://doi.org/10.1186/s13068-017-0830-0
Methane yields
Organic compositions
Flooding occurs when a river's discharge exceeds its channel's volume, causing the river to overflow onto the area surrounding the channel, known as the floodplain. The increase in discharge can be triggered by several events. The most common cause of flooding is prolonged rainfall. If it rains for a long time, the ground becomes saturated and the soil can no longer store water, leading to increased surface runoff. Rainwater will enter the river much faster than it would if the ground wasn't saturated, leading to higher discharge levels and floods.
As well as prolonged rainfall, brief periods of heavy rain can also lead to floods. If there's a sudden "burst" of heavy rain, the rainwater won't be able to infiltrate fast enough and the water will instead enter the river via surface runoff. This leads to a sudden and large increase in the river's discharge which can result in a flash flood.
Although many floods are triggered directly by precipitation just a few hours after it falls, some floods can be triggered by precipitation that fell many months ago. Precipitation that falls as snow can remain as snow on the ground until it melts. This mightn't be until the end of winter, so potentially several months. When the snow does melt, large volumes of meltwater will enter the river, increasing its discharge and triggering floods. These floods are often annual, occurring every year when snow melts in the spring. In Bangladesh, for example, melting snow in the Himalayas triggers annual floods in the summer.
Flash floods can also be triggered by slightly more catastrophic events. Erupting volcanoes can trigger very large flash floods called jökulhlaups when glaciers are partially or even fully melted by an erupting volcano or some other form of geothermal activity. The meltwater can enter rivers and greatly increase the river's discharge leading to a flood. The eruption of Eyjafjallajökull1 in 2010 triggered jökulhlaups as the volcano had been capped by a glacier that melted when it erupted2.
Factors Affecting Flood Frequency
Physical Factors
The size and shape of a river's drainage basin dictates how much precipitation the river can receive and how quickly it will arrive (the lag time). A large drainage basin means that the river's catchment area is large, so it will collect a lot of water, increasing discharge. If the basin is circular in shape, the precipitation will enter the river at roughly the same time because all points in the basin are roughly equidistant from the river channel. This will produce a high peak discharge and can lead to flash floods.
The permeability of the soil and rock in a drainage basin is a big factor in flooding. If the basin's soil is impermeable, maybe because it has been saturated by previous rainfall or has been baked by prolonged heating, then any precipitation that falls won't infiltrate and will instead run straight into the river, increasing the river's discharge and triggering floods. Similarly, if the rocks in the area are non-porous or impermeable (such as granite or clay) then water won't be able to infiltrate into the rocks and will, again, run straight off into the river increasing its discharge.
The vegetation cover in a basin will affect flooding. If a basin has very dense vegetation cover, the vegetation will intercept precipitation and store it, reducing the volume of water entering a river. Conversely, if a basin is sparsely vegetated then there will be no interception and so more water will enter a river. Vegetation helps bind soil too. With no vegetation, the soil is highly susceptible to mass wasting which can cause large volumes of soil to enter a river and reduce the river's capacity.
The relief and steepness of the basin affects how quickly water enters a river and so how likely a river is to flood. If the river's valley has steep sides, water will quickly enter a river increasing the river's discharge.
The number of tributaries flowing into a river affects the likelihood of floods. If a river has a lot of tributaries, the river's discharge will be much higher because lots of water will be entering it from its tributaries. After heavy precipitation, the discharge will rise even more and floods are likely, especially at confluences (where a tributary meets the river) as this is where discharge is highest.
If a river's drainage basin or floodplain has been heavily urbanised, a river becomes much more prone to flooding. Urbanisation (generally) involves the laying down of tarmac and concrete, impermeable substances that will increase surface runoff into the river and therefore increase the river's discharge.
Urbanisation often involves deforestation. This (obviously) reduces vegetation cover, reducing infiltration and increasing surface runoff into a river.
To stop roads and streets from flooding, humans will often build storm drains that collect rainwater and channel it into a river or stream. Stupid/cheap humans will often send this water straight to the local river or stream so, although the roads and streets won't be flooded by rainwater, the entire town will be, as the rainwater enters the river much faster than it would without the storm drains.
Climate change is a physical factor that could, potentially, be a human factor. Changes in the climate mean that certain areas are going to experience more frequent and more intense storms that can lead to large floods. Whether this is a human factor is debatable as, while climate change is definitely happening, whether it's the result of human activity is still uncertain. We're probably not helping but keep in mind that the planet's climate would be changing regardless of humanity's existence because at this moment in time, we're still in the tail end of an ice age.
The Effects of Flooding
Flooding can have numerous social, economic and environmental effects that can vary depending on the demographics of a population and the economic development of an area.
Social Effects
The biggest, most obvious effect is death. Floods, especially flash floods, will kill people. Flood water can travel surprisingly quickly and weighs3 a lot, so people can easily get swept away by floods. Large chunks of debris and objects like cars can easily get picked up by floodwater and can easily kill a person should they get hit by the debris. In a LEDC, you're generally going to get many more deaths than you would in a MEDC. In a MEDC, people and governments are better prepared for floods. Rescue services can be dispatched to a flood quickly in a MEDC whereas in a LEDC, rescue teams mightn't arrive until several hours after the flood started.
During a flood, sewage pipes are often broken and raw sewage leaks into the floodwater. This has two effects. First, it contaminates not just floodwater but drinking water too which leads to a spread of waterborne diseases such as cholera especially in LEDCs where emergency drinking water mightn't be available. Second, the sewage gets into people's homes which is just horrible, disgusting and incredibly difficult to clean.
In LEDCs, famines can follow floods which can lead to even more deaths. Floods will commonly inundate farmland because farmland normally develops on floodplains. If the floodwater is polluted by sewage, it will contaminate the farmland and make any food grown on it dangerous to eat. Furthermore, cattle are often killed by floods which can lead to people starving because they either don't have a source of food or don't have a source of income to buy food with.
Economic Effects
The big economic effect of a flood is property damage. Water can cause a lot of damage to property and when it picks up large chunks of debris such as cars, it can act like a wrecking ball, taking out chunks of buildings when cars crash into them. Very large and powerful floods can even dislodge buildings from their foundations and move them. In a MEDC, property damage is often extensive as people have lots of expensive possessions. This isn't the case in LEDCs but that's only because people don't have a lot to lose in the first place. This means that the overall cost of a flood is generally substantially higher in a MEDC than in a LEDC.
Floods can cause extensive damage to infrastructure such as power lines, roads, water pipes etc. Bridges frequently collapse during a flood as they aren't designed to withstand the high discharge of the river. The Northside Bridge in Workington, Cumbria collapsed when there were large floods in 2009. Repairing bridges and other types of infrastructure is very costly. Not only this, it can lead to a decline in the local economy as businesses are unable to operate without power or road connections. Unemployment can even increase if businesses are unable to fully recover from a flood. The economic impact of infrastructure damage and unemployment is larger in MEDCs since these countries have modern and expensive infrastructure in place. In LEDCs, this infrastructure is lacking, so there isn't much economic damage. In fact, in a LEDC, floods can lead to positive economic effects in the long term. An influx of funding to a less developed area from charities and NGOs after a flood can result in new infrastructure being constructed that is substantially better than the previously existing infrastructure. This, in turn, creates new economic opportunities in an area by, for example, creating new trade routes.
Another economic benefit comes from when a river floods and deposits sediment across the floodplain. This improves the fertility of the floodplain and can improve agricultural yield in an area (assuming the floodwater wasn't polluted).
Environmental Effects

Floodwater that is contaminated with sewage will pollute rivers and land when it drains back into the river. Similarly, if the river floods onto farmland, the water can be polluted by pesticides and other chemicals sprayed onto the farmland that, when drained back into the river, can pollute it and kill off wildlife that inhabits the river. If the floodwater isn't polluted though, flooding can create wetlands that can help introduce new habitats for many species of animals.
The Recurrence Interval
The recurrence interval is a way of measuring the frequency of a flood of a specific size occurring. The accuracy of the recurrence interval is dependent on the amount of historical data available about previous floods. The recurrence interval tells you how many years you'd expect to have between a flood of a certain size. In general, a large flood has a large recurrence interval so it isn't very frequent. A small flood will have a smaller and more frequent recurrence interval. The recurrence interval can be calculated using the following formula4:
\[ T = \frac{n+1}{m} \]
\(T\) is the recurrence interval, \(n\) is the number of years on record and \(m\) is the ranking of the flood relative to all the other floods on record for a specific river.
For example, a flood with a discharge of 200 m³/s occurred at some point in the river's past. Out of a data set spanning 199 years5, this flood was the 2nd largest in terms of discharge. Using the formula, this means that a flood of this size is expected to occur once every 100 years (\(\frac{199+1}{2}\)). We'd describe it as a 1 in 100 year flood.
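As a quick illustration (not part of the original notes; the function name is ours and the numbers simply reuse the worked example above):

```python
def recurrence_interval(years_on_record: int, rank: int) -> float:
    """Return the recurrence interval T = (n + 1) / m for a flood of a given rank."""
    return (years_on_record + 1) / rank

# The worked example above: the 2nd-largest flood in 199 years of records.
print(recurrence_interval(199, 2))  # 100.0 -> a "1 in 100 year" flood
```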
Of course, this doesn't mean that a flood of this size won't occur for another 100 years. It just means that statistically it won't. More than anything, the recurrence interval is a nice way of describing a complicated topic using simple Maths. At the end of the day though, the recurrence interval raises more questions than it answers because after a big flood all the newspapers say that it was a 1 in 300 year flood and everybody feels safe that a big flood won't affect them for another 300 years and then the river goes and floods again the next year and everybody's all "But it wasn't supposed to flood for another 300 years" and then somebody explains that that's just an average and really the river can flood at any time. People don't like that sort of unpredictability though, so they'll just blame the scientists/statisticians and say they got it wrong.
The point I'm trying to get at is that the recurrence interval isn't the most useful thing in the world because it's just an average and averages can (and often do) have anomalies. There's nothing to stop a river having a 1 in 1000 year flood and then doing the same the next day. The other problem with the recurrence interval is that it's based on past data. Rivers are dynamic beasts, they change and when they change, so does how they flood. While the recurrence interval's accuracy increases as you add more data, the reliability decreases because the river's flooding patterns will have changed over time.
Flooding in a MEDC - 2004 Boscastle Floods
On the 16th of August, 2004, the small town of Boscastle was almost completely destroyed in the space of just two hours when a 1 in 400 year flash flood occurred at around 3pm and inundated most of the town.
Boscastle is located in southwest England less than a kilometre from the coastline. The River Valency flows directly through the town and meets the River Jordan at a confluence in the town. The river valleys are steep and composed of shale, an impermeable rock.
The approximate location of Boscastle in the UK.
Map modified from this map by Nilfanion/Wikipedia. Licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.
The topography of the Boscastle area and the courses of the River Valency and the River Jordan. Map from Google.
A combination of the remnants of Hurricane Alex and convectional rainfall triggered by the intense heating of the ground by the summer heat led to heavy rainfall over the south of England. In two hours over 60mm of rain fell in Boscastle and by the end of the flood, nearly 200mm had fallen.
The rain fell over Bodmin Moor, an area composed of impermeable shale that has sparse vegetation cover. This increased surface runoff due to a lack of both infiltration and interception, increasing the volume of water entering the River Valency and its tributaries.
The River Valency's valley has a high relief and steep sides so surface runoff was increased.
The ground had been saturated by previous rainfall6, reducing infiltration and increasing surface runoff.
The River's floodplain had been urbanised reducing infiltration and increasing surface runoff.
The River Valency had a confluence with the River Jordan directly in Boscastle leading to huge volumes of water flowing through the town when both river's discharge increased.
As the River Valency flowed through Boscastle, its channel had been walled off, preventing it from adjusting to the increased discharge and limiting its efficiency, ultimately causing it to flood.
Insurance claims came in between £15,000 and £30,000 per property. There was the potential for insurance prices to rise as a result of the flood but this was unlikely because of the rarity of a flood of this scale.
Businesses were badly damaged with property destroyed or filled with silt, sewage and debris.
One of the main sources of income in Boscastle was tourism. After the events of the flood, people were less willing to travel to Boscastle because of the (low) risk of another flood occurring.
76 cars were washed out to sea because of the low lying nature of the town's car park.
The "lower bridge" was badly damaged when debris blocked it and water pooled behind it. When the temporary dam finally gave, a 3m wave was released that caused even more damage to buildings downstream of the bridge.
Nobody was killed thanks to the rescue efforts but some people suffered from broken bones & hypothermia.
Houses were flooded and silt, sewage & debris was deposited inside of them.
Water & power supplies were taken out during the flood.
Raw sewage was washed out to sea and into the River Valency.
75 cars & 6 buildings were washed out to sea.
Short Term Responses
A flood warning was issued for parts of Cornwall at 3:30pm but Boscastle wasn't specifically warned.
Just a few hours after the river flooded, a search and rescue operation was underway which lasted until 2:30am the next day. Over 150 people were saved by search and rescue operations.
11 days after the flood, people were allowed to return to their homes to salvage their belongings. Living in their homes wasn't really viable at this stage.
Prince Charles visited the town 2 days after the flood and donated a large sum of money to the town.
A few days after the flood, geologists flew over the area to assess the risk of landslides triggered by the heavy rain.
North Cornwall Council provided accommodation for 11 tourists who were unable to return home after the flood. The night after the disaster, 100 people used the Camelford leisure centre as a refuge.
Long Term Responses
Reconstruction didn't begin until 2005 as the council waited on a report from hydrologists to determine the recurrence interval of the flood.
By early 2005, power and water were back up.
The council invested money into improving Boscastle's flood defences and the Environmental Agency also built new flood defences. In 2006, the channel was widened and deepened to increase its capacity and ability to handle sudden increases in discharge. In 2007, these defences were put to the test and a much smaller, more controlled flood occurred.
The remains of the "lower bridge" that triggered a 3m wave were demolished and replaced with a larger bridge that would be more difficult to block with debris.
The effects of the flood caused people in Boscastle to take their environmental footprint far more seriously because they were led to believe that climate change exacerbated by human activity was responsible for the "freak weather" that caused the river to flood. When buildings were reconstructed, they were done so in an environmentally friendly manner with insulation, double glazing etc. being installed in the new buildings. The town won 5 awards for its eco-friendliness.
Devastation in Boscastle (BBC)
Praise and donation from their duke (The Guardian)
Villagers clean up after flash floods (The Guardian)
Boscastle: safe to rebuild (The Guardian)
Boscastle reborn as a green beacon (The Guardian)
Displaced Boscastle residents return home (The Guardian)
'No deaths in Boscastle flood' (The Guardian)
Flooding in a LEDC - 2008 Bihar Floods
During the months of August and September in 2008 there was a long period of heavy rainfall along the foothills of the Himalayas. The rainfall ultimately led to widespread floods in Bihar, an Indian state, that made millions homeless and claimed the lives of hundreds of people.
Bihar is located in the north east of India, to the south of the Himalayas bordering Nepal. It is one of the poorest states in India where the caste class system, despite its lack of legality, is still in widespread use. In Bihar, 42% of the population lives below the poverty line. Through Bihar flows the Kosi River, a tributary to the Ganges.
The location of Bihar in India. Its borders are highlighted in blue.
Map modified from this map by PlaneMad/Wikipedia. Licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.
Monsoon season in India occurs in the late summer months and is caused by the seasonal reversal of winds in the area. The monsoon brought heavy rainfall to the foothills of the Himalayas and dramatically increased the discharge of the Kosi. The river was forced to flow into a channel that it hadn't flowed through in over 100 years. In doing so, it flooded a large portion of Bihar.
The flooding was worsened by the deforestation that had taken place in the Kosi River's drainage basin. The lack of vegetation cover meant that rain water wasn't intercepted and easily flowed into the river via surface runoff.
The Kosi River had flood defences that were supposed to handle approximately 30,000 m³ of water per second7, but they were breached at a fraction of that capacity, suggesting that the defences were defective or poorly maintained.
A map of Bihar showing the major rivers flowing through the state. The Kosi River is highlighted in bright blue.
Map modified from this map by NordNordWest/Wikipedia. Licensed under the Creative Commons Attribution-Share Alike 3.0 German license.
The flood killed 500-2000 people. Figures vary because government figures don't include missing people while figures from aid agencies do.
3 million people were made homeless and sent to refuge camps.
A shortage of clean drinking water and the warm climate meant that waterborne and vector-borne diseases were easily spread.
Shortages of food and emergency grain quickly developed. 70% of Bihar's population are farmers, many of whom are subsistence farmers. When 100,000 ha of land was flooded, most of their food was destroyed.
There were allegations of discrimination when it came to evacuating people. It was claimed that the rich were evacuated first and given the most emergency food while some members of the "untouchables" (the lowest class in the caste system and Hindu society) weren't evacuated at all.
70% of Bihar's population are farmers and 100,000 Ha of land was inundated by floodwater, destroying wheat and rice that could be traded.
Roads were destroyed, costing money to repair and disrupting trade.
The disaster ended up costing nearly $542 million according to some reports.
The flood will have washed sewage and pollutants into the Kosi River, polluting it and killing off some wildlife.
The Indian government created a £115m relief package to be sent to Bihar.
The government released 125,000 tonnes of emergency grain that was to be distributed in Bihar. Allegations that the grain wasn't evenly distributed arose though, with members of the lower class of Indian society being left with minimal amounts of grain while the upper classes received most of the grain.
400,000 people were evacuated to relief camps.
1,500 soldiers were sent to help rescue citizens and disperse aid. Helicopters were also provided but were limited in their effectiveness due to the continued heavy rain.
Aid agencies were dispatched to Bihar and were especially important in ensuring that the lower classes of society were given aid. Aid agencies provided rescue efforts, food, clean water supplies and shelter.
The World Bank gave significant funds to help Bihar recover and rebuild after the disaster. It also helped in creating plans to help improve the quality of living to standards above those seen before the disaster.
An "Owner Driven Housing Reconstruction" scheme was created, funded by the World Bank, to give money to homeowners to rebuild their homes using bamboo, bricks, corrugated iron & concrete. Each household was given $1,200 for the reconstruction with $50 going towards a toilet and $110 towards solar powered lighting. The scheme also gave $110 to households that did not own their own land so that they could go out and buy some.
New bridges and roads were constructed to a higher standard than those that had previously been constructed. When the plan to reconstruct infrastructure was laid out, it was expected that 90 bridges and 290km of road would be reconstructed, benefiting 2 million people.
New flood defences were constructed and people were educated on how to maintain them. It was estimated to cost $500 million to build new embankments, strengthen existing embankments, improve flood management and improve flood prediction technologies.
For a hugely detailed list of the responses laid out by the World Bank, have a look at this PDF file. It's long but very detailed.
India: Untouchables suffer 'relief discrimination' after flood (The Guardian)
India: Up to 2,000 feared dead in Bihar floods (The Guardian)
Heavy rain stalls Indian flood relief (The Guardian)
Indian government hits back at claims of inadequate supplies in flood zones (The Guardian)
Tensions rise over Indian flood relief (The Guardian)
Disease outbreak feared in wake of flood (The Guardian)
Indian monsoon floods leave a million homeless (The Guardian)
Indian Red Cross Report
Government of Bihar, World Bank & Global Facility for Disaster Reduction & Recovery Report
Proposed Emergency Recovery (The World Bank)
Interestingly, Eyjafjallajökull is the name of the glacier that capped the volcano and not the volcano itself.↩
This type of volcanic eruption is known as a subglacial eruption. ↩
For reference, 1ℓ of (pure) water weighs 1kg. ↩
This formula is for events with a magnitude (in this case discharge). If you're trying to measure the recurrence interval of an event that doesn't have a magnitude (such as the recurrence interval of a flood of no specific size) use the following formula: \[T=\frac{n}{m} \] \(T\) is, again, the recurrence interval, \(n\) is the number of years on record and \(m\) is the number of times this event has occurred. ↩
Weird number, I know, I'm just trying to produce a nice round answer here. ↩
I know it was the middle of summer and it was hot but this is Britain we're talking about. ↩
A rounded conversion from 1 × 10⁶ cubic feet per second. ↩
N dimensional
The dimensionality of an array specifies the number of indices that are required to uniquely specify one of its entries.
d3_array[1], which recall is shorthand for d3_array[1, :, :], selects both rows and both columns of sheet-1.

This dataset contains 6 grade-values. It is almost immediately clear that storing these in a 1-dimensional array is not ideal:
Working in n dimensions involves computation in higher-dimensional spaces: for example, calculating the n-dimensional Euclidean distance from group centroids for each sample and selecting the lowest 3 for each group in R.
Suppose you have an \(N\)-dimensional array, and only provide \(j\) indices for the array; NumPy will automatically insert \(N-j\) trailing slices for you. In the case that \(N=5\) and \(j=3\), d5_array[0, 0, 0] is treated as d5_array[0, 0, 0, :, :]
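A minimal sketch of this behaviour (the array d5_array and its shape are hypothetical; only the trailing-slice rule comes from the text above):

```python
import numpy as np

# A hypothetical 5-dimensional array of shape (2, 2, 2, 2, 2).
d5_array = np.zeros((2, 2, 2, 2, 2))

# Supplying 3 indices: NumPy fills in the remaining axes with trailing slices,
# so both expressions select the same 2x2 sub-array.
a = d5_array[0, 0, 0]
b = d5_array[0, 0, 0, :, :]
print(np.array_equal(a, b))  # True
```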
It provides tools for handling n-dimensional arrays (especially vectors and matrices). Note that a matrix is actually a 2-dimensional array.
Two-dimensional Arrays. Daniel Shiffman. An array keeps track of multiple pieces of information in linear order, a one-dimensional list. However, the data associated with certain systems..
While no data has been lost, accessing this data using a single index is less than convenient; we want to be able to specify both the student and the exam when accessing a grade - it is natural to ascribe two dimensions to this data. Let's construct a 2D array containing these grades:
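The actual grade values are not preserved in this text, so the sketch below invents them; only the 3-students-by-2-exams layout and the names Ashley (student-0) and Brad (student-1) come from the surrounding discussion.

```python
import numpy as np

# Hypothetical scores: 3 students along axis-0, 2 exams along axis-1.
# Row 0 = Ashley, row 1 = Brad, row 2 = a third student; values are invented.
grades = np.array([[93.5, 97.0],
                   [87.0, 81.0],
                   [75.5, 91.0]])
print(grades.shape)  # (3, 2)
```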
A matrix is a two-dimensional data structure where numbers are arranged into rows and columns. NumPy is a package for scientific computing which has support for a powerful N-dimensional array object.
Examples of n-dimensional vectors
NumPy provides an assortment of functions that allow us to manipulate the way that an array's data can be accessed. These permit us to reshape an array, change its dimensionality, and swap the positions of its axes:
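A small sketch of these operations (the array and values are arbitrary; reshape and transpose are the standard NumPy calls):

```python
import numpy as np

x = np.arange(6)          # array([0, 1, 2, 3, 4, 5])
y = x.reshape(2, 3)       # the same data viewed as a 2x3 array
z = y.transpose()         # swap the two axes -> shape (3, 2)
print(y.shape, z.shape)   # (2, 3) (3, 2)
```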
N-dimensional space — In mathematics, an n-dimensional space is a topological space whose dimension is n (where n is a fixed natural number). The archetypical example is n-dimensional Euclidean space..
size is the number of dimensions (N) of the N-dimensional space that gensim Word2Vec maps the words onto. Bigger size values require more training data, but can lead to better (more accurate) models.
For simplicity, he is only a two-dimensional object as he is confined to lie in the plane of your computer screen. We've fixed his position and the direction his body is pointing. Nonetheless, just to specify the angles of his arms, legs, and head requires a vector in nine-dimensional space. (We'd need even more dimensions if we also wanted to specify his position or his cholesterol level.)
dimensional definition: The definition of dimensional is something a shape that can be measured. (adjective) An example of dimensional is a physical object with length, width and depth, like a table... For the dimension of a quantity, see Dimensional analysis. For display on a two-dimensional surface such as a screen, the 3d cube and 4d tesseract require projection Table of Contents. The N-dimensional array (ndarray). Constructing arrays. A segment of memory is inherently 1-dimensional, and there are many different schemes for arranging the items of an.. The following simple examples reveal how quickly the required number of dimensions increases as we try to describe physical objects. Dimensional Transceiver is a block used to teleport energy, fluids, items, and rails over distance and dimensions. It has an internal buffer of 25 000 MJ and can send power at a rate of 100 MJ/tick. Tesseract. View All FTB Twitter Feed. 10 Apr - More news
Thus, if we want to access Brad's (item-1 along axis-0) score for Exam 1 (item-0 along axis-1) we simply specify:
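The snippet that originally followed is not preserved; here is a sketch that continues from the made-up grades array above:

```python
grades[1, 0]   # Brad (item-1 along axis-0), Exam 1 (item-0 along axis-1)
# -> 87.0 with the invented scores above
```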
One-dimensional: length only. Two-dimensional: length and width only. Three-dimensional: length, width and depth. As an illustration, a straight line is one-dimensional, a plane is two-dimensional, and a cube is three-dimensional.
A 0-D array such as np.array(15.2) is not equivalent to a length-1 1D-array: np.array([15.2]). According to our definition of dimensionality, zero numbers are required to index into a 0-D array, as it is unnecessary to provide an identifier for a standalone number. Thus you cannot index into a 0-D array.
What happens if we only supply one index to our array? It may be surprising that grades[0] does not throw an error since we are specifying only one index to access data from a 2-dimensional array. Instead, NumPy will return all of the exam scores for student-0 (Ashley):
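Continuing from the sketched grades array above (the output value is therefore also hypothetical):

```python
grades[0]      # all of Ashley's exam scores; equivalent to grades[0, :]
# -> array([93.5, 97. ]) with the invented scores above
```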
World Web Math: Vector Calculus: N Dimensional Geometry
We can also use slices to access subsequences of our data. Suppose we want the scores of all the students for Exam 2. We can slice from 0 through 3 along axis-0 (refer to the indexing diagram in the previous section) to include all the students, and specify index 1 on axis-1 to select Exam 2:
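A sketch of that selection, again using the invented grades array from above:

```python
grades[0:3, 1]   # Exam-2 scores for all three students
grades[:, 1]     # the same thing, using an "empty" slice along axis-0
```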
The first thing you should know about n dimensional space is that it is absolutely nothing to worry about. You aren't going to be asked to visualize 17 dimensional space or anything freaky like that, because nobody can visualize anything higher than 3 dimensional space (many of us aren't even very good at that). And you can throw out any ideas you might have about the fourth dimension being time or love or what have you, because all it is is an extra number hanging around. To be specific:

Definition: A space is just a set where the elements are called points.

Definition: N dimensional space (or \(\mathbb{R}^n\) for short) is just the space where the points are n-tuplets of real numbers.

You will notice that we are in a sense working backwards: for three dimensional space, we construct cartesian coordinates to get a 3-tuple for every point; now, we forget about the middleman and simply define the point to be the 3-tuple. The origin, in any dimension, is just the n-tuplet \((0, 0, \dots, 0)\).

What about vectors, you ask? Before, we defined them to be a magnitude and a direction and then showed how there is a one-to-one correspondence between them and points; now we again invert the order of things and define vectors to be points. Since points are tuples and we know how to add, subtract and scalar multiply tuples, we know how to do all those things for vectors, too. It is also easy to extend the dot product to vectors in higher dimensions, via the algebraic definition. Just let

\[ (x_1, x_2, \dots, x_n) \cdot (y_1, y_2, \dots, y_n) = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n. \]

Having a dot product around allows us to define the length of a vector, \(|v| = \sqrt{v \cdot v}\), and the angle between two vectors:

\[ \theta = \cos^{-1}\!\left(\frac{v \cdot w}{|v|\,|w|}\right). \]

There is no cross product in dimensions greater than 3. For one thing, in dimensions 4 or higher, there are infinitely many unit vectors orthogonal to any given two.

Lines and planes can also be found in higher dimensions, but there isn't often much reason to use them. Before, lines in two or three dimensions could be expressed as \(l(t) = OP + t\,v\) for \(P\) a point and \(v\) a vector on the line; the same formula works for higher dimensions. The familiar property of having exactly one line run through two distinct points is maintained. Planes in three dimensions live a double life: they are both two dimensional flat surfaces and (n-1) dimensional flat things. If you want a two dimensional flat surface in n dimensions, you are best off using the parametric formula \(S(r, s) = OP + r\,v + s\,w\). If, on the other hand, you want an (n-1) dimensional flat thing, you are better off using the implicit formula

\[ A_1 x_1 + A_2 x_2 + \cdots + A_n x_n = B. \]

These are usually called hyperplanes and are useful for approximating the graphs of functions. For example, functions from \(\mathbb{R}\) to \(\mathbb{R}\) have graphs in \(\mathbb{R}^2\) which we approximate using hyperplanes (i.e., lines).

Exercises:

What is the distance between the points (1,2,3,4) and (-5,2,0,12)?

What is the angle between the vectors (1,0,2,0) and (-3,1,4,-5)?

Project the point (1,2,3,4,5) onto the vector (7,7,7,1,9).

Give a formula for the line going through (1,2,3,4,5,6) and (0,0,0,17,0,0).

Find the plane containing the three points (1,2,3,4), (2,7,2,7), and (-7,-5,-1,0).

The unit hypercube in four dimensions is described by the equations \(0 \le x_i \le 1\) for \(i = 1, \dots, 4\) …
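A quick numerical sketch (not part of the original page) of how the first two exercises could be checked; the helper names are ours:

```python
import numpy as np

def distance(x, y):
    """Euclidean distance between two points in R^n."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sqrt(np.sum((x - y) ** 2))

def angle(v, w):
    """Angle (in radians) between two n-dimensional vectors."""
    v, w = np.asarray(v, float), np.asarray(w, float)
    return np.arccos(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))

print(distance((1, 2, 3, 4), (-5, 2, 0, 12)))   # exercise 1
print(angle((1, 0, 2, 0), (-3, 1, 4, -5)))      # exercise 2
```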
Given this indexing scheme, only one integer is needed to specify a unique entry in the array. Similarly only one slice is needed to uniquely specify a subsequence of entries in the array. For this reason, we say that this is a 1-dimensional array. In general, the dimensionality of an array specifies the number of indices that are required to uniquely specify one of its entries.
IEEE Xplore. Delivering full text access to the world's highest quality technical literature in engineering and technology In this post, we will learn how to apply data augmentation strategies to n-Dimensional images get the most of our limited number of examples
What does N-Dimensional mean? Physics Forum
Geometrically, the determinant represents the volume of the $n$-dimensional parallelepiped spanned by the column or row vectors of the matrix. The vector product and the scalar product are the two ways of multiplying vectors.

Before proceeding further down the path of high-dimensional arrays, let's briefly consider a very simple dataset where the desire to access the data along multiple dimensions is manifestly desirable. Consider the following table from a gradebook:
Dimensional analysis definition, a method for comparing the dimensions of the physical quantities occurring in a problem to find relationships between the quantities without having to solve the problem..
N-Vector (N-dimensional). How to calculate the norm of a vector? In a space of dimension \(n\), a vector \(\vec{v}\) with components \((v_1, \dots, v_n)\) has norm \(\lVert\vec{v}\rVert = \sqrt{v_1^2 + \cdots + v_n^2}\).
A 1-dimensional DP problem: suppose we want to find the Fibonacci number at a particular index of the sequence, so fib(n) = the nth element in the Fibonacci sequence. How should we go about solving it? (See the sketch below.)
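A minimal bottom-up sketch of the 1-dimensional DP table idea (the snippet is ours, not quoted from the truncated source above):

```python
def fib(n: int) -> int:
    """Bottom-up 1-dimensional DP table for the Fibonacci sequence."""
    if n < 2:
        return n
    dp = [0] * (n + 1)   # dp[i] will hold fib(i)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib(10))  # 55
```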
Euclidean Distance in n-Dimensional Space. One dimension: in an example where there is only one variable describing each cell (or case), there is only a 1-dimensional space.
Length is nothing but height: W and H make the dimensions of a 2-dimensional rectangle, while D (depth) adds the 3rd dimension.

You can think of axis-0 denoting which of the 2x2 "sheets" to select from. Then axis-1 specifies the row along the sheets, and axis-2 the column within the row:
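A sketch of such a 3-dimensional array (the values are invented; the axis interpretation follows the description above and matches the d3_array[1] example earlier):

```python
import numpy as np

# A hypothetical 2x2x2 array: axis-0 picks the "sheet", axis-1 the row, axis-2 the column.
d3_array = np.array([[[0, 1],
                      [2, 3]],
                     [[4, 5],
                      [6, 7]]])
print(d3_array[1])        # sheet-1, i.e. d3_array[1, :, :]
print(d3_array[0, 1, 0])  # sheet-0, row-1, column-0 -> 2
```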
Terminology: an \(n\)-dimensional hypersphere (or \(n\)-sphere) of radius \(R\) is the set of points whose distance from the center is exactly \(R\) (I'll place the center at the origin for simplicity).
Because grades has three entries along axis-0 and two entries along axis-1, it has a "shape" of (3, 2).
From the PyTorch torch.nn.init documentation: "Fills the 2-dimensional input Tensor with the identity matrix. Preserves the identity of the inputs"; tensor: an n-dimensional torch.Tensor; sparsity: the fraction of elements in each column to be set to zero.
If one considers non-rigid objects, then the number of dimensions required to specify the configuration of the object can be quite high. As a simple example, consider this little guy below.
How many dimensions does it take to specify the position of a rigid object (for example, an airplane) in space? Naively, one would think that it would take three dimensions: one each to specify the $x$-coordinate, $y$-coordinate, and the $z$-coordinate of the object. It is correct that one needs only three dimensions to specify, for example, the center of the object. However, even if the center of a rigid object is specified, the object could also rotate. In fact, it can rotate in three different directions, such as the roll, pitch, and yaw of an airplane. Consequently, we need six dimensions to specify the position of a rigid object: three to specify the location of the center of the object, and three to specify the direction in which the object is pointing.

As with Python sequences, you can specify an "empty" slice to include all possible entries along an axis, by default: grades[:, 1] is equivalent to grades[0:3, 1], in this instance. More generally, withholding either the 'start' or 'stop' value in a slice will result in the use of the smallest or largest valid index, respectively:
Accessing Data Along Multiple Dimensions in an Array
In fact, it appears that N-dimensional matrices aren't really addressed by any of the standard C++ containers. So, to get my 4-dimensional matrix, I have to make an array of pointers pointing to an array of pointers, and so on.
N-dimensional variables (issue #198): additional dimensions can be mimicked using a dict. For example, a 3D k-by-m-by-n variable can be created as a dictionary x = {} whose entry x[i] is a 2-dimensional m-by-n variable for each i.
Although accessing data along varying dimensions is ultimately all a matter of judicious bookkeeping (you could access all of this data from a 1-dimensional array, after all), NumPy's ability to provide users with an interface for accessing data along dimensions is incredibly useful. It affords us the ability to impose intuitive, abstract structure on our data.
The first row of numbers gives the position of the indices 0…3 in the array; the second row gives the corresponding negative indices. The slice from $i$ to $j$ returns an array containing all numbers between the edges labeled $i$ and $j$, respectively:
The basic format is:

    magic number
    size in dimension 0
    size in dimension 1
    size in dimension 2
    ...
    size in dimension N
    data
Accessing multi-dimensional arrays: multidimensional array elements are accessed using the row index and the column index. Let's see an example of a two-dimensional array with dimensions [3][3]; below is the code to access its elements.
Multidimensional arrays can be described as arrays of arrays. For example, a bidimensional array can be imagined as a two-dimensional table made of elements, all of them of the same uniform data type.

C. n-dimensional Gaussian and multivariate normal densities. Now, let us examine the $n$-dimensional case. The $n$-variate normal density with mean $\mu = (\mu_1, \mu_2, \ldots, \mu_n)$ and covariance matrix $\Sigma$ is
$$p(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(\mathbf{x}-\mu)^{\top}\Sigma^{-1}(\mathbf{x}-\mu)\right).$$
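A small sketch evaluating that density with NumPy; the mean and covariance below are arbitrary illustrative values:

```python
import numpy as np

def mvn_density(x, mu, cov):
    """Evaluate the n-variate normal density at the point x."""
    n = len(mu)
    diff = x - mu
    norm_const = 1.0 / np.sqrt((2 * np.pi) ** n * np.linalg.det(cov))
    exponent = -0.5 * diff @ np.linalg.inv(cov) @ diff
    return norm_const * np.exp(exponent)

mu = np.array([0.0, 1.0])
cov = np.array([[2.0, 0.3],
                [0.3, 1.0]])
print(mvn_density(np.array([0.5, 0.5]), mu, cov))
```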
Is it harder to think about Fred's head than to think about a 60,000,000-dimensional vector? But it is much preferable to put electrodes in Fred's head than in the head of children suffering from severe epilepsy, as still needs to be done in some cases. Wouldn't it be great if we could develop scientific means to avoid the latter?

But in very high-dimensional spaces, Euclidean distances tend to become inflated (this is an instance of the so-called curse of dimensionality). Running a dimensionality reduction algorithm such as..

N-dimensional population structures. A population is a collection of organisms of the same species located within a prescribed area. This suggests that a population has the characteristics of..
I was trying to get a better intuition for the curse of dimensionality in machine learning, and needed to know the volume of a unit n-sphere -- so I..

The reshape function allows you to change the dimensionality and axis-layout of a given array. This adjusts the indexing interface used to access the array's underlying data, as was discussed earlier in this module. Let's take a shape-(6,) array and reshape it to a shape-(2, 3) array:
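A sketch of that reshape, using an arbitrary shape-(6,) array:

```python
import numpy as np

x = np.arange(6)       # array([0, 1, 2, 3, 4, 5]), shape (6,)
y = x.reshape(2, 3)    # reinterpret the same data as 2 rows of 3
print(y)
# [[0 1 2]
#  [3 4 5]]
print(y.shape)         # (2, 3)
```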
We run into high dimensional vectors even in fields like neuroscience. Let's say we stick 100 electrodes in the head of our friend Fred, the lab rat, to simultaneously record the activity of 100 of his neurons. Right away, you can see we'll need a 100-dimensional vector to describe Fred's neuronal activity at any point in time. It gets worse, though, if you think about recording Fred's neural activity over an extended period of time. Let's say we record from Fred's neurons while he's working for ten minutes (or 600 seconds). If we sample his neural activity 1000 times a second, that means we will take $1000 \times 600 =$ 600,000 samples during those ten minutes. Multiplying that by the 100 neurons, we see we need a 60,000,000-dimensional vector to represent Fred's neural activity during those ten minutes.

Benchmarkfcns is a personal effort to provide a public repository of sources and documents for well-known optimization benchmark functions.
Although NumPy does formally recognize the concept of dimensionality precisely in the way that it is discussed here, its documentation refers to an individual dimension of an array as an axis. Thus you will see "axes" (pronounced "aks-ēz") used in place of "dimensions"; however, they mean the same thing.

Dask arrays scale NumPy workflows, enabling multi-dimensional data analysis in earth science, satellite imagery, genomics, biomedical applications, and machine learning algorithms.

In two dimensions there is the formula that the area of the disk enclosed within a circle of radius $R$ is $\pi R^2$. The purpose of this material is to derive the formulas for the volumes of $n$-dimensional balls.
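For reference, the result of that derivation is $V_n(R) = \frac{\pi^{n/2}}{\Gamma(n/2+1)}\,R^n$ for the volume of an $n$-dimensional ball of radius $R$; a quick numerical check of the familiar low-dimensional cases:

```python
import math

def ball_volume(n: int, r: float = 1.0) -> float:
    """Volume of an n-dimensional ball of radius r."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n

print(ball_volume(2))   # pi * r^2       ~ 3.14159
print(ball_volume(3))   # (4/3) pi r^3   ~ 4.18879
```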
Nykamp DQ, "Examples of n-dimensional vectors." From Math Insight. http://mathinsight.org/n_dimensional_vector_examples

It takes a vector in nine-dimensional space to specify the angles of this stick figure's arms, legs, and head. For simplicity, he is only a two-dimensional object, as he is confined to lie in the plane of the screen.
The softmax function takes an $N$-dimensional vector of real numbers and transforms it into a vector of real numbers in the range (0, 1) which add up to 1. As the name suggests, the softmax function is a soft version of the max function.

Examples of n-dimensional vectors by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us.
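A minimal, numerically stable sketch of softmax in NumPy (subtracting the maximum is a standard trick that does not change the result):

```python
import numpy as np

def softmax(z):
    """Map an N-dimensional vector to positive values that sum to 1."""
    shifted = z - np.max(z)      # improves numerical stability
    exps = np.exp(shifted)
    return exps / np.sum(exps)

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs)          # approximately [0.659 0.242 0.099]
print(probs.sum())    # 1.0
```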
"-dimensional" definition: 1. having measurements in the stated directions; 2. having measurements.. Example: "a multi-dimensional problem." (Definition of -dimensional from the Cambridge Academic Content Dictionary.)
Stick figure position. It takes a vector in nine-dimensional space to specify the angles of this stick figure's arms, legs, and head. We denote the configuration vector specifying these angles by $\vec{\theta} = (\theta_1, \theta_2, \theta_3, \theta_4, \theta_5, \theta_6,\theta_7, \theta_8,\theta_9)$. You can drag the sliders with your mouse to change the components of the vector. The components $\theta_1$ and $\theta_2$ specify the angles of his left arm, $\theta_3$ and $\theta_4$ specify the angles of his right arm, $\theta_5$, $\theta_6$, $\theta_7$, and $\theta_8$ specify the angles of his left and right legs, and, finally, $\theta_9$ specifies the angle of his head.

We make use of n-dimensional hypervolumes to define ecosystem states and assess how much they shift after environmental changes have occurred.

Thus far, we have discussed some rules for accessing data in arrays, all of which fall into the category that is designated "basic indexing" by the NumPy documentation. We will discuss the details of basic indexing and of "advanced indexing", in full, in a later section. Note, however, that all of the indexing/slicing reviewed here produces a "view" of the original array. That is, no data is copied when you index into an array using integer indices and/or slices. Recall that slicing lists and tuples does produce copies of the data.

NumPy specifies the row-axis (students) of a 2D array as "axis-0" and the column-axis (exams) as axis-1. You must now provide two indices, one for each axis (dimension), to uniquely specify an element in this 2D array; the first number specifies an index along axis-0, the second specifies an index along axis-1. The zero-based indexing scheme that we reviewed earlier applies to each axis of the ND-array.
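A small demonstration of two-index access and of the "view" behaviour described above, reusing the illustrative grades array from earlier:

```python
import numpy as np

grades = np.array([[87.0, 92.0],
                   [79.5, 88.0],
                   [94.0, 77.5]])

print(grades[1, 0])        # axis-0 index 1 (student), axis-1 index 0 (exam) -> 79.5

first_exam = grades[:, 0]  # basic slicing returns a view, not a copy
first_exam[0] = 0.0        # modifying the view...
print(grades[0, 0])        # ...also changes the original array -> 0.0
```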
The normalizer scales each value by dividing it by its magnitude in $n$-dimensional space, for $n$ the number of features. Say your features were x, y and z Cartesian co-ordinates; your scaled..

Note that the value of using the negative index is that it will always provide you with the latest exam score: you need not check how many exams the students have taken.
Dimensional analysis is a method of reducing the number of variables required to describe a given physical situation by making use of the information implied by the units of the physical quantities..

At first, it may seem that going beyond three dimensions is an exercise in pointless mathematical abstraction. One might think that if we want to describe something in our physical world, certainly three dimensions (or possibly four dimensions if we want to think about time) will be sufficient. It turns out that this assumption is far from the truth. To describe even the simplest objects, we will typically need more than three dimensions. In fact, in many applications of mathematics, it is challenging to develop mathematical models that can realistically describe a physical system and yet keep the number of dimensions from becoming incredibly large.

This is because NumPy will automatically insert trailing slices for you if you don't provide as many indices as there are dimensions for your array: grades[0] was treated as grades[0, :]. Let's build up some intuition for arrays with a dimensionality higher than 2. The following code creates a 3-dimensional array:
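The referenced code block did not survive extraction; a plausible stand-in that creates a 3-dimensional array is:

```python
import numpy as np

# a shape-(2, 3, 4) array: 2 sheets, each with 3 rows and 4 columns
d3_array = np.arange(24).reshape(2, 3, 4)

print(d3_array.shape)   # (2, 3, 4)
print(d3_array[0])      # the first 3x4 sheet
```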
For example, the two-dimensional triangle and the three-dimensional tetrahedron can be seen as specific.. There are also other notions of dimension (fractal dimensions such as the Hausdorff dimension) in..

Because the size of an input array and the resulting reshaped array must agree, you can specify one of the dimension-sizes in the reshape function to be -1, and this will cue NumPy to compute that dimension's size for you. For example, if you are reshaping a shape-(36,) array into a shape-(3, 4, 3) array, the following are all valid: arr.reshape(3, 4, -1), arr.reshape(3, -1, 3), and arr.reshape(-1, 4, 3).

$x_{it}$ is a $K$-dimensional vector of explanatory variables, without a constant term. $\beta_0$, the intercept, is independent of $i$ and $t$. $\beta$, a $(K \times 1)$ vector, the slopes, is independent of $i$ and $t$. $\varepsilon_{it}$, the error, varies..
Indexing means referring to an element of the array. In the following examples, we use indexing in one-dimensional and two-dimensional arrays:
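The examples themselves were lost in extraction; a minimal reconstruction of what such indexing examples typically look like, with made-up values:

```python
import numpy as np

# indexing a one-dimensional array
a = np.array([10, 20, 30, 40])
print(a[0])      # first element  -> 10
print(a[-1])     # last element   -> 40

# indexing a two-dimensional array
b = np.array([[1, 2, 3],
              [4, 5, 6]])
print(b[1, 2])   # row 1, column 2  -> 6
print(b[0])      # entire first row -> [1 2 3]
```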
A square has two dimensions, a cube has three, and a tesseract has four. For some calculations, time may be added as a third dimension to two-dimensional (2D) space or a fourth dimension to three-dimensional (3D) space. Dimension definition: measure in one direction; specifically, one of three coordinates determining a position in space, or one of four coordinates determining a position in space and time.

You can also supply a "step" value to the slice: grades[::-1, :] will return the array of grades with the student-axis flipped (reverse-alphabetical order).
High-dimensional spaces (spaces with a dimensionality substantially greater than 3) have properties that are substantially different from normal common-sense intuitions of distance and volume..

Similar to Python's sequences, we use 0-based indices and slicing to access the content of an array. However, we must specify an index/slice for each dimension of an array.

Two-dimensional arrays can be passed as parameters to a function, and they are passed by reference. When declaring a two-dimensional array as a formal parameter, we can omit the size of the first dimension.
As indicated above, negative indices are valid too and are quite useful. If we want to access the scores of the latest exam for all of the students, we can specify grades[:, -1].

Keeping track of the meaning of an array's various dimensions can quickly become unwieldy when working with real datasets. xarray is a Python library that provides functionality comparable to NumPy, but allows users to provide explicit labels for an array's dimensions; that is, you can name each dimension. Using an xarray to select Brad's scores could look like grades.sel(student='Brad'), for instance. This is a valuable library to look into at your leisure.

The output of grades[:, :1] might look somewhat funny. Because the axis-1 slice only includes one column of numbers, the shape of the resulting array is (3, 1). 0 is thus the only valid (non-negative) index for axis-1, since there is only one column to specify in the array.
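Continuing with the illustrative grades array from earlier, the two behaviours just described look like this:

```python
import numpy as np

grades = np.array([[87.0, 92.0],
                   [79.5, 88.0],
                   [94.0, 77.5]])

print(grades[:, -1])   # latest exam score for every student -> [92.  88.  77.5]

col = grades[:, :1]    # the slice keeps the axis, so the result is 2-dimensional
print(col.shape)       # (3, 1)
```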
The dimensioned types are therefore closely modeled on dictionaries but support n-dimensional keys; they therefore allow you to express a mapping between a multi-variable key and other viewable..

HDF® supports n-dimensional datasets, and each element in the dataset may itself be a complex object.

Homogeneous coordinates are a way of representing N-dimensional coordinates with N+1 numbers. To make 2D homogeneous coordinates, we simply add an additional variable, w, to the existing coordinates.
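A tiny sketch of that convention: converting a 2D point to homogeneous coordinates and back (the scale factor w is chosen arbitrarily, since any non-zero w represents the same point):

```python
import numpy as np

p = np.array([3.0, 4.0])                    # ordinary 2D point (x, y)

w = 2.0
p_hom = np.array([p[0] * w, p[1] * w, w])   # homogeneous coordinates (x*w, y*w, w)

p_back = p_hom[:2] / p_hom[2]               # divide by w to recover (x, y)
print(p_hom, p_back)                        # [6. 8. 2.] [3. 4.]
```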
This definition of dimensionality is common far beyond NumPy; one must use three numbers to uniquely specify a point in physical space, which is why it is said that space consists of three dimensions.

There is also one compound of n-simplexes in n-dimensional space provided that n is one less than a power of two..

Calculator use: enter 2 sets of coordinates in the 3-dimensional Cartesian coordinate system, (X1, Y1, Z1) and (X2, Y2, Z2), to get the distance formula calculation for the 2 points.

n-dimensional Voronoi diagram: Qhull performs the computations; see www.qhull.org. $d$-dimensional Delaunay triangulation computations are part of CGAL, specifically its triangulation module.
N-dimensional is commonly abbreviated N-D.

From left to right: the square, the cube, and the tesseract. The square is bounded by 1-dimensional lines, the cube by 2-dimensional areas, and the tesseract by 3-dimensional volumes. A projection of the cube is given, since it is viewed on a two-dimensional screen.

Incremental n-dimensional convex hull algorithm; the angle between the center and each corner of a regular minimal polygon in n dimensions.

The libraries mentioned below are important basics for learning and implementing machine-learning projects: NumPy (numpy.org), the N-dimensional array for numerical computation.
ND4J: N-Dimensional Arrays for Java. ND4J and ND4S are scientific computing libraries for the JVM. They are meant to be used in production environments, which means routines are designed to run..

Dot product formula for $n$-dimensional space problems. In the case of the $n$-dimensional space problem, the dot product of the vectors $a = \{a_1; a_2; \ldots; a_n\}$ and $b = \{b_1; b_2; \ldots; b_n\}$ can be found by $a \cdot b = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n$.
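The same formula checked in NumPy, with two made-up 4-dimensional vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([4.0, 3.0, 2.0, 1.0])

manual = sum(ai * bi for ai, bi in zip(a, b))   # a1*b1 + a2*b2 + ... + an*bn
print(manual, np.dot(a, b))                     # both print 20.0
```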
In four dimensions, one can think of "stacks of sheets with rows and columns", where axis-0 selects the stack of sheets you are working with, axis-1 chooses the sheet, axis-2 chooses the row, and axis-3 chooses the column. Extrapolating to higher dimensions ("collections of stacks of sheets …") continues in the same tedious fashion.

For all straightforward applications of reshape, NumPy does not actually create a new copy of an array's data when performing a reshape operation. Instead, the original array and the reshaped array reference the same underlying data. The reshaped array simply provides a new index-interface for accessing said data, and is thus referred to as a "view" of the original array (more on these "views" in a later section).
| CommonCrawl
Windows 7 Ultimate PT PT
Get Script Recorder to automatically capture your computer screen so that you can record the screen in real time. It is designed to record screen activity in a high-quality video file that can be used as a demonstration for training purposes. Although it is designed for screen recording, the program also has screen-capturing capabilities for when you don't have a screen hooked up to your computer. Unlike some screen-recording software, it does not record and save to a file of fixed video size; instead it records on the fly at the same resolution that your screen is set to. In this article, you will learn how to set up and install the program, the basics of screen recording, how to use screen-recording software, and alternatives to screen recording for PC users.
What to record on screen
For users who are on Windows Vista and Windows 7
Which version of your screen do you want to record:
Screen recording.
Screen capture.
Screen recording during webcam video recording.
Webcam video recording.
Windows 7 64 bit version
Windows Vista Basic – 64 bit basic;
Windows Vista Ultimate – 64 bit ultimate;
Windows Vista Business – 64 bit business;
Windows Vista Home Premium – 64 bit home premium;
Windows Vista Enterprise – 64 bit enterprise.
2. Windows 7 64 bit version
Press Win + R and type "scrnsaveas" into the dialog box, then click on OK.
Windows 7 display record for screen;
Windows 7 screen capture
A double-blind study of a new active drug in the treatment of schizophrenic patients. Comparison with an established neuroleptic, chlorpromazine.
A double-blind, cross-over study of the effects of a new anti-psychotic drug, 3-hydroxy-2-piperidinepropylidene-1,2-dione (HD-7508), was carried out in schizophrenic inpatients (n = 34). It was compared with an established neuroleptic, chlorpromazine.
License: Freeware | Size: 133 kb | Compatible with Windows 7, Vista, and Me. Freeware: an open-source case report of a 61-year-old woman with meningococcal septic shock, severe renal failure, and thrombocytopenia.
1. Field of the Invention
This invention is in the area of medical devices and is more specifically directed to a brachial plexus nerve block device for blocking posterior interosseous nerve branches.
2. Description of the Prior Art
Posterior interosseous nerve block is commonly referred to as axillary plexus block and has been used to provide post-operative analgesia and as a post-operative mobilization procedure. An anterior interosseous nerve block, which extends from the lateral aspect of the elbow to the anterior aspect of the distal forearm, may be combined with an axillary plexus block to provide analgesia for the wrist and hand. A posterior interosseous nerve block may be employed to provide post-operative analgesia or to provide post-operative mobilization of the fingers and wrist. For a detailed discussion of the anatomy and physiology of brachial plexus and peripheral nerve blocks, reference is made to "The Brachial Plexus," ed. by E. Bronstein, M.D., and E. Finley, M.D., Williams and Williams, Stamford, Conn. (1981).
Brachial plexus blocks and peripheral nerve blocks have been accomplished by
The ISO file is not working on Win7. Tested on a PC with Win 7 Ultimate SP1 (x64), clean install in UEFI. Please help.
Q: Find the following limit: $\lim_{n\rightarrow\infty}(1+1/n)^{n^2}$

$$\lim_{n\rightarrow\infty}\left(1+\frac{1}{n}\right)^{n^2}$$

I have tried Wolfram Alpha and the result makes no sense to me: I know that $\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{n}=e$, but what happens when the exponent is $n^2$? I can't see where I made the error.

A: The limit diverges to $+\infty$. Write

$$\left(1+\frac{1}{n}\right)^{n^2}=\left(\left(1+\frac{1}{n}\right)^{n}\right)^{n},$$

and note that $\left(1+\frac{1}{n}\right)^{n}\ge 2$ for every $n\ge 1$ (by the binomial theorem). Hence

$$\left(1+\frac{1}{n}\right)^{n^2}\ge 2^{n}\longrightarrow\infty.$$

More precisely, $n^2\ln\left(1+\frac{1}{n}\right)=n-\frac{1}{2}+O\left(\frac{1}{n}\right)$, so the expression grows like $e^{\,n-1/2}$; in particular

$$\lim_{n \to \infty} \left(1+\frac{1}{n}\right)^{n^2}=+\infty.$$
| CommonCrawl
Sigma-algebra (Computer Science)
2010 Mathematics Subject Classification: Primary: 68P05 [MSN][ZBL]
$\Sigma$-Algebras are the semantical counterpart to the signatures, which are pure syntactical objects. In order to give the function symbols $f\in F$ of a signature $\Sigma=(S,F)$ a meaning, a (total) $\Sigma$-algebra provides an object with the same structure as $\Sigma$ but consisting of concrete elements and concrete functions operating on these elements. The elements and functions of a $\Sigma$-algebra correspond to the sorts and function symbols of the signature $\Sigma$.
1 $\Sigma$-Algebras
1.1 Definition of $\Sigma$-Algebras
1.2 Category of $\Sigma$-Algebras
2 $\Sigma$-Subalgebras
2.1 Definition of $\Sigma$-Subalgebras
2.2 Properties of $\Sigma$-Subalgebras
3 Initial and Terminal $\Sigma$-Algebras
3.1 Initial $\Sigma$-Algebras
3.2 Terminal $\Sigma$-Algebras
3.3 Example of Initial and Terminal $\Sigma$-Algebras
4 Special Topics
4.1 Free $\Sigma$-Algebras
4.2 $\Sigma$ Number Algebras
4.3 Reduct of $\Sigma$-Algebras
$\Sigma$-Algebras
Definition of $\Sigma$-Algebras
Formally, a $\Sigma$-algebra $A=((s^A)_{s\in S},(f^A)_{f\in F})$ consists of a family $(s^A)_{s\in S}$ of carrier sets $s^A$ corresponding to the sorts $s\in S$ and a family $(f^A)_{f\in F}$ of functions on these carrier sets corresponding to the function symbols $f\in F$. The compatibility requirement is that for a function symbol $f$ of type$(f)= s_1\times\cdots\times s_n \longrightarrow s$, the function $f^A$ must have the form $f^A\colon s_1^A\times\cdots\times s_n^A \longrightarrow s^A$.
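As an illustration only (this sketch is not part of the original article), one particular $\Sigma$-algebra for a signature with a single sort nat and function symbols zero and succ can be written down in Python; representing carriers and operations as plain Python objects is an assumption made purely for this example:

```python
# Signature Sigma: sort "nat"; function symbols zero : -> nat and succ : nat -> nat
signature = {
    "sorts": ["nat"],
    "functions": {"zero": ([], "nat"), "succ": (["nat"], "nat")},
}

# One concrete Sigma-algebra A: carrier nat^A = the non-negative integers,
# with zero^A = 0 and succ^A(x) = x + 1.
algebra_A = {
    "zero": lambda: 0,
    "succ": lambda x: x + 1,
}

# Evaluating the ground term succ(succ(zero)) in A yields 2:
value = algebra_A["succ"](algebra_A["succ"](algebra_A["zero"]()))
print(value)  # 2
```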
Category of $\Sigma$-Algebras
Let $\Sigma=(S,F)$ be a signature and let $A,B$ be $\Sigma$-algebras. A $\Sigma$-algebra-morphism $m\colon A\longrightarrow B$ is a family $(m_s\colon s^A \longrightarrow s^B)_{s\in S}$ of mappings between the carrier sets of $A,B$ fulfilling the following compatibility properties
$m_s(f^A)= f^B$ for $f\in F$ with ar$(f)=0$ and $\mathrm{type}(f)=\,\, \rightarrow s$
$m_s(f^A(a_1,\ldots,a_n))=f^B(m_{s_1}(a_1),\ldots,m_{s_n}(a_n))$ for $f\in F$ with $\mathrm{type}(f)= s_1\times\cdots\times s_n \longrightarrow s$ and for $a_i\in s_i^A$.
A map $f\colon A\longrightarrow B$ is a $\Sigma$-algebra-morphism, iff $\forall t\in T(\Sigma)\colon f(t^A)=t^B$. The class of $\Sigma$-algebras together with the $\Sigma$-algebra-morphisms forms a category [W90].
The option to use partially defined functions complicates the situation considerably and leads to refined versions of the definition. Though this does not belong to the scope of this entry, a short remark seems appropriate. For example, it is possible under this generalization that $m_s(f^A(a_1,\ldots,a_n))$ is undefined though $f^A(a_1,\ldots,a_n)$ is defined. This can be caused by the undefinedness of a term $m_{s_j}(a_j)$ or of $f^B(m_{s_1}(a_1),\ldots,m_{s_n}(a_n))$ [M89].
$\Sigma$-Subalgebras
Definition of $\Sigma$-Subalgebras
One can easily imagine that typically many different $\Sigma$-algebras exist for the same signature $\Sigma$. This holds even in the case of algebraic specifications, which can restrict the set of admissible $\Sigma$-algebras by additional axioms. In effect, this means that an abstract signature $\Sigma$ can be 'implemented' by concrete $\Sigma$-algebras with different semantics. This leads to an interest in the relationships between these $\Sigma$-algebras. One method to discuss the relationships is to use $\Sigma$-algebra-morphisms, another is the notion of $\Sigma$-subalgebras.
$\Sigma$-Subalgebras of $\Sigma$-algebras are defined in the usual way. A $\Sigma$-Algebra $A$ is called a $\Sigma$-subalgebra of a $\Sigma$-Algebra $B$, if $\forall s\in S\colon s^A\subseteq s^B$ and if $f^A(a_1,\ldots,a_n) = f^B(a_1,\ldots,a_n)$ for $f\in F$ with $\mathrm{type}(f)= s_1\times\cdots\times s_n \longrightarrow s$ and for $a_i\in s_i^A$. The subalgebra-property is written as $A\subseteq B$. The relation $\subseteq$ is reflexive, antisymmetric, and transitive [EM85].
Another way to characterize the subalgebra-property is the existence of a $\Sigma$-algebra-morphism $m\colon A\longrightarrow B$, which is the identity on each carrier set of $A$, i.e. $m=(\mathrm{id}_{s^A})_{s\in S}$.
Properties of $\Sigma$-Subalgebras
(Carriers of) $\Sigma$-Subalgebras are closed under intersection. For a family $((s^{A_i})_{s\in S},(f^{A_i})_{f\in F})_{i\in I}$ of $\Sigma$-subalgebras $A_i$, their intersection $A=((s^{A})_{s\in S}, (f^{A})_{f\in F})$ is a $\Sigma$-subalgebra given by carrier sets $s^A:= \bigcap\limits_{i\in I} s^{A_i}$ for all $s\in S$ and functions $f^A:= f^{A_k}|_{s^A}$ for all $f\in F$ for an arbitrarily chosen $k\in I$. The declaration $f^A$ is well-defined, because according to the definition of a $\Sigma$-subalgebra the functions $f^{A_i}$ must behave in the same way on $s^A$.
The closedness of the set of $\Sigma$-subalgebras under intersections has an important consequence. For a $\Sigma$-algebra $A=((s^A)_{s\in S}, (f^A)_{f\in F})$ and an $S$-sorted set $X=(\bar s)_{s\in S}$ with $\bar s\subseteq s^A$ for all $s\in S$, it assures the existence of a smallest $\Sigma$-subalgebra $A'\subseteq A$ of $A$ containing $X$, i.e. $\bar s\subseteq s^{A'}$. The $\Sigma$-algebra $A'$ is called the $\Sigma$-subalgebra of $A$ generated by $X$ [ST99].
The subalgebra-property is compatible with $\Sigma$-algebra-morphisms. Let $A,B$ be $\Sigma$-algebras and let $m=(m_s)_{s\in S}\colon A \longrightarrow B$ be a $\Sigma$-algebra-morphism. For a $\Sigma$-subalgebra $A'\subseteq A$ of $A$, its image $m(A')\subseteq B$ is a $\Sigma$-subalgebra of $B$. The expression $B':=m(A')$ is defined in the obvious way, i.e. for $A'=((s^{A'})_{s\in S},(f^{A'})_{f\in F})$ the $\Sigma$-subalgebra $B'=((s^{B'})_{s\in S},(f^{B'})_{f\in F})$ is given by $s^{B'}:=\{m_s(a)|a\in s^{A'}\}$ for all $s\in S$ and $f^{B'}(m_{s_1}(a_1),\ldots,m_{s_n}(a_n)) = m_s(f^{A'}(a_1,\ldots,a_n))$ for all function symbols $f \colon s_1 \times \ldots\times s_n \longrightarrow s$ with $f\in F$ and $a_i \in s_i^{A'}$. The preimage of a $\Sigma$-subalgebra $B'\subseteq B$ of $B$ under $m$ is a $\Sigma$-subalgebra $m^{-1}(B')\subseteq A$ of $A$. The expression $m^{-1}(B')$ can be defined analogously to $m(A')$ and is omitted here [ST99].
Initial and Terminal $\Sigma$-Algebras
Initial $\Sigma$-Algebras
Many $\Sigma$-algebras have unpleasant properties like junk (i.e. elements which are not term-generated). Thus, there is a natural interest in $\Sigma$-algebras behaving appropriately. Examples of such desirable, well-behaved $\Sigma$-algebras are initial $\Sigma$-algebras.
Let $K$ be a class of $\Sigma$-algebras. An element $A \in K$ is initial in $K$ if for every $B \in K$ there is a unique $\Sigma$-homomorphism $h\colon A \longrightarrow B$ [ST99]. The initial $\Sigma$-algebra does not always exist, but if it does, it is uniquely determined up to isomorphism [W90]. In the case of a sensible signature $\Sigma$, for a $\Sigma$-algebra $A$ there exists exactly one $\Sigma$-algebra-morphism $m^A\colon T(\Sigma)\longrightarrow A; t\mapsto t^A$ (see the entry 'evaluation'). Thus, in this case $T(\Sigma)$ is initial in $\mathrm{Alg}(\Sigma)$ [M89][W90].
For $A\in K$ initial in a class $K$ of $\Sigma$-algebras it holds $$A\models t_1=t_2 \Longleftrightarrow \forall B\in K\colon (B\models t_1=t_2)$$ for ground terms $t_1,t_2\in T(\Sigma)$ [M89]. In an initial $\Sigma$-algebra, terms are identified if necessary. The initial $\Sigma$-algebra is the finest $\Sigma$-algebra [M89].
If an element $A\in K$ of a class $K$ of $\Sigma$-algebras is term-generated and if it holds $$A\models t_1= t_2 \Longleftrightarrow \forall B\in K\colon (B\models t_1= t_2)$$ for all ground terms $t_1,t_2\in T(\Sigma)$ then $A$ is initial in $K$ [M89].
Terminal $\Sigma$-Algebras
Terminal $\Sigma$-algebras are the dual objects of initial $\Sigma$-algebras in the sense of category theory. This leads to the following definition based on the discussion of initial $\Sigma$-algebras given above. Let $K$ be a class of $\Sigma$-algebras. An element $A \in K$ is terminal in $K$ if for every $B \in K$ there is a unique $\Sigma$-homomorphism $h\colon B \longrightarrow A$ [ST99]. The terminal algebra does not always exist, but if it does, it is uniquely determined up to isomorphism [W90]. In the case of a sensible signature $\Sigma$, the terminal $\Sigma$-algebra $U\in \mathrm{Alg}(\Sigma)$ is the so-called unit algebra, where every carrier set consists of exactly one element [M89] [W90].
For $A\in K$ terminal in a class $K$ of $\Sigma$-algebras it holds $$A\models t_1=t_2 \Longleftrightarrow \exists B\in K\colon (B\models t_1=t_2)$$ for ground terms $t_1,t_2\in T(\Sigma)$ [M89]. In a terminal $\Sigma$-algebra, terms are identified if possible. The terminal algebra is the coarsest algebra [M89].
If $A\in K$ is an element of a class $K\subseteq \mathrm{Gen}(\Sigma)$ of term-generated $\Sigma$-algebras and if it holds $$A\models t_1= t_2 \Longleftrightarrow \exists B\in K\colon (B\models t_1= t_2)$$ for all ground terms $t_1,t_2\in T(\Sigma)$ then $A$ is terminal in $K$ [M89].
The very simple structure of the $\Sigma$-algebra $U$ terminal in $\mathrm{Alg}(\Sigma)$ makes it uninteresting both from a practical and theoretical point of view. As soon as the class $\mathrm{Alg}(\Sigma)$ of all $\Sigma$-algebras belonging to a signature $\Sigma$ is restricted to a subclass $K\subseteq \mathrm{Alg}(\Sigma)$, the situation changes, however. If $K$ still has a terminal $\Sigma$-algebra $T$, it does not necessarily have a simple structure anymore. If $K$ is 'small' enough, $T$ and the initial $\Sigma$-algebra of $K$ may even be isomorphic. In such a case, $K$ is called monomorphic.
Such a subclass $K$ can be defined for example by a set $E$ of axioms characterizing the $\Sigma$-algebras $A\in K$ contained in $K$ as models of the theory given by $E$. This situation is typical for algebraic specifications.
Example of Initial and Terminal $\Sigma$-Algebras
Let $A,B$ be term-generated $\Sigma$-algebras and let $m\colon A\longrightarrow B$ be a $\Sigma$-algebra-morphism. In the class $K:=\{A,B\}$, the algebra $A$ is initial and $B$ terminal. The existence of a morphisms means, that elements distinguished in $A$ are eventually identified in $B$. As one can easily see in this example, initial and terminal $\Sigma$-algebras are typically not isomorphic.
Free $\Sigma$-Algebras
The idea of a free object (as a kind of generic structure) in the sense of category theory resp. universal algebra can be applied to $\Sigma$-algebras as well. To give an explicit definition, let $K$ be a class of $\Sigma$-algebras. A $\Sigma$-algebra $A \in \mathrm{Alg}(\Sigma,X)$ with $A\in K$ is called free over $X$ in $K$, if there exists an assignment $u\colon X\longrightarrow A$ (serving as universal mapping) such that for every assignment $h\colon X\longrightarrow B$ to $B\in K$ there exists exactly one $\Sigma$-algebra-morphism $\bar h\colon A\longrightarrow B$ with $h=\bar h\circ u$ [EM85]. Due to the definition, two $\Sigma$-algebras $A_1, A_2\in K$ free over $X$ in $K$ are isomorphic to each other, $A_1\cong A_2$ [EM85]. A $\Sigma$-algebra $I\in K$ is free over $\emptyset$ in $K$ iff $I$ is initial in $K$ [EM85].
$\Sigma$ Number Algebras
Let $\Sigma=(S,F)$ be a signature. A $\Sigma$-algebra is called a $\Sigma$ number algebra, if its carrier sets are recursive subsets of the natural numbers $\mathbb{N}$. It is called recursive if its functions are recursive [W90].
Reduct of $\Sigma$-Algebras
Every $\Sigma$-algebra $A$ is associated with a specific signature $\Sigma =(S,F)$. The so-called reduct offers the possibility to adapt $A$ to a modification of $\Sigma$, if the modified signature $\Sigma'=(S',F')$ is related to $\Sigma$ via a signature morphism $m\colon \Sigma' \longrightarrow \Sigma$. Formally, the $m$-reduct $A':=A|_m$ of $A$ is a $\Sigma'$-algebra defined as follows [W90]: $A'$ has carrier sets $s^{A'} :=m(s)^A$ for each $s\in S'$ and operations $f^{A'}:=m(f)^A$ for each $f\in F'$. Similarly, for a $\Sigma$-algebra-morphism $h\colon A \longrightarrow B$ the $m$-reduct $h':=h|_m$ of $h$ is a $\Sigma'$-algebra-morphism $h'\colon A|_m \longrightarrow B|_m$ defined as $(h|_m)_s := h|_{m(s)}$ for each $s\in S'$ [W90]. Together, the mappings $A\rightarrow A|_m$ and $h\rightarrow h|_m$ form a functor $\cdot |_m\colon \mathrm{Alg}(\Sigma)\longrightarrow \mathrm{Alg}(\Sigma')$ [W90]. If $\Sigma'$ is a subsignature of $\Sigma$, then the $i$-reduct $A':=A|_i$ of $A$ (with $i \colon \Sigma' \longrightarrow \Sigma$ as canonical inclusion) is also written as $A|_\Sigma$. In this case, $A'$ is just $A$ with some carrier sets and/or functions removed [ST99].
[EM85] H. Ehrig, B. Mahr: "Fundamentals of Algebraic Specifications", Volume 1, Springer 1985
[M89] B. Möller: "Algorithmische Sprachen und Methodik des Programmierens I", lecture notes, Technical University Munich 1989
[ST99] D. Sannella, A. Tarlecki, "Algebraic Preliminaries ", in Egidio Astesiano, Hans-Joerg Kreowski, Bernd Krieg-Brueckner, "Algebraic Foundations of System Specification", Springer 1999
[W90] M. Wirsing: "Algebraic Specification", in J. van Leeuwen: "Handbook of Theoretical Computer Science", Elsevier 1990
Sigma-algebra (Computer Science). Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Sigma-algebra_(Computer_Science)&oldid=29689
eigenvalue of matrix
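Before the derivations below, here is a small numerical companion that can be used to check the hand computations. It applies NumPy's linalg.eig to a 2x2 matrix built from the entries 4, 2, 1 and 3 (read row by row), which the lesson below uses as its 2x2 example:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # roots of det(A - lambda*I) = 0, here 5 and 2 (order may vary)

# verify A v = lambda v for the first eigenpair
v = eigenvectors[:, 0]
print(np.allclose(A @ v, eigenvalues[0] * v))   # True
```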
Eigenvalues are a special set of scalars associated with a linear system of equations (i.e., a matrix equation) that are sometimes also known as characteristic roots, characteristic values (Hoffman and Kunze 1971), proper values, or latent roots (Marcus and Minc 1988, p. 144). Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices. Their determination is extremely important in physics and engineering, where it is equivalent to matrix diagonalization and arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few. Eigenvalues and eigenvectors also have immense applications in the physical sciences, especially quantum mechanics.

The mathematics of it. For a square matrix $A$, an eigenvector and eigenvalue make this equation true: $Av = \lambda v$. Here $A$ is an n-by-n matrix, $v$ is a non-zero n-by-1 vector, and $\lambda$ is a scalar (which may be either real or complex). Any value of $\lambda$ for which this equation has a solution is known as an eigenvalue of the matrix $A$, and every vector $v$ satisfying it is called an eigenvector of $A$ belonging to the eigenvalue $\lambda$. In general, the way $A$ acts on a vector is complicated, but there are certain cases where the action simply maps the vector to the same vector multiplied by a scalar factor. A simple example is that an eigenvector does not change direction in a transformation: the eigenvalue tells whether the special vector $x$ is stretched, shrunk, reversed, or left unchanged when it is multiplied by $A$. We may find $\lambda = 2$ or $\frac{1}{2}$ or $-1$ or $1$, and the eigenvalue could be zero! If $A$ is the identity matrix, every vector has $Ax = x$.

Finding the eigenvalues. When you multiply a matrix $A$ by a vector $v$ you get a new vector $x$; in the special case where you get a scaled version of the same vector, the scalar $\lambda$ is an eigenvalue of $A$. By definition, $\lambda$ is an eigenvalue of the $n\times n$ matrix $A$ if there is a nonzero vector $v$ in $\mathbb{R}^n$ such that $Av = \lambda v$, that is, $\lambda v - Av = 0$, or $(\lambda I_n - A)v = 0$. There are a couple of things we need to note here. First, we are searching for a solution under the condition that $v$ is not equal to zero: when $v$ equals zero, $\lambda$'s value becomes trivial, because any scalar or matrix multiplied by the zero vector equals the zero vector. Secondly, for the equation to be true with a nonzero $v$, the matrix $\lambda I - A$ that multiplies $v$ must be noninvertible, i.e. there must not exist a matrix $B$ such that $(\lambda I - A)B = B(\lambda I - A) = I$; this means its determinant must equal zero. (We multiply $\lambda v$ by an identity matrix $I$ of the same size as $A$ so that $v$ can be factored out, since both $A$ and $\lambda I$ are then multiplied by $v$; the identity matrix does not actually affect the values in the equation.) The resulting condition $\det(\lambda I - A) = 0$ is known as the characteristic equation of $A$, and its left-hand side is the characteristic polynomial. When we solve for the determinant, we get a polynomial with the eigenvalues as its roots, so solving for the roots gives us our eigenvalues. An $n \times n$ matrix $A$ has at most $n$ eigenvalues. When we know an eigenvalue $\lambda$, we find an eigenvector by solving $(A - \lambda I)x = 0$: if this system has a nonzero solution, $A - \lambda I$ is not invertible, and the eigenvectors make up the nullspace of $A - \lambda I$. In short: STEP 1, for each eigenvalue $\lambda$ set up $(A - \lambda I)x = 0$, where $x$ is the eigenvector associated with $\lambda$; STEP 2, find $x$ by Gaussian elimination (that is, convert the augmented matrix to reduced form).

A 2x2 example. Let's practice finding eigenvalues by looking at a 2x2 matrix; for this example, we'll look at the matrix with entries 4, 2, 1, and 3. There are a few different methods you can use to try to find the roots of a second-order polynomial, but the only method that always works is the quadratic formula.

A 3x3 example. We figured out the eigenvalues for a 2 by 2 matrix, so let's see if we can figure out the eigenvalues for a 3 by 3 matrix; it is a good bit more difficult just because the math becomes a little hairier. Again we start by inserting our matrix for $A$ and writing out the identity matrix. To solve the determinant, we look at each of the three elements in the top row consecutively and cross out everything else in the same row and column; we then multiply that element by the 2x2 determinant made of everything we didn't cross out, and put the three 2x2 determinants together in an equation (the same method of computing the determinant of a $3 \times 3$ matrix that we used in the previous section). To finish, we get our eigenvalues by finding the roots of the characteristic polynomial. This may require more trial and error than our 2x2 example, since the quadratic formula only works for second-order polynomials and we have a third-order one here. Due to the complexity of solving all this, we won't cover every single step, but after we've solved everything our lambdas equal 2, 1, and -1.

Properties. Let $A$ be a square matrix of order $n$. If $\lambda$ is an eigenvalue of $A$, then: 1. $\lambda^m$ is an eigenvalue of $A^m$ for every positive integer $m$. 2. If $A$ is invertible, then $1/\lambda$ is an eigenvalue of $A^{-1}$. 3. $A$ is not invertible if and only if $0$ is an eigenvalue of $A$. 4. The diagonal elements of a triangular matrix are equal to its eigenvalues. A row vector $w$ with $wA = \lambda w$ is called a left eigenvector of $A$. It turns out that the left eigenvectors of any matrix are equal to the right eigenvectors of the transpose matrix: if $w$ is an eigenvector of the transpose, transposing both sides of its defining equation gives the corresponding statement for $A$, so if we take the transpose and use eig(), we can easily find the left eigenvectors (and then, for example, reproductive values).

Eigen decomposition. The decomposition of a square matrix into eigenvalues and eigenvectors is known in this work as eigen decomposition, and the fact that this decomposition is always possible, as long as the matrix consisting of the eigenvectors of $A$ is square, is known as the eigen decomposition theorem. Each eigenvalue is paired with a corresponding eigenvector (or, in general, a corresponding right eigenvector and a corresponding left eigenvector). If no eigenvalue is repeated, the system is said to be nondegenerate; if the eigenvalues are $k$-fold degenerate, the system is said to be degenerate and the corresponding eigenvectors need not be linearly independent, in which case one may impose the additional constraint that the eigenvectors be orthogonal.

Further facts and questions. A nonnegative matrix $A$ with largest eigenvalue $\lambda_{1} < 1$ gives $(I-A)^{-1}$ with the same eigenvector and eigenvalue $\frac{1}{1-\lambda_{1}}$; is there any other formula relating the inverse matrix and eigenvalues? If two positive matrices commute, then each eigenvalue of the sum is a sum of eigenvalues of the summands; this would be true more generally for commuting normal matrices. For arbitrary positive matrices, the largest eigenvalue of the sum will be less than or equal to the sum of the largest eigenvalues of the summands. If $g$ is an eigenvalue of a correlation matrix, then an asymptotic confidence interval is $g \pm z^{*}\sqrt{2g^{2}/n}$, where $z^{*}$ is the standard normal quantile. The generalized eigenvalue problem is to determine the solutions of the equation $Av = \lambda Bv$, where $A$ and $B$ are n-by-n matrices, $v$ is a column vector of length $n$, and $\lambda$ is a scalar; the values of $\lambda$ that satisfy the equation are the generalized eigenvalues (in MATLAB, [V,D,W] = eig(A,B) also returns a full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B).

Numerical methods. In numerical analysis, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. The power method finds the eigenvalue of a matrix $A$ with the largest modulus, which might be different from the largest eigenvalue of $A$. The Lanczos algorithm computes the eigenvalues and eigenvectors of large symmetric sparse matrices. The Jacobi method repeatedly carries out plane rotations so that eventually all off-diagonal elements of the matrix become zero; that is, $A$ is converted into a diagonal eigenvalue matrix by a sequence of orthogonal rotation matrices whose product is the eigenvector matrix.

Exercises.
- Let A = \begin{bmatrix} -6 & 3 \\ 2 & k \end{bmatrix}. For A to have 0 as an eigenvalue, k must be \underline{\quad\quad}.
- True or false (justify your answer): if A is a $2 \times 2$ matrix with eigenvalues $\lambda_1 = 2$ and $\lambda_2 = 3$, then A is invertible.
- Find an invertible matrix S and a diagonal matrix D such that \begin{pmatrix} 1 & 4 \\ 1 & -2 \end{pmatrix} = SDS^{-1}.
- Find the general solution of $X' = \bigl(\begin{smallmatrix} -1 & 7\\ -7 & 13 \end{smallmatrix}\bigr) X$.
- Given $\frac{\mathrm{d} x}{\mathrm{d} t}= -2x+4xy$ and $\frac{\mathrm{d} y}{\mathrm{d} t}= 2y(1-\frac{y}{2})-3xy$, find all critical (equilibrium) points and, using the Jacobian matrix, classify them (if possible).
- The matrix A = \begin{bmatrix} 1 & 7 \\ -7 & -1 \end{bmatrix} has complex eigenvalues $\lambda_{1,2} = a \pm bi$; find the values of a and b.
- Find the eigenvalues, and an eigenvector corresponding to each eigenvalue, for A = \begin{bmatrix} 1 & -4\\ 4 & -7 \end{bmatrix}.
- Find the general solution of $x_1' = 3x_1 + x_2$, $x_2' = 2x_1 + 4x_2$ using the eigenvalue method.

References.
- Arfken, G. "Eigenvectors, Eigenvalues." §4.7 in Mathematical Methods for Physicists, 3rd ed. Orlando, FL: Academic Press, pp. 229-237, 1985.
- Hoffman, K. and Kunze, R. Linear Algebra, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1971.
- Marcus, M. and Minc, H. Introduction to Linear Algebra. New York: Dover, p. 145, 1988.
- Nash, J. C. "The Algebraic Eigenvalue Problem." Ch. 9 in Compact Numerical Methods for Computers: Linear Algebra and Function Minimisation, 2nd ed. Bristol, England: Adam Hilger, pp. 102-118, 1990.
- Press, W. H.; Flannery, B. P.; Teukolsky, S. A.; and Vetterling, W. T. "Eigensystems." Ch. 11 in Numerical Recipes in FORTRAN: The Art of Scientific Computing, 2nd ed. Cambridge, England: Cambridge University Press, pp. 449-489, 1992.
- Weisstein, Eric W. "Eigenvalue." From MathWorld--A Wolfram Web Resource. https://mathworld.wolfram.com/Eigenvalue.html
Multiplying by an identity matrix is like multiplying by one for scalar equations. Show that (1) det(A)=n∏i=1λi (2) tr(A)=n∑i=1λi Here det(A) is the determinant of the matrix A and tr(A) is the trace of the matrix A. Namely, prove that (1) the determinant of A is the product of its eigenvalues, and (2) the trace of A is the sum of the eigenvalues. transformation represented by a matrix . Now consider a similarity transformation of . 1 Recommendation. Algebra, 2nd ed. Eigenvalues may be computed in the Wolfram Language using Eigenvalues[matrix]. Englewood Cliffs, NJ: Prentice-Hall, p. 182, 1971. Expert Advice on Bullying for Teachers | Bullying Prevention in Schools, FTCE School Psychologist PK-12 (036): Test Practice & Study Guide, 12th Grade English: Homework Help Resource, Introduction to Financial Accounting: Certificate Program, Psychosocial Development in Adolescence: Homework Help, DNA Replication & Mutation - Middle School Life Science: Homeschool Curriculum, Quiz & Worksheet - Influences on the Environmental Lapse Rate, Quiz & Worksheet - The Role of Notes on Financial Statements, Quiz & Worksheet - Characteristics of Multiple Personalities Disorder, Quiz & Worksheet - Pros & Cons of the Cognitive Model, Quiz & Worksheet - Characteristics of Addictive Hallucinogens, Length-Tension Relationship in Skeletal Muscle, International Baccalaureate vs. Advanced Placement Tests, Tech and Engineering - Questions & Answers, Health and Medicine - Questions & Answers, The matrix A is factored in the form PDP^-1 . For the matrix, A= 3 2 5 0 : Find the eigenvalues and eigenspaces of this matrix. Log in here for access. Definitions and terminology Multiplying a vector by a matrix, A, usually "rotates" the vector , but in some exceptional cases of , A is parallel to , i.e. In Mathematics, eigenve… The determinant of A I must be zero. For example, for a matrix, the eigenvalues are, which arises as the solutions of the characteristic All we have left to do is find the roots of the characteristic polynomial to get our eigenvalues. Hoffman, K. and Kunze, R. "Characteristic Values." Knowledge-based programming for everyone. So lambda times 1, 0, 0, 1, minus A, 1, 2, 4, 3, is going to be equal to 0. Recipes in FORTRAN: The Art of Scientific Computing, 2nd ed. Is there any other formulas between inverse matrix and eigenvalue that I don't know? This calculator allows you to enter any square matrix from 2x2, 3x3, 4x4 all the way up to 9x9 size. As you can see, you add the determinants together with alternating positive and negative signs between them. Comput. A matrix is noninvertible only when its determinant equals zero, as you can see on your screen right now. Kaltofen, E. "Challenges of Symbolic Computation: My Favorite Open Problems." Adding a constant times the identity matrix to , so the new eigenvalues equal the old plus . In general, an identity matrix is written as an nxn matrix with ones on the diagonal starting at the top left and zeroes everywhere else, which you can see in the matrices that are appearing on your screen right now. They have many uses! Works with matrix from 2X2 to 10X10. J. Symb. Did you know… We have over 220 college and a corresponding left eigenvector; there is In other words, a matrix times a vector equals a scalar (lambda) times that same vector. If A is invertible, then is an eigenvalue of A-1. Even if and have the same eigenvalues, they do not necessarily have the same eigenvectors. Finding of eigenvalues and eigenvectors. 
As a member, you'll also get unlimited access to over 83,000 We will see how to find them (if they can be found) soon, but first let us see one in action: Just like before, we need to simplify the inside of the determinant to get a single matrix. Use the Diagonalization theorem to find the eigenvalues of A and a basis for each eigenspace. Study.com has thousands of articles about every In general, the way acts on is complicated, but there are certain cases where the action maps to the same vector, multiplied by a scalar factor.. Eigenvalues and eigenvectors have immense applications in the physical sciences, especially quantum mechanics, among other fields. 1985. Damien has a master's degree in physics and has taught physics lab to college students. Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices, which is especially common in numerical and computational applications. Explore thousands of free applications across science, mathematics, engineering, technology, business, art, finance, social sciences, and more. Cambridge, England: © copyright 2003-2020 Study.com. Eigenvalue Calculator. Why? How many eigenvalues a matrix has will depend on the size of the matrix. If B has eigenvalues 1, 2, 3, C has eigenvalues 4, 5, 6, and D has eigenvalues 7, 8, 9, what are the eigenvalues of the 6 by 6 matrix A=B&C0&D? Multiplying a matrix by a matrix or a scalar gives you another matrix, but multiplying by a vector works a little differently. The calculator will find the eigenvalues and eigenvectors (eigenspace) of the given square matrix, with steps shown. So a 2x2 matrix should have 2 eigenvalues. First we insert our matrix in for A, and write out the identity matrix. imaginable degree, area of Log in or sign up to add this lesson to a Custom Course. Suppose is any eigenvalue of Awith corresponding eigenvector x, then 2 will be an eigenvalue of the matrix A2 with corresponding eigenvector x. Let's walk through it step by step: Get access risk-free for 30 days, Even if and have the same eigenvalues, they do not necessarily have the same eigenvectors. of , then. An easy and fast tool to find the eigenvalues of a square matrix. Calculator of eigenvalues and eigenvectors. delta, can be applied to yield additional constraints, This is how to recognize an eigenvalue : We already know how to check if a given vector is an eigenvector of A and in that case to find the eigenvalue. From The #1 tool for creating Demonstrations and anything technical. We call this polynomial the matrix's characteristic polynomial. and Kunze 1971), proper values, or latent roots (Marcus and Minc 1988, p. 144). A simple example is that an eigenvector does not change direction in a transformation:. 'Eigen' is a German word which means 'proper' or 'characteristic'. Create an account to start this course today. \({\lambda _{\,1}} = 2\) : • Once the eigenvaluesof a matrix (A) have been found, we can find the eigenvectors by Gaussian Elimination. Orlando, FL: Academic Press, pp. We'll be using the matrix you see on our screen for this example, with the numbers 1, 2, 1, -2, 1, 1, 4, 2, and 0. We can then figure out what the eigenvalues of the matrix are by solving for the roots of the characteristic polynomial. If is an eigenvector of the transpose, it satisfies By transposing both sides of the equation, we get. no analogous distinction between left and right for eigenvalues). 's' : ''}}. 29, 891-919, 2000. 
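As a quick numerical check of the 2x2 example above, and of the fact that the power method targets the eigenvalue of largest modulus, here is a short illustrative snippet (written in R purely for demonstration; any environment with an eigenvalue routine behaves the same way):
A <- matrix(c(4, 2, 1, 3), nrow = 2, byrow = TRUE)   # the 2x2 example above
eigen(A)$values                                      # returns 5 and 2
polyroot(c(10, -7, 1))                               # roots of 10 - 7*lambda + lambda^2
# Power iteration converges to the eigenvalue of largest modulus,
# which need not be the algebraically largest eigenvalue.
power_method <- function(A, iter = 100) {
  v <- rep(1, nrow(A))
  for (i in 1:iter) {
    v <- A %*% v                 # apply the matrix
    v <- v / sqrt(sum(v^2))      # renormalize
  }
  as.numeric(t(v) %*% A %*% v)   # Rayleigh quotient estimate
}
power_method(A)                  # approximately 5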
On the computational side, eigenvalues may be computed in the Wolfram Language using Eigenvalues[matrix], and eigenvalues and eigenvectors can be returned together using Eigensystem[matrix]. The Lanczos algorithm computes eigenvalues and eigenvectors of large sparse symmetric matrices. The generalized eigenvalue problem asks for solutions of Av = λBv, where A and B are n×n matrices, v is a column vector of length n, and λ is a scalar. Free online eigenvalue calculators handle square matrices from 2x2 up to about 10x10 and return the eigenvalues together with the corresponding eigenvectors, with steps shown.
References: Arfken, G., Mathematical Methods for Physicists, 3rd ed., Orlando, FL: Academic Press, 1985; Hoffman, K. and Kunze, R., Linear Algebra, 2nd ed., Englewood Cliffs, NJ: Prentice-Hall, 1971 ("Characteristic Values," p. 182); Kaltofen, E., "Challenges of Symbolic Computation: My Favorite Open Problems," J. Symb. Comput. 29, 891-919, 2000; Nash, J. C., Numerical Methods for Computers: Linear Algebra and Function Minimisation, 2nd ed., Bristol, England: Adam Hilger; Press, W. H., Flannery, B. P., Teukolsky, S. A. and Vetterling, W. T., Numerical Recipes in FORTRAN: The Art of Scientific Computing, 2nd ed., Cambridge, England: Cambridge University Press, 1992.
Vignette of R package hdiVAR
Xiang Lyu
Problem setup
This package considers the estimation and statistical inference of high-dimensional vector autoregression with measurement error, also known as linear gaussian state-space model. A sparse expectation-maximization (EM) algorithm is provided for parameter estimation. For transition matrix inference, both global testing and simultaneous testing are implemented, with consistent size and false discovery rate (FDR) control. The methods are proposed in Lyu et al. (2020).
The model of interest is high-dimensional vector autoregression (VAR) with measurement error, \[ \mathbf{y}_{t} = \mathbf{x}_{t} + \mathbf{\epsilon}_{t}, \ \ \ \ \mathbf{x}_{t+1} = \mathbf{A}_* \mathbf{x}_{t} + \mathbf{\eta}_{t}, \] where \(\mathbf{y}_{t} \in \mathbb{R}^{p}\) is the observed multivariate time series, \(\mathbf{x}_{t}\in \mathbb{R}^{p}\) is the multivariate latent signal that admits an autoregressive structure, \(\mathbf{\epsilon}_{t}\in \mathbb{R}^{p}\) is the measurement error for the observed time series, \(\mathbf{\eta}_{t} \in \mathbb{R}^{p}\) is the white noise of the latent signal, and \(\mathbf{A}_*\in \mathbb{R}^{p\times p}\) is the sparse transition matrix that encodes the directional relations among the latent signal variables of \(\mathbf{x}_{t}\). Furthermore, we focus on the scenario \(\|\mathbf{A}_*\|_2 <1\) such that the VAR model of \(\mathbf{x}_{t}\) is stationary. The error terms \(\mathbf{\epsilon}_{t}\) and \(\mathbf{\eta}_{t}\) are i.i.d. multivariate normal with mean zero and covariance \(\sigma_{\epsilon,*}^2 \mathbf{I}_p\) and \(\sigma_{\eta,*}^2 \mathbf{I}_p\), respectively, and are independent of \(\mathbf{x}_{t}\). This package can handle high-dimensional setting where \(p^2\) exceeds the length of series \(T\).
Estimation aims to recover \(\{\mathbf{A}_*, \sigma_{\eta,*}^2, \sigma_{\epsilon,*}^2\}\) from the observations \(\mathbf{y}_{t}\). The statistical inference goal is the transition matrix \(\mathbf{A}_*\). The global hypothesis is \[ H_{0}: A_{*,ij} = A_{0,ij}, \ \textrm{ for all } (i,j) \in \mathcal{S} \quad \textrm{versus} \quad H_{1}: A_{*,ij} \neq A_{0,ij}, \ \textrm{ for some } (i,j) \in \mathcal{S}, \] for a given \(\mathbf{A}_{0} = (A_{0,ij}) \in \mathbb{R}^{p \times p}\) and \(\mathcal{S} \subseteq [p] \times [p]\), where \([p] = \{1, \ldots, p\}\). The most common choice is \(\mathbf{A}_0=\mathbf{0}_{p\times p}\) and \(\mathcal{S} =[p] \times [p]\). The simultaneous hypotheses are \[ H_{0; ij}: A_{*,ij} = A_{0,ij}, \quad \textrm{versus} \quad H_{1; ij}: A_{*,ij} \ne A_{0,ij}, \ \textrm{ for all } (i, j) \in \mathcal{S}. \]
1. Estimation: sparse EM algorithm
Let \(\{ \mathbf{y}_{t},\mathbf{x}_{t} \}_{t=1}^{T}\) denote the complete data, where \(T\) is the total number of observations, \(\mathbf{y}_{t}\) is observed but \(\mathbf{x}_{t}\) is latent. Let \(\Theta = \left\{ \mathbf{A}, \sigma_{\eta}^2, \sigma_{\epsilon}^2 \right\}\) collect all the parameters of interest in model , and \(\Theta_* = \left\{ \mathbf{A}_*, \sigma_{\eta,*}^2, \sigma_{\epsilon,*}^2 \right\}\) denote the true parameters. The goal is to estimate \(\Theta_*\) by maximizing the log-likelihood function of the observed data, \(\ell (\Theta | \{\mathbf{y}_{t}\}_{t=1}^T)\), with respect to \(\Theta\). The computation of \(\ell (\Theta | \{\mathbf{y}_{t}\}_{t=1}^T)\), however, is highly nontrivial. Sparse EM algorithm then turns to an auxiliary function, named the finite-sample \(Q\)-function, \[ Q_y (\Theta | \Theta') = \mathbb{E} \left[ \ell\left( \Theta | \{ \mathbf{y}_{t},\mathbf{x}_{t} \}_{t=1}^{T} \right) | \{ \mathbf{y}_{t}\}_{t=1}^T, \Theta' \right], \] which is defined as the expectation of the log-likelihood function for the complete data \(\ell(\Theta | \{ \mathbf{y}_{t},\mathbf{x}_{t} \}_{t=1}^{T})\), conditioning on a parameter set \(\Theta'\) and the observed data \(\mathbf{y}_t\), and the expectation is taken with respect to the latent data \(\mathbf{x}_t\). The \(Q\)-function can be computed efficiently by Kalman filter and smoothing, and provides a lower bound of the target log-likelihood function \(\ell (\Theta|\{\mathbf{y}_{t}\}_{t=1}^T)\) for any \(\Theta\). The equality \(\ell (\Theta'|\{\mathbf{y}_{t}\}_{t=1}^T) = Q_y(\Theta' | \Theta')\) holds if \(\Theta = \Theta'\). Maximizing Q-function provides an uphill step of the likelihood. Starting from an initial set of parameters \(\hat\Theta_0\), sparse EM algorithm then alternates between the expectation step (E-step), where the \(Q\)-function \(Q_y (\Theta | \hat{\Theta}_{k})\) conditioning on the parameters \(\hat\Theta_{k}\) of the \(k\)th iteration is computed, and the maximization step (M-step), where the parameters are updated by maximizing the \(Q\)-function \(\hat{\Theta}_{k+1} = \arg\max_{\Theta} Q_y (\Theta | \hat{\Theta}_{k})\).
For the M-step, the maximizer of \(Q_y(\Theta | \hat{\Theta}_{k})\) satisfies that \(\frac{1}{T-1} \sum_{t=1}^{T-1} \mathbf{E}_{t,t+1;k} = \{\frac{1}{T-1}\sum_{t=1}^{T-1} \mathbf{E}_{t,t;k} \}\mathbf{A}^\top\), where \(\mathbf{E}_{t,s;k} = \mathbb{E} \left\{ \mathbf{x}_{t}\mathbf{x}_{s}^\top | \{\mathbf{y}_{t'}\}_{t'=1}^T, \hat{\Theta}_{k-1} \right\}\) for \(s, t\in [T]\) is obtained from the E-step. Instead of directly inverting the matrix involving \(\mathbf{E}_{t,t;k}\)'s, which is computationally challenging when the dimension \(p\) is high and yields a dense estimator of \(\mathbf{A}_*\) leading to a divergent statistical error, sparse EM algorithm implements generalized Dantzig selector for Yule-Walker equation,
\[ \hat{\mathbf{A}}_{k} = \arg\min_{\mathbf{A} \in \mathbb{R}^{p\times p}} \|\mathbf{A}\|_1, \;\; \textrm{such that} \; \left\| \frac{1}{T-1} \sum_{t=1}^{T-1} \mathbf{E}_{t,t+1;k} -\frac{1}{T-1} \sum_{t=1}^{T-1} \mathbf{E}_{t,t;k} \mathbf{A}^\top \right\|_{\max} \le \tau_k, \] where \(\tau_k\) is the tolerance parameter, tuned via cross-validation in each iteration (in the observed time series, the first Ti_train time points serve as the training set, the next Ti_gap time points are left out as a gap, and the remaining time points are used as the test set). The optimization problem is solved using linear programming in a row-by-row parallel fashion. In the package, an option of further hard thresholding \(\hat{\mathbf{A}}_{k}\) is provided to improve model selection performance. Hard thresholding sets entries whose magnitude is less than the threshold level to zero. The variance estimates are next updated as \[ \begin{align} \label{eqn: epsilon} \begin{split} \hat\sigma_{\eta,k}^2 & = \frac{1}{p(T-1)} \sum_{t=1}^{T-1} \left\{ \mathrm{tr}( \mathbf{E}_{t+1,t+1;k}) - \mathrm{tr}\left ( \hat{\mathbf{A}}_{k} \mathbf{E}_{t,t+1;k} \right) \right\} , \\ \hat\sigma^2_{\epsilon,k} & = \frac{1}{pT} \sum_{t=1}^{T} \left\{ \mathbf{y}_{t}^\top \mathbf{y}_{t} - 2 \mathbf{y}_{t}^\top \mathbf{E}_{t;k} + \mathrm{tr} (\mathbf{E}_{t,t;k}) \right\}, \end{split} \end{align} \] where \(\mathbf{E}_{t;k} = \mathbb{E} \{ \mathbf{x}_{t} | \{\mathbf{y}_{t'}\}_{t'=1}^T, \hat{\Theta}_{k-1} \}\) for \(t \in [T]\); these updates come from taking the derivative of \(Q_y(\Theta | \hat{\Theta}_{k})\). The sparse EM algorithm terminates when it reaches the maximal number of iterations or when the estimates are close enough in two consecutive iterations, e.g., \(\min \left\{ \|\hat{\mathbf{A}}_{k} -\hat{\mathbf{A}}_{k-1} \|_F , | \hat{\sigma}_{\eta,k }-\hat{\sigma}_{\eta,k-1}| ,| \hat{\sigma}_{\epsilon, k}-\hat{\sigma}_{\epsilon, k-1}| \right\} \le 10^{-3}\).
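As a small illustration of the optional hard-thresholding step described above (a sketch only; the function name is illustrative and not part of the package interface):
hard_threshold <- function(A_hat, level) {
  A_hat[abs(A_hat) < level] <- 0   # zero out entries below the threshold level
  A_hat
}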
2. Statistical inference
2.a Gaussian test statistic matrix
The fundamental tool of both tests is a Gaussian test statistic matrix whose entries marginally follow a standard normal distribution under the null.
The test statistic is constructed as follows. Observation \(\mathbf{y}_t\) follows an autoregressive structure, \(\mathbf{y}_{t+1} = \mathbf{A}_* \mathbf{y}_{t} + \mathbf{e}_{t}\), with the error term \(\mathbf{e}_{t} = - \mathbf{A}_* \mathbf{\epsilon}_{t}+ \mathbf{\epsilon}_{t+1} + \mathbf{\eta}_{t}\). Then the lag-1 auto-covariance of the error \(\mathbf{e}_t\) is of the form, \[ \mathbf{\Sigma}_e = \mathrm{Cov}(\mathbf{e}_{t},\mathbf{e}_{t-1}) = -\sigma_{\epsilon,*}^2 \mathbf{A}_*. \] This suggests that we can apply the covariance testing methods on \(\mathbf{\Sigma}_e\) to infer transition matrix \(\mathbf{A}_*\). However, \(\mathbf{e}_t\) is not directly observed. Define generic estimates of \(\Theta_*\) by \(\left \{\hat{\mathbf{A}},\hat\sigma_{\epsilon}^2, \hat\sigma_{\eta}^2 \right\}\) (sparse EM estimates also work). We use them to reconstruct this error, and obtain the sample lag-1 auto-covariance estimator, \[\hat{\mathbf{\Sigma}}_e = \frac{1}{T-2} \sum_{t=2}^{T-1} \hat{\mathbf{e}}_{t}\hat{\mathbf{e}}_{t-1}^\top, \ \text{where} \ \ \hat{\mathbf{e}}_{t} = \mathbf{y}_{t+1} - \hat{\mathbf{A}} \mathbf{y}_{t} - \frac{1}{T-1}\sum_{t'=1}^{T-1} (\mathbf{y}_{t'+1} - \hat{\mathbf{A}}\mathbf{y}_{t'}).\]
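Before the bias and variance corrections described next, the reconstruction of the errors and the sample lag-1 auto-covariance can be sketched as follows (a schematic R snippet with illustrative names, not the package's internal code; Y is the p x T observation matrix and A_hat a generic estimate of the transition matrix):
lag1_autocov <- function(Y, A_hat) {
  p  <- nrow(Y); Ti <- ncol(Y)
  E  <- Y[, 2:Ti] - A_hat %*% Y[, 1:(Ti-1)]      # raw residuals e_t, t = 1, ..., T-1
  E  <- E - rowMeans(E)                          # center by the sample mean of the residuals
  E[, 2:(Ti-1)] %*% t(E[, 1:(Ti-2)]) / (Ti - 2)  # (T-2)^{-1} sum_{t=2}^{T-1} e_t e_{t-1}^T
}
Since \(\mathbf{\Sigma}_e = -\sigma_{\epsilon,*}^2 \mathbf{A}_*\), the entries of this raw estimator already carry information about \(\mathbf{A}_*\); the corrections below turn them into entrywise Gaussian test statistics.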
This sample estimator \(\hat{\mathbf{\Sigma}}_e\), nevertheless, involves some bias due to the reconstruction of the error term, and also an inflated variance due to the temporal dependence of the time series data. Bias and variance correction lead to the Gaussian matrix test statistic \(\mathbf{H}\), whose \((i,j)\)th entry is,
\[ H_{ij} = \frac{ \sum_{t=2}^{T-1} \{ \hat{e}_{ t,i}\hat{e}_{ t-1,j} + \left( \hat{\sigma}_{\eta}^2 +\hat{\sigma}_{\epsilon}^2 \right) \hat{A}_{ij} - \hat{\sigma}_\eta^2 A_{0,ij} \} }{\sqrt{T-2} \; \hat{\sigma}_{ij}}, \quad i,j \in [p]. \] Lyu et al. (2020) proves that, under mild assumptions,
\[ \frac{ \sum_{t=2}^{T-1} \{\hat{e}_{ t,i}\hat{e}_{ t-1,j} + \left( \hat{\sigma}_{\eta}^2 +\hat{\sigma}_{\epsilon}^2 \right) \hat{A}_{ij} - \hat{\sigma}_\eta^2 A_{*,ij} \}}{\sqrt{T-2} \; \hat{\sigma}_{ij}}\rightarrow_{d} \mathrm{N}(0, 1) \] uniformly for \(i,j \in [p]\) as \(p, T \to \infty\).
2.b Global testing
The key insight of global testing is that the squared maximum entry of a zero mean normal vector converges to a Gumbel distribution. Specifically, the global test statistic is
\[ G_{\mathcal{S}} = \max_{(i,j) \in \mathcal{S}} H_{ij}^2. \] Lyu et al. (2020) justifies that the asymptotic null distribution of \(G_{\mathcal{S}}\) is Gumbel, \[ \lim_{|\mathcal{S}| \rightarrow \infty} \mathbb{P} \Big( G_\mathcal{S} -2 \log |\mathcal{S}| + \log \log |\mathcal{S}| \le x \Big) = \exp \left\{- \exp (-x/2) / \sqrt{\pi} \right\}. \] It leads to an asymptotic \(\alpha\)-level test, \[\begin{eqnarray*} \Psi_\alpha = \mathbb{1} \big[ G_\mathcal{S} > 2 \log |\mathcal{S}| - \log \log |\mathcal{S}| - \log \pi -2 \log\{-\log(1-\alpha)\} \big]. \end{eqnarray*}\]
The global null is rejected if \(\Psi_\alpha=1\).
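A schematic implementation of this global test, given the Gaussian test statistic matrix \(\mathbf{H}\) of Section 2.a and taking \(\mathcal{S} = [p] \times [p]\), is shown below (the function name and interface are illustrative, not the package API):
gumbel_global_test <- function(H, alpha = 0.05) {
  cardS <- length(H)                            # |S| for the full index set
  G_S   <- max(H^2)                             # global test statistic
  x     <- G_S - 2*log(cardS) + log(log(cardS)) # centered statistic
  pval  <- 1 - exp(-exp(-x/2)/sqrt(pi))         # p-value from the Gumbel limit
  crit  <- 2*log(cardS) - log(log(cardS)) - log(pi) - 2*log(-log(1 - alpha))
  list(statistic = G_S, pvalue = pval, reject = (G_S > crit))
}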
2.c Simultaneous testing
Let \(\mathcal{H}_0 = \{(i,j) : A_{*,ij}=A_{0,ij}, (i,j) \in \mathcal{S} \}\) denote the set of true null hypotheses, and \(\mathcal{H}_1 = \{ (i,j) : (i,j)\in \mathcal{S} , (i,j) \notin \mathcal{H}_0\}\) denote the set of true alternatives. The test statistic \(H_{ij}\) follows a standard normal distribution when \(H_{0;ij}\) holds, and as such, we reject \(H_{0;ij}\) if \(|H_{ij}| > t\) for some thresholding value \(t > 0\). Let \(R_{\mathcal{S}}(t) = \sum_{(i,j) \in \mathcal{S}} \mathbb{1} \{ |H_{ij}|> t\}\) denote the number of rejections at \(t\). Then the false discovery proportion (FDP) and the false discovery rate (FDR) in the simultaneous testing problem are, \[\begin{eqnarray*} \textrm{FDP}_{\mathcal{S}}(t)=\frac{\sum_{(i,j) \in \mathcal{H}_0} \mathbb{1} \{ |H_{ij}|> t\}}{R_{\mathcal{S}}(t)\vee 1}, \;\; \textrm{ and } \;\; \textrm{FDR}_{\mathcal{S}}(t) = \mathbb{E} \left\{ \textrm{FDP}_{\mathcal{S}}(t) \right\}. \end{eqnarray*}\]
An ideal choice of the threshold \(t\) is to reject as many true positives as possible, while controlling the false discovery at the pre-specified level \(\beta\), that is \(\inf \{ t > 0 : \text{FDP}_{\mathcal{S}} (t) \le \beta \}\). However, \(\mathcal{H}_0\) in \(\text{FDP}_{\mathcal{S}} (t)\) is unknown. Observing that \(\mathbb{P} ( |H_{ij}|> t ) \approx 2\{ 1- \Phi (t) \}\), where \(\Phi (\cdot)\) is the cumulative distribution function of a standard normal distribution, the false rejections \(\sum_{(i,j) \in\mathcal{H}_0} \mathbb{1} \{ |H_{ij}|> t\}\) in \(\text{FDP}_{\mathcal{S}} (t)\) can be approximated by \(\{ 2- 2 \Phi(t) \} |\mathcal{S}|\). Moreover, the search of \(t\) is restricted to the range \(\left(0, \sqrt{2\log |\mathcal{S}|} \right]\), since \(\mathbb{P}\left( \hat{t} \text{ exists in } \left(0, \sqrt{2\log |\mathcal{S}|}\right] \right) \to 1\) as shown in the theoretical justification of Lyu et al. (2020). The simultaneous testing procedure is justified that consistently control FDR, \[ \lim_{|\mathcal{S}| \to \infty} \frac{\text{FDR}_{\mathcal{S}} (\, \hat{t} \; )}{\beta |\mathcal{H}_0|/|\mathcal{S}|} = 1, \quad \textrm{ and } \quad \frac{\text{FDP}_{\mathcal{S}} (\, \hat{t} \; )}{\beta | \mathcal{H}_0|/|\mathcal{S}|}\rightarrow_{p} 1 \;\; \textrm{ as } \; |\mathcal{S}| \to \infty. \]
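The threshold search can be sketched as follows (illustrative names again; the grid search over \(t\) is an implementation convenience, and falling back to \(\sqrt{2\log|\mathcal{S}|}\) when no grid point controls the estimated FDP is an assumption of this sketch):
fdr_threshold <- function(H, beta = 0.05, grid_len = 1000) {
  absH  <- abs(H)
  cardS <- length(absH)
  t_max <- sqrt(2*log(cardS))
  for (t in seq(0, t_max, length.out = grid_len)[-1]) {
    R_t <- sum(absH > t)                               # rejections at threshold t
    if (2*(1 - pnorm(t))*cardS / max(R_t, 1) <= beta)  # estimated FDP <= beta
      return(list(t_hat = t, selected = absH > t))
  }
  list(t_hat = t_max, selected = absH > t_max)
}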
The purpose of this section is to show users the basic usage of this package. We will briefly go through the main functions, see what they can do, and have a look at their outputs. A detailed example of the complete estimation and inference procedure is presented to give users a general sense of the package.
We first generate observations from the model.
library(hdiVAR)
set.seed(123)
p=3; Ti=400 # dimension and time
A=diag(1,p) # transition matrix
sig_eta=sig_epsilon=0.2 # error std
Y=array(0,dim=c(p,Ti)) #observation t=1, ...., Ti
X=array(0,dim=c(p,Ti)) #latent t=1, ...., T
Ti_burnin=400 # time for burn-in to stationarity
for (t in 1:(Ti+Ti_burnin)) {
  if (t==1){
    x1=rnorm(p)
  } else if (t<=Ti_burnin) { # burn in
    x1=A%*%x1+rnorm(p,mean=0,sd=sig_eta)
  } else if (t==(Ti_burnin+1)){ # time series used for learning
    X[,t-Ti_burnin]=x1
    Y[,t-Ti_burnin]=X[,t-Ti_burnin]+rnorm(p,mean=0,sd=sig_epsilon)
  } else {
    X[,t-Ti_burnin]=A%*%X[,t-1-Ti_burnin]+rnorm(p,mean=0,sd=sig_eta)
    Y[,t-Ti_burnin]=X[,t-Ti_burnin]+rnorm(p,mean=0,sd=sig_epsilon)
  }
}
The first example is the sparse EM algorithm.
# cross-validation grid of tolerance parameter \tau_k in Dantzig selector.
tol_seq=c(0.0001,0.0003,0.0005)
# cross-validation grid of hard thresholding levels in transition matrix estimate.
# Set as zero to avoid thresholding. The output is \hat{A}_k.
ht_seq=0
A_init=diag(0.1,p) # initial estimate of A
# initial estimates of error variances
sig2_eta_init=sig2_epsilon_init=0.1
# the first half time points are training data
Ti_train=Ti*0.5
# The latter 3/10 time points are test data (drop out train (1/2) and gap (1/5) sets).
Ti_gap=Ti*0.2
# sparse EM algorithm
sEM_fit=sEM(Y,A_init,sig2_eta_init,sig2_epsilon_init,Ti_train,Ti_gap,tol_seq,ht_seq,is_echo = TRUE)
## [1] "CV-tuned (lamda,ht) is in (1,1)/(3,1) of the parameter grid."
## sparse EM is terminated due to vanishing updates
# estimate of A
sEM_fit$A_est
## [,1] [,2] [,3]
## [1,] 0.96431768 -0.02789077 0.01344706
## [2,] 0.01932975 0.99301878 0.00309380
# estimate of error variances
c(sEM_fit$sig2_epsilon_hat,sEM_fit$sig2_eta_hat)
## [1] 0.06451885 0.06700449
The second example is statistical inference.
# use sparse EM estimates to construct test. Alternative consistent estimators can also be adopted if any.
# test the entire matrix.
# FDR control levels for simultaneous testing
FDR_levels=c(0.05,0.1)
# if null hypotheses are true (null hypothesis is true A):
# p-value should > 0.05, and simultaneous testing selects no entries.
true_null=hdVARtest(Y,sEM_fit$A_est,sEM_fit$sig2_eta_hat,sEM_fit$sig2_epsilon_hat,
global_H0=A,global_idx=NULL,simul_H0=A,
simul_idx=NULL,FDR_levels=FDR_levels)
# global pvalue:
true_null$pvalue
## [1] 0.4193607
# selection at FDR=0.05 control level
true_null$selected[,,FDR_levels==0.05]
## [,1] [,2] [,3]
## [1,] 0 0 0
# if null hypotheses are false (null hypothesis is zero matrix):
# p-value should < 0.05, and simultaneous testing selects diagonal entries.
false_null=hdVARtest(Y,sEM_fit$A_est,sEM_fit$sig2_eta_hat,sEM_fit$sig2_epsilon_hat,
global_H0=matrix(0,p,p),global_idx=NULL,simul_H0=matrix(0,p,p),
simul_idx=NULL,FDR_levels=c(0.05,0.1))
false_null$pvalue
## [1] 8.75966e-14
false_null$selected[,,FDR_levels==0.05]
Lyu, Xiang, Jian Kang, and Lexin Li. Statistical Inference for High-Dimensional Vector Autoregression with Measurement Error. arXiv preprint arXiv:2009.08011 (2020). | CommonCrawl |
March 2013, 33(3): 1033-1047. doi: 10.3934/dcds.2013.33.1033
The angular momentum of a relative equilibrium
Alain Chenciner 1,
ASD, IMCCE (UMR 8028), Observatoire de Paris, 77 avenue Denfert-Rochereau, 75014 Paris
Received April 2011 Revised February 2012 Published October 2012
There are two main reasons why relative equilibria of $N$ point masses under the influence of Newton attraction are mathematically more interesting to study when space dimension is at least 4:
1) in a higher dimensional space, a relative equilibrium is determined not only by the initial configuration but also by the choice of a Hermitian structure on the space where the motion takes place (see [3]); in particular, its angular momentum depends on this choice;
2) relative equilibria are not necessarily periodic: if the configuration is balanced but not central (see [3,2,7]), the motion is in general quasi-periodic.
In this exploratory paper we address the following question, which touches both aspects: what are the possible frequencies of the angular momentum of a given central (or balanced) configuration and at what values of these frequencies bifurcations from periodic to quasi-periodic relative equilibria do occur? We give a full answer for relative equilibrium motions in $R^4$ and conjecture that an analogous situation holds true for higher dimensions. A refinement of Horn's problem given in [12] plays an important role.
Keywords: N-body problem, Horn's problem.
Mathematics Subject Classification: Primary: 70F10, 15A1.
Citation: Alain Chenciner. The angular momentum of a relative equilibrium. Discrete & Continuous Dynamical Systems - A, 2013, 33 (3) : 1033-1047. doi: 10.3934/dcds.2013.33.1033
A. Albouy, Integral manifolds of the $N$-body problem, Inventiones Mathematicæ, 114 (1993), 463. doi: 10.1007/BF01232677.
A. Albouy, "Mutual Distances in Celestial Mechanics," Lectures at Nankai Institute, 2004.
A. Albouy and A. Chenciner, Le problème des $n$ corps et les distances mutuelles, Inventiones Mathematicæ, 131 (1998), 151. doi: 10.1007/s002220050200.
V. I. Arnold, "Mathematical Methods of Classical Mechanics," Graduate Texts in Mathematics, 1989.
R. Bhatia, Linear algebra to quantum cohomology: The story of Alfred Horn's inequalities, The American Mathematical Monthly, 108 (2001), 289. doi: 10.2307/2695237.
P. Birtea, I. Casu, T. Ratiu and M. Turhan, Stability of equilibria for the so$(4)$ free rigid body, preprint.
A. Chenciner, The Lagrange reduction of the $N$-body problem: a survey, preprint.
A. Chenciner, Symmetric 4-body balanced configurations and their relative equilibrium motions, in preparation.
A. Chenciner and H. Jiménez-Pérez, Angular momentum and Horn's problem, preprint.
W. Fulton, Eigenvalues of sums of Hermitian matrices, Séminaire Bourbaki, 1997/98 (1998).
W. Fulton, Eigenvalues, invariant factors, highest weights, and Schubert calculus, Bull. Amer. Math. Soc. (N. S.), 37 (2000), 209.
S. Fomin, W. Fulton, C. K. Li and Y. T. Poon, Eigenvalues, singular values, and Littlewood-Richardson coefficients, Amer. J. Math., 127 (2005), 101. doi: 10.1353/ajm.2005.0005.
A. Knutson, The symplectic and algebraic geometry of Horn's problem, Linear Algebra and its Applications, 319 (2000), 61.
A. Knutson and T. Tao, Honeycombs and sums of Hermitian matrices, Notices of the AMS, 48 (2001).
H. B. Lawson Junior and M. L. Michelson, "Spin Geometry," Princeton University Press, 1989.
Abstract: A22.00007 : Consequences of ionic and covalent bonding in Ge-Sb-Te phase change materials
Saikat Mukhopadhyay
(Materials Science and Technology Division, Oak Ridge National Laboratory)
Jifeng Sun
(Department of Physics and Astronomy, University of Missouri)
Alaska Subedi
(Max Planck Institute for the Structure and Dynamics of Matter)
Theo Siegrist
(Department of Chemical and Biomedical Engineering, FAMU-FSU College of Engineering, Tallahassee)
David Singh
Structural transformation of Ge$_{\mathrm{2}}$Sb$_{\mathrm{2}}$Te$_{\mathrm{5}}$ has attracted a great deal of research as it involves two states (crystalline and amorphous) that are stable at ambient temperature but with remarkably different physical properties, in particular, very different optical constants. The differences in physical properties in these states have been explained in terms of resonant bonding that has been generalized to the description of covalent systems with high symmetry structures such as benzene and graphite. However, given the local lattice distortions noted from both experimental and theoretical investigations, it is clear that the meaning of ``resonant bonding'' in GST is very different from that in graphite or benzene and the precise nature of bonding in this phase has not been fully established. In this talk, based on our first-principles calculations, we show that there is a strong competition between ionic and covalent bonding in the cubic phase, and establish a link between the origins of phase change memory properties and giant responses of piezoelectric materials. | CommonCrawl |
https://doi.org/10.1364/OE.434787
Compensation of EUV lithography mask blank defect based on an advanced genetic algorithm
Ruixuan Wu,1,2 Lisong Dong,1,2,3,5 Xu Ma,4 and Yayi Wei1,2,3,6
1Institute of Microelectronics, Chinese Academy of Sciences, Beijing, 100029, China
2University of Chinese Academy of Sciences, Beijing, 100049, China
3Guangdong Greater Bay Area Applied Research Institute of Integrated Circuit and Systems, Guangzhou 510700, China
4Key Laboratory of Photoelectronic Imaging Technology and System of Ministry of Education of China, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
[email protected]
[email protected]
Ruixuan Wu, Lisong Dong, Xu Ma, and Yayi Wei, "Compensation of EUV lithography mask blank defect based on an advanced genetic algorithm," Opt. Express 29, 28872-28885 (2021)
Original Manuscript: June 23, 2021
Revised Manuscript: August 9, 2021
Manuscript Accepted: August 9, 2021
Mask blank defects are one of the most important factors that degrade the image quality of an extreme ultraviolet (EUV) lithography system and further lead to a yield loss. In order to compensate the amplitude and phase distortions caused by EUV mask blank defects, this paper proposes an advanced algorithm, based on a genetic algorithm, to optimize the mask absorber pattern. First, a successive approximation correction method is used to roughly compensate the effect of the mask blank defect. Then, an advanced genetic algorithm is proposed to obtain higher efficiency and compensation accuracy; it uses an adaptive coding strategy and a fitness function that considers the normalized image log slope of the lithography image. For illustration, the proposed method is verified on rectangular contact patterns and a complex pattern with different defects. The aerial images of the optimized masks are evaluated by a commercial lithography simulator. The results show that the proposed method can mitigate the impact of mask defects and improve the fidelity of the lithography print image. The simulation results also demonstrate that higher convergence efficiency and mask manufacturability can be guaranteed by the proposed method.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Extreme ultraviolet (EUV) lithography is considered the most promising candidate for semiconductor manufacturing at the 7nm node and beyond [1]. Unlike the masks used for deep ultraviolet lithography at the 193nm wavelength, EUV lithography masks are reflective. The introduction of the reflective mask has brought a series of challenges that affect the overall lithography workflow. At the technology nodes served by immersion lithography, substrate fabrication and mask manufacturing are sufficiently mature that mask defects can be limited to acceptable levels in both density and size [2]. In EUV lithography, by contrast, excessively high defect density is a major obstacle to high volume manufacturing (HVM). In order to achieve high reflectivity, a stack comprised of about 40 bilayers of Mo and Si is used as the reflective mirror on EUV masks [3]. As shown in Fig. 1, defects embedded in the multilayer of an EUV mask can cause deformation of the multilayer and decrease the image quality of the exposure [4], thus resulting in a yield loss.
Fig. 1. Illustration of the (a) defect-free mask, (b) defect-free aerial image, (c) defect-free resist contour, (d) defective mask, (e) defective aerial image, and (f) defective resist contour.
In the past, extensive research was conducted to develop novel methods or workflows, such as pattern shifting or absorber pattern optimization, to reduce mask defect impacts as much as possible. Strategies including design-blank matching, pattern rotation and intentional pattern deformation were adopted and validated by simulations or experiments. Algorithms such as gradient descent, simulated annealing and level-set methods were used to improve the accuracy and efficiency of optimization [5,6]. Pattern-shifting methods impose strict requirements on the defect sizes, and the relative position between defects and the mask pattern also determines whether the defects can be mitigated successfully. In order to break these restrictions, Chae et al. evaded more defects by adding a 2nd-order deformation to the local patterns [7]. However, it is hard to handle the cooperation of multiple patterns, and the non-Manhattan patterns produced by the 2nd-order deformation are not friendly to manufacturing. Zhang et al. proposed a covariance-matrix-adaptation evolution strategy to compensate for the mask defects by inserting rectangle patterns on a mask with repaired results [8]. Zhang's method modifies the mask pattern by applying additional patterns directly on the target pattern, which does not consider mask manufacturability. Moreover, excessive defects cannot be compensated by merely modifying the mask pattern. In addition, using the pattern-shifting method indiscriminately for all kinds of defects would reduce manufacturability [9]. At the same time, using a pixelated approach to modify the absorber patterns may result in mask patterns that are not friendly to manufacturing [10]. Thus, it is necessary to explore new efficient and manufacturing-friendly methods to further improve the compensation capability for EUV mask defects.
In order to overcome the difficulties encountered in the previous works, this paper develops a mask defect compensation method based on an advanced genetic algorithm with higher efficiency and accuracy. The problem is formulated as follows:
Problem: Given an initial mask pattern with a defect, set up a method to accurately compensate the impact of defect. The genetic algorithm is required to achieve the maximum fitness value within a limited loop number, and the output mask should have promising manufacturability.
In our previous conference paper, we proposed an EUV mask defect compensation method based on genetic algorithm [11]. Firstly, the mask pattern was roughly corrected according to the edge placement error (EPE), which was referred to as the successive approximation correction method. Then, the approximate correction result was used to generate the initial population of the following modified genetic algorithm to greatly reduce the iteration number. In particular, an adaptive coding strategy was introduced, which adjusted the segment lengths and segment number during the optimization process to enrich the diversity of individuals. In addition, the normalized image log slope (NILS) was considered in the fitness function to improve the contrast of the lithography aerial image. The optimized mask pattern obtained by this method is edge-based, which is more friendly to manufacturing than the pixelated mask pattern.
In this paper, we make further exploration based on our former work in [11]. The previous work mainly focused on the optimization of simple lithography mask layouts, including the one-dimensional line-space pattern and square contact pattern. In addition, it only comes with a few comparisons with other algorithms. In semiconductor manufacturing, the rectangular contact patterns are more commonly used in Contact layers, and some complex two-dimensional mask layouts are extensively used in Metal layers. In order to prove the robustness of the proposed method, this paper studies the optimization of rectangular contact patterns and complex mask pattern to compensate the mask defects. Besides, the influence of different positions and sizes of mask defects on the compensation performance is discussed. In order to demonstrate the superiority of the proposed algorithm, comparisons with different kinds of genetic algorithms are conducted. The results also show that the proposed successive approximation correction and adaptive coding methods can improve the performance of convergence speed and compensation accuracy.
The remaining content of this paper is organized as follows: fundamentals of mask modification and the genetic algorithm are introduced in Section 2. The whole workflow and the advances of the proposed method are described in Section 3. Lithography simulations based on multiple masks with different defects are provided in Section 4, where the compensation capacity of the proposed method is demonstrated by results with different defect parameters, and simulations based on complex layout patterns are also provided. Section 5 gives the conclusion of this work.
2. Mask modification based on genetic algorithm
There are two major methods to compensate the defects by mask optimization: one is to modify the mask pattern directly, and the other is to cover the defect with absorber. Modifying the mask pattern directly is a general and widely used method to compensate the lithographic influence caused by defects. Commonly, buried mask blank defects are phase defects, which means that a compensation achieved at best focus may not hold through focus. This limits the depth of focus (DOF) of the repaired mask. The goal of covering the defect with absorber is to transform phase defects into amplitude defects. However, this method increases the difficulty of compensating defects in focus. Besides, a small-sized isolated absorber is required, which would seriously affect the manufacturability of the mask pattern. At the same time, this method also needs the location, size and shape of defects to be known in advance. According to [12], we believe that the former method could guarantee the manufacturability of the final mask pattern.
The genetic algorithm is a computational model to simulate the natural selection and genetic mechanism of Darwin's biological evolution theory. In the genetic algorithm, a set of individuals compose a population, which represent the possible solutions to the appointed problems. A genetic algorithm begins with the initialization step to create the population with random or specific genes. These genes can be mapped to the individuals in the population, and the individuals represent the mask patterns in this paper. Then, a certain fitness function is utilized to calculate the fitness value of each individual to achieve the evaluation results. Based on these results, individuals with the best fitness will be selected as survivals in the population. Following that, these remaining individuals multiply through cross-over and mutation operations to obtain a new population in the next generation. The process of "evaluation-selection-crossover-mutation" will be looped until the stop criteria is satisfied, which could be the maximum loop number or a target fitness value. In the applications of mask modification, genes are used to encode the movement of segments along the mask boundaries. The individuals thus formed can correspond to a certain mask pattern and its lithography image. The workflow of the proposed genetic algorithm is described in Fig. 2
Fig. 2. Workflow of the proposed genetic algorithm in mask modification.
For genetic algorithms, it is vital to select a proper initial population. This is especially true in this paper, where defects have a serious impact on the lithography image, and an improper initial population would result in a great loss in convergence rate. Therefore, a rough correction is used to generate the initial population for the genetic algorithm. On the basis of the rough correction, an adaptive encoding method is applied to obtain more potential individuals. Genetic operators including selection, crossover and mutation are then used to generate the new population, as sketched below.
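A bare-bones sketch of this loop is given below (schematic only: individuals are represented as numeric vectors of segment displacements, a larger fitness value is taken to be better as in the problem statement above, and all names are illustrative rather than the implementation used in this work):
run_ga <- function(pop, fitness_fn, n_gen = 50, p_mut = 0.05) {
  for (g in 1:n_gen) {
    fit <- sapply(pop, fitness_fn)                                     # evaluation
    pop <- pop[order(fit, decreasing = TRUE)[1:(length(pop) %/% 2)]]   # selection
    children <- lapply(seq_along(pop), function(i) {                   # crossover and mutation
      parents <- pop[sample(length(pop), 2, replace = TRUE)]
      cut     <- sample(length(parents[[1]]) - 1, 1)                   # one-point crossover
      child   <- c(parents[[1]][1:cut], parents[[2]][-(1:cut)])
      flip    <- runif(length(child)) < p_mut                          # mutate a few segment moves
      child[flip] <- child[flip] + sample(c(-1, 1), sum(flip), replace = TRUE)
      child
    })
    pop <- c(pop, children)
  }
  pop[[which.max(sapply(pop, fitness_fn))]]                            # best individual found
}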
3. Advanced genetic algorithm
Different from the traditional methods, three major improvements are adopted. Firstly, the proposed genetic algorithm uses a successive approximation correction to increase the convergence speed. Secondly, the adoption of adaptive coding provides a richer set of potential individuals. Finally, the proposed genetic algorithm uses an innovative fitness function in which the NILS is involved to obtain a better lithography image. These three points are described in this section.
3.1 Successive approximation correction
For the genetic algorithm, selecting a good initial population greatly accelerates the optimization. At the beginning of the algorithm, the defective masks would produce poor aerial images, which probably lead to empty print images. Therefore, a rough correction is needed to rapidly generate the initial population for the genetic algorithm.
A successive approximation correction is proposed to implement the rough correction. As shown in Fig. 2, the mask boundaries are first divided into segments with a fixed length. In each iteration, the segments are moved according to their local EPEs. Typically, the movement step equals half of the local EPE, or a given maximum movement step when the local EPE cannot be measured. After several iterations, this correction yields a mask pattern with a smaller overall EPE. Based on this correction result, the initial population of the genetic algorithm is generated.
After this rough correction, the result of print image contour usually seems acceptable. However, because the movement of each segment only considers its local EPE, the results need to be further optimized. This successive approximation correction process results in a good guess of the initial population, and thus improves the convergence rate of the genetic algorithm.
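One pass of this correction can be sketched as follows (an illustrative snippet; the sign convention, a positive local EPE meaning the printed edge falls short of the target so that a positive step pushes the segment outward, and the fallback to the maximum step for unmeasurable EPEs are assumptions of the sketch):
move_segments <- function(offsets, local_epe, max_step = 2) {
  step <- ifelse(is.na(local_epe), max_step, local_epe / 2)  # half the local EPE, or the cap
  offsets + step                                             # updated segment displacements
}
Iterating this update, re-simulating the image and re-measuring the local EPEs between passes, yields the rough mask from which the initial population is built.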
3.2 Adaptive encoding method
Fragmentation is the operation that breaks the pattern edges into smaller segments, and the segments can be moved forward or backward during the following optimization. The number of additional vertices on the mask pattern produced by the segment movement will greatly affect the algorithm speed.
Although fragmentation is performed in the successive approximation correction, the segments represented by this gene are fixed, which limits the potential of the individuals. Therefore, an adaptive encoding method is proposed to increase the degrees of freedom of the fragmentation. In the advanced genetic algorithm with adaptive coding, the movement step and the length of each segment are optimized according to its feature type (edge, convex corner or concave corner). An example of a random individual is shown in Fig. 3. In Fig. 3(a), the length of each segment is the same. After setting different values for the movement and length of the segments, the individual is modified as shown in Fig. 3(b).
Fig. 3. Illustration of (a) an encoding example and (b) individual after setting attributes.
However, tuning the lengths of the segments does not always give better results. Long segments work against fine compensation of mask defects, while short segments work against manufacturability. In the genetic algorithm, the length of each segment is therefore limited within the range $[{{l_{\min }}\textrm{,}{l_{\max }}} ]$ by splitting or merging segments, as shown in Fig. 4. According to Ref. [13], the minimum segment length is set to 4 nm on the wafer scale to maintain basic manufacturability. In the next section, the critical dimension (CD) on the wafer is 22 nm, and thus ${l_{\min }}$ and ${l_{\max }}$ are set to 4 nm and 12 nm, respectively.
Fig. 4. The (a) mergence and (b) split of the edge segments.
In this approach, segments of equal length can be adopted at the beginning. During optimization, the algorithm gradually selects the individuals with the best segmentation according to the fitness function. This avoids the influence of a fixed segmentation on the compensation results.
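The split-and-merge rule that keeps segment lengths within $[l_{\min}, l_{\max}]$ can be sketched as follows (Python); the simple list-based representation and the handling of boundary cases are assumptions for illustration.

```python
L_MIN, L_MAX = 4.0, 12.0   # nm on wafer scale, as used in this paper

def enforce_length_limits(lengths):
    """Split segments longer than L_MAX and merge neighbours shorter than L_MIN
    so that segment lengths stay within [L_MIN, L_MAX] (boundary cases simplified)."""
    out = []
    for length in lengths:
        if length > L_MAX:
            # Split an over-long segment into equal parts within the limit.
            parts = int(-(-length // L_MAX))      # ceiling division
            out.extend([length / parts] * parts)
        else:
            out.append(length)
    merged = []
    for length in out:
        # Merge an under-short segment into its predecessor when possible.
        if merged and length < L_MIN:
            merged[-1] += length
        else:
            merged.append(length)
    return merged
```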
3.3 Fitness function
The design of an appropriate fitness function is a crucial step for the genetic algorithm. After decoding the individuals into mask patterns, lithography simulations are used to verify the capability of defect compensation. The fitness function is designed considering the following aspects: (1) the compensation of the print image; (2) the compensation of the aerial image; (3) the compensation of local edges; (4) the weighting of the above three parts. These aspects evaluate the compensation from the global scale down to the local scale. We select the difference in aerial image, the difference in print image, and the NILS at the edge of the pattern closest to the defect location to describe the first three aspects, and weighting parameters to adjust the weight of each part. The fitness function is formulated as follows:
(1)$$ f_{fitness} = \frac{a_1\sqrt{\left(\left|\hat{I}(\hat{x},\hat{y};z) - \hat{I}(\hat{x},\hat{y};z)_{defect\textrm{-}free}\right|\right)} + a_2\left(\left|\hat{I}(\hat{x},\hat{y};z)_{\textrm{print}} - \hat{I}(\hat{x},\hat{y};z)_{\textrm{print of }defect\textrm{-}free}\right|\right)}{a_3\left(1 + NILS_{at\ edge}\right)} $$
where $\hat{I}(\hat{x},\hat{y};z)$ is the intensity of the aerial image according to the Abbe theory of imaging; $\hat{I}{(\hat{x},\hat{y};z)_{\textrm{print}}}$ is the print image, a 0–1 binary image obtained from the aerial image and a given threshold; $NIL{S_{at\textrm{ }edge}}$ is the NILS at the edge of the pattern closest to the defect location; ${a_1}$, ${a_2}$ and ${a_3}$ are weighting parameters, set to 0.5, 1 and 1, respectively, in this paper. Overall, the fitness function is mainly driven by the pattern error. The optimization process terminates when one of the following criteria is satisfied: (1) the loop count reaches the maximum number of generations; (2) the image CD difference between the repaired mask and the defect-free mask falls within ±5% of the target CD; or (3) the best fitness in the population remains constant for many generations. When the algorithm terminates under one of these criteria, using this fitness function in the selection step of the genetic algorithm tends to yield steeper image boundaries.
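A minimal sketch of how one individual could be scored with Eq. (1) is given below (Python); the pixel-wise summation of the absolute differences and the fixed-threshold binarization are assumptions made for illustration, since Eq. (1) does not spell out these details.

```python
import numpy as np

def fitness(aerial, aerial_ref, threshold, nils_at_edge, a1=0.5, a2=1.0, a3=1.0):
    """Score one individual following Eq. (1) (sketch; pixel sums assumed).

    `aerial` / `aerial_ref` are the aerial-image intensities of the candidate
    and of the defect-free mask on the same grid; `threshold` binarises them
    into print images; `nils_at_edge` is the NILS at the edge closest to the
    defect. Lower fitness means a better repaired mask."""
    aerial_term = np.sqrt(np.sum(np.abs(aerial - aerial_ref)))
    print_img = (aerial >= threshold).astype(int)
    print_ref = (aerial_ref >= threshold).astype(int)
    print_term = np.sum(np.abs(print_img - print_ref))
    return (a1 * aerial_term + a2 * print_term) / (a3 * (1.0 + nils_at_edge))
```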
4. Simulation and discussion
4.1 Simulation settings
For a given EUV substrate, defects on the multilayer are first inspected and then characterized by their height and full width at half maximum (FWHM) at the top and bottom of the multilayer, as shown in Fig. 5. As the printed images are defined by the absorber above the stack, the absorber pattern is generally modified to compensate the impact of mask defects. Large defects that are considered irreparable are usually avoided by pattern shifting [4]. The absorber pattern shapes are then adjusted to cover the remaining defects and compensate the imaging deformation.
Fig. 5. Defect with parameters defined. (a) Top view. (b) Side view.
In this paper, we calculate the diffraction field of the mask with the commercial simulator Sentaurus Lithography (S-Litho) using the waveguide method. The lithography simulation parameter settings are shown in Table 1, with some parameters kept at the default values of the simulator. All dimensions in this work are defined and shown on the wafer scale for uniformity. The actual sizes of defects and absorbers on the mask scale can be calculated from the demagnification factor of the lithography system. Simulations for different types of mask patterns are studied in this work (Table 2). The influence of different defect sizes and positions is also studied.
Table 1. Parameter settings for lithography simulation
Table 2. Mask settings (wafer scale /nm)
4.2 Simulations with different defect sizes
In order to simplify the optimization flow and reduce the complexity of the simulation, a contact pattern is mainly used in the simulations. Defects are characterized by Gaussian shapes and preset with different sizes and positions on the mask. The center of the mask is defined as the coordinate origin, and the x and y coordinates represent the position of the defect. In order to reduce the influence of the exposure intensity distribution on neighboring patterns, the ratio of CD to pitch in the x direction is 1:3 instead of 1:2. All defects are located at the center or on the left side of the contact hole. The illumination intensity threshold is set so that the CD in the middle of the contact for the defect-free mask equals the target CD, which serves as the standard.
The defect located on the edge of the contact is selected to analyze the influence of different defect sizes at the same position, and to evaluate the maximum compensation capability of the proposed algorithm. The defect position is chosen on the edge of the contact because an edge defect has a more serious influence on the aerial image than a defect at the corner. The specific defect parameters and the repaired results are shown in Tables 3, 4, and 5. Five different sizes are selected at the same mask position, and for each mask pattern the corresponding repaired mask is shown. For each pattern, the contour of the print image is shown as the black curve overlapped with the aerial image, and the red circle on the repaired mask marks the defect area. In these simulations, we use the pattern error (PE) to describe the quality of the imaging results. Given a unit pixel (1 nm × 1 nm in this paper) and a threshold, each aerial image generates a corresponding print image. The number of pixels that differ between the print image of a pattern and the print image of the defect-free pattern is recorded as its PE. A lower PE means that the print image is closer to the defect-free case.
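The PE metric just defined can be sketched as follows (Python); the threshold-based binarization is assumed to match the one used to generate the print images.

```python
import numpy as np

def pattern_error(aerial, aerial_ref, threshold):
    """Pattern error (PE): number of 1 nm x 1 nm pixels whose print value
    differs between a mask and the defect-free reference (sketch)."""
    print_img = (aerial >= threshold)
    print_ref = (aerial_ref >= threshold)
    return int(np.count_nonzero(print_img != print_ref))
```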
Table 3. Parameters of defects (wafer scale /nm)
Table 4. Simulation results with different defect sizes
Table 5. Measurement results for different defect sizes
It can be inferred that the print image shrinks as the defect size increases, which poses greater challenges to defect compensation. Affected by the defect, the target mask CD cannot be fully recovered. With a tolerance of 10% in CD, a repaired CD within the range from 19.8 nm to 24.2 nm is considered acceptable. Judging from the repaired masks, defects #1 and #2 can be repaired well; both show an approximately 3% loss in CD. A drop in performance emerges for defects #3 and #4, but the corresponding contour images show that the impact caused by these two defects can still be repaired normally. All four correction cases above meet the requirement of 10% CD loss, and even reach 5% CD loss. The repaired result for defect #5 is slightly over the 10% CD-loss limit (19.8 nm expected). It can be observed from the aerial images that the intensity distribution is not satisfactory at the defect location; thus, defect #5 is considered an irreparable defect. The limit of repairable defect size therefore lies between defect #4 and defect #5. Defects beyond this limit should be regarded as irreparable, since the intensity in the defect area always remains below the threshold.
4.3 Simulations with different defect positions
Next, the influence of the defect position is studied. According to the simulation results in the previous section, defect #2 is selected as a typical repairable defect. Three typical defect locations are chosen: the center, edge and corner of the contact, representing several possible situations. The edge and corner positions are chosen on the side opposite to the shadow effect, so the lithography image is more strongly influenced by the defect when the shadow effect is compensated by shifting the mask. The simulation results are shown in Table 6 and Table 7.
Table 6. Simulation results for different defect positions
Table 7. Measurement results for different defect positions
Obviously, the defect at the center position has the largest impact on the imaging, and the average intensity of its repaired aerial image is the lowest. However, its print image after correction has a smaller pattern error than the case with a corner defect. Although the maximum intensity of the repaired aerial image is closer to the defect-free case, the corner defect is harder to repair than the edge defect. From the distribution and intensity of the repaired aerial images, the defect on the edge is observed to be the easiest to fix.
4.4 Comparisons between proposed method and traditional genetic algorithm
As an advanced genetic algorithm, the successive approximation correction and the adaptive encoding method are used in this work. To demonstrate the advantages of the proposed algorithm, several experiments are conducted. According to the experiments above, defect #2 is selected as the typical repairable defect and is placed at different locations.
As shown in Fig. 6, three different algorithms are compared in terms of accuracy and convergence rate. In each case, the successive approximation correction (SAC) is applied in the first 20 generations of the proposed method. Compared to the traditional genetic algorithm with fixed-length coding, the proposed adaptive coding contributes little to the convergence rate but provides better compensation accuracy. With the proposed algorithm, the successive approximation correction accelerates the convergence and a better result is achieved. For different defect locations, the convergence curves show the same trend.
Fig. 6. Comparison between different algorithms with different defects at the (a) center (b) edge and (c) corner.
4.5 Optimization for complex mask pattern
To validate the capability of the proposed algorithm, a comparison of different versions of the genetic algorithm is further discussed with a complex mask pattern over a much larger area. The pitch is 240 nm on the wafer scale, corresponding to 960 nm on the mask scale. The defect is located at the center of the mask; its other parameters are shown in Table 8. The repaired aerial image shows that the proposed genetic algorithm can effectively compensate the defect in the complex mask pattern. The repaired mask is depicted in Fig. 7(e). It also shows that the mask features far from the defect are not modified during the optimization, since their print images are not influenced by the defect.
Fig. 7. Defect compensation for complex mask pattern. (a) defect-free aerial image (b) defective aerial image (c) repaired aerial image (d) initial mask (e) repaired mask.
Table 8. Defect setting for complex mask pattern
4.6 Comparison between proposed genetic algorithm and related work
In order to compare the practical performance of the proposed algorithm, we follow the approach of Ref. [8] to build another version of the genetic algorithm for defect compensation. Table 9 shows the comparison between the proposed method and the related method. A lower pattern error is achieved by the proposed genetic algorithm under the same conditions, and the convergence speed of the proposed method is much higher.
Table 9. Comparison between proposed genetic algorithm and related work
5. Conclusion
In this paper, an advanced genetic algorithm is proposed to compensate the influence of EUV mask blank defects on the lithography imaging quality. Firstly, a successive approximation correction is proposed to generate the initial population of the genetic algorithm. Then, an adaptive coding method is developed to enrich the diversity of individuals and guarantee their manufacturability. Finally, the NILS is involved in the fitness function of the genetic algorithm to obtain better repaired image results. Simulations with different defects are carried out. They show that the aerial image and print image of a defective mask can be repaired well if the defect size and position are constrained within a certain range. Comparison with the traditional genetic algorithm demonstrates an improvement in convergence speed and compensation accuracy through the adaptive coding and successive approximation correction. Compared with the related work, the proposed method converges much faster.
However, the current work only considers the modifications of main mask patterns. In addition, the overlapped process window is not considered. Future work will study the insertion methods of sub-resolution assist features to increase the process window of critical mask patterns.
National Natural Science Foundation of China (61804174); National Key Research and Development Program of China (2019YFB2205005); Beijing Municipal Natural Science Foundation (4182021); Youth Innovation Promotion Association of the Chinese Academy of Sciences (2021115).
The authors declare no conflicts of interest.
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
1. S. Hsu, R. Howell, J. Jia, H. Liu, K. Gronlund, S. Hansen, and J. Zimmermann, "EUV Resolution Enhancement Techniques (RETs) for k1 0.4 and below," Proc. SPIE 9422, 94221I (2015). [CrossRef]
2. O. Wood, C. Koay, K. Petrillo, H. Mizuno, S. Raghunathan, J. Arnold, D. Horak, M. Burkhardt, G. McIntyre, Y. Deng, B. Fontaine, U. Okoroanyanwu, A. Tchikoulaeva, T. Wallow, J. Chen, M. Colburn, S. Fan, B. Haran, and Y. Yin, "Integration of EUV lithography in the fabrication of 22-nm node devices," Proc. SPIE 7271, 727104 (2009). [CrossRef]
3. K. Hooker, A. Kazarian, X. Zhou, J. Tuttle, G. Xiao, Y. Zhang, and K. Lucas, "New methodologies for lower-K1 EUV OPC and RET optimization," Proc. SPIE 10143, 101431C (2017). [CrossRef]
4. W. Cho, D. Price, P. Morgan, D. Rost, M. Satake, and V. Tolani, "Classification and printability of EUV mask defects from SEM images," Proc. SPIE 10450, 5 (2017). [CrossRef]
5. Y. Negishi, Y. Fujita, K. Seki, T. Konishi, J. Rankin, S. Nash, E. Gallagher, A. Wagner, P. Thwaite, and A. Elayat, "Using pattern shift to avoid blank defects during EUVL mask fabrication," Proc. SPIE 8701, 870112 (2013). [CrossRef]
6. A. Kagalwalla and P. Gupta, "Comprehensive Defect Avoidance Framework for Mitigating EUV Mask Defects," Proc. SPIE 9048, 90480U (2014). [CrossRef]
7. Y. Chae, R. Jonckheere, and P. Gupta, "Defect avoidance for extreme ultraviolet mask defects using intentional pattern deformation," Proc. SPIE 10809, 52 (2018). [CrossRef]
8. H. Zhang, S. Li, X. Wang, C. Yang, and W. Cheng, "Optimization of defect compensation for extreme ultraviolet lithography mask by covariance-matrix-adaption evolution strategy," J. Micro/Nanolith. MEMS MOEMS 17(04), 1 (2018). [CrossRef]
9. A. Erdmann, P. Evanschitzky, T. Bret, and R. Jonckheere, "Modeling strategies for EUV mask multilayer defect dispositioning and repair," Proc. SPIE 8679, 86790Y (2013). [CrossRef]
10. T. Fuhner and A. Erdmann, "Improved mask and source representations for automatic optimization of lithographic process conditions using a genetic algorithm," Proc. SPIE 5754, Optical Microlithography XVIII (2005).
11. R. Wu, L. Dong, R. Chen, T. Ye, and Y. Wei, "A method for compensating lithographic influence of EUV mask blank defects by an advanced genetic algorithm," Proc. SPIE 11147, 111471U (2019).
12. C. H. Clifford, T. T. Chan, and A. R. Neureuther, "Compensation methods for buried defects in extreme ultraviolet lithography masks," Proc. SPIE 7636, 763623 (2010). [CrossRef]
13. R. Pearman, J. Ungar, N. Shirali, A. Shendre, M. Niewczas, L. Pang, and A. Fujimura, "How curvilinear mask patterning will enhance the EUV process window: a study using rigorous wafer + mask dual simulation," Proc. SPIE 11178, 1117809 (2019). [CrossRef]
Table 1. Parameter settings for lithography simulation

| Module | Submodule | Settings |
| --- | --- | --- |
| Optics | Illumination | Wavelength: 13.5 nm; Annular, (σin, σout) = (0.4, 0.8); CRAO = 6° |
| Optics | Projection | NA = 0.33; Reduction = 4 |
| Mask | Absorber | Cr, thickness = 100 nm; Refractive index: 0.93245 − 0.03888j |
| Mask | Multilayer | 40 pairs of Mo/Si multilayer; Refractive index: Mo: 0.92108 − 0.00644j, Si: 0.99932 − 0.00183j |
| Mask | Substrate | 2 nm Silicon |
Table 2. Mask settings (wafer scale /nm)

| Mask pattern | CD in x direction | CD in y direction | Pitch in x direction | Pitch in y direction |
| --- | --- | --- | --- | --- |
| Contact | 22 | 44 | 66 | 88 |
Table 3. Parameters of defects (wafer scale /nm)

| Defect | Position (x, y) | Settings |
| --- | --- | --- |
| #1 | (−11, 0) | 0.9, 5.5, 11, 6.5 |
| #2 | (−11, 0) | 1.05, 6.75, 12, 8.0 |
| #4 | (−11, 0) | 1.35, 9.25, 14, 11.0 |
| #5 | (−11, 0) | 1.5, 10.5, 15, 12.5 |
Table 5. Measurement results for different defect sizes

| Defect | Pattern Error (PE) before repair | PE after repair | CD before repair (/nm) | CD after repair (/nm) |
| --- | --- | --- | --- | --- |
| Defect-free | 0 | 0 | 22.0 | 22.0 |
| #1 | 48 | 26 | 19.1 | 22.1 |
| #3 | 158 | 42 | 15.5 | 20.2 |
| #5 | 496 | 65 | 8.1 | 17.1 |
Table 7. Measurement results for different defect positions

| Defect position | PE before repair | PE after repair | CD before repair (/nm) | CD after repair (/nm) |
| --- | --- | --- | --- | --- |
| Center | 434 | 42 | - | 20.6 |
| Edge | 80 | 36 | 17.7 | 21.5 |
| Corner | 50 | 46 | - | - |
Table 8. Defect setting for complex mask pattern

| Defect | Position (x, y) | Settings |
| --- | --- | --- |
| #Complex Pattern | (0, 0) | 1.5, 10.5, 15, 12.5 |
NLR, MLP, SVM, and LDA: a comparative analysis on EMG data from people with trans-radial amputation
Alberto Dellacasa Bellingegni ORCID: orcid.org/0000-0002-9569-96801,2,
Emanuele Gruppioni1,2,
Giorgio Colazzo1,
Angelo Davalli2,
Rinaldo Sacchetti2,
Eugenio Guglielmelli1 &
Loredana Zollo1
Currently, the typically adopted surface electromyography (sEMG) control strategies for hand prostheses do not provide the users with a natural control feeling and do not exploit all the potential of commercially available multi-fingered hand prostheses. Pattern recognition and machine learning techniques applied to sEMG can be effective for a natural control based on the contraction of the residual muscles of people with amputation, corresponding to phantom limb movements. As research has reached an advanced level of accuracy, these algorithms have been validated, and embedding them is now necessary for the realization of prosthetic devices. The aim of this work is to provide engineering tools and indications on how to choose the most suitable classifier, and its specific internal settings, for the embedded control of multigrip hand prostheses.
By means of an innovative statistical analysis, we compare four different classifiers: Nonlinear Logistic Regression, Multi-Layer Perceptron, Support Vector Machine and Linear Discriminant Analysis, which was considered as the ground truth. Experimental tests have been performed on sEMG data collected from 30 people with trans-radial amputation, in which the algorithms were evaluated for both performance and computational burden. The statistical analysis was based on the Wilcoxon Signed-Rank test, and statistical significance was considered at p < 0.05.
The comparative analysis among NLR, MLP and SVM shows that, for both classification performance and number of classification parameters, SVM attains the highest values, followed by MLP and then by NLR. However, using as the only constraint on the maximum acceptable complexity of each classifier the memory typically available on a high-performance microcontroller, the comparison points out that for people with trans-radial amputation the algorithm that produces the best compromise is NLR, closely followed by MLP. This result was also confirmed by the comparison with LDA with time-domain features, which showed no significant differences in performance and computational burden between NLR and LDA.
The proposed analysis would provide innovative engineering tools and indications on how to choose the most suitable classifier based on the application and the desired results for prostheses control.
In clinics the state-of-the-art technology for people with trans-radial amputation is commonly a dual-site controlled myoelectric hand prosthesis. The available single degree of freedom is actuated by applying a simple threshold or a proportional amplitude method on surface electromyography (sEMG) signals recorded from antagonistic muscles (e.g., wrist flexor and wrist extensor) that can be easily contracted in a separate way. In the case of multi-fingered hand prosthesis with several degrees of freedom (DoFs), but still having two control signals, the switching between DoFs or predefined grasps is normally made by co-contraction, as in a finite state machine. This serial operation is slow and unnatural; in addition, it requires considerable training and cognitive effort [1].
On the other hand, Targeted Muscle Re-innervation (TMR) [2], via a surgical operation, allows rerouting nerves from the stump of persons with amputation to different anatomical muscles (e.g., chest muscles) in order to obtain independent signals. The risk associated with the surgical re-innervation operation is the main drawback that limits the applicability of this technique to all kinds of amputations [3, 4].
Pattern recognition techniques based on sEMG currently represent the best compromise between invasiveness and prosthesis controllability and, thanks to notable scientific progress, allow increasing the number of controllable DoFs while keeping the number of electrodes low [5]. By recognizing the user's intent, a control strategy resorting to pattern recognition techniques could improve performance by mapping the actuation of the prosthesis onto sEMG signals produced as a result of phantom limb gestures [6]. The system becomes more user-friendly and makes complex tasks easier, including those that may require the sequential actuation of different DoFs.
Myoelectric control systems based on pattern recognition techniques (Fig. 1) rely on supervised machine learning classification algorithms.
Block diagram of a generic pattern recognition system based on sEMG signals
An initial training phase is needed, during which the system learns the way of linking the gestures to specific myoelectric patterns. Subsequently, the trained system is able to find out, from recorded patterns, the function for realizing and executing the desired task. Usually the feature extraction step precedes classification of sEMG signals where the most important components of the recorded myoelectric signal on a chosen time window are identified and selected [7] in order to improve the stability of the features (reducing variance and increasing classification performance). Previous studies suggest that the optimum window length for pattern recognition controls ranges from 150 to 250 ms depending on the skill of the subject [8]. For real-time applications it is conventionally accepted that the actuation delay must be less than 300 ms, therefore it was proposed to use a method for adopting "raw" filtered sEMG signals as input features, which enables an extreme reduction of the classification time and of the response time of the system without significant loss of system performance [9,10,11]. The saved time is used to improve the stability of the classification by means of post processing techniques as voting and/or threshold policies [12, 13].
Linear classifiers, such as Linear Discriminant Analysis (LDA), Logistic Regression (LR) or Support Vector Machine (SVM) with linear kernel, and nonlinear classifiers, such as Non-linear Logistic Regression (NLR), SVM with nonlinear kernels and Multi-Layer Perceptron (MLP), represent the state-of-the-art about pattern recognition classifiers [14, 15]. The main difference between linear and nonlinear classifiers consists in the shape of the decision boundary: straight line, or plane in the first case and curved line, or surface, in the second. Performance, complexity and computational time usually increase together. Hence, the choice of a classification algorithm should not be entirely relied upon performance, but rather on a trade-off between computational burden and performance, especially in embedded systems. This work aims to provide useful insights into the choice of the suitable classifier (and its specific internal settings) for the embedded control of multi-fingered hand prostheses. To this purpose, a comparative analysis among NLR, MLP, SVM with Radial Basis Function (RBF) kernel, and LDA with time domain feature extraction, considered as benchmark classifier, on sEMG data from 30 people with trans-radial amputation is carried out, in terms of performance and computational burden. The use of LDA with time domain feature extraction in on-line control of prosthetic devices has been demonstrated by several studies [16, 17]; this method is now commercially available in the US by COAPT https://www.coaptengineering.com.
This paper is structured as follows: Sect. II describes the protocol for the acquisition of the sEMG datasets, the implemented machine learning algorithms, and the methods adopted for data analysis; Sect. III reports the results of a preliminary analysis on the complexity range of the NLR and MLP models, and then the comparative analysis among the NLR, MLP and SVM classifiers, including a combined index of performance and computational burden for the evaluation of the most suitable classifier to embed on a microcontroller with 256 KB of memory for the realization of a prosthetic device. The section concludes with a comparative analysis between NLR and the ground truth represented by LDA with time-domain feature extraction. Conclusive remarks are finally reported in Sect. V and VI.
sEMG data acquisition protocol
The same acquisition protocol as in [18] was used to collect the sEMG data from the subjects participating in the experiments. Thirty people with trans-radial amputation, aged between 18 and 65, free of known muscular and/or neurological diseases and already experienced in the myoelectric control of prosthetic hands, participated in the experiments. Each subject gave informed consent before performing the experiments, which were approved by the local scientific and ethical committees. Six commercial active sEMG sensors (Ottobock 13E200 = 50, 27 mm × 18 mm × 9.5 mm) were equidistantly placed on an adjustable silicone bracelet (Fig. 2a) and fastened on the subject's stump (Fig. 2b). These sensors operate in the range 0–5 V with a bandwidth of 90–450 Hz and a common-mode rejection ratio higher than 100 dB. The first sensor was located on the flexor carpi radialis muscle, and the sixth sensor on the brachioradialis muscle. These two muscles were identified by manual inspection of the stump; then, the sEMG sensors were equally spaced on the silicone bracelet. The bracelet was located about 5 cm below the subject's elbow, in line with the electrode positioning commonly used to control myoelectric prostheses. The data were collected using purpose-built software on the LabVIEW platform by means of an NI DAQ USB 6002 device, sampling the six sEMG signals at 1 kHz with 12-bit resolution.
Experimental Setup a) sEMG bracelet and NI DAQ USB 6002; b) Subject positioning and acquisition Software
Each subject sat in a comfortable chair in front of a PC monitor (Fig. 2b), where one of five hand gestures was randomly shown. The subjects were instructed to reproduce and hold the displayed gesture with their phantom limb. Once the signals became stable, the sampling session started and continued for 2 s, obtaining 2000 samples for each sensor. The gestures to reproduce were selected among the eight canonical hand postures [7, 19] and were "Rest" (relaxed hand), "Spherical" (hand with all fingers closed), "Tip" (hand with thumb and finger touching to pick up a small object), "Platform" (hand completely open and stretched), and "Point" (hand with all fingers closed, except for the index finger that is pointing). Each acquisition started from the "Rest" position; after two seconds of acquisition, the subjects were asked to return to the Rest posture. Moreover, the subjects were instructed to accomplish the task with the minimum muscular contraction and to focus on the main phantom fingers related to the gesture. The selected gesture was shown as in Fig. 3. Ten repetitions of each gesture were accomplished in a single acquisition session with an inter-stimulus interval of about 5 s. Figure 3 also shows a case of the raw recordings from the six sEMG sensors for all the imagined movements. The plot is related to a single acquisition session from one of the subjects who took part in the experiment.
Graphic display of the selected gestures and of the raw recording for the six different channels at the same time for all the imagined movements of a single acquisition session from one of the subjects who took part to the experiment
NLR, MLP, and SVM classification algorithms
In order to obtain a fast-response real-time classification, no feature extraction was performed on the recorded signals; hence, the sEMG signals are used directly as input for the classification algorithms. The only operation performed on the sEMG signals is scaling, which consists of subtracting the mean value from each signal and dividing the result by the range. Hence, for each time step (i) we obtain a six-element vector x (i) of scaled sEMG signals, which is used as input for the classifiers to compare, i.e., NLR, MLP, LDA, and SVM with RBF kernel. Supervised machine learning techniques are commonly adopted in problems where there is no known functional relationship y = f(x) that binds the inputs x (i) to the corresponding class (y). There are two different approaches to classification: the first one returns a distribution P(y| x); the second one returns a result without any probability of class membership [20].
LR [21], or Perceptron, is a linear and binary supervised classification algorithm that calculates the class membership probability using the following logistic function
$$ P\left(1|x,\theta \right)=\left\{\begin{array}{l}g\left({\theta}^T\cdot x\right)=\frac{1}{1+{e}^{-\left({\theta}^T\cdot x+{\theta}_0\right)}}\hfill \\ {}1-P\left(y=0|x,\theta \right)\kern2.04em ,\hfill \end{array}\right. $$
where θ and θ 0 are the classification parameters vector and the bias term, respectively, and g(∙) is the logistic, or sigmoid, function. In order to achieve a NLR the creation of additional input features (interaction terms) is needed. For this study, additional polynomial features were used, which were obtained as a combination product of the starting input features (e.g., x 1 ; x 2 ; x 1 · x 2 ; x 1 2 ; x 2 2 ; …). The prediction of class labels (h θ ) for LR or NLR algorithm is then achieved by comparing the distribution P(y| x) with a decision threshold (TH) as
$$ {h}_{\theta }(x)=\left\{\begin{array}{l}P\left(1|x,\theta \right)\ge TH\to 1\hfill \\ {}P\left(1|x,\theta \right)< TH\to 0.\hfill \end{array}\right. $$
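A minimal sketch of the NLR prediction step, with a simple polynomial feature expansion and a one-vs-all threshold rule, is given below (Python); the feature-expansion routine and function names are illustrative assumptions, not the exact MATLAB implementation used in this work.

```python
import numpy as np
from itertools import combinations_with_replacement

def polynomial_features(x, max_degree):
    """Expand the scaled sEMG vector x with polynomial terms up to max_degree
    (e.g. x1, x2, x1*x2, x1**2, ... for max_degree = 2)."""
    feats = []
    for d in range(1, max_degree + 1):
        for idx in combinations_with_replacement(range(len(x)), d):
            feats.append(np.prod(x[list(idx)]))
    return np.asarray(feats)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nlr_predict(x, theta, theta0, max_degree=5, th=0.5):
    """One-vs-all NLR prediction: theta has one column of parameters per class."""
    f = polynomial_features(np.asarray(x, float), max_degree)
    prob = sigmoid(theta.T @ f + theta0)        # class-membership probabilities
    best = int(np.argmax(prob))
    return best if prob[best] >= th else None   # None: no class above threshold
```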
MLP [20, 21] is a particular case of supervised Artificial Neural Network (ANN) where each node, or neuron, of the architecture implements a logistic function. The network architecture has an input layer, one or more hidden layers (with the same number of neurons), and an output layer with one neuron for each class to be classified. The output vector of the l-th layer (a (l)) of this particular classifier is obtained through forward propagation as
$$ {a}^{(l)}=\left\{\begin{array}{l}x,\hfill \\ {}g\left({\varTheta}^{\left(l-1\right)}\cdot {a}^{\left(l-1\right)}+{\varTheta}_0^{\left(l-1\right)}\right)\kern0.62em ,\hfill \end{array}\right.\kern2.1em {\displaystyle \begin{array}{l}l=1.\hfill \\ {}l=2,\kern0.5em 3,\kern0.5em \dots, \kern0.5em L.\hfill \end{array}} $$
where Θ(l) and Θ0 (l) are the classification parameters matrix and the bias vector associated with the l-th layer, respectively, and L indicates the output layer. Hence, the output of the network is a vector Pv(y| x) whose elements represent the class membership probabilities, expressed as
$$ Pv\left(y|x,{\varTheta}^{(l)},{\varTheta}_0^{(l)}\right)={a}^{(L)},\kern2em l=1,\kern0.5em 2,\dots, \kern0.5em L. $$
Also for MLP it is possible to obtain the prediction of class labels (h Θ) by comparing each value of the distribution vector Pv(y| x) with TH and assigning to h Θ the index of the element of Pv(y| x) that is the maximum among all those above the decision threshold.
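A minimal sketch of the forward propagation of Eqs. (3)–(4) and of the threshold-based label assignment is given below (Python); the weight and bias containers are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_predict(x, weights, biases, th=0.5):
    """Forward propagation followed by the threshold rule (sketch).

    weights[l] and biases[l] play the role of Theta^(l) and Theta0^(l);
    the output layer has one sigmoid neuron per class."""
    a = np.asarray(x, dtype=float)              # a^(1) = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)                  # a^(l) = g(Theta a^(l-1) + Theta0)
    best = int(np.argmax(a))                    # a^(L) plays the role of Pv(y|x)
    return best if a[best] >= th else None      # reject if no class exceeds TH
```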
SVM [20, 21] is a linear and binary supervised classification algorithm that considers only a dichotomous distinction between two classes, and assigns class label 0 or 1 to an unknown data item [20] as follows
$$ {h}_{\theta }(x)=\left\{\begin{array}{l}\left({\theta}^T\cdot x+{\theta}_0\right)\ge +1\to 1\hfill \\ {}\left({\theta}^T\cdot x+{\theta}_0\right)\le -1\to 0\kern0.36em .\hfill \end{array}\right. $$
In order to obtain a nonlinear classifier, a kernel function needs to be included into the model. A kernel function is a similarity function (f), satisfying Mercer's Theorem, that expresses the similarity between the generic input vector x and a landmark (s) representing one of the two classes. Typically, a selection of the x vectors recorded for training the SVM algorithm are set as landmarks, and the j-th element of f for an RBF kernel becomes
$$ {f}_j=\exp \left[-\frac{{\left|x-{s}^{(j)}\right|}^2}{2\gamma}\right],\kern2em j=1,\kern0.5em 2,\kern0.5em \dots, \kern0.5em n. $$
where n is the number of landmarks chosen as representative vectors of classes 0 and 1, and γ is the internal RBF parameter. Then the input feature vector becomes f, and the class labels for an SVM with RBF kernel are assigned as
$$ {h}_{\theta }(f)=\left\{\begin{array}{l}\left({\theta}^T\cdot f+{\theta}_0\right)\ge +1\to 1\hfill \\ {}\left({\theta}^T\cdot f+{\theta}_0\right)\le -1\to 0.\hfill \end{array}\right. $$
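A minimal sketch of the RBF similarity vector of Eq. (6) and the resulting decision of Eq. (7) is given below (Python); at prediction time the sign of the decision value is used, and the landmark storage is an illustrative assumption rather than the libsvm internal representation.

```python
import numpy as np

def rbf_features(x, landmarks, gamma):
    """Similarity vector f between x and the stored landmarks, as in Eq. (6)."""
    d2 = np.sum((landmarks - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * gamma))

def svm_rbf_decision(x, landmarks, theta, theta0, gamma):
    """Binary SVM decision with RBF kernel: 1 if the decision value is
    non-negative, 0 otherwise (sketch of Eq. (7))."""
    f = rbf_features(np.asarray(x, float), np.asarray(landmarks, float), gamma)
    return 1 if theta @ f + theta0 >= 0.0 else 0
```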
Classification parameters θ, θ 0, Θ(l), and Θ0 (l) are obtained from the minimization of a particular cost function J(∙) associated with each classifier.
NLR, MLP, SVM classifiers and optimization algorithm implementation
NLR, MLP and SVM classification algorithms were implemented in MATLAB. For NLR and MLP the code was developed ad hoc, while for SVM the open-source library libsvm 3.20 [22] was used. The developed function that implements NLR allows the user to choose the maximum value of the variable D, which encodes a structure of polynomial features as reported in Table 1.
Table 1 Encoding the variable D
Polynomial features are intended as the starting features raised up to the indicated degree, together with all the products arising from the possible permutations without repetition of a number of elements up to the indicated degree. A cross-entropy error cost function has been associated with the NLR algorithm and is expressed as
$$ J\left(\theta, {\theta}_0\right)=-\frac{1}{m}\left[\sum \limits_{i=1}^m{y}^{(i)}\cdot \ln g\left({\theta}^T\cdot {x}^{(i)}+{\theta}_0\right)\right]-\frac{1}{m}\left[\sum \limits_{i=1}^m\left(1-{y}^{(i)}\right)\cdot \ln \left(1-g\left({\theta}^T\cdot {x}^{(i)}+{\theta}_0\right)\right)\right], $$
where m is the number of samples used to train the algorithm and y (i) is the known class membership of the i-th sample. Being NLR a binary classification algorithm, a one vs. all approach was implemented to address the multi-class classification problem.
The developed function that implements MLP allows the user to decide the maximum number of hidden layers and the maximum number of neurons for each of them. A mean square error cost function has been associated to the MLP algorithm, as
$$ J\left(\varTheta, {\varTheta}_0\right)=\frac{1}{m}\sum \limits_{i=1}^m\sum \limits_{k=1}^K{\left[{y}_k^{(i)}-{\left({a}_k^{(L)}\right)}^{(i)}\right]}^2, $$
where K is the number of classes to be recognized, \( {y}_k^{(i)} \) is the known k-th element of the class membership vector of the i-th sample, and \( {a}_k^{(i)} \) is the k-th element of the evaluated membership probability vector of the i-th sample.
As previously mentioned, the SVM classifier with RBF kernel has been developed exploiting the open-source library libsvm 3.20, which is widely used for multiclass machine learning problems. More detailed information can be found in [22,23,24]. In any case, the cost function J(∙) associated with the SVM algorithm can be expressed as
$$ J\left(\theta, {\theta}_0\right)=-C\left[\sum \limits_{i=1}^m{y}^{(i)}\cdot \ln g\left({\theta}^T\cdot {f}^{(i)}+{\theta}_0\right)\right]-C\left[\sum \limits_{i=1}^m\left(1-{y}^{(i)}\right)\cdot \ln \left(1-g\left({\theta}^T\cdot {f}^{(i)}+{\theta}_0\right)\right)\right]+\frac{1}{2}\left[{\theta}^T\cdot \theta +{\left({\theta}_0\right)}^2\right], $$
The developed function allows the user to set the value of the regularization parameter C that appears in the cost function implemented in libsvm 3.20 and the value of the internal RBF parameter γ. In this case, to address the multiclass classification problem, it has been chosen to rely on a one vs. one method, as recommended by the developers for practical usage of the library [23–30].
Since each of the aforementioned classifiers requires setting internal parameters, in addition to the classification parameters θ, θ 0, Θ(l), and Θ0 (l), it is coupled with an iterative optimization algorithm. The optimization strategy relies on a three-way data split approach [25]. Hence, the initial data set is divided into three subsets: a "Training Set" (TR) containing 60% of the data, a "Cross Validation Set" (CV) containing 20% of the data, and a "Test Set" (TS) containing the remaining 20% of the data. These subsets are iteratively filled through a random shuffle until a configuration with a proportionate class distribution is reached. The TR is used to train the supervised classification algorithms by minimizing the specific cost function. As minimization algorithm, Resilient Backpropagation (RProp) [26] has been chosen for NLR and MLP, and Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) [27] for SVM. Each classifier is iteratively trained with all the possible configurations of its internal parameters, varying each of them within an appropriate range of values. The CV is then used to evaluate the performance of each configuration (i.e., model), in order to avoid overfitting and find the best model.
The F1Score [28] was used in this study to assess performance, in lieu of accuracy, as it is more robust for classes that do not have a perfectly balanced cardinality. Considering a simple binary confusion matrix,
where nP is the number of true positives, nN the number of true negatives, nFP the number of false positives, and nFN the number of false negatives, the F1Score can be evaluated as
$$ \left\{\begin{array}{c}\hfill PR=\frac{nP}{\left( nP+ nFP\right)}\kern2.759999em \hfill \\ {}\hfill RE=\frac{nP}{nP+ nFN}\kern3.239999em \hfill \\ {}\hfill F1 Score=2\cdot \frac{PR\cdot RE}{PR+ RE}\cdot 100,\hfill \end{array}\right. $$
where PR is called Precision and RE is called Recall.
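For reference, a minimal sketch of this metric is shown below (Python), using the standard definitions of precision and recall.

```python
def f1_score(nP, nN, nFP, nFN):
    """Precision, recall and F1Score (in percent) from confusion-matrix counts.

    nN is not used by the F1Score itself but is kept to mirror the confusion matrix."""
    pr = nP / (nP + nFP)            # precision
    re = nP / (nP + nFN)            # recall
    return 2.0 * (pr * re) / (pr + re) * 100.0
```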
After determining the optimal classifier model, the TS is used to achieve an estimation of the performance that the classifier is expected to show when new features are provided as input.
NLR, MLP, SVM Downsampling and creation of generalization set
For each subject involved in the experiment, the data sampled at 1 kHz were organized in a matrix; each column of the matrix was coupled with an EMG sensor. Hence, the choice of avoiding feature extraction based on time windowing of the sEMG generated 10^5 × 6 data points (large-scale datasets) and, consequently, a very long time (more than 4 h per subject) was required to complete training and optimization for each classification algorithm. Therefore, downsampling has been applied to speed up the whole process. The discarded data were used to compose a new set of data called "Generalization Set" (GS), which has been used as a second test set in order to obtain an estimation of the generalization ability of each classification algorithm. In particular, for a downsampling step equal to 10 (one in ten), the GS will contain 90% of the data, the TR 6%, the CV 2%, and the TS the remaining 2% of the data. In other terms, the results evaluated on the TS represent an estimation of the classification ability when the signal to classify is sampled at the same frequency as the training data (a downsampling step equal to 10 produces a 100 Hz dataset), while the results evaluated on the GS represent an estimation of the classification ability when classifying a signal sampled up to 1 kHz.
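A minimal sketch of this downsampling and split procedure is given below (Python); the simple random shuffle omits the class-balance check described above and is an assumption for illustration.

```python
import numpy as np

def split_with_generalization(X, y, step=10, seed=0):
    """Downsample by `step`, then split the kept samples 60/20/20 into
    TR/CV/TS; the discarded samples form the Generalization Set (sketch)."""
    rng = np.random.default_rng(seed)
    kept = np.arange(0, len(X), step)
    gs = np.setdiff1d(np.arange(len(X)), kept)           # discarded samples -> GS
    kept = rng.permutation(kept)                          # random shuffle
    n = len(kept)
    tr = kept[: int(0.6 * n)]
    cv = kept[int(0.6 * n): int(0.8 * n)]
    ts = kept[int(0.8 * n):]
    return (X[tr], y[tr]), (X[cv], y[cv]), (X[ts], y[ts]), (X[gs], y[gs])
```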
LDA classifier
LDA is a linear and binary supervised classification algorithm that considers a dichotomous distinction between two classes, and assigns class label 1 or 2 to an unknown data item relying on the following decision function
$$ {h}_{\beta }(x)=\left\{\begin{array}{l}\left({\beta}^T\cdot x+{\beta}_0\right)\ge 0\to 1\hfill \\ {}\left({\beta}^T\cdot x+{\beta}_0\right)<0\to 2\kern0.36em ,\hfill \end{array}\right. $$
where β and β 0 are the classification parameters vector and the bias term, respectively. Classification parameters can be evaluated as
$$ \left\{\begin{array}{l}\beta ={\varSigma}^{-1}\cdot \left({\mu}_1-{\mu}_2\right)\hfill \\ {}{\beta}_0=-{\beta}^T\cdot \left(\frac{\mu_1+{\mu}_2}{2}\right)+\ln \left(\frac{\Pi_1}{\Pi_2}\right)\kern0.36em ,\hfill \end{array}\right. $$
where Σ is the pooled covariance matrix, and μ 1, μ 2 and Π1, Π2 are the mean vectors and the prior probabilities of class 1 and class 2, respectively. Since this classifier does not require setting internal parameters, training and testing rely on a two-way data split approach [25]. Hence, the initial dataset is divided into a training set and a test set. The training set contains 70% of the data (TR70%), and the test set contains the remaining 30% of the data (TS30%). The subsets are iteratively filled through a random shuffle until a configuration with a proportionate class distribution is reached. The TR70% is used to train the classifier by evaluating the classification parameters β and β 0; the TS30% is used to estimate the classifier performance when new features are provided as input. Being LDA a binary classification algorithm, a one vs. all approach was implemented to address the multi-class classification problem. The class label (c) is predicted as
$$ {h}_{\beta }(x)=\underset{c}{\max \limits}\left({{}_c\beta}^T\cdot x+{{}_c\beta}_0\right)\mathrm{and}\kern2.04em \left\{\begin{array}{c}\hfill {}_c\beta ={\varSigma}^{-1}\cdot {\mu}_{c\kern8.519994em }\kern0.24em \hfill \\ {}\hfill {{}_c\beta}_0=-{{}_c\beta}^T\cdot \left(\frac{\mu_c}{2}\right)+\ln \left({\Pi}_c\right),\hfill \end{array}\right. $$
where \( {}_c{}\beta \) and \( {{}_c{}\beta}_0 \) are the classification parameters vector and the bias term of class c, respectively. For building our LDA benchmark classifier, five commonly used time-domain features were considered: Mean Absolute Value (MAV), Root Mean Square (RMS), Slope Sign Change (SSC), Waveform Length (WL) and Variance (σ 2). They were extracted in windows of 250 ms with an overlap of 200 ms [17]. Since the training of the LDA classifier is performed by means of Eqs. (13, 14) and the feature extraction avoids the generation of a large-scale dataset, a short time is required to complete the training of the classifier and there is no need to perform downsampling. The classification algorithm was implemented in MATLAB with ad hoc developed software code.
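A minimal sketch of the one-vs-all LDA training and prediction of Eqs. (13) and (14) is given below (Python); the pooled within-class covariance estimator is a standard choice assumed here for illustration.

```python
import numpy as np

def pooled_covariance(X, y, n_classes):
    """Pooled (within-class) covariance matrix Sigma used by LDA."""
    d = X.shape[1]
    S = np.zeros((d, d))
    for c in range(n_classes):
        Xc = X[y == c]
        S += (len(Xc) - 1) * np.cov(Xc, rowvar=False)
    return S / (len(X) - n_classes)

def train_lda_one_vs_all(X, y, n_classes):
    """Per-class parameters beta_c and beta0_c as in Eq. (14) (sketch)."""
    sigma_inv = np.linalg.pinv(pooled_covariance(X, y, n_classes))
    betas, biases = [], []
    for c in range(n_classes):
        mu_c = X[y == c].mean(axis=0)
        pi_c = np.mean(y == c)                    # prior probability of class c
        beta = sigma_inv @ mu_c
        betas.append(beta)
        biases.append(-0.5 * beta @ mu_c + np.log(pi_c))
    return np.array(betas), np.array(biases)

def lda_predict(x, betas, biases):
    """Predicted class label as the argmax of the per-class linear scores, Eq. (14)."""
    return int(np.argmax(betas @ x + biases))
```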
The study was divided into three parts: the first one investigated the optimal range of D (initial guess 1–7) for NLR, and the range of the maximum number of layers (initial guess 1–10) and neurons (initial guess 1–30) for MLP; the second part focuses on the comparison among the NLR, MLP and SVM classification algorithms; the third part focuses on the comparison with our ground truth, the LDA classifier. The first part can be seen as a preliminary investigation aimed at reducing the evaluation time of the comparison among the three classifiers. A downsampling step equal to 10 (corresponding to a 100 Hz sampling frequency) has been applied to the data collected from the 30 people with trans-radial amputation. The performance of each algorithm has been measured by means of the F1Score (12) value, and the statistical analysis has been based on the Wilcoxon Signed-Rank test, which has been shown to be appropriate for comparing different classifiers on common datasets [29]. Statistical significance was considered at p < 0.05. The maximum value of D, of the number of layers, and of the number of neurons have been obtained by means of a sequential statistical analysis, starting from the simplest case and then sequentially comparing all the others until a highly significant difference of performance is found. This is taken as the new benchmark for all the subsequent comparisons. The process ends when the last case is found in which the differences are not statistically significant compared to all subsequent cases.
The second part, the core of our work, resorted to the results obtained in the first part to compare NLR, MLP, and SVM considering both performance and run-time computational burden on EMG data collected from 30 people with trans-radial amputation. As regards the SVM, the range of variation of the regularization parameter C is 0–10^4, with variable steps starting from 0.01 and doubling each time, while γ ranges over 0–50 (with a step of 0.1); both ranges have been empirically determined in previous tests. The computational burden was evaluated through the number of parameters (nθ), expressing the cardinality of the classification vector θ (1) (7) or matrices Θ (3) that identify the particular classification algorithm. In detail, the number of matrix elements created by the libsvm training function, which are necessary to run the evaluated SVM model, was used for evaluating the cardinality of the SVM parameters; in particular, they were rho, sv_coef, and SVs [30]. The sample rate values were 5 Hz, 10 Hz, 20 Hz, 40 Hz, and 100 Hz (corresponding to downsampling steps of 200, 100, 50, 25 and 10). Again, the statistical analysis has been performed through a Wilcoxon Signed-Rank test with the significance threshold set to 0.05. Lastly, a combined index, called EOF (Embedding Optimization Factor), that takes into account both performance and computational burden has been calculated. It is defined as
$$ \left\{\begin{array}{c}\hfill \mathrm{if}\;\left( N\varTheta > n\theta \right)\to P=\frac{\left( N\varTheta - n\theta \right)}{N\varTheta}\cdot 100\hfill \\ {}\hfill \mathrm{if}\;\left( N\varTheta \le n\theta \right)\to P=0\kern3.839998em \hfill \\ {}\hfill EOF=\frac{2\cdot \left(F1 Score\cdot P\right)}{\left(F1 Score+P\right)},\kern3.119999em \hfill \end{array}\right. $$
where NΘ is the maximum acceptable number of parameters. This index plays a paramount role in the implementation of these algorithms in embedded systems, where memory storage and program memory are limited. To this purpose, as a representative example, NΘ has been chosen equal to the maximum number of parameters storable in a 256 KB memory, which is typical of high-performance embedded microcontrollers applied to prosthetic hands (e.g., Touch Bionics I-Limb Ultra and Robo-Limb). As each parameter is coded as a float, 4 bytes of memory are needed to store each one; hence, for our example, the maximum number of storable classification parameters is 64 ∙ 10^3. This is an application example of how the index and NΘ can be evaluated, but the same method can be applied taking into account a different memory size and/or other constraints, such as the available RAM or the evaluation time for a single classification (which is related to the microcontroller clock frequency).
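A minimal sketch of the EOF computation of Eq. (15) is shown below (Python); the default value of NΘ reflects the 256 KB example discussed above.

```python
def eof(f1, n_theta, n_theta_max=64_000):
    """Embedding Optimization Factor of Eq. (15).

    n_theta_max is the largest number of 4-byte float parameters storable in
    the target memory (about 64e3 for the 256 KB example in this paper)."""
    if n_theta_max > n_theta:
        p = (n_theta_max - n_theta) / n_theta_max * 100.0
    else:
        p = 0.0
    return 0.0 if (f1 + p) == 0 else 2.0 * f1 * p / (f1 + p)
```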
In the third part a comparative analysis among the three non-linear classifiers and the LDA was carried out. Since LDA was trained and tested with data sampled at 1 kHz (without downsampling), NLR, MLP and SVM models with the highest EOF values on GS were taken for the comparison. Again, the analysis was performed taking into account classification performance, computational burden and EOF index. The statistical analysis was performed through a Wilcoxon Signed-Rank test with significance threshold set to 0.05.
The results are presented in boxplots where the central line represents the median value; the edges of the box are the 25th and the 75th percentiles; the whiskers give the range of the data without outliers; solid markers represent the mean value.
Max degree of polynomial features for NLR
Figure 4 shows the values of F1Score of TS and GS over the max degree of polynomial features (indicated with D) applied as input to NLR.
F1Score of Test Set (smaller boxes) and Generalization Set (bigger boxes) of 5 classes over the maximum value of variable D calculated from 30 people with trans-radial amputation. The figure also shows the trend of the mean value for both Sets. Statistical non-significance over value 5 is shown by "ns"
In both cases, the maximum is reached by setting the maximum D value to 7, but the Wilcoxon Signed-Rank test applied to the F1Score values points out no statistically significant difference for polynomial features above the value 5 for both GS and TS. The result seems to indicate that, for people with trans-radial amputation, the system performance saturates when the maximum D value of the polynomial features is set above 5, as shown in Fig. 4 by the trend lines of the mean values.
Max number of hidden layers for MLP
Figure 5 shows the values of F1Score of TS and GS over the max number of hidden layers. Each hidden layer has maximum 30 neurons for MLP.
F1Score of Test Set (smaller boxes) and Generalization Set (bigger boxes) of 5 classes over the maximum number of layers having fixed at 30 the maximum number of neurons for each hidden layer calculated from 30 people with trans-radial amputation. The figure also shows the trend of the mean value for both Sets. Statistical non-significance over value 5 is shown by "ns"
In both cases, the best performance is obtained for a maximum number of layers equal to 8, but the Wilcoxon Signed-Rank test applied to the achieved F1Score values points out no statistically significant difference above 5 hidden layers for both GS and TS. This probably means that for people with trans-radial amputation the system performance saturates for a maximum number of hidden layers above the value 5.
Max number of neurons for MLP
Figure 6 summarizes the values of F1Score of TS and GS with respect to the maximum number of neurons for an MLP with at most 5 hidden layers, varying the number of neurons in steps of 5 up to the value 23, for compactness. The Wilcoxon Signed-Rank test applied to the achieved F1Score values points out no highly statistically significant difference above 23 for TS and above 28 for GS. This probably means that for people with trans-radial amputation the system performance saturates for a maximum number of neurons between 23 and 28, depending on the frequency of the signals to classify.
F1Score of Test Set (smaller boxes) and Generalization Set (bigger boxes) of 5 classes over the maximum number of neurons for each layer. The maximum number of hidden layers, calculated from 30 people with trans-radial amputation, has been fixed at 5. The figure also shows the trend of the mean value for both Sets. Statistical non-significance over value 23 for TS and over value 28 for GS is shown by "ns"
NLR, MLP, SVM comparison based on TR sampling rate
Figure 7 shows the values of F1Score of TS and GS, obtained by training the classifiers on TR sampled at increasing sampling rate (or at decreasing downsampling step) for NLR, MLP, and SVM. As mentioned in Sect. II, NLR and MLP have been optimized by using the results previously obtained, limiting the maximum D value to 5 for NLR, and the maximum number of layers and neurons to 5 and 28, respectively, for MLP. Afterwards, the performance of NLR, MLP, and SVM was compared, at different sampling frequencies of the dataset used to train the algorithms, through a Wilcoxon Signed-Rank test. For both TS and GS the analysis reports no statistically significant difference among the three classifiers when training the algorithms with a 5 Hz sampled dataset, and that NLR achieved significantly lower values than MLP and SVM at the other sampling frequencies. Conversely, MLP achieved statistically significantly lower performance than SVM only at the 100 Hz frequency.
F1Score values from 30 people with trans-radial amputation increasing the sampling frequency of the dataset used to train and cross validate the NLR, MLP, and SVM algorithms and 5 classes. Statistical significance is shown by "*". a) F1Score values for Test Set; b) F1Score values for Generalization Set
NLR, MLP, SVM comparison based on computational burden
Figure 8 shows the number of classification parameters (nθ), obtained by training the classifiers on datasets sampled at increasing sampling rate (or at decreasing downsampling step) for NLR, MLP, and SVM. Variable nθ is regarded as an index quantifying the algorithm computational burden. Again, NLR and MLP have been optimized thanks to the previously obtained results. As the model of the classifier adopted for TS and GS is the same, the complexity is also the same in the two cases.
Number of classification parameters from 30 people with trans-radial amputation increasing the sampling frequency of the dataset used to train and cross validate the NLR, MLP, and SVM algorithms and 5 classes. The y axis is in logarithmic scale. Statistical non-significance is shown by "ns"
By comparing the algorithms at different sampling rates for the dataset used to train the three algorithms, it can be observed that SVM is always characterized by the highest computational cost, while NLR by the lowest one. While NLR and MLP remain statistically different, they retained values of nθ that always belong to the same order of magnitude (10^2 for NLR and 10^3 for MLP), whereas SVM initially scores values statistically equal to MLP (5 Hz) and then diverges with increasing sampling rate. This difference in behavior of the SVM classifier is due to its unique architecture, which generates a number of landmarks (6), strictly related to the number of classification parameters, that depends on the numerosity of the dataset used to train the algorithm. Therefore, the higher the sampling frequency, the more numerous the TR will be and, consequently, the higher the number of landmarks needed to represent the data. All the other comparisons proved to be statistically different among them.
NLR, MLP, SVM comparison based on EOF
As previously mentioned, in this section the result of an applicative example comparing the NLR, MLP and SVM classifiers using EOF as comparison index is reported. The only constraint adopted in this analysis is the burden that the classification parameters to be stored place on a 256 KB memory. Figure 9 shows values of EOF for TS and GS, obtained by training the classifiers on datasets sampled at increasing sampling rate (or at decreasing downsampling step) for NLR, MLP, and SVM. Again, NLR and MLP were optimized using the results previously obtained. Hence, a comparative analysis among NLR, MLP, and SVM was carried out (first for TS, then for GS).
EOF values from 30 people with trans-radial amputation increasing the sampling frequency of the dataset with 5 classes used to train and cross validate the NLR, MLP, and SVM algorithms. Statistical significance is shown by "*". a) EOF values for Test Set; b) EOF values for Generalization Set
Except for TS at 5 Hz sampling frequency (where SVM obtained the maximum value of EOF), among the three classifiers NLR attained the maximum EOF value for both TS and GS. This result suggests that, for people with trans-radial amputation, the NLR and MLP classifiers represent the best compromise between classification performance and computational burden. The result is even more valuable considering the trend of the value of EOF with increasing sampling rate: for the NLR and MLP classifiers the value of this index tends to slightly increase, while for the SVM classifier it decreases more and more.
NLR, MLP, SVM and LDA comparison
In this section the results of the comparative analysis of LDA with NLR, MLP, and SVM classifiers are reported. For comparative purposes, the NLR, MLP, and SVM models that obtained the highest EOF values on GS were used. The LDA classifier was considered as ground truth, in terms of performance, number of parameters and EOF index. Figure 10 shows the values of F1Score of GS for NLR and MLP on TR sampled at 100 Hz and SVM on TR sampled at 25 Hz, and of TS30% for LDA on TR70% sampled at 1 kHz. By exploiting the previously obtained optimization results, the D value was limited to 5 for NLR, while the maximum number of layers and neurons was limited to 5 and 28 for MLP. Table 2 shows the numeric values of F1Scores averaged over 30 subjects with trans-radial amputation and the corresponding standard deviation (s) for all the four algorithms.
F1Score values from 30 people with trans-radial amputation for MLP, NLR, SVM, tested on GS, and LDA with 5 time domain features, on a 5 classes dataset. NLR and MLP were trained using data sampled at 100 Hz, while SVM using data sampled at 10 Hz. Statistical non-significance is shown by "ns"
Table 2 Classification performance and computational burden for the NLR, MLP and SVM models with highest EOF value on GS, and for LDA with 5 time domain features on data sampled at 1 kHz
A Wilcoxon Signed-Rank test was adopted for the statistical analysis of the comparison between NLR, MLP, SVM and LDA. The analysis reports no statistically significant difference between LDA and both the NLR and MLP classifiers, while SVM achieved a significantly lower value than the others. Figure 11 displays the number of classification parameters (nθ and nβ). Table 2 shows the number of classification parameters averaged over 30 subjects with trans-radial amputation and the corresponding standard deviation (σ) for the four algorithms. The analysis showed that LDA obtained the minimum number of parameters, and no statistically significant difference was observed only between MLP and SVM.
Number of classification parameters from 30 people with trans-radial amputation for MLP, NLR, SVM, and LDA with 5 time domain features, on a 5 classes dataset. NLR and MLP were trained using data sampled at 100 Hz, while SVM using data sampled at 10 Hz. The y axis is in logarithmic scale. Statistical non-significance is shown by "ns"
Finally, the EOF index for LDA was evaluated and compared with NLR, MLP and SVM, as shown in Fig. 12 and Table 2. While SVM achieved a significantly lower value than the other classifiers, MLP, NLR and LDA showed similar EOF scores. The Wilcoxon Signed-Rank test showed no statistically significant difference only between the NLR and LDA classifiers.
EOF values from 30 people with trans-radial amputation for MLP, NLR, SVM, tested on GS, and LDA with 5 time domain features, on a 5 classes dataset. NLR and MLP were trained using data sampled at 100 Hz, while SVM using data sampled at 10 Hz. Statistical non-significance is shown by "ns"
In this study an in-depth analysis has been carried out of three of the most widely adopted classifiers for EMG signals, i.e. NLR, MLP, and SVM, using LDA with time domain feature extraction as ground truth for the final validation of the performed analysis. The choice fell on these classifiers because of the extensive discussion in the literature and because of their high performance notwithstanding the extremely different numbers of classification parameters. In particular, an intensive analysis on data acquired from 30 people with trans-radial amputation was conducted and performance was assessed, with special attention to the problem of developing embedded classifier solutions. Although the type and number of recruited subjects was not sufficient to generalize the results to all kinds of trans-radial amputations, this study aims to provide a solid basis for reflecting upon the trade-off between performance and computational burden of these classifiers.
Six commercial sEMG sensors produced analog signals that were sampled at 1 kHz and used as "raw" input features of the classifiers. In order to speed up the training and the cross validation of the NLR, MLP and SVM classification algorithms, downsampling was applied to the data, creating one downsampled dataset (TR, CV, and TS) and one dataset containing all the remaining data (GS). While TR and CV were used to train and cross validate, TS and GS were used to test the performance of the classifiers.
The performance of the NLR and MLP algorithms was first evaluated and then analyzed with the Wilcoxon Signed-Rank test for both TS and GS. The results showed that for NLR no significant improvement of performance can be obtained for a degree of polynomial features greater than 5, and that for MLP no significant improvements can be achieved by increasing the complexity of the network beyond 5 layers and 23 neurons for TS and 28 neurons for GS, respectively (Fig. 5). This result is very important because it sets a boundary on the complexity of the classifier, allowing a reduction of the training and cross-validating times when applying these algorithms on raw sEMG data recorded from people with trans-radial amputation. Furthermore, it is also relevant to observe that NLR in the linear case (polynomial features of degree 1) obtained the lowest F1Score value with respect to the other, higher degrees of polynomial features, suggesting the use of a non-linear classifier when the raw outputs of the Ottobock sEMG sensors are used as input features.
After this preliminary investigation, a comparative analysis among the NLR, MLP, and SVM algorithms was performed using data at different frequencies (5 Hz, 10 Hz, 20 Hz, 40 Hz, and 100 Hz) as TR, CV and TS. The comparison pointed out that the classification performance increased together with the sampling rate (Fig. 7). In fact, for all the algorithms the maximum performance was obtained with a 100 Hz sampling rate; however, increasing the sampling rate also tends to elevate the number of classification parameters, used as an index of the computational burden of the classifier. The analysis showed that, for both classification performance and number of classification parameters (Fig. 8), SVM attains the highest values, followed by MLP, and then by NLR. Although downsampling causes a loss of information, classification performance was still high (ranging from 91.1% to 94.5%), meaning that the signals kept the main content related to the gesture. The reason is that, for constructing a decision boundary, it is not necessary to use high frequency sampled data during the classifier training phase; data with similar range, dispersion and redundancy are required. This also explains why GS systematically reports higher performance values than TS. GS contains a larger number of data than TS and, consequently, leads to higher performance scores. Hence, the results obtained from it might better represent the real behavior of the classifiers when data sampled up to 1 kHz are provided as input.
Although when implementing these algorithms on PC systems it is reasonable to choose the one with the highest classification performance, when moving to embedded systems for prosthetic devices the computational burden is no longer negligible. Hence, in order to investigate the best compromise between performance and computational burden, the EOF index was presented. Using the memory usage as unique constraint, the EOF has been evaluated referring to a standard 256 KB microcontroller memory at different frequencies of TR, CV and TS. As previously reported, this is just an application example, but the same method can be applied taking into account different memory values and/or other constraints, such as the available RAM memory and/or the evaluation time for a single classification for any microcontroller. The analysis performed showed that, for people with trans-radial amputation and using sEMG signals sampled at more than 5 Hz as input, the algorithm that produces the best compromise is NLR, with the highest values of EOF (95.5%), closely followed by MLP (94.8%). Conversely, the SVM algorithm, which obtained the highest classification performance, presents considerably lower values of EOF (93.3%) than the other two algorithms (Fig. 9); this means that high performance is achieved at the expense of a sharp increase of the computational burden and memory usage. Hence, it is possible to summarize that, in order to choose the most suitable classifier in a real application with data sampled at the same frequency used to train and cross validate the algorithm, there is no difference between NLR, MLP, and SVM up to 10 Hz, while from 10 to 100 Hz SVM becomes significantly disadvantageous with respect to the other two classifiers, which did not show significant differences. On the other hand, for use in a real application with data sampled at a higher frequency (up to 1 kHz) than the ones used to train and cross validate the algorithms, NLR resulted to be the most suitable, clearly representing the best compromise between classification performance and computational burden. Furthermore, the analysis suggests, among the tested cases, a downsampling step equal to 10 (100 Hz) for the training and the cross validation of the NLR and MLP algorithms, and equal to 100 (10 Hz) for SVM.
Finally, a comparison between each of the three non-linear classifiers and LDA was carried out. Since LDA was trained and tested with data sampled at 1 kHz (without downsampling), the NLR, MLP and SVM models with the highest EOF values on GS were used for the comparative analysis of performance, number of parameters and EOF index. This analysis pointed out no statistically significant difference between NLR and LDA in terms of performance and EOF index (Figs. 10-11-12, Table 2), confirming the results of the previously shown comparisons (Figs. 7-8-9), although LDA reported the minimum computational burden. This result is even more appreciable if we consider that NLR was trained and tested using raw sEMG data. So, this study shows that it is possible to use non-linear classification algorithms on raw sEMG signals recorded from people with trans-radial amputation also for embedded applications. Furthermore, since LDA and NLR retained statistically similar values for both performance and computational burden, it is possible to speculate that the feature extraction step linearizes the classification problem at the expense of a delay in the class evaluation time and in the readiness of the system during the transition between two different gestures. Indeed, using raw sEMG signals as input features, the class evaluation time and system readiness approximate the sampling time; on the other hand, using features based on time windowing, the class evaluation time equals the window shift and the readiness delay is around half of the time window length.
It is worth noticing that, when transient EMG signals are included in classifier training, system controllability and performance are shown to improve [31]; conversely, offline classification accuracy degrades. This comparative study was based on steady state sEMG signals; however, this does not affect our comparative analysis, since the experimental data were the same for all the analysed classifiers.
In this study the NLR, MLP and SVM classification algorithms were developed, tested and optimized on a dataset of 5 hand gesture classes composed of the data recorded from 30 people with trans-radial amputation, using 6 commercial sEMG sensors. After evaluating the maximum complexity of the NLR and MLP algorithms needed to apply pattern recognition on this population, the comparative analysis among the three algorithms was carried out. It pointed out that, for both classification performance and number of classification parameters, SVM attains the highest values, followed by MLP, and then by NLR. Hence, in order to investigate the best compromise between performance and computational burden, the EOF index was presented. The analysis performed showed that, for people with trans-radial amputation and using sEMG signals sampled at more than 5 Hz as input, the algorithm that reached the best compromise is NLR (with the highest value of EOF), closely followed by MLP. This result was also confirmed by the comparative analysis with LDA with time domain features, which showed no statistically significant difference with NLR. The proposed analysis aims to provide innovative engineering tools and indications on how to choose the most suitable classifier, and its specific internal settings, based on the application and the desired results for prostheses control. As the research has reached an advanced degree of accuracy, these algorithms have been validated, and embedding them is necessary for the realization of prosthetic devices. Future developments will exploit the results of this study by extending the analysis to transient EMG signals, and by developing a control unit embedding pattern recognition algorithms for people with trans-radial amputation. Then, measures of system robustness and reliability will be carried out, and the performance of real-time myoelectric pattern recognition control of a multifunctional upper-limb prosthesis will be evaluated by means of specific tests (e.g. the TAC test [16]).
Note that in the following SVM will be used to indicate SVM with RBF kernel.
Note that in the following LDA will be used to indicate LDA with 5 time domain features.
ANN: Artificial Neural Network
CV: Cross Validation Set
DOFs: Degrees of Freedom
GS: Generalization Set
L-BFGS: Limited Memory Broyden-Fletcher-Goldfarb-Shanno
LDA: Linear Discriminant Analysis
LR: Logistic Regression
MLP: Multi-Layer Perceptron
NLR: Non-Linear Logistic Regression
RBF: Radial Basis Function
sEMG: Surface electromyography
TMR: Targeted Muscle Re-innervation
TR: Training Set
TS: Test Set
Ortiz Catalan M, Håkansson B, Brånemark R. Real-time and simultaneous control of artificial limbs based on pattern recognition algorithms. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2014;22:756–64.
Parker P, Englehart K, Hudgins B. Myoelectric signal processing for control of powered limb prostheses. J Electromyogr Kinesiol. 2006;16:541–8.
Farina D, Aszmann O. Bionic limbs: clinical reality and academic promises. Sci Transl Med. 2014;12:257–12.
Anna Lisa C, et al. Control of prosthetic hands via the peripheral nervous system. Frontiers in neuroscience. 2016;10:116.
Roche AD, et al. Prosthetic myoelectric control strategies: a clinical perspective. Current Surgery Reports. 2014;2:1–11.
Castellini C, et al. Fine detection of grasp force and posture by amputees via surface electromyography. Journal of Physiology-Paris. 2009;103:255–62.
Zecca M, Micera S, et al. Control of multifunctional prosthetic hands by processing the electromyographic signal. Critical Reviews™ in Biomedical Engineering. 2002;30:4–6.
Smith LH, Lock BA, and Hargrove L. Effects of window length and classification accuracy on the real-time controllability of pattern recognition myoelectric control. Proceedings of the 18th Congress of the International Society for Electrophysiology and Kinesiology. 2010.
Benatti S, et al. Analysis of robust implementation of an EMG pattern recognition based control. Biosignals. 2014;
Nazarpour K. Surface EMG signals pattern recognition utilizing an adaptive crosstalk suppression preprocessor. ICSC Congress on Computational Intelligence Methods and Applications. 2005;
Dohnalek P. Human activity recognition on raw sensors data via sparse approximation. International Conference on Telecommunications and Signal. 2013;
Chan ADC, Englehart KB. Continuous classification of myoelectric signals for powered prostheses using Gaussian mixture models. Engineering in Medicine and Biology Society. 2003;
Zhijun L, et al. Boosting-based EMG patterns classification scheme for robustness enhancement. IEEE Journal of Biomedical and Health Informatics. 2013;17:545–52.
Cloutier A, Yang J. Design, control, and sensory feedback of externally powered hand prostheses: a literature review. Critical Reviews™ in Biomedical Engineering. 2013;2:161–81.
Scheme E, Englehart K. Electromyogram pattern recognition for control of powered upper-limb prostheses: state of the art and challenges for clinical use. J Rehabil Res Dev. 2011;48:643–59.
Simon AM, Hargrove LJ, Lock BA, Kuiken TA. The target achievement control test: evaluating real-time myoelectric pattern recognition control of a multifunctional upper-limb prosthesis. J Rehabil Res Dev. 2011;48(6):619.
Young AJ, Smith LH, Rouse EJ, Hargrove LJ. A comparison of the real-time controllability of pattern recognition to conventional myoelectric control for discrete and simultaneous movements. J Neuroeng Rehabil. 2014;11(1):5.
Riillo F, et al. Optimization of EMG-based hand gesture recognition: supervised vs. unsupervised data preprocessing on healthy subjects and transradial amputees. Biomedical Signal Processing and Control. 2014;14:117–25.
Dalley SA, et al. A multigrasp hand prosthesis for transradial amputees. 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology. 2010.
Dreiseitl S, Ohno ML. Logistic regression and artificial neural network classification models: a methodology review. J Biomed Inform. 2002;35:352.
Chaiyaratana N, Zalzala AMS, Datta D. Myoelectric signals pattern recognition for intelligent functional operation of upper-limb prosthesis. 1996.
Hsu CW, Chang CC, Lin CJ. A practical guide, to support vector classification. Department of Computer Science: National Taiwan University, Taipei. Tech. Rep; 2010.
Hsu CW, Lin CJ. A comparison of methods for multi-class support vector machine. IEEE Trans Neural Netw. 2002;13:415–25.
Chen PH, Lin CJ. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology. 2011;2:3.
Ripley BD. Pattern recognition and neural networks. Camb Univ press. 2007;
Baykal N, Erkmen AM. Resilient backpropagation for RBF networks. Knowledge-Based Intelligent Engineering Systems and Allied Technologies. 2000.
Ding Y, Lushi E, Li Q. Investigation of quasi-Newton method for unconstrained optimization. Canada: Simon Fraser University; 2004.
Powers D, Martin D. Evaluation from precision, recall and F-measure to ROC, informedness, markedness and correlation. J Mach Learn Technol. 2011;2:37–63.
Demšar J. Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res. 2006;1:30.
LIBSVM FAQ. 2015. http://www.Csie.Ntu.Edu.Tw/~cjlin/libsvm/faq.Html#/Q04:_Training_and_prediction. Accessed 10 Jun 2016.
Hargrove LJ, et al. A real-time pattern recognition based myoelectric control usability study implemented in a virtual environment. Engineering in Medicine and Biology Society, 2007. EMBS 2007. 29th Annual International Conference of the IEEE. 2007.
This work was supported partly by the National Institute for Insurance against Accident at Work (INAIL) with the PPR2 (CUP: E58C13000990001), PCR1/2, PPR AS 1/3 (CUP: E57B160005) projects and partly by the European Project H2020/AIDE: Multimodal and Natural Computer Interaction Adaptive Multimodal Interfaces to Assist Disabled People in Daily Activities (CUP: J42I15000030006).
National Institute for Insurance against Accident at Work (INAIL) and European Commission (Call: H2020-ICT-2014-1, Topic: ICT-22-2014).
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Research Unit of Biomedical Robotics and Biomicrosystems, Università Campus Bio-Medico di Roma, v. Alvaro Del Portillo, Rome, Italy
Alberto Dellacasa Bellingegni, Emanuele Gruppioni, Giorgio Colazzo, Eugenio Guglielmelli & Loredana Zollo
Centro Protesi INAIL di Vigorso di Budrio, v. Rabuina, Bologna, Italy
Alberto Dellacasa Bellingegni, Emanuele Gruppioni, Angelo Davalli & Rinaldo Sacchetti
Alberto Dellacasa Bellingegni
Emanuele Gruppioni
Giorgio Colazzo
Angelo Davalli
Rinaldo Sacchetti
Eugenio Guglielmelli
Loredana Zollo
ADB designed the study, analyzed the literature, performed the analysis and interpretation of data, and wrote the paper. EG made substantial contributions to the conception and design of the study. GC acquired the sEMG data from 30 people with trans-radial amputation. AD and RS collaborated during the literature analysis and revised the paper. EG gave final approval of the version to be published. LZ was involved in drafting the manuscript and revised it critically for important intellectual content. All the authors read and approved the manuscript.
Correspondence to Alberto Dellacasa Bellingegni.
All subjects gave informed consent to take part in the study that was approved by local scientific and ethical committees.
Dellacasa Bellingegni, A., Gruppioni, E., Colazzo, G. et al. NLR, MLP, SVM, and LDA: a comparative analysis on EMG data from people with trans-radial amputation. J NeuroEngineering Rehabil 14, 82 (2017). https://doi.org/10.1186/s12984-017-0290-6
Nonlinear logistic regression | CommonCrawl |
Treatment of missing data in Bayesian network structure learning: an application to linked biomedical and social survey data
Xuejia Ke1,2,
Katherine Keenan2 &
V. Anne Smith1
Availability of linked biomedical and social science data has risen dramatically in past decades, facilitating holistic and systems-based analyses. Among these, Bayesian networks have great potential to tackle complex interdisciplinary problems, because they can easily model inter-relations between variables. They work by encoding conditional independence relationships discovered via advanced inference algorithms. One challenge is dealing with missing data, ubiquitous in survey or biomedical datasets. Missing data is rarely addressed in an advanced way in Bayesian networks; the most common approach is to discard all samples containing missing measurements. This can lead to biased estimates. Here, we examine how Bayesian network structure learning can incorporate missing data.
We use a simulation approach to compare a commonly used method in frequentist statistics, multiple imputation by chained equations (MICE), with one specific for Bayesian network learning, structural expectation-maximization (SEM). We simulate multiple incomplete categorical (discrete) data sets with different missingness mechanisms, variable numbers, data amount, and missingness proportions. We evaluate performance of MICE and SEM in capturing network structure. We then apply SEM combined with community analysis to a real-world dataset of linked biomedical and social data to investigate associations between socio-demographic factors and multiple chronic conditions in the US elderly population.
We find that applying either method (MICE or SEM) provides better structure recovery than doing nothing, and SEM in general outperforms MICE. This finding is robust across missingness mechanisms, variable numbers, data amount and missingness proportions. We also find that imputed data from SEM is more accurate than from MICE. Our real-world application recovers known inter-relationships among socio-demographic factors and common multimorbidities. This network analysis also highlights potential areas of investigation, such as links between cancer and cognitive impairment and a disconnect between self-assessed memory decline and standard cognitive impairment measurement.
Our simulation results suggest taking advantage of the additional information provided by network structure during SEM improves the performance of Bayesian networks; this might be especially useful for social science and other interdisciplinary analyses. Our case study shows that comorbidities of different diseases interact with each other and are closely associated with socio-demographic factors.
Bayesian networks (BNs), first proposed by Pearl [1], are a flexible statistical tool for encoding probabilistic relationships with directed acyclic graphs (DAGs) [2]. BNs have a wide range of applications, including developing expert systems for predicting diseases [3], disclosing diffusion of messages in social networks [4], reconstructing gene regulatory networks [5], and inferring neuronal networks [6] and ecological networks [7]. However, BNs are still only rarely applied to population health and social science questions. Relatedly, use of survey data for BN structure learning is limited.
Schematic diagram of Multiple Imputation by Chained Equations approach. For a given incomplete dataset, MICE firstly imputes all missing values via univariate imputation methods. Then it removes the imputed values from variables one by one and creates a model by using the other complete samples. After that, it imputes missingness in each variable in turn using the created model and the remaining variables. These steps are repeated until the data is completed. It then subtracts this new completed data from the initial imputed values to get a difference matrix. The new completed data then becomes the starting point for the next iteration. The whole process is iterated until a pre-defined threshold on the difference between initial imputed and new completed data is met
Schematic diagram of Structural Expectation-Maximization algorithm. SEM has two components: E-step and M-step. It considers a BN structure for the incomplete data at the very beginning. Then it applies the iterative two steps, alternating E-step and M-step. E-step estimates the values of missing data by computing the expected statistics using the current network structure. The M-step maximizes the scoring function and updates the resulting network structure. These two steps are repeated until convergence is met
Compared with other fields of study, for instance, experimental biological systems, missing data are more pervasive in observational and survey data. There are plentiful causes, including item missingness, e.g., unanswered questions in questionnaires, data entry errors, or subject missingness, e.g., patients dropping out in longitudinal research, or missing samples. Missing data not only reduce overall statistical power and precision, but can lead to biased inferences in subsequent data analysis [8]. Taking a popular method of listwise deletion (e.g., undertaking analysis only on those complete cases without any missing data) as an example, its statistical power and precision would be inevitably reduced because of the decreased sample size.
Based on the different processes leading to the missingness, every missing data pattern can be generally classified into three categories - missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) [9]. This nomenclature is widely used in statistical data analysis and is also referred to as the missing data mechanisms. MCAR occurs if the missingness is unrelated to both unobserved and observed variables. Data are said to be MAR if the missingness is related to observed variables but not to any unobserved variables given the observed ones. MNAR is the most complicated because its missingness relates to both unobserved and observed variables [9]. These three patterns cause different levels of risks of bias in data analysis. For instance, listwise deletion analysis in MAR and MNAR data would yield more biased estimates than MCAR [10].
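As a minimal illustration of these three mechanisms (not part of the simulation pipeline described later; the variables x and y and the coefficients are purely hypothetical), the probability that y is missing can be made independent of everything (MCAR), dependent only on the observed x (MAR), or dependent on the value of y itself (MNAR):

```r
set.seed(1)
n <- 1000
x <- rnorm(n)               # always observed
y <- rnorm(n, mean = x)     # variable that will receive missing values

p_mcar <- rep(0.3, n)            # MCAR: constant probability, unrelated to x or y
p_mar  <- plogis(-1 + 2 * x)     # MAR: depends only on the observed x
p_mnar <- plogis(-1 + 2 * y)     # MNAR: depends on the value of y itself

y_mcar <- ifelse(runif(n) < p_mcar, NA, y)
y_mar  <- ifelse(runif(n) < p_mar,  NA, y)
y_mnar <- ifelse(runif(n) < p_mnar, NA, y)
```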
Multiple imputation by chained equations (MICE) is a popular multiple imputation method used in biomedical, epidemiological and social science fields. It is designed to impute missing data values under the missing data assumption MAR [11, 12]. Compared to single imputation, multiple imputation methods are less biased because they take account of the uncertainty of the missing data by combining multiple predictions for each missing value. MICE uses a divide and conquer approach to replace missing values for all variables in the data set: it focuses on one variable at a time and makes use of other variables to predict the missing values in that focused variable. Figure 1 illustrates how MICE imputes missing values for a given incomplete data set. Firstly, it imputes all missing values by using univariate imputation methods (e.g., replace missing values by the median of a single variable) to create a starting point. Then it removes the imputed values from each variable in turn and creates a model (e.g., a linear regression model) using the complete samples. This model may or may not include all variables in the dataset. After that, it imputes the values in each variable using this model and other values in the remaining variables. These steps are repeated until the data is completed. Then it subtracts this completed data from the starting point to get a difference matrix. To make this difference close to 0, the whole process is iterated, using the just completed data as a new starting point, until a pre-defined threshold on the difference between the starting point and new completed data is met. Depending on the features of the focused variable, MICE employs different multivariate regression models to predict the missing values (e.g., logistic regression for binary dependent variables). In epidemiology and clinical research, multiple imputation can enhance reliability of inferences based on data with values missing at random (MAR); however, the same procedures are not suitable for MNAR data, and thus further work is required to address MNAR data in a multiple imputation framework [8].
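The following hand-rolled sketch illustrates the chained-equations idea for two continuous variables; it is deliberately simplified (real MICE draws imputations from predictive distributions, produces several completed data sets, and for the categorical data used in this study would rely on models for categorical outcomes such as polytomous regression):

```r
# Illustrative single chain of MICE-style imputation for two numeric variables x and y.
impute_chained <- function(df, iters = 5) {
  miss_x <- is.na(df$x)
  miss_y <- is.na(df$y)
  # Starting point: univariate imputation with the column medians.
  df$x[miss_x] <- median(df$x, na.rm = TRUE)
  df$y[miss_y] <- median(df$y, na.rm = TRUE)
  for (i in seq_len(iters)) {
    # Re-impute x from y, fitting only on rows where x was actually observed.
    fit_x <- lm(x ~ y, data = df[!miss_x, ])
    df$x[miss_x] <- predict(fit_x, newdata = df[miss_x, , drop = FALSE])
    # Re-impute y from x, fitting only on rows where y was actually observed.
    fit_y <- lm(y ~ x, data = df[!miss_y, ])
    df$y[miss_y] <- predict(fit_y, newdata = df[miss_y, , drop = FALSE])
  }
  df
}
```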
Learning BN structure from incomplete data is quite challenging. Depending on the missing data mechanisms (e.g., MNAR or MAR), learning would be biased if we simply delete incomplete observations. However, while BNs can theoretically consider completion of the dataset, to do so for all missing values in all possible configurations would increase computational time infeasibly (exponential increase per missing data point) [13].
The structural expectation-maximization (SEM) algorithm makes BN structure learning from incomplete data computationally feasible by changing the search space to be over structures rather than over both parameters and structures. SEM iteratively completes the data, then applies the standard structure learning procedures to the completed data [13]. Similar to the standard EM algorithm [14], SEM involves two steps - expectation (E-step) and maximization (M-step). Figure 2 shows the basic principle of the SEM algorithm. Firstly, it considers a BN structure (e.g., an empty one) for the incomplete data. Then it applies the iterative two-step, alternating E-step and M-step. The E-step estimates the values of missing data by computing the expected statistics using the current network structure. The M-step maximizes the scoring function and updates the resulting network structure. This continues until convergence is met [15]. The framework of SEM was first proposed by Friedman [16]. His simulation results suggest that although there is a degradation of learning performance with an increased percentage of missing data, SEM shows promise for handling data involving missing values and hidden variables [16]. Friedman [15] later improved his work so that SEM is not limited to using scoring metrics like minimal description length (MDL) or Bayesian Information Criterion (BIC) that only compute approximations to the Bayesian posterior probability, enabling direct optimization of the Bayesian posterior probability that incorporates prior information (e.g., Dirichlet priors) over network parameters into the learning procedures.
In this study, we evaluate methods for addressing incomplete data using a simulation framework. Simulation provides a vital mechanism for understanding and evaluating the performance of approaches before applying them to real-world cases. Here we simulate multiple incomplete categorical data sets, including three different missing data mechanisms, various number of variables and amounts of missing data. We concentrate here on categorical, or discrete, data due to its ubiquity in population health and social science data (e.g., categorical survey responses, presence or absence of disease). We then evaluate and compare the performance of MICE and SEM with each other and with the standard expedient of using only samples without missing data, by comparing their resulting network structures with the original network structure.
We then apply the best working method (SEM, see Results) to a real-world health and social survey dataset to investigate concurrent chronic diseases in the US elderly population. Multimorbidity (the concurrence of two or more chronic diseases in an individual) places an enormous burden on individuals and health systems, and is expected to grow more in importance as populations age [17,18,19]. Researchers have used a variety of methods to unpick the complexity of combinations of diseases, and identify clusters and risk factors [20, 21]. Among these, BNs have great potential to tackle such complex problems and can help us understand multimorbidity as a complex system of biosocial disadvantage. In our network analysis, we investigate the interactions between presence and treatment of several chronic diseases, cognition, and their associations with health behaviours and other factors including race, gender and socioeconomic status.
Overview of our simulation
Figure 3 shows an overview of our simulation approach. We compare the performance of MICE and SEM on incomplete categorical (discrete) data, and both against doing nothing (e.g., using only complete cases). The main steps are as follows:
1. Generate a random graph. This random graph is also referred to as the original structure in the final step for comparison.
2. Sample data points from the random graph to get the complete data.
3. Introduce missing values to the complete data.
4. Learn the Bayesian network structure, either: (a) from all complete cases, (b) from the data set completed via MICE, or (c) using SEM.
5. Compare learned Bayesian network structures with the original structure.
Flowchart of our simulation approach
We analysed networks with numbers of variables ranging from 2 to 20. For each number of variables, we analysed a range of missing proportions from 0.1 to 0.6 at intervals of 0.1. Each variable number/missing proportion was repeated 100 times. We completed the whole analysis for each of 1000, 5000 and 10,000 sampled data points.
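A sketch of the resulting experimental grid (assuming a full crossing of mechanisms, sample sizes, variable numbers and missingness proportions) gives a sense of the scale of the simulation:

```r
mechanisms   <- c("MCAR", "MAR", "MNAR")
sample_sizes <- c(1000, 5000, 10000)
n_vars       <- 2:20
miss_props   <- seq(0.1, 0.6, by = 0.1)
n_reps       <- 100

grid <- expand.grid(mech = mechanisms, n = sample_sizes, vars = n_vars,
                    prop = miss_props, rep = seq_len(n_reps))
nrow(grid)  # number of simulated incomplete data sets, each analysed by all three methods
```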
Simulated data
Random networks and sampled data
We first generated a randomly connected network structure with the specified number of nodes (variables) using Ide's and Cozman's Generating Multi-connected DAGs (ic-dag) algorithm via the function random.graph from R package bnlearn [22]. We set maximum in-degree for any node at 3, and each node had 3 discrete levels. Various descriptive statistics of these random network structures are shown in Additional file 1; the networks had expected changes: increasing out-degrees, reduced density and clustering, and increased diameter with larger networks. We obtained conditional probability tables (CPTs) for each node by generating random vectors from the Dirichlet distribution using function rdirichlet from R package MCMCpack [23]. The parameter \(\alpha\) of the Dirichlet distribution was 0.5 for nodes with parents and 5 for nodes without parents. This provided our random parameterised BN. We then randomly sampled 1000, 5000 or 10,000 data points from the parameterised BN to get our sampled data using the function rbn from R package bnlearn [22].
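A minimal sketch of this generation step, assuming the bnlearn and MCMCpack packages, a hypothetical 10-node example and the CPT layout expected by custom.fit (which may need adjusting):

```r
library(bnlearn)
library(MCMCpack)

set.seed(42)
node_names <- paste0("V", 1:10)     # hypothetical 10-node example
levels3 <- c("a", "b", "c")         # three discrete levels per node

# Random DAG via Ide's and Cozman's multi-connected DAG algorithm, max in-degree 3.
dag <- random.graph(node_names, method = "ic-dag", max.in.degree = 3)

# Random CPTs drawn from Dirichlet distributions (alpha = 0.5 with parents, 5 without).
cpts <- lapply(node_names, function(node) {
  pars <- parents(dag, node)
  if (length(pars) == 0) {
    return(matrix(rdirichlet(1, rep(5, 3)), ncol = 3, dimnames = list(NULL, levels3)))
  }
  n_cfg <- 3^length(pars)                       # number of parent configurations
  probs <- t(rdirichlet(n_cfg, rep(0.5, 3)))    # one distribution per configuration
  dim(probs) <- rep(3, length(pars) + 1)        # node dimension first, then parents
  dimnames(probs) <- setNames(rep(list(levels3), length(pars) + 1), c(node, pars))
  probs
})
names(cpts) <- node_names
parameterised_bn <- custom.fit(dag, cpts)

# Sample complete data points from the parameterised network.
complete_data <- rbn(parameterised_bn, n = 1000)
```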
For each missing data mechanism, we introduced different amounts of missing data to the sampled data using the function ampute from R package mice [24] (a short usage sketch follows the list below). This function requires a complete data set and specified missing patterns (i.e., the variable or variables that are missing in a given sample). We used the default missing pattern matrix for all simulations, in which the number of missing patterns is equal to the number of variables, and one for each variable is missing. We also used the default relative frequency vector for the missing patterns, so that each missing pattern has the same probability to occur. Thus, the probability of being missing is equal across variables. The data is split into subsets, one for each missing pattern. Based on the probabilities of missingness, each case in each subset can be either complete or incomplete. Finally, the subsets are merged to generate the required incomplete data. The allocated probability for each value to be removed in each subset depends on the specified missing proportion and missing data mechanism [25]:
MCAR The missingness is generated by chance. Each value in the sampled data has the same probability to be incomplete and such probability is computed once the missing proportion is specified [25].
MAR The probability of each value being incomplete is dependent on a weighted sum score calculated from values of other variables. We used the default weights matrix in our simulation, in which all variables except the missing one contribute to the weighted sum score [25].
MNAR Simulating MAR and MNAR data share most procedures during amputation. The only difference is that it is the value of the potential missing value that contributes to the probability of its own missingness [25].
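A minimal sketch of this amputation step is given below; since ampute expects numeric input, the sampled factors are assumed here to be recoded as integers and mapped back to factors afterwards (complete_data refers to the data sampled in the previous step):

```r
library(mice)

# ampute() works on numeric data, so recode the 3-level factors as integers first.
numeric_data <- as.data.frame(lapply(complete_data, as.integer))

amp <- ampute(numeric_data,
              prop = 0.3,      # proportion of incomplete cases
              mech = "MAR")    # one of "MCAR", "MAR", "MNAR"

# Map back to factors, preserving the NAs introduced by ampute().
incomplete_data <- as.data.frame(lapply(amp$amp,
                                        function(v) factor(v, levels = 1:3,
                                                           labels = c("a", "b", "c"))))
```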
Bayesian network structure learning
During the whole study, we used the same BN structure learning procedures to learn from data either before processing or after. That is, procedures were all the same for methods "None", "MICE" and "SEM" in Fig. 3: we used a score and search algorithm, using the BDe score [2] and the tabu search algorithm for searching the best network structure [26]. The imaginary sample size used by BDe was set equal to 1 (default value). A test for the impact of scoring function was performed by also assessing structures learned using the BIC and BDs scores for one dataset configuration (MNAR data, 1000 data points, 0.3 missingness; BDs imaginary sample size set to 1 as default; BIC also used default value for penalty coefficient: log(number data points)*0.5). For "None" and "MICE", we applied the tabu function from R package bnlearn [22]; for SEM the search was incorporated into the iterative steps as described below.
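A minimal sketch of this shared learning step, assuming completed_data is a data frame of factors with no missing values (for example, the complete cases or an imputed data set):

```r
library(bnlearn)

# Score-and-search structure learning used for all three methods:
# tabu search maximising the BDe score with imaginary sample size (iss) 1.
learned_dag <- tabu(completed_data, score = "bde", iss = 1)
```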
No imputation
We used only the complete cases of the simulated incomplete data for BN structure learning.
Structural EM
We applied the SEM algorithm to the incomplete data using the function structural.em from R package bnlearn [22]. We used the default imputation method ("parents") in the E-step, which imputes missing values based on the parents of each node in the current network. In the M-step, we applied tabu search with the BDe score for structure learning and the default maximum likelihood estimation (mle) method for parameter learning. The maximum number of iterations was 5, the default.
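A sketch of this SEM call, under the settings described above, might look as follows; `incomplete` stands for a data frame of factors containing missing values.

```r
# Structural EM with "parents" imputation (E-step) and tabu/BDe + mle (M-step)
library(bnlearn)

sem_res <- structural.em(incomplete,
                         maximize      = "tabu",
                         maximize.args = list(score = "bde", iss = 1),
                         fit           = "mle",
                         impute        = "parents",
                         max.iter      = 5,
                         return.all    = TRUE)
sem_dag     <- sem_res$dag        # learned structure
sem_imputed <- sem_res$imputed    # completed data from the last iteration
```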
Multiple Imputation by Chained Equations
As all the variables in this study were categorical and unordered, we used the polytomous logistic regression model for prediction, using the function mice from R package mice [24]. The number of iterations was 5, the default.
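A minimal sketch of the MICE step is shown below. Using only the first of the completed data sets for subsequent structure learning is an assumption made for illustration; the text does not state how the multiply imputed data sets were combined.

```r
# Multiple imputation by chained equations with polytomous logistic regression
library(mice)
library(bnlearn)

imp          <- mice(incomplete, method = "polyreg", maxit = 5, printFlag = FALSE)
completed    <- complete(imp, 1)                       # first completed data set
learned_mice <- tabu(completed, score = "bde", iss = 1)
```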
A toy example comparing four skeleton networks (from left to right): original network, None (complete cases), SEM, and MICE. The original network serves as the reference for comparison. Blue arcs indicate arcs that exist in the original network but are missed by a method (\(False\;Negative\)). Red arcs represent arcs found by a method but not present in the original network (\(False\;Positive\)). Bold arcs are arcs found by a method that are also present in the original network (\(True\;Positive\))
Evaluation of recovered network structures
To compare the learned BN structures with the original ones, we compared their skeletons using the functions compare and skeleton from R package bnlearn [22]. We compared skeletons, which represent all links in the network as undirected links, to deal with variation in link direction arising from different equivalence classes. We explored comparing equivalence classes directly, but a single missing or extra link could substantially change the equivalence class, giving erroneous results for dependencies that were in fact accurately recovered. For example, a link which was directed in the equivalence class of the simulated network could, due to a missing link elsewhere, be undirected in the equivalence class of the recovered network; this would be recorded not only as one missing link but also as an additional, incorrect, extra link. Comparing the undirected skeletons resolved this issue. We measured the performance of each method by computing precision and recall (sensitivity) from the comparison results. Precision measures the extent to which a method avoids adding false arcs to the network, while recall measures its sensitivity in recovering the arcs present in the target. Their equations are as follows:
$$\begin{aligned} Precision = \frac{True\;Positive}{True\;Positive+False\;Positive} \end{aligned}$$
$$\begin{aligned} Recall = \frac{True\;Positive}{True\;Positive + False\;Negative} \end{aligned}$$
where \(True\;Positive\) represents finding arcs present in the original structure, \(False\;Positive\) represents finding arcs that are not in the original structure, and \(False\;Negative\) represents lack of an arc that is present in the original structure (Fig. 4).
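In bnlearn, this comparison and the two measures can be computed as sketched below; `original_dag` and `learned_dag` are placeholders for the simulated and learned networks.

```r
# Skeleton comparison and precision/recall computation
library(bnlearn)

cmp <- compare(skeleton(original_dag), skeleton(learned_dag))  # list with tp, fp, fn
precision <- cmp$tp / (cmp$tp + cmp$fp)
recall    <- cmp$tp / (cmp$tp + cmp$fn)
```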
We divided the number of variables into 6 groups for analysis: 2-5, 6-8, 9-11, 12-14, 15-17 and 18-20 variables. For each group, at each missing proportion and each sampled data amount, we performed a one-way ANOVA to test whether there were any statistically significant differences between the means of the three methods. We applied a Bonferroni correction to the resulting p-values to account for these multiple comparisons. If there were significant Bonferroni-corrected results (p < 0.05) in a variable group/missing proportion combination, we performed Tukey's honestly significant difference (HSD) test on the pairwise comparisons between the three methods. The same procedures were applied for both precision and recall.
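A sketch of this testing procedure for a single combination is shown below; `results` (with columns `recall` and `method`) and `n_tests` (the total number of ANOVAs across all combinations) are illustrative names.

```r
# One-way ANOVA with Bonferroni correction, followed by Tukey's HSD if significant
fit   <- aov(recall ~ method, data = results)
p_raw <- summary(fit)[[1]][["Pr(>F)"]][1]
p_adj <- p.adjust(p_raw, method = "bonferroni", n = n_tests)
if (p_adj < 0.05) {
  print(TukeyHSD(fit))   # pairwise comparisons between the three methods
}
```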
Evaluation of imputed data values
We explored the accuracy of MICE's and SEM's imputation using a subset of the simulations. We extracted the completed datasets from the last iteration of SEM and MICE for each missing mechanism (MCAR, MAR, MNAR) for 1000 data points at missing proportion 0.3, using 10 datasets each of 10 and 20 variables. We calculated the Hamming distance between each imputed dataset and the original (no missing values) simulated dataset. We performed Student's t-test to test whether there were any statistically significant differences between the two methods in the mean Hamming distance of imputed versus original data.
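The comparison can be sketched as follows; `sem_completed`, `mice_completed` and `originals` are illustrative lists of corresponding data frames, and the Hamming distance is taken here simply as the number of mismatching cells.

```r
# Imputation accuracy: Hamming distance to the original data, compared by t-test
hamming <- function(completed, original) {
  sum(as.matrix(completed) != as.matrix(original))
}
d_sem  <- mapply(hamming, sem_completed,  originals)
d_mice <- mapply(hamming, mice_completed, originals)
t.test(d_sem, d_mice)
```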
Real-world data application
We used self-reported and nurse-collected data from the United States Health and Retirement Study (HRS) [27,28,29], a representative study of adults aged 50 and older. We merged the interview data (N = 42233) [27] collected in 2016, the harmonised data (N = 42233) [29] and the laboratory data (N = 7399) [28] collected in the same year. As we focus on imputation methods, we set any provided imputed values to missing (i.e., so that we could apply our own methods). To ensure a representative sample of older respondents, and given the focus on multimorbidity, we excluded those aged below 50 (N = 279). To ensure biomarker and survey data were collected concurrently, we excluded respondents whose interviews were completed in 2017 or 2018 (N = 1394). Our analysis dataset consisted of 29 categorical variables, each with two to four levels. Supplementary Table 1 in Additional file 1 gives a detailed description of each variable. This cleaned subset contained 5726 observations, of which only 2688 cases were complete (corresponding to a missingness proportion of 0.53).
We applied the best-performing method, SEM (see Results), to this real-world data. Because SEM includes random elements, we averaged across multiple repeats to capture the most complete picture of the relationships among the real-world variables. To do this, we set different random seeds using the base function set.seed in the R environment before applying the function structural.em from R package bnlearn [22] (using tabu search and the BDe score in the M-step, as above). In this way, we learned 100 network structures from the whole incomplete data set using the SEM algorithm. We determined the average network across the 100 repetitions based on the arc strengths of each learned structure, calculated from the completed partially directed acyclic graph using the function arc.strength, also from bnlearn. As the resulting arc strengths were strongly bimodal (see Results), we included in the final average network all links in the higher mode. While the resulting networks were partially directed, we present the skeletons (all links shown as undirected) because we do not wish to imply causal relationships between these measured variables; we are presenting statistical associations only.
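One way to reproduce this averaging step in bnlearn is sketched below. It uses custom.strength and averaged.network to compute arc frequencies across the 100 learned structures, as a stand-in for the arc.strength-based calculation described above; `hrs_data` is a placeholder for the cleaned incomplete data set.

```r
# Repeated SEM runs with different seeds, followed by model averaging
library(bnlearn)

nets <- lapply(1:100, function(i) {
  set.seed(i)
  structural.em(hrs_data, maximize = "tabu",
                maximize.args = list(score = "bde", iss = 1),
                impute = "parents")
})
strengths <- custom.strength(nets, nodes = names(hrs_data))
avg_net   <- averaged.network(strengths, threshold = 0.87)  # keep the higher mode
avg_skel  <- skeleton(avg_net)                              # report undirected links
```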
We then further explored relationships among the real-world variables based on the network structure by applying hierarchical divisive clustering from the R package igraph [30] to detect densely connected variables in the learned average network. This identifies community groups of nodes that are densely connected to each other but sparsely connected to the rest of the network, based on edge betweenness and ignoring edge directions.
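A sketch of this community detection step using igraph's edge-betweenness (Girvan-Newman) clustering, applied to the undirected skeleton from the previous sketch:

```r
# Community detection on the undirected skeleton of the averaged network
library(igraph)

g <- simplify(graph_from_edgelist(arcs(avg_skel), directed = FALSE))
communities <- cluster_edge_betweenness(g)
split(V(g)$name, membership(communities))   # variables grouped by community
```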
Performance on MCAR data with 1000 data points. Precision (A) and recall (B) of three different methods of handling incomplete data: none, multiple imputation by chained equations (MICE) and structural expectation-maximization (SEM). Rows represent different missing proportions and columns indicate different groups of numbers of variables. Barplots show means with error bars representing standard error of the mean. Adjusted p-values for ANOVAs are displayed in those panels that are significant at the 0.05 level or better. Lines representing significant Tukey's HSD pairwise tests are shown and annotated as: *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001
Performance on MAR data with 1000 data points. Precision (A) and recall (B) of three different methods of handling incomplete data: none, multiple imputation by chained equations (MICE) and structural expectation-maximization (SEM). Rows represent different missing proportions and columns indicate different groups of numbers of variables. Barplots show means with error bars representing standard error of the mean. Adjusted p-values for ANOVAs are displayed in those panels that are significant at the 0.05 level or better. Lines representing significant Tukey's HSD pairwise tests are shown and annotated as: *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001
Performance on MNAR data with 1000 data points. Precision (A) and recall (B) of three different methods of handling incomplete data: none, multiple imputation by chained equations (MICE) and structural expectation-maximization (SEM). Rows represent different missing proportions and columns indicate different groups of numbers of variables. Barplots show means with error bars representing standard error of the mean. Adjusted p-values for ANOVAs are displayed in those panels that are significant at the 0.05 level or better. Lines representing significant Tukey's HSD pairwise tests are shown and annotated as: *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001
Distribution of the difference in means of recall of three pairwise comparisons among three methods when there are 1000 data points: MICE's increase over doing nothing (red), SEM's increase over nothing (blue), and SEM's increase over MICE (green) A. MCAR data. B. MAR data. C. MNAR data. The y-axis represents the difference of the mean recall (averaged over the 100 simulations). The x-axis represents the number of variables from 2-20. Column panels represent missing proportions
Recovered network structures
A total of 1026 scenarios and 102,600 data sets were analysed.
Results for all three missingness mechanisms shared similar features across the three levels of sampled data points. Detailed results are shown in Fig. 5 for MCAR, Fig. 6 for MAR, and Fig. 7 for MNAR with 1000 data points. In general, the methods that addressed missing data performed better than doing nothing, and SEM performed better than MICE. There were more significant differences for recall than for precision, and the number of significant differences grew with increasing proportion of missingness and number of variables. These observations were consistent with 5000 and 10,000 data points, although the out-performance of SEM over MICE decreased with 5000 data points and was even less pronounced with 10,000 data points. Detailed results for 5000 and 10,000 data points are shown in Additional file 1.
In addition to the pairwise comparisons between the three methods regarding precision and recall, we also compared the performance of each method across the three missing data mechanisms (MCAR, MAR and MNAR) for each level of data points. However, our results did not show any significant differences in performance across the mechanisms.
We summarise the patterns of recall across the simulation experiments with 1000 data points in Fig. 8. This demonstrates substantial improvements in performance when using either method (compared to doing nothing), which start to emerge consistently at a 0.3 level of missingness and increase as the level of missingness and the number of variables increase. Generally, SEM outperforms MICE, but the difference does not appear to be conditioned by the level of missingness or the missing data mechanism. SEM's outperformance increases over low numbers of variables and then appears to reach an asymptote above 5 or 6 variables. This pattern was also observed with 5000 and 10,000 data points (see Additional file 1), although the scale of the observed difference was much smaller than with 1000 data points (differences around 0.01-0.02 compared to 0.1-0.2).
The same general pattern of SEM outperforming MICE, and both imputation methods outperforming doing nothing, also held with the test using the BIC and BDs scores (see Additional file 1).
Imputed data
We further compared the performance of MICE and SEM in terms of missing data completion, using 1000 data points with a 0.3 level of missingness. The data completed by SEM in the last iteration were more similar to the original simulated data than those completed by MICE (Fig. 9). SEM performed significantly better than MICE in data imputation, and this finding was consistent for both 10 and 20 variables and across all three missing mechanisms, with p < 0.0001 for all comparisons.
Comparison of the mean Hamming distance of MICE and SEM imputed data from the simulated data at 0.3 level of missingness with 1000 data points, using 10 datasets per condition. Barplots show means with error bars representing standard error of the mean. Rows represent different numbers of variables and columns indicate different missing mechanisms. Lines representing significant Student's t-tests are shown and annotated as: ****, p < 0.0001
Figure 10 displays an overview of the levels of missingness in the cleaned HRS data set. Most variables have less than 5% missing values; a few have \(\sim\)10% or more, with the highest value being 33.1% missing for household income (hhincome). There is a large number of missing patterns comprising different combinations of variables; only a few variables are missing individually.
The arc strengths averaged over the 100 repetitions of SEM applied to this data were strongly bimodal, with individual links having strength 0.87-1.0 (representing presence in 87-100% of the networks) or 0.05 or less. Thus, we generated a final averaged network with arc strengths of 0.87 or greater (Fig. 11).
Five community groups were identified within this network structure (nodes of each community are coloured the same in Fig. 11). Common cardiovascular conditions, such as heart disease, stroke and high blood pressure (HBP), are clustered with total cholesterol level and treatment for those conditions. Diabetes, HbA1c level and diabetes treatment are clustered. Another cluster contains arthritis, self-assessed memory decline and BMI level. Diabetes is directly linked to HBP, HbA1c and BMI levels. The other two clusters contain a mixture of diseases and social factors. Cognitive impairment (TICS-M) is clustered with cancer, lung disease, smoking and race. It is also directly linked to education whereas education clusters with high-density lipoprotein (HDL), drinking, exercise, gender, cohabitation and household income. We find expected links between health behaviours and chronic conditions, e.g., smoking and lung disease. Biomarkers are directly linked to socio-demographic and socio-economic factors, e.g., alcohol use is directly linked to HDL cholesterol level and gender. We also find some unexpected links and clusters: arthritis is directly linked to lung disease, and cancer treatment is directly linked to individual income.
The main aim of this work was to quantitatively evaluate and compare the performance of a common form of imputation (MICE) and SEM on learning BN structures from incomplete data, such as is commonly found in observational health and social datasets. According to our simulation results, as might be expected, both MICE and SEM performed better than no imputation. In addition, significant improvements in recall and precision were observed with SEM versus MICE. This disparity might be explained by the fact that SEM uses additional information, i.e. the structure of the network, to deal with missing data, whereas MICE relies only on the multivariate associations between variables.
We note that SEM performs comparatively well under the MNAR mechanism. This is significant because MNAR is a complex problem to which there is no obvious solution. In MNAR data, a particular value's missingness rate depends on the real value itself and some unobserved predictors. Although it is theoretically possible to calculate the missing data rate given the correct set of explanatory factors, in practice it is very hard to identify the combination of factors that influences the missingness rate [31]. Taking blood glucose measurements as an example, people suffering from hyperglycemia will be more likely to drop out of clinical surveys because they feel unwell. However, this assumption is unverifiable using the observed data, and in practice we cannot distinguish between MAR and MNAR data [31]. Multiple imputation methods would therefore generate biased results if applied to MNAR data, and the issue can only be addressed by sensitivity analysis to evaluate the difference under different assumptions about the missing data mechanism [31]. In the case of BN structure learning, our results suggest that SEM may be a principled approach to dealing with MNAR data. However, this finding should be validated by conducting further experiments under varying MNAR conditions.
The validity of multiple imputation methods also depends on the choice of statistical approach for analysing the sampled complete data sets and the resulting distribution of estimates for each missing value [8]. More sophisticated approaches are required if the MNAR mechanism appears in different types of variables. Galimard and colleagues [32] recently proposed a new imputation model based on Heckman's model [33, 34] to address the issue caused by MNAR binary or continuous outcome variables. They then integrated this model into MICE for managing MAR predictors at the same time. The function mice.impute.hecknorm from R package miceMNAR [32] can be used to impute incomplete data with MNAR outcome variables and MAR predictors. Although it has been proposed that applying imputation methods on multivariate data before learning BNs can be problematic [32, 35], this novel method might be helpful for the further development of BN structure learning from incomplete data.
While SEM did consistently perform statistically significantly better than MICE, we point out that the differences were relatively small (on the order of <5% for both precision and recall). The overwhelming signal in our results is that imputation is far superior to using only complete cases (e.g., see Fig. 8). SEM can be more computationally intensive than MICE, particularly with higher missing proportion, thus there could be a trade-off between accuracy and computation time. However, these computational times are relatively small (seconds–minutes), thus we still recommend using the better performing SEM.
We showed the usefulness of SEM by applying it to real-world linked biomedical and survey data on chronic diseases, in a dataset which had a high level of missingness. The network we recover from real-world data highlights pivotal interactions among several chronic diseases, health behaviours and social risk factors [20]. As seen in other studies we observe clustering of cardiovascular diseases [36] and metabolic conditions, and treatments for them (e.g. diabetes). Known risk factors of HBP, BMI and smoking either directly or indirectly link to these conditions, although HBP stands apart as being directly linked to diabetes, stroke and heart disease. The connections between cognitive impairment, education and race have been previously observed in the US context [37]. Our analysis also highlights potential areas of investigation. Cognitive impairment is closely associated with cancer, but stands alone from self-assessed memory decline. Cancer treatment is directly linked to individual income, suggesting socioeconomic disparities in cancer treatment, and/or differential survival patterns by income.
Our simulation study showed better performance of SEM, and our real-world case study was able to reveal features of interest from a dataset with high levels of missingness. As in most simulation studies, the main drawback is that simulated data sampled from random networks are not guaranteed to reflect real data. Our simulated data have two main limitations. First, our simulation used all-categorical variables and an even distribution of missing values among variables, which is not very plausible in real-world social science data. For example, some survey questions (e.g., income) will suffer higher levels of missingness due to refusal than other, less sensitive ones (e.g., gender). These features probably help to reduce the differences between missing data mechanisms, especially for MNAR data, and perhaps also help to explain why there were no significant differences across the three missing data mechanisms in our simulation results, particularly for the MICE method. Thus, future extensions of this work should incorporate more realistic simulations with mixtures of variable types and uneven missingness patterns. Second, our simulation study deals with cross-sectional, non-hierarchical data, whereas in real social science data observations are often clustered or contain repeated measures from individuals. This can lead to a different, complex and important form of missingness: survey attrition. In future work, we could investigate the application of SEM to more complicated real-world data with more complex missing patterns (e.g., longitudinal data).
Distribution of missing values in the real-world data set. (A) Proportion of missing values in each variable (named as in Supplementary Table 1 of Additional file 1), shown as a bar chart. (B) Missing patterns, shown as a heatmap with proportions to the right of the plot. Rows represent a single missing pattern ('Combinations') and columns variables, with the variable missing in a given pattern coloured green (blue otherwise). The proportion of each missing pattern is shown as a horizontal bar chart to the right of the heatmap (summing to 0.53 for missing patterns). The very bottom row represents the pattern with no missing values, with its proportion bar in blue with value 0.47
The average network learned from SEM. Nodes are labelled with variable names as found in Supplementary Table 1 of Additional file 1. Nodes are coloured to represent the different groups as discovered by community analysis on the network structure
Our simulation results indicate that both SEM and MICE improve the completeness of BN structures learned from partially observed data. In most circumstances, especially when there is a relatively high number of variables and missing values, SEM performs better than MICE. This suggests that making use of extra information from the BN structure within SEM iterations could enhance its capability of capturing the real network structure from incomplete data. In our real-world data application, SEM identified expected interactions between common chronic diseases, and provided additional insights about the links between socio-demographic and socio-economic factors and chronic conditions. Our study suggests that BN researchers working with incomplete biomedical and social survey data should use SEM to deal with missing data.
The data that support the findings of this study are publicly available from the University of Michigan Health and Retirement Study (HRS; https://hrsdata.isr.umich.edu/), based on relevant data sharing policy.
MICE:
Multiple imputation by chained equations
SEM:
Structural expectation-maximization
BNs:
Bayesian networks
DAGs:
Directed acyclic graphs
MCAR:
Missing completely at random
MAR:
Missing at random
MNAR:
Missing not at random
CPTs:
Conditional probability tables
MDL:
Minimal description length
BDe:
Bayesian Dirichlet equivalent
BIC:
Bayesian Information Criterion
BDs:
Bayesian Dirichlet sparse
HRS:
Health and Retirement Study
TICS-M:
Telephone interview for cognitive status measurement
HDL:
High-density lipoprotein
HBP:
High blood pressure
Pearl J. Probabilistic reasoning in intelligent systems: networks of plausible inference. Burlington: Morgan Kaufmann; 1988.
Heckerman D, Geiger D, Chickering DM. Learning Bayesian networks: The combination of knowledge and statistical data. Mach Learn. 1995;20:197–243.
Lin JH, Haug PJ. Exploiting missing clinical data in Bayesian network modeling for predicting medical problems. J Biomed Inform. 2008;41:1–14.
Varshney D, Kumar S, Gupta V. Predicting information diffusion probabilities in social networks: A Bayesian networks based approach. Knowl-based Syst. 2017;133:66–76.
Werhli AV, Husmeier D. Reconstructing gene regulatory networks with Bayesian networks by combining expression data with multiple sources of prior knowledge. Stat Appl Genet Mol Biol. 2007;6:15.
Smith VA, Yu J, Smulders TV, Hartemink AJ, Jarvis ED. Computational inference of neural information flow networks. PLoS Comput Biol. 2006;2:e161.
Milns I, Beale CM, Smith VA. Revealing ecological networks using Bayesian network inference algorithms. Ecology. 2010;91:1892–9.
Sterne JA, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, et al. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. Brit Med J. 2009;338:b2393.
Rubin DB. Inference and missing data. Biometrika. 1976;63:581–92.
Schafer JL, Graham JW. Missing data: our view of the state of the art. Psychol Methods. 2002;7:147–77.
Raghunathan TE, Lepkowski J, Van Hoewyk JH, Solenberger PW. A multivariate technique for multiply imputing missing values using a sequence of regression models. Surv Methodol. 2001;27:85–95.
Azur MJ, Stuart EA, Frangakis C, Leaf PJ. Multiple imputation by chained equations: what is it and how does it work? Int J Meth Psychiatr Res. 2011;20:40–9.
Scutari M. Bayesian network models for incomplete and dynamic data. Stat Neerl. 2020;74:397–419.
Lauritzen SL. The EM algorithm for graphical association models with missing data. Comput Stat Data Anal. 1995;19:191–201.
Friedman N. The Bayesian Structural EM Algorithm. In: Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence. UAI'98. San Francisco: Morgan Kaufmann; 1998. p. 129–38.
Friedman N. Learning belief networks in the presence of missing values and hidden variables. In: Fourteenth International Conference on Machine Learning (ICML). San Francisco: Morgan Kaufmann; 1997. p. 125–33.
Uijen AA, van de Lisdonk EH. Multimorbidity in primary care: prevalence and trend over the last 20 years. Eur J Gen Pract. 2008;14:28–32.
Johnston MC, Crilly M, Black C, Prescott GJ, Mercer SW. Defining and measuring multimorbidity: a systematic review of systematic reviews. Eur J Public Health. 2019;29:182–9.
Kingston A, Robinson L, Booth H, Knapp M, Jagger C, project M. Projections of multi-morbidity in the older population in England to 2035: estimates from the Population Ageing and Care Simulation (PACSim) model. Age Ageing. 2018;47:374–80.
Prados-Torres A, Calderón-Larrañaga A, Hancco-Saavedra J, Poblador-Plou B, van den Akker M. Multimorbidity patterns: a systematic review. J Clin Epidemiol. 2014;67:254–66.
Cezard G, McHale CT, Sullivan F, Bowles JKF, Keenan K. Studying trajectories of multimorbidity: a systematic scoping review of longitudinal approaches and evidence. BMJ Open. 2021;11:e048485.
Scutari M. Learning Bayesian Networks with the bnlearn R Package. J Stat Softw. 2010;35:1–22.
Martin AD, Quinn KM, Park JH. MCMCpack: Markov Chain Monte Carlo in R. J Stat Softw. 2011;42:22.
van Buuren S, Groothuis-Oudshoorn K. mice: Multivariate Imputation by Chained Equations in R. J Stat Softw. 2011;45:1–67.
Schouten R, Lugtig P, Brand J, Vink G. Generating missing values for simulation purposes: A multivariate amputation procedure. J Stat Comput Simul. 2018;88(15):1909–30.
Glover F. Tabu Search - Part I. INFORMS J Comput. 1989;1:190–206.
Health and Retirement Study, (RAND HRS Longitudinal File 2018 (V1)) public use dataset. Produced and distributed by the University of Michigan with funding from the National Institute on Aging (grant number NIA U01AG009740). Ann Arbor; 2021.
Health and Retirement Study, (2016 Biomarker Data (Early, Version 1.0)) public use dataset. Produced and distributed by the University of Michigan with funding from the National Institute on Aging (grant number NIA U01AG009740). Ann Arbor; 2020.
Health and Retirement Study, (Harmonized HRS (VERSION C)) public use dataset. Produced and distributed by the University of Michigan with funding from the National Institute on Aging (grant number NIA U01AG009740). Ann Arbor; 2022.
Csardi G, Nepusz T. The igraph software package for complex network research. InterJournal. 2006;Complex Systems:1695.
Molenberghs G, Fitzmaurice GM, Kenward KG, Tsiatis AA, Verbeke G. Handbook of Missing Data Methodology. Chapman & Hall/CRC Handbooks of Modern Statistical Methods; 2014.
Galimard JE, Chevret S, Curis E, Resche-Rigon M. Heckman imputation models for binary or continuous MNAR outcomes and MAR predictors. BMC Med Res Methodol. 2018;18:1–13.
Heckman JJ. The common structure of statistical models of truncation, sample selection and limited dependent variables and a simple estimator for such models. In: Annals of Economic and Social measurement. vol. 5. Cambridge: National Bureau of Economic Research, Inc; 1976. p. 475–92.
Heckman JJ. Sample selection bias as a specification error. Econometrica. 1979;47:153–61.
Kalton G. The treatment of missing survey data. Surv Methodol. 1986;12:1–16.
Vetrano DL, Roso-Llorach A, Fernández S, Guisado-Clavero M, Violán C, Onder G, et al. Twelve-year clinical trajectories of multimorbidity in a population of older adults. Nat Commun. 2020;11:1–9.
Vásquez E, Botoseneanu A, Bennett JM, Shaw BA. Racial/ethnic differences in trajectories of cognitive function in older adults: Role of education, smoking, and physical activity. J Aging Health. 2016;28:1382–402.
The authors acknowledge the Research/Scientific Computing teams at The James Hutton Institute and NIAB for providing computational resources and technical support for the "UK's Crop Diversity Bioinformatics HPC" (BBSRC grant BB/S019669/1), use of which has contributed to the results reported within this paper. Access to this was provided via the University of St Andrews Bioinformatics Unit which is funded by a Wellcome Trust ISSF award (grant 105621/Z/14/Z and 204821/Z/16/Z).
XK was supported by a World-Leading PhD Scholarship from St Leonard's Postgraduate School of the University of St Andrews. VAS and KK were partially supported by HATUA, The Holistic Approach to Unravel Antibacterial Resistance in East Africa, a three-year Global Context Consortia Award (MR/S004785/1) funded by the National Institute for Health Research, Medical Research Council and the Department of Health and Social Care. KK is supported by the Academy of Medical Sciences, the Wellcome Trust, the Government Department of Business, Energy and Industrial Strategy, the British Heart Foundation, Diabetes UK, and the Global Challenges Research Fund [Grant number SBF004\1093]. KK is additionally supported by the Economic and Social Research Council HIGHLIGHT CPC- Connecting Generations Centre [Grant number ES/W002116/1].
School of Biology, Sir Harold Mitchell Building, Greenside Place, KY16 9TH, St Andrews, UK
Xuejia Ke & V. Anne Smith
School of Geography and Sustainable Development, Irvine Building, North Street, KY16 8AL, St Andrews, UK
Xuejia Ke & Katherine Keenan
Xuejia Ke
Katherine Keenan
V. Anne Smith
XK performed the analyses on simulated and real data, designed the figures and wrote the initial draft of the manuscript. KK assisted with the case study data and interpretation. VAS assisted with Bayesian network analyses. VAS conceptualised the general study idea. All authors conceptualised specific questions. All authors revised and agreed on the final manuscript.
Correspondence to V. Anne Smith.
Supplementary Figs. 1-8, showing simulation results for 5000 and 10,000 data points, Supplementary Fig. 9, showing simulation results of scoring functions BIC and BDs on MNAR data with 1000 data points and 0.3 missing proportion, Supplementary Table 1, showing description of variables in the real-world dataset, Supplementary Tables 2 and 3, showing descriptive statistics of random network structures.
Ke, X., Keenan, K. & Smith, V.A. Treatment of missing data in Bayesian network structure learning: an application to linked biomedical and social survey data. BMC Med Res Methodol 22, 326 (2022). https://doi.org/10.1186/s12874-022-01781-9
Simulation study | CommonCrawl |
Pooled clone collections by multiplexed CRISPR-Cas12a-assisted gene tagging in yeast
Benjamin C. Buchmuller, Konrad Herbst, Matthias Meurer, Daniel Kirrmaier, Ehud Sass, Emmanuel D. Levy & Michael Knop
Nature Communications volume 10, Article number: 2960 (2019)
Genomic engineering
Clone collections of modified strains ("libraries") are a major resource for systematic studies with the yeast Saccharomyces cerevisiae. Construction of such libraries is time-consuming, costly and confined to the genetic background of a specific yeast strain. To overcome these limitations, we present CRISPR-Cas12a (Cpf1)-assisted tag library engineering (CASTLING) for multiplexed strain construction. CASTLING uses microarray-synthesized oligonucleotide pools and in vitro recombineering to program the genomic insertion of long DNA constructs via homologous recombination. One simple transformation yields pooled libraries with >90% of correctly tagged clones. Up to several hundred genes can be tagged in a single step and, on a genomic scale, approximately half of all genes are tagged with only ~10-fold oversampling. We report several parameters that affect tagging success and provide a quantitative targeted next-generation sequencing method to analyze such pooled collections. Thus, CASTLING unlocks avenues for increasing throughput in functional genomics and cell biology research.
The systematic screening of arrayed biological resources in high-throughput has proven highly informative and valuable to disentangle gene and protein function. For eukaryotic cells, a large body of such data has been obtained from yeast strain collections ("libraries") in which thousands of open-reading frames (ORFs) are systematically altered in identical ways, for example, by gene inactivation or over-expression to determine gene dosage phenotypes and genetic interactions1,2,3. Likewise, gene tagging, for example with fluorescent protein reporters, has been used in functional genomics to study protein abundance4, localization5, turnover6,7, or protein–protein interactions8,9,10.
Due to their genewise construction, producing arrayed clone collections is typically time-consuming and cost-intensive. For yeast, this has been partly addressed with the development of SWAT libraries in which a generic N- or C-terminal tag can be systematically replaced with the desired reporter for tagging any ORF in the genome11,12. However, manipulation and screening of arrayed libraries remains dependent on special equipment to handle the strain collections and is confined to the genetic background of the yeast strain BY474113 in which most of these libraries were constructed. Therefore, arrayed libraries cannot address current and future demands in functional genomics that embrace the systematic analysis of complex traits or the comparison of different strains or species14.
We imagine that a paradigm shift from arrayed to pooled library generation may offer a solution: experimentation with pooled biological resources is already well established15 and the phenotype-to-genotype relationship can be inferred conveniently by genotyping phenotypically distinct subsets of pooled libraries using next-generation sequencing (NGS).
To generate pooled libraries rapidly and independently of the genetic background, an efficient strategy for introducing the genetic alterations is required. For example, RNA-programmable CRISPR (clustered regularly interspaced short palindromic repeat)-associated endonucleases have revolutionized the creation of pooled collections of gene activation and inactivation mutants in mammalian cells16,17,18 since thousands of CRISPR guide RNAs (gRNAs) can be produced by cost-effective microarray-based oligonucleotide synthesis. In bacteria and yeast, strategies that exploit homologous recombination have enabled multiplexed gene editing by delivering short repair templates on the same oligonucleotides as the gRNA19,20,21,22 with applications for phenotypic profiling of genomic sequence variations.
Because of high-throughput, low-cost, and broad host versatility, it is interesting to leverage these CRISPR-based methods beyond loss- or gain-of-function screens for the precise insertion of longer DNA constructs that deliver reporter molecules or tags to monitor the different cellular components encoded in the genome. Rapid access to such collections would synergize, for example, with image-activated cell sorting23, and enable to use subcellular localization as a criterion for cell sorting.
To exert gene tagging in a pooled format, thousands of DNA constructs must be generated, each containing the reporter gene flanked with locus-specific homology arms and paired with a corresponding gRNA. However, parallel construction of thousands of such constructs is challenging.
Here, we describe "CRISPR-assisted tag library engineering" (CASTLING) to create pooled collections of hundreds to thousands of yeast clones in a single reaction tube. All clones contain the same, large DNA construct (up to several kb in length) accurately inserted at a different, yet precisely specified chromosomal locus. CASTLING is compatible with microarray-based oligonucleotide synthesis since each insertion is specified by a single oligonucleotide only. Our method employs an intramolecular recombineering procedure that allows the conversion of oligonucleotide pools into pools of tagging cassettes.
In this proof-of-concept study, we establish CASTLING in the yeast Saccharomyces cerevisiae using gene tagging with fluorescent protein reporters as an example. We derive a set of rules to aid designing effective CRISPR RNAs (crRNAs) for the CRISPR endonuclease Cas12a (formerly known as Cpf1)24 for C-terminal tagging of genes in yeast, and determine parameters to maximize tagging success in libraries of different sizes. We use a simple assay based on fluorescence-activated cell sorting (FACS) to demonstrate how CASTLING libraries can be used for proteome profiling and ad hoc characterization of previously uncharacterized proteins, and provide a targeted NGS method for the quantitative analysis of such pooled experiments.
Gene tagging with SICs
The main component of CASTLING is a linear DNA construct that comprises multiple genetic elements: the "feature" for genomic integration such as a fluorescent protein tag, a selection marker, a gene for a locus-specific Cas12a crRNA and flanking homology arms to direct the genomic insertion of the DNA fragment by homologous recombination (Fig. 1a). We conceptually termed these DNA constructs "self-integrating cassettes" (SICs).
CRISPR-Cas12a-assisted single gene-tagging in yeast. a After transformation of the self-integrating cassette (SIC) into a cell, the CRISPR RNAs (crRNA) expressed from the SIC directs a CRISPR-Cas12a endonuclease to the genomic target locus where the DNA double strand is cleaved. The lesion is repaired by homologous recombination using the SIC as repair template so that an in-frame gene fusion is observed. b Efficiency of seven SICs of C-terminal tagging of highly expressed open-reading frames (ORFs) with a fluorescent protein reporter, in the absence (gray) or presence (purple) of Francisella novicida U112 (FnCas12a). Colony-forming units (CFUs) per microgram of DNA and cells used for transformation, and integration fidelity by colony fluorescence are shown. c Co-integration events upon simultaneous transformation of two SICs directed against either ENO2 or PDC1. Both SICs confer resistance to Geneticin (G-418), but contain different fluorescent protein tags. Colonies exhibiting green and red fluorescence (arrows) were streaked to identify true co-integrands. False-color fluorescence microscopy images show nuclear Pdc1-GFP (green fluorescent protein) in green and the cytosolic Eno2-RFP in magenta; scale bar 5 µm. d Titration of both SICs against each other (lower panel) with evaluation of GFP-tagged (GFP+), red fluorescent protein (RFP)-tagged (RFP+) or co-transformed (GFP+ RFP+) colonies. b–d Source data are provided as a Source Data file
We used Cas12a from Francisella novicida U112 (FnCas12a), which is functional in yeast25, because the genomic target space of the Cas12a endonucleases is defined by A/T-rich protospacer-adjacent motifs (PAMs)26,27,28,29. This makes Cas12a endonucleases well suited for genetic engineering at transcriptional START and STOP sites in many organisms (Supplementary Fig. 1).
To test the SIC strategy, we generated SICs for tagging several highly expressed genes with a fluorescent protein reporter. After individual transformation of the SICs and marker selection, we obtained 100–1000 times more colonies from hosts that had transiently expressed a Cas12a endonuclease as compared to a host that did not (Fig. 1b). Also, the presence of a crRNA gene specific for the target locus of the SIC was required (Supplementary Fig. 2), indicating that a functional crRNA transcribed from the linear DNA fragment promotes the integration of a SIC. Based on fluorescent colony counts, tagging fidelity increased from 50–85% in the absence of Cas12a to 95–98% when recombination was stimulated by the action of Cas12a (Fig. 1b).
We also tested Cas12a endonucleases from other species24, finding that Cas12a from Acidaminococcus sp. BV3L6 (AsCas12a) showed similar activity as FnCas12a (Supplementary Fig. 3a–c). However, we continued with FnCas12a since it offered a broader genomic target space in the yeast genome than AsCas12a (Supplementary Fig. 4).
Because of the high efficiency of SIC integration, we worried that multiple loci could be tagged within the same cell when different SICs were transformed as pools. We therefore transformed a mixture of two SICs, one to tag ENO2 with mCherry and the other one to tag PDC1 with sfGFP. We detected only a few individual colonies where both genes were fluorescently tagged (Fig. 1c), independent of the relative concentration of the two SICs used for transformation (Fig. 1d). Therefore, tagging multiple loci in the same cell would rarely occur if more than one SIC was transformed simultaneously.
Implementing CASTLING for pooled gene tagging
To produce many different SICs in a pooled format using microarray-synthesized oligonucleotides, all gene-specific elements of a SIC, that is, the crRNA sequence and both homology arms, must be contained in a single oligonucleotide—one for each target locus (Fig. 2a). In turn, this demands a strategy to convert these oligonucleotides in bulk into the corresponding SICs.
CRISPR-Cas12a (Cpf1)-assisted tag library engineering (CASTLING) in a nutshell. a For each target locus, a DNA oligonucleotide with site-specific homology arms (HAs) and a CRISPR spacer encoding a target-specific CRISPR RNAs (crRNA) is designed and synthesized as part of an oligonucleotide array. The resulting oligonucleotide pool is recombineered with a custom-tailored feature cassette into a pool of self-integrating cassettes (SICs). This results in a clone collection (library) that can be subjected to phenotypic screening and genotyping, for example, using Anchor-Seq12. b The three-step recombineering procedure for SIC pool generation; details are given in the main text and Methods
We implemented a three-step molecular recombineering procedure for this conversion that is executed in vitro (Fig. 2b, Supplementary Fig. 5a–e). Its central intermediate is a circular DNA species formed by the oligonucleotides and a feature cassette. The feature cassette provides all the generic elements of the SIC, that is, the tag (e.g. green fluorescent protein (GFP)), the selection marker and an RNA polymerase III (Pol III) promoter to express the crRNA. The circular intermediates are then amplified by rolling circle amplification (RCA) instead of PCR to avoid the formation of chimeras containing non-matching homology arms. The individual SICs are finally released by cleaving the DNA concatemer using a restriction site in between both homology arms.
To accommodate all gene-specific elements on a single oligonucleotide, it was critical to use a Cas12a endonuclease because its crRNA consists of a comparably short direct repeat sequence (~20 nt) that precedes each target-specific CRISPR spacer (~20 nt; Supplementary Fig. 5f). This arrangement allows the Pol III promoter, which drives crRNA expression, to remain part of the feature cassette, while the short Pol III terminator30 can be included in the oligonucleotide itself. This design leaves enough space for homology arms of sufficient length for homologous recombination (>28 bp)31. Adding up all the sequences, each oligonucleotide (160–170 nt) is within the length limits for commercial microarray-based synthesis.
To select CRISPR targets near the desired chromosomal insertion points and to assist the design of the oligonucleotide sequences for microarray synthesis (Supplementary Fig. 6a–d), we wrote the software tool castR (https://github.com/knoplab/castR/tree/v1.0). For use with small genomes, castR is available online (http://schapb.zmbh.uni-heidelberg.de/castR/).
Using CASTLING to generate a GFP library of nuclear proteins
To test CASTLING, we sought to create a small library covering a set of proteins with known localization32. We chose 215 nuclear proteins whose localization had been validated in different genome-wide data sets12,33. We designed 1577 oligonucleotides covering all suitable PAM sites within 30 bp around the C-termini of the selected ORFs, yielding seven oligonucleotides per gene on average. We purchased this oligonucleotide pool three times from different suppliers, one pool from supplier A (pool A) and two pools from supplier B (pools B1 and B2; Fig. 3a, Methods). The amount of starting material for PCR to amplify each pool was adjusted to obtain a product within ~20 cycles. We observed that pool A required about 200-fold more starting material than pool B1 or B2 (Fig. 3a). After recombineering with a feature cassette comprising the bright green fluorescent protein reporter mNeonGreen34, we generated four different libraries in technical duplicates of 30,000–95,000 clones each (Fig. 3a, Supplementary Table 1).
CRISPR-Cas12a (Cpf1)-assisted tag library engineering (CASTLING) for tagging 215 nuclear proteins with a green fluorescent protein. a Three oligonucleotide pools of the same design (1577 sequences, Supplementary Table 1) were used to create four tag libraries by CASTLING in duplicate sampling the indicated amount of starting material for PCR. b Detected oligonucleotide sequences of the design after PCR amplification (blue), self-integrating cassette (SIC) assembly (green), and in the final library (orange); oligonucleotides with copy number estimates (unique molecular identifier (UMI) counts) in the lowest quartile (lower 25%) are shown in light shade. c Same as b, but evaluated in terms of open-reading frames (ORFs) represented by the oligonucleotides or SICs. d Copy number of PCR amplicons recovered (red) or lost (blue) after recombineering; black horizontal lines indicate median UMI counts. e Pearson's pairwise correlation of oligonucleotide or SIC copy number between replicates after PCR or rolling circle amplification (RCA), respectively; n.s., not significant (p > 0.05). f Kernel density estimates of copy number in replicate 1a as normalized to the median copy number observed in the oligonucleotide pool (before recombineering) and after recombineering into the SIC pool (left panel); the distribution of fold changes (right panel) highlights two frequency ranges: [0.1–0.9], that is, 80% of SICs, and [0.25–0.75], that is, 50% of SICs. g Representative fluorescence microscopy images of cells displaying nuclear, diffuse non-nuclear (asterisks), or no mNeonGreen fluorescence (arrows); scale bar 5 µm. h Quantification of fluorescence localization in >1000 cells in each replicate. i Recurrence of off-target events as revealed by Anchor-Seq across all library replicates and all genomic loci (left panel); the fraction of cells with SICs integrated at off-target sites (blue) within each clone population (red) is shown (right panel, axis trimmed). b–i Source data are provided as a Source Data file
We used NGS in combination with unique molecular identifiers (UMIs)35 to quantitatively analyze the entire procedure at three stages: after PCR amplification of the oligonucleotide pool, after SIC amplification (Supplementary Fig. 7a), and after yeast library construction. To characterize the yeast libraries, we adapted the targeted NGS method Anchor-Seq12 with UMIs to analyze the CRISPR spacers of the inserted SICs along with the genomic sequence adjacent to the insertion site in all clones of the libraries (Supplementary Fig. 7b).
Overall, the represented oligonucleotide diversity gradually decreased during recombineering (Fig. 3b). The best performance was observed in one duplicate generated from pool B2 that used a high amount of starting material (libraries 4a and 4b), preserving more than 70% of the originally amplified oligonucleotides in the SIC pool and more than 60% of the oligonucleotide diversity in the yeast libraries (Fig. 3b). This loss in complexity was alleviated by the fact that multiple oligonucleotides were included per gene and we observed that more than 90% of the targeted genes were tagged in library 4a and 4b (Fig. 3c). We noticed that low abundant oligonucleotides after PCR amplification were prone to depletion during SIC preparation, accounting for the observed loss in sequence diversity (Fig. 3d). Across all preparations, copy numbers of individual oligonucleotides were highly correlated between duplicates after PCR (Pearson's correlation >0.96), but less between synthesis replicates (0.78–0.90), and least for oligonucleotide pools obtained from different suppliers (Fig. 3e). After recombineering and RCA, no significant correlation of SIC copy numbers was observed except for libraries 4a and 4b. A more detailed analysis indicated that 50% of the sequences exhibited a copy number change >2-fold during RCA (Fig. 3f), which could explain the loss of correlation between replicates after RCA. Taken together, these analyses identified the quality and amount of starting material and its recovery during recombineering as critical factors to preserve library diversity. Nevertheless, for a small library of 215 genes, CASTLING enabled tagging most of the selected genes within one library preparation.
Next, we quantified tagging fidelity by fluorescence microscopy, which was possible because we had selected genes encoding proteins with validated nuclear localization: 90–95% of the cells had a nuclear localized mNeonGreen signal in all libraries (Fig. 3g, h). The remainder of the cells showed either no fluorescence (2–8%) or a fluorescence signal elsewhere (0–4%), usually in the cytoplasm with one exception (see below). So, nearly all genes must have been tagged in the correct reading frame.
For the clones with no fluorescence signal, we suspected either frameshift mutations in the polypeptide linker (due to faulty oligonucleotides) or in the fluorescent protein reporter (due to limited fidelity of DNA polymerases), or off-target integration of the SIC. Sequencing of several insertion junctions of dark clones revealed small deletions of one or more nucleotides in the 5′-homology arms that direct the SICs to the 3′ ends of the ORFs. Therefore, the majority of dark clones appeared to contain correctly targeted SICs in which mNeonGreen was not in frame due to errors in the sequences derived from the oligonucleotides.
Next, we generated library-wide Anchor-Seq data encompassing the crRNA sequences and the 3′-insertion junctions. This identified 280 instances in which the crRNA sequence and the genomic insertion site did not match. These off-target insertions corresponded to <0.2% of the clones. Most of them were single occurrences associated with 196 different SICs in total. Only 37 SICs showed off-target insertion at various genomic loci or in more than one library replicate (Fig. 3i). It remains, however, unclear which of these insertions were caused by Cas12a-mediated cleavage at an off-target site and which were spontaneous chromosomal insertions.
In addition to these events, we observed fluorescence signals at unexpected subcellular localizations. For example, 2% of the cells in library 2b displayed fluorescence at the spindle-pole body, which we attributed, based on Anchor-Seq, to a TEM1-mNeonGreen gene fusion. Indeed, on average 1.6% of all cells across all libraries had integrated SICs originally designed for another experiment in this study, which must have entered SIC or library preparation as a result of contamination.
Together, these experiments demonstrate that in a pooled experiment CASTLING allows for highly efficient tagging of hundreds of genes with low levels of off-target insertion.
Parameters affecting tagging success on a genome-wide scale
Simultaneously with the small pool of nuclear proteins, we designed an oligonucleotide pool for C-terminal tagging of the yeast proteome. For crRNA design, we first retrieved a set of more than 34,000 candidate CRISPR targets using our castR script and using TTV (V = A, C, or G) and TYN (Y = C or T; N = any nucleobase) as PAMs. Next, we removed sequences that contained thymidine runs longer than five nucleotides, since they may prematurely terminate Pol III transcription30. Subsequently, we filtered out crRNA targets with a high off-target estimate and removed most, but not all, target sequences that are not destroyed after insertion of the SICs (Supplementary Note 1). From the remainder, we chose randomly 12,472 sequences (limited by the chosen microarray) that covered 5664 of 6681 (85%) of the annotated ORFs in S. cerevisiae36. Although the number of oligonucleotides per gene was lower as compared to the nuclear pool, the high number of genomic targets should allow identifying parameters that would influence tagging success and clone representation in such large-scale experiments.
After PCR and SIC pool generation, we sequenced the PCR amplicons and one SIC pool. We analyzed the sequencing data implementing a de-noising strategy to discriminate errors introduced during NGS from errors in the templates37. This revealed that the PCR product contained 57% of the designed oligonucleotides, but only 31% of the designed sequences were represented by at least one error-free amplicon. Similarly, 51% of all designed sequences were detected in this SIC pool, but only 25% were error free (Fig. 4a). Due to redundancy, the error-free SICs in this pool still covered 45% of the 5664 ORFs.
Identification of factors influencing clone representation in CRISPR-Cas12a (Cpf1)-assisted tag library engineering (CASTLING). a Sequence quality of an oligonucleotide pool (oligonucleotide pool C, Supplementary Table 2) after PCR amplification and self-integrating cassette (SIC) assembly. Following de-noising of next-generation sequencing (NGS) artifacts, molecules that aligned with any of the 12,472 designed oligonucleotides were classified error-free, erroneous, or absent at the respective stage (left panel). The genotype space (designed: 5664 open-reading frames (ORFs)) was covered by each class (right panel). b Representative fluorescence microscopy images of a pooled tag library (derived from oligonucleotide pool C); scale bar: 20 µm (overview), 5 µm (details). c Genotype diversity within three independent library preparations (libraries #1.1, #1.2, and #1.3, Supplementary Table 2) generated from the same oligonucleotide pool; all libraries combined tagged 3262 different ORFs. d Summary of parameters significantly (Fisher's exact test, p < 0.05) increasing the likeliness of tagging success beyond SIC abundance (details in Supplementary Fig. 7a–b). a–c Source data are provided as a Source Data file
To explore how many genes could be tagged with this oligonucleotide pool, we repeated PCR and SIC assembly three times. Following transformation in yeast, this resulted in three independent libraries of 75,000–100,000 clones each. Inspection of the cells by fluorescence microscopy revealed localization across a broad range of subcellular compartments (Fig. 4b). By Anchor-Seq, we detected a total of 3262 different ORFs (58% of all targeted ORFs), of which 1127 ORFs (20%) were shared across all replicates (Fig. 4c, Supplementary Table 2).
The acquired data allowed us to identify factors that might have impeded efficient genomic integration of a SIC. First, the likelihood of tagging success was decreased 3- to 4-fold when the crRNA target sequence was not disrupted by the inserted SIC, that is, when recurrent cleavage of the locus was possible. Neither nucleosome occupancy of the PAM nor of the target sequence itself had a statistically significant impact on tagging success in this library. However, the choice of the PAM (TTC > TTG > TTA » TYN) and the first two PAM-proximal nucleotides (CG, CC, GG) each increased the chances of target integration 2- to 3-fold (Fig. 4d, Supplementary Fig. 8a–b). Interestingly, it seemed advantageous for Cas12a to target genes on their non-transcribed strand. Despite the limited success in creating a genome-wide library at the first attempt, we anticipated that these parameters could help to improve tagging success for CASTLING in yeast.
Using CASTLING to construct complex pooled yeast libraries
To further investigate the creation of genome-wide pooled libraries with CASTLING, we designed a new microarray for tagging 5940 ORFs. Applying these rules for each ORF, we selected 17,691 target sites near the STOP codon and filled up the remaining positions on a 27,000-well array. We generated three libraries in total using two different strategies to investigate the minimal effort that would be required for creating a large library with CASTLING.
First, we pooled SICs from 30 RCAs and generated a large library of 704,000 clones (LibA) and a small library of 44,000 clones (LibB). Second, we constructed a third library of 116,000 clones (LibC) using a SIC pool made from two RCAs of the same oligonucleotide pool (Fig. 5a). To quantify the genotype composition of each of the different libraries, we again used Anchor-Seq at the crRNA junction. Altogether, the three libraries contained tagged alleles of 76% of all targeted ORFs, with an overlap of 43% between the three libraries (Fig. 5b, c). The largest library, LibA, contained the most tagged ORFs (3801 ORFs), corresponding to 64% of the design. Interestingly, the much smaller library LibB with 44,000 clones already contained 80% of these genotypes. LibB and LibC each covered ~50% of the desired ORFs, sharing 2038 ORFs. In practical terms, this implied that about one-third of the intended genes could be reliably and reproducibly tagged with minimal effort by recovering only 40,000–120,000 clones.
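Such coverage and overlap figures follow directly from simple set operations on the per-library lists of tagged ORFs obtained by Anchor-Seq; the sketch below uses placeholder ORF names rather than the actual hit lists:

```python
# Sketch of tabulating genotype coverage and overlap from Anchor-Seq hit lists;
# the ORF identifiers are placeholders.
lib_a = {"YAL001C", "YBR001C", "YCR002C", "YDR003W"}
lib_b = {"YAL001C", "YBR001C", "YER004W"}
lib_c = {"YAL001C", "YCR002C", "YER004W"}

union = lib_a | lib_b | lib_c          # tagged in any library
core = lib_a & lib_b & lib_c           # shared by all three libraries
print(f"union: {len(union)} ORFs, core: {len(core)} ORFs "
      f"({100 * len(core) / len(union):.0f}% of the union)")
```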
Creating and screening large CRISPR-Cas12a (Cpf1)-assisted tag library engineering (CASTLING) libraries. a Three libraries with different numbers of collected clones were generated from self-integrating cassette (SIC) pools combining either 2 or 30 recombineering reactions to investigate the minimum effort for a proteome-wide (design: 5940 open-reading frames (ORFs), oligonucleotide pool D, Supplementary Table 2) CASTLING library (details in Methods). b Venn diagram of genotypes recovered in each of the three libraries; all libraries combined tagged 4516 different ORFs. c Genotype diversity in each of the three libraries, shared between them, or after their combination. d Proteome profiling by fluorescence intensity of a non-exhaustive mNeonGreen tag library (library #1.1, Fig. 4c, Supplementary Table 2) using fluorescence-activated cell sorting (FACS). After enriching the fluorescent sub-population of the library and determining the fold enrichment of each genotype by next-generation sequencing (NGS), this sub-population was sorted into eight bins according to fluorescence intensity. Analysis of each bin by Anchor-Seq and on-site nanopore sequencing allowed the assignment of an expected protein abundance for each genotype. e Pairwise comparisons between fluorescence intensity estimates calculated from the genotype distribution across all bins (Methods, Eq. 2; this study denoted as BUC) and protein abundances reported by selected genome-scale experiments4,39,40 normalized to molecules per cell38. Outliers (orange) were determined based on the comparison to a green fluorescent protein (GFP) tag flow cytometry study39. Spearman's correlation coefficients (ρ) are given. Marginal lines indicate abundance estimates present only in the respective study but missing in the other. f Comparison of Spearman's correlation coefficients between studies considering either their overlap in detected ORFs or only the overlap with the 435 ORFs we could detect in this experiment. A Pearson's correlation coefficient (r) is given. g Eight genes that had not been characterized in other genome-scale experiments38 were tagged individually to verify whether their fluorescence intensity corresponded to the characterization predicted by FACS. The same exposure time was used for all fluorescence microscopy images except for Ybr196c-a, which was imaged at 10% excitation; scale bar 10 µm. b, c, e Source data are provided as a Source Data file
We validated the rule set used for oligonucleotide design by comparing SICs with approximately equal copy number in the SIC pool (Supplementary Fig. 9a–b).
Functional studies that use pooled libraries fundamentally depend on enrichment procedures to physically separate cells based on the information provided by the reporter. When fluorescent protein fusions to endogenous proteins are used, high-resolution fluorescence microscopy would be the method of choice, as it would enable scoring and subsequent cell sorting based on very complex but highly informative phenotypes. The necessary technology is currently under development23. To demonstrate that CASTLING libraries can be used for screening, we resorted to FACS, which permits sorting based on fluorescence intensity.
Starting from a library containing 2052 mNeonGreen-tagged ORFs (Fig. 4c), we first sorted cells for which fluorescence could be detected by FACS. Anchor-Seq revealed that in comparison to the starting library, this cell population contained 848 genotypes, while 732 genotypes were depleted. Therefore, we estimated that 35% of the mNeonGreen-tagged genes could be profiled based on fluorescence intensity in our pooled study, which agrees with a meta-analysis on yeast protein abundance38 that reported abundance estimates for 1404 proteins characterized by flow cytometric fluorescence measurements39, that is, 34% of 4159 ORFs tagged in the C-GFP library32.
To determine the fluorescence intensity of individual proteins in the fluorescence-enriched fraction, we sorted the cells into eight fractions of increasing fluorescence intensity. Next, we analyzed the genotype distribution within the bins using Anchor-Seq. We sequenced the amplified insertion junctions using MinION nanopore sequencing. This method allows a more rapid profiling workflow but provides a lower sequencing depth than Illumina dye sequencing, which we routinely used to characterize CASTLING libraries. We obtained 18,638 informative reads, which enabled us to determine the relative enrichment in the individual bins for 435 (50%) of the 848 tagged proteins. These estimates correlated well with the abundance estimates from the flow cytometry study by Newman et al.39 (Spearman's correlation coefficient >0.63; Fig. 5e) and were comparable with different protein abundance data sets consolidated by Ho et al.38 (Supplementary Fig. 10).
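The correlation analysis itself reduces to a rank correlation between the per-ORF intensity estimates and an external abundance data set; the sketch below uses invented values rather than the data behind Fig. 5e:

```python
# Sketch of comparing fluorescence intensity estimates to reported protein
# abundances; all values are placeholders.
import numpy as np
from scipy.stats import spearmanr

intensity_estimate = np.array([2.1, 3.4, 4.0, 5.6, 6.2])      # per-ORF estimates (hypothetical)
reported_abundance = np.array([300, 900, 1500, 8000, 20000])  # molecules per cell (hypothetical)

rho, p = spearmanr(intensity_estimate, reported_abundance)
print(f"Spearman's rho = {rho:.2f} (p = {p:.3f})")
```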
To estimate whether our low-depth showcase experiment can be considered representative of larger-scale CASTLING-based experiments, we quantified how the correlation coefficients depend on the number of compared genes. The coefficients obtained from the complete data sets38 and those from an analysis limited to the 435 tagged genes that we had detected correlated well with each other (Pearson's correlation coefficient 0.74; Fig. 5f), indicating a predictive value of our low-depth experiment.
We found that 23 (13%) of 175 tagged genes yielded clearly detectable fluorescence signals in our study but were not detected by Newman et al.39 (Fig. 5e, orange points). Since these proteins were also detected by complementary approaches such as mass spectrometry40 or immunoblotting4, we assumed that these "false positives" resulted from false-negative clones of the C-GFP library (Fig. 5e). Using independently generated clones based on a different gene tagging strategy12, we validated the expression of most of these genes when tagged with mNeonGreen, including eight proteins that were neither covered in the C-GFP library nor characterized in any other study analyzed by Ho et al.38 (Fig. 5g, Supplementary Table 3).
Together, these results highlight the use of CASTLING libraries as a rapid avenue for phenotypic profiling and screening experiments when combined with Anchor-Seq to analyze the clone distribution across sub-populations isolated from such libraries.
We developed CASTLING to enable the rapid creation of pooled libraries of clones with large chromosomal insertions such as fluorescent protein tags.
Typically, libraries in yeast have been constructed gene by gene in an arrayed format using PCR targeting41. Based on our own experience12,42, the construction of arrayed libraries depends on special equipment for parallelization of the procedures, requires a (costly) resource of arrayed primers for PCR tagging, and keeps multiple researchers occupied for several months handling thousands of strains.
In contrast, fewer resources must be committed to create a library by CASTLING. All the necessary oligonucleotides can be obtained from microarrays, which are about two orders of magnitude more cost-effective than a genome-wide set of conventional solid-phase-synthesized oligonucleotides. Once an established oligonucleotide pool is available, it can be reused to construct a variety of SIC pools containing different features, that is, tags or selection markers. The construction of SIC pools is rapid and can be completed within 1–2 days, since the CASTLING workflow avoids the preparatory sub-cloning into a plasmid library that is commonly used in other multiplexed gene-editing approaches19,20,21,43. Transformation and growth of the yeast clones take another 2–3 days, followed by recovery and analysis of the library. This makes library preparation by CASTLING very efficient, and it is therefore possible to create a new library for each strain background or mutant of interest. Classical libraries, in contrast, are confined to the background they were made in and require genetic crossing to introduce a mutation, which depends on strains specifically constructed for these procedures44.
In addition to the versatility and flexibility of library creation, the tagging fidelity of CASTLING is 90% or higher, exceeding the fidelity observed in conventional gene tagging by PCR targeting, where routinely 50–85% of the obtained clones are correct. It may be worth mentioning that the elimination of false clones during the construction of classical arrayed libraries remains one of the most laborious steps. With CASTLING, false clones cannot obstruct the correct interpretation of a screen, because Anchor-Seq quantifies all genotypes present at the beginning of an experiment as well as their respective enrichment or depletion after phenotypic selection. This allows erroneous genotypes to be excluded during the analysis, which is typically not possible in other multiplexed CRISPR-based gene-editing approaches that rely on indirect measures for genotype determination (e.g., sequencing the ectopic crRNA plasmids).
A potential downside of CASTLING and many other pooled library approaches lies in the initial indeterminacy of the exact library composition: each transformation will yield a pool of slightly different composition. Currently, genotype coverage with CASTLING can exceed 90% when relatively small libraries covering hundreds of genes are created, and it reproducibly reached 50% for libraries with thousands of genes using <10-fold oversampling (44,000 clones for 5940 ORFs). We identified that SICs for which the CRISPR target site is destroyed after integration, or SICs with a GC-rich PAM-proximal dinucleotide in their crRNA, yielded higher clone numbers than SICs lacking these features (Supplementary Fig. 9).
The identified parameters increased the likelihood of tagging success, but they might also reduce the number of clones for ORFs for which only less efficient SICs could be designed. In this case, additional oversampling would be required. Along this line, a better strategy to increase coverage might be to use successive rounds of CASTLING, each involving a new microarray that targets the remaining genes. The first array would target those genes that can be reproducibly tagged in all trials (Fig. 5b, c), while subsequent arrays would incrementally complete the library, with the effort scaling roughly in proportion to the number of clones to be collected. It would probably require 2–4 rounds of CASTLING with a total of 60,000–120,000 clones to tag >60–90% of all 5500–6000 genes in yeast. This would exceed the available genome-wide tagging collections, for example, the C-GFP collection32 with 4159 ORFs (Thermo Fisher), the TAP-tag collection4 with 4247 ORFs (Dharmacon), or our tandem fluorescent timer collection with 4081 ORFs42. Importantly, such an optimization might be necessary only once. Afterwards, all oligonucleotide pools could be used in parallel to generate a nearly complete library. This approach might also yield optimized rule sets to guide the development of CASTLING for other species.
A major factor that decreased tagging success seemed to be oligonucleotide quality. CASTLING requires long oligonucleotides (>100 bp). Even very small error rates and almost perfect coupling efficiencies during oligonucleotide synthesis will give rise to pools that contain only a minor fraction of full-length, error-free oligonucleotides. Furthermore, we observed that the same sequences synthesized in different batches gave rise to pools with different performance (pools B1 and B2). We sequenced and thoroughly analyzed one of the oligonucleotide pools for large library creation. Only a fraction of the designed sequences was represented by perfect full-length oligonucleotides. Most frequently, we observed deletions and single-nucleotide polymorphisms (SNPs) in the oligonucleotide sequences. SNPs seem to be more frequent at the 3′ end of the oligonucleotide (which is synthesized first), whereas deletions become more frequent towards the 5′ end (which is synthesized last). Indeed, error-free synthesis of long oligonucleotides remains challenging45,46. To increase the chance of representing each target locus by a perfect oligonucleotide, it might be beneficial to use as many different oligonucleotides per gene as possible or to include multiple redundant sequences.
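To illustrate why this is expected, assume (purely for illustration) that synthesis errors occur independently at each position with a fixed per-base rate; the fraction of error-free molecules then decays exponentially with oligonucleotide length:

```python
# Back-of-the-envelope estimate of the error-free fraction of synthetic
# oligonucleotides, assuming independent per-base errors (illustrative only).
lengths = (60, 100, 150)
for per_base_error in (0.002, 0.005, 0.01):
    error_free = [(1 - per_base_error) ** n for n in lengths]
    print(per_base_error, [f"{f:.2f}" for f in error_free])
# e.g., at a 0.5% per-base error rate, only ~47% of 150-mers are error free.
```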
It is important to stress that faulty oligonucleotides do not necessarily impact the fidelity of the tagging, because the in vitro recombineering steps and the in vivo recombination47 all select against faulty oligonucleotides. Also, errors in the crRNA will most likely render it inactive. Consequently, only few oligonucleotides that end up in the genome are associated with frameshift errors that impair the expression of the tag. This is impressively demonstrated by the nuclear protein libraries that were prepared with three different oligonucleotide pools, all of which showed >90% in-frame tagging rates (Fig. 3h). This intrinsic quality control during CASTLING yields correctly tagged genes in the majority of clones.
Prospective applications of CASTLING
In combination, CASTLING and quantitative Anchor-Seq enable the rapid creation and analysis of pooled libraries with tagged genes. Since each reaction tube contains an entire library, the pooled format is able to address much broader, comparative questions, including different genetic backgrounds and/or environmental conditions.
CASTLING is a method for gene tagging, and the type of screen that can be performed with such libraries entirely depends on the tag used. Therefore, it is up to the creativity of the researcher to develop a screening procedure that converts the information provided by the tags into information about the biological question at hand. Importantly, a screening procedure requires physical fractionation of the library into sub-pools based on a suitable phenotypic read-out, for example, using tags that enable the coupling of a protein behavior such as protein localization48 or protein–protein interactions10 with a growth phenotype.
In our opinion, fluorescent protein reporters constitute a particularly attractive group of tags, as they provide visual insights into cellular organization and dynamics, changes of which are associated with many disturbances of biological processes. Our simple FACS enrichment experiment (Fig. 5d–g) can serve only as a proof of principle in this regard, since current flow cytometry-based cell sorters cannot resolve complex cellular phenotypes, such as the subcellular localization of proteins49. We think that for methods such as the recently developed image-activated cell sorting23, CASTLING can enable a variety of entirely new experimental designs and analyses, ranging from functional genomics to biomedical research, paving the way to a new paradigm of shot-gun cell biology.
Beyond yeast, CASTLING could be adapted for other organisms able to repair DNA lesions by homologous recombination, including bacteria, fungi, flies, and worms, and potentially also plants and mammalian cells. First evidence that this is the case is provided in the preprint by Fueller et al.50, where we show that an adapted SIC strategy can be used for efficient endogenous tagging of genes in mammalian cells. We also have preliminary data suggesting that CASTLING itself works in mammalian cells, although the size of the library that can be generated with it is currently unclear.
Please note that improper implementation of CASTLING can unwittingly generate clones capable of initiating a gene drive upon sexual reproduction51,52. This can easily be prevented (Supplementary Note 2).
In summary, our work shows that CASTLING libraries and quantitative genotype analysis using Anchor-Seq seamlessly integrate into existing (and upcoming) high-throughput cell sorting instrumentation to enable functional analyses of pooled resources. This outlines new avenues for the investigation of complex cellular processes in direct competition with strategies based on arrayed library resources.
Yeast strains and plasmids
All strains were derived from ESM356-1 (S. cerevisiae S288C, MATa ura3-52 leu2∆1 his3∆200 trp1∆63, a spore of strain FY1679 (refs. 13,53)) and are listed in Supplementary Table 4. Plasmids are listed in Supplementary Table 5. Human codon-optimized Cas12a (formerly Cpf1) family proteins24 of FnCas12a, Lachnospiraceae bacterium ND2006 (LbCas12a), Acidaminococcus sp. BV3L6 (AsCas12a), and Moraxella bovoculi 237 (MbCas12a) were expressed using the galactose-inducible GAL1 promoter54 from plasmids integrated into the ura3-52 locus (pMaM486, pMaM487, pMaM488, pMaM489).
Cell lysis and Western blot detection of HA-tagged proteins
Denaturing protein extracts from yeast cells were prepared by incubation with NaOH/β-mercaptoethanol, followed by precipitation with trichloroacetic acid and protein solubilization in sample buffer containing 6 M urea for sodium dodecyl sulfate–polyacrylamide gel electrophoresis5. Proteins were resolved on Tris-glycine-buffered 10% (v/v) polyacrylamide gels by electrophoresis at 200 V for 90 min, transferred onto a nitrocellulose membrane by wet blotting (12 mM Tris, 96 mM glycine, 20% (v/v) methanol) at 25 V for 120 min, and blocked with 10% (w/v) milk powder in blotting buffer (20 mM Tris, 150 mM NaCl, 0.1% (w/v) Tween-20). The proteins of interest were detected with monoclonal mouse anti-Pgk1 (R&D Systems, Fisher Scientific, 1:2,500) and monoclonal mouse anti-HA (12CA5, Sigma-Aldrich, 1:2,000) antibodies in 5% (w/v) milk powder in blotting buffer at 4 °C overnight. The surplus of unbound primary antibody was washed away, and the secondary horseradish peroxidase-coupled antibody (1:10,000) was applied in 5% (w/v) milk powder in blotting buffer at room temperature.
CASTLING library design
To facilitate oligonucleotide design, an R package (castR) is available from our repository (https://github.com/knoplab/castR/tree/v1.0) that ships along with a graphical user interface (GUI). For small genomes, the GUI can be accessed online (http://schapb.zmbh.uni-heidelberg.de/users/knoplab/castR/). The principles used for oligonucleotide design are described in Supplementary Note 1.
Oligonucleotide sequences used for microarray synthesis of oligopools in this study are given in Supplementary Data 1 (for arrays used in Fig. 3), Supplementary Data 2 (for the array used in Fig. 4), and Supplementary Data 3 (for the array used in Fig. 5).
Generating SICs for individual genes
Individual SICs were generated by PCR from a corresponding plasmid template (Supplementary Table 5) using primers (Supplementary Table 6) that introduced the required 5′ and 3′ homology arms along with a locus-specific crRNA spacer. Cycling conditions for VELOCITY DNA polymerase-based amplification (Bioline) were 97 °C for 3 min, followed by 30 cycles of 97 °C (30 s), 63 °C (30 s), 72 °C (2 min 30 s), and a final 72 °C (5 min) extension hold. The reactions were column purified and adjusted to equal SIC concentration before yeast cell transformation.
Amplifying oligonucleotide pools and feature cassettes
The oligonucleotide pools used in this study (Supplementary Table 7) were synthesized by either CustomArray Inc. (pools A and C), Twist Bioscience (pools B1 and B2), or Agilent Technologies (pool D), and reconstituted in TE buffer if delivered lyophilized. Pool dilution and annealing temperature were optimized in each case to yield a uniform product of the expected length (Supplementary Fig. 5a, Supplementary Tables 1–2). In this study, pool C was diluted 1000-fold and 1.5 fmol were amplified using VELOCITY DNA polymerase (Bioline) with forward primer pool-FP1 and reverse primer pool-RP2 using the following PCR conditions: 97 °C for 3 min, followed by 20 cycles of 97 °C (30 s), 58 °C (30 s), 72 °C (20 s), and a final 72 °C (5 min) extension hold. To keep library member representation as uniform as possible, using more input material and higher annealing temperatures is desirable, as this will usually require fewer PCR cycles for amplification of the full-length synthesis product. All other pools were designed to allow for amplification in 15 cycles using Herculase II DNA polymerase (Agilent Technologies) with forward primer pool-FP2 (or pool-FP3, as indicated) and reverse primer pool-RP2 (or pool-RP3). Cycling conditions were: 95 °C for 2 min, followed by six cycles of 95 °C (20 s), touch down from 67 °C (20 s, ∆T = −1 °C per cycle), 75 °C (30 s), then nine cycles of 95 °C (20 s), 67 °C (20 s), 72 °C (30 s), and a final 72 °C (5 min) extension hold. Primers and truncated oligonucleotides (<75 bp) were removed using NucleoSpin Gel and PCR clean-up columns (Macherey-Nagel GmbH & Co. KG). Feature cassettes were amplified by PCR using cognate cassette-FP and cassette-RP primers and any compatible plasmid template (50 ng, Supplementary Table 5) under the following conditions: 97 °C for 3 min, followed by 30 cycles of 97 °C (30 s), 63 °C (30 s), 72 °C (2 min 30 s), and a final 72 °C (5 min) extension hold. The reaction was treated with DpnI (New England Biolabs) in situ and cleaned up using NucleoSpin Gel and PCR clean-up columns. For PCR, VELOCITY high-fidelity DNA polymerase (Bioline) was used with the manufacturer's reaction mix supplemented with 500 µM betaine (Sigma-Aldrich). For analysis, 2 µL of the reaction were used for DNA gel electrophoresis (0.8% or 2.0% agarose in TAE (Tris-acetate-EDTA), Supplementary Fig. 5a).
Recombineering step 1
Circularization of the amplified oligonucleotide pool (0.8 pmol) with the amplified feature cassette (0.2 pmol) was performed using NEBuilder HiFi DNA Assembly Master Mix (New England Biolabs) in a total reaction volume of 20 µL at 50 °C for 30 min. For analysis by DNA gel electrophoresis, 10 µL of the reaction were used (0.8% agarose in TAE, Supplementary Fig. 5b).
To selectively amplify the circular product from step 1, rolling circle amplification (RCA) using phi29 DNA polymerase was used. First, the annealing mixture was set up (total volume: 5 µL in a PCR tube) using 1 µL of the crude or gel-purified circularization reaction, 2 µL exonuclease-resistant random heptamers (500 µM, Thermo Fisher Scientific), 1 µL of annealing buffer (stock: 400 mM Tris-HCl, 50 mM MgCl2, pH = 8.0), and 1 µL of water. For annealing, the mixture was heated to 94 °C for 3 min and cooled down in a thermocycler at 0.5 °C/s to 4 °C. Then, 15 µL amplification mixture were added (consisting of 2.0 µL 10× phi29 reaction buffer, 2.0 µL 100 mM dNTP mix, 0.2 µL 100× bovine serum albumin (10 mg/mL), and 0.6 µL phi29 DNA polymerase; all from New England Biolabs). Amplification was allowed to proceed for 12–18 h at 30 °C, followed by heat inactivation of the enzymes at 80 °C for 10 min. For analysis by DNA gel electrophoresis (0.8% agarose in TAE), 0.5 µL of this reaction was used (Supplementary Fig. 5c).
To release the SICs, 20 U of the restriction enzyme BstXI (New England Biolabs) were added directly to the amplification reaction and the mixture was incubated for 3 h at 37 °C. Typically, such a reaction yielded 10–20 µg of SICs. For DNA gel electrophoresis, 1 µL was used (Supplementary Fig. 5d).
Estimating recombineering fidelity by NGS
The oligonucleotide pools were analyzed by NGS (Figs. 3 and 5) after PCR amplification and after recombineering, including UMIs for de-duplication (Supplementary Fig. 7a). For the PCR amplicons, fragments with UMIs were generated using 200 ng starting material (purified by ethanol precipitation) in two cycles of PCR with Herculase II Fusion DNA Polymerase (Agilent Technologies) using an equimolar mixture of P023poolseqNN-primers (1 mM final concentration) in a 25 µL reaction. Cycling conditions were based on the manufacturer's recommendations (62 °C annealing, 30 s elongation). The reactions were purified with NucleoSpin Gel and PCR clean-up columns using diluted NTI buffer (1:5 in water) to facilitate primer depletion, and the fragments eluted in 20 µL 5 mM Tris-HCl (pH = 8.5) each. To remove residual primers, 7 µL of eluate were treated with 0.5 µL exonuclease I (Escherichia coli, New England Biolabs) in 1× Herculase II reaction buffer (1 h, 37 °C) and heat inactivated (20 min, 80 °C). The reaction was used without further purification as input for a second PCR (Herculase II Fusion DNA Polymerase, 30 cycles, 72 °C annealing, 30 s elongation) to introduce indexed Illumina-TruSeq-like adapters (primer Ill-ONP-P7-bi7NN and Ill-ONP-P5-bi5NN). The products were size selected on a 3% NuSieve 3:1 Agarose gel (Lonza), purified using NucleoSpin Gel and PCR clean-up columns, and quantified on a Qubit Fluorometer (dsDNA HS Assay Kit, Thermo Fisher Scientific) and by quantitative PCR (qPCR) (NEBNext Library Quant, New England Biolabs, LightCycler 480, Roche). SIC pools were processed likewise using tRNA-seqNN and mNeon-seqNN as primers to introduce UMIs. All samples were pooled according to the designed complexity and sequenced on a NextSeq 550 system (Illumina) with 300 cycle paired-end chemistry.
We sequenced the oligonucleotide pool after PCR amplification, and the SIC pool obtained from the recombineering procedure (Fig. 4). In the latter instance, fragments compatible with Illumina NGS were generated digesting the products of RCA with BtsαI (55 °C, 90 min, New England Biolabs) and SalI-HF (37 °C, 90 min, New England Biolabs). The fragments were column purified, diluted to 100 ng/µL, and blunted using 1 U/µg mung bean nuclease under the appropriate buffer conditions (New England Biolabs). The DNA fragments of 150–200 bp length were gel extracted on 3% NuSieve 3:1 Agarose (Lonza). Both samples were sequenced by GATC Biotech AG (Konstanz, Germany) using Illumina MiSeq 150 paired-end NGS technology.
Transformation of SICs
For transformation of individual SICs or SIC pools, Cas12a-family proteins were transiently expressed by preparing frozen competent cells from yeast strains with GAL1-controlled Cas12a proteins grown in either YP (1% yeast extract + 2% peptone) or SC (synthetic complete) medium containing 2% (w/v) raffinose and 2% (w/v) galactose as carbon sources. For transformation55, the heat shock was extended to 40 min and no dimethyl sulfoxide was added. Cells that required selection for dominant antibiotic resistance markers (G-418, hygromycin B, and clonNAT56) were allowed to recover for 5–6 h at room temperature in YP-Raf/Gal (yeast extract peptone medium containing raffinose and galactose) or YPD (yeast extract peptone dextrose) prior to plating them on the corresponding selection plates.
SIC pools were transformed at a total of 1 µg per 100 µL of frozen competent yeast cells (approximately 2 × 108 cells). Per library, approximately five such transformation reactions were combined, corresponding to a yeast culture volume of 50 to 100 mL (OD600 = 1.0) used to generate the competent cells. The number of transformants per library was calculated from serial dilutions. Replica plating on selective plates was used to exclude transiently transformed clones. After outgrowth, libraries were harvested in 15% glycerol and stored at –80 °C. For subsequent experiments, including genotyping, approximately 10,000 cells per clone were inoculated in YPD, diluted to OD600 = 1.0 (approximately 50 mL of culture), and grown overnight. If necessary, a second dilution was performed to obtain cells in exponential growth phase.
For co-integration experiments using individual SICs, 1 µg DNA per SIC and condition was transformed using 50 μL competent yeast cells. Colony numbers and fluorescence images were acquired after the sample had been spread onto selective plates. Potential co-integrants were tested by replica plating, streaking, and fluorescence microscopy.
Each transformation mixture was split into two parts containing 19/20 (LibA) or 1/20 (LibB) of the volume. The larger sample was plated onto four 25 × 25 cm2 square plates with YPD + G-418. No replica plating was performed before the libraries were cryo-preserved in 2.5, 10, and 50 mL of 15% glycerol, respectively.
For library LibC and the small nuclear library (based on P1), the transformation mixture was plated onto two 25 × 25 cm2 plates with YPD + hygromycin B.
Cells were inoculated at OD600 = 0.5 per condition in 5 mL low-fluorescence SC medium (SC-LoFlo57) from cryopreservation stocks and grown overnight, followed by dilution to OD600 = 0.1 in 20 mL SC-LoFlo the next morning and imaging during mid-exponential growth in the afternoon. Cells were attached to glass-bottom 96-well microscopy plates (MGB096-1-2-LG-L, Matrical) using concanavalin A coating58. High-resolution fluorescence micrographs were taken on a Nikon Ti-E epifluorescence microscope equipped with a 60× Apo TIRF oil-immersion objective (1.49 NA, Nikon), a 2048 × 2048 pixel sCMOS camera (6.5 µm pixel size; Flash4, Hamamatsu), and an autofocus system (Perfect Focus System, Nikon), using either bright field, 469/35 excitation and 525/50 emission filters, or 542/27 excitation and 600/52 emission filters (all from Semrock except 525/50, which was from Chroma). For each condition, a z-stack of 10 planes at 0.5 µm spacing was acquired, each with a bright-field, a short (75% excitation intensity, 10 ms), and a long (100% excitation intensity, 100 ms) fluorescence exposure. For display, the fluorescence image stacks were z-projected for maximum intensity, and cell boundaries were taken from out-of-focus bright-field images. For imaging cells in Fig. 3 (small nuclear pools), cells were inoculated from cryopreservation stocks and grown overnight in selective synthetic media (SC with monosodium glutamate and hygromycin B). The next morning, the cells were diluted in the same medium and grown to mid-exponential phase. Z-stacks were acquired using 17 planes and 0.3 µm spacing between planes.
FACS
A homogenous population of small cells (mostly in the G1 phase of the cell cycle) was selected using forward and side scatter. Single cells were sorted according to fluorescence intensity using fluorescence-activated cell sorting performed on a FACSAria III (BD Diagnostics) equipped for the detection of green fluorescent proteins (excitation: 488 nm; long pass: 502LP; bandpass: 530/30). We first isolated cells (three million in total) that represented roughly the 30% most fluorescent cells in library #1.1 (Supplementary Table 2), as judged by comparison to cells from strain ESM356-1, which was used as a negative control. The population of fluorescent cells was then grown to exponential phase and sorted into eight fractions (bins) of 125,000 cells each (except for 62,500 cells sorted into bin 8) using bin sizes of roughly 5% (bin 1), 20%, 20%, 20%, 25%, 5%, 5%, and 1% (bin 8) according to the log10-transformed fluorescence emission intensity of small (G1) cells. Sorted pools were grown overnight and the cells were harvested for genomic DNA extraction and target enrichment NGS by Anchor-Seq.
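The bin gates can be thought of as quantile cut-offs on the log10 fluorescence distribution of the sorted G1 cells; the sketch below is only an illustration with simulated intensities, and one bin fraction is adjusted slightly (5% to 4%) so that the stated approximate fractions sum to one:

```python
# Sketch of deriving gate thresholds for eight bins from log10 fluorescence
# intensities; the intensities are simulated and the fractions approximate.
import numpy as np

log_intensity = np.random.default_rng(1).normal(3.0, 0.5, 100_000)   # placeholder data
fractions = [0.05, 0.20, 0.20, 0.20, 0.25, 0.05, 0.04, 0.01]          # bins 1-8 (approximate)
thresholds = np.quantile(log_intensity, np.cumsum(fractions)[:-1])
print(np.round(thresholds, 2))   # seven thresholds separating the eight bins
```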
Library characterization by Anchor-Seq
To determine cassette integration sites in CASTLING libraries, we used a modified Anchor-Seq protocol12: Libraries #1.1, #1.2, and #1.3 (Fig. 4) were prepared with vectorette bubble adapters (vect_illumina-P5 and vect_illumina-P7) that themselves contained barcodes for multiplexing several samples in the same sequencing run. For all other libraries (Figs. 3 and 5), the adapters contained UMIs to account for PCR bias during NGS library preparation (Supplementary Fig. 7b); the barcodes for multiplexing were introduced at the stage of the Illumina sequencing adapters. Genomic DNA (gDNA) was isolated from a saturated overnight culture (approximately 2 × 108 cells) using the YeaStar Genomic DNA Kit (Zymo Research). Genomic DNA (125 µL at 15 ng/µL in ultrapure water) was fragmented by sonication to 800–1000 bp in a microTUBE Snap-Cap AFA Fiber on a Covaris M220 focused ultrasonicator (Covaris Ltd.). In our hands, 51 s shearing time per tube, a peak incident power of 50 W, a duty factor of 7%, and 200 cycles per burst robustly yielded the required size range. Adapters were prepared by combining 50 µM of the respective Watson and Crick oligonucleotides (Supplementary Table 6). Each mixture was heated to 95 °C for 5 min, followed by cooling to 23 °C in a large water bath over the course of at least 30 min. Annealed adapters were stored at –20 °C until use. We prepared an equimolar mixture of annealed adapters that contained no, one, or two additional bases inserted after the UMI (halfY-Rd2-Watson and halfY-Rd2-NN-Crick) to increase the heterogeneity of the sequencing library. The fragmented genomic DNA (55.5 µL) was end repaired and dA-tailed (NEBNext Ultra End Repair/dA-Tailing Module, New England Biolabs) and ligated to 1.5 µL of the 25 µM annealed adapter mix (NEBNext Ultra Ligation Module, New England Biolabs). Products larger than 400 bp were purified by gel excision (using NuSieve, described above) and eluted in 50 µL 5 mM Tris-HCl (pH = 8.5). SIC integration sites were enriched by PCR (NEBNext Ultra Q5 Master Mix, New England Biolabs) using 12 µL of the eluate with suitable pairs of adapter- and SIC-specific primers. Initial denaturation was 98 °C (30 s), followed by 15 cycles of 98 °C (10 s) and 68 °C (75 s). Final extension was carried out at 65 °C (5 min). Reactions were purified using Agencourt AMPure XP beads (0.9 vol, Beckman Coulter). The fragments were further enriched in a second PCR using the custom-designed primers Ill-ONP-P7-bi7NN and Ill-ONP-P5-bi5NN to introduce the technical sequences necessary for multiplexed Illumina sequencing. After size selection by gel extraction (250–600 bp), NGS library concentrations were measured on a Qubit Fluorometer (dsDNA HS Assay Kit, Thermo Fisher Scientific) and by qPCR (NEBNext Library Quant, New England Biolabs, LightCycler 480, Roche). Furthermore, their size distribution was verified either on a Fragment Analyzer (Advanced Analytical Technologies Inc.) or by gel electrophoresis of the qPCR product. Quantified libraries were sequenced on a NextSeq 500 (for pool C, Deep Sequencing Core Facility) or on a NextSeq 550 sequencing system (both Illumina, 300 cycle paired-end). If necessary, 10–15% phiX gDNA was spiked in to increase sequence complexity.
For MinION nanopore sequencing, the first PCR was carried out as described above for library #1.1 (using 20 cycles) to introduce barcodes for multiplexing FACS bins on the same sequencing run; the product was column purified, and the NGS library was prepared for 1D sequencing by ligation (SQK-LSK108) according to the manufacturer's protocols (Oxford Nanopore Technologies). Sequencing was performed on a MinION device using R9.4 chemistry (Oxford Nanopore Technologies). Samples were multiplexed considering the number of different clones present in a pool, bin size, gDNA yield after extraction, and yield of the first PCR.
Insertion junction sequencing of non-fluorescent cells
Cells from library 1a were grown in selective synthetic media (SC with monosodium glutamate and hygromycin B) for approximately eight generations, and non-fluorescent cells were sorted into glass-bottom 384-well microscopy plates using a FACSAria III as described under "FACS". The absence of fluorescence was confirmed by fluorescence microscopy, and 60 non-fluorescent clones were pooled and grown overnight to full density. Anchor-Seq amplicons were prepared as described under "Library characterization by Anchor-Seq" using primers NegCells-NNN (Supplementary Table 6). The amplicons were size selected (~600 bp) and cloned using the NEB PCR Cloning Kit (New England Biolabs). The resulting amplicons were Sanger sequenced at Eurofins Genomics (Cologne, Germany).
Illumina NGS data analysis and read counting
Raw reads (150 bp paired-end) were trimmed and de-multiplexed using a custom script written in Julia v0.6.0 with BioSequences v0.8.0 (https://github.com/BioJulia/BioSequences.jl). Read pairs were retained upon detection of basic Anchor-Seq adapter features. Next, these reads were aligned to a reference comprising all targeted loci using bowtie259 v2.3.3.1. Such references comprised the constant sequence starting from the feature cassette amplified by PCR and 600 bp of the respective proximal genomic sequence of S. cerevisiae strain S288C (R64-2-1). For off-target analysis, the constant Anchor-Seq adapter features were trimmed off the reads. The remaining variable sequence of the reads was then aligned with bowtie2 to the complete and unmodified genome sequence of S. cerevisiae strain S288C (R64-2-1). A read pair that aligned to the reference was counted if both reads of the pair were aligned, such that the forward read started at the constant region of the Anchor-Seq adapter-specific primers. In addition, we required that the inferred insert size was longer than the sequence provided for homologous recombination during the tagging reaction. Counting was implemented using a custom script (Python v3.6.3 with HTSeq 0.9.160 and pysam 0.13). In cases where UMIs were included in the Anchor-Seq adapter design, they were corrected for sequencing errors using UMI-tools (version 0.5.3)61.
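The counting criteria can be summarized in a short sketch; this is not the authors' script, and the BAM file name, the length of the constant primer region, and the homology-arm length below are assumptions used only for illustration:

```python
# Simplified sketch of the read-pair counting criteria described above.
import pysam
from collections import Counter

CONSTANT_REGION_END = 25   # bp of constant primer sequence at the reference start (assumed)
MIN_INSERT = 90            # length of the homology arm used for recombination (assumed)

counts = Counter()
with pysam.AlignmentFile("anchorseq.bam", "rb") as bam:   # hypothetical file name
    for read in bam:
        if read.is_unmapped or read.mate_is_unmapped or not read.is_read1:
            continue                                       # require both mates aligned
        if read.reference_start > CONSTANT_REGION_END:
            continue                                       # forward read must start in the constant region
        if abs(read.template_length) <= MIN_INSERT:
            continue                                       # insert must exceed the homology arm
        counts[read.reference_name] += 1

for locus, n in counts.most_common(5):
    print(locus, n)
```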
For the analysis of data obtained from amplicon sequencing (i.e., from PCR and SIC amplification reactions), the reads were either denoised from sequencing errors using dada2 (version 1.5.2)37 to evaluate fidelity and abundance, or directly aligned with bowtie2 to a reference built from the designed oligonucleotides. Denoised reads were assigned to loci based on the minimal Hamming distance to the designed oligonucleotides.
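The assignment step amounts to a nearest-neighbor search under the Hamming distance; the sketch below illustrates this with placeholder sequences and assumes equal-length sequences, which is not necessarily how the original script handles length differences:

```python
# Sketch of assigning a denoised read to the closest designed oligonucleotide
# by Hamming distance; sequences and names are placeholders.
def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def assign(read: str, designs: dict) -> tuple:
    """Return (name, distance) of the closest designed oligonucleotide."""
    return min(((name, hamming(read, seq)) for name, seq in designs.items()),
               key=lambda hit: hit[1])

designs = {"oligo_0001": "ACGTACGTAC", "oligo_0002": "ACGTTCGTAT"}
print(assign("ACGTACGTAA", designs))   # ('oligo_0001', 1)
```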
Analysis of nanopore sequencing data and read counting
Nanopore sequencing yields very long reads. Therefore, the reference was assembled as described above, but using 2000 bp of the locus-specific sequences plus the constant sequence of the cassette enriched by the Anchor-Seq reaction. MinION data were basecalled using the Albacore Sequencing Pipeline Software v2.0.2 (Oxford Nanopore Technologies). For data analysis, a custom script was used to extract and de-multiplex informative sequence segments from all reads based on approximate matching of amplicon features (e.g., the constant region of the vectorette or feature cassette; Julia v0.6.0 with BioSequences v0.8.0, see above). Matching with a Levenshtein distance of 1 was sufficient to discriminate between the barcodes used in this study. Then, the extracted sequence segments were aligned to the reference using minimap2 (v2.2-r409)62 with the default parameters for mapping long noisy genomic reads (command line option: "-ax map-ont"). Only reads that mapped to the beginning of the reference were counted using a custom shell script. The count data for the clones retrieved in each library and for the cells contained in the individual bins after FACS are provided in Supplementary Table 3.
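Approximate barcode matching of this kind can be sketched as follows; the barcode sequences are placeholders and the function names are not taken from the original Julia script:

```python
# Sketch of barcode de-multiplexing by approximate matching; a Levenshtein
# distance of 1 separates the (placeholder) barcodes unambiguously.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def demultiplex(segment: str, barcodes: dict, max_dist: int = 1):
    hits = [(name, levenshtein(segment, bc)) for name, bc in barcodes.items()]
    hits = [h for h in hits if h[1] <= max_dist]
    return hits[0][0] if len(hits) == 1 else None       # None if ambiguous or unmatched

barcodes = {"bin1": "ACGTACGT", "bin2": "TGCATGCA"}
print(demultiplex("ACGTACGA", barcodes))                # 'bin1'
```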
Calculation of fluorescence intensity estimates
Fluorescence intensity estimates were calculated as previously described for FACS-based profiling of pooled yeast libraries63: Let $b$ be a natural number from 1 to 8 indicating one of our eight FACS bins $B$, for which we know the fraction $p_b$ of the total cell population sorted into this bin. Further, we determined by sequencing, for each bin, the number of reads $r_{g,b}$ of an individual genotype $g$ (tagged ORF) among all detected genotypes $G$. The observed unnormalized cell distribution of $g$ is given by:
$$\tilde{C}_g(b) = \frac{r_{g,b}}{\sum_{g \in G} r_{g,b}}\, p_b.$$
We define the fluorescence intensity estimate for $g$ as the empirical mean of $\tilde{C}_g$:
$$\text{fluorescence intensity estimate} := E_{b\sim \tilde{C}_g}[b] = \sum_{b \in B} b \cdot \frac{\tilde{C}_g(b)}{\sum_{b' \in B} \tilde{C}_g(b')}.$$
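A direct implementation of these two expressions might look as follows; the bin fractions and read counts are placeholders, not data from this study:

```python
# Direct implementation of the unnormalized cell distribution and the
# fluorescence intensity estimate defined above; all inputs are placeholders.
import numpy as np

p_b = np.array([0.05, 0.20, 0.20, 0.20, 0.25, 0.05, 0.04, 0.01])  # fraction of cells per bin (assumed)
reads = {  # r_{g,b}: reads per genotype in each of the eight bins (hypothetical)
    "genotype_A": np.array([0, 1, 2, 10, 50, 80, 120, 60]),
    "genotype_B": np.array([5, 40, 60, 30, 10, 2, 1, 0]),
}
total_per_bin = np.sum(list(reads.values()), axis=0)   # sum of reads over all genotypes, per bin

bins = np.arange(1, 9)
for genotype, r in reads.items():
    c_tilde = r / total_per_bin * p_b                  # unnormalized cell distribution
    estimate = np.sum(bins * c_tilde) / c_tilde.sum()  # empirical mean over bins
    print(genotype, round(float(estimate), 2))
```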
Calculations and statistical analyses
Statistical analyses were performed using R as specified in the scripts or legends.
Estimation of co-integrant number
We assumed that most co-integrants would result from doubly transformed individuals. Thus, the number of phenotypically heterozygous individuals (e.g., GFP+ RFP+ or kanR hygR) represents half of the co-integrants, provided that both feature cassettes, transformed at equimolar ratios, have an equal probability of being taken up together with the like cassette (i.e., GFP+ GFP+ and RFP+ RFP+) as with each other. Further, we assumed that the fluorescent protein or the antibiotic resistance marker present in the feature cassette had no or only a minor impact on integration efficiency.
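A worked example of this estimate, with invented numbers: under the independence assumption, doubly transformed clones carry the marker combinations AB, AA, and BB at a 2:1:1 ratio, so the observable heterozygous fraction is about half of all co-integrants:

```python
# Worked example of the co-integrant estimate with hypothetical counts.
n_clones = 10_000           # total clones inspected (hypothetical)
n_heterozygous = 250        # e.g., GFP+ RFP+ clones observed (hypothetical)
estimated_cointegrants = 2 * n_heterozygous
print(f"estimated co-integrant rate: {estimated_cointegrants / n_clones:.1%}")  # 5.0%
```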
Calculation of copy number changes during RCA
Copy numbers (UMI counts) were normalized to the median UMI frequency in each sequencing experiment, and the Gaussian kernel density estimate was plotted. Fold changes were calculated as the normalized UMI counts after RCA divided by the normalized UMI counts after PCR for each oligonucleotide.
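In code, this normalization and fold-change calculation reduces to a few lines; the UMI counts below are placeholders:

```python
# Sketch of the median normalization and per-oligonucleotide fold-change
# calculation; UMI counts are placeholders.
import numpy as np

umi_pcr = np.array([10, 55, 60, 70, 400])    # UMI counts after PCR (hypothetical)
umi_rca = np.array([12, 30, 65, 200, 800])   # UMI counts after RCA (hypothetical)

norm_pcr = umi_pcr / np.median(umi_pcr)
norm_rca = umi_rca / np.median(umi_rca)
fold_change = norm_rca / norm_pcr
print(np.round(fold_change, 2))
```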
Software and figure generation
Proportional Venn diagrams were generated using eulerAPE64. Analyses were performed using R v3.4.1/v3.5.1 with Biostrings v2.44.265 and data.table v1.10.4/v1.11.4. Plots were generated using ggplot2 v2.3.0 and figures were made using Apple Keynote 8.2.
Raw sequencing data have been deposited at the BioProject database under accession code PRJNA545279 as well as at heiDATA (https://doi.org/10.11588/data/L45TRX). Plasmids and plasmid maps are available upon request. The source data underlying Figs. 1b–d, 3b–i, 4a–c, 5b, c, e and Supplementary Figs. 1, 2, 3a–c, 4, 5a–d, 8, 9, and 10 are provided as a Source Data file. Any other relevant data are available from the authors upon request.
The source code of the R shiny application for oligonucleotide design is available from our github repository (https://github.com/knoplab/castR/tree/v1.0).
Sopko, R. et al. Mapping pathways and phenotypes by systematic gene overexpression. Mol. Cell 21, 319–330 (2006).
Douglas, A. C. et al. Functional analysis with a barcoder yeast gene overexpression system. G3 2, 1279–1289 (2012).
Kuzmin, E. et al. Systematic analysis of complex genetic interactions. Science 360, eaao1729 (2018).
Ghaemmaghami, S. et al. Global analysis of protein expression in yeast. Nature 425, 737–741 (2003).
Hu, C.-D., Chinenov, Y. & Kerppola, T. K. Visualization of interactions among bZIP and Rel family proteins in living cells using bimolecular fluorescence complementation. Mol. Cell 9, 789–798 (2002).
Belle, A., Tanay, A., Bitincka, L., Shamir, R. & O'Shea, E. K. Quantification of protein half-lives in the budding yeast proteome. Proc. Natl. Acad. Sci. USA 103, 13004–13009 (2006).
Khmelinskii, A. et al. Tandem fluorescent protein timers for in vivo analysis of protein dynamics. Nat. Biotechnol. 30, 708–714 (2012).
Gavin, A.-C. et al. Functional organization of the yeast proteome by systematic analysis of protein complexes. Nature 415, 141–147 (2002).
Krogan, N. J. et al. Global landscape of protein complexes in the yeast Saccharomyces cerevisiae. Nature 440, 637–643 (2006).
Tarassov, K. et al. An in vivo map of the yeast protein interactome. Science 320, 1465–1470 (2008).
Yofe, I. et al. One library to make them all: streamlining the creation of yeast libraries via a SWAp-Tag strategy. Nat. Methods 13, 371–378 (2016).
Meurer, M. et al. Genome-wide C-SWAT library for high-throughput yeast genome tagging. Nat. Methods 15, 598–600 (2018).
Winston, F., Dollard, C. & Ricupero-Hovasse, S. L. Construction of a set of convenient Saccharomyces cerevisiae strains that are isogenic to S288C. Yeast 11, 53–55 (1995).
Wilkening, S. et al. Genotyping 1000 yeast strains by next-generation sequencing. BMC Genom. 14, 90 (2013).
Roemer, T., Davies, J., Giaever, G. & Nislow, C. Bugs, drugs and chemical genomics. Nat. Chem. Biol. 8, 46–56 (2011).
Shalem, O. et al. Genome-scale CRISPR-Cas9 knockout screening in human cells. Science 343, 84–87 (2014).
Zhou, Y. et al. High-throughput screening of a CRISPR/Cas9 library for functional genomics in human cells. Nature 509, 487–491 (2014).
Kuscu, C. et al. CRISPR-STOP: gene silencing through base-editing-induced nonsense mutations. Nat. Methods 14, 710–712 (2017).
Garst, A. D. et al. Genome-wide mapping of mutations at single-nucleotide resolution for protein, metabolic and genome engineering. Nat. Biotechnol. 35, 48–55 (2017).
Sadhu, M. J. et al. Highly parallel genome variant engineering with CRISPR-Cas9. Nat. Genet. 50, 510–514 (2018).
Roy, K. R. et al. Multiplexed precision genome editing with trackable genomic barcodes in yeast. Nat. Biotechnol. 36, 512–520 (2018).
Guo, X. et al. High-throughput creation and functional profiling of DNA sequence variant libraries using CRISPR–Cas9 in yeast. Nat. Biotechnol. 36, 540–546 (2018).
Nitta, N. et al. Intelligent image-activated cell sorting. Cell 175, 266–276.e13 (2018).
Zetsche, B. et al. Cpf1 is a single RNA-guided endonuclease of a class 2 CRISPR-Cas system. Cell 163, 759–771 (2015).
Verwaal, R., Buiting-Wiessenhaan, N., Dalhuijsen, S. & Roubos, J. A. CRISPR/Cpf1 enables fast and simple genome editing of Saccharomyces cerevisiae. Yeast 35, 201–211 (2017).
Zhang, L., Kasif, S., Cantor, C. R. & Broude, N. E. GC/AT-content spikes as genomic punctuation marks. Proc. Natl. Acad. Sci. USA 101, 16855–16860 (2004).
Kim, H. K. et al. In vivo high-throughput profiling of CRISPR-Cpf1 activity. Nat. Methods 14, 153–159 (2017).
Tu, M. et al. A 'new lease of life': FnCpf1 possesses DNA cleavage activity for genome editing in human cells. Nucleic Acids Res. 45, 11295–11304 (2017).
Swiat, M. A. et al. FnCpf1: a novel and efficient genome editing tool for Saccharomyces cerevisiae. Nucleic Acids Res. 45, 12585–12598 (2017).
Arimbasseri, A. G., Rijal, K. & Maraia, R. J. Transcription termination by the eukaryotic RNA polymerase III. Biochim. Biophys. Acta 1829, 318–330 (2013).
Orr-Weaver, T. L., Szostak, J. W. & Rothstein, R. J. Yeast transformation: a model system for the study of recombination. Proc. Natl. Acad. Sci. USA 78, 6354–6358 (1981).
Huh, W.-K. et al. Global analysis of protein localization in budding yeast. Nature 425, 686–691 (2003).
Dubreuil, B. et al. YeastRGB: comparing the abundance and localization of yeast proteins across cells and libraries. Nucleic Acids Res. 425, 737 (2018).
Shaner, N. C. et al. A bright monomeric green fluorescent protein derived from Branchiostoma lanceolatum. Nat. Methods 10, 407–409 (2013).
Kivioja, T. et al. Counting absolute numbers of molecules using unique molecular identifiers. Nat. Methods 9, 72–74 (2011).
Engel, S. R. et al. The reference genome sequence of Saccharomyces cerevisiae: then and now. G3 4, 389–398 (2014).
Callahan, B. J. et al. DADA2: high-resolution sample inference from Illumina amplicon data. Nat. Methods 13, 581–583 (2016).
Ho, B. et al. Unification of protein abundance datasets yields a quantitative Saccharomyces cerevisiae proteome. Cell Syst. 6, 192–205 (2018).
Newman, J. R. S. et al. Single-cell proteomic analysis of S. cerevisiae reveals the architecture of biological noise. Nature 441, 840–846 (2006).
de Godoy, L. M. et al. Comprehensive mass-spectrometry-based proteome quantification of haploid versus diploid yeast. Nature 455, 1251–1254 (2008).
Baudin, A., Ozier-Kalogeropoulos, O., Denouel, A., Lacroute, F. & Cullin, C. A simple and efficient method for direct gene deletion in Saccharomyces cerevisiae. Nucleic Acids Res. 21, 3329–3330 (1993).
Khmelinskii, A. et al. Protein quality control at the inner nuclear membrane. Nature 516, 410–413 (2014).
Billon, P. et al. CRISPR-mediated base editing enables efficient disruption of eukaryotic genes through induction of STOP codons. Mol. Cell 67, 1068–1079.e4 (2017).
Tong, A. H. et al. Systematic genetic analysis with ordered arrays of yeast deletion mutants. Science 294, 2364–2368 (2001).
Kosuri, S. & Church, G. M. Large-scale de novo DNA synthesis: technologies and applications. Nat. Methods 11, 499–507 (2014).
Klein, J. C. et al. Multiplex pairwise assembly of array-derived DNA oligonucleotides. Nucleic Acids Res. 44, e43–e43 (2016).
Anand, R., Beach, A., Li, K. & Haber, J. Rad51-mediated double-strand break repair and mismatch correction of divergent substrates. Nature 544, 377–380 (2017).
Haruki, H., Nishikawa, J. & Laemmli, U. K. The anchor-away technique: rapid, conditional establishment of yeast mutant phenotypes. Mol. Cell 31, 925–932 (2008).
Barteneva, N. S., Fasler-Kan, E. & Vorobjev, I. A. Imaging flow cytometry: coping with heterogeneity in biological systems. J. Histochem. Cytochem. 60, 723–733 (2012).
Fueller, J. et al. CRISPR/Cas12a-assisted PCR tagging of mammalian genes. Preprint at bioRxiv https://www.biorxiv.org/content/10.1101/473876v1 (2018).
National Academies of Sciences, Engineering, and Medicine, Committee on Gene Drive Research in Non-Human Organisms: Recommendations for Responsible Conduct, Board on Life Sciences, Division on Earth and Life Studies. Advancing Science, Navigating Uncertainty, and Aligning Research with Public Values (National Academies Press, Washington, DC, 2016). https://doi.org/10.17226/23405.
DiCarlo, J. E., Chavez, A., Dietz, S. L., Esvelt, K. M. & Church, G. M. Safeguarding CRISPR-Cas9 gene drives in yeast. Nat. Biotechnol. 33, 1250–1255 (2015).
Brachat, A., Kilmartin, J. V., Wach, A. & Philippsen, P. Saccharomyces cerevisiae cells with defective spindle pole body outer plaques accomplish nuclear migration via half-bridge-organized microtubules. Mol. Biol. Cell 9, 977–991 (1998).
Johnston, M. A model fungal gene regulatory mechanism: the GAL genes of Saccharomyces cerevisiae. Microbiol. Rev. 51, 458–476 (1987).
Knop, M. et al. Epitope tagging of yeast genes using a PCR-based strategy: more tags and improved practical routines. Yeast 15, 963–972 (1999).
Janke, C. et al. A versatile toolbox for PCR-based tagging of yeast genes: new fluorescent proteins, more markers and promoter substitution cassettes. Yeast 21, 947–962 (2004).
Sheff, M. A. & Thorn, K. S. Optimized cassettes for fluorescent protein tagging in Saccharomyces cerevisiae. Yeast 21, 661–670 (2004).
Khmelinskii, A. & Knop, M. Analysis of protein dynamics with tandem fluorescent timers. Methods Mol. Biol. 1174, 195–210 (2014).
Langmead, B. & Salzberg, S. L. Fast gapped-read alignment with Bowtie 2. Nat. Methods 9, 357–359 (2012).
Anders, S., Pyl, P. T. & Huber, W. HTSeq—a Python framework to work with high-throughput sequencing data. Bioinformatics 31, 166–169 (2015).
Smith, T. et al. UMI-tools: modeling sequencing errors in Unique Molecular Identifiers to improve quantification accuracy. Genome Res. 27, 491 (2017).
Li, H. Minimap2: pairwise alignment for nucleotide sequences. Bioinformatics 34, 3094–3100 (2018).
Kats, I. et al. Mapping degradation signals and pathways in a eukaryotic N-terminome. Mol. Cell 70, 488–501.e5 (2018).
Micallef, L. & Rodgers, P. eulerAPE: drawing area-proportional 3-Venn diagrams using ellipses. PLoS ONE 9, e101717 (2014).
Pagès, H., Aboyoun, P., Gentleman, R. & DebRoy, S. Biostrings: String objects representing biological sequences, and matching algorithms. R package version 2.40.2. (2016).
The authors wish to thank Ilia Kats, Cyril Mongis, and Krisztina Gubicza for help with IT infrastructure and experiments. We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG KN498/12–1), the state of Baden-Württemberg through bwHPC for high-performance computing and SDS@hd for data storage (grant INST 35/1314–1 FUGG), and the Dietmar Hopp foundation. K.H. was supported by a HBIGS graduate school fellowship. We also acknowledge help from the Flow Cytometry Core Facility at ZMBH, and the Deep Sequencing Core Facility of the University of Heidelberg, both of which are supported by the CellNetworks cluster of excellence. E.D.L. acknowledges support from A.-M. Boucher, from the Estelle Funk Foundation, the Estate of Fannie Sherr, the Estate of Albert Delighter, the Merle S. Cahn Foundation, Mrs. Mildred S. Gosden, the Estate of Elizabeth Wachsman, the Arnold Bortman Family Foundation. E.D.L. is incumbent of the Recanati Career Development Chair of Cancer Research.
These authors contributed equally: Benjamin C. Buchmuller, Konrad Herbst.
Zentrum für Molekulare Biologie der Universität Heidelberg (ZMBH), DKFZ-ZMBH Alliance, 69120, Heidelberg, Germany
Benjamin C. Buchmuller, Konrad Herbst, Matthias Meurer, Daniel Kirrmaier & Michael Knop
Cell Morphogenesis and Signal Transduction, German Cancer Research Center (DKFZ), DKFZ-ZMBH Alliance, 69120, Heidelberg, Germany
Daniel Kirrmaier & Michael Knop
Department of Structural Biology, Weizmann Institute of Science, Rehovot, 7610001, Israel
Ehud Sass & Emmanuel D. Levy
M.K. conceived the project. M.K., B.C.B., K.H., and M.M. designed the experiments and B.C.B., K.H., M.M., and D.K. performed the experiments. E.D.L. and E.S. contributed methods. K.H., B.C.B., M.K., and M.M. analyzed the data. M.K. and B.C.B. wrote the manuscript. All authors read and approved the final manuscript.
Correspondence to Michael Knop.
Peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Data 1
Description of Additional Supplementary Files
Peer Review File
Buchmuller, B.C., Herbst, K., Meurer, M. et al. Pooled clone collections by multiplexed CRISPR-Cas12a-assisted gene tagging in yeast. Nat Commun 10, 2960 (2019). https://doi.org/10.1038/s41467-019-10816-7
This article is cited by:
Enhanced ethanol production from sugarcane molasses by industrially engineered Saccharomyces cerevisiae via replacement of the PHO4 gene. Renzhi Wu, Dong Chen, Shuwei Cao, Zhilong Lu, Jun Huang, Qi Lu, Ying Chen, Xiaoling Chen, Ni Guan, Yutuo Wei & Ribo Huang. RSC Advances (2020).
Design and Construction of Portable CRISPR-Cpf1-Mediated Genome Editing in Bacillus subtilis 168 Oriented Toward Multiple Utilities. Wenliang Hao, Feiya Suo, Qiao Lin, Qiaoqing Chen, Li Zhou, Zhongmei Liu, Wenjing Cui & Zhemin Zhou. Frontiers in Bioengineering and Biotechnology (2020).
CRISPR-Cas12a–assisted PCR tagging of mammalian genes. Julia Fueller, Konrad Herbst, Matthias Meurer, Krisztina Gubicza, Bahtiyar Kurtulmus, Julia D. Knopf, Daniel Kirrmaier, Benjamin C. Buchmuller, Gislene Pereira, Marius K. Lemberg & Michael Knop. Journal of Cell Biology (2020).
A colorimetric RT-LAMP assay and LAMP-sequencing for detecting SARS-CoV-2 RNA in clinical samples. Viet Loan Dao Thi, Kathleen Boerner, Lukas PM Kremer, Andrew Freistaedter, Dimitrios Papagiannidis, Carla Galmozzi, Megan L. Stanifer, Steeve Boulant, Steffen Klein, Petr Chlanda, Dina Khalid, Isabel Barreto Miranda, Paul Schnitzler, Hans-Georg Kräusslich, Michael Knop & Simon Anders. Science Translational Medicine (2020).
Techno-economic analysis of a new downstream process for the production of astaxanthin from the microalgae Haematococcus pluvialis
Andreas Bauer & Mirjana Minceva
Bioresources and Bioprocessing volume 8, Article number: 111 (2021)
The biotechnological production of the carotenoid astaxanthin is carried out with the microalgae Haematococcus pluvialis (H. pluvialis). Under nutrient deficiency and light stress, H. pluvialis accumulates astaxanthin intracellularly and forms a resistant cyst cell wall that impedes direct astaxanthin extraction. Therefore, a complex downstream process is required, including centrifugation, mechanical cell wall disruption, drying, and supercritical extraction of astaxanthin with CO2. In this work, an alternative downstream process based on the direct extraction of astaxanthin from the algal broth into ethyl acetate using a centrifugal partition extractor (CPE) was developed. Mechanical cell wall disruption or germination of the cysts was carried out to make astaxanthin accessible to the solvent. Zoospores containing astaxanthin are released when growth conditions are applied to cyst cells, and astaxanthin can be extracted directly from them into ethyl acetate. Energy-intensive unit operations such as spray-drying and extraction with supercritical CO2 can thus be replaced by the direct extraction of astaxanthin into ethyl acetate. Extraction yields of 85% were reached, and 3.5 g of oleoresin could be extracted from 7.85 g of homogenised H. pluvialis biomass using a CPE unit with a 244 mL column volume. A techno-economic analysis was carried out for a hypothetical H. pluvialis production facility with an annual biomass output of 8910 kg. Four downstream scenarios were examined, comparing the novel process of astaxanthin extraction from homogenised cyst cells and germinated zoospores via CPE extraction with the conventional industrial process using in-house supercritical CO2 extraction or extraction via an external service provider. After 10 years of operation, the highest net present value (NPV) was determined for the CPE extraction from germinated zoospores.
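As a reminder of the figure of merit used in this comparison, the net present value over the operating period can be written in its standard form (this is the general definition, not a relation specific to the cost data of this study):

$$\mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^{t}},$$

where $CF_t$ is the net cash flow in year $t$ (negative for the initial investment), $r$ is the discount rate, and $T = 10$ years is the operating period considered here.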
The red carotenoid astaxanthin is used as a feed additive for colouring salmon, seafood, and poultry (Shah et al. 2016). It is increasingly used in the cosmetics and dietary supplement industry due to its antioxidant characteristics and health-promoting properties (Li et al. 2020; Astaxanthin Market Size, Share & Trends Analysis Report 2020). Astaxanthin can be chemically synthesised or biotechnologically produced with the microalgae H. pluvialis (Nguyen 2013). Due to the increased consumer demand for sustainable ecological products, the market for biotechnologically produced astaxanthin is expected to rise to US $148.1 million (Haematococcus Pluvialis Market 2021).
The life cycle of H. pluvialis can be divided into mobile and non-mobile phases (Zhang et al. 2017). During favourable growth conditions, the microalgae live mainly as green, flagellated vegetative cells (Fig. 1a). Vegetative cells consist of a cell membrane and an extracellular gelatinous matrix (Hagen et al. 2002). Under stress conditions (nitrate depletion and high light intensity), the vegetative cells become round, expand in cell size, and form immobile aplanospores (Fig. 1b). They accumulate astaxanthin in the cytoplasm of the cell under persistent stress conditions (Fig. 1c) and develop a rigid and resistant cell wall (Fig. 1d) (Hagen et al. 2002; Grünewald et al. 1997). When growth conditions are applied to cyst cells, these form a sporangium and release astaxanthin containing zoospores (Fig. 1e), which only have a thin cell matrix (Fig. 1f). After a specific time, the zoospores become round and form non-motile aplanospores (Fig. 1g). The industrial process is usually performed phototrophically in two steps. In the first stage of the process, the algal biomass is cultivated to reach high cell concentrations under optimal growth conditions, with a sufficient supply of nutrients such as nitrates and phosphates (Nahidian et al. 2018), CO2 (Chekanov et al. 2017), and artificial lighting (Katsuda et al. 2004; Xi et al. 2016b). Under nitrate and phosphate deficiency and light stress (Xi et al. 2016a; Sun et al. 2015), the astaxanthin synthesis is initiated in the second step. Astaxanthin accumulation is accompanied by the formation of a resistant cyst cell wall, which impedes direct and efficient astaxanthin extraction. Consequently, a complex downstream process is required, including harvesting the biomass via centrifugation, mechanical cell wall disruption, spray-drying, and the extraction of astaxanthin using supercritical CO2 (Panis and Carreon 2016). The current industrial process is shown schematically in Fig. 2, including the conventional downstream process with in-house (D1) supercritical CO2 extraction and via an external service provider (D4). Downstream processing in biotechnological processes often represents a bottleneck, showing potential for considerable economic savings (Hatti-Kaul 2000; Minceva and Bauer 2020). Microalgae harvesting may already account for up to 20–30% of the total production costs (Panis and Carreon 2016).
Schematic presentation of the biotechnological production of H. pluvialis, including centrifugation, mechanical cyst cell disruption, drying and a supercritical CO2 extraction performed in-house (D1) and by an external service provider (D4), as well as liquid–liquid chromatographic extraction of astaxanthin from mechanical disrupted cyst cells (D2) and germinated zoospores (D3) into ethyl acetate, and a subsequent solvent evaporation step
Schematic presentation of the extraction of astaxanthin into a solvent using a CPE column, including a filling the column with solvent; b equilibration with water; c injection of the algal biomass (zoospores or mechanically disrupted cyst cells); d extraction of astaxanthin from the aqueous algal broth into the solvent, and e fractioning the stationary phase in the elution–extrusion mode
Due to the highly rigid cell wall of the H. pluvialis cyst cells, the mechanical cell wall disruption also represents a procedural challenge. Energy-intensive mechanical processes such as bead milling or high-pressure homogenisation are used for industrial cell wall disruption. Up to three cycle repetitions are needed to achieve sufficient cell wall disruption efficiency using high-pressure homogenisation (Praveenkumar et al. 2020). Drying represents an energy-intensive process step due to the high evaporation enthalpy of water (Δhvap. = 2442 kJ kg−1 at 25 °C) (Lide 2005). Spray-drying is commonly used in industry for drying H. pluvialis biomass. In this process step, the risk of astaxanthin degradation due to high temperatures or oxidation needs to be considered. The biomass obtained in the drying step must have a defined density to be processed in the subsequent supercritical CO2 extraction. In the literature, supercritical CO2 extraction of astaxanthin from H. pluvialis has been studied intensively (Krichnavaruk et al. 2008; Molino et al. 2018). Similar maximum extraction yields of 94% and 92% were reported for supercritical CO2 extraction at a pressure of 550 bar and 50 °C (without co-solvent) and 65 °C (with ethanol as co-solvent), respectively (Molino et al. 2018). An increase in the temperature to 80 °C was accompanied by a strong decrease in the yield, which was attributed to the degradation of astaxanthin at these temperatures (Molino et al. 2018). In another study, significantly lower yields of approx. 25% were obtained at a pressure of 400 bar and 70 °C. The yield could be increased to 36% by adding 10% (v/v) soybean oil as co-solvent (Krichnavaruk et al. 2008). On an industrial scale, up to 1000 bar are used for supercritical CO2 extraction of astaxanthin from H. pluvialis. Applying pressures ≥ 800 bar and a temperature range from 60 to 80 °C, extraction yields larger than 90% have been reported on an industrial scale (Tippelt 2019).
So far, the biotechnological production of astaxanthin from H. pluvialis cannot compete with synthetically produced astaxanthin in terms of production costs (Li et al. 2011). Several approaches to improve the downstream process of biotechnological astaxanthin production have been presented in the literature (Khoo et al. 2019b). These include alternatives to mechanical cyst disruption using hydrochloric acid or sodium hydroxide, followed by extraction using acetone, where extraction yields of 35% and 30% were reached (Mendes-Pinto et al. 2001). Other alternatives are magnetic-assisted extraction (Zhao et al. 2016) and ultrasound-assisted solvent extraction from dried biomass (Zou et al. 2013). Also, ionic liquids were used to extract astaxanthin from germinated zoospores (Praveenkumar et al. 2015) or dried cyst cell biomass (Liu et al. 2019). Using the CO2-based ionic liquid dimethylammonium dimethylcarbamate, extraction yields of 93% were reached for the extraction from dried cyst biomass (Khoo et al. 2021). Using a liquid biphasic flotation system composed of 2-propanol and (NH4)2SO4, extraction yields of 95% could be achieved within 15 min from 10 mg of dried and disrupted H. pluvialis biomass dissolved in the salt-rich phase (Khoo et al. 2019a). By integrating an ultrasound horn, this process could be further optimised and scaled up to extract 500 mg of dried H. pluvialis biomass, where yields of 84% were reached (Khoo et al. 2020). In this study, two novel downstream processes (D2 and D3) were developed to extract astaxanthin from mechanically disrupted cyst cells (D2) or germinated zoospores (D3) directly from the fermentation broth into ethyl acetate using a liquid–liquid chromatographic column. In both cases, the astaxanthin oleoresin could be recovered after evaporation of ethyl acetate. The energy-intensive drying step and extraction with supercritical CO2 can be replaced in the novel downstream processes. The new downstream processes were compared with D1 and D4, which represent the current industrial downstream process, including mechanical cell wall disruption, homogenisation, drying and extraction with supercritical CO2. In scenario D1, the supercritical CO2 extraction is performed in-house; in D4, it was considered to be carried out by an external service provider.
The core of the new downstream processes D2 and D3 is the liquid–liquid extraction of astaxanthin from the algal broth into ethyl acetate using a liquid–liquid chromatographic unit. Liquid–liquid chromatography is a solid support-free chromatographic method based on the distribution of solutes between two liquid phases. One of the two liquid phases (ethyl acetate saturated with water in processes D2 and D3) is held stationary in the unit by a centrifugal force. The other phase, the mobile phase (homogenised cyst cells in D2 and germinated zoospores in D3), is pumped through the stationary phase. Dispersion of the mobile phase into the stationary phase occurs, and solutes with lower partition coefficients move faster through the column than those with higher partition coefficients. Depending on the partition coefficients of the solutes, solute separation or extraction can be achieved. If the partition coefficient of a solute is very high, the solute will take a long time to elute from the column. This situation is unfavourable for chromatographic separation, but highly advantageous for extracting astaxanthin from an aqueous algal broth into ethyl acetate. Liquid–liquid chromatographic units exist in hydrodynamic and hydrostatic versions (Ito 2005). In this work, a hydrostatic CPE unit was used. A CPE column is composed of alternately stacked annular plates and annular discs. Chambers are milled into the annular discs, and channels link these chambers. Between two annular discs, an annular plate connects the last chamber of an annular disc with the next through a hole in the annular plate. Annular discs and annular plates are alternately placed on top of each other and mounted on the axis of a centrifuge. A centrifugal force is generated by rotation, and one phase is retained in the chambers (stationary phase, ethyl acetate), while the second phase (mobile phase, algal broth) is pumped through the column from chamber to chamber (Goll et al. 2015). If the mobile phase is the denser phase, this mode is called descending mode; if the mobile phase is the less dense phase, it is called ascending mode. CPE was already used for the extraction of β-carotene from the microalgae Dunaliella salina (Marchal et al. 2013) and torularhodin from the yeast Rhodotorula rubra (Ungureanu et al. 2013).
In this work, operating parameters for the CPE extraction from mechanically disrupted cyst cells and germinated zoospores were selected. A techno-economic study was performed to compare the novel CPE extraction processes from homogenised cyst cells and flagellated zoospores with the industrial supercritical CO2 extraction performed in-house or via an external service provider.
H. pluvialis cyst cell disruption and germination
The biomass for the CPE extraction experiment was provided by the project partner Sea & Sun Technology GmbH, Germany. Either mechanically disrupted cyst cells or germinated zoospores were used for CPE extraction. For germination, cyst cells were harvested, centrifuged at 5500 rpm with a Sigma 3-16KL centrifuge from Sigma GmbH (Germany), and washed with distilled water. A previous publication demonstrated that zoospore release was enhanced under heterotrophic germination conditions compared to photo- or mixotrophic germination (Bauer and Minceva 2021). The highest extraction yield of astaxanthin into ethyl acetate was reached by combining mixotrophic and heterotrophic germination conditions at twice the nitrate concentration of Bold's modified basal medium (BBM), with illumination under mixotrophic conditions until nitrate depletion and subsequent germination under heterotrophic conditions (Bauer and Minceva 2021). Thus, to germinate the cyst cells, these were suspended in BBM with 4 mM glucose and illuminated under mixotrophic conditions for 21 h with red light (emission maximum at 658 nm) at an intensity of 75 µmol m−2 s−1, followed by heterotrophic cultivation for 28 h (Bauer and Minceva 2021). Red light was chosen because higher H. pluvialis growth rates have been reported for it compared to fluorescent lamps (Katsuda et al. 2004). Germination was carried out in ambient air without additional CO2. CPE extraction was performed 49 h after the start of germination, when the maximum zoospore release was achieved. Mechanical cell wall disruption of H. pluvialis cyst cells was performed using the APV 1000 high-pressure homogeniser from APV Systems (Denmark) at 750 bar. Mechanical cell wall disruption was carried out in one or three cycles.
Astaxanthin quantification
The astaxanthin content of the biomass and of the extracts in ethyl acetate was determined as described in our previous study (Bauer and Minceva 2019). For HPLC analysis, the astaxanthin extract was dissolved in solvent B (methanol, MTBE, water, 8:89:3, v/v) and filtered with a 0.22 µm disposable nylon syringe filter. The astaxanthin quantification was carried out on an HPLC unit (LC-20AB, Shimadzu, Japan), using a YMC carotenoid column (C30, 3 μm, 150 × 4.6 mm, YMC Co., Japan) with a diode array detector (SPD-M20A, Shimadzu, Japan) according to our previous study (Bauer and Minceva 2019). Solvent A (methanol, MTBE, water, 81:15:4, v/v) and solvent B (methanol, MTBE, water, 8:89:3, v/v) were used as the mobile phase. The gradient of solvents A and B was as follows: 2% solvent B for 11 min, a linear gradient from 2% solvent B to 40% solvent B for 7 min, 40% solvent B for 6.5 min followed by a linear gradient to 100% solvent B for 2.5 min, 100% solvent B for 3 min, a linear gradient to 2% solvent B for 3 min, held for 3 min. The mobile phase flow rate was 1 mL min−1, and the injection volume was 10 µL.
Extraction of astaxanthin from H. pluvialis using a centrifugal partition extractor
Extraction experiments were conducted using the centrifugal partition extractor CPC 250 PRO SPECIAL BIO Version (acronym CPE) from Armen Instrument (France), with an experimentally determined column volume of 244 mL (Roehrer and Minceva 2019). The column consists of 12 discs, where each disc has 20 engraved twin-cells resulting in a total of 240 cells. The discs are made of stainless steel and are also coated with polytetrafluoroethylene. The maximum rotational speed achievable was 3000 rpm, with a permitted pressure drop of 100 bar. Two isocratic pumps, model 306 50 C, from Gilson (USA), equipped with an 806 Manometric Module (Gilson, USA), were used to pump the two liquid phases for the CPE extraction experiments.
The process for the extraction of astaxanthin from zoospores or mechanically disrupted cyst cells using a CPE unit is presented schematically in Fig. 3. First, the CPE unit was filled with ethyl acetate (saturated with water) as the stationary phase (see Fig. 3a). The rotation was set to 1800 rpm, and then water (saturated with ethyl acetate) was pumped through the column (see Fig. 3b). Depending on the set flow rate, a specific amount of the stationary phase was displaced from the column.
Astaxanthin extracted into the stationary phase and after cleaning the CPE unit with 80 mL acetone, for injection volumes of 20, 120, 240, 360, and 480 mL of homogenised cyst cells with biomass concentrations of 33 g L−1, and a flow rate of 40 mL min−1 at a rotational speed of 1800 rpm
When the column reached its hydrodynamic equilibrium, i.e. no more stationary phase left the column, the algal broth (zoospores or mechanically disrupted cyst cells) was injected into the column via a 20-mL injection loop (Fig. 3c). The injected biomass concentrations cDW,injected of each experiment are presented in Table 1. After injection, water (saturated with ethyl acetate) was continuously pumped, and astaxanthin was extracted from the aqueous algal broth (zoospores or mechanically disrupted cyst cells) into ethyl acetate (see Fig. 3d). After a predefined time tswitch (Eq. 1) (Fig. 3d), the solvent-rich phase was pumped into the column (Fig. 3e), displacing the column content. The first fractions collected contained the water-rich phase. After the whole water-rich phase had eluted from the column, the solvent-rich phase loaded with astaxanthin was eluted, starting with the least concentrated fraction. The column was cleaned by injecting 80 mL acetone before the next extraction run was performed. Aliquots of the collected stationary phase and of the acetone used for cleaning were pipetted into 4 mL vials, evaporated using an Alpha 3–4 LSC basic freeze dryer from Martin Christ Gefriertrocknungsanlagen GmbH (Germany) and further processed to analyse the astaxanthin content using HPLC.
Table 1 Examined operating conditions of the used CPE column
The switching time, tswitch, was calculated according to Eq. 1, where VMP represents the volume of the water-rich phase in the column, F the flow rate of the mobile phase, and Vinj the injection volume of the zoospores or disrupted cyst cells. Equation 1 gives the time theoretically required for the injected zoospores or disrupted cyst cells to leave the CPE column:
$$t_{\text{switch}} = \frac{V_{\text{MP}} + V_{\text{inj}}}{F}. \tag{1}$$
The astaxanthin extraction yield Y was defined according to Eq. 2, as the sum of the extracted mass of astaxanthin in the stationary phase (solvent) mATX,SP and the amount of astaxanthin recovered after cleaning off the CPE column mATX,clean, divided by the amount of astaxanthin in the feed biomass injected, i.e. mATX,F (see Fig. 3c):
$$Y = \frac{m_{\text{ATX,SP}} + m_{\text{ATX,clean}}}{m_{\text{ATX,F}}}. \tag{2}$$
Astaxanthin cannot be extracted from cyst cells into ethyl acetate. Hence, Y depends on the number of released zoospores or mechanically disrupted cyst cells. Therefore, the yield Yextract, which considers the actual extractable amount of astaxanthin from the algal broth, was defined (Eq. 3):
$$Y_{\text{extract}} = \frac{Y}{Y_{\text{single-stage extraction}}}. \tag{3}$$
The extractable astaxanthin from the feed (zoospores or mechanically disrupted cyst cells) was determined for each extraction experiment by mixing 5 mL algal broth with 5 mL ethyl acetate using a Multi Bio RS-24 shaker from Biosan (Riga, Latvia) for 60 min. The samples were then centrifuged at 5500 rpm for 15 min with a Sigma 3-16KL centrifuge from Sigma GmbH (Germany). The mass of astaxanthin extracted into the solvent divided by the mass of astaxanthin in the feed before extraction defines the yield \(Y_{\text{single-stage extraction}}\).
The total extraction time textraction was defined as the sum of filling the column with the stationary phase (tfilling), equilibration time (tequilibration), the time until the column was empty (tswitch), the time for fractioning the stationary phase (tfractioning), and the time for cleaning the column (tcleaning) and is presented in Eq. 4:
$$t_{\text{extraction}} = t_{\text{filling}} + t_{\text{equilibration}} + t_{\text{switch}} + t_{\text{fractioning}} + t_{\text{cleaning}}. \tag{4}$$
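Eqs. 1–4 are simple enough to be wrapped in a few helper functions. The following minimal Python sketch only restates them; the input values in the example call are illustrative placeholders (the mobile-phase volume and individual step times are not measured values from this work).

```python
def t_switch(v_mp_ml: float, v_inj_ml: float, flow_ml_min: float) -> float:
    """Eq. 1: time for the injected broth to pass through the column (min)."""
    return (v_mp_ml + v_inj_ml) / flow_ml_min

def yield_total(m_atx_sp_mg: float, m_atx_clean_mg: float, m_atx_feed_mg: float) -> float:
    """Eq. 2: overall extraction yield Y (stationary phase plus cleaning fraction)."""
    return (m_atx_sp_mg + m_atx_clean_mg) / m_atx_feed_mg

def yield_extractable(y: float, y_single_stage: float) -> float:
    """Eq. 3: yield referred to the astaxanthin that is actually extractable."""
    return y / y_single_stage

def t_extraction(t_filling: float, t_equilibration: float, t_sw: float,
                 t_fractioning: float, t_cleaning: float) -> float:
    """Eq. 4: total time of one batch extraction."""
    return t_filling + t_equilibration + t_sw + t_fractioning + t_cleaning

# Illustrative call with assumed (not measured) values:
ts = t_switch(v_mp_ml=120.0, v_inj_ml=240.0, flow_ml_min=40.0)      # 9.0 min
tb = t_extraction(t_filling=5.0, t_equilibration=4.0, t_sw=ts,
                  t_fractioning=6.0, t_cleaning=2.0)                # 26.0 min
print(f"t_switch = {ts:.1f} min, batch time = {tb:.1f} min")
```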
The conducted CPE experiments are presented in Table 1. The first set of experiments was performed to evaluate the influence of the biomass concentration and mobile phase flow rate on the extraction performance.
In the second set of experiments, the injection volume of the algal broth was increased from 20 to 480 mL. In the third set of experiments, extraction from germinated zoospores and homogenised cyst cells was done.
The objective of this work was to compare four different downstream processing scenarios for the recovery of astaxanthin from H. pluvialis. In addition to the existing process, a new process scheme was proposed, performing solvent extraction of astaxanthin from homogenised cyst cells or germinated zoospores using a CPE unit. First, the CPE extraction experiments were conducted using a column with 244 mL volume to evaluate the process performance, followed by a theoretical scale-up of the process to an industrial CPE unit with a 5-L column volume. Subsequently, the mass balances of the upstream and downstream processes of the four scenarios (Fig. 2) are discussed: supercritical CO2 extraction both in-house (D1) and by an external service provider (D4), as well as the extraction of astaxanthin from mechanically disrupted cyst cells (D2) and germinated zoospores (D3) using CPE extraction. Finally, the total capital investment and total product costs were determined, and an economic analysis was performed.
Extraction of astaxanthin from H. pluvialis using a CPE unit
First, 20 mL of homogenised cyst cells with a concentration of 72.5 g L−1 were injected at three different flow rates of the mobile phase: 10, 20, and 40 mL min−1. Additionally, at a flow rate of 40 mL min−1, a further 20 mL with a biomass concentration of 24 g L−1 was injected. The highest extraction yield Y of 72% was reached at a flow rate of 40 mL min−1 and an injected biomass concentration of 24 g L−1. In comparison, a yield of 46% was achieved at a flow rate of 40 mL min−1 and an injected biomass concentration of 72.5 g L−1. This suggests that the high biomass concentration of 72.5 g L−1 limits the mass transfer of astaxanthin into the solvent due to the increased viscosity compared to 24 g L−1. Extraction yields of 65% and 58% were reached at an injected biomass concentration of 72.5 g L−1 and mobile phase flow rates of 20 mL min−1 and 10 mL min−1, respectively. Despite the shorter residence time of the biomass in the CPE unit of 1.8 min at a flow rate of 20 mL min−1 compared to 2.0 min at a flow rate of 10 mL min−1, the yield was larger at the higher flow rate. The larger contact area between the cells and the solvent results from a better dispersion of the cells in the solvent-rich stationary phase at the higher flow rate. The lower yield at a flow rate of 40 mL min−1, compared to 20 or 10 mL min−1, could be due to the short residence time of 1.4 min of the biomass in the CPE unit at this higher flow rate.
The subsequent experiments were performed at a flow rate of 40 mL min−1 and a lower biomass concentration of 33 g L−1 with an astaxanthin content of 1.13 wt%. The injection volume of the homogenised algal broth was gradually increased from 20 to 480 mL. After each experiment, 80 mL of acetone was injected into the CPE column to recover any residues adsorbed onto the CPE column. Figure 4 shows the mass of astaxanthin in the fractions of the stationary phase collected and in the cleaning fraction with acetone. The maximum amount of astaxanthin extracted into the stationary phase is approx. 54 mg for injection volumes of 240 mL to 480 mL. Consequently, the extraction yield calculated using Eq. 2 drops from a maximum of 85% at an injection volume of 240 mL to 48% at 480 mL. This is due to an increasing amount of biomass leaving the CPE unit non-extracted. After fractionating the stationary phase, significant amounts of astaxanthin could be recovered by injecting 80 mL of acetone. The relatively strong adsorption of the carotenoid is probably due to the CPE unit being coated with polytetrafluoroethylene. The injection volume of 240 mL corresponds to an injected biomass of 7.85 g. From the biomass injected, 3.5 g oleoresin with 2.16 wt% astaxanthin was recovered in the stationary phase and cleaning step after the solvent had evaporated.
Process flow scheme for the downstream process steps D1, D2, D3 and D4
In the last set of experiments, CPE extraction from homogenised cyst cells was compared to that from flagellated zoospores. For this purpose, 240 mL of homogenised cyst cells or zoospores with the biomass concentrations reported in Table 1 were injected. Extraction yields of 70% and 80% were reached for the extraction from homogenised cyst cells and zoospores, respectively. In the literature, CPE extraction of β-carotene from living cells of the microalga Dunaliella salina has already been performed; extraction yields of 37% and 65% were reported using decane and ethyl oleate as solvents (Marchal et al. 2013).
Scale-up of the CPE extractor to an industrial scale
For the scale-up, the results of the experiment with an injection volume of 240 mL homogenised biomass, a concentration of 33 g L−1, an astaxanthin content of 1.13 wt% in the biomass, and a flow rate of 40 mL min−1 were used. An extraction yield of 85% was reached, and 3.5 g oleoresin with an astaxanthin content of 2.16% was extracted from 7.85 g of biomass with 1.13 wt% astaxanthin. For the scale-up, it was assumed that the astaxanthin content of the cyst cell biomass is 5 wt% and that the astaxanthin content in the oleoresin is around 10 wt%.
Table 2 shows the processing time for one batch extraction, using the experimental data of the experiment with an injection volume of 240 mL and a flow rate of 40 mL min−1. The CPE experiment conducted was scaled to a commercially available 5 L CPE from Gilson such that the contact time within the column stays the same (the flow rate is scaled with the column volume). As the 5 L CPE unit is made of stainless steel, the column does not need daily cleaning.
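The scale-up logic described above can be made explicit in a few lines: flow rate, injection volume, and injected biomass are all multiplied by the ratio of the column volumes so that the residence time in the column is preserved. This is only a sketch of the proportionality; the resulting 5-L figures are estimates, not manufacturer specifications.

```python
V_LAB_ML = 244.0      # laboratory CPE column volume (mL)
V_IND_ML = 5000.0     # industrial CPE column volume (mL)
scale = V_IND_ML / V_LAB_ML                         # ≈ 20.5

flow_lab_ml_min = 40.0                              # lab flow rate (mL/min)
inj_lab_ml = 240.0                                  # lab injection volume (mL)
biomass_lab_g = 7.85                                # biomass per lab injection (g)

flow_ind_ml_min = flow_lab_ml_min * scale           # ≈ 820 mL/min
inj_ind_l = inj_lab_ml * scale / 1000.0             # ≈ 4.9 L per injection
biomass_ind_kg = biomass_lab_g * scale / 1000.0     # ≈ 0.161 kg per injection

print(f"scale {scale:.1f}x: flow ≈ {flow_ind_ml_min:.0f} mL/min, "
      f"injection ≈ {inj_ind_l:.1f} L, biomass ≈ {biomass_ind_kg:.3f} kg per batch")
```

The 0.161 kg per injection obtained this way is the figure used for the 5-L unit in the "CPE extraction" section below.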
Table 2 Process step times in CPE columns with a volume of 244 mL and 5 L
Table 3 shows the amount of biomass that can be extracted using one industrial 5-L CPE column in 24 h and 330 days. Assuming an annual production of 8910 kg of biomass (445.5 kg astaxanthin), three CPE units with a column volume of 5 L would be required; this number was used for the downstream processes D2 and D3 in the subsequent techno-economic study.
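The stated need for three 5-L units can be cross-checked against the annual production target. The sketch below assumes only complete batches per day and takes the per-batch biomass from the scale-up sketch above and the 25.57 min per injection reported later (Table 3).

```python
import math

annual_biomass_kg = 8910.0        # annual biomass to be processed
operating_days = 330
biomass_per_batch_kg = 0.161      # per 5-L injection (see scale-up sketch)
batch_time_min = 25.57            # per injection on the 5-L unit

daily_biomass_kg = annual_biomass_kg / operating_days                 # ≈ 27.0 kg/day
batches_per_unit_day = math.floor(24 * 60 / batch_time_min)           # 56 batches per unit and day
capacity_per_unit_kg = batches_per_unit_day * biomass_per_batch_kg    # ≈ 9.0 kg per unit and day
units_needed = math.ceil(daily_biomass_kg / capacity_per_unit_kg)     # 3 units

print(f"{daily_biomass_kg:.1f} kg/day -> {units_needed} CPE units "
      f"({capacity_per_unit_kg:.1f} kg per unit and day)")
```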
Table 3 Injected amount of biomass, extracted amount of oleoresin and astaxanthin and required number of CPE units in a 24 h and 330 days operation schedule
Mass balances of the unit steps of the different downstream scenarios
The mass flows of each unit operation are presented in Fig. 5. These values were used for calculating the product costs of the process. Subsequently, the assumptions made for the upstreaming process and the unit operations harvesting, cell disruption/germination, spray-drying, CPE extraction, and solvent evaporation are discussed.
Composition of the total product costs (III), including the direct production costs (A), fixed charges (B), plant overhead costs (C), and general expenses (II) for the four downstream scenarios D1, D2, D3, and D4
A two-stage process was assumed for the upstream process, divided into a 10-day growth phase and a 5-day stress phase for astaxanthin accumulation (Fig. 2). The total installed photobioreactor volume was assumed to be 170 m3, as cleaning of the harvested reactor (10 m3) each day was considered. It was assumed that 10 m3 of algal broth with a biomass concentration of 2.7 g L−1 and an astaxanthin content of 5 wt% is harvested every day. This means that 8910 kg of biomass can be harvested annually, which is the typical capacity of a small to medium-sized plant (Li et al. 2020). Cultivation on a day–night cycle was assumed, so that lighting was required 12 h per day. The installed power was assumed to be 2 W Lalgal broth−1. Four rotary vane pumps with a flow of 1200 L h−1 each were considered for pumping 10 m3 of algal broth. Based on a biomass composition of CH1.83O0.48N0.11 (Panis and Carreon 2016) and a CO2 conversion rate of 0.75 (Acien et al. 2012), 2.66 kg of CO2 was estimated to be required to produce 1 kg of algal biomass. CO2 was assumed to be dissolved into the algal broth using a CO2 sprinkler. The annual nitrogen consumption was calculated based on the chemical composition of the microalgae mentioned, while further nutrients were determined based on their proportion in Bold's modified basal medium.
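As a plausibility check, the CO2 demand can be recomputed from the elemental composition and the assumed conversion rate. Note that the exact result depends on the atomic masses and coefficients used, so the sketch below only approximately reproduces the 2.66 kg kg−1 applied in this study.

```python
# Atomic and molecular masses (g/mol)
M_C, M_H, M_O, M_N, M_CO2 = 12.011, 1.008, 15.999, 14.007, 44.009

# Elemental biomass composition CH1.83O0.48N0.11 (one carbon atom per formula unit)
M_biomass = M_C + 1.83 * M_H + 0.48 * M_O + 0.11 * M_N   # ≈ 23.1 g/mol
co2_conversion = 0.75                                     # assumed CO2 conversion rate

mol_c_per_kg = 1000.0 / M_biomass                         # mol carbon fixed per kg biomass
co2_kg_per_kg_biomass = mol_c_per_kg * M_CO2 / 1000.0 / co2_conversion

print(f"CO2 demand ≈ {co2_kg_per_kg_biomass:.2f} kg per kg biomass")   # ≈ 2.5 kg per kg
```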
A disc-stack centrifuge (GEA SEE 10) was considered as a device for harvesting and concentrating the algal broth from an initial 2.7 g L−1 (0.27% total suspended solids, TSS) to 250 g L−1 (25% TSS). Accordingly, 9894 L can be separated within 4 h, applying a harvesting flow rate of 2500 L h−1. The yield for this process step was assumed to be 98%, corresponding to 29.4 kg of cyst cells after the centrifugation step.
Disruption using a high-pressure homogeniser (D1, D2, D4) and germination of the cyst cells (D3) were considered for cyst cell disruption. For the mechanical cyst cell disruption, the high-pressure homogeniser GEA Ariete NS3006H was selected. It was considered to operate at a flow rate of 25 L h−1, a pressure of 1500 bar, and an operating time of 4.15 h. For cyst cell germination, the approach described in "H. pluvialis cyst cell disruption and germination" section was scaled to a 1000-L reactor. A combination of mixotrophic and heterotrophic germination of cyst cells was considered, with which astaxanthin extraction yields of up to 64% could be reached 41 h after the start of germination (Bauer and Minceva 2021). For the scale-up study, two parallel batch photobioreactors with a total volume of 1000 L (727.1 L cultivation volume with a cell concentration of 35.7 g L−1) were assumed. The nutrient composition of the BBM with additional 4 mM glucose was considered for germination. As presented in "Extraction of astaxanthin from H. pluvialis using a CPE unit" section, an astaxanthin extraction yield of 80% could be reached for the CPE extraction of germinated zoospores 41 h after the start of germination. This exceeds the reported astaxanthin extraction yield of 64% and might be due to differences in the cell status of the microalgae (age, nitrate level, etc.). To the authors' knowledge, the germination of H. pluvialis cyst cells is not established on the industrial scale yet, but germination efficiencies were assumed for this study that would enable a yield of 85% in the subsequent CPE extraction.
The yield of the unit steps of mechanical homogenisation and germination was assumed to be 98%, resulting in 25.93 kg biomass that can be processed in the subsequent unit operations of spray-drying (D1 and D4) or CPE extraction (D2 and D3).
Spray-drying
Using the GEA Production Minor spray dryer with an evaporation rate of 16 Lwater h−1, drying of the algal biomass for the subsequent extraction with supercritical CO2 was considered in this study. The water content of 103.7 L can be reduced to 1.27 L (5 wt%water in the biomass) within around 6.4 h. In this process step, a yield of 98% was considered, which corresponds to 25.41 kg of dry biomass (Panis and Carreon 2016).
Extraction with supercritical CO2
For in-house supercritical CO2 extraction, a 2 × 40 L (40 L net extractor volume) unit from NATEX Prozesstechnologie GesmbH with 1000 bar operating pressure was considered, which can process up to 10 tonnes of biomass within 330 days of annual operation. Applying these pressures, extraction with supercritical CO2 can be performed without an additional co-solvent (Tippelt 2019). According to the manufacturer, an annual loss of 12 tonnes of CO2 must be considered. For supercritical CO2 extraction via an external service provider (D4), it was assumed that the daily produced biomass would be stored at − 20 °C in a cold storage facility until 1000 kg are collected for shipment. As 25.41 kg biomass is collected daily after drying, this corresponds to a 40-day accumulation time.
CPE extraction
As presented in "Scale-up of the CPE extractor to an industrial scale" section, three CPE units with a column volume of 5 L are required to process 25.93 kg homogenised cyst cells or zoospores daily. Per batch injection, 0.161 kg algal biomass could be processed within 25.57 min (Table 3). Therefore, 56 batch injections are required daily per CPE unit, corresponding to a daily process time of 22.9 h. For CPE extraction, the mutual solubility of 7.47% (v/v) ethyl acetate in water and 2.96% (v/v) water in ethyl acetate must be considered (Stephenson and Stuart 1986). Therefore, the feed must be saturated with ethyl acetate in both scenarios (germinated and homogenised), and the solvent within the CPE column must be saturated with water.
Solvent recovery
After the extraction of astaxanthin from the algal broth into ethyl acetate using three 5-L CPE units, 637.8 L of solvent-rich phase, consisting of 618.9 L ethyl acetate and 18.9 L water, must be evaporated daily to obtain solvent-free astaxanthin oleoresin. In addition, separation of ethyl acetate from the water-rich phase was considered, although, according to local authorities in Germany, this is not needed for the quantities discharged into wastewater. At atmospheric pressure, the ethyl acetate content in water can be reduced to 0.01% (v/v) in a single-stage evaporation, and a distillate with approx. 89% (v/v) ethyl acetate and 11% (v/v) water can be obtained (Toth 2019). A high-speed evaporator from Ecodyst, with a capacity of 100 L and a maximum evaporation rate of 55 L h−1, was considered for solvent evaporation. Given an average evaporation rate of 50 L h−1, total evaporation of the 637.8 L solvent-rich phase takes 12.8 h, and evaporation of the 119.4 L ethyl acetate from the water-rich phase takes 2.4 h. Due to the hydrolysis of ethyl acetate to acetic acid and ethanol (Ghobashy et al. 2018), total solvent replacement every 10 days, i.e. 33 times a year, was considered.
Determination of the total capital investment and total product costs
In the following, the biotechnological production of astaxanthin using the microalgae H. pluvialis is examined with regard to its economic profitability for four different downstream processes: supercritical CO2 extraction performed in-house (D1), solvent extraction from mechanically disrupted cyst cells (D2) and germinated zoospores (D3), and supercritical CO2 extraction performed by an external service provider (D4), using the procedure described by Peters and Timmerhaus (1991).
A list of the most important required equipment (TEC) was made for the upstream process and the four downstream scenarios (Turton et al. 2012). This list was used to determine the fixed-capital investments (FCI) and total capital investments (TCI). Finally, the total product costs were calculated as the sum of manufacturing costs and general expenses.
Table 4 lists the most significant equipment costs of the upstream process and the four downstream scenarios. The equipment costs for the upstream process were €965,600, with the costs of the photobioreactors making up about 50% of the upstreaming equipment costs. The equipment costs for the downstream processing are presented in Table 4, where the highest equipment costs, at €1.88 million, were reached for the in-house supercritical CO2 extraction (D1), and the lowest costs, at €0.58 million, were calculated for the external supercritical CO2 extraction (D4). For scenario D4, an additional cooling cell was considered because storage of biomass for around 40 days (up to 1000 kg) was assumed before sending it to the external supercritical CO2 extraction service provider. In the conventional downstream processes D1 and D4, the main investment cost is the spray dryer (€450,000); for in-house supercritical CO2 extraction in D1, additional investment costs of around €1.3 million for the supercritical CO2 extractor must be considered. The list price of a 1 L CPE column is around €92,000. The purchase price of a 5-L CPE column was estimated from the 1 L CPE column using the six-tenths-factor rule (Peters and Timmerhaus 1991), resulting in a price of €241,900 per 5-L CPE column. The total direct plant costs are presented in Table 5; they comprise the installation costs, instrumentation and control, piping, buildings, yard improvements, service facilities, and land, and were determined as shares of the TEC (Peters and Timmerhaus 1991; Molina Grima et al. 2003). Furthermore, indirect costs, the fixed-capital investment (FCI), and the working capital need to be considered to calculate the total capital investment (TCI) (Acien et al. 2012). The TCI correlates directly with the TEC, as the TEC is used to determine the total direct and indirect planned costs (TDIPC), as presented in Table 5.
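The quoted €241,900 for the 5-L column follows directly from the six-tenths-factor rule; a minimal sketch of that estimate (the reference price is the 1-L list price given above):

```python
def six_tenths_rule(cost_ref_eur: float, size_ref: float, size_new: float,
                    exponent: float = 0.6) -> float:
    """Scale an equipment price with capacity using the six-tenths-factor rule."""
    return cost_ref_eur * (size_new / size_ref) ** exponent

price_5l = six_tenths_rule(cost_ref_eur=92_000, size_ref=1.0, size_new=5.0)
print(f"Estimated price of a 5-L CPE column ≈ €{price_5l:,.0f}")   # ≈ €242,000
```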
Table 4 Major equipment and total equipment costs (TEC) for the upstream and the four downstream scenarios D1, D2, D3, and D4
Table 5 List of the total direct plant costs, the indirect plant costs, the fixed-capital investment and the total capital investment of the four downstream processes D1, D2, D3, and D4
Subsequently, the manufacturing costs (I) were determined: these consist of the direct production costs (A), fixed charges (B), and the plant overhead costs (C) (Peters and Timmerhaus 1991). Finally, the sum of the manufacturing costs (I) and general expenses (II) gives the total product costs (III), which are presented in Table 6. The composition of the manufacturing costs, which are the sum of the direct production costs (A), fixed charges (B), and plant overhead costs (C), is discussed in the following. The direct production costs include the raw material costs, which were in the range of €30,873 (D4) to €65,253 (D3) and are presented in Table 7 in further detail. The CO2 price for cultivation was assumed to be €0.39 per kg (Molina Grima et al. 2003). In total, nutrient costs of €0.50 per kg biomass were calculated, which agrees with the reported value of US $0.58 per kg of biomass (Molina Grima et al. 2003). The main water consumption occurs during the daily harvesting of 10 m3 algal broth, with water costs of €3.97 per m3 (VEA: Wasserpreise für Industriekunden bleiben 2016 stabil 2021). The main raw material costs for CPE extraction were solvent costs for ethyl acetate and for acetone used for cleaning. A loss of 12 tonnes of CO2 per year for supercritical CO2 extraction results in costs of €4687, assuming a price of €0.39 per kg CO2 (Zgheib et al. 2018). For germination, the costs of nutrients, water, and glucose were considered.
Table 6 Direct production costs (A), Fixed charges (B) and plant overhead costs (C), manufacturing costs (I, A + B + C), general expenses (II) and total production costs (I + II) of the biotechnological production of astaxanthin from H. pluvialis comparing four different downstream scenarios, D1, D2, D3, and D4
Table 7 Raw material costs of the biotechnological production of H. pluvialis, comparing four different downstream scenarios D1, D2, D3, and D4
The operating labour costs in chemical production facilities are usually between 10 and 20% of the total product cost (III) (Peters and Timmerhaus 1991). In this work, a figure of 15% was assumed. Based on the operating labour costs, the supervisory labour costs and laboratory charges can be estimated (Table 6). The expenses for maintenance and repairs, patents and royalties were calculated as shown in Table 6. Electricity consumption and costs are presented in more detail in Table 8.
Table 8 Annual electricity costs of the biotechnological production of H. pluvialis, comparing four different downstream scenarios, D1, D2, D3, and D4
An electricity price of €0.18 kWh−1 was assumed for Germany (Industriestrom: Vergleich für Unternehmen 2021).
The total electricity consumption for upstreaming is 2506.3 MWh a−1 (Table 8) to produce 8.9 tonnes of biomass. However, the exact power consumption varies greatly, depending on the type of cultivation, closed vs. open systems, climatic zone and temperature of the cultivation location, and additional lighting (Panis and Carreon 2016; Acien et al. 2012). In a model calculation for the annual production of 18.3 tonnes and 6.15 tonnes of wet H. pluvialis biomass in Livadeia (Greece) and Amsterdam (Netherlands), energy consumptions of 444.8 MWh a−1 and 291 MWh a−1 were considered for the upstreaming (Panis and Carreon 2016). In that work, the cultivation was carried out without artificial light; the green phase was conducted in closed photobioreactors, and astaxanthin accumulation was performed in open ponds. In a hypothetical industrial scenario based on real production data, for the annual production of 17 tonnes of P. tricornutum in Germany using artificial light and a total cultivation volume of 315 m3, an electricity consumption of 92,916.8 MWh a−1 was determined for upstreaming (Derwenskus et al. 2020). This study considered a power consumption of 1100 W per rotary vane pump for mixing and circulation of the biomass. Due to the lack of real production data, an artificial light installation of 2 W Lalgal broth−1 and lighting for 12 h per day were assumed, resulting in an annual electricity consumption of 1188 MWh a−1. For temperature control, values of 6.25–25 kWh m−3 were reported for H. pluvialis cultivation in Shenzhen, China (Li et al. 2011). Therefore, 12.5 kWh m−3 was assumed as the energy consumption for temperature control in this study. Power consumption for control and sensors was taken from the literature and adjusted to the cultivation volume of 160 m3 in this study (Derwenskus et al. 2020).
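As a rough cross-check of the lighting figure, the 1188 MWh a−1 follows from the assumed 2 W Lalgal broth−1, 12 h of lighting per day, and 330 operating days if an illuminated broth volume of 150 m3 is used; this volume is back-calculated here for illustration and is not stated explicitly in the text.

```python
power_density_w_per_l = 2.0        # assumed installed lighting power per litre of broth
illuminated_volume_l = 150_000     # illuminated broth volume (back-calculated assumption)
hours_per_day = 12
days_per_year = 330

lighting_power_kw = power_density_w_per_l * illuminated_volume_l / 1000.0          # 300 kW
lighting_energy_mwh = lighting_power_kw * hours_per_day * days_per_year / 1000.0   # 1188 MWh/a

print(f"Lighting: {lighting_energy_mwh:.0f} MWh per year")
```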
In the downstream process of H. pluvialis, the highest energy consumption was calculated for in-house supercritical CO2 extraction (D1) with 290.7 MWh a−1, while reduced electricity consumption levels of 135.9 MWh a−1 and 130.7 MWh a−1 were calculated for solvent extraction from homogenised cyst cells (D2) and flagellated zoospores (D3), respectively. The lowest electricity consumption, of 65.7 MWh a−1, was calculated for the process with external supercritical CO2 extraction (D4). The energy consumption for centrifugation was 5.28 MWh a−1 in all four scenarios, corresponding to 1.6 kWh malgal broth−3. The installed power of the disc-stack centrifuge was 4 kW, with a daily operating time of 4 h and a harvesting volume of 10 m3. Values of 1–1.4 kWh malgal broth−3 have been reported for centrifugation in the literature (Panis and Carreon 2016; Milledge 2013). The electricity consumption levels for mechanical cell wall disruption by homogenisation (D1, D2, and D4) and germination (D3) were 6.97 MWh a−1 and 5.72 MWh a−1, respectively. The costs for homogenisation were determined from the installed power of 5.5 kW of the used homogeniser and a daily operating time of 3.84 h.
For germination, an energy consumption of 5.72 MWh a−1 was calculated using the data from the upstreaming scenario, transferred to 2 × 1000 L photobioreactors. Lighting for 21 h per germination process was assumed ("H. pluvialis cyst cell disruption and germination" section).
The energy consumption for spray-drying was calculated to be 48.2 MWh a−1. To determine this value, the daily amount of water to be evaporated (102.45 kg) was multiplied by the evaporation enthalpy of water (Δhevaporation = 2442.3 kJ kg−1 at 25 °C (Lide 2005)) and a factor of 2.1. This factor was suggested by the manufacturer and is in good agreement with efficiencies of 40% and 55% reported in the literature for spray dryers without and with heat recovery, respectively (Kemp 2012). This corresponds to an energy consumption of 5.13 MJ kgwater−1 and agrees well with the value of 5 MJ kgwater−1 reported in the literature for this unit operation (Thomassen et al. 2016). For the electricity consumption of the CPE extraction (scenarios D2 and D3), 2.5 kW needs to be considered according to the manufacturer Gilson (USA). The daily process time per CPE system was 22.9 h ("CPE extraction" section).
The evaporator selected for solvent evaporation has an installed power of 13.3 kW, resulting in an annual energy consumption of 66.5 MWh a−1 when a daily operating time of 15.2 h is considered ("Solvent recovery" section). The power consumption of 29 kW for the extraction with supercritical CO2 was provided by the manufacturer, resulting in an annual energy consumption of around 229.7 MWh a−1.
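The three main downstream electricity figures quoted above can be reproduced with a few lines of arithmetic (the results agree with the stated values to within rounding):

```python
KWH_PER_KJ = 1.0 / 3600.0
DAYS_PER_YEAR = 330

# Spray-drying (D1/D4): evaporated water times evaporation enthalpy times efficiency factor 2.1
drying_mwh = 102.45 * 2442.3 * 2.1 * KWH_PER_KJ * DAYS_PER_YEAR / 1000.0   # ≈ 48.2 MWh/a

# Solvent evaporation (D2/D3): 13.3 kW installed power, 15.2 h per day
evaporation_mwh = 13.3 * 15.2 * DAYS_PER_YEAR / 1000.0                      # ≈ 66.7 MWh/a

# In-house supercritical CO2 extraction (D1): 29 kW, 24 h per day
sc_co2_mwh = 29.0 * 24.0 * DAYS_PER_YEAR / 1000.0                           # ≈ 229.7 MWh/a

print(f"spray-drying {drying_mwh:.1f}, solvent evaporation {evaporation_mwh:.1f}, "
      f"scCO2 extraction {sc_co2_mwh:.1f} MWh per year")
```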
Concerning electricity, it could be shown that the extraction of astaxanthin from H. pluvialis using CPE extraction (D2 and D3) saves costs compared to in-house extraction with supercritical CO2 (D1), since energy-intensive unit operations such as spray-drying and supercritical CO2 extraction can be replaced. Slightly lower electricity consumption can be expected for germination (2.34 MWh a−1) compared to high-pressure homogenisation (7.53 MWh a−1). In scenario D4, where supercritical CO2 extraction is carried out via an external service provider, the operation of a cooling cell (T = − 20 °C) was considered for storage of up to 1000 kg of harvested biomass before shipment. Therefore, an additional energy consumption of 4.75 MWh a−1 was considered.
The highest direct production costs (A) were found to be €1.72 million and €1.67 million for external (D4) and in-house (D1) supercritical CO2 extraction. In comparison, production costs of around €1.4 million can be expected for the solvent extraction of astaxanthin from homogenised cyst cells (D2) and germinated zoospores (D3). For the supercritical CO2 extraction via an external service provider (D4), lower costs for electricity, raw materials, and repairs are outweighed by the payments for the external service provider (€419,301, €50 kgDW−1). To determine the manufacturing costs (I), in addition to the direct production costs (A), the fixed charges (B) and plant overhead costs (C) need to be determined (Table 6). The fixed charges are the sum of depreciation for equipment and buildings, local taxes, and insurances (Peters and Timmerhaus 1991). A linear depreciation period of 10 years and a residual value of 10% of the original value were assumed for the equipment costs (Turton et al. 2012). The buildings were depreciated by 3% annually (Peters and Timmerhaus 1991). Local tax and insurance costs were considered 1% and 4% of the FCI, respectively (Table 6) (Peters and Timmerhaus 1991).
Due to the high equipment costs for in-house CO2 extraction (D1), the resulting annual depreciation on equipment and buildings, at €663,000, is higher compared to solvent extraction from homogenised cyst cells (D2) and zoospores (D3), with €476,000 and €460,000, respectively. The lowest depreciation costs, of €360,000, were calculated for an external supercritical CO2 extraction (D4). As the fixed charges (B) are derived from the depreciation, local taxes and insurance costs, at €1.1 million they are also highest for in-house supercritical CO2 extraction (D1), followed by €0.78 and €0.76 million for solvent extraction from homogenised cyst cells (D2) and zoospores (D3), as well as €0.59 million for supercritical CO2 extraction via an external service provider (D4). The plant overhead costs (C) are 50% of the costs of the operating labour, supervisory labour and maintenance and repairs (Peters and Timmerhaus 1991) and are presented in Table 6.
The general expenses (II) are the sum of the administrative costs, distribution and marketing, research and development, and interest payments and are shown in Table 6. An interest rate of 2% and 100% debt financing of the project were assumed. Due to higher investment costs and therefore higher interest payments, the general expenses for in-house supercritical CO2 extraction (D1) are highest, at €555,000, followed by €441,000 and €432,000 for solvent extraction from homogenised cyst cells (D2) and flagellated zoospores (D3), and €417,000 for external CO2 extraction (D4). The total product costs (III) are the sum of the manufacturing costs (I) and general expenses (II), as presented in Table 6 and Fig. 5.
The highest total product costs (III) were determined to be €3.81 million for the conventional process with in-house supercritical CO2 extraction (D1). Total product costs of €2.98 million and €2.92 million were determined for the alternative process using CPE extraction from homogenised cysts (D2) and germinated zoospores (D3). Total product costs of €3.08 million were calculated for the process in which supercritical CO2 extraction is carried out by a service provider (D4). A comparison of the total product costs for scenarios D2 and D3 with D4 shows that the higher direct production costs in D4 (mainly due to the payment of the external service provider for supercritical CO2 extraction) are offset by lower fixed charges (mainly due to lower depreciation for equipment and buildings).
Economic performance of the four examined downstream scenarios
After determining the TCI and the total product costs (III) in the preceding "Determination of the total capital investment and total product costs" section, the economic performance of the four downstream scenarios will be discussed.
The return on investment (ROI) and the net present value (NPV) were used as key figures for economic profitability:
$$\text{ROI} = \frac{\text{EAT}}{\text{TCI}}. \tag{5}$$
As presented in Eq. 5, the ROI is the quotient of the profit after depreciation, interest, and taxes (EAT) and the TCI (Peters and Timmerhaus 1991).
The discount factor dn (Eq. 6) is the factor by which the future cash flow must be multiplied to obtain the present value of the cash flow after n years if invested at interest i (Peters and Timmerhaus 1991):
$$d_{n} = \frac{1}{(1 + i)^{n}}. \tag{6}$$
The discount factor was defined for yearly payments and annual compounding:
$$\text{NPV} = \sum_{n = 1}^{t} \frac{\text{NB}_{n}}{(1 + d)^{n}}. \tag{7}$$
The NPV of the processes compares the present value of the annual cash flows with the initially required investment (Peters and Timmerhaus 1991). The NPV is calculated according to Eq. 7, where the net benefit NBn corresponds to the net cash flow in year n. The internal rate of return (IRR) was calculated as the discount rate at which NPV = 0; it gives the interest rate i at which the initial investment breaks even with the generated cash flows.
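For completeness, Eqs. 5–7 can be evaluated with a short script; the functions below compute ROI, NPV, and the IRR (found by bisection on the NPV), and the cash-flow example at the end is purely hypothetical rather than taken from Tables 9 or 10.

```python
def roi(eat: float, tci: float) -> float:
    """Eq. 5: return on investment."""
    return eat / tci

def npv(rate: float, net_benefits: list, initial_investment: float) -> float:
    """Eq. 7: net present value of yearly net benefits NB_1..NB_t minus the initial investment."""
    return -initial_investment + sum(
        nb / (1.0 + rate) ** n for n, nb in enumerate(net_benefits, start=1)
    )

def irr(net_benefits: list, initial_investment: float,
        lo: float = -0.99, hi: float = 1.0, tol: float = 1e-6) -> float:
    """Discount rate at which the NPV becomes zero, found by bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, net_benefits, initial_investment) > 0.0:
            lo = mid          # NPV still positive -> the root lies at a higher rate
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical example: €4.0 million invested, €0.7 million net cash flow for 10 years
flows = [0.7e6] * 10
print(f"NPV at 2%: €{npv(0.02, flows, 4.0e6):,.0f}, IRR: {irr(flows, 4.0e6):.1%}")
```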
A total of 3241 kg of oleoresin (10 wt% astaxanthin) could be produced in the four downstream scenarios, as shown in Table 9. A sales price of €1200 per kg of oleoresin was assumed, which results in gross revenues of around €3.89 million. The difference between the gross revenues and the total product costs (III), excluding depreciation and interest payments, is the earnings before interest, taxes, depreciation, and amortisation (EBITDA). The EBITDA is an important economic parameter, as it enables the comparison of the economic performance of different companies regardless of interest payments, type of depreciation, and country-specific taxation. Due to the lowest total production costs, the two alternative downstream processes using CPE extraction (D2 and D3) showed the highest EBITDA, with €1.54 and €1.51 million, respectively. Consequently, a lower EBITDA was reached for the in-house (D1) and external (D4) supercritical CO2 extraction from homogenised cyst cells (Table 9).
Table 9 Economic key figures for the evaluation of the four downstream scenarios, D1, D2, D3, and D4
Due to higher depreciation, interest payments, and taxes paid for processes D2 and D3 compared to D4, the profits after depreciation, interest payments, and taxes (EAT) converge and amount to €0.69 million for process D3, €0.65 million for D2, and €0.58 million for D4. The lowest EAT, of €0.06 million, was reached in process D1, due to the highest depreciation and interest payments.
However, the profit itself is not a sufficient parameter for the economic comparison of the processes, as it neglects the TCI required to reach that profit (Turton et al. 2012; Peters and Timmerhaus 1991). Therefore, the ROI was calculated as defined in Eq. 5 (Panis and Carreon 2016; Zgheib et al. 2018). The highest ROI of 11% was reached for the downstream process performing supercritical CO2 extraction via a service provider (D4), followed by 10.3% for the solvent extraction from zoospores (D3) and 9.3% from homogenised cyst cells (D2). Due to the low EAT and high TCI, the ROI is lowest, at 0.6%, for in-house supercritical CO2 extraction (D1). However, at costs above €65 per kg biomass for supercritical CO2 extraction via an external service provider (D4), the alternative solvent extraction processes (D2 and D3) would achieve a higher ROI than the contracted supercritical CO2 extraction. For long-term investments, an NPV analysis taking the time value of money into account is required. As presented in Table 10, the highest NPV was determined for scenario D3, with a value of €2.66 million after an operating time of 10 years. A negative NPV of €3.7 million is reached for the in-house supercritical CO2 extraction (D1). The IRR is the discount factor for which the NPV of the project is equal to zero and is the interest rate at which the project can just break even. Typically, rates for IRR are 10% for cost improvement of conventional technologies, 15% for the expansion of conventional technologies, 20% for product development, and 30% for speculative ventures (Van Dael et al. 2015). As shown in Table 10, the highest IRR can be expected for the external supercritical CO2 extraction, followed by the new downstream scenarios of solvent extraction from homogenised cyst cells and flagellated zoospores.
Table 10 Total present value for an interest rate of 2%, NPV after 10 years and IRR of the four downstream scenarios D1, D2, D3, and D4
In this study, an alternative downstream process for the extraction of astaxanthin from H. pluvialis was developed, replacing the drying of the biomass and supercritical CO2 extraction with CPE extraction from homogenised cyst cells or germinated zoospores. Using a CPE unit with a column volume of 244 mL, 3.5 g oleoresin could be extracted from 7.85 g homogenised H. pluvialis biomass within 32 min. A scale-up to an industrial 5-L CPE column showed that up to 2947 kg of biomass could be processed within 330 days (24 h a day) of operation. For the techno-economic study, an annual algal production of 8910 kg biomass with 5% astaxanthin was assumed, resulting in a daily production of 9.83 kg oleoresin. Lower direct production costs were determined for the two alternative extraction processes using CPE compared to supercritical CO2 extraction. The total product costs are also lower for the two new processes using CPE extraction than for the supercritical CO2 extraction processes. After 10 years of operation, the NPV is highest for the CPE extraction from germinated zoospores. It must be noted that the results of the economic study will vary depending on the individual situation of the H. pluvialis companies (financing, taxes, labour and electricity costs, depreciation, and interest rate). However, especially for small-size companies, the CPE extraction described represents an interesting alternative, as extraction can be performed in-house on a regular basis, and the storage of biomass for shipment to a service provider for supercritical CO2 extraction is no longer required.
The data supporting the conclusions of this article are included in the main manuscript.
BBM:
Bold's modified basal medium
CPE:
Centrifugal partition extractor
d n :
Discount factor
EAT:
Earnings after tax
EBT:
Earnings before tax
EBIT:
Earnings before interest and taxes
EBITDA:
Earnings before interest, taxes, depreciation and amortisation
FCI:
Fixed-capital investments
H. pluvialis :
Haematococcus pluvialis
IRR:
Internal rate of return
NB:
Net benefits
TCI:
Total capital investments
TDIPC:
Total direct and indirect planned costs
TEC:
Total equipment costs
The authors gratefully acknowledge the provision of the H. pluvialis cyst cells by Clemens Elle from Sea & Sun Technology GmbH, Germany.
This research was funded by the Federal Ministry of Economic Affairs and Energy (BMWi), Grant Number ZF4025031 SB8; AB was funded by a fellowship granted by the Foundation of German Business (sdw).
Biothermodynamics, TUM School of Life Sciences, Technical University of Munich, Maximus-von-Imhof-Forum 2, 85354, Freising, Germany
Andreas Bauer & Mirjana Minceva
AB: conceptualisation, methodology, validation, formal analysis, investigation, data curation, writing—original draft, writing—review and editing. MM: conceptualisation, methodology, validation, resources, writing—review and editing, visualisation, supervision, project administration, funding acquisition. All authors read and approved the final manuscript.
Correspondence to Andreas Bauer.
Ethics approval and consent to participate
All authors have read this article and have approved its submission to Bioresources and Bioprocessing.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Bauer, A., Minceva, M. Techno-economic analysis of a new downstream process for the production of astaxanthin from the microalgae Haematococcus pluvialis. Bioresour. Bioprocess. 8, 111 (2021). https://doi.org/10.1186/s40643-021-00463-6
Liquid–liquid chromatography
Centrifugal partition extraction
Can the postulates of path-integral QFT be stated concisely?
I'm pretty new to QFT. So is it possible, before delving into the history and motivation and variations and implications and perturbation techniques and so on, to briefly state the postulates in just one coherent way? That is, to assert every mathematical object that is in play, what it represents, and the mathematical relations between them?
For example, QED is the simplest QFT, right? And I want to avoid operators for now if possible. So here's my guess, from the bits I've read over the years:
Spacetime: There is a Minkowski space $M$, representing spacetime.
Electron Field: At every point $x$ of $M$, there is a Dirac spinor $\psi$ (4 complex numbers, that transform a certain way under Lorentz transforms). This represents the probability density that any spin-up/spin-down electron/positron is at $x$.
Photon (EM) field: At every point of $M$, there is also a 4-vector $A$ -- of real numbers? -- that transforms like a 4-vector. This represents the electromagnetic potential.
Are those all the objects? And then the law, relating the objects:
There is a certain function of the electron and photon fields, $\psi$ and $A$, called the Lagrangian. The 'action' $S_p$ along a path $p$ of $M$ is the path integral of this function.
Pick two complete, disjoint, spacelike hypersurfaces $T$ and $U$ of $M$. First of all, the integral of the electron field on $T$ and $U$ must match, and this represents the number of electrons & positrons in the universe/experiment.
Then, the value of the electron field at each point $y$ of $U$ is proportional to the integral, a la Feynman, over all imaginable paths $p$ between every point $x$ of $T$, and $y$, of $\psi(x)\exp(\frac{i}{\hbar}S_p)$. This, together with (5), determines the evolution of the electron field.
Is that close? Maybe not, because among other things it doesn't say how the photon field evolves. But isn't it possible to state the theory in one such consistent way? Is there a source, preferably online, that lays it out?
quantum-field-theory quantum-electrodynamics path-integral
Adam Herbst
I assume you're talking about relativistic QFT. Even in that case, QED is not the simplest QFT. One of the simplest is the QFT of a single scalar field, such as the free scalar field (exactly solvable) or the $\phi^4$ model (not exactly solvable AFAIK). These models are easier than QED by far. Other essential warm-ups for QED are the free Dirac spinor field (without electromagnetism) and the free Maxwell field (without spinors). All of these can be formulated using a path-integral. Would you rather start with one of those? Or do you want the answer to be about QED?
@ChiralAnomaly Well that's certainly informative, and undoubtedly I'll end up having to go back and look at those simpler models. But at the same time, since I understand Dirac spinors, electromagnetism, and Feynman's path integral concept, I can't help feeling like there should be a straight statement of QED in those terms, even if it's not as elegant as other ways of stating it. But if I'm way off base, by all means point me to the simpler ones.
– Adam Herbst
Starting with QED and then jumping back as needed is a perfectly reasonable approach. Yes, there is a straight statement of QED in path-integral terms. One warning: it's not strictly well-defined in continuous spacetime, but it's good enough to convey the idea. If you want it to be strictly well-defined, then the only known ways of doing that (nonperturbatively) involve discretizing spacetime, but that's messy and artificial, so I'd recommend tucking that away in the back of your mind for now and starting with the continuous-spacetime version.
@ChiralAnomaly Haha, it's always something crazy like that, isn't it? Okay, I'm game!
Are Osterwalder-Schrader axioms what you're looking for?
– Prof. Legolasov
I'll sketch the postulates for one version of a path-integral formulation of QED. My goal here is only to give a brief orientation, with emphasis on a few conceptual points that many introductions neglect. For more detail, here are a few on-line resources:
For path-integral QED, see sections 43, 44, 57, and 58 in Srednicki (2006), Quantum Field Theory. A prepublication draft of the text can be downloaded here: http://web.physics.ucsb.edu/~mark/qft.html
For a quick preview, see this Physics SE post: Generator of QED in path integral approach
For path-integral scalar field theory, see McGreevy's lecture notes https://mcgreevy.physics.ucsd.edu/w20/index.html and https://mcgreevy.physics.ucsd.edu/s19/index.html
For a quick preview of path-integral scalar field theory, see https://en.wikipedia.org/wiki/Partition_function_(quantum_field_theory)
I'm sure there are lots of others, too. I didn't find any that mention the specific path-integral formulation described below, which gives the time-evolution of an arbitrary state, but there are lots of resources about the path integral that generates all time ordered vacuum expectation values, which uses mostly the same ideas.
The ingredients are much like what the question listed. I'll list them again with a little extra clarification.
Spacetime: Morally, we're thinking of continuous four-dimensional Minkowski spacetime $M$. But for the sake of clarity, we can consider a very fine discretization of spacetime, with a very large but finite extent, so we can think of $M$ as a finite set, say with a mere $10^{100000}$ points. For doing numerical calculations, we would need to trim that down a bit, but that's not the goal here.
Matter field: At every point $x\in M$, there is a set of four Grassmann variables (not complex numbers) denoted $\psi_k(x)$ and four more Grassmann variables denoted $\overline\psi_k(x)$. In contrast to real or complex numbers, Grassmann variables all anticommute with each other. They don't have values, but we can still define functions (polynomials) of them, and we can still do something analogous to integrating over them (called the Berezin integral), which we use to define the path integral. This is important: they don't represent probability densities. Probability densities are functions of the field variables. More about this below.
EM field: For every pair of neighboring points in $M$, say $x$ and its nearest neighbor in the $\mu$-direction, we associate a single complex variable $e^{i\epsilon A_\mu(x)}$, where $A_\mu(x)$ is a real variable (one for each $x$ and each $\mu$) and $\epsilon$ is the step-size between neighboring points. When we write the continuum version of the action, we use only $A_\mu(x)$ instead of $e^{i\epsilon A_\mu(x)}$, but some concepts are easier in the discretized version, so I wanted to mention it here.
Those are (adjusted versions of) the ingredients listed in the question. To help make contact with the general principles of quantum theory, which still hold in quantum field theory, I'll add two more:
Observables: Observables represent things that could be measured. In QFT, observables are expressed in terms of the field operators — field variables and derivatives with respect to the field variables. In QED, observables are required to be invariant under gauge transformations, at least under those that are continuously connected to the identity (trivial) gauge transformation. One example of an observable is $F_{ab}\equiv \partial_a A_b-\partial_b A_a$, and another example is $\overline\psi_j(y)e^{i\int_x^y ds\cdot A(s)}\psi_k(x)$. In the second example, the $A$-dependent factor can be regarded as a product of the complex variables $e^{i\epsilon A_\mu(x)}$ along some path from $x$ to $y$.
States: To specify a state, we specify a function $\Psi$ of all of the field variables associated with some spacelike slice, say the $t=0$ slice. I'll write this as $\Psi[\psi,\overline\psi,A]_0$, with a subscript to indicate which time the field variables are restricted to. I'm using square brackets as a standard reminder that $\Psi$ is a function of an enormous number of variables — several per point in the given spacelike slice. States, like observables, are required to be gauge-invariant.
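The Observables entry above uses the combination $\overline\psi_j(y)e^{i\int_x^y ds\cdot A(s)}\psi_k(x)$, whose $A$-dependent factor is a product of link phases $e^{i\epsilon A_\mu(x)}$. As a small numerical illustration (my own toy setup: a one-dimensional lattice with ordinary numbers standing in for the field variables), the following sketch checks the telescoping property behind its gauge invariance: under a lattice gauge transformation $A(x)\to A(x)+(\lambda(x+1)-\lambda(x))/\epsilon$, the path-ordered product of link phases picks up only the boundary phases $e^{i\lambda(y)}$ and $e^{-i\lambda(x)}$, which cancel against the opposite phases acquired by $\overline\psi(y)$ and $\psi(x)$.

import numpy as np

rng = np.random.default_rng(0)
N, eps = 12, 0.1                      # sites on a 1-d lattice, lattice spacing
A = rng.normal(size=N)                # one real link variable A(x) between x and x+1
lam = rng.normal(size=N + 1)          # gauge function lambda(x) on sites 0..N

def wilson_line(A, x, y, eps):
    """Product of link phases exp(i*eps*A) along the straight path x -> y (x < y)."""
    return np.exp(1j * eps * A[x:y]).prod()

# lattice gauge transformation: A(x) -> A(x) + (lambda(x+1) - lambda(x)) / eps
A_gauged = A + (lam[1:] - lam[:-1]) / eps

x, y = 2, 9
W  = wilson_line(A, x, y, eps)
Wg = wilson_line(A_gauged, x, y, eps)

# the line picks up exp(i*lambda(y)) ... exp(-i*lambda(x)), which cancels against
# the opposite phases of psibar(y) and psi(x), so psibar(y) W psi(x) is gauge invariant
assert np.allclose(Wg, np.exp(1j * lam[y]) * W * np.exp(-1j * lam[x]))
print("gauge covariance of the Wilson line verified:", Wg)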
A common point of confusion among QFT newcomers is the relationship between the state and the field variables, especially in QED where the spinor field is traditionally denoted $\psi$ — the same symbol that is also traditionally used for the state. These are not the same thing. I'm using an uppercase $\Psi$ for the state and a lowercase $\psi$ for the spinor field. The distinction should become clear below, if it isn't already.
Time evolution
Most of the literature about the path-integral formulation of QED focuses on time-ordered vacuum expectation values of products of field variables. That's useful for many reasons, but here I'll use a path-integral formulation to describe time-evolution in the Schrödinger picture starting from a given initial state (which is not necessarily the vacuum state), because this seems to be closer to what the question requested.
The path integral formulation describes how the state evolves in time. Schematically, the law looks like this: $$ \Psi_\text{final}[\psi,\overline\psi,A]_t = \int [d\psi]_{(t,0]}[d\overline\psi]_{(t,0]}[dA]_{(t,0]}\ \exp\Big(iS[\psi,\overline\psi,A]_{[t,0]}\Big) \Psi_\text{initial}[\psi,\overline\psi,A]_0 $$ where $S[\psi,\overline\psi,A]$ is the action — the integral of the Lagrangian over the part of spacetime between times $0$ and $t$. Defining the integrals mathematically takes more work. Here, I'm only highlighting one detail: the subscripts $(t,0]$ and $[t,0]$ indicate what part of spacetime the integration variables come from. In words: The initial state depends on the field variables in the time$=0$ slice. The action depends on all of the field variables from time$=0$ through time$=t$, inclusive. The integral is over all of these except the ones at the final time $t$. The result is a new state that depends only on the variables associated with the time=$t$ slice.
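To see the structure of this evolution law in the simplest possible setting, here is a toy numerical sketch. It is not QED: it evolves a single free nonrelativistic particle in one dimension (with $\hbar=m=1$) by repeatedly integrating the short-time kernel $\exp\big(i(\Delta x)^2/2\epsilon\big)$ against the state over the intermediate position variable, which is the same "integrate out everything between the initial and final slice" structure as above. Grid sizes and parameters are arbitrary choices for illustration.

import numpy as np

# single nonrelativistic particle, hbar = m = 1, free Hamiltonian (V = 0)
hbar = m = 1.0
L, n = 15.0, 1501                      # grid half-width and number of points
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

eps, steps = 0.2, 5                    # time step and number of steps (total T = 1)

# short-time free-particle kernel K_eps(y, x) ~ exp(i m (y-x)^2 / (2 hbar eps))
norm_factor = np.sqrt(m / (2j * np.pi * hbar * eps))
K = norm_factor * np.exp(1j * m * (x[:, None] - x[None, :])**2 / (2 * hbar * eps))

# initial state: Gaussian wavepacket centred at 0 with mean momentum k0
k0, sigma = 1.0, 1.0
psi = (1 / (np.pi * sigma**2))**0.25 * np.exp(-x**2 / (2 * sigma**2) + 1j * k0 * x)

# repeated integration over the intermediate position = discretised path integral
for _ in range(steps):
    psi = K @ psi * dx

prob = np.abs(psi)**2
print("norm     :", prob.sum() * dx)          # approx 1 (approximate unitarity)
print("<x>(T=1) :", (x * prob).sum() * dx)    # approx k0 * T / m = 1 (packet drifts)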
Quantum field theory is just a special kind of quantum theory. The general principles of quantum theory, namely Born's rule and the projection rule, still apply. However, unlike nonrelativistic single-particle models in which the state is a function of the spatial coordinates, here the state is a function of an enormous number of field variables. Each such function, if it's sufficiently well-behaved, represents a state-vector in the Hilbert space.
Observables are operators on the Hilbert space, which can be expressed using multiplication by field variables and derivatives with respect to field variables. This is analogous to how operators are usually expressed in nonrelativistic single-particle models, but with a huge number of field variables in place of three spatial coordinates. In quantum field theory, the variables are field variables, and the spacetime coordinates play the role of indices that we use to keep track of all those field variables.
Schematically, given some observable $X$, its expectation value in a normalized state $\Psi$ is $$ \langle \Psi|X|\Psi\rangle \sim \int [d\psi][d\overline\psi][dA]\ \Psi^*[\psi,\overline\psi,A] X \Psi[\psi,\overline\psi,A]. $$ Again, the observable $X$ is some combination of multiplication by field variables and derivatives with respect to field variables. If $X$ is a projection operator onto one of an observable's eigenspaces, then this expectation value is the probability of getting that outcome if the observable is measured.
I'm glossing over the definitions of the integrals, but conceptually, the inner product $\langle\Psi_1|\Psi_2\rangle$ is an integral over the field variables on which the states $\Psi_1$ and $\Psi_2$ depend, just like in more familiar single-particle models — except that now the variables are field variables instead of spatial coordinates.
Wait — what about particles?
In QED, particles (electrons/positrons/photons) are phenomena that the theory predicts. The theory is expressed in terms of fields, not particles. The theory includes observables that act as particle detectors, and it has single- and multi-particle states, but constructing them explicitly is prohibitively difficult unless we resort to perturbation theory, which is exactly what most (all?) introductions do. Perturbation theory is a whole other industry, and I won't try to cover it here.
I will say one thing about particles. Field variables are indexed by (equivalently, are functions of) spacetime, so we can define their positive/negative frequency parts. If we set the coupling constant to zero, which makes the model boring, then we can calculate those parts explicitly. Whether or not we can calculate them explicitly, the positive-frequency parts act as energy-reducing operators (they annihilate the vacuum state), and the negative-frequency parts act as energy-increasing operators. In the zero-coupling case, they annihilate and create individual particles. Given any state $\Psi$, hitting it with the negative-frequency part of a field variable adds a particle — either a photon if the field is $A$, or an electron or positron if the field is $\overline\psi$ or $\psi$, respectively. Beware that this simple relationship between fields and particles is restricted to the zero-coupling case — or to perturbation theory, which I won't go into here.
Other things I glossed over
I glossed over lots of other things, too. For example, I didn't explain how to define the integral over a Grassmann variable. I didn't say anything about how Wick rotation can be used to relate the arbitrary-initial-state formulation to the vacuum-expectation-value formulation, which dominates most textbooks for a good reason. I also didn't say anything about other axiomatic approaches to QFT, each of which brings its own valuable perspectives.
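Since the Berezin integral is used above but not defined, here is a minimal, self-contained sketch of one way to implement a finite Grassmann algebra together with the defining rules $\int d\theta\,1=0$ and $\int d\theta\,\theta=1$ (with the integral sign acting from the left; the sign conventions are my own choice). As a check, it reproduces the standard fermionic Gaussian result $\int d\overline\theta\, d\theta\, e^{-a\overline\theta\theta}=a$. This is only an illustration of the definition, not production code.

class Grassmann:
    """Element of a finite Grassmann algebra: {sorted index tuple: coefficient}."""

    def __init__(self, terms=None):
        self.terms = {m: c for m, c in (terms or {}).items() if c != 0}

    @staticmethod
    def gen(i):                      # the generator theta_i
        return Grassmann({(i,): 1.0})

    def __add__(self, other):
        out = dict(self.terms)
        for m, c in other.terms.items():
            out[m] = out.get(m, 0) + c
        return Grassmann(out)

    def __mul__(self, other):
        if not isinstance(other, Grassmann):        # scalar multiplication
            return Grassmann({m: c * other for m, c in self.terms.items()})
        out = {}
        for m1, c1 in self.terms.items():
            for m2, c2 in other.terms.items():
                if set(m1) & set(m2):
                    continue                        # theta_i^2 = 0
                sign, mono = _sort_with_parity(m1 + m2)
                out[mono] = out.get(mono, 0) + sign * c1 * c2
        return Grassmann(out)


def _sort_with_parity(indices):
    """Sort generator indices, tracking the sign of the permutation."""
    seq, sign = list(indices), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign, tuple(seq)


def berezin(elem, i):
    """Integrate over theta_i: anticommute it to the front, then strip it off."""
    out = {}
    for mono, c in elem.terms.items():
        if i not in mono:
            continue                                # integral of a constant is 0
        p = mono.index(i)
        rest = mono[:p] + mono[p + 1:]
        out[rest] = out.get(rest, 0) + ((-1) ** p) * c
    return Grassmann(out)


# check the fermionic Gaussian integral: integral dthetabar dtheta exp(-a thetabar theta) = a
theta, thetabar = Grassmann.gen(0), Grassmann.gen(1)
a = 2.5
one = Grassmann({(): 1.0})
weight = one + (thetabar * theta) * (-a)            # exp(-a thetabar theta) = 1 - a thetabar theta exactly
result = berezin(berezin(weight, 0), 1)             # integrate over theta first, then thetabar
print(result.terms)                                 # {(): 2.5}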
Chiral Anomaly
I really appreciate it, both the clarifying comments and the exquisitely detailed answer. Much obliged.
September 2015, 10(3): 443-475. doi: 10.3934/nhm.2015.10.443
A model of riots dynamics: Shocks, diffusion and thresholds
Henri Berestycki 1, Jean-Pierre Nadal 2, and Nancy Rodríguez 3
Ecole des Hautes Etudes en Sciences Sociales and CNRS, Centre d'Analyse et de Mathématique Sociales (CAMS, UMR8557), 190-198, avenue de France - 75013 Paris, France
Ecole des Hautes Etudes en Sciences Sociales and CNRS, Centre d'Analyse et de Mathématique Sociales (CAMS, UMR8557), 190-198 avenue de France - 75013 Paris, France
UNC Chapel Hill, Department of Mathematics, Phillips Hall, CB # 3250, Chapel Hill, NC 27599-3250, United States
Received November 2014 Revised February 2015 Published July 2015
We introduce and analyze several variants of a system of differential equations which model the dynamics of social outbursts, such as riots. The systems involve the coupling of an explicit variable representing the intensity of rioting activity and an underlying (implicit) field of social tension. Our models include the effects of exogenous and endogenous factors as well as various propagation mechanisms. From numerical and mathematical analysis of these models we show that the assumptions made on how different locations influence one another and how the tension in the system disperses play a major role in the qualitative behavior of bursts of social unrest. Furthermore, we analyze here various properties of these systems, such as the existence of traveling wave solutions, and formulate some new open mathematical problems which arise from our work.
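The precise equations of the model are given in the paper itself. Purely as an illustration of the kind of mechanism described in the abstract (an explicit activity variable driven by an underlying field of social tension, with an exogenous shock triggering a burst that later relaxes), here is a toy two-variable system integrated with an explicit Euler scheme. The functional forms and parameter values are placeholders of my own choosing, not the authors' system.

import numpy as np

def r(v, v_c=1.2, beta=6.0):
    """Threshold response: rioting is strongly excited once tension v exceeds v_c."""
    return 1.0 / (1.0 + np.exp(-beta * (v - v_c)))

T, dt = 60.0, 0.01                       # horizon (days) and Euler time step
n = int(T / dt)
t = np.linspace(0.0, T, n)

u = np.zeros(n)                          # intensity of rioting activity (explicit variable)
v = np.full(n, 0.8)                      # social tension (implicit field), baseline v_b = 0.8

omega, theta, v_b = 1.5, 12.0, 0.8       # decay rate of activity, relaxation time of tension
shock = lambda s: 1.0 if 10.0 <= s < 11.0 else 0.0    # exogenous triggering event (a "shock")

for k in range(n - 1):
    u[k + 1] = u[k] + dt * (r(v[k]) - omega * u[k])               # activity chases the threshold response
    v[k + 1] = v[k] + dt * (-(v[k] - v_b) / theta + shock(t[k]))  # tension jumps with the shock, then relaxes

print("peak activity %.3f reached at t = %.1f days" % (u.max(), t[u.argmax()]))
print("activity at t = %g days: %.3f" % (T, u[-1]))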
Keywords: non-local diffusion, traveling wave solutions, mathematical modeling, numerical solutions, partial differential equations.
Mathematics Subject Classification: Primary: 35K55, 35K57; Secondary: 35B9.
Citation: Henri Berestycki, Jean-Pierre Nadal, Nancy Rodríguez. A model of riots dynamics: Shocks, diffusion and thresholds. Networks & Heterogeneous Media, 2015, 10 (3): 443-475. doi: 10.3934/nhm.2015.10.443
Optimization Seminars
Center for Mathematical Modeling – U. de Chile
On the construction of maximal p-cyclically monotone operators
14 January 2021
Speaker: Professor Orestes Bueno
Universidad del Pacífico, Lima, Perú
Date: January 20, 2021 at 10:00 (Chilean-time)
Title: On the construction of maximal p-cyclically monotone operators
Abstract: In this talk we deal with the construction of explicit examples of maximal p-cyclically monotone operators. To date, there is only one instance of an explicit example of a maximal 2-cyclically monotone operator that is not maximal monotone. We present several other examples, and a proposal of how such examples can be constructed.
A recorded video of the conference is …. ; the slides can be downloaded here
Venue: Online via Google Meet http://meet.google.com/mqh-bgjv-iyb
A brief biography of the speaker: Orestes Bueno is an Associate Professor at Universidad del Pacífico, Lima, Perú. He obtained his PhD at the Instituto de Matemática Pura e Aplicada (IMPA), Brazil, in 2012. His main interests are: Maximal Monotone Operators, Generalized Convexity and Monotonicity, Functional Analysis.
Coordinators: Fabián Flores-Bazán (Universidad de Concepción) and Abderrahim Hantoute (CMM).
On diametrically maximal sets, maximal premonotone maps and premonotone bifunctions
13 November 2020
Speaker: Professor Wilfredo Sosa
Graduate Program of Economics, Catholic University of Brasilia, Brazil
Date: November 18, 2020 at 10:00
Title: On diametrically maximal sets, maximal premonotone maps and premonotone bifunctions
Abstract: First, we study diametrically maximal sets in the Euclidean space (those which are not properly contained in a set with the same diameter), establishing their main properties. Then, we use these sets for exhibiting an explicit family of maximal premonotone operators. We also establish some relevant properties of maximal premonotone operators, like their local boundedness, and finally we introduce the notion of premonotone bifunctions, presenting a canonical relation between premonotone operators and bifunctions, that extends the well known one, which holds in the monotone case.
Venue: Online via Google Meet meet.google.com/tam-ddhj-psx
A brief biography of the speaker: Wilfredo Sosa is a professor in the Graduate Program of Economics at the Catholic University of Brasilia, Brazil. He graduated from the Universidad de Ingeniería in Lima, Peru, and was trained at IMPA in Rio de Janeiro, Brazil. He is a co-founder of IMCA in Lima, Peru, and a full member of the Peruvian Academy of Sciences. Areas of interest: optimization theory; duality theory; equilibrium theory; mathematical economics.
Coordinators: Fabián Flores-Bazán (Universidad de Concepción) and Abderrahim Hantoute (CMM)
An algebraic view of the smallest strictly monotonic function
31 October 2020
Speaker: Professor César Gutiérrez
IMUVA (Mathematics Research Institute of the University of Valladolid), Valladolid, Spain
Title: An algebraic view of the smallest strictly monotonic function
Abstract: The talk concerns one of the most popular functions used to derive nonconvex separation results. Complete characterizations of both its level sets and basic properties such as monotonicity and convexity are provided in terms of its parameters. Most of these characterizations work without any additional requirement or assumption. Finally, as an application, a vectorial form of the Ekeland variational principle is provided.
A recorded video of the conference is here ; the slides can be downloaded here
Venue: Online via Google Meet meet.google.com/tta-bhpu-raa
A brief biography of the speaker: César Gutiérrez (ORCID iD 0000-0002-8223-2088) is Professor at Universidad of Valladolid (Spain) and researcher of the Mathematics Research Institute of the University of Valladolid (IMUVA). He is author of 54 papers on several subjects related to vector and set-valued optimization. Currently, he is Associate Editor of Optimization.
Principal-Agent problem in insurance: from discrete- to continuous-time
5 October 2020
Speaker: Doctor Nicolás Hernández
Center for Mathematical Modeling (CMM), Universidad de Chile, Santiago, Chile
Date: October 07, 2020 at 10:00
Title: Principal-Agent problem in insurance: from discrete-to continuous-time
Abstract: In this talk we present a contracting problem between an insurance buyer and the seller, subject to prevention efforts in the form of self-insurance and self-protection. We start with a static formulation, corresponding to an optimization problem with variational inequality constraint, and extend the main properties of the optimal contract to the continuous-time formulation, corresponding to a stochastic control problem in weak form under non-singular measures.
A recorded video of the conference is here; the slides can be downloaded here
Venue: Online via Google Meet here
A brief biography of the speaker: Nicolás Hernández is currently a Postdoctoral Researcher at the Center for Mathematical Modeling (CMM), at Universidad de Chile. He obtained his PhD in 2017, as a cotutelle between Université Paris-Dauphine and Universidad de Chile. His research areas of interest are Contract theory, stochastic control, mathematical finance, probability, optimization, game theory.
Sigma-convex functions and Sigma-subdifferentials
14 September 2020
Speaker: Prof. Mohammad Hossein Alizadeh
Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
Date: September 23, 2020 at 10:00
Title: Sigma-convex functions and Sigma-subdifferentials
Abstract: In this talk we present and study the notion of $\sigma$-subdifferential of a proper function $f$, which contains the Clarke-Rockafellar subdifferential of $f$ under some mild assumptions on $f$. We show that some well-known properties of convex functions, namely the Lipschitz property in the interior of the domain, remain valid for the large class of $\sigma$-convex functions.
Venue: Online via Google Meet meet.google.com/uoq-kifr-nsg
A brief biography of the speaker: Mohammad Hossein Alizadeh is an Assistant Professor at the Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran. He obtained his Ph.D. from the University of the Aegean, Greece, in 2012. He is mainly interested in the following areas: monotone and generalized monotone operators, monotone and generalized monotone bifunctions, generalized convexity and generalized inverses.
Coordinators: Abderrahim Hantoute (CMM) and Fabián Flores-Bazán (Universidad de Concepción)
Generalized Newton Algorithms for Tilt-Stable Minimizers in Nonsmooth Optimization
26 August 2020
Speaker: Prof. Boris Mordukhovich
Distinguished University Professor of Mathematics Wayne State University
Date: September 2, 2020 at 10:00
Title: Generalized Newton Algorithms for Tilt-Stable Minimizers in Nonsmooth Optimization
Abstract: This talk aims at developing two versions of the generalized Newton method to compute local minimizers for nonsmooth problems of unconstrained and constrained optimization that satisfy an important stability property known as tilt stability. We start with unconstrained minimization of continuously differentiable cost functions having Lipschitzian gradients and suggest two second-order algorithms of the Newton type: one involving coderivatives of Lipschitzian gradient mappings, and the other based on graphical derivatives of the latter. Then we proceed with the propagation of these algorithms to minimization of extended-real-valued prox-regular functions, while covering in this way problems of constrained optimization, by using Moreau envelopes. Employing advanced techniques of second-order variational analysis and characterizations of tilt stability allows us to establish the solvability of subproblems in both algorithms and to prove the Q-superlinear convergence of their iterations. Based on joint work with Ebrahim Sarabi (Miami University, USA).
Venue: Online via Google Meet meet.google.com/gyf-mpcb-tre
A brief biography of the speaker: Prof. Boris Mordukhovich was born and educated in the former Soviet Union. He got his PhD from the Belarus State University (Minsk) in 1973. He is currently a Distinguished University Professor of Mathematics at Wayne State University. Mordukhovich is an expert in optimization, variational analysis, generalized differentiation, optimal control, and their applications to economics, engineering, behavioral sciences, and other fields. He is the author and a co-author of many papers and 5 monographs in these areas. Prof. Mordukhovich is an AMS Fellow, a SIAM Fellow, and a recipient of many international awards and honors including Doctor Honoris Causa degrees from 6 universities worldwide. He was the Founding Editor (2008) and a co-Editor-in-Chief (2009-2014) of Set-Valued and Variational Analysis, and is now an Associate Editor of many high-ranked journals including SIAM J. Optimization, JOTA, JOGO, etc. In 2016 he was elected to the Accademia Peloritana dei Pericolanti (Italy). Prof. Mordukhovich is in the list of Highly Cited Researchers in Mathematics.
An overview of Sweeping Processes with applications
Speaker: Prof. Emilio Vilches
Instituto de Ciencias de la Ingeniería, Universidad de O'Higgins, Rancagua, Chile
Date: August 26, 2020 at 10:00
Title: An overview of Sweeping Processes with applications
Abstract: The Moreau's Sweeping Process is a first-order differential inclusion, involving the normal cone to a moving set depending on time. It was introduced and deeply studied by J.J. Moreau in the 1970s as a model for an elastoplastic mechanical system. Since then, many other applications have been given, and new variants have appeared. In this talk, we review the latest developments in the theory of sweeping processes and its variants. We highlight open questions and provide some applications.
This work has been supported by ANID-Chile under project Fondecyt de Iniciación 11180098.
The recorded video of the conference can be downloaded here
The slides of the conference can be downloaded here
Venue: Online via Google Meet https://meet.google.com/toh-nxch-fhb
A brief biography of the speaker: Prof. Emilio Vilches is Assistant Professor at Universidad de O'Higgins, Rancagua, Chile. He obtains his Ph.D. from the University of Chile and the University of Burgundy in 2017. He is mainly interested in the application of convex and variational analysis to nonsmooth dynamical systems.
Epi-convergence, asymptotic analysis and stability in set optimization problems
3 August 2020
Speaker: Prof. Rubén López
University of Tarapacá, Arica, Chile
Title: Epi-convergence, asymptotic analysis and stability in set optimization problems
Abstract: We study the stability of set optimization problems with data that are not necessarily bounded. To do this, we use the well-known notion of epi-convergence coupled with asymptotic tools for set-valued maps. We derive characterizations for this notion that allows us to study the stability of vector and set type solutions by considering variations of the whole data (feasible set and objective map). We extend the notion of total epi-convergence to set-valued maps.
* This work has been supported by Conicyt-Chile under project FONDECYT 1181368
Joint work with Elvira Hernández, Universidad Nacional de Educación a Distancia, Madrid, Spain
Venue: Online via Google Meet – https://meet.google.com/hgo-zwkr-fvh
A brief biography of the speaker: Prof. Rubén López is Professor at the University of Tarapacá, Arica – Chile. He studied at Moscow State University – Mech Math (1996, Russia) and Universidad de Concepción – DIM (2005, Chile). He works on Optimization: asymptotic analysis, variational convergences, stability theory, approximate solutions and well-posedness.
Satisfying Instead of Optimizing in the Nash Demand Games
15 July 2020
Speaker: Prof. Sigifredo Laengle
University of Chile, Santiago, Chile
Date: July 22, 2020 at 10:00
Abstract: The Nash Demand Game (NDG) has been one of the first models (Nash 1953) that tried to describe the process of negotiation, competition, and cooperation. This model is still the subject of active research; in fact, it raises a set of open questions regarding how agents optimally select their decisions and how they face uncertainty. However, agents often act guided by chance and necessity, with a Darwinian flavor: satisfying instead of optimising. The Viability Theory (VT) takes this approach. Therefore, we investigate the NDG from this point of view. In particular, we ask two questions: whether there are decisions in the NDG that ensure viability, and whether this set also contains Pareto and equilibrium strategies. Carrying out this work, we find that the answers to both questions are not only affirmative, but we also make progress in characterising viable NDGs. In particular, we conclude that a certain type of NDG ensures viability and equilibrium. Many interesting questions originate from this initial work. For example, is it possible to fully characterise the NDG by imposing viability conditions? Under what conditions does viability require cooperation? Is extreme polarisation viable?
Venue: Online via Google Meet – meet.google.com/jhb-umew-kwp
A brief biography of the speaker: Prof. Sigifredo Laengle is an Associate Professor at the University of Chile since 2007. He received his PhD in Germany working on the theoretical problem of the value of information in organisations. He has published articles that articulate phenomena of strategic interaction, and optimisation.
Coordinators: Abderrahim Hantoute and Fabián Flores-Bazán (Universidad de Concepción)
Enlargements of the Moreau-Rockafellar Subdifferential
13 July 2020
Speaker: Prof. Michel Théra
University of Limoges, France
Abstract: The Moreau-Rockafellar subdifferential is a highly important notion in convex analysis and optimization theory. But there are many functions which fail to be subdifferentiable at certain points. In particular, there is a continuous convex function defined on $\ell^2(\mathbb{N})$, whose Moreau–Rockafellar subdifferential is empty at every point of its domain. This talk proposes some enlargements of the Moreau-Rockafellar subdifferential: the sup$^\star$-subdifferential, sup-subdifferential and symmetric subdifferential, all of them being nonempty for the mentioned function. These enlargements satisfy the most fundamental properties of the Moreau–Rockafellar subdifferential: convexity, weak$^*$-closedness, weak$^*$-compactness and, under some additional assumptions, possess certain calculus rules. The sup$^\star$ and sup subdifferentials coincide with the Moreau–Rockafellar subdifferential at every point at which the function attains its minimum, and if the function is upper semi-continuous, then there are some relationships for the other points. They can be used to detect minima and maxima of arbitrary functions.
The slides of the conference can be downloaded here.
Venue: Online via Google Meet – meet.google.com/unx-gcse-wkn
A brief biography of the speaker: Michel Théra is a French mathematician. He obtained his PhD from the Université de Pau et des Pays de l'Adour (1978) and his thèse d'Etat at the University of Panthéon-Sorbonne (1988). A former President of the French Society of Industrial and Applied Mathematics, he has also been Vice President of the University of Limoges in charge of international cooperation. He is presently a professor emeritus of Mathematics in the Laboratory XLIM of the University of Limoges, where he retired as Professeur de classe exceptionnelle. He became Adjunct Professor of Federation University Australia, chairing there the International Academic Advisory Group of the Centre for Informatics and Applied Optimisation (CIAO). He is also scientific co-director of the International School of Mathematics "Guido Stampacchia" at the "Ettore Majorana" Foundation and Centre for Scientific Culture (Erice, Sicily). For several years, he was a member of the Committee for the Developing Countries of the European Mathematical Society and became an associate member after his term. His research focuses on variational analysis, convex analysis, continuous optimization, monotone operator theory and the interaction among these fields of research, and their applications. He has published 130 articles in international journals on various topics related to variational analysis, optimization, monotone operator theory and nonlinear functional analysis. He serves as editor for several journals on continuous optimization and was responsible for several international research programs until his retirement.
Coordinators: Abderrahim Hantoute and Fabián Flores-Bazán (DIM-UdeC)
Research | Open Access | Published: 25 September 2015
On geodesic strongly E-convex sets and geodesic strongly E-convex functions
Adem Kılıçman¹ & Wedad Saleh¹
Journal of Inequalities and Applications, volume 2015, Article number: 297 (2015)
In this article, geodesic E-convex sets and geodesic E-convex functions on a Riemannian manifold are extended to the so-called geodesic strongly E-convex sets and geodesic strongly E-convex functions. Some properties of geodesic strongly E-convex sets are also discussed. The results obtained in this article may inspire future research in convex analysis and related optimization fields.
Convexity and its generalizations play an important role in optimization theory, convex analysis, Minkowski space, and fractal mathematics [1–7]. In order to extend the validity of results to larger classes of optimization problems, these concepts have been generalized and extended in several directions using novel and innovative techniques. Youness [8] defined E-convex sets and E-convex functions, which have some important applications in various branches of the mathematical sciences [9–11]. However, some results given by Youness [8] appear to be incorrect according to Yang [12]. Chen [13] extended E-convexity to semi-E-convexity and discussed some of its properties. Also, Youness and Emam [14] discussed a new class of functions, called strongly E-convex functions, obtained by taking the images of two points $x_{1} $ and $x_{2} $ under an operator $E\colon\mathbb{R}^{n}\rightarrow\mathbb{R}^{n} $ in addition to the two points themselves. Strong E-convexity was extended to semi-strong E-convexity as well as quasi- and pseudo-semi-strong E-convexity in [15]. The characterization of efficient solutions for multi-objective programming problems involving semi-strong E-convexity was investigated in [16].
A generalization of convexity on Riemannian manifolds was proposed by Rapcsak [17] and Udriste [18]. Moreover, Iqbal et al. [19] introduced geodesic E-convex sets and geodesic E-convex functions on Riemannian manifolds.
Motivated by earlier research works [18, 20–25] and by the importance of the concepts of convexity and generalized convexity, we discuss a new class of sets on Riemannian manifolds and a new class of functions defined on them, which are called geodesic strongly E-convex sets and geodesic strongly E-convex functions, and some of their properties are presented.
In this section, we introduce some definitions and well-known results of Riemannian manifolds, which help us throughout the article. We refer to [18] for the standard material on differential geometry.
Let N be a $C^{\infty} $ m-dimensional Riemannian manifold, and $T_{z}N $ be the tangent space to N at z. Also, assume that $\mu_{z}(x_{1},x_{2}) $ is a positive inner product on the tangent space $T_{z}N $ ($x_{1},x_{2}\in T_{z}N $), which is given for each point of N. Then a $C^{\infty} $ map $\mu\colon z\rightarrow\mu_{z} $, which assigns a positive inner product $\mu _{z} $ to $T_{z}N $ for each point z of N is called a Riemannian metric.
The length of a piecewise $C^{1} $ curve $\eta\colon [a_{1},a_{2}]\rightarrow N $ is defined as follows:
$$L(\eta)= \int_{a_{1}}^{a_{2}} \bigl\Vert \eta'(x)\bigr\Vert \, dx. $$
We define $d(z_{1},z_{2})= \inf \lbrace L(\eta)\colon\eta\mbox{ is a piecewise } C^{1} \mbox{ curve joining } z_{1} \mbox{ to } z_{2} \rbrace$ for any points $z_{1},z_{2}\in N $. Then d is a distance which induces the original topology on N. As is well known, on every Riemannian manifold there is a uniquely determined Riemannian connection, called the Levi-Civita connection, denoted by $\bigtriangledown_{X}Y $, for any vector fields $X,Y\in N $. Also, a smooth path η is a geodesic if and only if its tangent vector is a parallel vector field along the path η, i.e., η satisfies the equation $\bigtriangledown_{\eta'(t)}\eta'(t)=0 $. Any path η joining $z_{1} $ and $z_{2} $ in N such that $L(\eta )=d(z_{1},z_{2}) $ is a geodesic and is called a minimal geodesic.
Finally, assume that $(N,\eta) $ is a complete m-dimensional Riemannian manifold with Riemannian connection ▽. Let $x_{1} , x_{2} \in N $ and $\eta\colon[0,1]\rightarrow N $ be a geodesic joining the points $x_{1} $ and $x_{2} $, which means that $\eta_{x_{1},x_{2}}(0)=x_{2}$ and $\eta_{x_{1},x_{2}}(1)=x_{1} $.
Definition 2.1
A set B in a Riemannian manifold N is called totally convex if B contains every geodesic $\eta_{x_{1},x_{2}} $ of N whose endpoints $x_{1} $ and $x_{2} $ belong to B.
Note that the whole manifold N is totally convex and, by convention, so is the empty set. The minimal circle in a hyperboloid is totally convex, but a single point is not. Also, a proper subset of a sphere is not necessarily totally convex.
The following theorem was proved in [18].
Theorem 2.2
The intersection of any number of a totally convex sets is totally convex.
Remark 2.3
In general, the union of totally convex sets is not necessarily totally convex.
A function $f\colon B\rightarrow\mathbb{R} $ is called a geodesic convex function on a totally convex set $B\subset N $ if for every geodesic $\eta_{x_{1},x_{2}} $ the inequality
$$f\bigl(\eta_{x_{1},x_{2}}(\gamma)\bigr)\leq\gamma f(x_{1})+(1- \gamma)f(x_{2}) $$
holds for all $x_{1},x_{2}\in B $ and $\gamma\in[0,1] $.
In 2005, strongly E-convex sets and strongly E-convex functions were introduced by Youness and Emam [14] as follows.
A subset $B\subseteq\mathbb{R}^{n} $ is called a strongly E-convex set if there is a map $E\colon\mathbb{R}^{n}\rightarrow \mathbb{R}^{n} $ such that
$$\gamma\bigl(\alpha b_{1}+E(b_{1})\bigr)+(1-\gamma) \bigl( \alpha b_{2}+E(b_{2})\bigr)\in B $$
for each $b_{1},b_{2}\in B$, $\alpha\in[0,1] $ and $\gamma\in[0,1] $.
A function $f\colon B\subseteq\mathbb {R}^{n}\rightarrow\mathbb{R} $ is called a strongly E-convex function on B if there is a map $E\colon\mathbb {R}^{n}\rightarrow\mathbb{R}^{n} $ such that B is a strongly E-convex set and
$$f\bigl(\gamma\bigl(\alpha b_{1}+E(b_{1})\bigr)+(1-\gamma) \bigl(\alpha b_{2}+E(b_{2})\bigr)\bigr)\leq \gamma f \bigl(E(b_{1})\bigr)+(1-\gamma)f\bigl(E(b_{2})\bigr) $$

for each $b_{1},b_{2}\in B$, $\alpha\in[0,1] $, and $\gamma\in[0,1] $.
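Because the definitions above are stated on $\mathbb{R}^{n} $, they are easy to probe numerically. The helper below is an illustrative sanity check (not part of the paper): it searches random samples for violations of the strong E-convexity inequality, and the particular f and E in the usage line are arbitrary choices that happen to pass the check.

```python
import numpy as np

def find_strong_E_convexity_violation(f, E, sample_points, trials=20_000, tol=1e-9, seed=0):
    """Search for a counterexample to
    f(g*(a*b1 + E(b1)) + (1-g)*(a*b2 + E(b2))) <= g*f(E(b1)) + (1-g)*f(E(b2))
    with b1, b2 drawn from `sample_points` and a, g drawn uniformly from [0, 1]."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        b1, b2 = rng.choice(sample_points, size=2)
        a, g = rng.random(), rng.random()
        lhs = f(g * (a * b1 + E(b1)) + (1 - g) * (a * b2 + E(b2)))
        rhs = g * f(E(b1)) + (1 - g) * f(E(b2))
        if lhs > rhs + tol:
            return (b1, b2, a, g)   # counterexample found
    return None                      # no violation detected on the sampled points

# Illustrative usage: f(x) = x^2 with E(x) = -x on [-1, 1] passes the sampled check,
# since f((a-1)*b) <= f(-b) for every a in [0, 1] and f is convex.
points = np.linspace(-1.0, 1.0, 201)
print(find_strong_E_convexity_violation(lambda x: x**2, lambda x: -x, points))  # None
```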
In 2012, the geodesic E-convex set and geodesic E-convex functions on a Riemannian manifold were introduced by Iqbal et al. [19] as follows.
Assume that $E\colon N\rightarrow N $ is a map. A subset B in a Riemannian manifold N is called geodesic E-convex iff there exists a unique geodesic $\eta_{E(b_{1}),E(b_{2})}(\gamma) $ of length $d(b_{1},b_{2}) $, which belongs to B, for each $b_{1},b_{2}\in B $ and $\gamma\in[0,1] $.
A function $f\colon B\subseteq N \rightarrow\mathbb {R}$ is called geodesic E-convex on a geodesic E-convex set B if
$$f\bigl(\eta_{E(b_{1}),E(b_{2})}(\gamma)\bigr)\leq\gamma f\bigl(E(b_{1}) \bigr)+(1-\gamma)f\bigl(E(b_{2})\bigr) $$
for all $b_{1},b_{2}\in B $ and $\gamma\in[0,1] $.
Geodesic strongly E-convex sets and geodesic strongly E-convex functions
In this section, we introduce a geodesic strongly E-convex (GSEC) set and a geodesic strongly E-convex (GSEC) function in a Riemannian manifold N and discuss some of their properties.
Assume that $E\colon N\rightarrow N $ is a map. A subset B in a Riemannian manifold N is called GSEC if and only if there is a unique geodesic $\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma) $ of length $d(b_{1},b_{2}) $, which belongs to B, $\forall b_{1},b_{2}\in B$, $\alpha\in[0,1] $, and $\gamma\in [0,1] $.
Every GSEC set is a GEC set when $\alpha=0 $.
A GEC set is not necessarily a GSEC set. The following example shows this statement.
Example 3.3
Let $N^{2} $ be a 2-dimensional simply connected, complete Riemannian manifold of non-positive sectional curvature, and let $B\subset N^{2} $ be an open star-shaped set. Let $E\colon N^{2}\rightarrow N^{2} $ be a map such that $E(z)= \lbrace y\colon y\in \operatorname{ker}(B), \forall z\in B \rbrace $. Then B is GEC; on the other hand, it is not GSEC.
Proposition 3.4
Every convex set $B\subset N $ is a GSEC set.
Let us take the map $E\colon N\rightarrow N $ to be $E=I $, where I is the identity map, and $\alpha=0 $; then we have the required result. □
Note that if we take the mapping $E(x)=(1-\alpha)x$, $x\in B $, then the definition of a GSEC set reduces to the definition of a totally convex set.
If $B\subset N $ is a GSEC set, then $E(B)\subseteq B $.
Since B is a GSEC set, we have for each $b_{1},b_{2}\in B$, $\alpha \in[0,1] $, and $\gamma\in[0,1] $,
$$\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\in B. $$
For $\gamma=0 $ and $\alpha=0 $, we have $\eta _{E(b_{1}),E(b_{2})}(0)=E(b_{2})\in B $, then $E(B)\subseteq B $. □
If $\lbrace B_{j}, j\in I \rbrace$ is an arbitrary family of GSEC subsets of N with respect to the mapping $E\colon N\rightarrow N $, then the intersection $\bigcap_{j\in I}B_{j} $ is a GSEC subset of N.
If $\bigcap_{j\in I}B_{j} $ is an empty set, then it is obviously a GSEC subset of N. Assume that $b_{1},b_{2}\in\bigcap_{j\in I} B_{j} $, then $b_{1},b_{2} \in B_{j} $, $\forall j\in I $. By the GSEC of $B_{j} $, we get $\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma )\in B_{j}$, $\forall j\in I$, $\alpha\in[0,1] $, and $\gamma\in[0,1] $. Hence, $\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\in \bigcap_{j\in I} B_{j}$, $\forall\alpha\in[0,1] $ and $\gamma\in[0,1] $. □
The above theorem is not generally true for the union of GSEC subsets of N.
Now, we extend the definition of a GEC function on a Riemannian manifold to a GSEC function on a Riemannian manifold.
A real-valued function $f\colon B\subset N\rightarrow\mathbb{R} $ is said to be a GSEC function on a GSEC set B, if
$$f\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\bigr)\leq \gamma f\bigl(E(b_{1}) \bigr)+(1-\gamma)f\bigl(E(b_{2})\bigr), $$
$\forall b_{1},b_{2}\in B $, $\alpha\in[0,1]$, and $\gamma\in[0,1] $. If the above inequality is strict for all $b_{1},b_{2}\in B$, $\alpha b_{1}+E(b_{1})\neq\alpha b_{2}+E(b_{2})$, $\alpha\in[0,1]$, and $\gamma \in(0,1) $, then f is called a strictly GSEC function.
Every GSEC function is a GEC function when $\alpha=0 $. The following example shows that a GEC function is not necessarily a GSEC function.
Example 3.10
Consider the function $f\colon\mathbb{R}\rightarrow\mathbb{R} $ where $f(b)= -|b| $ and suppose that $E\colon\mathbb {R}\rightarrow\mathbb{R} $ is given as $E(b)=-b $. We consider the geodesic η such that
$$\begin{aligned} \eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma) =& \textstyle\begin{cases} - [\alpha b_{2}+E(b_{2}) +\gamma(\alpha b_{1}+E(b_{1})-\alpha b_{2}-E(b_{2})) ] ;& b_{1}b_{2}\geq0, \\ - [\alpha b_{2}+E(b_{2}) +\gamma(\alpha b_{2}+E(b_{2})-\alpha b_{1}-E(b_{1})) ] ;& b_{1}b_{2}< 0 \end{cases}\displaystyle \\ =& \textstyle\begin{cases} - [(\alpha-1) b_{2} +\gamma((\alpha-1) b_{1}+(1-\alpha) b_{2}) ] ;& b_{1}b_{2}\geq0, \\ - [(\alpha-1) b_{2} +\gamma ((\alpha-1) b_{2}+(1-\alpha) b_{1}) ] ;& b_{1}b_{2}< 0. \end{cases}\displaystyle \end{aligned}$$
If $\alpha=0 $, then
$$ \eta_{E(b_{1}),E(b_{2})}(\gamma) = \textstyle\begin{cases} {[ b_{2} +\gamma( b_{1}-b_{2}) ]} ;& b_{1}b_{2}\geq0, \\ {[ b_{2} +\gamma( b_{2}- b_{1}) ]} ;& b_{1}b_{2}< 0. \end{cases} $$
If $b_{1}, b_{2}\geq0 $, then
$$\begin{aligned} f\bigl(\eta_{E(b_{1}),E(b_{2})}(\gamma)\bigr) =& f\bigl(b_{2}+\gamma (b_{1}-b_{2})\bigr) \\ =& -\bigl[(1-\gamma)b_{2}+\gamma b_{1}\bigr]. \end{aligned}$$
$$ \gamma f\bigl(E(b_{1})\bigr)+(1-\gamma)f\bigl(E(b_{2}) \bigr)= \gamma f(-b_{1})+(1-\gamma )f(-b_{2}) = -\bigl[(1-\gamma)b_{2}+\gamma b_{1}\bigr]. $$
Hence, $f(\eta_{E(b_{1}),E(b_{2})}(\gamma))\leq\gamma f(E(b_{1}))+(1-\gamma)f(E(b_{2})) $, $\forall\gamma\in[0,1] $.
Similarly, the above inequality holds true when $b_{1},b_{2}<0 $.
Now, let $b_{1}<0$, $b_{2}>0 $, then
$$\begin{aligned} f\bigl(\eta_{E(b_{1}),E(b_{2})}(\gamma)\bigr) =& f\bigl(b_{2}+ \gamma(b_{2}-b_{1})\bigr) \\ =& -\bigl[(1+\gamma)b_{2}-\gamma b_{1}\bigr]. \end{aligned}$$
$$ \gamma f\bigl(E(b_{1})\bigr)+(1-\gamma)f\bigl(E(b_{2})\bigr) = \gamma f(-b_{1})+(1-\gamma )f(-b_{2}) = \gamma b_{1}-(1-\gamma)b_{2}. $$
$$f\bigl(\eta_{E(b_{1}),E(b_{2})}(\gamma)\bigr)\leq\gamma f\bigl(E(b_{1}) \bigr)+(1-\gamma )f\bigl(E(b_{2})\bigr) $$
if and only if
$$-\bigl[(1+\gamma)b_{2}-\gamma b_{1}\bigr]\leq\gamma b_{1}-(1-\gamma)b_{2} $$
$$-2\gamma b_{2}\leq0, $$
which is always true for all $\gamma\in[0,1] $.
Similarly, $f(\eta_{E(b_{1}),E(b_{2})}(\gamma))\leq\gamma f(E(b_{1}))+(1-\gamma)f(E(b_{2})) $, $\forall\gamma\in[0,1] $ also holds for $b_{1}>0 $ and $b_{2}<0 $.
Thus, f is a GEC function on $\mathbb{R} $, but it is not a GSEC function because if we take $b_{1}=0$, $b_{2}=-1 $ and $\gamma=\frac {1}{2} $, then
$$\begin{aligned} f\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\bigr) =& f\biggl(\frac{1}{2}\alpha- \frac{1}{2}\biggr) \\ =& \frac{1}{2}\alpha-\frac {1}{2} \\ > & \frac{1}{2}f\bigl(E(0)\bigr)+\frac{1}{2}f\bigl(E(-1)\bigr) \\ =& \frac{-1}{2} ,\quad \forall\alpha\in ( 0,1 ] . \end{aligned}$$
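The computations in Example 3.10 can be reproduced numerically. The short check below (an illustrative aid, not part of the paper) implements the piecewise geodesic given above for $f(b)=-|b| $ and $E(b)=-b $, verifies the GEC inequality at $\alpha=0 $ on random samples, and reproduces the GSEC violation at $b_{1}=0 $, $b_{2}=-1 $, $\gamma=1/2 $.

```python
import numpy as np

f = lambda b: -abs(b)
E = lambda b: -b

def eta(b1, b2, gamma, alpha):
    """Geodesic value from Example 3.10, piecewise in the sign of b1*b2."""
    p1, p2 = alpha * b1 + E(b1), alpha * b2 + E(b2)
    if b1 * b2 >= 0:
        return -(p2 + gamma * (p1 - p2))
    return -(p2 + gamma * (p2 - p1))

rng = np.random.default_rng(1)

# GEC inequality (alpha = 0) holds on random samples:
for _ in range(10_000):
    b1, b2 = rng.uniform(-5.0, 5.0, size=2)
    g = rng.random()
    assert f(eta(b1, b2, g, 0.0)) <= g * f(E(b1)) + (1 - g) * f(E(b2)) + 1e-12

# GSEC inequality fails at b1 = 0, b2 = -1, gamma = 1/2 for every alpha in (0, 1]:
for a in (0.25, 0.5, 1.0):
    lhs = f(eta(0.0, -1.0, 0.5, a))            # equals alpha/2 - 1/2
    rhs = 0.5 * f(E(0.0)) + 0.5 * f(E(-1.0))   # equals -1/2
    print(a, lhs, rhs, lhs > rhs)              # lhs exceeds rhs, so f is not GSEC
```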
Every g-convex function f on a convex set B is a GSEC function when $\alpha=0 $ and E is the identity map.
Proposition 3.11
Assume that $f\colon B\rightarrow\mathbb{R} $ is a GSEC function on a GSEC set $B\subseteq N $, then $f(\alpha b+E(b))\leq f(E(b)) $, $\forall b\in B $ and $\alpha\in[0,1] $.
Since $f\colon B\rightarrow\mathbb{R} $ is a GSEC function on a GSEC set $B\subseteq N $, then $\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma) \in B$, $\forall b_{1},b_{2}\in B$, $\alpha\in [0,1]$, and $\gamma\in[0,1] $. Also,
$$f\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\bigr)\leq \gamma f\bigl(E(b_{1}) \bigr)+(1-\gamma)f\bigl(E(b_{2})\bigr) $$
thus, for $\gamma=1 $, we get $\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)=\alpha b_{1}+E(b_{1}) $. Then
$$f\bigl(\alpha b_{1}+E(b_{1})\bigr)\leq f \bigl(E(b_{1})\bigr) . $$
Theorem 3.12
Consider that $B\subseteq N $ is a GSEC set and $f_{1}\colon B\rightarrow\mathbb{R} $ is a GSEC function. If $f_{2}\colon I\rightarrow\mathbb{R} $ is a non-decreasing convex function such that $\operatorname{rang}(f_{1})\subset I $, then $f_{2}\circ f_{1} $ is a GSEC function on B.
Since $f_{1} $ is a GSEC function, for all $b_{1},b_{2}\in B$, $\alpha \in[0,1] $, and $\gamma\in[0,1] $,
$$f_{1}\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\bigr)\leq \gamma f_{1}\bigl(E(b_{1})\bigr)+(1-\gamma)f_{1} \bigl(E(b_{2})\bigr). $$
Since $f_{2} $ is a non-decreasing convex function,
$$\begin{aligned}& f_{2}\circ f_{1}\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma )\bigr) \\& \quad = f_{2} \bigl( f_{1}\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})} (\gamma)\bigr) \bigr) \\& \quad \leq f_{2} \bigl(\gamma f_{1} \bigl(E(b_{1}) \bigr)+(1-\gamma) f_{1}\bigl(E(b_{2})\bigr) \bigr) \\& \quad \leq \gamma f_{2} \bigl( f_{1} \bigl(E(b_{1}) \bigr) \bigr) +(1-\gamma) f_{2} \bigl( f_{1} \bigl(E(b_{2})\bigr) \bigr) \\& \quad = \gamma(f_{2}\circ f_{1}) \bigl(E(b_{1})\bigr) +(1-\gamma) (f_{2}\circ f_{1}) \bigl(E(b_{2}) \bigr), \end{aligned}$$
which means that $f_{2}\circ f_{1} $ is a GSEC function on B. Similarly, if $f_{2} $ is a strictly non-decreasing convex function, then $f_{2}\circ f_{1} $ is a strictly GSEC function. □
Assume that $B\subseteq N $ is a GSEC set and $f_{j}\colon B\rightarrow\mathbb{R}$, $j=1,2,\ldots,m $ are GSEC functions. Then the function
$$f=\sum_{j=1}^{m}n_{j}f_{j} $$
is GSEC on B, $\forall n_{j}\in\mathbb{R}$, $n_{j}\geq0 $.
Since $f_{j}$, $j=1,2,\ldots,m $ are GSEC functions, $\forall b_{1},b_{2}\in B $, $\alpha\in[0,1]$, and $\gamma\in[0,1] $, we have
$$f_{j}\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\bigr)\leq \gamma f_{j}\bigl(E(b_{1})\bigr)+(1-\gamma)f_{j} \bigl(E(b_{2})\bigr). $$
$$n_{j}f_{j}\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})} (\gamma )\bigr)\leq \gamma n_{j} f_{j}\bigl(E(b_{1})\bigr)+(1- \gamma)n_{j}f_{j}\bigl(E(b_{2})\bigr). $$
$$\begin{aligned}& \sum_{j=1}^{m}n_{j}f_{j} \bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})} (\gamma)\bigr) \\& \quad \leq \gamma\sum _{j=1}^{m} n_{j} f_{j} \bigl(E(b_{1})\bigr)+(1-\gamma)\sum_{j=1}^{m}n_{j}f_{j} \bigl(E(b_{2})\bigr) \\& \quad = \gamma f\bigl(E(b_{1})\bigr)+(1-\gamma)f\bigl(E(b_{2}) \bigr). \end{aligned}$$
Thus, f is a GSEC function. □
Let $B\subseteq N $ be a GSEC set and $\lbrace f_{j},j\in I \rbrace$ be a family of real-valued functions defined on B such that $\sup_{j\in I}f_{j}(b) $ exists in $\mathbb{R} $, $\forall b\in B $. If $f_{j}\colon B\rightarrow\mathbb{R} $, $j\in I$ are GSEC functions on B, then the function $f\colon B\rightarrow \mathbb{R} $, defined by $f(b)=\sup_{j\in I}f_{j}(b)$, $\forall b\in B $ is GSEC on B.
Since $f_{j}$, $j\in I $ are GSEC functions on a GSEC set B, $\forall b_{1},b_{2}\in B $, $\alpha\in[0,1]$, and $\gamma\in[0,1] $, we have
$$\begin{aligned}& \sup_{j\in I}f_{j}\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma) \bigr) \\& \quad \leq \sup_{j\in I} \bigl[ \gamma f_{j} \bigl(E(b_{1})\bigr)+(1-\gamma)f_{j}\bigl(E(b_{2}) \bigr) \bigr] \\& \quad = \gamma\sup_{j\in I} f_{j}\bigl(E(b_{1}) \bigr)+(1-\gamma)\sup_{j\in I} f_{j} \bigl(E(b_{2})\bigr) \\& \quad = \gamma f\bigl(E(b_{1})\bigr)+(1-\gamma)f\bigl(E(b_{2}) \bigr). \end{aligned}$$
Hence,

$$f\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\bigr)\leq \gamma f\bigl(E(b_{1})\bigr)+(1-\gamma)f\bigl(E(b_{2})\bigr), $$
which means that f is a GSEC function on B. □
Assume that $h_{j}\colon N\rightarrow\mathbb{R} $, $j=1,2,\ldots,m$ are GSEC functions on N, with respect to $E\colon N\rightarrow N $. If $E(B)\subseteq B $, then $B= \lbrace b\in N\colon h_{j}(b)\leq0, j=1,2,\ldots,m \rbrace $ is a GSEC set.
Since $h_{j}$, $j=1,2,\ldots m $ are GSEC functions,
$$\begin{aligned} h_{j}\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\bigr) \leq & \gamma h_{j}\bigl(E(b_{1})\bigr)+(1-\gamma)h_{j} \bigl(E(b_{2})\bigr) \\ \leq& 0, \end{aligned}$$
$\forall b_{1},b_{2}\in B $, $\alpha\in[0,1]$, and $\gamma\in[0,1] $. Since $E(B) \subseteq B $, $\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma) \in B $. Hence, B is a GSEC set. □
Epigraphs
Youness and Emam [14] defined a strongly $E\times F $-convex set, where $E\colon\mathbb{R}^{n} \rightarrow\mathbb{R}^{n}$ and $F\colon\mathbb{R} \rightarrow\mathbb{R}$, and studied some of its properties. In this section, we generalize the strongly $E\times F $-convex set to a geodesic strongly $E\times F $-convex set on Riemannian manifolds and discuss GSEC functions in terms of their epigraphs. Furthermore, some properties of GSEC sets are given.
Let $B\subset N\times\mathbb{R} $, $E\colon N\rightarrow N$ and $F\colon\mathbb{R} \rightarrow\mathbb{R}$. A set B is called geodesic strongly $E\times F $-convex if $(b_{1},\beta _{1}),(b_{2},\beta_{2})\in B $ implies
$$\bigl( \eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma ),\gamma F(\beta_{1})+(1-\gamma)F( \beta_{2}) \bigr) \in B $$
for all $\alpha\in[0,1] $ and $\gamma\in[0,1] $.
It is not difficult to prove that $B\subseteq N $ is a GSEC set if and only if $B\times\mathbb{R} $ is a geodesic strongly $E\times F $-convex set.
An epigraph of f is given by
$$\operatorname{epi}(f)= \bigl\lbrace (b,a)\colon b\in B, a\in\mathbb{R}, f(b)\leq a \bigr\rbrace . $$
A characterization of a GSEC function in terms of its $\operatorname{epi}(f) $ is given by the following theorem.
Let $E\colon N\rightarrow N $ be a map, $B\subseteq N $ be a GSEC set, $f\colon B\rightarrow\mathbb{R} $ be a real-valued function and $F\colon\mathbb{R}\rightarrow\mathbb{R} $ be a map such that $F(f(b)+a)=f(E(b))+a $, for each non-negative real number a. Then f is a GSEC function on B if and only if $\operatorname{epi}(f) $ is geodesic strongly $E\times F $-convex on $B\times\mathbb{R} $.
Assume that $(b_{1},a_{1}) ,(b_{2},a_{2})\in \operatorname{epi}(f)$. If B is a GSEC set, then $\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\in B$, $\forall\alpha\in[0,1] $ and $\gamma\in [0,1] $. Since $E(b_{1})\in B $ for $\alpha=0$, $\gamma=1 $, also $E(b_{2})\in B $ for $\alpha=0$, $\gamma=0 $, let $F(a_{1}) $ and $F(a_{2}) $ be such that $f(E(b_{1}))\leq F(a_{1}) $ and $f(E(b_{2}))\leq F(a_{2}) $. Then $(E(b_{1}),F(a_{1})),(E(b_{2}),F(a_{2}))\in \operatorname{epi}(f) $.
Let f be GSEC on B, then
$$\begin{aligned} f\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\bigr) \leq& \gamma f\bigl(E(b_{1}) \bigr)+(1-\gamma)f\bigl(E(b_{2})\bigr) \\ \leq& \gamma F(a_{1})+(1-\gamma)F(a_{2}). \end{aligned}$$
Thus, $( \eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma ),\gamma F(a_{1})+(1-\gamma)F(a_{2}) ) \in \operatorname{epi}(f) $, then $\operatorname{epi}(f)$ is geodesic strongly $E\times F $-convex on $B\times\mathbb{R} $.
Conversely, assume that $\operatorname{epi}(f)$ is geodesic strongly $E\times F $-convex on $B\times\mathbb{R} $. Let $b_{1},b_{2}\in B $, $\alpha\in [0,1] $, and $\gamma\in[0,1] $, then $(b_{1},f(b_{1}))\in \operatorname{epi}(f) $ and $(b_{2},f(b_{2}))\in \operatorname{epi}(f) $. Now, since $\operatorname{epi}(f)$ is geodesic strongly $E\times F $-convex on $B\times\mathbb{R} $, we obtain $( \eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma ),\gamma F(f(b_{1}))+(1-\gamma)F(f(b_{2})) ) \in \operatorname{epi}(f) $, then
$$\begin{aligned} f\bigl(\eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma)\bigr) \leq& \gamma F\bigl(f(b_{1}) \bigr)+(1-\gamma)F\bigl(f(b_{2})\bigr) \\ =& \gamma f\bigl(E(b_{1})\bigr)+(1-\gamma)f\bigl(E(b_{2}) \bigr). \end{aligned}$$
This shows that f is a GSEC function on B. □
Assume that $\lbrace B_{j}, j\in I \rbrace $ is a family of geodesic strongly $E\times F $-convex sets. Then the intersection $\bigcap_{j\in I}B_{j} $ is a geodesic strongly $E\times F $-convex set.
Assume that $(b_{1},a_{1}) ,(b_{2},a_{2})\in\bigcap_{j\in I}B_{j} $, so $\forall j\in I $, $(b_{1},a_{1}) ,(b_{2},a_{2})\in B_{j}$. Since $B_{j} $ is the geodesic strongly $E\times F $-convex sets $\forall j\in I $, we have
$$\bigl( \eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma ),\gamma F(a_{1})+(1- \gamma)F(a_{2}) \bigr)\in B_{j} , $$
$\forall\alpha\in[0,1]$ and $\gamma\in[0,1] $. Therefore,
$$\bigl( \eta_{\alpha b_{1}+E(b_{1}),\alpha b_{2}+E(b_{2})}(\gamma ),\gamma F(a_{1})+(1- \gamma)F(a_{2}) \bigr)\in\bigcap_{j\in I}B_{j}, $$
$\forall\alpha\in[0,1]$ and $\gamma\in[0,1] $. Then $\bigcap_{j\in I}B_{j} $ is a geodesic strongly $E\times F $-convex set. □
Assume that $E\colon N \rightarrow N $ and $F\colon\mathbb {R}\rightarrow\mathbb{R} $ are two maps such that $F(f(b)+a)=f(E(b))+a $ for each non-negative real number a. Suppose that $\lbrace f_{j}, j\in I \rbrace $ is a family of real-valued functions defined on a GSEC set $B\subseteq N $ which are bounded from above. If $\operatorname{epi}(f_{j}) $ are geodesic strongly $E\times F $-convex sets, then the function f which is given by $f(b)=\sup_{j\in I}f_{j}(b)$, $\forall b\in B $, is a GSEC function on B.
If each $f_{j}$, $j\in I $ is a GSEC function on a GSEC geodesic set B, then
$$\operatorname{epi}(f_{j})= \bigl\lbrace (b,a)\colon b\in B, a\in \mathbb{R}, f_{j}(b)\leq a \bigr\rbrace $$
are geodesic strongly $E\times F $-convex on $B\times\mathbb{R} $. Therefore,
$$\begin{aligned} \bigcap_{j\in I}\operatorname{epi}(f_{j}) =& \bigl\lbrace (b,a)\colon b\in B, a\in \mathbb{R}, f_{j}(b) \leq a, j\in I \bigr\rbrace \\ =& \bigl\lbrace (b,a)\colon b\in B, a\in\mathbb{R}, f(b)\leq a \bigr\rbrace \end{aligned}$$
is a geodesic strongly $E\times F $-convex set. Then, according to Theorem 4.2, we see that f is a GSEC function on B. □
Kılıçman, A, Saleh, W: A note on starshaped sets in 2-dimensional manifolds without conjugate points. J. Funct. Spaces 2014, Article ID 675735 (2014)
Boltyanski, V, Martini, H, Soltan, PS: Excursions into Combinatorial Geometry. Springer, Berlin (1997)
Danzer, L, Grünbaum, B, Klee, V: Helly's theorem and its relatives. In: Klee, V (ed.) Convexity. Proc. Sympos. Pure Math., vol. 7, pp. 101-180 (1963)
Jiménez, MA, Garzón, GR, Lizana, AR: Optimality Conditions in Vector Optimization. Bentham Science Publishers, Sharjah (2010)
Martini, H, Swanepoel, KJ: Generalized convexity notions and combinatorial geometry. Congr. Numer. 164, 65-93 (2003)
Martini, H, Swanepoel, KJ: The geometry of Minkowski spaces - a survey. Part II. Expo. Math. 22, 14-93 (2004)
Saleh, W, Kılıçman, A: On generalized s-convex functions on fractal sets. JP J. Geom. Topol. 17(1), 63-82 (2015)
Youness, EA: E-Convex sets, E-convex functions and E-convex programming. J. Optim. Theory Appl. 102, 439-450 (1999)
Abou-Tair, I, Sulaiman, WT: Inequalities via convex functions. Int. J. Math. Math. Sci. 22(3), 543-546 (1999)
Noor, MA: Fuzzy preinvex functions. Fuzzy Sets Syst. 64, 95-104 (1994)
Noor, MA, Noor, KI, Awan, MU: Generalized convexity and integral inequalities. Appl. Math. Inf. Sci. 24(8), 1384-1388 (2015)
Yang, X: On E-convex sets, E-convex functions, and E-convex programming. J. Optim. Theory Appl. 109(3), 699-704 (2001)
Chen, X: Some properties of semi-E-convex functions. J. Math. Anal. Appl. 275(1), 251-262 (2002)
Youness, EA, Emam, T: Strongly E-convex sets and strongly E-convex functions. J. Interdiscip. Math. 8(1), 107-117 (2005)
Youness, EA, Emam, T: Semi-strongly E-convex functions. J. Math. Stat. 1(1), 51-57 (2005)
Youness, EA, Emam, T: Characterization of efficient solutions for multi-objective optimization problems involving semi-strong and generalized semi-strong E-convexity. Acta Math. Sci., Ser. B Engl. Ed. 28(1), 7-16 (2008)
Rapcsak, T: Smooth Nonlinear Optimization in $\mathbb{R}^{n}$. Kluwer Academic, Dordrecht (1997)
Udriste, C: Convex Functions and Optimization Methods on Riemannian Manifolds. Kluwer Academic, Dordrecht (1994)
Iqbal, A, Ali, S, Ahmad, I: On geodesic E-convex sets, geodesic E-convex functions and E-epigraphs. J. Optim. Theory Appl. 55(1), 239-251 (2012)
Fulga, C, Preda, V: Nonlinear programming with E-preinvex and local E-preinvex function. Eur. J. Oper. Res. 192, 737-743 (2009)
Iqbal, A, Ahmad, I, Ali, S: Strong geodesic α-preinvexity and invariant α-monotonicity on Riemannian manifolds. Numer. Funct. Anal. Optim. 31, 1342-1361 (2010)
Megahed, AEMA, Gomma, HG, Youness, EA, El-Banna, AZH: Optimality conditions of E-convex programming for an E-differentiable function. J. Inequal. Appl. 2013(1), 246 (2013)
Mirzapour, F, Mirzapour, A, Meghdadi, M: Generalization of some important theorems to E-midconvex functions. Appl. Math. Lett. 24(8), 1384-1388 (2011)
Syau, YR, Lee, ES: Some properties of E-convex functions. Appl. Math. Lett. 18, 1074-1080 (2005)
Yang, XM: On E-convex programming. J. Optim. Theory Appl. 109, 699-704 (2001)
The authors would like to thank the referees for valuable suggestions and comments, which helped the authors to improve this article substantially.
Department of Mathematics, University Putra Malaysia, Serdang, Malaysia
Adem Kılıçman
& Wedad Saleh
Correspondence to Adem Kılıçman.
All authors jointly worked on deriving the results and approved the final manuscript.
https://doi.org/10.1186/s13660-015-0824-z
geodesic E-convex sets
geodesic E-convex functions
Riemannian manifolds
SimScale Documentation › Simulation Setup › Contacts
In many cases, the simulation domain doesn't only consist of a single solid body, but multiple parts. A valid simulation setup requires all relations between parts to be fully defined. In this article, we will discuss contacts and interfaces.
Contacts in Solid Mechanics
Two bodies are said to be in contact when they share at least one common boundary and the boundaries are constrained by a relation (e.g., no relative movement).
In the case of structural simulations, the multiple parts in an assembly are discretized into multiple non-conforming mesh parts, i.e. the single bodies are meshed separately by the meshing algorithm and do not share the nodes lying on their contacting entities, thus they are not connected. In order to ensure the mechanical interaction between the parts, they have to be related via contact constraints, which create the proper coupling between the degrees of freedom.
Automatic Contact Detection
In order to guarantee that the simulated domain is properly constrained, all contacts in the system will be detected automatically whenever a new CAD assembly is assigned to a simulation (this also includes simulation creation). By default, all contacts in the assembly will be created as bonded contacts, which can later be edited by the user.
Contact detection can also be triggered manually via the context menu of the contact node in the simulation tree by clicking on '+':
Figure 1: Creating a new contact, highlighting the automatic contact detection option.
While contacts are being detected, the Contacts node in the simulation tree is locked. The time required for contact detection depends on the size and complexity of the geometry and can range from a few seconds up to a few minutes. A loading indicator on the contact tree node signals that contact detection is ongoing.
Bulk Selection
Depending on the size and complexity of an assembly, the number of contacts created can become quite large. An easy way to edit multiple contacts at once is via bulk selection. The bulk selection panel exposes all contact options besides assignments to the user for editing.
Contacts can be selected in bulk via CTRL + Click and/or SHIFT + Click in the contact list or via the filter contacts by selection option in the viewer context menu. The 'Filter contacts by selection' option returns contacts based on the current selection. The following selection modes are possible:
One volume selected: All contacts that contain at least 1 face on the selected volume will be selected.
Two or more volumes selected: All contacts that contain at least one face on at least two of the selected volumes will be selected.
One or multiple faces on one body selected: All contacts that contain at least one of the selected faces will be selected.
Multiple faces across more than one volume selected: All contacts that contain at least one of the selected faces from at least two of the volumes will be selected.
Contact Types
Currently, there are four types of contact constraints available:
Bonded Contact
The bonded contact is a type of contact which allows no relative displacement between two connected solid bodies. This type of contact constraint is used to glue together different parts of an assembly.
You can assign faces or sets of faces that should be tied together via the assignment box under Pick Faces. For numerical purposes, you have to choose one of these selections as master and the other one as slave. During the calculation, the degrees of freedom of slave nodes are constrained to the master surface.
When running contact analyses, the position tolerance can be set manually or be turned off. The position tolerance defines the maximum allowed distance between any slave node and the closest point on the nearest master face. When turned on, only those slave nodes will be constrained which are within the defined range from a master face. When the tolerance is set to off, all slave nodes will be tied to the master surface unconditionally. Therefore, if a larger face is used as master, one slave node may be tied to multiple master nodes, leading to artificial stiffness in the slave surface.
Figure 2: Bonded contact setup panel, where position tolerance, master surface and slave surface can be assigned.
If a larger surface (or a surface with higher mesh density) is chosen as slave, the computation time will increase significantly and it might also result in a wrong solution, especially when no specific tolerance criterion is provided.
Sliding Contact
The sliding contact allows for displacement tangential to the contact surface but no relative movement along the normal direction. This type of contact constraint is used to simulate sliding movement in the assembly for linear simulations. The two surfaces that are in contact are classified as master and slave. Every node in slave surface (slave node) is tied to a node in the master surface (master node) by a constraint.
You can assign faces or face sets that should be tied together via the assignment boxes under Pick Faces. For numerical purposes, you have to choose one of these selections as master and the other one as slave. During the calculation, the degrees of freedom of slave nodes are constrained to the master surface while only allowing tangential movement.
Figure 3: Sliding contact setup panel.
The sliding contact is a linear constraint which is intended for planar sliding interfaces. Therefore, no large displacements and rotations are allowed in the proximity of a sliding contact. In other words, this constraint is not suitable for nonlinear simulations.
Cyclic Symmetry Contact
The cyclic symmetry constraint enables modeling only a section of a 360° cyclically periodic structure, which reduces the computation time and memory consumption considerably. Required settings include the center and axis of the cyclic symmetry as well as the sector angle. The master and slave surfaces define the cyclic periodicity boundaries.
It's required to define the axis of revolution and the sector angle explicitly. The sector angle has to be given in degrees. Available ranges for the angle are from 0° to 180° and only values that divide 360° to an integer number are valid.
The axis is defined by the axis origin and the axis direction. The definition of the Axis direction and the Sector angle have to be in accordance with the right hand rule, such that it defines a rotation that starts on the slave surface and goes to the master surface. For a graphical example, see the picture below:
Figure 4: Illustration for a cyclic symmetry condition, showing the revolution axis origin, direction, and proper slave and master surfaces according to the right-hand rule.
Figure 5: Resulting von Mises Stress on section (left) and transformed on the full 360° model (right) as viewed in Paraview.
The effect of the cyclic symmetry condition is to map the deformations of the master face onto the slave face, transforming them through the sector rotation. This creates the cyclic effect but does not constrain the body in the radial, tangential, or axial directions. Proper additional constraints must be added to prevent rigid body motions.
As all the DOFs of the slave nodes will be constrained by the cyclic symmetry connection, adding an additional constraint on those nodes could lead to an overconstrained system.
This is a linear constraint, so no large rotations or large deformations are allowed in the proximity of cyclic symmetry boundaries.
A cyclic symmetry condition is only valid if the geometry and loading conditions are symmetric around the axis of revolution.
Physical Contact
Physical (or "nonlinear") contacts enable you to calculate realistic contact interaction between two parts of the assembly. They also allow calculating the self-contact between different faces of the same part. Unlike for linear contacts, those faces are not just connected via linear relations; instead, the actual contact forces are computed. Please go to the following page for more details:
https://www.simscale.com/docs/simulation-setup/contacts/physical-contacts/
Conflict Resolution and Optimization
The two surfaces that are in contact are classified as master and slave. Every node in the slave surface (slave nodes) is tied to a node in the master surface (master node) by a constraint.
Please be aware that one face cannot be the slave assignment of multiple contact definitions simultaneously. This also applies to shared edges and nodes between surfaces of different contact definitions.
Generally, the more refined of the two periodic boundary surfaces should be chosen to be the slave. In the case of cyclic symmetry, this will, in most cases, not matter since both faces should be meshed with nearly the same element sizes.
There are some general rules that help you to decide which of the contact faces or sets to choose as master and which to choose as slave entities. Although those rules do not apply strictly in every case, they provide a good starting point. Choose as slave entities, face(s) if:
it is considerably smaller than the counterpart.
it is more curved, compared to the other part of the contact pair.
it belongs to the more flexible part, especially if the other part is constrained in displacement.
it has a considerably finer mesh than the counterpart.
Automatic contact detection tries to always find an optimized solution, therefore it is preferable to use automatic contact detection (Figure 1) instead of manually constraining the system. Conflicting contacts are marked with a warning icon in the contact list. A more detailed description of the conflict type and how to resolve it can be found on top of the contact settings panel.
Another warning in case of remaining conflicts is shown on run creation, along with an additional check to detect underconstraints in the system.
In case conflicts can not be resolved manually or by automatic contact detection, consider imprinting your CAD geometry.
Interfaces in Conjugate Heat Transfer
In a CHT analysis, an interface defines the physical behavior between the common boundaries of two regions that are in contact, e.g. solid-solid, or solid-fluid.
Note that having interfaces between two fluid regions is not possible and results in an error.
Automatic Interface Detection
When creating a new CHT simulation, all possible interfaces will automatically be detected and populated in the simulation setup tree. Interfaces will be grouped together and defined as Coupled thermal interface with No-slip velocity condition.
How To Modify Specific Interfaces?
Individual interfaces or a group of interfaces can be filtered via entity selection. Select the entities (faces or solids) for which you want to select all interfaces that exist between them. Then choose the "Filter contacts by selection" option in the viewer context menu or in the simulation setup tree.
Figure 6: Specific interfaces can be selected individually or in bulk by selecting one or multiple entities in the viewer and then using the 'Filter contacts by selection' option. In the example above, all interfaces between the processor and its heat sink can be retrieved by selecting both the heat sink and the processor entity and then using the 'Filter contacts by selection' option.
All interfaces that exist between two of the selected entities will be bulk selected and exposed in the contact tree individually.
Figure 7: All interfaces that are returned by the filter will be selected in bulk and exposed individually in the contacts tree. By customizing their settings, individual interfaces will stay exposed in the tree.
It is also possible to select only one entity before filtering, which will return all interfaces between this entity and any other entity in the model.
Interfaces which differ in settings from the standard bulk interfaces group will stay exposed individually in the simulation setup tree.
Partial Contacts
An interface is required to always be defined between two congruent surfaces, meaning that these surfaces must have the same area and overlap completely. After contact detection, the platform will also perform a check for partial contacts. If partial contacts are detected, the platform will show a warning and recommend an imprinting operation.
Figure 8: Partial contact warning after automatic contact detection in Conjugate Heat Transfer analyses.
Imprinting is a single-click operation built into SimScale, which splits existing faces into smaller ones in order to guarantee perfect overlap between contacting faces. It is recommended to perform an imprint operation in order to guarantee accurate heat transition modeling for the simulation.
By default, any detected partial contact will be defined as an adiabatic interface, and not participate in heat conduction unless specified otherwise.
Contact Detection Errors
As all possible interfaces are detected automatically, it is no longer possible to manually add an interface or to change the entity assignment for a specific interface. In case no interfaces can be detected automatically, SimScale will show an error message.
Figure 9: It is not possible to continue with the current simulation setup in case automatic contact detection fails for the currently assigned geometry. Investigate your CAD model and ensure that contacting parts are indeed in contact.
In this case, it is not possible to create a mesh or start a simulation run for this simulation. Instead, the CAD model needs to be investigated for potential errors which prevent successful contact detection. Please reach out to support via email or chat in case you encounter this issue.
The Velocity options define the fluid velocity conditions at the interface. For each interface, the momentum (velocity) profile can be set to either slip or no-slip condition. If the interface is between two solids, this option is irrelevant.
By default, the velocity profile is set to no-slip condition, which imposes a friction wall (or real wall) condition by setting the velocity components (tangential and normal) to Zero value at the interface.
$$ V_t=V_n=0 $$
The 'slip' option imposes a frictionless wall condition. In this case, the tangential velocities at the interface are adjusted according to the flow conditions, while the normal component is zero.
The Thermal options define the heat exchange conditions at the interface. The five Thermal types available for the interfaces are reported below:
Coupled
The coupled thermal interface models a perfect heat transfer across the interface. This is the default setting, in case an interface is not defined by the user.
Adiabatic
In this case, thermal energy cannot be exchanged between the domains across the interface.
Total Resistance
The Total Resistance interface allows users to model an imperfectly matching interface (e.g. due to the surface roughness) which reduces the heat exchange across it. The total resistance is defined as:
$$ R = \frac{1}{K A} = \frac{1}{\frac{\kappa}{t} A} $$
It is worth noticing that the area of the interface appears in the definition. So this option must be assigned only to the relevant face. Let's suppose that a heat exchanger is being simulated. The effect of solid sediment on the tube's wall is only known as a total resistance. A first simulation proves that heat exchange performance is insufficient. Consequently, the length of the tubes is increased. The new simulation will only be correct if the total resistance is changed according to the new area of the tubes.
Specific Conductance
This interface type is very similar to the Contact Interface material (below). It only requires users to set the specific conductance of the interface which is defined as:
$$ K = \frac{\kappa}{t} $$
with thickness t [m] and thermal conductivity κ [W/mK] between the two interface regions.
For instance, this option may be used for an interface where the layer thickness is negligible or unknown, i.e., a radiator for which the paint coating's specific conductance may be given instead of its thickness and κ.
Contact Interface Material
The contact interface material allows modelling a layer with thickness t and thermal conductivity κ between the two interface regions.
For example, it is possible to model the thermal paste between a chip and a heat sink without needing to resolve it in the geometry. The latter operation is usually a problem, considering that the thickness of these layers is two or three orders of magnitude smaller than other components in the assembly.
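The interface relations above are simple to evaluate; the snippet below merely illustrates them for a hypothetical thermal-paste layer (the conductivity, thickness, and area values are invented for illustration and are not SimScale defaults).

```python
# Hypothetical thermal interface layer between a chip and a heat sink.
kappa = 3.0      # thermal conductivity of the paste, W/(m K)  (assumed value)
t = 1.0e-4       # layer thickness, m (0.1 mm)                  (assumed value)
area = 4.0e-4    # interface area, m^2 (2 cm x 2 cm)            (assumed value)

specific_conductance = kappa / t                         # K = kappa / t, W/(m^2 K)
total_resistance = 1.0 / (specific_conductance * area)   # R = 1 / (K A), K/W

print(f"Specific conductance K = {specific_conductance:.0f} W/(m^2 K)")
print(f"Total resistance     R = {total_resistance:.4f} K/W")

# R depends on the interface area: doubling the area halves the resistance,
# which is why the Total Resistance input must be updated when the geometry changes.
print(f"R with doubled area    = {1.0 / (specific_conductance * 2.0 * area):.4f} K/W")
```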
CAD and Mesh Requirements
A CHT simulation always requires a multi-region mesh. As far as the mesh is concerned, it is fundamental that the cell size at the interface is similar between the two faces. As a rule of thumb, the cells on one face should be less than 1.5 times the size of the others. The figure below shows an example of this issue. In the left case, the cells at the interface on the inner region are too small with respect to those on the outer body. In the case on the right side, the cells on the interface are approximately the same size.
Figure 10: Left: Cell sizes at the interface do not match closely enough to ensure a robust simulation run. Right: Cell sizes are matching closely. This is the intended multi-region mesh interface for use in a CHT analysis.
Last updated: January 7th, 2021
Power imbalance induced BER performance loss under limited-feedback CoMP techniques
Beneyam B. Haile1,
Jyri Hämäläinen1 &
Zhi Ding2
Coordinated multipoint (CoMP) technology utilizes simultaneous transmission/reception from/to different access points, and it is considered an important feature for exploiting and/or mitigating intercell interference in fourth-generation mobile networks. Yet, channel power imbalance at the receiver is experienced in CoMP systems due to, e.g., spatially distributed transmissions. Traditional co-located multi-antenna systems may also experience power imbalance among antenna branches due to inaccurate antenna calibration. This paper presents a bit error rate (BER) analysis and derives asymptotic and approximate BER expressions for some practical CoMP transmission techniques under channel power imbalance. Besides the analytical results, a numerical analysis is carried out to thoroughly capture the impact of channel power imbalance on the performance gain of the CoMP methods. The results demonstrate that power imbalance considerably affects the BER performance and that, when base stations use a single antenna, applying long-term amplitude information with fast phase feedback brings insignificant benefit in compensating the detrimental effect of large channel power imbalance. In this case, exploiting both short-term amplitude and phase information is a very good choice. On the contrary, for a large number of diversity antennas in base stations, using long-term amplitude information with a sparsely quantized phase yields BER performance close to the case where full channel state information is applied.
The inconsistent quality of experience across mobile networks is an important challenge of contemporary mobile communications, the intercell interference being one of the main causes of the inconsistency [1]. Coordinated multipoint (CoMP) transmission has been recently proposed to mitigate and/or exploit intercell interference in mobile systems [2–6]. In CoMP, user data transmission is dynamically executed either from all coordinating base stations (BSs) or from one BS while scheduling/beamforming decisions are made together by coordinating BSs. CoMP transmission techniques have also been standardized for 3rd Generation Partnership Project (3GPP) long-term evolution (LTE) [7, 8].
Various limited-feedback precoding methods with different implementation requirements for practical CoMP scenarios have been studied [9–12]. Most analytical studies rely on the conventional assumption that antenna branches are homogeneous and the antenna channels seen by the receiver admit the same statistical properties. Yet, in many practical joint-transmission CoMP scenarios, mean power imbalance occurs among the signals received by a user from different coordinating BSs. The power imbalance is due to either spatially distributed transmissions in the case of inter-site CoMP or different directions of sector antenna main lobes in the case of intra-site CoMP. Besides CoMP, a similar channel power imbalance problem occurs in distributed antenna systems where transmit antenna elements are geographically distributed but connected to the same controlling BS [13]. For instance, for the 3GPP urban micro pathloss model (i.e., $L=34.53+38\log_{10}(d)$ for a distance d [14]), a mobile station (MS) located between two neighboring BS sites with an inter-site distance of ISD experiences a channel power imbalance of $\sigma_{d}=38\log_{10}\left((\mathrm{ISD}+\Delta d)/(\mathrm{ISD}-\Delta d)\right)$ dB due to the distance difference $\Delta d$ from the sites. Also, for the 3GPP three-sector antenna pattern (i.e., $A(\theta)=-12(\theta/\theta_{3\mathrm{dB}})^{2}$ for 3-dB beamwidth $\theta_{3\mathrm{dB}}$ [14]), an MS located between two neighboring co-located sectors at an angle of θ from the main lobe direction of one of the sectors experiences a channel power imbalance of $\sigma_{\theta}=288(\theta-60)/(\theta_{3\mathrm{dB}})^{2}$. An illustration of channel power imbalance values is presented in Table 1 for ISD=500 m and $\theta_{3\mathrm{dB}}=70°$. Channel power imbalance also occurs among antenna branches in conventional co-located multi-antenna systems due to imperfect antenna calibration. For example, in a third-generation BS, the difference between reference signals from two different antenna connectors must be within ±2 dB [15].
Table 1 Channel power imbalance values for a given $\Delta d$ or θ when ISD=500 m and $\theta_{3\mathrm{dB}}=70°$
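As an illustration of the magnitudes involved, the sketch below evaluates the two expressions quoted above; ISD = 500 m and $\theta_{3\mathrm{dB}}=70°$ follow the text, while the particular $\Delta d$ and θ values are arbitrary examples.

```python
import numpy as np

ISD = 500.0        # inter-site distance [m], as in the text
THETA_3DB = 70.0   # 3-dB beamwidth [deg], as in the text

def sigma_d(delta_d):
    """Power imbalance [dB] due to the distance difference between two sites."""
    return 38.0 * np.log10((ISD + delta_d) / (ISD - delta_d))

def sigma_theta(theta):
    """Power imbalance between two co-located sectors (expression quoted above)."""
    return 288.0 * (theta - 60.0) / THETA_3DB**2

for dd in (50.0, 100.0, 200.0):      # example distance offsets [m]
    print(f"delta_d = {dd:5.0f} m   ->  sigma_d     = {sigma_d(dd):5.2f} dB")
for th in (70.0, 90.0, 110.0):       # example angles from one sector boresight [deg]
    print(f"theta   = {th:5.0f} deg ->  sigma_theta = {sigma_theta(th):5.2f} dB")
```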
In [16], we presented a detailed analysis for the optimal amplitude weights, signal-to-noise ratio (SNR) gain, and average capacity under channel power imbalance in the case of selected CoMP techniques that are consistent with standardized limited-feedback methods. The paper [16] provides thorough insights on the performance loss due to power imbalance in terms of coherent combining gain. In this work, we study the impact of channel power imbalance on the bit error rate (BER) performance of the CoMP methods. To that end, we derive asymptotic and approximate BER expressions for the CoMP techniques assuming single-antenna BSs. As the analytical computations are cumbersome for the multi-antenna BSs, the channel power imbalance impact study in a more general case is made through simulation. For benchmarking purposes, we also recall BER results for transmitter selection combining and the case where full channel state information (CSI) is employed at the transmitter side. To the authors best knowledge, this BER performance analysis for the limited-feedback CoMP techniques has not been carried out in previous literature.
Our results illustrate how sensitive a limited-feedback technique that exploits long-term CSI feedback to maximize the SNR is to the channel power imbalance in the single-antenna case. We show the necessity of short-term amplitude feedback when channel power imbalance is large. Furthermore, we note that the short-term amplitude feedback is not important when a larger number of transmit antennas are employed in BSs.
In terms of organization, Section 2 provides an overview of the system model and the CoMP algorithms. In Section 3, we compute analytical expressions for the asymptotic and the approximate BERs when BSs use a single antenna. In Section 4, we verify analytical results and provide performance comparisons and simulation results when BSs apply more than one antenna. Finally, we present our conclusions in Section 5.
System model and CoMP schemes
The general system model is depicted in Fig. 1 where two groups of M antenna branches transmit identical information to a single-antenna mobile station. Antenna groups can be located either in different sites as in the case of inter-site CoMP and distributed antenna systems or in the same site as in the case of intra-site CoMP. We note that CoMP involving two groups of antennas is an important joint transmission scenario that can mitigate/exploit the most dominant interference with the least overhead and complexity.
General system model
A feedback system with low-rate CSI from a single-antenna MS is used to select transmission weights for the antenna groups and branches in Fig. 1. In this model, the received signal at a given time instant can be written as
$$ r = \sum\limits_{m~=~1}^{2} \mathbf{h}_{m} \cdot \mathbf{x}_{m} + n = \sum\limits_{m~=~1}^{2} \left[(\mathbf{h}_{m} \cdot {\mathbf{u}_{m}})w_{m}\right] s + n, $$
where \(\mathbf {x}_{m} \in \mathbb {C}^{1 \times M}\) is the transmitted vector signal from the mth BS antennas containing the information symbol s of the active user, \(\mathbf {h}_{m} \in \mathbb {C}^{1 \times M}\) is the channel gain vector of the mth group, and n refers to zero-mean complex additive white Gaussian noise with power P n. We note that x m comes from s via beamforming, where w and u m with normalized powers ∥w∥=1 and ∥u m ∥=1 represent complex weight vectors selected from given codebooks according to applied precoding techniques. The power constraint of the input signal implies that \(\sum _{m~=~1}^{2} \mathbb {E} \left \{ \mathbf {x}_{m}^{\dag } \, \mathbf {x}_{m} \right \} \le P_{\mathrm {t}}\), where P t is the total transmitted energy per channel use and (·)† denotes Hermitian transpose. When M=1, (2) is reduced to the form
$$ r = [h_{1,1}w_{1}+h_{2,1}w_{2}]s+n=\left(\mathbf{h} \cdot {\mathbf{w}} \right) s + n, $$
where h=(h 1,1,h 2,1).
We consider a flat block fading channel model where channel gains remain stationary during each block of transmitted symbols, and channel responses from temporally separate transmission blocks are independent. Furthermore, the complex channel gains $h_{m,l}$ are assumed to be independent and identically distributed zero-mean circularly symmetric complex Gaussian random variables: $h_{m,l} = \sqrt{\gamma_{m,l}} \, e^{j \psi_{m,l}}$. Hence, the branch power $\gamma_{m,l}$ follows an exponential distribution with mean $\bar{\gamma}_{m}=\mathbb{E} \{ \gamma_{m,l} \}$, and the branch phase $\psi_{m,l}$ is uniformly distributed on (−π,π). The spatially uncorrelated channel assumption is adopted because antennas of different groups are spatially well separated, while within the same group both polarization and spatial separation of antennas can be exploited effectively, especially in urban environments [17]. Of course, when the number of antennas becomes large, it will be increasingly difficult to obtain a configuration where the mutual correlation between antennas is small. We assume negligible power imbalance between antenna branches within the same group, since they are co-located, oriented in the same fashion, and inexpensive calibration can be applied. We also assume that the MS has perfect CSI and provides both short-term and long-term feedback to the distributed antenna transmitter. The short-term quantized CSI is available at the transmitter side without errors or latency, while perfect time synchronization is assumed between transmission points such that coherent combining is possible. The long-term channel statistics are perfectly known at the transmitter side.
CoMP algorithms
This work investigates three joint processing CoMP algorithms. The algorithms are briefly defined to choose the best weight \(\widehat {\mathbf {w}}\) when M=1. If M>1, the definitions are used by replacing h m,1 with the signal \(\mathbf {h}_{m} \cdot \widehat {\mathbf {u}}_{m}\) where \(\widehat {\mathbf {u}}_{m}\) is the best weight vector selected for the mth antenna group [16].
Transmitter selection combining (TSC)
This is a simple classical method where the antenna branch providing the largest signal power is selected. Selection of better weight \(\widehat {\mathbf {w}}\) is made according to \(|\mathbf {h} \cdot \widehat {\mathbf {w}}| = \max \{|h_{m,1}|:\, m=1,2\}\). Only a 1-bit feedback overhead is required for this method which makes the algorithm attractive from an implementation perspective although its performance is inferior when compared to the more sophisticated CoMP algorithms, and it is also very sensitive to errors in feedback signaling [18]. TSC is included in the 3GPP CoMP category under the name dynamic point selection [7].
Quantized co-phasing (QCP)
In this CoMP scheme, MS reports the quantized relative phase of the channel gains. Thus, when using $N_{w}$-bit feedback, we have
$$ \left| \mathbf{h} \cdot \widehat{\mathbf{w}} \right| ~=~ \max \left\{\left|h_{1,1}v_{1}+h_{2,1}v_{2}e^{j\phi_{n}}\right| : 1\leq n\leq2^{N_{w}}\right\}, $$
where \(\phi _{n}=\pi n/2^{N_{w}-1}\phantom {\dot {i}\!}\) and v 1,v 2 refer to selected transmit weights that determine the ratio of transmit power in each antenna group/branch. We select either equal weights (\(v_{1}=v_{2}=\sqrt {1/2}\)) or long-term weights maximizing SNR based on the long-term CSI feedback [16]. If N w =2 and there is no mean power imbalance between channels, then (3) resembles closed-loop transmit-diversity that is applied in 3GPP high-speed downlink packet access and LTE [19, 20].
Ordered quantized co-phasing (OQCP)
This algorithm is a natural extension of QCP formed by using short-term order information of the channel amplitudes in addition to the phase difference. In OQCP, the receiver first ranks the instantaneous SNRs, $|h_{(1)}|= \max\{|h_{1,1}|,|h_{2,1}|\}$ and $|h_{(2)}|= \min\{|h_{1,1}|,|h_{2,1}|\}$, before deciding the phase feedback using criterion (3). Both order and phase difference information are signaled to the transmitter using $N_{w}+1$ feedback bits. After precoding, the received signal is of the form
$$ \left|\mathbf{h} \cdot \widehat{\mathbf{w}} \right| = |h_{(1)}v_{1}+h_{(2)}v_{2}e^{j\hat{\phi}_{n}}|, $$
where \(\hat {\phi }_{n}\) refers to the best phase and the normalized amplitude weights are selected based on the channel order statistics. In the case of Rayleigh fading under channel power balance, it has been shown that \({v_{1}^{2}}=\left (1+\left (1+(\pi \rho /2)^{2}\right)^{-1/2}\right)/2\), \(\rho =\text {sinc} (\pi /2^{N_{w}})\) and \({v_{2}^{2}}=1-{v_{1}^{2}}\) [21].
If M>1, the best weight vector \(\widehat {\mathbf {u}}_{m}\) is selected according to QCP. We take the first antenna branch of the mth group as a reference, and the best phase for each remaining antenna branch is chosen and reported using N u feedback bits.
We note that although amplitude weights maximizing expected SNR are presented in [16] for both QCP and OQCP, the following BER analysis is valid for any transmit amplitude weights.
Bit error rate analysis
In the following, we characterize the impact of power imbalance on the average BER—denoted by P e —for M=1. We start the analysis by recalling that the average BER can be computed from the formula
$$ {P}_{e} = \int_{0}^{\infty} P_{\text{mod}}(z)f(z) dz, $$
where \(z=|\mathbf {h} \cdot \widehat {\mathbf {w}}|^{2}\) represents the instantaneous SNR when feedback algorithm is used and P mod(·) is the error rate of the applied modulation scheme. For simplicity, we consider here only the BPSK modulation: \(P_{\text {mod}} \left (z \right) = 1/2\cdot \text {erfc}(\sqrt {z})\). It is also known that the symbol/bit error rates for the higher order modulation methods like M-QAM can be approximated or even in some cases expressed exactly by using the very same complementary error function \(\text {erfc}(\sqrt {z})\), after some simple scaling. Therefore, BER results for many modulation methods over the fading channel are easily obtained once BER has been computed for the BPSK modulation.
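The averaging in (5) can also be approximated by Monte Carlo simulation. The sketch below (an illustrative example under stated assumptions, not the paper's simulator) draws Rayleigh-fading branch gains with a given mean power imbalance, forms the instantaneous SNR of TSC, and averages the BPSK error probability \(1/2\cdot \text {erfc}(\sqrt {z})\); the SNR and imbalance values are placeholders.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)

def mc_ber_tsc(gamma1_dB, sigma0_dB, n=200_000):
    """Monte Carlo estimate of (5) for TSC: z = max(|h1|^2, |h2|^2), BPSK."""
    g1 = 10**(gamma1_dB/10)
    g2 = g1 * 10**(sigma0_dB/10)          # sigma0 = gamma2/gamma1 (power imbalance)
    h1 = np.sqrt(g1/2) * (rng.standard_normal(n) + 1j*rng.standard_normal(n))
    h2 = np.sqrt(g2/2) * (rng.standard_normal(n) + 1j*rng.standard_normal(n))
    z = np.maximum(np.abs(h1)**2, np.abs(h2)**2)     # instantaneous SNR after selection
    return np.mean(0.5*erfc(np.sqrt(z)))             # P_mod(z) averaged over the fading

print(mc_ber_tsc(gamma1_dB=10, sigma0_dB=-6))
```

The same loop applies to QCP or OQCP by replacing the selection rule used to form z with the corresponding combining rule.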
Closed-form expression for BER can be derived so long as the distribution f(z) is known and (5) is analytically integrable. This is the case if perfect CSI is available or TSC is applied. On the other hand, for QCP and OQCP algorithms, the distribution f(z) is difficult to obtain but we compute for both algorithms the asymptotic BER in closed form. We note that asymptotic analysis has been previously presented in [22] for the special case \(\bar {\gamma }_{1} = \bar {\gamma }_{2}\). Then we formulate approximate BER expressions that can be used in the low-to-moderate SNR region based on the asymptotic BER expressions.
Although mean SNRs are not equal, we can assume that \(\bar {\gamma }_{1}\) and \(\bar {\gamma }_{2}\) grow at the same rate in the asymptotic SNR region. Then \(\bar {\gamma }_{1}\bar {\gamma }_{2}=\sigma _{0}\bar {\gamma }_{1}^{2}\), where σ 0 denotes the channel power imbalance between the antenna branches. Now the asymptotic BER can be written in the form
$$ \log_{10}P_{e}(\bar{\gamma}_{1},\bar{\gamma}_{2})\approx \mathcal{E}(\sigma_{0})-d\cdot\log_{10}\bar{\gamma}_{1}, \qquad \bar{\gamma}_{1} \gg 1, $$
where the slope d is the diversity gain. To validate the formula (6), we show that diversity gains of the investigated methods are equal to two despite the mean power imbalance. Furthermore, we deduce closed-form expressions for the constant \(\mathcal {E}(\sigma _{0})\) in the case of the CoMP methods of Section 2.
Asymptotic BERs for reference methods
If perfect CSI is available in the transmitter, then the distribution of the received SNR is of the form \(f(z)=\left (e^{-z/\bar {\gamma }_{1}}-e^{-z/\bar {\gamma }_{2}}\right)/\left (\bar {\gamma }_{1}-\bar {\gamma }_{2}\right)\) [23] and the computation of BER can be carried out using (5). The result is well known, and with the aid of the notation σ 0, we can express it as follows [24]:
$$\begin{array}{*{20}l} &P_{e}= \\ &\frac12\left[\!1-\frac1{1-\sigma_{0}}\sqrt{\frac{\bar{\gamma}_{1}}{1+\bar{\gamma}_{1}}}\left(1-\sigma_{0}\sqrt{\frac{\sigma_{0}(1+\bar{\gamma}_{1})}{1+ \sigma_{0}\bar{\gamma}_{1}}}\right)\right]. \end{array} $$
Furthermore, after utilizing Taylor series expansion on the square roots, we obtain the asymptotic formula
$$ {\lim}_{\bar{\gamma}_{1}\rightarrow\infty} \bar{\gamma}_{1}^{2}\cdot P_{e}={3}/{(16\cdot\sigma_{0})}. $$
When comparing this formula with (6), we find that d=2 and \(\mathcal {E}(\sigma _{0})=\log _{10}(3/(16\sigma _{0}))\).
The TSC has been well examined in the literature [25, 26]. Using the adopted notations, the BER of TSC can be written in the form
$$\begin{array}{*{20}l} &P_{e}= \\ &\frac{1}{2}\left[1-\sqrt{\frac{\bar{\gamma}_{1}}{ 1 + \bar{\gamma}_{1}}}-\sqrt{\frac{\bar{\gamma}_{1}\sigma_{0}}{1 + \bar{\gamma}_{1}\sigma_{0}}} + \sqrt{\frac{\bar{\gamma}_{1}\sigma_{0}}{1+\sigma_{0}(1+\bar{\gamma}_{1})}} \right]. \end{array} $$
After applying the Taylor series expansion on the square roots, we obtain
$$ {\lim}_{\bar{\gamma}_{1}\rightarrow\infty}\bar{\gamma}_{1}^{2}\cdot P_{e} =3/(8\cdot\sigma_{0}). $$
Comparing (10) with (6), we find now that d=2 and \(\mathcal {E}(\sigma _{0})=\log _{10}(3/(8\sigma _{0}))\).
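As a simple numerical cross-check (a sketch, not part of the original analysis), the closed forms (7) and (9) can be evaluated at increasing \(\bar {\gamma }_{1}\) to confirm that \(\bar {\gamma }_{1}^{2}P_{e}\) approaches the limits 3/(16σ 0) and 3/(8σ 0); the 6 dB imbalance value is illustrative.

```python
import numpy as np

def pe_full_csi(g1, s0):
    """BER with full CSI, Eq. (7)."""
    return 0.5*(1 - 1/(1-s0)*np.sqrt(g1/(1+g1))
                * (1 - s0*np.sqrt(s0*(1+g1)/(1+s0*g1))))

def pe_tsc(g1, s0):
    """BER of TSC, Eq. (9)."""
    return 0.5*(1 - np.sqrt(g1/(1+g1)) - np.sqrt(g1*s0/(1+g1*s0))
                + np.sqrt(g1*s0/(1+s0*(1+g1))))

s0 = 10**(-6/10)                         # 6 dB power imbalance (illustrative)
for g1_dB in (20, 30, 40, 50):
    g1 = 10**(g1_dB/10)
    print(g1_dB, g1**2*pe_full_csi(g1, s0), g1**2*pe_tsc(g1, s0))
print("limits:", 3/(16*s0), 3/(8*s0))    # values printed above approach these limits
```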
Asymptotic BER for QCP
For the computation of BER, we write the formula (5) as
$$\begin{array}{*{20}l} P_{e}= \int_{\mathbb{R}^{3}} &P_{\text{mod}} \left(z(\gamma_{1,1}, \gamma_{2,1}, \varphi) \right)f(\varphi,\gamma_{1,1}, \gamma_{2,1}) \\ & d\gamma_{1,1} d\gamma_{2,1} d\varphi, \end{array} $$
where \(\varphi =\psi _{2,1}-\psi _{1,1}+\hat {\phi }_{n}\). Since random variables φ, γ 1,1, and γ 2,1 are independent, we have f(φ,γ 1,1,γ 2,1)=U N (φ)f(γ 1,1)f(γ 2,1), where U N refers to the uniform distribution in the interval \(\left (-\pi /2^{N_{w}},\pi /2^{N_{w}}\right)\) and \(f(\gamma _{m,1})=e^{-\gamma _{m,1}/\bar {\gamma }_{m}}/\bar {\gamma }_{m}\), m=1,2. Let us substitute γ 1,1=y and γ 2,1=ty. Then \(z= y\, | v_{1} + v_{2} \sqrt {t}\, e^{j\varphi }|^{2}\), and we can write the BER of QCP in the form
$$ P_{e}= \frac{2^{N_{w}}}{\pi}\int_{0}^{\pi/2^{N_{w}}}\int_{0}^{\infty} \frac{\mathcal{I}(t,\varphi) \, dt \, d\varphi}{\bar{\gamma}_{1}\bar{\gamma}_{2}\vert v_{1}+v_{2} \sqrt{t} e^{j \varphi} \vert^{4}}, $$
where for \(c(t,\varphi) = \left (1/\bar {\gamma }_{1}+t/\bar {\gamma }_{2}\right)\big /|v_{1} + v_{2} \sqrt {t} e^{j \varphi }|^{2},\)
$$\begin{array}{*{20}l} \mathcal{I}(t,\varphi)=\frac{1}{2} \int_{0}^{\infty} \text{erfc}(\sqrt{\eta})\eta e^{-c(t,\varphi)\eta} \, d\eta. \end{array} $$
In Appendix 1, we have shown that \({\lim }_{\bar {\gamma }_{1}\rightarrow \infty }\mathcal {I}(t,\varphi)=3/16\). Using this result and (12), we obtain
$$ {\lim}_{\bar{\gamma}_{1}\rightarrow\infty}\bar{\gamma}_{1}^{2}\cdot P_{e} =3/(16\sigma_{0})\cdot A_{N_{w}}(v_{1},v_{2}), $$
$$ A_{N_{w}}(v_{1},v_{2})=\frac{2^{N_{w}}}{\pi}\int_{0}^{\pi/2^{N_{w}}}\int_{0}^{\infty} \frac{dt \, d\varphi}{|v_{1}+v_{2} \sqrt{t} e^{j \varphi}|^{4}}. $$
Furthermore, utilizing equality (3.252.4) from [27], we achieve a closed-form expression
$$\begin{array}{*{20}l} &A_{N_{w}}(v_{1},v_{2})=\\ &\frac{1}{2{v_{1}^{2}} {v_{2}^{2}}} \left[ \csc^{2}\left(\frac{\pi}{2^{N_{w}}}\right)- \!\!\frac{2^{N_{w}}}{\pi} \cot\left(\frac{\pi}{2^{N_{w}}} \right)\right]. \end{array} $$
Let us compare (14) and (16) with the asymptotic BER in the case of full CSI in (8). We conclude that the degradation of asymptotic BER due to quantized channel information is characterized by a constant which depends on the number of phase bits and long-term transmit weights v 1 and v 2. Furthermore, by comparing (14) with (6), we find that d=2 and \(\mathcal {E}(\sigma _{0})=\log _{10}\left (3/(16\sigma _{0})\cdot A_{N_{w}}(v_{1},v_{2}) \right)\). Moreover, optimal asymptotic BER is achieved when \(v_{1}=v_{2}=\sqrt {1/2}\) irrespective of the value of σ 0 as can be deduced from (14) and (16).
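The constant \(A_{N_{w}}(v_{1},v_{2})\) can be cross-checked numerically against its integral definition (15). The following sketch (illustrative only) compares the closed form (16) with direct numerical integration for equal transmit weights.

```python
import numpy as np
from scipy import integrate

def A_closed(Nw, v1, v2):
    """Closed form of A_Nw, Eq. (16)."""
    p = np.pi / 2**Nw
    return (1.0/(2*v1**2*v2**2)) * (1/np.sin(p)**2 - (2**Nw/np.pi)/np.tan(p))

def A_numeric(Nw, v1, v2):
    """Direct numerical evaluation of the double integral in Eq. (15)."""
    Phi = np.pi / 2**Nw
    inner = lambda phi: integrate.quad(
        lambda t: 1.0/(v1**2 + v2**2*t + 2*v1*v2*np.sqrt(t)*np.cos(phi))**2,
        0, np.inf)[0]
    val, _ = integrate.quad(inner, 0, Phi)
    return (2**Nw/np.pi) * val

v1 = v2 = np.sqrt(0.5)
for Nw in (1, 2, 3):
    print(Nw, A_closed(Nw, v1, v2), A_numeric(Nw, v1, v2))
```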
Asymptotic BER for OQCP
In this case, the average BER is of the form
$$\begin{array}{*{20}l} P_{e}= \int_{\mathbb{R}^{3}} & P_{\text{mod}}\left(z(\gamma_{(1)},\gamma_{(2)},\varphi) \right) f(\varphi,\gamma_{(1)}, \gamma_{(2)}) \\ & d\gamma_{(1)} d\gamma_{(2)} d\varphi, \end{array} $$
where f(φ,γ (1),γ (2))=U N (φ)f(γ (1),γ (2)); channel gains γ (1), γ (2) are ordered; and f(γ (1),γ (2)) is the corresponding joint PDF. In Appendix 2, we have shown that (17) reduces to
$$ P_{e} = \frac{2^{N_{w}}}{ \pi} \int_{0}^{\pi/2^{N_{w}}}{\int_{0}^{1}} \frac{\left(\mathcal{I}_{1}(t,\varphi) + \mathcal{I}_{2}(t,\varphi) \right) dt \, d\varphi}{\bar{\gamma}_{1}\bar{\gamma}_{2} \, \left| v_{1} + v_{2} \, \sqrt{t} \, e^{j \varphi} \right|^{4}}. $$
Similarly to (13), the notations \(\mathcal {I}_{1}(t,\varphi)\) and \(\mathcal {I}_{2}(t,\varphi)\) refer to definite integrals for which \({\lim }_{\bar {\gamma }_{1}\rightarrow \infty }\mathcal {I}_{1}(t,\varphi)={\lim }_{\bar {\gamma }_{1}\rightarrow \infty }\mathcal {I}_{2}(t,\varphi) = 3/16, \) and we obtain
$$ {\lim}_{\bar{\gamma}_{1}\rightarrow\infty}\bar{\gamma}_{1}^{2}\cdot P_{e} = 3/(8\sigma_{0})\cdot B_{Nw}(v_{1},v_{2}), $$
$$ B_{Nw}(v_{1},v_{2}) = \frac{2^{N_{w}}}{\pi}\int_{0}^{\pi/2^{N_{w}}}{\int_{0}^{1}} \frac{dt d\varphi}{\left| v_{1} + v_{2} \, \sqrt{t} \, e^{j \varphi} \right|^{4}}. $$
By changing variables, we find as expected that B Nw (v 1,v 2)=A Nw (v 1,v 2)/2 when v 1=v 2, irrespective of power imbalance. Let us now compute B Nw (v 1,v 2) when v 1≠v 2. Exploiting the Taylor expansion of (1+x)−2, we can write B Nw (v 1,v 2) in the form
$$\begin{array}{*{20}l} B_{Nw}&\!\!~=~\!\!\frac{2^{N_{w}}}{\pi}\int_{0}^{\pi/2^{N_{w}}} \!\!\! {\int_{0}^{1}} \frac{1}{\left(1+\frac{2v_{1}v_{2}\sqrt{t}\cos \varphi}{{v_{1}^{2}}+{v_{2}^{2}} t}\right)^{2}} \frac{dt d\varphi}{({v_{1}^{2}}+{v_{2}^{2}} t)^{2}} \\ &=\sum\limits_{n~=~0}^{\infty} \!\!\frac{2^{N_{w}} (-1)^{n} (n+1)}{\pi} \!\!\int_{0}^{\pi/2^{N_{w}}} \!\!\!\!\!\cos^{n} (\varphi) d\varphi \\ &\quad{\int_{0}^{1}} \frac{\left(2v_{1}v_{2}\sqrt{t}\right)^{n}}{\left({v_{1}^{2}}+{v_{2}^{2}}t\right)^{n+2}} dt. \end{array} $$
We solve the remaining integrals using equations (2.513.3), (2.513.4), and (3.194.1) of [27]. Then B Nw (v 1,v 2) becomes
$$\begin{array}{*{20}l} &B_{Nw}=\!\!\sum\limits_{n=0}^{\infty}\! \frac{2^{(N_{w}+1)}(-2)^{n} (n+1) {v_{2}^{n}}}{\pi(n+2) v_{1}^{n+4}} \left[ A_{n}+ \frac{1}{2^{n-1}}\sum\limits_{k~=~0}^{B_{n}}\right. \\ &\left. \binom{n}{k}\!\frac{\sin \left(\frac{(n-2k)\pi}{2^{N_{w}}}\right)}{n-2k}\!\right]\!\!~_{2}F_{1}\left(n+2, \frac{n}{2}+1; \frac{n}{2}+2;\! \frac{-{v_{2}^{2}}}{{v_{1}^{2}}}\right), \end{array} $$
where 2 F 1 is the Gauss hypergeometric function, \(A_{n}=\binom {n}{n/2} \pi /2^{N_{w}+n}\) and B n =n/2−1 for even n, while A n =0 and B n =(n−1)/2 for odd n. We see from (19) and (6) that d=2 and \(\mathcal {E}(\sigma _{0})=\log _{10} \left (3/(8\sigma _{0})\cdot B_{Nw}(v_{1},v_{2})\right)\) for OQCP.
Before closing the asymptotic BER analysis, Table 2 summarizes the results for \(\mathcal {E}(\sigma _{0})\) obtained for the CoMP techniques.
Table 2 Achieved results for \(\mathcal {E} (\sigma _{0})\)
Approximation for the BER of QCP and OQCP
Let us now formulate a BER expression for the low-to-moderate SNR region for both QCP and OQCP based on their asymptotic BER and expected SNR expressions. We approximate the SNR distribution f z of QCP and OQCP by the distribution \(f_{\tilde {z}}\) of the variable \(\tilde {z}=\xi _{1}+\xi _{2}\), where ξ m , m=1,2 follow the exponential distribution \(f_{m}(\xi)=\exp (-\xi /\bar {\xi }_{m})/\bar {\xi }_{m}\). Here the means \(\bar {\xi }_{1}\) and \(\bar {\xi }_{2}\) are selected such that the following two requirements hold:
(i) The first moments of \(\tilde {z}\) and z are equal, i.e., \(E\{\tilde {z}\}=E\{z\}\).
(ii) The asymptotic BERs, computed using \(f_{\tilde {z}}\) and f Z , are equal, i.e., following the results (14) and (19), we require
$$ {\lim}_{\bar{\xi}_{1}\rightarrow\infty}\bar{\xi}_{1}^{2}\tilde{P}_{e}={\lim}_{\bar{\gamma}_{1}\rightarrow\infty}\bar{\gamma}_{1}^{2}P_{e}, $$
where \(\tilde {P}_{e}=\int _{0}^{\infty } P_{\text {mod}}(z)f_{\tilde {z}}(z)dz\) refers to the BER approximation and \(P_{e}=\int _{0}^{\infty } P_{\text {mod}}(z)f_{z}(z)dz\) is the BER of QCP/OQCP, and it is assumed that ratios \(\bar {\xi }_{1}/\bar {\xi }_{2}\) and \(\bar {\gamma }_{1}/\bar {\gamma }_{2}\) are both fixed and larger than one. Requirement (i) using (50) and (54) in [16] and \(E\{\tilde {z}\}=\bar {\xi }_{1}+\bar {\xi }_{2}\) leads to formula
$$\begin{array}{*{20}l} \bar{\xi}_{1}+\bar{\xi}_{2}=\bar{\gamma}_{1}\mathcal{L}, \end{array} $$
$$\begin{array}{*{20}l} \mathcal{L}=\frac{1}{2}\left[1+\sigma_{0}+\sqrt{(1-\sigma_{0})^{2}+\frac{\sigma_{0} \pi^{2}}{4}\text{sinc}\left(\frac{\pi}{2^{N_{w}}}\right)}\right], \end{array} $$
for QCP and
$$\begin{array}{*{20}l} \mathcal{L}=\frac{1}{2}\left[1+\sigma_{0}+\sqrt{\left(\frac{1+{\sigma_{0}^{2}}}{1+\sigma_{0}}\right)^{2}+\frac{\sigma_{0} \pi^{2}}{4}\text{sinc}\left(\frac{\pi}{2^{N_{w}}}\right)}\right], \end{array} $$
for OQCP. Furthermore, requirement (ii) using the asymptotic BER formulas in (8), (14), and (19) provides
$$\begin{array}{*{20}l} \bar{\xi}_{1}/\bar{\xi}_{2}=\bar{\gamma}_{1}/\bar{\gamma}_{2}\cdot C_{N_{w}}(v_{1},v_{2}), \end{array} $$
where \(C_{N_{w}}(v_{1},v_{2})\,=\,A_{N_{w}}(v_{1},\! v_{2})\) for QCP and \(C_{N_{w}}(v_{1},v_{2})\,=\, 2B_{N_{w}}(v_{1}, v_{2})\) for OQCP. After combining (23) and (26), we find that
$$ \bar{\xi}_{1}=\frac{C_{N_{w}}\cdot\mathcal{L}\cdot\bar{\gamma}_{1}}{\sigma_{0}+C_{N_{w}}},\qquad \bar{\xi}_{2}=\frac{\sigma_{0}\cdot\mathcal{L}\cdot\bar{\gamma}_{1}}{\sigma_{0}+C_{N_{w}}}, $$
where we have shortened notations by dropping out arguments of \(C_{N_{w}}\). Then the BER approximation \(\tilde {P}_{e}\) for QCP/OQCP is obtained by combining (27) and (7). After some elementary manipulations, we find that
$$\begin{array}{*{20}l} \tilde{P}_{e}=&\frac12\Bigg[1-\frac1{1-\sigma_{0}/C_{N_{w}}}\sqrt{\frac{\mathcal{L}\bar{\gamma}_{1}}{1+\sigma_{0}/C_{N_{w}}+\mathcal{L}\bar{\gamma}_{1}}} \\ &\left(1-\frac{\sigma_{0}}{C_{N_{w}}}\sqrt{\frac{1+\sigma_{0}/C_{N_{w}}+\mathcal{L}\bar{\gamma}_{1}}{1+C_{N_{w}}/\sigma_{0}+\mathcal{L}\bar{\gamma}_{1}}}\right)\Bigg]. \end{array} $$
We note that this formula is valid if \(\sigma _{0}<C_{N_{w}}\). Yet, this is not a limitation since \(A_{N_{w}}\) and \(2B_{N_{w}}\) are both larger than one. To the best of our knowledge, this approximation has previously been used only in [22].
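A minimal sketch of the resulting approximation for QCP with equal weights is given below. It evaluates (28) with \(C_{N_{w}}=A_{N_{w}}\) from (16) and \(\mathcal {L}\) from (24); sinc is taken here as the unnormalized sinc function, sin(x)/x, and the SNR and imbalance values are illustrative assumptions only.

```python
import numpy as np

def sinc_un(x):                      # unnormalized sinc: sin(x)/x
    return np.sin(x)/x

def A_Nw(Nw, v1, v2):                # Eq. (16)
    p = np.pi/2**Nw
    return (1/(2*v1**2*v2**2))*(1/np.sin(p)**2 - (2**Nw/np.pi)/np.tan(p))

def ber_approx_qcp(g1, s0, Nw, v1=np.sqrt(0.5), v2=np.sqrt(0.5)):
    """Approximate BER of QCP via Eqs. (24), (26)-(28); illustrative sketch."""
    C = A_Nw(Nw, v1, v2)
    L = 0.5*(1 + s0 + np.sqrt((1-s0)**2 + s0*np.pi**2/4*sinc_un(np.pi/2**Nw)))
    r = s0/C                          # requires sigma0 < C_Nw
    return 0.5*(1 - 1/(1-r)*np.sqrt(L*g1/(1+r+L*g1))
                * (1 - r*np.sqrt((1+r+L*g1)/(1+1/r+L*g1))))

s0 = 10**(-3/10)                     # 3 dB imbalance (illustrative)
for g1_dB in (0, 5, 10, 15, 20):
    print(g1_dB, ber_approx_qcp(10**(g1_dB/10), s0, Nw=3))
```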
Validation and performance evaluations
Validation and performance results for M=1
The asymptotic and approximate BER expressions presented in Section 3 are validated in Fig. 2. Markers refer to the simulated BER while dashed and solid curves refer to the analytical asymptotic and the approximate BERs, respectively. Results are presented for power balance and 6-dB imbalance cases assuming N w =3 for both QCP and OQCP. For OQCP, we use the SNR maximizing long-term weights presented in [16]:
$$\begin{array}{*{20}l} v_{1,2}^{2} = \frac{1}{2} \left[ 1 \pm \frac{1 + \sigma_{0} - \frac{2 \sigma_{0}}{1+\sigma_{0}}}{\sqrt{\left(1 + \sigma_{0} - \frac{2 \sigma_{0}}{1+\sigma_{0}} \right)^{2} + \frac{\pi^{2}\sigma_{0}}4\text{sinc}^{2}\left(\frac{\pi}{2^{N_{w}}}\right)}} \right], \end{array} $$
and for QCP we set \(v_{1}=v_{2}=\sqrt {1/2}\). As expected, we see that the CoMP techniques are negatively impacted when the power imbalance increases from 0 to 6 dB.
Fig. 2 Bit error rate as a function of \(\bar {\gamma }_{1}\) when σ 0=0 dB and σ 0=6 dB. Dashed curves refer to the analytical asymptotic BER results; solid curves refer to the approximate BER results; and markers refer to the simulated BER results
The impact of channel power imbalance on the BER of the studied schemes is shown in Figs. 3 and 4 where average BER results are presented as a function of power imbalance assuming \(\bar {\gamma }_{1}=10\) dB and \(\bar {\gamma }_{1}=15\) dB, respectively. The results are obtained for N w =3, and note that QCP is plotted with equal weights (diamond marked) and SNR maximizing weights (circle marked) that are formulated in [16]:
$$\begin{array}{*{20}l} v_{1,2}^{2} \!\!~=~\!\!\frac{1}{2} \left[ 1 \pm \frac{1-\sigma_{0}}{\sqrt{\left(1 - \sigma_{0} \right)^{2} +\frac{\pi^{2}\sigma_{0}}4\text{sinc}^{2}\left(\frac{\pi}{2^{N_{w}}}\right) }} \right]. \end{array} $$
Fig. 3 Analytical BER as a function of σ 0 for QCP, OQCP, TSC, and full CSI schemes when \(\bar {\gamma }_{1}=10\) dB
We observe from both figures that OQCP performs close to the case where full CSI is applied irrespective of the value of power imbalance. Interestingly, we also see that TSC outperforms QCP applying the SNR maximizing long-term weights when there is large power imbalance. As can be seen from Figs. 3 and 4, TSC outperforms the QCP after a power imbalance value of around −6 and −5 dB when \(\bar {\gamma }_{1}=10\) dB and \(\bar {\gamma }_{1}=15\) dB, respectively. On the other hand, QCP applying the asymptotic BER minimizing weights (v 1=v 2) performs close to OQCP at a large imbalance.
Numerical results for M>1
To observe the impact of using more transmit antennas at the BSs, we present simulated BER results in Fig. 5 for the case where two antennas are applied in each BS. The results are obtained for \(\bar {\gamma }_{1}=10\) dB, N w =3, and N u =2. We see from the figure that, unlike the M=1 case, QCP applying SNR-maximizing long-term weights outperforms TSC throughout the power imbalance range. This comes from the diversity gain achieved from the multiple antennas utilized in the BSs. Further performance improvement is achieved by QCP when more antennas are employed at the BSs, as illustrated in Fig. 6 where we depict the BER results for M=4, \(\bar {\gamma }_{1}=3\) dB, N w =3, and N u =2. In this case, QCP applying SNR-maximizing weights performs very close to OQCP, particularly for large power imbalance.
Fig. 5 Simulated BER as a function of σ 0 for QCP, OQCP, TSC, and full CSI schemes when M=2 and \(\bar {\gamma }_{1}=10\) dB
Fig. 6 Simulated BER as a function of σ 0 for QCP, OQCP, TSC, and full CSI schemes when M=4 and \(\bar {\gamma }_{1}=3\) dB
We studied the impact of mean channel power imbalance on CoMP transmission. Specifically, we investigated the CoMP techniques QCP and OQCP, where the former scheme applies quantized channel phase feedback while the latter additionally applies order information at the transmitter side. We derived closed-form expressions for the asymptotic and approximate BERs and verified the analytical results using numerical simulations when base stations apply a single transmit antenna. For a complete understanding of channel power imbalance impacts, we also presented a numerical analysis for cases where base stations employ more than one transmit antenna. Even with few feedback bits and in the presence of channel power imbalance, OQCP provides performance very close to that achieved with full CSI. In contrast to the average capacity performance presented in [16], using SNR-maximizing long-term amplitude weights for QCP under power imbalance worsens the BER performance when a single transmit antenna is utilized in each BS, due to the limited diversity gain. This is not the case when BSs apply more than one diversity antenna. In future work, the analysis will be extended to general M and other modulation schemes.
We show here that \({\lim }_{c \to 0} \frac {1}{2} \int _{0}^{\infty } \eta e^{-c \eta } \text {erfc}(\sqrt {\eta })d\eta =3/16.\) Utilizing equations (7.1.2) and (7.4.19) in [28], the integral can be written as
$$\begin{array}{*{20}l} &\int_{0}^{\infty} \eta e^{-c \eta} \text{erfc}(\sqrt{\eta}) d\eta= \\ &-\frac{d}{dc} \int_{0}^{\infty} e^{-c\eta}\text{erfc}(\sqrt{\eta}) d\eta =-\frac{d}{dc} \left[ \frac{1}{c} \left(1-\sqrt{\frac{1}{1+c}}\right)\right]. \end{array} $$
Using Taylor expansion of (1+c)−1/2, we obtain
$$\begin{array}{*{20}l} \frac{1}{c} \left(1-\sqrt{\frac{1}{1+c}}\right)=\sum\limits_{k=0}^{\infty} \frac{(-1)^{k} (2k+1)!}{2^{2k+1} k! (k+1)!} c^{k} = \sum\limits_{k=0}^{\infty} D_{k} c^{k}, \end{array} $$
where \(D_{k}=\frac {(-1)^{k} (2k+1)!}{2^{2k+1} k! (k+1)!}\). Combining the last two formulas yields the desired result.
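The limit can also be checked numerically (an illustrative sketch, not part of the appendix): evaluating \(\frac {1}{2} \int _{0}^{\infty } \eta e^{-c \eta } \text {erfc}(\sqrt {\eta })d\eta\) for decreasing c shows convergence towards 3/16.

```python
import numpy as np
from scipy import integrate, special

def I_of_c(c):
    """(1/2) * integral_0^inf eta*exp(-c*eta)*erfc(sqrt(eta)) d(eta)."""
    f = lambda e: 0.5 * e * np.exp(-c*e) * special.erfc(np.sqrt(e))
    return integrate.quad(f, 0, np.inf)[0]

for c in (1.0, 0.1, 0.01, 0.001):
    print(c, I_of_c(c))
print("limit 3/16 =", 3/16)
```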
Our goal is to prove (18). Applying basic theory of order statistic, we find the joint PDF f(γ (1),γ (2)) as
$$ f(\gamma_{(1)}, \gamma_{(2)})=\frac{1}{\gamma_{(1)} \gamma_{(2)}} \left[ e^{-\left(\frac{\gamma_{(2)}} {\bar{\gamma}_{1}}+ \frac{\gamma_{(1)}} {\bar{\gamma}_{2}}\right) }+ e^{-\left(\frac{\gamma_{(1)}} {\bar{\gamma}_{1}}+ \frac{\gamma_{(2)}} {\bar{\gamma}_{2}}\right)} \right]. $$
With this joint PDF, we can easily deduce from (17) that
$$\begin{array}{*{20}l} \bar{P}_{e}~=~&\frac{2^{{N_{w}}}}{2\pi} \int_{0}^{\frac{\pi}{2^{N_{w}}}} \int_{0}^{\gamma_{(1)}} \int_{0}^{\infty} \text{erfc} \left(\vert v_{1}\sqrt{\gamma_{(1)}} + v_{2} \sqrt{\gamma_{(2)}} e^{j\varphi} \vert \right) \\ &\frac{e^{-\left(\frac{\gamma_{(2)}}{\bar{\gamma}_{1}}+ \frac{\gamma_{(1)}}{\bar{\gamma}_{2}}\right) }+ e^{-\left(\frac{\gamma_{(1)}}{\bar{\gamma}_{1}}+ \frac{\gamma_{(2)}}{\bar{\gamma}_{2}}\right)}}{\bar{\gamma}_{1} \bar{\gamma}_{2}} d\gamma_{(1)} d\gamma_{(2)} d\varphi. \end{array} $$
First, we set γ 1,1=y, γ 2,1=ty, and then we substitute \(\eta = y \vert v_{1}+v_{2} \sqrt {t}\, e^{j\varphi }\vert ^{2}.\) Finally, we get
$$\begin{array}{*{20}l} \bar{P}_{e}= \frac{2^{N_{w}}}{ \pi} \int_{0}^{\frac{\pi}{2^{N_{w}}}} {\int_{0}^{1}} \frac{(I_{1}(t,\varphi)+I_{2}(t,\varphi))dt d\varphi}{ \bar{\gamma}_{1} \bar{\gamma}_{2} \vert v_{1}+v_{2} \sqrt{t} e^{j \varphi} \vert^{4}}, \end{array} $$
where I 1(t,φ) and I 2(t,φ) refer to the integrals
$$\begin{array}{*{20}l} I_{1}(t,\varphi)\,=\,\frac{1}{2} \int_{0}^{\infty} \!\!\text{erfc}(\sqrt{\eta}) \eta e^{-c\eta} d\eta,\; c=\frac{\frac{t}{\bar{\gamma}_{1}}+ \frac{1}{\bar{\gamma}_{2}}}{ \vert v_{1} + v_{2} \sqrt{t} e^{j \varphi}\vert^{2}}, \\ I_{2}(t,\varphi)\,=\,\frac{1}{2} \int_{0}^{\infty} \!\!\text{erfc}(\sqrt{\eta}) \eta e^{-d\eta} d\eta,\; d=\frac{\frac{1}{\bar{\gamma}_{1}}+ \frac{t}{\bar{\gamma}_{2}}}{ \vert v_{1} + v_{2} \sqrt{t} e^{j \varphi}\vert^{2}}. \end{array} $$
NGMN Alliance, CoMP evaluation and enhancement. RAN evolution project deliverable (2015).
MK Karakayali, GJ Foschini, RA Valenzuela, Network coordination for spectrally efficient communications in cellular systems. IEEE Wireless Commun. Mag.13(4), 56–61 (2006).
R Irmer, H Droste, P Marsch, M Grieger, G Fettweis, S Brueck, H-P Mayer, L Thiele, V Jungnickel, Coordinated multipoint: concepts, performance, and field trial results. IEEE Commun. Mag.49(2), 102–111 (2011). doi:10.1109/MCOM.2011.5706317.
X Tao, X Xu, Q Cui, An overview of cooperative communications. IEEE Commun. Mag.50(6), 65–71 (2012). doi:10.1109/MCOM.2012.6211487.
H Dahrouj, W Yu, Coordinated beamforming for the multicell multi-antenna wireless system. IEEE Trans. Wireless Commun.9(5), 1748–1759 (2010). doi:10.1109/TWC.2010.05.090936.
J Zhao, TQS Quek, Z Lei, Coordinated multipoint transmission with limited backhaul data transfer. IEEE Trans. Wireless Commun.12(6), 2762–2775 (2013). doi:10.1109/TWC.2013.050613.120825.
3GPP, Coordinated multi-point operation for LTE physical layer aspects. 3GPP Technical Report (2011). 36.819, Ver. 11.0.0.
3GPP, Coordinated multi-point operation for LTE with non-ideal backhaul. 3GPP Technical Report (2013). 36.874, Ver. 12.0.0.
DJ Love, RW Heath, VKN Lau, D Gesbert, BD Rao, M Andrews, An overview of limited feedback in wireless communication systems. IEEE J. Sel. Areas Commun.26(8), 1341–1365 (2008). doi:10.1109/JSAC.2008.081002.
N Jindal, MIMO broadcast channels with finite-rate feedback. IEEE Trans. Inf. Theory. 52(11), 5045–5060 (2006). doi:10.1109/TIT.2006.883550.
T Yoo, N Jindal, A Goldsmith, Multi-antenna downlink channels with limited feedback and user selection. IEEE J. Sel. Areas Commun.25(7), 1478–1491 (2007). doi:10.1109/JSAC.2007.070920.
J Hämäläinen, R Wichman, AA Dowhuszko, G Corral-Briones, Capacity of generalized UTRA FDD closed-loop transmit diversity modes. Wireless Personal Commun., 1–18 (2009).
HC Papadopoulos, C-EW Sundberg, Space-time codes for MIMO systems with non-collocated transmit antennas. IEEE J. Sel. Areas Commun.26(6), 927–937 (2008). doi:10.1109/JSAC.2008.080809.
3GPP, Spatial channel model for Multiple Input Multiple Output (MIMO) simulations. 3GPP Technical Specification (2015). 25.996, Ver. 13.0.0.
3GPP, Base Station (BS) radio transmission and reception (FDD). 3GPP Technical Specification (2011). 25.104, Ver. 10.3.0.
BB Haile, AA Dowhuszko, J Hamalainen, R Wichman, Z Ding, On performance loss of some CoMP techniques under channel power imbalance and limited feedback. IEEE Trans. Wireless Commun.14(8), 4469–4481 (2015). doi:10.1109/TWC.2015.2421898.
J Hamalainen, R Wichman, in Proc. IEEE Global Telecommun. Conf., 3. On correlations between dual-polarized base station antennas (IEEEUS, 2003), pp. 1664–1668.
J Hämäläinen, R Wichman, in Proc. IEEE Int. Symp. on Personal, Indoor and Mobile Radio Commun, 5. Performance analysis of closed-loop transmit diversity in the presence of feedback errors (IEEEUS, 2002), pp. 2297–2301.
3GPP, Physical layer procedures (FDD). 3GPP Technical Specification (2013). 25.214, Ver. 13.0.0.
3GPP, Physical Channels and Modulation. 3GPP Technical Specification (2015). 36.211, Ver. 12.7.0.
J Hämäläinen, R Wichman, in Proc. Asilomar Conf. on Signals, Systems and Computers, 1. Closed-loop transmit diversity for FDD WCDMA systems (IEEEUS, 2000), pp. 111–115, doi:10.1109/ACSSC.2000.910927.
J Hämäläinen, R Wichman, in Proc. IEEE Global Telecommun. Conf, 1. Asymptotic bit error probabilities of some closed-loop transmit diversity schemes (IEEEUS, 2002), pp. 360–364.
A Papoulis, Probability, Random Variables, and Stochastic Processes (McGraw-Hill, New York, 1984).
A Goldsmith, Wireless Communications (Cambridge University Press, New York, 2005).
AF Coskun, O Kucur, Performance analysis of joint single transmit and receive antenna selection in non-identical nakagami-m fading channels. IET Commun.5(14), 1947–1953 (2011). doi:10.1049/iet-com.2010.0719.
JM Romero-Jerez, AJ Goldsmith, Performance of multichannel reception with transmit antenna selection in arbitrarily distributed nagakami fading channels. IEEE Trans. Wireless Commun.8(4), 2006–2013 (2009). doi:10.1109/TWC.2009.080333.
IS Gradshteyn, IM Ryzhik, Table of Integrals, Series, and Products, 7th edn. (Elsevier/Academic Press, Amsterdam, 2007).
M Abramowitz, IA Stegun, Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables (Dover Publications, New York, 1972).
This material is based on work supported in part by the Academy of Finland under grant 284634 and the European Institute of Innovation and Technology under grant 602954.
School of Electrical Engineering, Aalto University, P.O.Box 13000, FI-00076 Aalto, Espoo, Finland
Beneyam B. Haile
& Jyri Hämäläinen
Department of Electrical and Computer Engineering, University of California, Davis, USA
Zhi Ding
Correspondence to Beneyam B. Haile.
Haile, B.B., Hämäläinen, J. & Ding, Z. Power imbalance induced BER performance loss under limited-feedback CoMP techniques. J Wireless Com Network 2016, 212 (2016) doi:10.1186/s13638-016-0697-y
Keywords: Intercell interference; Coordinated multipoint; LTE-advanced; Channel power imbalance; Transmit beamforming; Bit error rate
Techno-economic evaluation of microalgae high-density liquid fuel production at 12 international locations
John Roles, Jennifer Yarnold, Karen Hussey & Ben Hankamer
Microalgae-based high-density fuels offer an efficient and environmental pathway towards decarbonization of the transport sector and could be produced as part of a globally distributed network without competing with food systems for arable land. Variations in climatic and economic conditions significantly impact the economic feasibility and productivity of such fuel systems, requiring harmonized technoeconomic assessments to identify important conditions required for commercial scale up.
Here, our previously validated Techno-economic and Lifecycle Analysis (TELCA) platform was extended to provide a direct performance comparison of microalgae diesel production at 12 international locations with variable climatic and economic settings. For each location, historical weather data, and jurisdiction-specific policy and economic inputs were used to simulate algal productivity, evaporation rates, harvest regime, CapEx and OpEx, interest and tax under location-specific operational parameters optimized for Minimum Diesel Selling Price (MDSP, US$ L−1). The economic feasibility, production capacity and CO2-eq emissions of a defined 500 ha algae-based diesel production facility is reported for each.
Under a for-profit business model, 10 of the 12 locations achieved a minimum diesel selling price (MDSP) under US$ 1.85 L−1 / US$ 6.99 gal−1. At a fixed theoretical MDSP of US$ 2 L−1 (US$ 7.57 gal−1) these locations could achieve a profitable Internal Rate of Return (IRR) of 9.5–22.1%. Under a public utility model (0% profit, 0% tax) eight locations delivered cost-competitive renewable diesel at an MDSP of < US$ 1.24 L−1 (US$ 4.69 gal−1). The CO2-eq emissions of microalgae diesel were about one-third of fossil-based diesel.
The public utility approach could reduce the fuel price toward cost-competitiveness, providing a key step on the path to a profitable fully commercial renewable fuel industry by attracting the investment needed to advance technology and commercial biorefinery co-production options. Governments' adoption of such an approach could accelerate decarbonization, improve fuel security, and help support a local COVID-19 economic recovery. This study highlights the benefits and limitations of different factors at each location (e.g., climate, labour costs, policy, C-credits) in terms of the development of the technology—providing insights on how governments, investors and industry can drive the technology forward.
Graphic abstract
In 2018, global energy consumption grew at twice the average rate recorded in 2010 [1], driven by a growing economy valued at US$ 136 trillion [2] and increased heating and cooling demands [1]. Despite global commitments on climate action, significant growth in renewables failed to keep pace with energy demand, resulting in a rise in greenhouse gas emissions (GHGs). Previously, the OECD called upon governments to develop enabling policy frameworks that will catalyze private sector investment to drive the large-scale transformation needed for a low carbon energy sector [3]. Substantial progress in renewable wind and solar PV technologies is driving a significant increase in renewable electricity supply and, coupled with battery technologies, is also transitioning the small vehicles market. However, high-density liquid fuels are critically underdeveloped and are expected to remain essential for the heavy transport, aviation, shipping, and logistics sectors for the foreseeable future, which combined, account for 12.7% of global energy demand [4]. As these fuels account for approximately 10% of global anthropogenic CO2 emissions [5], the development of low carbon alternative fuels is essential to meet international COP21 Paris CO2 emission reduction commitments and UN Sustainable Development Goals [6].
Advanced microalgae-based renewable fuel systems have significant potential to address these needs and to support a globally distributed and dispatchable fuel network to contribute to political, economic, social, environmental, fuel and climate security [7]. Current first-generation biofuel technologies, reliant on food crops, such as bioethanol from corn or sugar and biodiesel from soy or palm oil, compete with food production for arable land and fresh water and contribute to eutrophication [8, 9]. In contrast, microalgae systems can utilize saltwater and/or nutrient-rich wastewater and be deployed on non-arable land or in the oceans. These factors, coupled with high solar conversion efficiencies, can tap into the abundance of available solar energy (~ 3000 ZJ year−1 or ~ 5000 × global energy demand) to capture CO2, provide feedstocks for renewable fuel production, and expand global photosynthetic productivity. Ringsmuth et al. [10] estimated that supply of global diesel, aviation and shipping fuel needs could theoretically be provided by microalgae-based fuel production [4, 10] using only 0.18% of global surface area [10]—less than 10% of the area currently used by agriculture.
Advancing microalgae-based fuel technologies to a sustainable and commercial scale requires detailed and robust techno-economic and lifecycle analysis. This, in turn, is critical to attract an appropriate share of the renewable energy investment pool (cumulative US$ 2.9 trillion since 2004) [11] that can advance the technology further. It can also help governments define key areas of policy development more quickly [12].
A number of reported models have evaluated the potential of algae-based renewable fuel systems [12,13,14,15,16,17,18,19,20,21,22,23,24]. Such studies have considered the effects of factors related to climate (e.g., solar radiation, temperature); operating conditions (e.g., nutrients, mixing regime, light regime, cell density); biology (growth, light tolerance, metabolic profile); or processes (e.g., harvest regime, fuel conversion method) on output variables categorized by: productivity (e.g., photosynthetic conversion efficiency, biomass yield, lipid yield or biofuel yield); economic feasibility (e.g., internal rate of return (IRR), minimum selling price (MSP); environmental performance (e.g., energy return on energy invested (EROEI), CO2 emissions per unit energy, life cycle analyses); scalability; or a combination thereof.
Naturally, the key determinants of economic feasibility are to produce the most fuel at the minimum cost. Selection of appropriate locations to establish microalgae-based biofuel production facilities is, therefore, critical due to the dual effects of climatic conditions on algae growth and production potential, and widely differing economic and policy settings between jurisdictions that effect the production cost.
Comparisons between locations, to date, have mostly assessed the productivity potential of microalgae systems as a function of climatic variables, particularly solar radiation [22, 24] and temperature [24, 25]. For example, Moody and co-authors (2014) integrated historical meteorological data with a growth model to evaluate lipid productivity of Nannochloropsis at 4388 global locations and reported the highest annual average lipid yields to be in the range of 24 and 27 m3 ha−1 year−1, in Australia, Brazil, Colombia, Egypt, Ethiopia, India, Kenya, and Saudi Arabia [26]. In contrast, techno-economic assessments (TEA) evaluate the economic feasibility and often combine process-based modelling related to reactor or facility designs and technologies with economic input values. Many TEAs are limited to one climatic zone or several climatic zones within one jurisdiction. For example, a study by Davis et al. [24] modelled the costs, resource requirements and emissions for production of five billion gallons of fuel at various locations across the US. Biomass peak productivities of up to 25–30 g m−2 day−1 were assumed to be achievable and fuel produced at a minimum diesel selling price (MDSP) of < US$ 1.82 L−1 (US$ 7 Gal−1) [24].
In general, wide variations between model assumptions and approaches has made it difficult to compare like with like, to identify the most suitable systems, processes, and locations for deployment at scale. A comprehensive review of algae-based biofuel models by Quinn and Davis [27] emphasized the importance of harmonized assessments to enable direct comparisons, and highlighted the need to consider the exact location of the production plant which has important impacts on productivity, CapEx, OpEx [28] as well as financial inputs. Our recent work confirmed these findings and further revealed the critical influence of policy settings which vary markedly across global jurisdictions [12]. The lack of harmonization in current assessments has resulted in large discrepancies between estimated algae-based renewable fuel costs that range from US$ 0.43 L−1 (US$ 1.64 gal−1) to over US$ 7.92 L−1(US$ 30.00 gal−1) [27].
This study builds on Roles et al. [12] to address this critical knowledge gap by benchmarking the economic feasibility of microalgae-based biodiesel production across 12 international locations to identify important conditions required for commercial scale up. The specific objectives of this study were to:
Simulate the operation of a microalgae high-density liquid fuel production facility benchmarked with the same key system and operational assumptions at 12 international locations.
Assess the production capacity across sites by accounting for variations in light- and temperature-dependent algal biomass production potential of each location.
Determine the lowest theoretical Minimum Diesel Selling Price (MDSP) based on the 12 locations analyzed, compare the range in MDSP variations across these sites and explore a process for the identification of promising locations for global microalgae fuel production.
Identify and prioritize the factors including financial drivers that created the largest differences in MDSP.
Our analysis accounts for critical location-dependent variables that affect production capacity, production cost and net emissions. It is based on extensive work on the development and validation of our integrated Techno-Economic and Life Cycle Assessment (TELCA) model of the microalgae liquid fuel production facility detailed in Roles et al. [12] (see also Additional file 1). This work demonstrated an economic, energy-efficient, and low CO2 emission pathway to deliver micro-algae-based high-density liquid fuels through a combination of technology, scale, policy and location-specific cost settings. The study highlighted the critical importance of factors other than technological advancements on the economic feasibility of fuel production—in particular, the role of policy settings. Here, our simulation is extended with location-specific inputs to provide a techno-economic evaluation of microalgae-based high-density liquid fuel production across a diverse range of locations and jurisdictions at a commercially optimized scale of 500 ha total pond area (see "Methods" and Additional file 1). Actual temporally and spatially resolved weather data including solar radiation, temperature, and humidity were used as inputs to enable dynamic modelling of biomass productivity and evaporation. Materials, labour costs, tax and interest rates were applied for each jurisdiction. The analysis provides a direct performance comparison of a well-defined microalgae renewable diesel production system [29] across 12 locations distributed throughout six continents, and covering a broad range of climatic (Graphical abstract, temperate to tropical) and economic conditions (Table 2). A base system was fixed for all locations, while process modelling was used to optimize a range of operational settings to improve the economics for each location including: strain selection, pond depth, culture density, harvesting regime and water sourcing.
Significantly, we identify important operational factors that can be improved for individual locations to increase productivity while driving down price and emissions; evaluate the impact of different economic and policy settings between jurisdictions and demonstrate the use of our TELCA platform to assist in model guided systems optimization to de-risk scale up and support business development.
Analytical framework
All techno-economic analyses are limited by the quality of the input data, the assumptions made, and the calculations conducted. Extensive work has previously been completed to validate the input data, the response of each process module, the subprocesses, and the whole process described by the 500 ha renewable high-density liquid fuel production facility [12]. Additional file 1 details the simulation used, and within it, Section 4 provides the model validation. Following internal data, module, subprocess and process validation, the TELCA model was next validated against a broad range of independent techno-economic and life-cycle analyses (Additional file 1: Figure S26). Of these, we consider the NREL model [13] (Additional file 1: Figure S26) to be the most comprehensive. Given the complexity of our TELCA model and that of the NREL model, and the fact that when set to the same production conditions they yielded a minimum diesel selling price within 1% of one another, we conclude that the NREL and TELCA models independently validate each other. This analysis confirms the robustness not only of TELCA but also of the NREL model. Finally, we conducted validation against an operational demonstration-scale 0.4 ha microalgae production facility; the TELCA simulation of this facility estimated the facility's CapEx to within 5% of the actual construction cost. Indeed, the TELCA evaluation delivered a calculated CapEx 5% above the actual construction cost, suggesting that the assumptions were reasonably conservative (i.e., US$ 52.5 m−2 at the 0.4 ha scale, or US$ 525,000 ha−1).
The Algae Productivity Model incorporated into TELCA 2, here (Fig. 1) enables a more dynamic evaluation of spatiotemporal effects on the biological response of algae which is a critical determinant of success. One limitation of this study was the extrapolation of reported algae growth parameters to outdoor conditions. We recognize that such an approach does not take into account the many other potential factors that can affect productivity in natural systems, such as grazing, contamination, and culture crash, nor does the input weather data take into account severe weather events. However, it also does not include future improvements. The average annual values that we have calculated and used for our analyses range from 8.6 to 22.1 g m−2 day−1 and these productivities have been shown to be achievable in long-term outdoor experimental conditions [25, 30]. Future perspectives of this model are to integrate long-term actual productivity data.
Fig. 1 Overview of analytical framework. a Techno-economic calculation scheme. b Microalgae-based renewable diesel production process flow diagram and model inputs (modified from [12]). International Location Specific Environmental Inputs (green) and the Algae Productivity Model (orange) connect with the high-rate pond module of TELCA, to enable location-, system-, and strain-specific growth modelling (1 h temporal resolution). Location Specific Economic Inputs (blue, top right) influence the final minimum diesel selling price and internal rate of return
The economic feasibility, biodiesel production capacity as well as embodied and process associated greenhouse gas (GHG) emissions were evaluated for 12 international locations using an expanded version of our previously reported Techno-Economic and Life Cycle Analysis (TELCA) tool (Fig. 1a, b) [29]. The updated TELCA2 simulation used for this study is described in detail in Additional file 1. It includes:
location-specific environmental inputs (Fig. 1; Additional file 1: Section 2) to model spatio-temporal pond culture irradiance, temperature and evaporation profiles,
the algae productivity model (Fig. 1, Table 1) to model growth performance under different climatic conditions (Fig. 1; orange, Additional file 1: Section 2) and
location specific economic inputs (Table 2; Additional file 1: Section 1), such as the costings of capital and operational expenditure, interest, labour and tax (Fig. 1b; blue, Additional file 1: Section 1.1).
Table 1 Growth characteristics of D. tertiolecta and N. oceanica used for Eq. 2
Table 2 Variables and parameter inputs summary table used for TELCA (see Additional file 1 for sources)
Using this information systems were optimized to reduce MDSP at each location.
Two business case scenarios at each location were assessed: a standard commercial for-profit business model (Scenario 1); and a public utility not-for-profit model (Scenario 2). For Scenario 1, economic feasibility was calculated using the Internal Rate of Return (IRR, %) based on the difference between the MDSP and a fixed theoretical diesel selling price of US$ 2 L−1. For Scenarios 1 and 2, the MDSP is reported (Table 3). Microalgae-based biodiesel production capacity is defined as kL diesel ha−1 year−1 based on optimized conditions for biomass production and harvesting regimes which resulted in the lowest MDSP.
CO2 emissions were calculated from CO2 (gCO2eq MJ−1) absorbed during the overall photosynthetic biomass production and fuel production processes, offset against the amount of fossil-based CO2 released during construction (e.g., via embodied emissions in the construction, equipment supply and supply of consumable items), operation of the facility (external CO2 supply for biomass production—11% CO2 concentration (Additional file 1: Appendix 1), and embodied CO2 emissions in the production and supply of nutrients), as well as emissions from subsequent fuel use. To minimise emissions, the model has been structured around a fully self-sufficient energy design (i.e., all of the energy required to operate the plant including electro-flocculation and hydrogen production was produced internally with solar PV (Additional file 1: Section 3). All emissions have been fully incorporated into the net energy and CO2 accounting, and balanced over the productive life of the facility (30 years). Reduction in emissions was assessed as the difference between the overall emissions from the process and emission from conventional fossil-based diesel fuel production and use, that it displaces (i.e., displaced fossil fuel (gCO2eq MJ−1)—renewable diesel (gCO2eq MJ−1) = CO2 emission reductions (gCO2eq MJ−1).
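As a minimal illustration of this displacement accounting (the emission factors below are placeholder values, not results of this study), the net reduction is simply the difference between the displaced fossil pathway and the microalgae pathway:

```python
# Illustrative displacement accounting; numbers are placeholders, not study results.
fossil_diesel = 94.0      # gCO2-eq per MJ, assumed fossil diesel production-and-use factor
renewable_diesel = 30.0   # gCO2-eq per MJ, assumed microalgae renewable diesel pathway total
reduction = fossil_diesel - renewable_diesel
print(f"CO2-eq reduction: {reduction:.1f} gCO2-eq/MJ "
      f"({100*reduction/fossil_diesel:.0f}% lower than fossil diesel)")
```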
System boundaries and key assumptions
Simulations were performed for a facility comprising 177 high-rate microalgae production ponds of 4.27 ha each, (total pond area = 500 ha), on-site harvest, processing and refining facilities (Fig. 1b, Additional file 1). Algal biomass was harvested using electro-flocculation and concentrated via centrifugation (Additional file 1: Section 3), before being converted to crude oil via hydrothermal liquefaction (HTL) with a biomass to green crude conversion of 55% [12, 31, 32] (Additional file 1: Appendix 1). Renewable diesel was refined using conventional hydrotreatment/hydrocracking and fractionation processes [12] (Additional file 1: Section 3). Based on reported values [33] 75% nitrogen recovery was assumed in the HTL aqueous phase with nutrients further treated via anaerobic digestion. Overall, the model allowed for 40% of all nutrients to be recycled back to the high-rate ponds [12], where they have previously been reported to support good growth rates [34] (Additional file 1: Appendix 1). CO2 supply was taken from a free issue source (11% CO2 concentration) (Additional file 1: Appendix 1) immediately adjacent to the production facility with all piping, cooling, filtration, and compression accounted for in the cost analysis (Additional file 1: Section 3). CO2 was supplied to the algae culture at a concentration of 1% and utilisation efficiency was set to 80% (Additional file 1: Appendix 1). Nutrients were assumed to be non-limiting to growth. A complete description of assumptions and boundary conditions is provided in Roles et al. [12, 29] with advanced components and modifications detailed below and in Additional file 1: Sections 1–3.
Twelve geographical locations were selected across North and South America, Europe, Africa, the Middle East, India, Asia and Oceania (Table 2, Graphical Abstract) for comparative analyses. Sites were selected to cover a broad range of irradiance levels, temperatures and other climatic conditions and economic variables. All sites were chosen, because they provide access to seawater, suitable land and topography (low slope, low density or undeveloped) within a 100 km radius.
Productivity modelling
Under non-limiting nutrient conditions, light and temperature are the most important variables affecting photosynthetic algal growth and the resultant yield of biomass. Light and temperature regimes vary widely between geographical locations and over time due to daily and seasonal cycles. To account for dynamic fluxes in light, temperature and growth, algal biomass productivity was modelled at 1 h intervals using typical weather data over 365 days of the year for each location. Input variables included: global horizontal radiation (W m−2), diffuse horizontal radiation (W m−2), wind speed (m s−1), relative humidity (kg kg−1) and air temperature (°C, EnergyPlus, US Department of Energy and the National Renewable Energy Laboratory, US). These inputs were used in a heat balance model to predict changes in culture media temperature [35] (Additional file 1: Methods, Section 3). Diffuse and global solar radiation values were used to predict light transfer through the culture [36].
The temperature of the pond's liquid culture was predicted using a simplified mechanical heat balance described by Bechet et al. [35]. Although temperature gradients within the liquid phase can occur, the culture temperature is assumed to be homogenous due to paddlewheel mixing and gas supply. In contrast, the exponential decay of light as it is attenuated by algal pigments through the depth of the culture results in a light gradient ranging from photo-inhibitory light at the pond surface to photo-limited or dark areas toward the pond base. This causes specific growth rates to differ through the culture. Here, we modelled local irradiance along the optical pathlength (i.e., from the pond surface to the base), using a simple and validated radiative transfer model described by Lee et al. [36] that accounts for both direct beam radiation and diffuse, or scattered radiation. Hourly predictions of pond culture temperature, Tpond (t) (°C) and local irradiance through the pond depth Iloc (t, z) (μmol m−2 s−1) were used to predict the specific growth rate of algae using the light and temperature dependent algae growth model described by Bernard and Remond [37]. Growth rates were integrated over time, t, and pond depth, z (m−1) to estimate volumetric productivities. Productivity modelling algorithm development and simulations were performed in MATLAB (R2015b, MathWorks).
Governing equations
The full model algorithm is outlined in Additional file 1: Section 2. During the growth phase, the volumetric biomass productivity of the system, Pvol (g biomass dry weight L−1) was determined by the rate of change of the algal biomass concentration over time:
$$P_{{{\text{vol}}\left( t \right)}} = \frac{{{\text{d}}C_{x} }}{{{\text{d}}t}} = \overline{\mu }C_{x} - C_{x} R$$
where Cx is the biomass concentration (g L−1), µ is the specific growth rate (h−1) and R is the basal respiration rate (h−1). According to Bernard and Remond [37], µ is a function of irradiance and temperature:
$$\mu \left( {T,I} \right) = \mu_{{{\text{max}}}} \frac{{I_{{{\text{loc}}}} }}{{I_{{{\text{loc}}}} + \frac{{\mu_{{{\text{max}}}} }}{\sigma }\left( {\frac{{I_{{{\text{loc}}}} }}{{I_{{{\text{opt}}}} }} - 1} \right)^{2} }} \Phi \left( T \right).$$
In Eq. 2, µmax is the maximum growth rate of a given species (day−1); the light response parameters σ (the initial slope of the growth–irradiance response) and Iopt (the irradiance, µmol m−2 s−1, at which growth is maximal) define the shape of the light response; and Φ is the proportional effect of temperature (dimensionless), using the inflexion function of Rosso et al. [38]:
$$\Phi \left( T \right) = \frac{{\left( {T - T_{\max } } \right) \left( {T - T_{\min } } \right)^{2} }}{{\left( {T_{{{\text{opt}}}} - T_{\min } } \right) \left[ {\left( {T_{{{\text{opt}}}} - T_{\min } } \right)\left( {T - T_{{{\text{opt}}}} } \right) - \left( {T_{{{\text{opt}}}} - T_{\max } } \right)\left( {T_{{{\text{opt}}}} + T_{\min } - 2T} \right)} \right]}}$$
In Eq. 3, the parameters Topt, Tmin and Tmax represent three cardinal temperatures of biological significance, these being, respectively, the optimal temperature at which growth is highest at a given irradiance, and the minimum and maximum temperatures which define the threshold beyond which no growth occurs (Eq. 4):
$$\mu _{{\max }} = \left\{ {\begin{array}{*{20}l} 0 & {{\text{for}}~T < T_{{\min }} } \\ {\mu _{{\max }} \cdot \Phi \left( T \right)} & {{\text{for}}~T_{{\min }} < T < ~T_{{\max }} } \\ 0 & {{\text{for}}~T > T_{{\max }} } \\ \end{array} } \right.$$
To predict local irradiance, I(z) along the culture depth, we use the simple two-flux approximation of light transfer for open ponds (Eq. 5) proposed by Lee et al. [36]:
$$I_{\left( z \right)} = I_{B\left( z \right)} + I_{D\left( z \right)} .$$
In Eq. 5, IB(z) and ID(Z) are the direct beam irradiance and diffuse irradiance, respectively, at a given point, z, through the reactor depth, L (m−1), with 0 being the illuminated surface, and
$$I_{B\left( z \right)} = I_{B} e^{{ - \frac{{\alpha C_{x} }}{{{\text{cos}}\left( \theta \right)}}z}}$$
$$I_{D\left( z \right)} = 2I_{D} e^{{ - 2\alpha C_{x} Z}} ,$$
where α is the mass extinction coefficient of the algae (m2 kg−1, averaged across the 400–700 nm photosynthetically active radiation range), and θ is the zenith angle of direct beam radiation hitting the surface of the pond.
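A minimal Python sketch of Eqs. 2–7 is given below for illustration. The strain parameters and operating conditions are hypothetical placeholders (not the Table 1 values), and the depth integration is approximated here by a simple average of the local growth rate over the water column.

```python
import numpy as np

# Illustrative parameter values (placeholders, not Table 1 entries)
MU_MAX = 0.06        # maximum growth rate, 1/h
SIGMA = 2.5e-4       # initial slope of growth-irradiance response, 1/h per (umol m-2 s-1)
I_OPT = 300.0        # irradiance at maximum growth, umol m-2 s-1
T_MIN, T_OPT, T_MAX = 5.0, 25.0, 38.0   # cardinal temperatures, deg C
ALPHA = 120.0        # mass extinction coefficient, m2 kg-1

def phi_T(T):
    """Cardinal temperature inflexion function, Eq. (3)."""
    if T <= T_MIN or T >= T_MAX:
        return 0.0
    num = (T - T_MAX)*(T - T_MIN)**2
    den = (T_OPT - T_MIN)*((T_OPT - T_MIN)*(T - T_OPT)
                           - (T_OPT - T_MAX)*(T_OPT + T_MIN - 2*T))
    return num/den

def mu(I_loc, T):
    """Light- and temperature-dependent specific growth rate, Eq. (2)."""
    return MU_MAX * I_loc/(I_loc + MU_MAX/SIGMA*(I_loc/I_OPT - 1)**2) * phi_T(T)

def local_irradiance(z, Cx, I_B, I_D, theta):
    """Two-flux local irradiance at depth z (m), Eqs. (5)-(7); Cx in kg m-3."""
    return I_B*np.exp(-ALPHA*Cx*z/np.cos(theta)) + 2*I_D*np.exp(-2*ALPHA*Cx*z)

# Depth-averaged growth rate for a 0.25 m pond at 0.5 g L-1 (= 0.5 kg m-3)
z = np.linspace(0, 0.25, 50)
I_z = local_irradiance(z, Cx=0.5, I_B=1200.0, I_D=300.0, theta=np.radians(30))
mu_avg = np.mean([mu(i, 27.0) for i in I_z])      # 1/h, averaged over depth
print(f"depth-averaged mu = {mu_avg:.4f} per hour")
```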
The culture temperature was predicted using a heat flux model that provides an overall energy balance defined by Q (W), such that the change in temperature of the liquid media is defined as
$$\frac{{{\text{d}}T}}{{{\text{d}}t}}V\rho c_{p } = Q_{{{\text{solar}}}} + Q_{{{\text{evaporation}}}} + Q_{{{\text{thermal}}}} + Q_{{{\text{conduction}}}} ,$$
where the heat fluxes are solar radiation, Qsolar (W), evaporation, Qevaporation (W), thermal radiation at the pond surface between the air and the water, Qthermal (W) and conduction to the soil, Qconduction (W).
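Within the hourly simulation loop, Eq. 8 can be advanced with a simple forward-Euler step, as sketched below; the heat-flux values and pond dimensions used in the example are illustrative assumptions only.

```python
def step_pond_temperature(T, Q_solar, Q_evap, Q_thermal, Q_cond,
                          V=10_675.0, rho=1025.0, cp=4000.0, dt=3600.0):
    """One forward-Euler update of Eq. (8): dT/dt * V*rho*cp = sum of heat fluxes (W).
    V in m3 (e.g. a 4.27 ha pond at 0.25 m depth), rho in kg m-3, cp in J kg-1 K-1,
    dt in s (1 h time step, matching the simulation resolution)."""
    dT = (Q_solar + Q_evap + Q_thermal + Q_cond) * dt / (V*rho*cp)
    return T + dT

# Hypothetical midday heat fluxes (W), for illustration only
print(step_pond_temperature(T=24.0, Q_solar=2.5e7, Q_evap=-1.0e7,
                            Q_thermal=-0.5e7, Q_cond=-0.2e7))
```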
Algal species selection
Two industrially relevant marine microalgae species were chosen, Nannochloropsis oceanica and Dunaliella tertiolecta. Both strains exhibit high autotrophic growth rates, a lipid content of ~ 30–40%, and tolerance to wide ranges of temperature and high salinity [41,42,43,44]. This is particularly important for operations under high evaporation conditions, which can result in rapid increases in salt content, up to double that of seawater. The growth response parameters to temperature and light (Eq. 2) were characterized and validated by Bernard and Remond [37], providing the coefficients listed in Table 1. N. oceanica exhibits optimal growth at a lower optimal temperature and light intensity compared to D. tertiolecta, suggesting that these species will perform better under temperate and tropical conditions, respectively. For each location, productivity simulations were performed for each strain. The alga exhibiting the highest productivity at each location under the range of conditions analyzed was used for the reported results (Fig. 2, Table 2).
Productivity model validation
The three models used to estimate productivity (liquid culture temperature; local irradiance; and light- and temperature-dependent algal growth) have been previously validated within acceptable ranges against experimental data sets. Lee et al. [36] showed that the simple two-flux approximation predicted local irradiance in a photobioreactor with a variation of 2–13% compared to more complex radiative transfer models, depending on the time of the day. To ensure the accuracy of our model algorithm, we validated radiative transfer with their reported modelled predictions. The simple radiative transfer equation has been widely used within the literature to estimate light mediated growth. Moreover, Lee et al. [36] found that such differences in estimated PAR resulted in productivity estimations within a 2–10% variation.
For prediction of temperature of the algal culture, Bechet et al. [35] validated the heat transfer model (Eq. 8) with an accuracy of 2.4 °C against experimental data collected over a 28-day period consisting of 108 temperature measurements taken from the liquid culture of an outdoor 50 L column photobioreactor in Singapore. Because of the complexity of the various heat components of the model, we compared our model simulations against experimental temperature measurements taken within the culture of two 2000 L ponds at the Centre for Solar Biotechnology Pilot plant, Brisbane (Additional file 1: Section 2). The model produced a tight fit between the measured and predicted media temperature in both ponds over a 6-day period, (R2 ≥ 0.9).
The simulations of algal growth for D. tertiolecta and N. oceanica were compared against the experimental data reported by Bernard and Remond [37] for the species used in this study.
Besides strain selection, simulations were performed for pond depth (0.1–0.3 m) and quasi-steady-state operating biomass concentration (0.05–1 g BDW L−1). The former affects thermal mass and light regime, and the latter affects the light regime (heat dissipation from algae is considered negligible). The algal productivity modelling algorithm development and simulations were performed in MATLAB (R2015b, MathWorks). All productivity simulations were exported from MATLAB as tables into the TELCA model to optimize harvest regime, depth and concentration with respect to MDSP.
Viability & feasibility
Under a for-profit business model (Scenario 1), the economic effectiveness of algae diesel was assessed using the Internal Rate of Return (IRR) over the life of the facility at a fixed product price. Here, IRR is calculated for each location based on a hypothetical fixed Minimum Diesel Selling Price (MDSP) of US$ 2.00 L−1. For Scenario 2, the feasibility was assessed on the cost-competitiveness of the MDSP that could be achieved. In this not-for-profit public utility scenario, profit and tax rates were reduced to zero. Interest rates were reduced from commercial rates to match government bond rates prevailing at each location (Table 2). The resulting MDSP was benchmarked between locations and against existing fossil fuel prices.
Optimization was performed at each location to minimize the MDSP for the following variables: algal strain selection (based on the highest annual-averaged productivity); freshwater replenishment for evaporation (MDSP minimized based on CapEx (e.g., piping, storage) and OpEx (e.g., water purchase, blowdown) requirements over the 30-year lifespan of the facility); operational algae concentration; and harvest duration (MDSP minimized based on CapEx and OpEx over the 30-year lifespan of the facility).
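For readers unfamiliar with the metric, the IRR used in Scenario 1 is simply the discount rate at which the net present value (NPV) of the project's cash-flow series is zero. A minimal Python sketch of that calculation is shown below; the cash-flow numbers are invented placeholders, not TELCA outputs.

```python
# Minimal IRR sketch: the internal rate of return is the discount rate at which
# the net present value (NPV) of the cash-flow series is zero. The cash flows
# below are invented placeholders, not TELCA outputs.

def npv(rate, cashflows):
    """cashflows[0] is the (negative) upfront investment in year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.9, hi=1.0, tol=1e-7):
    """Bisection on NPV(rate); assumes a single sign change on [lo, hi]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# e.g. US$ 200 M CapEx up front, then 30 years of US$ 25 M net annual revenue
flows = [-200.0] + [25.0] * 30
print(f"IRR = {irr(flows):.1%}")    # ~12%
```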
Boundary conditions
High-rate pond depth and concentration The pond depth, harvest duration and steady-state biomass concentration were the primary set of optimized variables adjusted monthly to optimize MDSP for the entire production, harvest and product processing system. A fixed, rather than variable, harvesting rate was set for the operation as the extra cost for variable speed equipment could not be economically justified. Daily harvest duration was adjusted to optimize culture density for MDSP. In all cases optimum MDSP was obtained by minimizing pond depth. This optimisation, however, was limited to a minimum of 0.25 m by engineering constraints of the capacity to construct and operate very shallow depth ponds in conjunction with large 4.3 ha pond areas (see Additional file 1: Appendix 1).
Water replenishment Three options to balance evaporative losses after accounting for available rainwater were analyzed (Additional file 1: Section 1): incorporation of water storage sized as a percentage of total pond capacity; replacement with locally purchased fresh water; or replenishment with seawater, which necessitates further discharge of pond water (blowdown) to avoid excess salt build-up. Blowdown also results in the loss of valuable nutrients from the system. The ideal replenishment choice was location dependent and is detailed in the results, but in each case was optimised based on the MDSP.
Tax rates Tax rates applied were corporate only, and did not include value added taxes. The latter, however, may have an impact in some jurisdictions (Table 2).
Labour rates were based predominantly on the tradingeconomics.com/labour-costs website. Rates were established for skilled labour and relative rates were then found for a range of labour categories (Table 2). The categories were identified and estimated for all construction and operational tasks. Base working hours, overtime loadings and non-wage costs were established from various sources (Additional file 1: Section 1).
Labour efficiency was primarily based on GDP per hour worked, as provided in World Bank and OECD databases (Additional file 1: Section 1). GDP output per hour worked in the construction industry differs from these whole-of-economy figures, but construction-specific and consistent data were only available for European Union countries. For labour efficiency (Table 2), the ratio between the whole economy and the construction sector in the Euro zone countries was therefore assumed to be similar in all countries and was used for modelling.
Employment and labour costs The 500 ha microalgae facility simulation is based on a set of interconnected process modules (Fig. 1b). Each process module accounts for the associated construction and operation tasks. The work required (and associated cost) to complete each task is calculated based on a fixed labour component and process variable labour component. The component variable base was selected based on the most applicable process variable to each task (e.g., Pond Area for pond cleaning, Flow Rate for filter cleaning, Additional file 1: Appendix 1).
Project finance rates applicable to a variety of project types, conditions and risk profiles are generally regarded by industry participants as commercial in confidence. The project interest rates modelled here (Table 2) represent the rates applicable to well-established technology being operated by a financially sound project proponent. To provide a broad approach for determining these rates, a relationship between government benchmark interest rates and project finance rates was established from known data in the solar PV industry [42].
The costs of supplied construction materials were divided into two groups: fabricated items, the price of which was determined in accordance with local labour costs and efficiencies, and equipment supply, which would be purchased at internationally competitive rates [29].
Rates for currency, inflation, water, land and electricity are detailed in Table 2 and sources are detailed in Additional file 1: Section 1.
A summary of projections related to technoeconomic, productivity and emissions performance for each location is presented in Table 3, with detailed results discussed below.
Table 3 Summary results of techno-economic analysis by location
Optimisation of processes to minimise the diesel selling price
Matching the algae strain to suit the production location can significantly improve productivity
The Algae Productivity Model computed hourly growth rates as a function of solar irradiance and culture temperature, based on actual weather data. Simulations were performed over pond depths of 0.1–0.3 m and operating quasi-steady-state biomass concentrations ranging from 0.05 to 1 g L−1 at each location (Additional file 1: Figures S4–17). Figure 2a summarizes the simulated maximum (11.3–29.9 g m−2 day−1) and average (8.6–22.1 g m−2 day−1) productivities of the best performing algal species at each location. The annual-average biomass productivity range (8.6–22.1 g m−2 day−1) corresponds to 31.4–80.7 t ha−1 year−1. For most locations, higher biomass productivities could be achieved at a greater depth of 0.3 m, as this provided more stable temperatures and reduced extreme fluctuations, but only under more dilute operational concentrations (≤ 0.1 g L−1). However, the economic optimization identified a 0.25 m depth and a higher operating concentration to reduce harvesting costs (see below). As expected, several near-equatorial locations (e.g., Mombasa, Kenya; Recife, Brazil; Acapulco, Mexico; Darwin, Australia; and Kona, USA) exhibiting relatively high irradiance and air temperature yielded the highest annual-average productivities, between 20.0 and 22.1 g m−2 day−1. These values are within the range of achievable biomass yields in high-rate pond systems [25, 30]. Abu Dhabi (United Arab Emirates—UAE), also in this cluster, had lower average yields (17.5 g m−2 day−1) due to its desert climate of extreme temperatures and irradiance. A second cluster is shown for the sub-tropical locations of Tunis, Tunisia; Almeria, Spain; and Izmir, Turkey at ~ 13–16 g m−2 day−1. Haikou (China), with its high temperature but relatively lower irradiance due to relatively high rainfall, yielded 11.5 g m−2 day−1, and the cool, temperate climate of Amsterdam (Netherlands) yielded the lowest at 8.6 g m−2 day−1. The alga D. tertiolecta (Fig. 2a, orange circles) performed best in equatorial regions that had consistently high temperatures, while N. oceanica (Fig. 2a, blue circles) performed better in locations with cooler climates and lower irradiance. Locations that had a broader temperature range over the year exhibited reduced productivities using a single strain compared with less variable locations (e.g., Kona, USA and Recife, Brazil) (Fig. 2a). For example, in the desert climate of Abu Dhabi (UAE), N. oceanica exhibited higher productivity through winter (Dec–March), while D. tertiolecta performed significantly better in the summer (Mar–Nov) (Fig. 2b). For the technoeconomic evaluation (Fig. 4), average productivity values were used for the highest yielding strain (i.e., D. tertiolecta or N. oceanica) for each location to ensure a conservative modelling position. These results indicate that for certain locations, different strains could be used seasonally to improve yields.
a Maximum and average location-specific productivities as a function of solar irradiance and temperature for the best performing strain. Maximum productivity refers to the biomass productivity optimized for yield (light blue: N. oceanica; light orange: D. tertiolecta) at 0.3 m pond depth and optimal operating biomass concentration (0.1 g L−1). Average productivity refers to the biomass productivity optimized for IRR in the techno-economic evaluation (dark blue: N. oceanica; dark orange: D. tertiolecta) to ensure a conservative techno-economic modelling position (8.6–22.1 g m−2 day−1; see also Fig. 1 and Table 3). AB Abu Dhabi, United Arab Emirates; AC Acapulco, Mexico; AM Amsterdam, Netherlands; AL Almeria, Spain; CH Chennai, India; DA Darwin, Australia; HA Haikou, China; IZ Izmir, Turkey; KO Kona, USA; MO Mombasa, Kenya; RE Recife, Brazil; TU Tunis, Tunisia. The light to dark grey shaded circles represent high to low annual irradiance levels. b Strain-specific productivity in Abu Dhabi illustrates the benefit of dual strain cultivation over an annual cycle
Fig. 3
Comparison of systems optimized for productivity and IRR in Darwin, Australia. The blue arrow indicates an increase in IRR from ~ 4% (Peak productivity settings) to ~ 13% (Peak IRR settings)
Breakdown of the key components contributing to the minimum diesel selling price (MDSP). a Scenario 1 (for profit model) shows that 10 locations are profitable at an MDSP of US$ 2 L−1. b Scenario 2 (public utility model, not for profit) includes production system costs (blue), land and misc. (yellow), and low interest (red) with 0% tax and 0% profit. Under this scenario, 8 locations (Mombasa, Kenya, Recife, Brazil; Tunis, Tunisia; Acapulco, Mexico; Darwin, Australia; Kona, USA; Chennai, India and Almeria, Spain) could achieve an MDSP of < US$ 1.25 (under black dotted line), almost at parity with maximum historical fossil diesel prices. Far right: shows an improvement in MDSP and profitability that could be achieved in Almeria (Spain) if the EU adopted a carbon tax (CT) of US$ 100 tonne−1. *Interest figures shown in this figure represent total interest payments over a 10-year loan repayment period at respective interest rates (values in Table 2). The inclusion of a carbon price in Scenarios 1 and 2 reduces the contribution of each costed item to the MDSP. c CapEx breakdown of the 500 ha facility ranged between US$ 128–245 Million (i.e., US$ 256,000–490,000 ha−1), d Annual OpEx ranged between US$ 7.7–16.4 Million (i.e., US$ 15,400–32,800 ha−1)
The trade-off between productivity and harvest costs to minimise diesel selling price
Late afternoon semi-continuous harvesting is considered optimal for productivity, as it minimises biomass loss via respiration in the dark [43]. While this principle is correct, financially optimised systems (Fig. 3, Peak IRR) require the minimisation of harvesting CapEx (i.e., the smallest harvesting system operated for the maximum duration). In addition, the lowest cost system involves fixed-rate harvesting and so can only be adjusted through the start time and operational duration. At an industrial scale, harvesting is usually conducted continuously to minimise the CapEx of the harvesting equipment (i.e., longer harvesting times = smaller harvest system requirements), with the proviso that this results in a net improvement in the MDSP. The TELCA model has been constructed to conduct such cost–benefit analysis and to determine the optimum harvesting regime. We, therefore, focused on minimising the non-harvesting periods to keep harvesting CapEx low. For all months and at all locations, stopping harvest in the morning was found to be beneficial, as it allowed cell numbers to increase rapidly during the morning, while harvesting from the afternoon onwards allowed harvesting at higher cell densities, making the process more efficient. Collectively, this high culture density/low harvesting CapEx strategy yielded a better MDSP and IRR (Fig. 4, Table 3). This financially optimised system (Darwin, Australia) increased IRR from ~ 4% (at the peak productivity setting, ~ 0.1 g biomass dw L−1 operating concentration) to 13% (at a 3.5-fold higher operating concentration, ~ 0.35 g biomass dw L−1).
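The sizing logic behind "longer harvesting times = smaller harvest system requirements" can be made explicit with a few lines of arithmetic. In the sketch below, the daily harvest volume and the unit cost of harvesting capacity are invented for illustration only.

```python
# Sizing arithmetic behind "longer harvesting times = smaller harvest system":
# for a fixed daily harvest volume, the required throughput (and hence CapEx)
# falls as the daily operating window grows. All cost figures are invented.

def harvester_capex(daily_volume_m3, hours_per_day, cost_per_m3_per_h=15000.0):
    throughput_m3_per_h = daily_volume_m3 / hours_per_day
    return throughput_m3_per_h * cost_per_m3_per_h     # US$

daily_volume = 0.25 * 4.3e4 * 0.2    # m^3/day: harvest 20% of a 0.25 m deep, 4.3 ha pond
for hours in (4, 8, 12, 16):
    capex = harvester_capex(daily_volume, hours)
    print(f"{hours:>2} h/day -> US$ {capex / 1e6:.1f} M of harvesting capacity")
```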
Productivity increased with pond depth (g m−2) in most locations, but IRR decreased. The use of shallower ponds (i.e., 25 vs. 30 cm) with higher concentration reduced harvesting costs. Construction and thermal stability constraints for large shallow ponds limited further depth reductions.
Balancing evaporation and water replenishment is important for optimal IRR and freshwater use
Saltwater systems are designed to minimise their environmental freshwater-use footprint. Ideally, evaporated water is replenished with rainwater but in practice an imbalance usually exists and must be corrected. Three options to balance evaporative losses after accounting for available rainwater were analyzed: (1) incorporation of water storage based on a percentage of total pond capacity; (2) replacement with locally purchased fresh water; or (3) replenishment with seawater which necessitates further discharge of pond water (blowdown) to avoid excess salt build-up, but also results in the loss of valuable nutrients from the system. For all locations, the option of water storage between rain events proved to be the least economic due to added CapEx and land requirements, and consequently required a combination of freshwater and seawater replenishment. For financial optimisation, the proportion of freshwater purchased vs. new seawater added with blowdown to maintain salinity was location dependent, ranging from 0% freshwater purchase in Abu Dhabi (UAE), where the cost of freshwater is high (US$ 0.86 kL−1), to 96% in Chennai (India), where the freshwater price is low (US$ 0.04 kL−1) (see Table 2). Consequently, the high discharge rates in Abu Dhabi resulted in an approximate twofold increase in nutrient costs compared to most of the other locations analyzed (Fig. 4a).
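The freshwater-versus-seawater trade-off described here follows from a simple steady-state water and salt balance: evaporation removes water but not salt, so any salt added with seawater make-up must eventually leave with blowdown if the pond salinity is to be held at its ceiling. A minimal Python sketch of that balance is given below; the daily volumes and salinities are illustrative placeholders, and the sketch assumes a net evaporative deficit and a pond salinity ceiling of twice seawater, as mentioned under "Algal species selection".

```python
# Steady-state water and salt balance for a saline pond held at a salinity
# ceiling. Evaporation removes water only; salt enters with seawater make-up
# and leaves with blowdown. Volumes and salinities are illustrative only, and
# the sketch assumes a net evaporative deficit (evap > rain + freshwater).

def replenishment(evap, rain, freshwater, s_sea=35.0, s_pond_max=70.0):
    """Return (seawater_makeup, blowdown) holding pond salinity at s_pond_max (g/L).

    Water balance:  rain + freshwater + seawater = evap + blowdown
    Salt balance:   seawater * s_sea = blowdown * s_pond_max
    """
    deficit = evap - rain - freshwater
    seawater = deficit / (1.0 - s_sea / s_pond_max)
    blowdown = seawater * s_sea / s_pond_max
    return seawater, blowdown

evap = 0.010 * 4.3e4    # m^3/day: 10 mm/day evaporation over a 4.3 ha pond
sw, bd = replenishment(evap=evap, rain=0.0, freshwater=0.0)
print(f"seawater make-up {sw:.0f} m^3/day, blowdown {bd:.0f} m^3/day")
```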
Liquid fuel production capacity
Hydrothermal liquefaction-based biorefinery methods have shown conversion of total biomass to oil at an efficiency of 55% [31, 32], which is significantly higher than processes based on traditional oil extraction (e.g., triacylglyceride extraction). Based on the simulated average biomass productivities (8.6 g m−2 day−1 to 22.1 g m−2 day−1, see Fig. 2a, Table 3) and downstream processing, this equates to biodiesel yields ranging from 17 kL ha−1 year−1 (Amsterdam, Netherlands) to 44.7 kL ha−1 year−1 (Mombasa, Kenya). A summary of reported oil yields across 20 studies [44] showed estimates of algal-based biofuel ranging from 10 kL up to 130 kL ha−1 year−1; however, the vast majority of studies ranged from 15 to 60 kL, suggesting that our estimates are in the mid-range of previous reports.
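The figures above follow from straightforward unit conversion. The sketch below reproduces them approximately, assuming the 55% HTL mass conversion quoted above and a product density close to 1 kg L−1 for the mass-to-volume step; that density value is our assumption, not a figure taken from the paper.

```python
# Areal biomass productivity -> annual fuel yield, using the 55% HTL conversion
# quoted above. The ~1 kg/L product density used for the mass-to-volume step is
# our assumption, not a figure taken from the paper.

def fuel_yield_kL_per_ha_yr(productivity_g_m2_day, conversion=0.55, density_kg_per_L=1.0):
    biomass_t_ha_yr = productivity_g_m2_day * 365.0 * 1e4 / 1e6   # g m^-2 d^-1 -> t ha^-1 yr^-1
    fuel_t_ha_yr = biomass_t_ha_yr * conversion
    return fuel_t_ha_yr / density_kg_per_L                        # t/ha/yr ~= kL/ha/yr near 1 kg/L

print(fuel_yield_kL_per_ha_yr(8.6))    # ~17 kL ha^-1 yr^-1 (Amsterdam)
print(fuel_yield_kL_per_ha_yr(22.1))   # ~44 kL ha^-1 yr^-1 (Mombasa)
```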
Technoeconomic evaluation
Scenario 1—For-profit business model
Under a commercial for-profit business model scenario (Fig. 4a), the IRR is calculated for the described algae biodiesel production system at each of the 12 chosen locations on the basis of a US$ 2 L−1 MDSP (Graphical abstract, Table 3). This enables the identification of specific cost components that can be further optimized to drive the MDSP down towards cost parity with fossil-fuel-based diesel. At all locations, the proportional contribution (US$ L−1) rather than the absolute cost (US$ system−1) of each component is shown. To agglomerate construction and operational costs, all future costs were discounted at inflation-adjusted interest rates prevailing in each jurisdiction (Table 2). IRR values are presented for a full for-profit business model (at an assumed US$ 2 L−1 MDSP at the factory gate) and range from –7% (Amsterdam, Netherlands) to 22.1% (Mombasa, Kenya), with four locations presenting an IRR > 18% (Table 3).
Scenario 2—not-for-profit public utility model
The not-for-profit public utility model excludes the need to generate profit and assumes 0% IRR, 0% tax and a base interest rate aligned to the respective government bond rates. The achievable minimum selling price and component breakdowns are shown in Fig. 4b. The achievable MDSP under a public utility model ranged from US$ 1.15 L−1 (Chennai, India) to US$ 2.61 L−1 (Amsterdam, Netherlands), i.e., US$ 4.35–9.87 gal−1.
Production component analysis
Labor cost in terms of the final MDSP ranged from 6.8% in Chennai to 38% in Amsterdam (Fig. 4a). It includes direct-wage costs, non-wage on-costs and labor productivity in each location (Table 2). Process automation could significantly impact the final product price.
Equipment supply and operational supply costs were reasonably consistent (21–33% of MDSP) across all jurisdictions (Fig. 4a), being based on international supply prices. The differences were predominantly due to different production rates and applicable discount rates (Table 2). It is not anticipated that major savings can be made here but incremental improvements are possible.
Fabrication (4–8% of MDSP) was modelled to occur in each local jurisdiction, except the Netherlands, Spain, USA and Australia (Fig. 4a). Importation of fabricated items (e.g., steelwork) into these excepted countries from lower cost centers resulted in minor real cost differences across the range. Notably, high discount rates reduce the contribution of future costs and consequently increase the impact of fabrication costs associated with upfront construction. In some countries (e.g., Turkey, Brazil and Kenya) this effect had a notable impact. Consequently, it is anticipated that any savings in fabrication will have little impact on MDSP.
Nutrient costs (2–9% of MDSP) are generally directly proportional to the biomass production in each location (Fig. 4a). The major exception to this was Abu Dhabi, where high evaporation rates required increased saline discharge resulting in high nutrient losses. The use of strains able to grow in hyper saline conditions can help to reduce nutrient losses and freshwater costs as discharge between rain events can potentially be reduced.
Land costs were a comparatively small contributor to overall MDSP and viability (Fig. 4a) but ranged from 0.2% in Tunisia and Australia, to 6.8% in Turkey (influenced by the discount rate) and 7.2% in the Netherlands. These figures represent the current value of suitable land. Ultimately microalgae system deployment, however, may be affected by the absolute land availability in some locations, such as Europe, USA and China. It is anticipated that significant savings in land costs are unlikely.
Miscellaneous remaining costs (e.g., insurance and land use charges) were small and had little effect on the MDSP.
Overall, the average production costs of the top 10 sites contributed US$ 1.15 to the US$ 2 L−1 MDSP. The remaining US$ 0.85 of the for-profit model comprised policy- and profit-related costs.
CapEx and OpEx: the CapEx for the 500 ha facility (Fig. 4c) ranged between US$ 128–245 Million (i.e., US$ 256,000–490,000 ha−1), with the three largest cost components being the growth, HTL/refining and harvest/concentration systems. On a per hectare basis, the sub-component CapEx cost of the algae production portion of the 500 ha facility (Darwin, Australia) was US$ 446,000 ha−1. This is ~ 15% below the actual construction cost of a 0.4 ha facility (US$ 525,000). This cost saving is in line with the economies of scale expected from the scale-up from 0.4 to 500 ha (1250-fold). The annual OpEx (Fig. 4d) ranged between US$ 7.7–16.4 Million (i.e., US$ 15,400–32,800 ha−1), with the three largest cost components being growth, harvest and utilities.
However, CapEx and OpEx alone were insufficient to predict profitability. For example, despite Darwin and Amsterdam having similarly high CapEx values (~ US$ 223 and US$ 245 Mil, respectively) and OpEx (US$ 13.7 Mil year−1 and US$ 12.6 Mil year−1), due to its operational conditions Darwin yielded an IRR of 13.2% (vs. Amsterdam –7%) and an MDSP of US$ 1.23 (vs. Amsterdam US$ 2.61) in the not-for-profit scenario. This highlights the importance of local climatic and operational conditions.
Employment: the facility will typically employ around 290 personnel during 2 years of design and construction and 74 personnel on a continuous basis during the 30-year operational life of the plant (based on Darwin, Australia).
Local policy settings also have a major impact. For example, a public utility approach can considerably reduce the price of fuel providing a key step on the path to a profitable commercial renewable fuel industry by attracting the required investment needed to advance technology and commercial biorefinery co-production options.
Policy effects
Interest rates were the second largest factor affecting financial viability. Their effect on the MDSP varied between 5% in Spain and 44% in Turkey (Fig. 4a). The 24% project finance rate prevailing in Turkey during 2018, when these data were assembled, was the primary reason that this location did not return a positive IRR.
Corporate tax rates varied between 0% (Abu Dhabi) and 34% (Brazil) and the impact was proportional to the profitability at each location.
Profit: the most profitable location was Almeria primarily due to its very low discount rates and despite its modest 10.9% IRR. At the set US$ 2 MDSP price, most of the locations (Fig. 4a) demonstrated surprisingly similar inflation adjusted profits. Haikou, China had a comparatively low 9.5% IRR further reduced by the 6.5% discount rate. This lower IRR may stem from the particular climatic conditions in the selected location of Haikou, whereas other locations within China may deliver better results. The least profitable locations were Izmir (Turkey) and Amsterdam (Netherlands). While interest rates were the primary negative factor for Turkey, the Netherlands was affected by a combination of high labor and land costs and the lowest biomass productivity (8.6 g m−2 day−1). This information suggests that Turkey would be more attractive for renewable fuel production under conditions of reduced sovereign risk; Amsterdam appears better suited to the expansion of microalgae industries focused on higher value products.
The CO2-eq emissions of microalgae diesel correspond to about one-third of those of non-renewable diesel based on the boundary conditions set in this study, and so process profitability would benefit from carbon pricing. Carbon pricing was (in 2018) only applicable in 4 of the 12 locations: Amsterdam, Netherlands (US$ 8.20 T−1 CO2-equivalent greenhouse gas emissions, CO2eq); Almeria, Spain (US$ 8.20 T−1 CO2eq); Chennai, India (US$ 5.85 T−1 CO2eq); and Acapulco, Mexico (US$ 3.50 T−1 CO2eq). Nevertheless, both the number of locations governed by a carbon price and the price itself are forecast to rise in the coming years. The Carbon Pricing Leadership Coalition forecasts that a carbon price of US$ 100 T−1 by 2030 [45] will be needed as one of a series of measures to stay within a 2 °C rise in global temperatures. The effect of carbon pricing was, therefore, also analyzed at US$ 100 T−1 for Almeria (Fig. 4a) to measure its effects at this forecast future price point. Under a US$ 100 T−1 carbon price, the profitability rose from 10.9 to 14.1%.
A rapidly expanding body of advanced climate [46] and global energy-use data [7] has firmly established the urgent need for strategic leadership and action on CO2 emissions reductions. Failure to deliver this is forecast to influence the future for centuries, if not millennia [46]. Despite significant advances in renewable stationary energy and electric vehicles, parallel development of renewable fuels (e.g., methane, ethanol, high-density liquid fuels and H2) is critical to meet international COP21 Paris CO2 emission reduction commitments and key UN Sustainable Development Goals (in particular SDG 7: affordable and clean energy and SDG 13: climate action, and others indirectly) [47]. Microalgae-based renewable fuel systems are a frontrunner option that can help to support this energy mix as they can supply high-density liquid fuels for aviation, shipping and long-haul transport using existing infrastructure with relatively low environmental impact. Moreover, the vulnerability of global supply chain disruptions revealed during the COVID-19 crisis underscores the importance of decentralized and distributed energy networks consistent with algae-based fuel production.
Prices of non-renewable diesel over the past 20 years have ranged between US$ 0.19–1.04 L−1. For the purposes of this study, we have set a benchmark target for microalgae diesel to achieve price parity at US$ 0.80 L−1 (US$ 3.02 gal−1). It should, however, be noted that in 2019 the International Monetary Fund (IMF) concluded that fossil fuel subsidies, 'defined as fuel consumption times the gap between existing and efficient prices (i.e., prices warranted by supply costs, environmental costs, and revenue considerations), for 191 countries', ranged between US$ 4.7 trillion (2015) and US$ 5.2 trillion (2017), corresponding to 6.3–6.5% of annual GDP, respectively. Furthermore, the IMF concluded that 'Efficient fossil fuel pricing in 2015 would have lowered global carbon emissions by 28 percent and fossil fuel air pollution deaths by 46 percent, and increased government revenue by 3.8 percent of GDP' [48]. These factors are likely to exert an upward pressure on the price of traditional non-renewable diesel into the future. In contrast, technical and policy advances foresee a downward trajectory of the microalgae renewable fuel price toward an intersection with the price of non-renewable diesel, especially if meaningful carbon pricing can be implemented.
In terms of technical advances, Fig. 2b shows that the use of a dual microalgae strain approach, designed to optimize biomass productivity over the full 12-month period, can increase the IRR in Abu Dhabi from 11.5 to 14.4%. Recent advances in synthetic cell engineering technologies have the potential to greatly improve algae traits for increased photosynthetic efficiency, biomass and lipid yields. For instance, ExxonMobil (EM) has partnered with Synthetic Genomics, Inc. (SGI) with the aim of producing 10,000 barrels of algae fuels per day by 2025 [49]. Using synthetic biology techniques, EM-SGI researchers doubled the lipid content of Nannochloropsis gaditana by fine-tuning a genetic switch that partitions carbon to oil, without compromising growth [50].
Automated pond construction techniques and process automation are likely to reduce CapEx and OpEx. Atmospheric CO2 capture [51], optimisation of light capture [40, 52, 53], production conditions [54, 55], strain selection [56] and breeding [57, 58] can also increase productivity. Improved biomass productivity can also reduce harvesting costs due to increased cell densities (Fig. 3). Bio-refinery concepts for the co-production of fuel and other higher value co-products can also improve profitability (see below).
Our analyses show that under a for-profit business model focused only on diesel production, 10 of the 12 locations achieved a minimum diesel selling price (MDSP) under US$ 1.85 L−1/US$ 6.99 gal−1 and nine under US$ 1.60 L−1 (US$ 6.04 gal−1). While encouraging, US$ 1.60 L−1 is still US$ 0.80 L−1 above the non-renewable diesel benchmark price of US$ 0.80 L−1. Increased international carbon pricing could reduce this gap but has proven difficult and consequently this study highlights an alternative path to competitive low CO2 emissions renewable fuel systems [29].
Under the not-for-profit utility model, eight locations achieved an MDSP of less than US$ 1.25 (US$ 4.73 gal−1). This price comparison can be extended to most other fuel types (e.g., jet fuel, petrol and bunker fuel), as the production and processing costs are similar on an energy content basis. The establishment of fuel utilities could, therefore, bring microalgae fuel prices to within US$ 0.45 L−1 of the US$ 0.80 non-renewable diesel benchmark price, and less in an environment in which fossil fuel subsidies are reduced. Chennai actually returned an MDSP of US$ 1.15, reducing this gap to US$ 0.35 L−1. While a fuel price of US$ 1.15–1.25 is still US$ 0.35–0.45 L−1 above the US$ 0.80 non-renewable diesel benchmark, the introduction of co-product streams (e.g., protein, biopolymers and nanomaterials) could bridge this gap on the path to fully commercial biorefineries under future policy settings in which the carbon price increases over time.
Microalgae-based fuels also offer local benefits through the provision of employment. For example, the Darwin (Australia) facility would employ 290 personnel during 2 years of design and construction and 74 personnel on a continuous basis during the 30-year operational life of a 500 ha plant. It would also support sustainable economic development which in turn can generate tax income [12]. Internationally, these technologies could provide a series of advantages which range from economic resilience and increased fuel, climate, political, social and environmental security enshrined in the UN Sustainable Development Goals (in particular Affordable and Clean Energy and Climate Action). Microalgae also provide mechanisms to contribute to circular economies and to support initiatives to keep these within our planetary boundaries.
The CapEx for the twelve 500 ha facilities simulated (Fig. 4c) ranged between US$ 128–245 Million (i.e., US$ 256,000–490,000 ha−1) and the annual OpEx (Fig. 4d) between US$ 7.7–16.4 Million (i.e., US$ 15,400–32,800 ha−1). CapEx and OpEx alone were insufficient to predict profitability, as climatic, operational and economic conditions had major impacts (Figs. 1, 3 and 4), highlighting the importance of conducting the location-specific analyses presented. Under a for-profit business model focused only on diesel production, 10 of the 12 locations achieved a minimum diesel selling price (MDSP) under US$ 1.85 L−1 (US$ 6.99 gal−1), while under the not-for-profit utility model, eight locations achieved an MDSP of less than US$ 1.25 (US$ 4.73 gal−1). Moving forward, the judicious use of technology and policy optimisation could help to bridge the gap on the path to fully commercial biorefineries under future policy settings in which the carbon price increases over time. The TELCA model can now be used to enable model-guided systems design, assist with systems optimization, de-risk scale-up and advance business models. The analysis presented also provides governments and other investors with a solid basis on which to assess whether they wish to encourage the establishment of a microalgae industry in their jurisdiction, and if so, which technical advances and policy settings are likely to be most favorable. The analysis indicates that microalgae high-density renewable liquid fuels could be produced close to competitively in a broad range of countries (Graphical abstract and Fig. 4) and that price parity is likely achievable through the introduction of scalable and higher value co-product streams (e.g., protein and biopolymers). As has been demonstrated in numerous other industries, early adopters are likely to be best positioned to establish the critical mass necessary to develop beneficial value chains, supply local markets and expand export opportunities.
The data sets used and/or analyzed during the current study are either reported in additional Data or available from the corresponding author on reasonable request.
International Energy Agency. Global energy and CO2 status report. Paris: International Energy Agency; 2019.
World Bank. International comparison program database. Economic policy & debt: purchasing power parity. https://www.worldbank.org/en/programs/icp#1; 2020.
OECD. OECD green growth studies energy. Paris: OECD publishing; 2012.
British Petroleum. BP statistical review of world energy. London: British Petroleum Company; 2019.
International Energy Agency. CO2 emissions from fuel combustion-highlights 2019. Paris: International Energy Agency; 2019.
McCollum DL, et al. Energy investment needs for fulfilling the Paris Agreement and achieving the Sustainable Development Goals (vol 3, pg 589, 2018). Nat Energy. 2018;3:699–699. https://doi.org/10.1038/s41560-018-0215-z.
Wagner L, Ross I, Foster J, Hankamer B. Trading off global fuel supply, CO2 emissions and sustainable development. PLoS ONE. 2016. https://doi.org/10.1371/journal.pone.0149406.
Pimentel D, et al. Food versus biofuels: environmental and economic costs. Hum Ecol. 2009;37:1. https://doi.org/10.1007/s10745-009-9215-8.
Marta A, Orlando F, Mancini M, Orlandini S. In: Hussey K, Pittock J, Dovers S, editors. Climate, energy and water: managing trade-offs, seizing opportunities. Cambridge: Cambridge University Press; 2015. p. 108–22
Ringsmuth AK, Landsberg MJ, Hankamer B. Can photosynthesis enable a global transition from fossil fuels to solar fuels, to mitigate climate change and fuel-supply limitations? Renew Sust Energ Rev. 2016;62:134–63. https://doi.org/10.1016/j.rser.2016.04.016.
United Nations. Global trends in renewable energy investment 2018. New York: United Nations; 2018.
Roles J, et al. Charting a development path to deliver cost competitive microalgae-based fuels. Algae Res. 2019;45:101721.
Davis R, Aden A, Pienkos PT. Techno-economic analysis of autotrophic microalgae for fuel production. Appl Energ. 2011;88:3524–31. https://doi.org/10.1016/j.apenergy.2011.04.018.
Borowitzka MA. Algal biotechnology products and processes—matching science and economics. J Appl Phycol. 1992;4:267–79. https://doi.org/10.1007/Bf02161212.
Stephens E, et al. An economic and technical evaluation of microalgal biofuels. Nat Biotechnol. 2010;28:126–8. https://doi.org/10.1038/nbt0210-126.
Norsker NH, Barbosa M, Wijffels R. Microalgal biotechnology in the production of nutraceuticals. Biotechnol Funct Foods Nutraceuticals. 2010. https://doi.org/10.1201/9781420087123-c17.
Williams PJLB, Laurens LMJE, Science E. Microalgae as biodiesel & biomass feedstocks: review & analysis of the biochemistry, energetics & economics. Energy Environ Sci. 2010;3:554–90.
Lundquist TJ, Woertz IC, Quinn NWT, Benemann J. A realistic technology and engineering assessment of algae biofuel production. Energy Biosciences Institute (2010).
Jones SB, et al. Process design and economics for the conversion of algal biomass to hydrocarbons: whole algae hydrothermal liquefaction and upgrading. Richland: Pacific Northwest National Lab (PNNL); 2014.
Thilakaratne R, Wright MM, Brown RCJF. A techno-economic analysis of microalgae remnant catalytic pyrolysis and upgrading to fuels. Fuel. 2014;128:104–12.
Benemann J, Goebel R, Weissman J, Augenstein DC. Microalgae as a source of liquid fuels. Report to DOE Office of Energy Research; 1982. p. 1–17.
Weyer KM, Bush DR, Darzins A, Willson BDJB. R theoretical maximum algal oil production. Bioenergy Res. 2010;3:204–13.
Béchet Q, Shilton A, Guieysse BJBA. Modeling the effects of light and temperature on algae growth: state of the art and critical assessment for productivity prediction during outdoor cultivation. Biotechnol Adv. 2013;31:1648–63.
Davis RE, et al. Integrated evaluation of cost, emissions, and resource potential for algal biofuels at the national scale. Environ Sci Technol. 2014;48:6035–42.
Bechet Q, Shilton A, Guieysse B. Modeling the effects of light and temperature on algae growth: state of the art and critical assessment for productivity prediction during outdoor cultivation. Biotechnol Adv. 2013;31:1648–63. https://doi.org/10.1016/j.biotechadv.2013.08.014.
Moody JW, McGinty CM, Quinn JC. Global evaluation of biofuel potential from microalgae. Proc Natl Acad Sci. 2014;111:8691. https://doi.org/10.1073/pnas.1321652111.
Quinn JC, Davis R. The potentials and challenges of algae based biofuels: a review of the techno-economic, life cycle, and resource assessment modeling. Bioresource Technol. 2015;184:444–52. https://doi.org/10.1016/j.biortech.2014.10.075.
Borowitzka MA. In: Borowitzka MA, Moheimani NR, editors. Algae for biofuels and energy. Netherlands: Springer; 2013. p. 255–64.
Roles J, et al. Charting a development path to deliver cost competitive solar fuels. Algae Res. 2018;45:101721.
Park JBK, Craggs RJ, Shilton AN. Wastewater treatment high rate algal ponds for biofuel production. Bioresour Technol. 2011;102:35–42. https://doi.org/10.1016/j.biortech.2010.06.158.
Li H, et al. Conversion efficiency and oil quality of low-lipid high-protein and high-lipid low-protein microalgae via hydrothermal liquefaction. Bioresour Technol. 2014;154:322–9. https://doi.org/10.1016/j.biortech.2013.12.074.
Jones S et al. Process design and economics for the conversion of algal biomass to hydrocarbons. US Department of Energy (PNNL-23227); 2014.
Jena U, Vaidyanathan N, Chinnasamy S, Das KC. Evaluation of microalgae cultivation using recovered aqueous co-product from thermochemical liquefaction of algal biomass. Bioresour Technol. 2011;102:3380–7. https://doi.org/10.1016/j.biortech.2010.09.111.
Edmundson S, et al. Phosphorus and nitrogen recycle following algal bio-crude production via continuous hydrothermal liquefaction. Algal Res. 2017;26:415–21. https://doi.org/10.1016/j.algal.2017.07.016.
Bechet Q, Shilton A, Fringer OB, Munoz R, Guieysse B. Mechanistic modeling of broth temperature in outdoor photobioreactors. Environ Sci Technol. 2010;44:2197–203. https://doi.org/10.1021/es903214u.
Lee E, Pruvost J, He X, Munipalli R, Pilon L. Design tool and guidelines for outdoor photobioreactors. Chem Eng Sci. 2014;106:18–29. https://doi.org/10.1016/j.ces.2013.11.014.
Bernard O, Remond B. Validation of a simple model accounting for light and temperature effect on microalgal growth. Bioresource Technol. 2012;123:520–7. https://doi.org/10.1016/j.biortech.2012.07.022.
Rosso L, Lobry JR, Flandrois JP. An unexpected correlation between cardinal temperatures of microbial-growth highlighted by a new model. J Theor Biol. 1993;162:447–63. https://doi.org/10.1006/jtbi.1993.1099.
Papadakis IA, Kotzabasis K, Lika K. A cell-based model for the photoacclimation and CO2-acclimation of the photosynthetic apparatus. Bba-Bioenergetics. 2005;1708:250–61. https://doi.org/10.1016/j.bbabio.2005.03.001.
Yarnold J, Ross IL, Hankamer B. Photoacclimation and productivity of Chlamydomonas reinhardtii grown in fluctuating light regimes which simulate outdoor algal culture conditions. Algal Res. 2016;13:182–94. https://doi.org/10.1016/j.algal.2015.11.001.
Geider RJ, Osborne BA. Respiration and microalgal growth—a review of the quantitative relationship between dark respiration and growth. N Phytol. 1989;112:327–41.
Feldman D, Lowder T, Schwabe P. PV project finance in the United States, 2016. Golden: National Renewable Energy Laboratory; 2017.
Hewes CD. Timing is everything: optimizing crop yield for Thalassiosira pseudonana (Bacillariophyceae) with semi-continuous culture. J Appl Phycol. 2016;28:3213–23. https://doi.org/10.1007/s10811-016-0900-x.
Moody JW, McGinty CM, Quinn JC. Global evaluation of biofuel potential from microalgae. Proc Natl Acad Sci USA. 2014;111:8691–6. https://doi.org/10.1073/pnas.1321652111.
The World Bank. Carbon pricing. Paris: Carbon Pricing Leadership Coalition; 2017.
Steffen W, et al. Trajectories of the earth system in the anthropocene. Proc Natl Acad Sci USA. 2018;115:8252–9. https://doi.org/10.1073/pnas.1810141115.
United Nations. Transforming our world: The 2030 agenda for sustainable development; 2015.
IMF. Global fossil fuel subsidies remain large: an update based on country. IMF working papers; 2019.
ExxonMobil. Advanced biofuels and algae research: targeting the technical capability to produce 10,000 barrels per day by 2025. Advanced Biofuels; 2018.
Ajjawi I, et al. Lipid production in Nannochloropsis gaditana is doubled by decreasing expression of a single transcriptional regulator. Nat Biotechnol. 2017;35:647. https://doi.org/10.1038/nbt.3865.
Service RF. Cost of carbon capture drops, but does anyone want it? Science. 2016;354:1362–3. https://doi.org/10.1126/science.354.6318.1362.
Sivakaminathan S, Hankamer B, Wolf J, Yarnold J. High-throughput optimisation of light-driven microalgae biotechnologies. Sci Rep. 2018. https://doi.org/10.1038/s41598-018-29954-x.
Wolf J, et al. Multifactorial comparison of photobioreactor geometries in parallel microalgae cultivations. Algal Res. 2016;15:187–201. https://doi.org/10.1016/j.algal.2016.02.018.
Wolf J, et al. High-throughput screen for high performance microalgae strain selection and integrated media design. Algal Res. 2015;11:313–25. https://doi.org/10.1016/j.algal.2015.07.005.
Sivakaminathan S, et al. Light guide systems enhance microalgae production efficiency in outdoor high rate ponds. Algal Res. 2020. https://doi.org/10.1016/j.algal.2020.101846.
Larkum AWD, Ross IL, Kruse O, Hankamer B. Selection, breeding and engineering of microalgae for bioenergy and biofuel production. Trends Biotechnol. 2012;30:198–205. https://doi.org/10.1016/j.tibtech.2011.11.003.
Mussgnug JH, et al. Engineering photosynthetic light capture: impacts on improved solar energy to biomass conversion. Plant Biotechnol J. 2007;5:802–14. https://doi.org/10.1111/j.1467-7652.2007.00285.x.
Oey M, et al. RNAi knock-down of LHCBM1, 2 and 3 increases photosynthetic H-2 production efficiency of the green alga Chlamydomonas reinhardtii. PLoS ONE. 2013. https://doi.org/10.1371/journal.pone.0061375.
The authors wish to acknowledge Hakan Karan (PhD candidate, Institute for Molecular Bioscience) for his constructive feedback and valuable assistance in the review of this paper.
The authors gratefully acknowledge the support of the Australian Research Council (LP150101147, DP150100740 and LP180100269) and the Science and Industry Endowment Fund (John Stocker Postdoctoral Fellowship PF16-087).
John Roles and Jennifer Yarnold—Joint first authors
Institute for Molecular Bioscience, The University of Queensland, 306 Carmody Road, Brisbane, QLD, 4072, Australia
John Roles, Jennifer Yarnold & Ben Hankamer
Centre for Policy Futures, Faculty of Humanities and Social Sciences, The University of Queensland, Brisbane, QLD, 4072, Australia
Jennifer Yarnold & Karen Hussey
John Roles
Jennifer Yarnold
Karen Hussey
Ben Hankamer
Authors made the following contributions to this paper: JR—software development, economic modelling, validation, paper writing. JY—software development, light modelling, validation, paper writing. KH—policy guidance. BH—concept, paper writing. All authors read and approved the final manuscript.
Correspondence to Ben Hankamer.
Supplementary information detailing the presented TELCA simulation.
Roles, J., Yarnold, J., Hussey, K. et al. Techno-economic evaluation of microalgae high-density liquid fuel production at 12 international locations. Biotechnol Biofuels 14, 133 (2021). https://doi.org/10.1186/s13068-021-01972-4
Received: 03 July 2020
Algae-based fuel
Fuel security
Transitioning towards GHG neutrality: The role of bioeconomy
Why does NASA plan to put a meteoroid in Lunar orbit instead of Earth orbit?
NASA is working on an asteroid retrieval mission. A small asteroid will be moved into Lunar orbit where it will be visited by spacewalking astronauts. Since it will be at most seven meters in diameter, I prefer to call it a meteoroid. The one which exploded over Chelyabinsk last year was about 20 meters in diameter and hence over 20 times more massive. The retrieved meteoroid will be too small to pose any danger to Earth. Also, I suppose it will have a lower speed relative to Earth than would an average interplanetary asteroid. They will look for a low delta-v asteroid.
Wouldn't it be easier to visit it in low Earth orbit and maybe use the ISS to investigate it?
nasa asteroid multi-launch neo
TildalWave
LocalFluff
$\begingroup$ Can you share some information regarding the mission? Is it official? $\endgroup$ – this Jan 26 '14 at 19:12
$\begingroup$ I don't know where it is in the political labyrinth but the president mentioned going to an asteroid and a retrieval to Lunar orbit was the NASA response to that. Some news link here: nbcnews.com/science/… a pdf somewhere in NASA web space: nasa.gov/pdf/746689main_SLS_Highlights_April_2013.pdf youtube illustration: youtube.com/watch?v=lg0uX0ogA5k I bet you google better than I do from there. $\endgroup$ – LocalFluff Jan 27 '14 at 15:21
$\begingroup$ You might prefer to call it a meteoroid, but you'd be wrong. Meteoroids are one meter in diameter or less (but more than 10 microns). $\endgroup$ – Mark Adler Jan 28 '14 at 6:15
It would take vastly more $\Delta V$ to get it to a low-Earth orbit. The targets selected are close enough to Earth's orbit about the Sun that it only takes around $200\,\mathrm{m/s}$ to get it into a distant retrograde orbit about the Moon. To get the thing to a low Earth orbit would be around $3\,\mathrm{km/s}$. The tyranny of the rocket equation makes that infeasible.
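As a rough check on the scale of these numbers (a sketch under two-body, impulsive, coplanar assumptions only): the vis-viva equation puts the burn from a 200 km LEO out to lunar distance at about 3.1 km/s, only about 0.1 km/s short of full escape, and by symmetry that is roughly what it would cost to bring something down from a barely-bound orbit near lunar distance into LEO.

```python
# Rough scale check with the vis-viva equation (two-body, impulsive, coplanar):
# the burn at a 200 km LEO to (a) raise apogee to lunar distance and (b) escape.
import math

MU_EARTH = 398600.4418          # km^3 s^-2
R_LEO    = 6378.0 + 200.0       # km
R_MOON   = 384400.0             # km, mean Earth-Moon distance

def vis_viva(r, a):
    """Orbital speed (km/s) at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU_EARTH * (2.0 / r - 1.0 / a))

v_circ   = vis_viva(R_LEO, R_LEO)                      # circular LEO speed
v_tli    = vis_viva(R_LEO, (R_LEO + R_MOON) / 2.0)     # perigee speed of the transfer ellipse
v_escape = math.sqrt(2.0) * v_circ                     # parabolic escape speed at LEO

print(f"LEO -> lunar-distance transfer: {v_tli - v_circ:.2f} km/s")    # ~3.1
print(f"LEO -> escape:                  {v_escape - v_circ:.2f} km/s") # ~3.2
```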
Mark Adler
$\begingroup$ Very interesting! That sounds like a potentially very good explanation. But could you please explain a bit about what basically makes it so? The Moon orbits Earth at only 1 km/s, that could intuitively explain 1/3 of the difference in your numbers. But why would near Earth asteroids have lower than that delta-V relative to the Moon than to Earth? Because they could be pushed into trajectories which pass by the Moon several times, maybe? $\endgroup$ – LocalFluff Jan 28 '14 at 14:17
$\begingroup$ Because the Moon has very nearly escaped from the Earth. Going the other way, it takes about $3.1\,\mathrm{km/s}$ to get from LEO (low-Earth orbit) to an orbit that just reaches the Moon. From LEO, it takes $3.2\,\mathrm{km/s}$ to escape Earth's gravity completely. Only $100\,\mathrm{m/s}$ more. So in terms of $\Delta V$, the Moon is much closer to the rest of the Solar System than it is to Earth. $\endgroup$ – Mark Adler Jan 28 '14 at 15:19
$\begingroup$ @LocalFluff Gravity well of the Moon is shallower since it's only about 1.23% the mass of the Earth. It's also non-negligible that it orbits the Earth, which can potentially also reduce required delta-v, depending on how the captured object moves relative to it. Since we're talking of NEO asteroids here that would already be at orbital velocities close to those of the Earth-Moon system, that difference wouldn't be small. What Mark answers in numbers, while I don't have any doubt is correct, also feels right. $\endgroup$ – TildalWave Jan 28 '14 at 15:19
Safety of our blue planet. Eventually, gravity anomalies would cause even a perfectly orbited object (a moonlet?) to precess and hit the body it orbits around. Since orbiting an asteroid means removing a large fraction of its momentum to bring it closer to the celestial bodies it naturally orbits (NASA's plans involve capturing a near-Earth asteroid, or NEO, as part of its Asteroid Initiative program), it wouldn't impact with such velocity as the Chelyabinsk meteor did (the required LEO velocity is only about 8 km/s, which would have to be reduced further to deorbit, while the Chelyabinsk meteor entered Earth's atmosphere at an estimated velocity of 18.6 km/s), which caused it to disintegrate into smaller fragments in the atmosphere due to friction and aerodynamic heating. So it would still pose a significant threat to Earth, even if it was much smaller than the Chelyabinsk superbolide was. I'm myself much more in favor of this happening on the Moon than on the Earth.
It also wouldn't make much difference for the safety of the astronauts studying this asteroid (which, being much bigger than about 1 m in diameter, can't be considered a meteoroid), or make it much more difficult to reach. The rock couldn't be parked in Low Earth Orbit (LEO) anyway, since atmospheric drag there would still be sufficient to decay its orbit, even though in LEO astronauts would still be somewhat protected from the solar wind and the high-energy protons of cosmic radiation by the Van Allen radiation belt, and could be reached, or sent supplies, more easily by currently available orbital launch vehicles in case something went sour.
It is also easier (smaller required delta-v) to later haul the rock back into a safe orbit in cis-lunar space once you're done studying it, if it is orbiting a smaller-mass body that is already a significant fraction of the way out toward the edge of Earth's Hill sphere.
TildalWave
Multiple reasons exist.
Weapon of Mass Destruction
Any large body in orbit is a potential weapon of mass destruction. Drop a 20-ton rock from orbit, and it may not survive, but if you do it right, it creates a crater some 100m across.
While NASA is not planning on using deadfall ortillery, the possibility is a political issue.
Ownership and access
Legally, all natural bodies in space are common property of all Earth's peoples.
Putting it in LEO would make it accessible to a variety of states that are seen as politically untrustworthy. The key example is North Korea - they have demonstrated the ability to launch SRBM's, which are precursors to a space program as well as to ICBMs. (The initial US space launchers were in fact developed as part of ICBM programs.) The concern that these nations may reach and deorbit a body as a form of terrorism is a real and present (but low probability) threat.
A Lunar Orbit location has far less access to everyone, however, it's a near total bar to most nations. The US, Russia, China, India, and the EU have the capability to get there, as evidenced by the ability to land lunar rovers and install lunar orbiters. It's still close enough for most real-time experimental controls.
More margin for error
The lack of atmosphere ➀ means that a miss does not result in slowing and impact, nor in surface melting of the body.
Lower ∆V
Less overall energy is needed to put items into a stable Lunar orbit than a stable Earth orbit. The speed change needed is less, and thus the duration of acceleration and total thrust can be less.
Less to impact
Fundamentally, inserting anything into Earth orbit involves hitting the mark at the right time and speed to not hit anything else in Earth orbit. There are huge numbers (thousands) of tracked items, and hundreds of useful items, including a manned facility, that might be affected.
Lunar orbit is far less crowded, and no lunar orbiting equipment is essential infrastructure.
Much less impact on satellite orbits.
If I recall correctly, the intent is to move an asteroid of sub-kilometer diameter - but preferably over 100m diameter. These are objects large enough to have a noticeable effect upon other orbiting bodies. The cumulative effect could be catastrophic. The impact is tiny, but would be persistent and affect all other orbiting structures notably.
Less tidal stress
Tidal stress being the difference in pull between the ends of an object in orbit around a more massive object. Lunar orbit results in far less tidal stress than Earth orbit, due to the much lower lunar mass. (Note that Earth tidal stress will still exist in lunar orbit, but will be considerably less than even geosynch.)
Proving the tech doesn't require a Earth orbit.
The technology in question is the ability to put asteroids into desired orbits. The principles are exactly the same whether that's Earth, Lunar, or Martian orbit. Success will result in proving the technology.
A base for future science
A lunar orbiting asteroid is a good base for further lunar and deep space oriented science. Being closer to the moon, it provides relay access for far-side rovers, as well as a location outside the van allen belts to test various practical radiation shielding approaches without having to land on the moon itself.
➀ Lunar atmosphere is present, but is at so low a fraction of a pascal at surface as to be safely ignored for orbital purposes.
aramis
$\begingroup$ Thanks for your answers. Low deltaV is the only valid reason, I think, and it might be enough. But the terrorist paranoia is an illness and planet safety is a joke. The max size will be 8 meter with a mass of maybe 3 tons. A solar ion thruster could never do anything with a real asteroid for planetary defence or of economic importance. The meteoroid retrieval mission has obviously been designed only to use the ion thruster, SLS and Orion. Not for science, security or resource extraction. I read online that many are of the opinion that this crazy mission will never happen. $\endgroup$ – LocalFluff Feb 16 '14 at 11:00
$\begingroup$ The initial proposal I read about was for a 100m asteroid - and using a mass driver, not an ion drive. The ion drive is more readily fundable. And NASA has done crazier things. The Kittinger jump comes to mind. $\endgroup$ – aramis Feb 17 '14 at 4:23
$\begingroup$ For small-asteroid science, they could go to Chelyabinsk instead. They have half a ton pieces of a 20 meter asteroid highly accesible right there on the ground. That'd save a number of billions of dollars. But what to use the SLS+Orion and ion thruster for now? Cancellatiooon... $\endgroup$ – LocalFluff Feb 18 '14 at 11:14
$\begingroup$ @LocalFluff you're utterly missing the point of moving an asteroid into orbit - not the least of which is determining if 0.01G is significantly different physiologically and psychologically, from freefall. A 20m asteroid is worthless for those purposes - a 100m asteroid, as the early proposals were, was sufficient to start looking at the effects separate from freefall. $\endgroup$ – aramis Feb 19 '14 at 7:22
$\begingroup$ A 100 m asteroid would be different, yes. For planetary defence, for futuristic mining visions, and perhaps scientifically if they generally form quite differently than 7 m asteroids do. But now, 7 m it is! (Wouldn't it be much cheaper to put the largest recovered part of the Chelyabinsk meteorite in Lunar orbit instead?) :-p $\endgroup$ – LocalFluff Feb 19 '14 at 11:42
Why integrate over $2\pi$ in inverse DTFT?
In the DTFT of a signal, the spectrum of a sequence is periodic with period $2\pi$, and all the information needed to derive the original signal from its spectrum is contained in $-\pi <\omega <\pi$.
But why do they integrate over $2\pi$ in the inverse DTFT, and not from $-\infty$ to $+\infty$ as in the continuous Fourier transform? After all, when you derive a transform of a signal, the inverse of it is not arbitrary: to recover the original signal, you can't integrate over another interval, because all the information is contained in there.
discrete-signals fourier-transform
Zorich
$\begingroup$ Doesn't your first sentence give the answer to your question? $\endgroup$ – Matt L. May 21 '13 at 15:00
$\begingroup$ This transform is somehow derived from the continuous transform,and the definition for inverse of it isn't arbitrary at all in CFT.If you don't integrate over all real axis, you will have wrong answer for $x$. Because this inversion formula must give you your original function,from which you have obtained the spectrum. $\endgroup$ – Zorich May 21 '13 at 15:10
$\begingroup$ Yes, but there's a difference between continuous and discrete-time transforms, as you have correctly pointed out. You can see the inverse transform in the discrete-time case simply as computing the Fourier series coefficients of the periodic spectrum. $\endgroup$ – Matt L. May 21 '13 at 15:32
$\begingroup$ Short answer: for the same reason that when calculating a Fourier series you only integrate over one period of the waveform instead of over the entire real line (as you do in the Fourier transform). It's due to the periodicity of the input (in your case, the periodic DTFT spectrum). $\endgroup$ – Jason R May 21 '13 at 15:36
There are two reasons:
As you say, the spectrum is periodic, so all of the information is contained in the $-\pi < \omega <\pi$ region. Thus, integrating over that region tells you everything that there is to be learned from the inverse transform.
Integrating from $-\infty < \omega< \infty$ would only give answers of $0$, $\infty$, or $-\infty$. The reason is that integrating over $-\pi < \omega <\pi$ will generally give a finite answer, which can be negative (e.g. $-4$), $0$, or positive (e.g. $2.5$). If you integrate over $-\infty < \omega< \infty$, each $2\pi$ region will give the exact same answer, so negative results will accumulate to $-\infty$, zeros will sum to $0$, and positive results will sum to $\infty$. (Note that the results are often complex, but that doesn't change the primary point; it just increases the dimensionality of the infinities.)
Mathematically, you have to integrate over $2\pi$: $$X(\omega) = \sum_{n=-\infty}^{\infty} x[n] \,e^{-i \omega n}$$
$$x[n]= \frac{1}{2 \pi}\int_{2\pi} X(\omega)\cdot e^{i \omega n} d\omega= \frac{1}{2 \pi}\int_{2\pi} \sum_{k=-\infty}^{\infty} x[k] \,e^{-i \omega k}\cdot e^{i \omega n} d\omega $$ Because of the orthogonality of the complex sinusoids, the integral is zero for all $k$ except $k=n$: $$\to x[n]= \frac{1}{2 \pi}\int_{2\pi}x[n]\,d\omega $$
In the case of a continuous Fourier transform you'll arrive at a delta function and you have to integrate over $(-\infty,\infty)$.
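For a concrete check, the sketch below (not part of the original answers; the test sequence and grid size are arbitrary choices) numerically approximates the inverse-DTFT integral over a single $2\pi$ period and recovers the original samples:

    import numpy as np

    # A short test sequence x[n], n = 0..4 (arbitrary choice)
    x = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
    n = np.arange(len(x))

    # Frequency grid covering exactly one period, -pi <= w < pi
    w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
    dw = w[1] - w[0]

    # DTFT: X(w) = sum_n x[n] e^{-j w n}
    X = x @ np.exp(-1j * np.outer(n, w))

    # Inverse DTFT over a single 2*pi interval recovers the samples
    x_rec = (X @ np.exp(1j * np.outer(w, n))) * dw / (2 * np.pi)
    print(np.round(x_rec.real, 6))   # [ 1.  -2.   0.5  3.   0. ]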
Mo_
Payout Ratio Definition
What Is Payout Ratio?
The payout ratio is a financial metric showing the proportion of earnings a company pays its shareholders in the form of dividends, expressed as a percentage of the company's total earnings. On some occasions, the payout ratio refers to the dividends paid out as a percentage of a company's cash flow. The payout ratio is also known as the dividend payout ratio.
The payout ratio, also known as the dividend payout ratio, shows the percentage of a company's earnings paid out as dividends to shareholders.
A low payout ratio can signal that a company is reinvesting the bulk of its earnings into expanding operations.
A payout ratio over 100% indicates that the company is paying out more in dividends than its earnings can support, which some view as an unsustainable practice.
Understanding the Payout Ratio
The payout ratio is a key financial metric used to determine the sustainability of a company's dividend payment program. It is the amount of dividends paid to shareholders relative to the total net income of a company.
For example, let's assume Company ABC has earnings per share of $1 and pays dividends per share of $0.60. In this scenario, the payout ratio would be 60% (0.6 / 1). Let's further assume that Company XYZ has earnings per share of $2 and dividends per share of $1.50. In this scenario, the payout ratio is 75% (1.5 / 2). Comparatively speaking, Company ABC pays out a smaller percentage of its earnings to shareholders as dividends, giving it a more sustainable payout ratio than Company XYZ.
While the payout ratio is an important metric for determining the sustainability of a company's dividend payment program, other considerations should likewise be observed. Case in point: in the aforementioned analysis, if Company ABC is a commodity producer and Company XYZ is a regulated utility, the latter may boast greater dividend sustainability, even though the former demonstrates a lower absolute payout ratio.
In essence, there is no single number that defines an ideal payout ratio because the adequacy largely depends on the sector in which a given company operates. Companies in defensive industries, such as utilities, pipelines, and telecommunications, tend to boast stable earnings and cash flows that are able to support high payouts over the long haul.
On the other hand, companies in cyclical industries typically make less reliable payouts, because their profits are vulnerable to macroeconomic fluctuations. In times of economic hardship, people spend less of their incomes on new cars, entertainment, and luxury goods. Consequently, companies in these sectors tend to experience earnings peaks and valleys that fall in line with economic cycles.
Payout Ratio Formula
$$\begin{aligned} &DPR=\dfrac{\textit{Total dividends}}{\textit{Net income}} \\ &\textbf{where:} \\ &DPR = \text{Dividend payout ratio (or simply payout ratio)} \end{aligned}$$
Some companies pay out all their earnings to shareholders, while others dole out just a portion and funnel the remaining assets back into their businesses. The measure of retained earnings is known as the retention ratio. The higher the retention ratio is, the lower the payout ratio is. For example, if a company reports a net income of $100,000 and issues $25,000 in dividends, the payout ratio would be $25,000 / $100,000 = 25%. This implies that the company boasts a 75% retention ratio, meaning it records the remaining $75,000 of its income for the period in its financial statements as retained earnings, which appears in the equity section of the company's balance sheet the following year.
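As a quick illustration (a minimal sketch using the hypothetical figures from this article, not an Investopedia tool), the payout and retention ratios can be computed as follows:

    def payout_ratio(total_dividends, net_income):
        """Dividend payout ratio: share of earnings paid out as dividends."""
        return total_dividends / net_income

    def retention_ratio(total_dividends, net_income):
        """Share of earnings kept as retained earnings (1 - payout ratio)."""
        return 1.0 - payout_ratio(total_dividends, net_income)

    # Figures from the example above: $25,000 of dividends on $100,000 of net income
    print(payout_ratio(25_000, 100_000))     # 0.25 -> 25%
    print(retention_ratio(25_000, 100_000))  # 0.75 -> 75%

    # Equivalent per-share form: DPS / EPS (Company ABC above)
    dps, eps = 0.60, 1.00
    print(dps / eps)                         # 0.6 -> 60%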
Generally speaking, companies with the best long-term records of dividend payments have stable payout ratios over many years. But a payout ratio greater than 100% suggests a company is paying out more in dividends than its earnings can support and might be cause for concern regarding sustainability.
What Does the Payout Ratio Tell You?
The payout ratio is a key financial metric used to determine the sustainability of a company's dividend payment program. It is the amount of dividends paid to shareholders relative to the total net income of a company. Generally, the higher the payout ratio, especially if it is over 100%, the more its sustainability is in question. Conversely, a low payout ratio can signal that a company is reinvesting the bulk of its earnings into expanding operations. Historically, companies with the best long-term records of dividend payments have had stable payout ratios over many years.
How Is the Payout Ratio Calculated?
The payout ratio shows the proportion of earnings a company pays its shareholders in the form of dividends, expressed as a percentage of the company's total earnings. The calculation is derived by dividing the total dividends being paid out by the net income generated. Another way to express it is to calculate the dividends per share (DPS) and divide that by the earnings per share (EPS) figure.
Is There an Ideal Payout Ratio?
There is no single number that defines an ideal payout ratio because the adequacy largely depends on the sector in which a given company operates. Companies in defensive industries tend to boast stable earnings and cash flows that are able to support high payouts over the long haul while companies in cyclical industries typically make less reliable payouts, because their profits are vulnerable to macroeconomic fluctuations.
KASTURI SINGH
Articles written in Journal of Earth System Science
Volume 129 All articles Published: 5 March 2020 Article ID 0084 Research Article
Robustness of best track data and associated cyclone activity over the North Indian Ocean region during and prior to satellite era
KASTURI SINGH JAGABANDHU PANDA M MOHAPATRA
There are few studies focusing on analysing climatological variation in cyclone activity by utilising the best track data provided by the India Meteorological Department (IMD) over the North Indian Ocean (NIO). The result of such studies has been beneficial in decision-making by government and meteorological agencies. It is essential to assess the quality and reliability of the currently available version of the dataset so that its robustness can be established and the current study focuses on this aspect. The analysis indicates that there is an improvement over the years in the quality and availability of the data related to cyclones over NIO, especially in terms of frequency of genesis, intensity, landfall etc. The available data from 1961 onwards has been found robust enough with the advent of satellite technology. However, there can be still missing information and inaccuracy in determining the location and intensity of cyclones during the polar satellite era (1961–1973). The study also indicates undercount of severe cyclones during the pre-satellite era. Considering the relatively smaller size of NIO basin, these errors can be neglected and thus, the IMD best track data can be considered as reliable enough for analysing cyclone activity in this region.
Volume 131 All articles Published: 28 September 2022 Article ID 0211 Research article
Simulated dynamics and thermodynamics processes leading to the rapid intensification of rare tropical cyclones over the North Indian Oceans
ARPITA MUNSI AMIT KESARKAR JYOTI BHATE KASTURI SINGH ABHISHEK PANCHAL GOVINDAN KUTTY RAMKUMAR GIRI
The life cycle dynamics and intensification processes of three long-duration tropical cyclones (TCs), viz., Fani (2019), Luban (2018), and Ockhi (2017), formed over the North Indian Ocean (NIO), have been investigated by developing a high-resolution (6 km ${\times}$ 6 km) mesoscale analysis using WRF and an En3DVAR data assimilation system. The release of CAPE in nearly saturated middle-level relative humidity caused intense diabatic heating, leading to an increase in low-level convergence triggering rapid intensification (RI). The strengthening of the relative vorticity tendency terms was due to vertical stretching (TC Fani) and middle tropospheric advection (TCs Luban and Ockhi). The increase or decrease in upper-tropospheric divergence led to RI through two different mechanisms. The increase in upper divergence strengthens the vortical convection (in TCs Luban and Fani) by enhancing the moisture and heat transport, whereas its decrease caused a reduction in the upper-level ventilation flow at 200 hPa followed by moisture accumulation, enhanced diabatic heating, and a strengthened warm core (TC Ockhi). The RI caused the vortex of the three cyclones to extend up to the upper troposphere. The well-organised wind during RI led the unorganised, weak, discontinuous vertical vortex columns to become organised with intense vertical velocity throughout the column. Spatial distributions of the Okubo–Weiss (OW) parameter showed the TC core dominated by vorticity rather than strain since the deep depression (DD) stage.
$\bf{Highlights}$
$\bullet$ The saturated middle-level relative humidity caused intense diabatic heating, and then release of CAPE led to a rise in low-level spin-up triggering the RI.
$\bullet$ The strengthening of the relative vorticity tendency terms was due to stretching (TC Fani) and middle tropospheric advection (TCs Luban and Ockhi).
$\bullet$ The increase or decrease in upper-tropospheric divergence led to RI through two different mechanisms.
$\bullet$ The RI caused the vortex of three cyclones to extend up to the upper troposphere.
$\bullet$ RI led unorganised, weak, discontinuous vertical vortex columns to become organised with intense vertical velocity throughout the column.
Mathematical modeling applied to renewable fishery management
Mst. Reshma Khatun| Md. Haider Ali Biswas*
Mathematics Discipline, Science, Engineering and Technology School, Khulna University, Khulna-9208, Bangladesh
[email protected]
Nowadays, controlling the dynamics of renewable resources such as fisheries and forests is a major environmental challenge. In this regard, this study aims to provide a simple mathematical modeling tool to study and monitor the dynamics of a system consisting of two regions: a reserved region and an unreserved region. A Holling type II functional response is considered to formulate the model. The boundedness of the solutions of the model is discussed. The model has been analyzed by establishing the existence of equilibrium points, and the conditions for stability and instability of the system have been derived. Finally, the reliability of the analytical model is confirmed by numerical simulations.
mathematical model, prey-predator model, Renewable resource, stability, nonlinear differential equation
Renewable resources are under extreme pressure worldwide in spite of taking efforts to design regulation to control the excessive and unsustainable exploitation. Fish is a renewable but finite resource. With the rapid growth of industrialization and population, the exploitation of fisheries has increased significantly. Although exploitation of resources is essential for the growth and development of a country, unplanned exploitation eventually leads to the extinction of the resources. Consequently, this will affect the growth and survival of species depending on the resource. From this point of view, it is a major challenge to manage renewable resources for the sustainable development of the country.
Mathematical modeling can play a significant role in the efficient and sustainable management of renewable resources. It is mainly used to describe the real phenomena leading to design better prediction, prevention, management and control techniques. Several well documented mathematical models regarding real life problems can be found in [1-7].
During the last few decades, several mathematical models for renewable fishery resource management have been described. Mathematical modeling of fishery harvesting was first studied by Clark [5]. Biswas et al. [1] presented a model for a fishery resource with a reserve area, studying the dynamics of a single-species fishery in a two-patch environment: a free fishing zone and a reserved zone where fishing is not allowed. Dubey et al. [6] presented a similar model for a fishery resource with a reserve area and also derived an optimal harvesting strategy using Pontryagin's Maximum Principle. Chaudhury [9] analyzed the dynamic optimization of combined harvesting of a two-species fishery. Kar [10] presented a model for a fishery resource with a reserved area facing prey–predator interaction, assuming that predation takes place in the unreserved zone; local and global stability and the optimal harvesting policy are also discussed. Roy et al. [11] investigated the effects of two predators on a prey population, considering different types of functional responses to formulate the model.
Based on the literature discussed above and to the best of our knowledge, this study proposes a model for the cultivation of black tiger prawn and extends the work of Biswas et al. [1]. That work discussed the dynamics of a single-species fishery in reserved and unreserved zones, but the effect of predation that the species might face in the unreserved zone was not studied. It also assumed that the fish population can migrate from the unreserved to the reserved zone and vice versa. In the present work, we aim to study the effect of predation on fish production, and migration from the reserved to the unreserved area is restricted. In this paper, a mathematical model of a prey–predator fishery is proposed with the help of a system of nonlinear differential equations. It is assumed that the fish species in the unreserved area are related through a prey–predator relationship. A Holling type II functional response is taken into account to study the interaction between prey and predator fish species. Harvesting is permissible only in the unreserved region, while predation and harvesting are not permitted in the reserved region. To analyze the model, the existence of equilibrium points, the dynamical behavior at these points, and the stability and instability conditions are discussed. Finally, numerical simulations are carried out to verify the analytical results of the proposed model.
2. Model Formulation
We consider a three-compartment fishery model consisting of two zones: a reserved zone where only the prey species can reside, and an unreserved zone where both prey and predator species can reside. Harvesting is permissible in the unreserved zone but prohibited in the reserved zone. The prey species can migrate from the unreserved to the reserved area, while the reverse is restricted. Keeping all this in view, the schematic diagram of the interaction among the fish species and the predator in the two zones is shown in Figure 1.
Figure 1. Compartmental diagram of the prey–predator fishery in the two zones
Taking the diagram represented in Figure 1 into consideration, the mathematical model of the prey predator fishery can be written as
$\frac{d{{x}_{1}}}{dt}={{r}_{1}}{{x}_{1}}\left( 1-\frac{{{x}_{1}}}{{{k}_{1}}} \right)-\left( \mu +\sigma +qE \right){{x}_{1}}-\frac{m{{x}_{1}}{{x}_{3}}}{a+{{x}_{1}}}$ (1)
$\frac{d{{x}_{2}}}{dt}={{r}_{2}}{{x}_{2}}\left( 1-\frac{{{x}_{2}}}{{{k}_{2}}} \right)-\beta {{x}_{2}}-\alpha {{x}_{2}}+\sigma {{x}_{1}}\text{ }$ (2)
$\frac{d{{x}_{3}}}{dt}={{r}_{3}}{{x}_{3}}\left( 1-\frac{{{x}_{3}}}{{{k}_{3}}} \right)+\frac{n{{x}_{1}}{{x}_{3}}}{a+{{x}_{1}}}-{{\beta }_{1}}{{x}_{3}}\text{ }$ (3)
$\text{with }{{x}_{1}}\left( 0 \right)\ge 0,\text{ }{{x}_{2}}\left( 0 \right)\ge 0\text{ and }{{x}_{3}}\left( 0 \right)\ge 0.$ (4)
Here, x1(t) denotes the biomass density of the prey fish species (black tiger prawn) in the unreserved zone, x2(t) represents the biomass density of the same species in the reserved zone, and x3(t) denotes the biomass density of the predator fish species in the unreserved zone. In this model, it is assumed that the predator consumes the prey population as an alternative food. Here, r1, r2 and r3 represent the intrinsic growth rates of the prey and predator species in the two zones, respectively, and k1, k2 and k3 are the corresponding environmental carrying capacities. Therefore, $\frac{{{r}_{i}}x_{i}^{2}}{{{k}_{i}}}$ (i=1,2,3) is the amount by which each fish population decreases due to the interaction among themselves. Let E be the total effort applied for harvesting the prey population in the unreserved area, q the catchability coefficient, and σ the migration rate of the prey from the unreserved to the reserved area. We take μ as the natural death rate of the prey in the unreserved area. Therefore, (μ+σ+qE)x1 is the amount by which the prey population in the unreserved area decreases due to fishing, migration and natural death. Let m be the depletion rate of the prey species due to predation and n be the growth rate of the predator due to this consumption. So, $\frac{m{{x}_{1}}{{x}_{3}}}{a+{{x}_{1}}}$ is the number of prey depleted through interaction with the predator species, where a denotes the saturation constant. Let β be the death rate of prey in the reserved area due to disease and α be the rate at which the prey population may be stolen due to insecurity. The term (α+β)x2 denotes the amount by which the prey population in the reserved area decreases. In this prey–predator system, we have considered a Holling type II functional response to describe the interaction between prey and predator species.
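As an illustration of how system (1)–(3) can be integrated numerically, the following minimal sketch uses placeholder parameter values chosen only for demonstration (they are not the values adopted later in Table 1 or in the authors' MATLAB simulations):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative placeholder parameters (not the values used in the paper)
    r1, r2, r3 = 1.0, 0.5, 0.5           # intrinsic growth rates
    k1, k2, k3 = 7.0, 9.0, 2.0           # carrying capacities
    mu, sigma, qE = 0.2, 0.3, 0.2        # natural death, migration, harvesting term q*E
    alpha, beta, beta1 = 0.1, 0.2, 0.4   # stolen rate, disease death, predator death
    m, n, a = 0.2, 0.2, 7.0              # predation loss/gain rates, saturation constant

    def fishery(t, y):
        x1, x2, x3 = y
        dx1 = r1*x1*(1 - x1/k1) - (mu + sigma + qE)*x1 - m*x1*x3/(a + x1)   # Eq. (1)
        dx2 = r2*x2*(1 - x2/k2) - (beta + alpha)*x2 + sigma*x1              # Eq. (2)
        dx3 = r3*x3*(1 - x3/k3) + n*x1*x3/(a + x1) - beta1*x3               # Eq. (3)
        return [dx1, dx2, dx3]

    sol = solve_ivp(fishery, (0, 70), [2.0, 2.0, 1.0])
    print(sol.y[:, -1])   # biomass densities (x1, x2, x3) after 70 days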
3. Model Analysis
The model (1)-(3) had been analyzed in order to describe the dynamics of the fish species. For the analysis of the model the following studies were considered:
3.1 Boundedness of the model
To show that the model system is biologically well posed, the following lemma must be satisfied.
Lemma 1: The set
$\Omega =\left\{ \left( {{x}_{1}},{{x}_{2}},{{x}_{3}} \right):w\left( t \right)={{x}_{1}}\left( t \right)+{{x}_{2}}\left( t \right)+{{x}_{3}}\left( t \right),0<w\left( t \right)<\frac{\delta }{\eta } \right\}$
attracts all solutions initiating in the interior of the positive orthant, where η is a constant and
$\delta =\frac{{{k}_{1}}}{4{{r}_{1}}}{{({{r}_{1}}+\eta -\mu -qE)}^{2}}+\frac{{{k}_{2}}}{4{{r}_{2}}}{{({{r}_{2}}+\eta -\beta -\alpha )}^{2}}+\frac{{{k}_{3}}}{4{{r}_{3}}}{{({{r}_{3}}+\eta )}^{2}}$
Proof: Let, $w\left( t \right)={{x}_{1}}\left( t \right)+{{x}_{2}}\left( t \right)+{{x}_{3}}\left( t \right),\text{ }\eta >0$ be a constant. Then we can write,
$\begin{align} & \frac{dw}{dt}+\eta w=\left( {{r}_{1}}+\eta -\mu -qE \right){{x}_{1}}-\frac{{{r}_{1}}}{{{k}_{1}}}{{x}_{1}}^{2}+\left( {{r}_{2}}+\eta -\alpha -\beta \right){{x}_{2}}-\frac{{{r}_{2}}}{{{k}_{2}}}{{x}_{2}}^{2} \\ & -\left( m-n \right)\frac{{{x}_{1}}{{x}_{3}}}{a+{{x}_{1}}}+\left( {{r}_{3}}+\eta -{{\beta }_{1}} \right){{x}_{3}}-\frac{{{r}_{3}}}{{{k}_{3}}}{{x}_{3}}^{2} \\ \end{align}$
Since m is the depletion rate coefficient of prey due to its intake by the predator and n is the growth rate coefficient of predator due to its interaction with their prey, so it is assumed that m≥n. Now η is chosen such that 0<η<β1.
$\begin{align} & \frac{dw}{dt}+\eta w\le \frac{{{k}_{1}}}{4{{r}_{1}}}{{\left( {{r}_{1}}+\eta -\mu -qE \right)}^{2}}+\frac{{{k}_{2}}}{4{{r}_{2}}}{{\left( {{r}_{2}}+\eta -\alpha -\beta \right)}^{2}} \\ & +\frac{{{k}_{3}}}{4{{r}_{3}}}{{\left( {{r}_{3}}+\eta \right)}^{2}} \\\end{align}$
By using differential inequality, we get,
$0<w({{x}_{1}}(t),{{x}_{2}}(t),{{x}_{3}}(t))\le \frac{\delta }{\eta }(1-{{e}^{-\eta t}})+({{x}_{1}}(0),{{x}_{2}}(0),{{x}_{3}}(0)){{e}^{-\eta t}}$
Taking limit as t→∞, we get, 0<w(t)≤δ/η.
3.2 Positivity of the solution of the model
Lemma 2: For${{x}_{1}}\left( 0 \right)\ge 0,{{x}_{2}}\left( 0 \right)\ge 0,{{x}_{3}}\left( 0 \right)\ge 0$, the solutions ${{x}_{1}}\left( t \right),{{x}_{2}}\left( t \right),{{x}_{3}}\left( t \right)$ are all non-negative for all t≥0
Proof: For positivity, equation (1) can be written as $\frac{dx_1}{dt}\ge -\left(\mu+\sigma+qE\right)x_1 \Rightarrow \frac{dx_1}{x_1}\ge -k\,dt$, where $k=\mu+\sigma+qE$. Integrating, $x_1\ge c_1e^{-kt}$, where $c_1$ is an integration constant. Applying the initial condition $x_1(0)\ge 0$ at $t=0$, we get $x_1(0)=c_1$. Putting the value of $c_1$ into the inequality, we get $x_1(t)\ge x_1(0)e^{-kt}\ge 0$. Therefore $x_1(t)$ is non-negative for all $t\ge 0$.

Again, equation (2) can be written as $\frac{dx_2}{dt}\ge -\left(\alpha+\beta\right)x_2 \Rightarrow \frac{dx_2}{x_2}\ge -L\,dt$, where $L=\alpha+\beta$. Integrating, we get $x_2(t)\ge c_2e^{-Lt}$, where $c_2$ is an integration constant. Applying the initial condition $x_2(0)\ge 0$ at $t=0$, we get $x_2(0)=c_2$, so $x_2(t)\ge x_2(0)e^{-Lt}\ge 0$. Hence $x_2(t)$ is non-negative for all $t\ge 0$.

To prove that $x_3(t)$ is non-negative, equation (3) is written as $\frac{dx_3}{dt}\ge -\beta_1x_3 \Rightarrow \frac{dx_3}{x_3}\ge -\beta_1\,dt$. Integrating, $x_3(t)\ge c_3e^{-\beta_1t}$, where $c_3$ is an integration constant. Now, at $t=0$, $x_3(0)\ge 0.$
So, ${{x}_{3}}\left( 0 \right)={{c}_{3}}.$
Putting the value of c3, we obtain, ${{x}_{3}}\left( t \right)\ge {{x}_{3}}\left( 0 \right){{e}^{-{{\beta }_{1}}t}}$. At $t\to \infty ,\text{ }{{x}_{3}}\left( t \right)\ge 0$. Therefore, x3(t) is positive for all t≥0. Hence this completes the proof.
3.3 Existence of equilibria
Let $\left( {{{\bar{x}}}_{1}},{{{\bar{x}}}_{2}},0 \right)$ be the positive solution of the equations (1) and (2)
${{\varphi }_{1}}{{x}_{1}}-\frac{{{r}_{1}}{{x}_{1}}^{2}}{{{k}_{1}}}=0$ (5)
$\frac{d{{x}_{2}}}{dt}={{\varphi }_{2}}{{x}_{2}}-\frac{{{r}_{2}}{{x}_{2}}^{2}}{{{k}_{2}}}+\sigma {{x}_{1}}\text{=0 }$ (6)
where, ${{\varphi }_{1}}={{r}_{1}}-\mu -\sigma -qE$and ${{\varphi }_{2}}={{r}_{2}}-\alpha -\beta $. From (5), we get, ${{x}_{1}}=0,{{\bar{x}}_{1}}=\frac{{{\varphi }_{1}}{{k}_{1}}}{{{r}_{1}}}$. Putting the value of ${{\bar{x}}_{1}}$in (6), we get,
${{\bar{x}}_{2}}=\frac{{{\varphi }_{2}}\pm \sqrt{\varphi _{2}^{2}+\frac{4\sigma {{r}_{2}}{{k}_{1}}{{\varphi }_{1}}}{{{r}_{1}}{{k}_{2}}}}}{\frac{2{{r}_{2}}}{{{k}_{2}}}}$
Again, let ($x_{1}^{*},x_{2}^{*},x_{3}^{*}$ ) be the positive solution of the equations
${{\varphi }_{1}}{{x}_{1}}-\frac{{{r}_{1}}{{x}_{1}}^{2}}{{{k}_{1}}}-\frac{m{{x}_{1}}{{x}_{3}}}{a+{{x}_{1}}}=0$ (7)
${{\varphi }_{2}}{{x}_{2}}-\frac{{{r}_{2}}{{x}_{2}}^{2}}{{{k}_{2}}}+\sigma {{x}_{1}}\text{=0 }$ (8)
${{\varphi }_{3}}{{x}_{3}}+\frac{n{{x}_{1}}{{x}_{3}}}{a+{{x}_{1}}}-\frac{{{r}_{3}}{{x}_{3}}^{2}}{{{k}_{3}}}=0$ (9)
where, ${{\varphi }_{3}}={{r}_{3}}-{{\beta }_{1}}$. Equation (9) yields
${{x}_{3}}=0,\text{ }a{{\varphi }_{3}}+{{\varphi }_{3}}{{x}_{1}}-\frac{a{{r}_{3}}{{x}_{3}}}{{{k}_{3}}}-\frac{{{r}_{3}}{{x}_{1}}{{x}_{3}}}{{{k}_{3}}}+n{{x}_{1}}=0$ $\Rightarrow {{x}_{1}}^{*}=\frac{a{{r}_{3}}{{x}_{3}}^{*}-a{{\varphi }_{3}}{{k}_{3}}}{{{\varphi }_{3}}{{k}_{3}}+n{{k}_{3}}-{{r}_{3}}{{x}_{3}}^{*}}$.
Using the value of ${{x}_{1}}^{*}$ in equation (8), we get
$\frac{{{r}_{2}}x_{2}^{{{*}^{2}}}}{{{k}_{2}}}-{{\varphi }_{2}}x_{2}^{*}-\sigma (\frac{a{{r}_{3}}x_{3}^{*}-a{{\varphi }_{3}}{{k}_{3}}}{{{\varphi }_{3}}{{k}_{3}}+n{{k}_{3}}-{{r}_{3}}x_{3}^{*}})=0$
This equation will have a positive solution if ${{\varphi }_{2}}\ge 0\Rightarrow {{r}_{2}}-\alpha -\beta \ge 0$. Finally, from equation (7), we obtain
${{x}_{3}}^{*}=\frac{a+{{x}_{1}}^{*}}{m}\left( {{\varphi }_{1}}-\frac{{{r}_{1}}{{x}_{1}}^{*}}{{{k}_{1}}} \right)$
Hence the equilibrium points of the system are ${{P}_{1}}(0,0,0),{{P}_{2}}({{\bar{x}}_{1}},{{\bar{x}}_{2}},0)\text{ and }{{P}_{3}}\left( {{x}_{1}}^{*},{{x}_{2}}^{*},{{x}_{3}}^{*} \right)$.
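For illustration only, the non-trivial equilibrium $P_3$ can also be located numerically with a root finder; the sketch below reuses the placeholder parameter values from the simulation sketch above and is not part of the original analysis:

    import numpy as np
    from scipy.optimize import fsolve

    # Same illustrative placeholder parameters as in the simulation sketch above
    r1, r2, r3 = 1.0, 0.5, 0.5
    k1, k2, k3 = 7.0, 9.0, 2.0
    mu, sigma, qE = 0.2, 0.3, 0.2
    alpha, beta, beta1 = 0.1, 0.2, 0.4
    m, n, a = 0.2, 0.2, 7.0

    def rhs(y):
        x1, x2, x3 = y
        return [r1*x1*(1 - x1/k1) - (mu + sigma + qE)*x1 - m*x1*x3/(a + x1),
                r2*x2*(1 - x2/k2) - (beta + alpha)*x2 + sigma*x1,
                r3*x3*(1 - x3/k3) + n*x1*x3/(a + x1) - beta1*x3]

    # Positive initial guess -> interior equilibrium P3 (if it exists for these values)
    x_star = fsolve(rhs, [2.0, 5.0, 0.6])
    print(x_star, np.allclose(rhs(x_star), 0.0))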
3.4 Stability analysis at steady state
The Jacobian of the system is
$J\left(x_1,x_2,x_3\right)=\begin{pmatrix} r_1-\dfrac{2r_1x_1}{k_1}-\left(\mu+\sigma+qE\right)-\dfrac{amx_3}{\left(a+x_1\right)^2} & 0 & -\dfrac{mx_1}{a+x_1} \\ \sigma & r_2-\dfrac{2r_2x_2}{k_2}-\beta-\alpha & 0 \\ \dfrac{anx_3}{\left(a+x_1\right)^2} & 0 & r_3-\dfrac{2r_3x_3}{k_3}-\beta_1 \end{pmatrix}$ $\therefore\ J\left(0,0,0\right)=\begin{pmatrix} r_1-\left(\mu+\sigma+qE\right) & 0 & 0 \\ \sigma & r_2-\beta-\alpha & 0 \\ 0 & 0 & r_3-\beta_1 \end{pmatrix}$
Then the characteristic equation of the matrix with eigenvalue λ is
$\left|J-\lambda I\right|=\begin{vmatrix} r_1-\left(\mu+\sigma+qE\right)-\lambda & 0 & 0 \\ \sigma & r_2-\beta-\alpha-\lambda & 0 \\ 0 & 0 & r_3-\beta_1-\lambda \end{vmatrix}=0$
$\Rightarrow {{\text{r}}_{1}}-\left( \mu +\sigma +qE \right)\text{-}\lambda =0,\text{ }{{\text{r}}_{2}}\text{-}\beta -\alpha -\lambda =0\text{ or }{{\text{r}}_{3}}-{{\beta }_{1}}-\lambda =0$
$\Rightarrow {{\lambda }_{1}}={{\text{r}}_{1}}-\left( \mu +\sigma +qE \right),\text{ }{{\lambda }_{2}}={{\text{r}}_{2}}\text{-}\beta -\alpha \text{ and }{{\lambda }_{3}}={{\text{r}}_{3}}-{{\beta }_{1}}$
The eigenvalues λ of the matrix determine the stability of the steady states: 1. if any eigenvalue satisfies λ>0, the steady state is unstable; 2. if all eigenvalues satisfy λ<0, the steady state is stable.
In the following lemma, we show that ${{P}_{3}}(x_{1}^{*},x_{2}^{*},x_{3}^{*})$ is locally asymptotically stable.
Lemma 3. The equilibrium point ${{P}_{3}}(x_{1}^{*},x_{2}^{*},x_{3}^{*})$ is always locally asymptotically stable.
Proof: Let the three right-hand sides of the system (1)–(3) be $u=u\left( {{x}_{1}},{{x}_{2}},{{x}_{3}} \right),v=v\left( {{x}_{1}},{{x}_{2}},{{x}_{3}} \right)\text{ and }w=w\left( {{x}_{1}},{{x}_{2}},{{x}_{3}} \right)$. Then the Jacobian at ${{P}_{3}}(x_{1}^{*},x_{2}^{*},x_{3}^{*})$ is
${{J}_{3}}\left(x_1^*,x_2^*,x_3^*\right)=\begin{pmatrix} r_1-\dfrac{2r_1x_1^*}{k_1}-\left(\mu+\sigma+qE\right)-\dfrac{amx_3^*}{\left(a+x_1^*\right)^2} & 0 & -\dfrac{mx_1^*}{a+x_1^*} \\ \sigma & r_2-\dfrac{2r_2x_2^*}{k_2}-\beta-\alpha & 0 \\ \dfrac{anx_3^*}{\left(a+x_1^*\right)^2} & 0 & r_3-\dfrac{2r_3x_3^*}{k_3}-\beta_1 \end{pmatrix}$
Then the characteristic equation of the matrix with eigenvalue λ is
|J-λI|=0
$\begin{vmatrix} r_1-\dfrac{2r_1x_1^*}{k_1}-\left(\mu+\sigma+qE\right)-\dfrac{amx_3^*}{\left(a+x_1^*\right)^2}-\lambda & 0 & -\dfrac{mx_1^*}{a+x_1^*} \\ \sigma & r_2-\dfrac{2r_2x_2^*}{k_2}-\beta-\alpha-\lambda & 0 \\ \dfrac{anx_3^*}{\left(a+x_1^*\right)^2} & 0 & r_3-\dfrac{2r_3x_3^*}{k_3}-\beta_1-\lambda \end{vmatrix}=0$
$\Rightarrow \left(r_1-\frac{2r_1x_1^*}{k_1}-\left(\mu+\sigma+qE\right)-\frac{amx_3^*}{\left(a+x_1^*\right)^2}-\lambda\right)\left(r_2-\frac{2r_2x_2^*}{k_2}-\beta-\alpha-\lambda\right)\left(r_3-\frac{2r_3x_3^*}{k_3}-\beta_1-\lambda\right)+\left(\frac{mx_1^*}{a+x_1^*}\right)\left(\frac{anx_3^*}{\left(a+x_1^*\right)^2}\right)\left(r_2-\frac{2r_2x_2^*}{k_2}-\beta-\alpha-\lambda\right)=0$ $\Rightarrow \left(A_1-\lambda\right)\left(A_2-\lambda\right)\left(A_3-\lambda\right)+A_4\left(A_2-\lambda\right)=0$
$\Rightarrow {{\lambda }^{3}}+{{a}_{1}}{{\lambda }^{2}}+{{a}_{2}}\lambda +{{a}_{3}}=0$ (10)
where
$a_1=-\left(A_1+A_2+A_3\right),\quad a_2=A_1A_2+A_2A_3+A_3A_1+A_4,\quad a_3=-\left(A_1A_2A_3+A_2A_4\right),$
$A_1=r_1-\frac{2r_1x_1^*}{k_1}-\left(\mu+\sigma+qE\right)-\frac{amx_3^*}{\left(a+x_1^*\right)^2},\quad A_2=r_2-\frac{2r_2x_2^*}{k_2}-\beta-\alpha,$
$A_3=r_3-\frac{2r_3x_3^*}{k_3}-\beta_1,\quad A_4=\left(\frac{mx_1^*}{a+x_1^*}\right)\left(\frac{anx_3^*}{\left(a+x_1^*\right)^2}\right)$
By the Routh–Hurwitz criterion, all roots of (10) have negative real parts if and only if a1>0, a3>0 and a1a2>a3.
Then the equilibrium point ${{P}_{3}}(x_{1}^{*},x_{2}^{*},x_{3}^{*})$ is locally asymptotically stable.
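The Routh–Hurwitz conditions above are easy to verify numerically once $A_1,\dots,A_4$ are known; the following sketch (with purely illustrative values of $A_1,\dots,A_4$, not taken from the paper) checks them and cross-checks against the roots of (10):

    import numpy as np

    def routh_hurwitz_stable(A1, A2, A3, A4):
        """Check a1 > 0, a3 > 0 and a1*a2 > a3 for lambda^3 + a1*lambda^2 + a2*lambda + a3 = 0."""
        a1 = -(A1 + A2 + A3)
        a2 = A1*A2 + A2*A3 + A3*A1 + A4
        a3 = -(A1*A2*A3 + A2*A4)
        return a1 > 0 and a3 > 0 and a1*a2 > a3

    # Purely illustrative values of A1..A4 (not computed from the model)
    A1, A2, A3, A4 = -0.8, -0.5, -0.3, 0.05
    print(routh_hurwitz_stable(A1, A2, A3, A4))        # True
    # Cross-check: all roots of (10) should then have negative real parts
    coeffs = [1, -(A1 + A2 + A3), A1*A2 + A2*A3 + A3*A1 + A4, -(A1*A2*A3 + A2*A4)]
    print(np.roots(coeffs))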
4. Numerical Simulations
In this section, some numerical simulations of our proposed model were presented to investigate the dynamical behavior of the model. To do these simulations, MATLAB ode45 solver was used. The description and values of all parameters used in our proposed model are presented in Table 1.
Table 1. Description of the model parameters

$r_1$: intrinsic growth rate of black tiger prawn in unreserved area
$r_2$: intrinsic growth rate of black tiger prawn in reserved area
$r_3$: intrinsic growth rate of predator population in unreserved area
$\sigma$: migration rate
$\alpha$: decay rate due to being stolen (illegal poaching)
$\beta$: death rate due to disease
$\beta_1$: death rate of predator
$\mu$: death rate of prey in unreserved area
$m$: depletion rate of prey due to predation
$n$: growth rate of predator due to predation
$a$: saturation constant
$k_1$: carrying capacity of black tiger prawn in unreserved area
$k_2$: carrying capacity of black tiger prawn in reserved area
$k_3$: carrying capacity of predator population in unreserved area
Figure 2. Variation of population with time (70 days)
A time interval of 70 days was considered to show the dynamics. The figures depict the densities of the fish and predator populations in the reserved and unreserved areas and show where each population increases and decreases within 70 days. Figures 4–15 show the variation of the black tiger prawn and predator populations in the two zones for changing values of different parameters.
Figure 2 shows that the fish population in the unreserved area first increases, then decreases, and finally grows at a constant rate, while the species in the reserved area first increases, then shows a slight decrease, and finally grows at a constant rate. The predator population increases for the above parameter values.
Figure 3. Variation of population with time (70 days) for increased values of m and n, keeping all other values the same
Figure 3 illustrates that, for increasing values of the depletion rate and the growth rate due to predation, the black tiger prawn in the unreserved area increases during the first 9 days, then decreases and finally goes extinct, while in the reserved area it shows only a slight change. The figure also shows that the predator population increases for these parameter values.
Figure 4. Variation of black tiger prawn in unreserved area for different values of $\sigma$
Figure 5. Variation of black tiger prawn in reserved area for different values of $\sigma$
Figure 6. Variation of black tiger prawn in unreserved area for different values of n
Figure 7. Variation of black tiger prawn in reserved area for different values of n
Figure 8. Variation of predator population in unreserved area for different values of n
Figure 9. Variation of black tiger prawn for different values of m
Figure 10. Variation of black tiger prawn in reserved area for different values of m
Figure 11. Variation of predator in the unreserved area for different values of m
The change in the biomass density of black tiger prawn in the two zones for increasing values of $\sigma$ has been represented in Figures 4 and 5. The fish population in the unreserved area decreases for the increasing value of migration rate while population in reserved area shows an increasing effect.
Figures 6-8 show the variation of fish and predator population for different values of consumption rate due to predation. It is seen from the Figures that black tiger prawn in both zones decreases and predator population increases due to the increasing values of n .
The variation of fish and predator population for different values of depletion rate due to predation is represented in Figures 9-11. It is seen from the figures that black tiger prawn in both zones decreases and predator population increases due to the increasing values of m . The fish population in unreserved area extinct after 50 days as depletion rate increases.
In Figures 12 and 13, we observe that fish population decreases from the unreserved area while the population increases in reserved area due to the increasing values of fishing or harvesting rate.
Figures 14 and 15 indicate the variation of black tiger prawn in reserved area for the changing values of the rate at which the species are stolen and death rate. Both the figures show a decreasing effect for the increasing values of $α$ and $β$ .
Some arbitrary data are assumed for describing the phase diagram of the system. Using the Maple 2018 software, we have analyzed the stability of the fishery model. The phase diagrams of the model in the presence and absence of the predator in the reserved and unreserved areas have been analyzed. Figures 16 and 17 show the phase diagrams of the system.
Figure 12. Variation of black tiger prawn in unreserved area for different values of E
Figure 13. Variation of black tiger prawn in reserved area for different values of E
Figure 14. Variation of black tiger prawn in the reserved area for different values of $α$
Figure 15. Variation of black tiger prawn in reserved area for different values of $\beta$
Figure 16. Phase space diagram of the system in the presence of predator in the unreserved area with the parameter values
${{r}_{1}}=1,\text{ }{{k}_{1}}=7,\text{ }m=6,\text{ }n=6,\text{ }a=7,$
$\text{ }{{k}_{3}}=9,\text{ }{{r}_{3}}=0.2,\text{ }\beta =0.4,\text{ }\mu =0.2,\text{ }\sigma =0.3,\text{ }qE=0.2$
Figure 17. Phase space diagram of the system in the absence of predator in the unreserved and reserved area with the parameter values
${{r}_{1}}=1,\text{ }{{k}_{1}}=7,\text{ }{{k}_{2}}=9,\text{ }{{r}_{2}}=0.5,\text{ }\mu =0.2,$
$\sigma =0.4,\text{ }qE=0.2,\beta =0.2,\alpha =0.1$
Considering two ecosystems, a mathematical model of fishery has been formulated in this paper. We have analyzed the behavior of the model, focusing on the parameters that are mainly responsible for the production and reduction of black tiger prawn. The numerical results reveal that the high mortality rate, fishing rate and predation rate of black tiger prawn in the unreserved area have a decreasing effect on the species in the reserved area. It also shows that the number of black tiger prawn increases in the reserved area due to the high migration from the unreserved area. When predation increases, the number of black tiger prawn in both reserved and unreserved area reduces by a significant amount. It is also evident from the simulations that the production of black tiger prawn in the reserved area decreases due to death from different diseases and for being stolen in absence of security. The main conclusion based on the result is that it is possible to maximize the production of black tiger prawn by proper management and thus it can play a significant role in the economy of a country.
The authors would like to thank the reviewers for the careful reading of this manuscript and their fruitful comments and suggestions for further modifications of this manuscript. Khatun greatly acknowledged the partial financial support during her Masters program provided by the National Science and Technology (NST) fellowship with the ref. no.39.00.0000.012.002.03.18, under Ministry of Science and Technology, Government of the People's Republic of Bangladesh.
[1] Biswas MHA, Hossain MR, Mondal MK. (2017). Mathematical modeling applied to sustainable management of marine resources. Procedia Engineering 194: 337–344.https://doi.org/10.1016/j.proeng.2017.08.154
[2] Biswas MHA, Rahman T, Haque N. (2016). Modeling the potential impacts of global climate change in Bangladesh: an optimal control approach. Journal of Fundamental and Applied Science 8(1): 1-19. http://dx.doi.org/10.4314/jfas.v8i1.1
[3] Agarwal M, Pandey P. (2006). Combined harvesting of two competitive species having a resource dependent carrying capacity. Indian J. pure appl. Math. 37(2): 63-73.
[4] Chaudhury KS. (1986). A bioeconomic model of harvesting a multispecies fishery. Ecol. Model. 32(4): 267-279. https://doi.org/10.1016/0304-3800(86)90091-8
[5] Clark CW. (1979). Mathematical models in the economics of renewable resources. SIAM Rev. 21(1): 81-99. https://doi.org/10.1137/1021006
[6] Dubey B, Chandra P, Sinha P. (2003). A model for fishery resource with reserve area. Nonlinear Analysis: Real World Application 4(4): 625-637. https://doi.org/10.1016/S1468-1218(02)00082-2
[7] Mondal MK, Hanif M, Biswas MHA. (2017). A mathematical analysis for controlling the spread of Nipah virus infection. International Journal of Modeling and Simulation 37(3): 185-197. https://doi.org/10.1080/02286203.2017.1320820
[8] Dubey B, Chandra P, Sinha P. (2002). A resource dependent fishery model with optimal harvesting policy. J. Biol. Systems 10(1): 1-13. https://doi.org/10.1142/S0218339002000494
[9] Chaudhury KS. (1988). Dynamic optimization of combined harvesting of a two species fishery. Ecol. Model 41(1-2): 17-25. https://doi.org/10.1016/0304-3800(88)90041-5
[10] Kar TK. (2006). A model for fishery resource with reserve area and facing prey-predator interaction. Canadian Applied Mathematics Quarterly 14(4): 385-399.
[11] Roy B, Roy SK, Biswas MHA. (2017). Effects on prey-predator with different functional responses. Int. Journal of Biomathematics 10(8): 1750113-22. https://doi.org/10.1142/S1793524517501133
[12] Biswas MHA. (2014). On the evaluation of AIDS/HIV treatment: An optimal control approach. Current HIV Research 12(1): 1-12. https://doi.org/10.2174/1570162X1201140716094638
[13] Biswas MHA. (2013). Necessary conditions for optimal control problems with state constraints: Theory and applications. PhD Thesis, Department of Electrical and Computer Engineering, Faculty of Engineering, University of Porto, Portugal.
[14] Biswas MHA. (2012). AIDS epidemic worldwide and the millennium development strategies: A light for lives. HIV and AIDS Review 11(4): 87-94. https://doi.org/10.1016/j.hivar.2012.08.004
[15] Dym CL. (2004). Principles of Mathematical Modeling. Second Edition, Academia Press, New York.
[16] Louartassi Y, Alami JEl, Elalami N. (2017). Harvesting model for fishery resource with reserve area and modified effort function. Malaya J. Mat. 5(4): 660-666. https://doi.org/10.26637/MJM0504/0008
[17] Murray JD. (1993). Mathematical Biology, Second Edition, Springer-Verlag Berlin Heidelberg.
[18] Ross SL. (2004). Differential equation, Third Edition, Jhon Wiley & Sons, UK.
[19] Sarwardi S, Mandal PK, Ray S. (2013). Dynamic behavior of a two predator model with prey refuge. J. Biol. Phys. 39(4): 701-722. https://doi.org/10.1007/s10867-013-9327-7
[20] Zhang X, Chen L, Neuman AU. (2000). The stage structured predator-prey model and optimal harvesting policy. Math. Biosci. 168(2): 201-210. https://doi.org/10.1016/S0025-5564(00)00033-X
[21] Dubey B, Patra A. (2013). Optimal management of a renewable resource utilized by a population with taxation as a control variable. Nonlinear Analysis: Modeling and Control 18(1): 37-52. | CommonCrawl |
Video salient region detection model based on wavelet transform and feature comparison
Fuquan Zhang1,2,
Tsu-Yang Wu ORCID: orcid.org/0000-0001-8970-24523 &
Guangyuan Zheng2,4
With the advent of the era of big data, the Internet industry has produced massive amounts of multimedia video data. In order to process these video sequence data quickly and effectively, a visual information extraction method based on wavelet transform and feature comparison is proposed; it perceives the target of interest by simulating the multi-channel spatial-frequency decomposition function of the human visual system, so that the saliency distribution can be extracted quickly from an image to obtain the region of interest (ROI). Firstly, the principles of the visual attention mechanism and visual saliency detection are analyzed. Then, the DOG (Difference of Gaussians) function is taken as the wavelet basis function, and the wavelet transform is used to decompose the image data in the spatial and frequency domains, corresponding to the multiple channels of the human visual system. Finally, the saliency distribution over the entire image is obtained by global color comparison, and thus the region of interest is extracted. The proposed visual information extraction model is simulated in the MATLAB environment. The simulation results show that, compared with existing algorithms, the proposed algorithm can extract the ROI more accurately and efficiently.
With the rapid development of 4G communication technology and mobile device performance, Internet socialization, multimedia entertainment, and emerging media have gradually penetrated people's daily life and work, accompanied by an explosion of video and image data [1,2,3,4]. Traditional manual processing methods can no longer cope with such a large number of video sequence data processing tasks [5]. At the same time, artificial intelligence technology is gradually helping humans free themselves from purely repetitive work and is becoming widely used, and the related research has become a hotspot in recent years. However, how to make the computer understand the surrounding environment has always been the focus and difficulty of artificial intelligence research [5]. At present, the most commonly adopted route is to establish an effective visual processing system by imitating how humans perceive the surrounding environment [6].
The human visual system (HVS) is the most direct and important way for humans to perceive the surrounding environment [7]. Humans perceive visual information such as color, brightness, shape, and motion through the eye and process it through the visual centers of the brain [8]. Experiments show that more than 75% of all information received by humans from the outside is obtained through vision [5]. While processing visual information, humans can quickly focus on the region or object of interest (generally a region of salient distribution) [9], so as to realize the perception of the visual scene [10]. The purpose of computer vision research is to enable the machine to process and analyze the visual information of the environment. By simulating the human visual system, the computer can realize the same visual functions as human beings and can filter image information for pattern recognition. Mimicking the visual attention mechanism of the human visual system makes rapid and efficient processing of video data possible, and this has become a research hotspot; it allows a few significant visual objects, namely the regions of interest (ROIs), to be extracted quickly from an image. Research has found that the human visual system can simultaneously process temporal and spatial domain information to perceive the distributed ROIs in video information [11, 12]. Furthermore, similar to camera devices, the human visual system exhibits high resolution in the central area and low resolution in the peripheral region.
There are many state-of-the-art works on visually salient region detection. The Gaussian filter is a frequently used tool in visual saliency analysis. However, as the evaluation by Cheng et al. [13] shows, the resulting saliency maps are generally blurry [14] and often overemphasize small, purely local features, which makes this approach less useful for applications such as segmentation and detection [15]. A saliency model called the phase spectrum of quaternion Fourier transform (PQFT) proposed by C Guo et al. [16] may fail when detecting large salient regions. MM Cheng et al. [17] proposed a method to decompose an image into large-scale perceptually homogeneous elements for efficient salient region detection, using a soft image abstraction representation; however, it may fail when a salient object is off-center or significantly overlaps with the image boundary.
By choosing an appropriate basis function, the wavelet transform can analyze images or information at different scales and realize multi-resolution analysis, which closely mirrors the human visual information extraction process [18]. Many studies have used the wavelet transform to simulate the extraction of visual information. Bhardwaj et al. [19] presented a significant frame selection (SFS) and quantization-of-coefficient-difference-based robust video watermarking scheme in the lifting wavelet transform (LWT) domain. Literature [20] studied a computer binocular vision information system under mixed illumination interference; that work considers the influence of an illumination-changing environment on the visual information processing model, proposes an adaptive image illumination interference filtering algorithm based on the wavelet transform, builds a computer binocular vision information processing model under mixed illumination interference, and improves the performance of the visual information system in complex environments. Zhong and Shih [21] proposed an efficient bottom-up saliency detection model based on wavelet generalized lifting, which requires no kernel and no prior knowledge. Song et al. [22] proposed an artificial retina model based on the wavelet transform to simulate visual information extraction. The above methods use the wavelet transform to decompose the image signal in the frequency domain, appropriately transform each image component into different channels corresponding to those of the human visual system, and then use the inverse wavelet transform to synthesize an image consistent with the information extracted by human vision. In addition, since the ROI also arises from the fusion of temporal and spatial domain features of video sequences, Liu et al. [23] used visual saliency and graph cutting to achieve effective image segmentation, and Li et al. [24] proposed a visual saliency model analysis method based on frequency-domain scale-space analysis. It should be noted that, since visual information is a complex combination of temporal and spatial domain information [25], if the HVS characteristics are used to analyze visual saliency, the global uniqueness of the salient region must be considered.
Based on the above research, in order to process video data quickly and effectively, we propose a visual information extraction model combining the wavelet transform and the contrast principle by simulating the human visual system. It can quickly extract the saliency distribution from an image and acquire the ROI. The overall architectural design of the model is shown in Fig. 3. The DOG function is used as the wavelet basis function, and the wavelet transform is used to decompose the image data in the spatial and frequency domains, so that it can be applied to the multiple channels of the human visual system. By comparing global color features, we obtain the saliency distribution over the entire image and finally extract the region of interest. Simulation experiments are carried out on test video data sequences, and good results have been achieved, which shows the effectiveness of the proposed method.
Fundamentals of algorithm
Visual attention
In the human visual system, vision produces different degrees of attention based on the distribution of different regions in the image, and the degree of attention paid is directly related to visual saliency. Human vision can usually quickly locate the region with a large amount of information and focus on it. To do so, it is necessary to effectively integrate the temporal and spatial domain information (color, intensity, direction, motion). At this stage, most studies use the feature fusion theory of Treisman and Gelade to achieve saliency analysis of temporal and spatial domain information [26].
The theory of feature fusion divides the visual attention process into two phases: (1) the pre-attention phase. This phase can be considered the primary phase of the vision system. In it, the vision system acquires various primary visual features (texture, color, intensity, size, direction, shape, motion, etc.). These visual features are independent of each other and processed in parallel. The vision system encodes the above contents differently and then forms feature maps corresponding to different channels. (2) The feature integration phase. In this phase, the visual system integrates the separated features (feature representations) and their locations to form a location map, and then fuses the primary visual features at each location in a serial way to form an entity for further analysis and understanding. The principle of feature fusion theory is shown in Fig. 1.
Feature fusion theory
Principle of visual saliency detection
In order to simulate the HVS and obtain a saliency map, it is necessary to use the principle of visual saliency detection. This paper mainly adopts the contrast-based saliency detection method [13]. Using contrast, one can judge whether an object will attract the attention of human eyes, which is embodied by the difference in color, texture, or shape between a certain area and its adjacent areas. As shown in Fig. 2, in the left column, the pale-orange points in the upper and lower images attract more attention because their color differs from the others; in the middle column, the tilted 5 and the red 5 in the upper image are more attractive because of their distinctive texture direction and color, as are the short red lines in the lower image; in the right column, the eyes prefer to focus on the cross star in the upper image, whose shape is unique compared to the surrounding ones, and likewise on the minus sign in the lower image. Contrast can usually be divided into two categories [27]: (1) global contrast and (2) local contrast.
Contrast of color, texture, and shape
The main principle of global contrast is to analyze the difference between the central area and the entire background area. This principle assumes that there is a large difference in features between the salient region and the global region. The general evaluation method is to calculate the distance between the color of an area and that of the entire image. Global contrast can detect salient regions without many parameters, but its robustness is poor. The main principle of local contrast is to analyze the degree of difference between the center and its neighboring areas and to compare them to achieve salient region detection. Compared with the global method, the local contrast method imitates the human visual system better, but it is less robust to noise and its implementation complexity is higher.
Visual information extraction method
Through the above analysis of the visual attention mechanism, it can be seen that the human visual process consists of many steps. Research shows that there are multiple discrete frequency channels in the process of visual information extraction [28]. The wavelet transform can analyze an image at different scales by selecting an appropriate basis function and thus provides the required multi-resolution analysis characteristics, allowing the human visual information extraction process to be simulated.
Visual information extraction model
In this section, we simulate the process by which the human visual system perceives regions of interest and propose a method of video information extraction based on wavelet transform and feature comparison. As shown in Fig. 3, for a video sequence consisting of video frames, we analyze the image information in the spatial and frequency domains from a multi-channel perspective and perform a wavelet transform on the visual information. Based on a global comparison of color features, the visual saliency distribution is then obtained and the region of interest is extracted.
A flowchart of visual information extraction model for video sequence
Wavelet basis functions
In general, a ∇2G filter can effectively provide the required multi-resolution analysis characteristics, where ∇2 represents the Laplacian operator and G represents a two-dimensional Gaussian distribution function. The work of Sui and Xu [29] proved that the ∇2G filter can be well approximated by the DOG function, which is set as follows:
$$ D(r)=\frac{1}{\sqrt{2}{\pi \sigma}_2}\exp \left(\frac{-{r}^2}{2{\pi \sigma}_2^2}\right)-\frac{1}{\sqrt{2}{\pi \sigma}_1}\exp \left(\frac{-{r}^2}{2{\pi \sigma}_1^2}\right) $$
where r is the radial distance, and σ1 and σ2 are the standard deviations of the two Gaussian components, respectively.
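As a quick illustration (not part of the original method description), the following minimal Python sketch evaluates the DOG profile of Eq. (1) exactly as printed, using the values σ1 = 1 and σ2 = 0.625 suggested later in the text; the sampling grid is an arbitrary choice.

```python
import numpy as np

def dog(r, sigma1=1.0, sigma2=0.625):
    # Eq. (1) as printed, with sigma1 = 1 and sigma2 = 0.625 (the values used later in the text)
    g = lambda s: np.exp(-r**2 / (2.0 * np.pi * s**2)) / (np.sqrt(2.0) * np.pi * s)
    return g(sigma2) - g(sigma1)

r = np.linspace(-4.0, 4.0, 401)
profile = dog(r)
print(float(profile.max()), float(profile.min()))   # positive centre lobe, negative surround
```

The resulting profile has a positive centre lobe and a negative surround, i.e., the band-pass behaviour expected of a ∇2G-like filter.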
The basis function of the wavelet is the DOG function described in Eq. (1), then
$$ f(r)=D(r) $$
Therefore, its wavelet family of 2D transform can be expressed as
$$ {\displaystyle \begin{array}{l}{\psi}_{a,b}(r)={D}_{a,b}(r)=\frac{1}{\sqrt{2}\pi a{\sigma}_2}\exp \left(\frac{-\left[{\left(x-b\right)}^2+{\left(y-b\right)}^2\right]}{2{\pi \sigma}_2^2{a}^2}\right)\\ {}\kern6.999996em -\frac{1}{\sqrt{2}\pi a{\sigma}_1}\exp \left(\frac{-\left[{\left(x-b\right)}^2+{\left(y-b\right)}^2\right]}{2{\pi \sigma}_1^2{a}^2}\right)\end{array}} $$
Here a is the expansion factor. To simplify the analysis, σ1 = 1 and σ2 = 0.625 are usually taken. In most practical scenarios, the input signal is discretely sampled, so a is restricted to the discrete values a_n = a_0^n based on a base value a_0; following the literature [29], the following can then be derived:
$$ {\psi}_{m,b}(r)=\frac{1}{a_0^m}\psi \left(\frac{x-b}{a_0^m},\frac{y-b}{a_0^m}\right) $$
Therefore, the 2D wavelet transform can be expressed as
$$ {W}_f\left(m,b\right)=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f\left(x,y\right){\psi}_{m,b}\left(x,y\right)\, dx\, dy $$
Therefore, the function f obtained by the inverse transformation of Wf can be expressed as
$$ f\left(x,y\right)=\frac{1}{C}\sum \limits_m{\int}_{-\infty}^{+\infty }{W}_f\left(m,b\right){\psi}_{m,b}(r) db\kern0.5em $$
where C is the number of channels, generally taken to be 6, i.e., six discrete channels. The discrete value of the expansion factor is a0 = 1/0.625 [18]. Figure 4a–c shows cross-sectional views of the signal components contained in the respective wavelet coefficients when a = a0, a = a0^2, and a = a0^6. It can be seen from Fig. 4 that as a increases, the frequency of the corresponding wavelet component decreases, which verifies the scaling behavior of the expansion factor a. Figure 5 is a cross-sectional view of the output f(x, y) when a0 = 1/0.625 and b = 0.
Cross sections of the signal components in each wavelet coefficient when b = 0
Cross section of output f(x, y) when a0 = 1/0.625 and b = 0
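To make the role of the expansion factor concrete, here is a small, self-contained Python sketch (an added illustration, not the authors' code) that builds the six discrete channels from dilated copies of the DOG profile with a0 = 1/0.625; the cross-section grid and the printed statistic are arbitrary choices.

```python
import numpy as np

def dog(r, sigma1=1.0, sigma2=0.625):
    # DOG profile of Eq. (1), repeated here so the sketch is self-contained
    g = lambda s: np.exp(-r**2 / (2.0 * np.pi * s**2)) / (np.sqrt(2.0) * np.pi * s)
    return g(sigma2) - g(sigma1)

a0 = 1.0 / 0.625                          # discrete value of the expansion factor
x = np.linspace(-20.0, 20.0, 801)         # cross-section through b = 0, y = 0

# Dilated kernels psi_m for the six discrete channels (m = 1..6): the profile is stretched
# by a0**m and rescaled by 1/a0**m, so larger m corresponds to a lower pass-band frequency.
for m in range(1, 7):
    psi_m = dog(x / a0**m) / a0**m
    print(m, float(np.abs(psi_m).max()))
```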
In the wavelet-transform-based visual information extraction model, following the EEG information processing model we proposed earlier [30], if the visual processing organ of the brain perceives an object of interest, a feedback link is used to change the expansion factor a so as to increase the center frequency of the band-pass filter, thereby widening the pass band and narrowing the observation scope. In addition, by modifying the parameters that adjust the position of the observation center, the details of the salient area can be extracted more accurately. Figure 6 shows the specific adjustment procedure of the wavelet transform model for an input image.
Visual information extraction model based on wavelet transform
Visual saliency distribution based on color global feature comparison
In this paper, the global contrast feature calculation of CIELab color space is used to realize visual saliency detection. To simplify the analysis, let the coordinate vector of the ith pixel be Pi, and the vector of the ith pixel in the CIELAB color space be Ci, as defined by Eq. (7):
$$ {\mathbf{P}}_i=\left[\begin{array}{l}{x}_i\\ {}{y}_i\end{array}\right],{\mathbf{C}}_i=\left[\begin{array}{l}{l}_i\\ {}{a}_i\\ {}{b}_i\end{array}\right] $$
Different from the RGB color model, the CIELAB color model better simulates the human visual system. In order to effectively compute the global color contrast of an image, the RGB representation of the input image is first converted into the CIELAB color model. The image is then quickly divided into N superpixels R = {R1, R2, … , RN} using the superpixel segmentation technique in [31]. Example segmentations and details are shown in Fig. 7.
Examples of super-pixel segmentation. a Comparisons between the original images and the corresponding super-pixel segmentation results. b Details of the super-pixel segmentation in the bottom-right corner of Fig. 7a
As can be seen from Fig. 7b, the color values of all the pixels in the same superpixel are almost identical. The global comparison calculation method for the color of the superpixel Rk is as shown in Formula (8):
$$ C\left({R}_K\right)=\sum \limits_{n=1}^{N-1}\frac{\left\Vert {\overline{C}}_K,{\overline{C}}_n\right\Vert }{1+\alpha \times \left\Vert \left(X_K,Y_K\right),\left(X_n,Y_n\right)\right\Vert } $$
where N is the total number of superpixels, \( {\overline{C}}_K \) is the average color of the Kth superpixel, \( \left\Vert {\overline{C}}_K,{\overline{C}}_n\right\Vert \) is the Euclidean distance between two superpixel colors (used to indicate the degree of difference between the two), \( \left\Vert \left(X_K,Y_K\right),\left(X_n,Y_n\right)\right\Vert \) is the spatial distance between the superpixel positions, and α is an adjustment parameter. If α is large, the color contrast is strongly attenuated by the spatial position factor; if α is small, the spatial position factor has little influence. Based on experience, this paper takes α = 3.
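A minimal Python sketch of Formula (8) is given below (an added illustration, not the authors' implementation). It assumes that the superpixel statistics, namely the mean CIELAB color and the centroid position, have already been computed by the segmentation step; the toy input values and the final rescaling to [0, 1] are illustrative additions.

```python
import numpy as np

def global_color_contrast(mean_lab, centroids, alpha=3.0):
    """Per-superpixel saliency following Formula (8): CIELAB colour distance to every
    other superpixel, attenuated by the spatial distance between centroids (alpha = 3)."""
    color_d = np.linalg.norm(mean_lab[:, None, :] - mean_lab[None, :, :], axis=-1)
    spatial_d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    contrast = color_d / (1.0 + alpha * spatial_d)
    np.fill_diagonal(contrast, 0.0)                 # exclude the superpixel itself from the sum
    sal = contrast.sum(axis=1)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)   # rescaled to [0, 1] for display

# Toy input (hypothetical values): three superpixels, the third has an outlying colour
lab = np.array([[50.0, 0.0, 0.0], [52.0, 2.0, -1.0], [80.0, 40.0, 30.0]])
xy = np.array([[0.2, 0.2], [0.3, 0.25], [0.8, 0.7]])
print(global_color_contrast(lab, xy))               # the outlier-coloured superpixel scores highest
```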
Experimental results and discussion
In this section, MATLAB simulation software is used to implement the video information processing and region-of-interest extraction, and a large number of tests have been carried out. The experiments focus on the saliency detection (ROI extraction) model, compare quantitative indicators between the proposed method and existing methods, and verify the effectiveness of the method.
We collected 1,000 real videos, each 60 s in length, as the experimental dataset. In the experimental simulation of real video sequence data, several network video sequences of the same length are selected as test sequences. The test video sequences have a resolution of 328 × 248. In the calculations, α = 3, a0 = 1/0.625, σ1 = 1, and σ2 = 0.625.
Comparative analysis of visual information extraction results
Figure 8 compares the saliency map distributions obtained when processing still images. Figure 8a, c shows two different input images, and Fig. 8b, d shows the saliency maps calculated using global color feature comparison. As can be seen from Fig. 8, a visual saliency map can be effectively obtained from the global comparison of color features: the main area of the black chess pieces in Fig. 8b and the arrow on the sign in Fig. 8d are clearly brighter. Global color feature comparison can thus detect the visual saliency distribution well.
Examples of saliency diagrams using color global feature comparisons. a Original figure of chess. b The result of chess image using color global feature comparisons. c Original figure of warning notice. d The result of warning notice image using color global feature comparisons
In four video sequence experiments, the proposed method is compared with the space-time domain information method (STI) [31] and the SGUFL method [32]; the ROI extraction results are shown in Fig. 9. As can be seen from these four examples, the quality of the detected saliency distribution is noticeably lower when processing video sequence data than when processing the still images in Fig. 8. This is because the video sequence is dynamic, and superpixel segmentation performed frame by frame is much less effective than when a single still image is processed alone.
Comparison of different video simulation results
Figure 9 column "original video frame" shows the original video frames of the long-distance running, badminton, long jump, and pole vault sequences, respectively. The image patches in the "STI" column of Fig. 9 are the ROIs detected after the STI algorithm processes the original video frames, and the patches in the "SGUFL" column are the ROIs detected by the SGUFL algorithm. The images in the last column of Fig. 9 are the ROIs detected by the method proposed here. It can be seen that the STI method can only detect the approximate position of the object and is vulnerable to noise interference, which leads to wrong ROI judgments, so its detection results are the worst. The ROI extraction results of the SGUFL method are similar to those of our method, but it does not perform well when processing motion information. All the experimental results show that the hybrid method proposed in this paper has the best performance: it detects the ROI more accurately, has better anti-interference ability, and shows better accuracy when processing both still and motion information.
Comparative analysis of quantitative evaluation indicators
In order to further evaluate the performance of the visual information extraction model, the parameters HR (hit rate) and FAR (false alarm rate) were adopted to quantitatively evaluate the performance of the saliency detection [33].
$$ \mathrm{HR}=E\left(\prod \limits_i{O}_i(t)\cdot S(t)\right) $$
$$ \mathrm{FAR}=E\left(\prod \limits_i\left(1-{O}_i(t)\right)\cdot S(t)\right) $$
where Oi(t) is the binary (black-and-white) map obtained after the ith observer marks the tth frame, with 1 indicating a region of interest and 0 indicating background. S(t) is the saliency distribution extracted by the saliency detection algorithm. The performance of a saliency detection algorithm or ROI extraction model can then be evaluated using Formulas (9) and (10). In this experiment, 20 observers manually marked the regions of interest in the experimental video sequences. The ground-truth values of salient and non-salient regions of the videos were labeled manually and then normalized to the interval [0, 1].
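The following short Python sketch (added for illustration) computes HR and FAR for a single frame according to Formulas (9) and (10), interpreting the expectation E(·) as an average over pixels; the toy masks and saliency map are hypothetical.

```python
import numpy as np

def hr_far(observer_masks, saliency):
    """HR and FAR of Formulas (9)-(10) for one frame, with E(.) read as a pixel average.

    observer_masks : (K, H, W) binary maps O_i(t), one per observer (1 = region of interest)
    saliency       : (H, W) saliency map S(t), normalised to [0, 1]
    """
    agree_roi = np.prod(observer_masks, axis=0)        # 1 only where every observer marked ROI
    agree_bg = np.prod(1 - observer_masks, axis=0)     # 1 only where every observer marked background
    hr = float(np.mean(agree_roi * saliency))
    far = float(np.mean(agree_bg * saliency))
    return hr, far

# Toy frame (hypothetical data): two observers, a 4x4 frame, saliency centred on the marked ROI
masks = np.zeros((2, 4, 4), dtype=int)
masks[:, 1:3, 1:3] = 1
sal = np.zeros((4, 4))
sal[1:3, 1:3] = 0.9
print(hr_far(masks, sal))   # high HR contribution, zero FAR for this well-placed saliency map
```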
Table 1 shows the performance comparison of the three methods. It can be seen that the proposed method outperforms the other two algorithms. In addition, compared with the other two methods, the run time of our method is reduced by about 35 % and 18 %, respectively. The mean values of HR for the SGUFL, STI, and our method are 0.1548, 0.4568, and 0.4849, respectively, and the mean values of FAR are 0.1013, 0.3108, and 0.0898. The standard deviations of HR are 0.55895, 0.40394, and 0.31849, and the standard deviations of FAR are 0.01778, 0.02797, and 0.01638. All of these statistical results show that our method has a clear advantage in various video scenarios.
Table 1 Comparison of the performance of the three methods
By simulating the process by which the human visual system perceives regions of interest, this paper proposed a visual information extraction method combining the wavelet transform and the contrast principle, which can quickly extract the saliency distribution from an image and obtain the ROI. First, the mechanism of visual attention and the principle of visual saliency detection were analyzed. Then, the DOG function was employed as the wavelet basis function, and the wavelet transform was used to decompose the image data in the spatial and frequency domains, reflecting the multi-channel character of the human visual system. Finally, the saliency distribution over the entire image was obtained by global color comparison, and the ROI was extracted. Simulation experiments were carried out on the test video sequences and good results were achieved, which shows the effectiveness of the proposed method.
DOG function: Difference of Gaussian function
HVS: Human visual system
C. Deng, Z. Chen, X. Liu, X. Gao, D. Tao, Triplet-based deep hashing network for cross-modal retrieval. IEEE Trans. Image Process. 27(8), 3893–3903 (2018)
E. Yang, C. Deng, C. Li, W. Liu, J. Li, D. Tao, Shared predictive cross-modal deep quantization. IEEE Trans. Neural. Netw. Learn. Syst 99, 1–12 (2018)
X. Huang, Image encryption algorithm using chaotic Chebyshev generator. Nonlinear Dynamics. 67(4), 2411–2417 (2012)
J.-S. Pan, L. Kong, T.-W. Sung, P.-W. Tsai, V. Snášel, α-Fraction first strategy for hierarchical model in wireless sensor networks. J. Internet Technol. 19(6), 1717–1726 (2018)
T. Schwitzer, R. Schwan, K. Angioi, I. Ingster-Moati, L. Lalanne, A. Giersch, V. Laprevote, The cannabinoid system and visual processing: A review on experimental findings and clinical presumptions. Eur. Neuropsychopharmacol. 25(1), 100–112 (2015)
J. Han, S. He, X. Qian, D. Wang, L. Guo, T. Liu, An object-oriented visual saliency detection framework based on sparse coding representations. IEEE Trans. Circuits Syst. Video Technol. 23(12), 2009–2021 (2013)
A. Borji, L. Itti, State-of-the-art in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 185–207 (2013)
Y. Zhou, L. Li, J. Wu, K. Gu, W. Dong, G. Shi, Blind quality index for multiply distorted images using Biorder structure degradation and nonlocal statistics. IEEE Trans. Multimedia. 20(11), 3019–3032 (2018)
Y. Zhai, M. Shah, Visual attention detection in video sequences using spatiotemporal cues, Proceedings of the 14th ACM international conference on Multimedia, pp. 815–824, Santa Barbara, CA, USA, October 23–27, 2006, ACM New York, NY, USA
L. Li, W. Xia, W. Lin, Y. Fang, S. Wang, No-reference and robust image sharpness evaluation based on multiscale spatial and spectral features. IEEE Trans. Multimedia. 19(5), 1030–1040 (2017)
G. Bhatnagar, Q.M.J. Wu, Z. Liu, Human visual system inspired multi-modal medical image fusion framework. Expert Syst. Appl. 40(5), 1708–1720 (2013)
L. Itti, C. Koch, A saliency-based search mechanism for overt and covert shifts of visual attention. Vis. Res. 40(10–12), 1489–1506 (2000)
M.-M. Cheng, N.J. Mitra, X. Huang, P.H. Torr, S.-M. Hu, Global contrast based salient region detection. IEEE Trans. Pattern Anal. Mach. Intell. 37(3), 569–582 (2015)
L. Li, W. Lin, X. Wang, G. Yang, K. Bahrami, A.C. Kot, No-reference image blur assessment based on discrete orthogonal moments. IEEE Trans. Cybern. 46(1), 39–50 (2016)
F. Perazzi, P. Krähenbühl, Y. Pritch, A. Hornung, Saliency filters: Contrast based filtering for salient region detection, 2012 IEEE conference on computer vision and pattern recognition, pp. 733–740, Providence, RI, USA, 16–21 June, IEEE Hoboken, NJ, USA
C. Guo, L. Zhang, A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression. IEEE Trans. Image Process. 19(1), 185–198 (2010)
M.-M. Cheng, J. Warrell, W.-Y. Lin, S. Zheng, V. Vineet, N. Crook, Efficient salient region detection with soft image abstraction, Proceedings of the IEEE International Conference on Computer vision (2013), pp. 1529–1536
C.H. Sui, J. Ling, A wavelet-based model of visual information abstraction. Laser & Infrared (2002)
A. Bhardwaj, V.S. Verma, R.K. Jha, Robust video watermarking using significant frame selection based on coefficient difference of lifting wavelet transform. Multimed. Tools Appl. 77(15), 19659–19678 (2018)
R. Ke, Computer binocular visual information processing model in the case of hybrid light interference. Bull. Sci. Technol. (2), 230–232 (2014)
X. Zhong, F.Y. Shih, An efficient saliency detection model based on wavelet generalized lifting. Int. J. Pattern Recognit. Artif. Intell. 33(02), 1954006 (2018)
X. Song, Y. Zeng, F. Jiang, D. Chang, Discussion on the artificial retina model based on visual information abstracting simulated by wavelet transform. Opt. Instrum. 29(2), 36–40 (2007)
Y. Liu, B. Huang, H. Sun, Nanjing, Image segmentation based on visual saliency and graph cuts. J. Comput.-Aided Des. Comput. Graph. 25(3), 402–409 (2013)
L. Jian, D. Levine Martin, A. Xiangjing, X. Xin, H. Hangen, Visual saliency based on scale-space analysis in the frequency domain. IEEE Trans. Pattern Anal. Mach. Intell. 35(4), 996–1010 (2013)
B. Hu, L. Li, H. Liu, W. Lin, J. Qian, Pairwise-comparison-based rank learning for benchmarking image restoration algorithms. IEEE Trans. Multimedia. (2019) https://doi.org/10.1109/TMM.2019.2894958
E. Erkut, E. Aykut, Visual saliency estimation by nonlinearly integrating features using region covariances. J. Vis. 13(4), 11 (2013)
X. Dong, X. Huang, Y. Zheng, L. Shen, S. Bai, Infrared dim and small target detecting and tracking method inspired by human visual system. Infrared Phys. Technol. 62(1), 100–109 (2014)
K. Tiwari, P. Gupta, An efficient technique for automatic segmentation of fingerprint ROI from digital slap image. Neurocomputing. 151, 1163–1170 (2015)
S. Chenghua, X. Laiding, Abstraction and combination of image character by Multi-Frequency Channel wavelet transform. Chin. J. Lasers. 27(8), 733–736 (2000)
F. Zhang, Z. Mao, Y. Huang, X. Lin, G. Ding, Deep learning models for EEG-based rapid serial visual presentation event classification. J. Inf. Hiding Multimedia Sig. Process. 9(1), 177–187 (2018)
X. Gu, G. Qiu, X. Feng, L. Debing, C. Zhibo, Region of interest weighted pooling strategy for video quality metric. Telecommun. Syst. 49(1), 63–73 (2012)
F. Zhang, B. Du, L. Zhang, Saliency-guided unsupervised feature learning for scene classification. IEEE Trans. Geosci. Remote Sens. 53(4), 2175–2184 (2015)
N.T. Hai, Wavelet-based image fusion for enhancement of ROI in CT image. J. Biomed. Eng. Med. Imaging. 1(4), 1–13 (2014)
The authors thank the referees for their careful reviewing and constructive suggestions. We would also like to thank the editorial board for their great support and for the opportunity to revise our manuscript.
This work was supported by the Research Program Foundation of Minjiang University under Grants No. MYK17021, No. MYK18033, No. MJW201831408 and No. MJW201833313, by the Major Project of the Sichuan Province Key Laboratory of Digital Media Art under Grant No. 17DMAKL01, and by the Fujian Province Guiding Project under Grant No. 2018H0028. We also acknowledge support from the National Natural Science Foundation of China (61772254 and 61871204), the Key Project of the College Youth Natural Science Foundation of Fujian Province (JZ160467), the Fujian Provincial Leading Project (2017H0030), the Fuzhou Science and Technology Planning Project (2016-S-116), the Program for New Century Excellent Talents in Fujian Province University (NCETFJ), and the Program for Young Scholars in Minjiang University (Mjqn201601).
The experimental data set is not publicly available.
Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University, Fuzhou, 350117, China
Fuquan Zhang
School of Computer Science & Technology, Beijing Institute of Technology, Beijing, 100081, China
& Guangyuan Zheng
College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, China
Tsu-Yang Wu
Shangqiu Normal University, Shangqiu, 476000, China
Guangyuan Zheng
FZ conceived the structure of the manuscript and gave analytical methods. T-YW performed the experiments and analyzed the results. GZ made language rendering and paper revision work. All authors read and approved the final manuscript.
Correspondence to Tsu-Yang Wu.
Fuquan Zhang is an associate professor at Minjiang University, China. He received a silver medal at the 6.18 Cross-Strait Staff Innovation Exhibition and a gold medal at the Nineteenth National Invention Exhibition in 2010. In 2012, his proposed project won the gold award of the Seventh International Invention Exhibition. He was awarded the honorary title of "Top Ten Inventors of Fuzhou" by Fuzhou City. He is a member of the National Computer Basic Education Research Association of the National Higher Education Institutions, a member of its Online Education Committee, a member of the MOOC Alliance of the College of Education and Higher Education Teaching Guidance Committee, a member of ACM SIGCSE, CCF, and CCF YOCSEF, and a director of the Fujian Artificial Intelligence Society. He has published about 70 research papers.
Tsu-Yang Wu received the PhD degree from the Department of Mathematics, National Changhua University of Education, Taiwan, in 2010. Currently, he is an associate professor in the College of Computer Science and Engineering, Shandong University of Science and Technology, China. Previously, he was an assistant professor in the Innovative Information Industry Research Center at Shenzhen Graduate School, Harbin Institute of Technology. He serves as executive editor of the Journal of Network Intelligence and as associate editor of Data Science and Pattern Recognition. His research interests include video security and information security.
Guangyuan Zheng received the BS degree in 2010 from the China University of Geosciences, China. He is now a doctoral student at the Beijing Institute of Technology. His major research interests include machine learning, computer vision, medical image analysis, and computer security.
Zhang, F., Wu, T. & Zheng, G. Video salient region detection model based on wavelet transform and feature comparison. J Image Video Proc. 2019, 58 (2019) doi:10.1186/s13640-019-0455-2
Wavelet transform
Global feature
Video sequence
Significant distribution
DOG function
Modeling and Representation for Big Visual Data
Why does a polynomial with real, simple roots change its sign between its roots?
In the mathematics book I have, there is a sub-chapter called "Practical procedure to resolve inequalities" that states:
Given a polynomial $P(x)$ that has real, simple roots, and finding the solutions to the equation $P(x) = 0$, afterwards sorting the solutions $x_1, x_2, ..., x_n$, then the sign of $P$ over an interval $(x_i, x_{i + 1})$ is the opposite of its neighboring intervals $(x_{i - 1}, x_i)$ and $(x_{i + 1}, x_{i + 2})$.
I've plotted functions of the form $$a\prod_{i = 1}^{n}(x - a_i), \space a, a_1, a_2, ..., a_n \in [0, \infty), \space a_i \ne a_j \space \forall i \ne j,\ i, j \in \{1, 2, ..., n\} $$
What's an intuitive way of thinking about this and why it happens?
real-analysis calculus polynomials real-numbers
edited Jan 6 at 20:05
Alekos Robotis
Marc Grec
Isn't this just the Intermediate Value Theorem? – Adam Rubinson Jan 6 at 19:59
I suppose it's slightly more than that, because the intermediate value theorem essentially uses the sign change as a premise, rather than a conclusion. Indeed, a sign change guarantees a root, but not conversely. Take $f(x)=x^2$. – Alekos Robotis Jan 6 at 20:05
Then this becomes: why do single roots always "cross" whereas multiple roots can "kiss"? If $a$ is a root then $(x-a)$ is a factor and $P(x)=(x-a)Q(x)$. If $a$ is a single root then $Q(a)\ne 0$. – fleablood Jan 6 at 23:07
It changes sign at its roots, not between its roots. – Ross Millikan Jan 7 at 23:43
I'd say the picture you provided IS the intuitive way of thinking about this and why this happens. – Vincent Jan 8 at 13:05
Firstly, to simplify the problem, start by re-numbering all the $a_{i}$'s from least to greatest. Think of the behavior at $x=a_n$: notice how the polynomial will look like $$(\text{neg numb})(\text{neg numb})\cdots(\text{neg numb})(x-a_{n})(\text{pos numb})(\text{pos numb})\cdots(\text{pos numb})$$ Recall that in a product of several numbers, the sign of the product is determined by whether the number of negative factors is even or odd: odd makes the product negative, even makes it positive. Then, imagine changing $x$. Whenever it's just below $a_{n}$ there will be one more negative factor than when it's just barely above $a_{n}$. Therefore, the sign must change when passing through $x=a_{n}$.
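(An illustration added in editing, not part of the original answer.) A quick numerical check of this counting argument in Python, using the roots 1, 2, 3 as an example:

```python
import numpy as np

roots = [1.0, 2.0, 3.0]                         # a_1 < a_2 < a_3, all simple
p = lambda x: np.prod([x - a for a in roots])

for x in [0.5, 1.5, 2.5, 3.5]:                  # one sample point inside each interval
    negatives = sum(x < a for a in roots)       # how many factors (x - a_i) are negative
    print(x, int(np.sign(p(x))), 'negative factors:', negatives)
# the sign alternates -, +, -, + exactly as the parity of the negative-factor count flips
```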
edited Jan 7 at 5:59
Paŭlo Ebermann
connor lane
+1 for not mentioning derivatives :) – Carsten S Jan 7 at 14:40
@Carsten_S The first derivative was exactly what I first thought of, as intuitive! – nigel222 Jan 9 at 10:39
This is also a nice explanation of why the sign won't change for double roots, but will change for triple roots, and so on. – blupp Jan 9 at 19:36
A polynomial has a zero in $a$ if and only if there is a (necessarily unique) polynomial $q$ such that $p(x)=(x-a)q(x)$ for all $x$. By this it follows that the zero in $a$ is simple if and only if $q(a)\ne0$, for if this weren't the case, then $q(x)=(x-a)f(x)$ and thus $p(x)=(x-a)^2f(x)$. If $a$ is a simple zero, then by continuity $q(x)$ has fixed sign in some neighbourhood of $a$ and therefore, in said neighbourhood $$p(x)=(\text{function that changes sign at }a)\times(\text{function of fixed sign})$$
On the other hand, by the intermediate value theorem every interval where $p$ changes sign must contain a zero.
Gae. S.
Suppose you have $p(x)=(x-1)(x-2)(x-3)$. If $x<1$ then $x-1,x-2,x-3$ would all be negative. If $1<x<2$ then $x-1$ is positive and the other two negative... Every time you go to the next interval you have one more positive factor of the polynomial.
dmtri
You asked for intuitive so the following is intuitive but intuitive only.
Well, if you imagine the graph as a path to follow, the graph will occasionally cross from one side of the $x$-axis to the other. When it's on the up side of the axis the value of $P$ is positive, and when the graph is on the down side of the axis the value of $P$ is negative. The points where the graph actually crosses the axis are the points where $P$ is equal to zero. Those are the roots.
Now, on the path between the roots, $P$ must stay on one side or the other. And if it is a simple root, then when $P$ meets the $x$-axis, the graph will cross it and go to the other side.
Hence $P$ will go from positive in one interval to negative in the next.
The only real question is: if it is a simple root, why does $P$ always cross? Why doesn't the graph just meet the $x$-axis and then turn tail and go back the way it came? Well, that only occurs if it is a multiple root.
Okay. If $P$ is the polynomial and $a$ is a root then $P(a) = 0$ and $(x-a)$ is one of the factors of $P(x)$. So if we actually divide $(x-a)$ out of $P(x)$ we will get $P(x) = (x-a)Q(x)$.
Now if $Q(a) = 0$ then $a$ is a multiple root. But we know it isn't so $Q(a)\ne 0$. So either $Q(a)$ is positive or negative.
Now let's take a really teeny interval around $a$; say the interval $(b,c)$ where $b<a < c$. And let's suppose that $b$ and $c$ are close enough to $a$ so that $Q(x)$ is never $0$ on the interval $b < x < c$.
Well then, for the points just below $a$, where $b < x < a$, in $P(x)= (x- a)Q(x)$ the factor $x-a$ is negative. So $P(x)$ is the opposite sign of $Q(x)$ at that point. And for the points just above $a$, where $a < x < c$, the factor $(x-a)$ is positive, so $P(x) = (x-a)Q(x)$ is the same sign as $Q(x)$ at that point. But remember the interval is small enough that $Q(x)$ doesn't change signs.
So $P(x) = (x-a)Q(x)$ is one sign for $b< x < a$ and $P(x) = (x-a)Q(x)$ is the other sign for $a < x < c$.
fleablood
For a function like this, the roots are the places where the function crosses the $x$-axis, i.e. the places where it changes sign. If you think of the polynomial as the product of its linear factors and imagine how this value changes as you change $x$, you'll notice that it can only change sign (and does) when one of these factors changes sign.
Karl
The basic intuition is that at a root $x_0$, the graph of the function $y=p(x)$ touches the $x-$axis. Now, roots of polynomials are isolated, so that at this point the graph has to depart from the axis for the values of $p(x\pm\varepsilon)$ for small $\varepsilon$. It can go up, or down. So, there are four possibilities: $+/-$, $-/+$, $+/+$, and $-/-$, where for example $+/-$ means positive for $x_0-\varepsilon$ and negative for $x_0+\varepsilon$. There are plenty of examples of all of these behaviors.
We can change coordinates so that $x_0=0$. Locally at $0$, $p(x)$ looks like its lowest order term. So, we can say that $p(x)\approx a_dx^d+(\text{higher order terms}).$ In the case of a simple root, $p(x)\approx ax+(\text{higher order terms})$. So, the graph looks like the graph of $y=mx$ for $m\ne 0$. In particular, for $m<0$ we have the $+/-$ situation, and for $m>0$ we have the $-/+$ situation. If the root is not simple, we can take $p(x)=x^2$ and observe that $+/+$ behavior, or $p(x)=-x^2$ for the $-/-$ behavior. However, when we have simple roots, it is always the case that the sign of the function changes locally at the root.
Alekos Robotis
Since the roots are simple, this means that if $a$ is a root of $P(x)$ then $P'(a)\ne 0$.
Also, when $P'(a)=0$ it means that $P$ has a minimum or a maximum at this point, i.e., there is a neighborhood around this point in which the function doesn't change sign (if it is above the $x$-axis it'll stay above it, and if it is below the $x$-axis it'll stay below it in this neighborhood).
Fareed Abi Farraj
Let $p(x)=(x-a)(x-b)(x-c)$; $a<b<c$, i.e. simple zeroes at $a,b,c$.
$p(a)=0$;
$p'(a)= (a-b)(a-c) \not = 0$.
We have $p'(a)>0$.
Since $p'$ is continuous, there is a neighborhood of $a$, $(a-\epsilon,a+\epsilon)$, where $p' >0$.
Then $p$ is strictly increasing in this neighborhood, i.e., it changes sides (crosses the axis).
Fairly straightforward to generalize to polynomials of degree $n>3$.
Peter Szilas
There are two ways for a function to form a root:
reaching the zero value and continuing with the opposite sign;
reaching the zero value and bouncing with the same sign.
In the first case you have an ordinary root and in the second a double root.
Technically, a (non-)change of sign can occur with a multiple root of odd (even) multiplicity.
The existing answers provide good intuition. On the other hand, more advanced readers might want a more rigorous point of view, and also a more general one. In the following longish post, I'll address these points.
Firstly, note the following. While a polynomial like $(x-1)(x-2)$ in which the roots are simple does indeed have the behaviour you describe, this is also true of a polynomial like $(x-1)^3(x-2)^5$, in which instead of simple roots we have (more generally) roots of odd degree. And this behaviour continues to exist if we multiply by a nowhere-vanishing factor, like $x^2 + 1$.
Most generally, if we know the multiset of (real) roots of a univariate polynomial, and if we also know the coefficient of the leading term, then we can work out exactly where the function will be positive or negative irrespective of whether the polynomial can be written wholly as a product of linear factors or not. So, this is the level of generality I suggest we work at; in high-school they sometimes call this the "sign-diagram viewpoint" or something along those lines.
Anyway, from a sign-diagram point of view, the relevant theorem is:
Theorem 8. If $P$ is a non-zero univariate polynomial with real coefficients, then:
$$\mathrm{sgn}(P(x)) = \mathrm{sgn}(\mathrm{lc}(P)) \cdot (\mathrm{step} \,\mathrm{roots}\, P)(x)$$
$\mathrm{sgn}$ refers to the sign function
$\mathrm{lc}(P)$ refers to the leading coefficient of $P$
$\mathrm{roots}\, P$ refers to the multiset of roots of $P$
If $M$ is a multiset, then $\mathrm{step} \,M : \mathbb{R} \rightarrow \{-1,0,1\}$ is such that the value of $(\mathrm{step}\,M)(x)$ fluctuates between $1$, $0$ and $-1$ based on the number of elements of $M$ to the right of $x$. See Definition 6 below for more information.
Anyway, if you think about it, this basically answers your question (but in more generality). In particular, if all the elements of the multiset $\mathrm{roots}\, P$ have odd multiplicity, then $\mathrm{step}\, \mathrm{roots}\, P$ is going to cycle through the values $-1,0,1$ without faltering, which is what you've observed in the special case where all the elements of $\mathrm{roots}\,P$ have multiplicity $1$. I might include a formal statement/proof of this "cycling without faltering" claim eventually, but for now I'm a bit out of steam, so I'll just post a proof of Theorem 8 above.
Proposition 1. If $Q$ is a nowhere-vanishing continuous function $\mathbb{R} \rightarrow \mathbb{R}$, then $x \mapsto \mathrm{sgn}(Q(x))$ is a constant function.
Proof. Suppose not. Then there exist real numbers $a, b \in \mathbb{R}$ with $\mathrm{sgn}(Q(a)) \neq \mathrm{sgn}(Q(b))$. Since $Q$ is nowhere-vanishing, there's only two cases:
Case 1. $\mathrm{sgn}(Q(a)) = -1$ and $\mathrm{sgn}(Q(b)) = 1$
Case 2. $\mathrm{sgn}(Q(a)) = 1$ and $\mathrm{sgn}(Q(b)) = -1$
For Case 1, infer that $Q(a) < 0$ and $0 < Q(b)$. Hence since $Q$ is continuous, there exists $c \in [a,b]$ such that $Q(c) = 0$, by the intermediate value theorem. But this contradicts the assumption that $Q$ is nowhere vanishing. For Case 2, a similar argument suffices. $$\tag*{$\blacksquare$}$$
Definition 2. The above element of $\{-1,1\}$ is called the universal sign of $Q$ and denoted $\mathrm{usgn}\, Q$.
For example, the universal sign of $x \mapsto e^x$ is $1$, and the universal sign of $x \mapsto -e^x$ is $-1$.
Proposition 3. Given a real number $\lambda > 0$, and given also a non-empty upward-closed subset $C$ of the real line together with a function $R : C \rightarrow \mathbb{R}$ satisfying $$\mathrm{lim}_{x \rightarrow +\infty}R(x) = 0,$$ we have: $$\exists x \in C : \lambda + R(x) > 0.$$
Proof. Since $\mathrm{lim}_{x \rightarrow +\infty}R(x) = 0,$ we have $$\forall \varepsilon > 0 : \exists x \in C : \forall y \geq x : |R(y)|<\varepsilon.$$ Thus in particular $$\exists x \in C : \forall y \geq x : |R(y)|<\lambda.$$ Thus in particular, $$\exists x \in C : |R(x)|<\lambda,$$ as required. $$\tag*{$\blacksquare$}$$
Proposition 4. Given a real number $a < 0$, we have $$\mathrm{lim}_{x \rightarrow +\infty} x^a = 0$$
Proof. It's enough to show that $$\mathrm{lim}_{x \rightarrow +\infty} x^{-a} = +\infty.$$ Since $a < 0$, hence $-a > 0$. Thus it's clear that $x^{-a}$ is an increasing function. Hence it's enough to show that $x^{-a}$ is unbounded. Thus it's enough to show that $x \in \mathbb{R}_{>0} \mapsto x^{-a}$ has an inverse function. Thus it's enough to show that the inverse of $x \in \mathbb{R}_{>0} \mapsto x^{-a}$ is $x \in \mathbb{R}_{>0} \mapsto x^{-1/a}.$ But this is easily demonstrated by elementary arithmetic. $$\tag*{$\blacksquare$}$$
Proposition 5. If $Q$ is a nowhere-vanishing univariate polynomial with real coefficients, then $\mathrm{usgn}\, Q = \mathrm{sgn}(\mathrm{lc}(Q)),$ where $\mathrm{lc}(Q)$ refers to the leading coefficient of $Q$.
Proof. There are two cases, namely $\mathrm{lc}(Q)>0$ and $\mathrm{lc}(Q)<0.$ We'll prove the result in the first case; the proof in the second case is similar. Thus our goal is to show that $\mathrm{usgn}\, Q = 1.$ It's enough to prove $$\exists x \in \mathbb{R} : Q(x) > 0.$$
We know that we can write $Q$ as its leading term plus the remaining terms, like so: $$Q(x) = \mathrm{lc}(Q)x^{\mathrm{deg}(Q)} + \sum_{i = 0}^{\mathrm{deg}(Q)-1} a_i x^i$$
Thus for $x \neq 0$, we have:
$$Q(x) = x^{\mathrm{deg}(Q)} \left(\mathrm{lc}(Q) + \sum_{i = 0}^{\mathrm{deg}(Q)-1} a_i x^{i-\mathrm{deg}(Q)}\right)$$
So our goal is to show that $$\exists x \in \mathbb{R}_{\neq 0} : x^{\mathrm{deg}(Q)} \left(\mathrm{lc}(Q) + \sum_{i = 0}^{\mathrm{deg}(Q)-1} a_i x^{i-\mathrm{deg}(Q)}\right) > 0.$$ Note that if $x > 0$, then $x^{\mathrm{deg}(Q)}$ is automatically positive. Thus it's enough to show that $$\exists x \in \mathbb{R}_{\neq 0} : \mathrm{lc}(Q) + \sum_{i = 0}^{\mathrm{deg}(Q)-1} a_i x^{i-\mathrm{deg}(Q)} > 0.$$
Since $\mathrm{lc}(Q) > 0$ by hypothesis, we can invoke Proposition 3 above. It's therefore enough to show $$\lim_{x \rightarrow +\infty} \sum_{i = 0}^{\mathrm{deg}(Q)-1} a_i x^{i-\mathrm{deg}(Q)} = 0.$$ Hence it's enough to show that $$\sum_{i = 0}^{\mathrm{deg}(Q)-1} \lim_{x \rightarrow +\infty} a_i x^{i-\mathrm{deg}(Q)} = 0.$$ Hence it's enough to show that $$\forall_{i = 0}^{\mathrm{deg}(Q)-1} \lim_{x \rightarrow +\infty} a_i x^{i-\mathrm{deg}(Q)} = 0.$$
So consider $i \in \{0,\ldots,\mathrm{deg}(Q)-1\}.$ Our goal is to show that $\lim_{x \rightarrow +\infty} a_i x^{i-\mathrm{deg}(Q)} = 0.$ It's enough to show that $\lim_{x \rightarrow +\infty} x^{i-\mathrm{deg}(Q)} = 0.$ Thus by Proposition 4, it's enough to show that $i-\mathrm{deg}(Q) < 0.$ But since $i \leq \mathrm{deg}(Q)-1$, and since $\mathrm{deg}(Q)-1 < \mathrm{deg}(Q)$, hence this completes the proof. $$\tag*{$\blacksquare$}$$
Definition 6. If $M$ is a multiset of real numbers, we get an associated function $\mathrm{step} \,M : \mathbb{R} \rightarrow \{-1,0,1\}$ defined as follows: $$(\mathrm{step} \,M)(x) = 0 \mbox{ if } x \in M$$ $$(\mathrm{step} \,M)(x) = \prod_{a \in M : a > x} (-1) \mbox{ if } x \notin M$$
For example, if $M = 0$ (the empty multiset), then $\mathrm{step} \,M$ is $1$ everywhere. If $M = \langle a\rangle$, then $\mathrm{step} \,M$ takes the value $-1$ to the left of $a$ and $+1$ to the right of $a$, and takes the value $0$ at $a$. If it's still not clear what $\mathrm{step}\, M$ looks like in general, think about the multiset $M = \langle a \rangle + \langle b \rangle$ in the cases $a < b$ and $a = b$ respectively.
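(An illustration added in editing.) A small Python sketch of Definition 6, with the multiset represented as a plain list with repetitions; it also checks the product-of-signs formula of Proposition 7 below on a few sample points:

```python
import math

def step(M, x):
    # Definition 6: 0 on elements of the multiset, otherwise (-1)**(number of elements above x)
    if x in M:
        return 0
    return (-1) ** sum(1 for a in M if a > x)

def step_via_signs(M, x):
    # Proposition 7: the same value as a product of sgn(x - a) over the multiset
    prod = 1.0
    for a in M:
        prod *= 0.0 if x == a else math.copysign(1.0, x - a)
    return int(prod)

M = [1, 2, 2, 5]                                # a multiset: the element 2 has multiplicity two
for x in [0, 1, 1.5, 2, 3, 5, 6]:
    assert step(M, x) == step_via_signs(M, x)
    print(x, step(M, x))
# note the double element at 2: the value is -1 on both sides of it (no sign change there)
```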
Proposition 7. If $M$ is a multiset of real numbers, then $$(\mathrm{step} \,M)(x) = \prod_{a \in M} \mathrm{sgn}(x-a)$$
Proof. There are two cases.
First case. Assume $x \in M$, this just means that $\exists a \in M : x = a$. Hence $\exists a \in M : x - a = 0$. Hence $\exists a \in M : \mathrm{sgn}(x - a) = 0.$ Hence $$\prod_{a \in M} \mathrm{sgn}(x-a) = 0.$$ Thus $$\prod_{a \in M} \mathrm{sgn}(x-a) = (\mathrm{step} \,M)(x),$$ as required.
Second case. Assume $x \notin M$. Then $\forall a \in M : x \neq a$. Thus $\forall a \in M : x - a \neq 0$. Thus $\forall a \in M : \mathrm{sgn}(x - a) \in \{-1,1\}$. Thus $$\prod_{a \in M} \mathrm{sgn}(x - a) = \prod_{a \in M : \mathrm{sgn}(x - a) = -1} -1 = \prod_{a \in M : a>x} -1.$$ Thus $$\prod_{a \in M} \mathrm{sgn}(x-a) = (\mathrm{step} \,M)(x),$$ as required. $$\tag*{$\blacksquare$}$$
Theorem 8. If $P$ is a non-zero univariate polynomial with real coefficients, then: $$\mathrm{sgn}(P(x)) = \mathrm{sgn}(\mathrm{lc}(P)) \cdot (\mathrm{step} \,\mathrm{roots}\, P)(x)$$
Proof. Since $P$ is a non-zero univariate polynomial with real coefficients, there exists a nowhere-vanishing univariate polynomial $Q$ with real coefficients satisfying the following equation: $$P(x) = Q(x)\prod_{a \in \mathrm{roots}\, P} (x - a).$$ Taking $\mathrm{sgn}$ of both sides, we see that $$\mathrm{sgn}(P(x)) = \mathrm{sgn}(Q(x))\prod_{a \in \mathrm{roots}\, P} \mathrm{sgn}(x - a).$$ By Proposition 7, this implies $$\mathrm{sgn}(P(x)) = \mathrm{sgn}(Q(x))(\mathrm{step} \,\mathrm{roots}\, P)(x).$$
Hence it's enough to show that $\mathrm{sgn}(Q(x)) = \mathrm{sgn}(\mathrm{lc}(P)).$ By Proposition 1/Definition 2, we know that $\mathrm{sgn}(Q(x)) = \mathrm{usgn}(Q),$ and by Proposition 5, we know that $\mathrm{usgn}(Q) = \mathrm{sgn}(\mathrm{lc}(Q)).$ Hence it's enough to show that $$\mathrm{sgn}(\mathrm{lc}(P)) = \mathrm{sgn}(\mathrm{lc}(Q)).$$ Thus it's enough to show that $\mathrm{lc}(P) = \mathrm{lc}(Q)$. Recall also that $$P(x) = Q(x)\prod_{a \in \mathrm{roots}\, P} (x - a).$$ Hence $$\mathrm{lc}(P) = \mathrm{lc}(Q)\prod_{a \in \mathrm{roots}\, P} \mathrm{lc}(x \mapsto x - a).$$ Thus $$\mathrm{lc}(P) = \mathrm{lc}(Q)\prod_{a \in \mathrm{roots}\, P} 1.$$ Thus $\mathrm{lc}(P) = \mathrm{lc}(Q)$, as desired. $$\tag*{$\blacksquare$}$$
edited Jan 10 at 4:44
goblin
Structural change, marginal land and economic development in Latin America and the Caribbean
Edward B. Barbier1 &
John S. Bugas1
Latin American Economic Review volume 23, Article number: 3 (2014)
The Erratum to this article has been published in Latin American Economic Review 2014 23:7
Empirical evidence indicates that in Latin America and the Caribbean, households on less favored, or marginal, agricultural land form a "residual" pool of rural labor. Although the modern sector may be the source of dynamic growth through learning-by-doing and knowledge spillovers, patterns of labor, land and other natural resources use in the rural economy matter in the overall dynamics of structural change. The concentration of rural populations on marginal land is essentially a barometer of economy-wide development. As long as there is abundant marginal land for cultivation, they serve to absorb rural migrants, increased population, and displaced unskilled labor from elsewhere in the economy. Moreover, the economy is vulnerable to the "Dutch disease" effects of a booming primary products sector. As a consequence, productivity increases and expansion in the commercial primary production sector will cause manufacturing employment and output to contract, until complete specialization occurs. Avoiding such an outcome and combating the inherent dualism of the economy require both targeted polices for the modern sector and traditional agriculture on marginal land.
The purpose of the following paper is to show that structural transformation in Latin American and Caribbean (LAC) economies depends crucially on the pattern of production, land expansion and resource use in the rural economy. That is, even if the modern sector is the source of dynamic growth through learning-by-doing and knowledge spillovers, how labor and land (including natural resources) are utilized in the rural economy matters to the overall dynamics of structural change. In particular, in Latin America and the Caribbean, the rural economy comprises two separate sectors that exhibit distinctly different patterns of labor, land and natural resource use. One sector consists of commercially oriented activities that convert and exploit available land and natural resources for a variety of traded primary product outputs. The other sector contains smallholders employing traditional methods to cultivate less favorable agricultural land.
As a consequence, the rural economy displays Ricardian land surplus conditions, as first identified by Hansen (1979).Footnote 1 Because there is an unlimited supply of marginal land with negligible productivity, smallholders practicing traditional agriculture earn no rents. Real wages are invariant to rural employment and determined by the average product of labor. The result is that the more productive and dynamic modern sector competes with the commercial primary production sector for available labor, with marginal land absorbing the residual.
Essentially, the concentration of rural populations on less favored, or marginal, agricultural land is the barometer of economy-wide development. As long as there is abundant marginal land for cultivation, it absorbs rural migrants, population increases and displaced unskilled labor from elsewhere in the economy. Moreover, the economy is vulnerable to the type of "Dutch disease" effects of a booming primary products sector first analyzed by Matsuyama (1992), and also observed for LAC economies (Astorga 2010; Barbier 2004; López 2003; Maloney 2002). Rising commodity prices will cause manufacturing employment and output to contract while the primary sector expands, until complete specialization occurs. Avoiding such an outcome and combating the inherent dualism of the economy require both targeted polices for the modern sector and traditional agriculture on marginal land.
The paper is organized as follows. The next section provides evidence on the two key stylized facts of land use and rural poverty in LAC countries. The subsequent section develops a dual economy model, with the rural sector displaying Ricardian "land surplus" conditions. The influence of primary product price booms and the implementation of targeted policies for the modern sector and traditional agriculture on marginal land are then analyzed. Finally, the paper includes an empirical analysis of long-run growth over 1990–2011 for 35 LAC economies to test some of the predictions of the dual economy model.
Land use and rural poverty in Latin America
Land use change is critically bound with the pattern of economic development in Latin America and the Caribbean. There are two aspects to this pattern. First, the economies of the region are still largely dependent on primary products for their export earnings. For LAC countries, 55.3 % of merchandise exports consist of primary products, which comprise agricultural raw materials (1.7 %), food (17.9 %), fuel (23.1 %), and ores and metals (12.6 %) (World Bank 2013). Despite the region's efforts to diversify exports, for many LAC economies one or two major primary products still accounts for a significant share of total exports (Jiménez and Tromben 2006). Second, over the past 50 years, and especially in LAC developing economies, cropland area has continued to expand (see Fig. 1). In the region, tropical forests were the primary sources of new agricultural land in the 1980s and 1990s (Gibbs et al. 2010). That trend appears to have continued since 1990 (Fig. 1). Despite some reforestation in the region, net forest loss over 2001–2010 amounted to over 179,000 km2 (Aide et al. 2013). Extensive conversion of forests, wetlands and other non-cultivated land is expected to continue through 2030, as new land is required for agricultural and biofuel crops, grazing pasture, industrial forestry, and to replace land lost to degradation (Lambin and Meyfroidt 2011).
Long-run land use change in Latin American and Caribbean, 1961–2011. Source World Bank (2013)
For Latin America and the Caribbean, the current pattern of resource and land use also has two important implications for the structure of the rural sector.
First, expansion of less favored, or "marginal", agricultural land is occurring primarily to meet the subsistence and near-subsistence needs of rural households. That is, many of the region's rural poor continue to be concentrated in less ecologically favored and remote areas, such as converted forest frontier areas, poor quality uplands, converted wetlands, and similar land with limited agricultural potential (Barbier 2010, 2012; Comprehensive Assessment of Water Management in Agriculture 2007; CPRC 2004; IFAD 2010; World Bank 2003, 2008). This is not a new phenomenon; as noted by Coxhead et al. (2002, p. 345), "the land frontier has long served as the employer of last resort for underemployed, unskilled labor". For many LAC countries, this process has long been a major structural feature (Aldrich et al. 2006; Barbier 2011; Borras et al. 2012; Browder et al. 2008; Carr 2009a, b; Caviglia-Harris et al. 2013; Mueller 1997; Pacheco et al. 2011; Pichón 1997; Solís et al. 2009). Population increases, rural migration and other economic pressures mean that marginal land expansion will continue to absorb the growing number of rural households in the region. The result is that the rural poor located on marginal and low productivity agricultural land typically employ traditional farming methods, earn negligible land rents or profits, and have inadequate access to transport, infrastructure and markets (Aldrich et al. 2006; Barbier 2010, 2012; Banerjee and Duflo 2007; Browder et al. 2008; Caviglia-Harris and Harris 2008; Caviglia-Harris et al. 2013; IFAD 2010; Jalan and Ravallion 1997; Solís et al. 2009).
Second, less favorable agricultural land may be an important outlet for the rural poor, but increasingly commercially oriented economic activities are responsible for much of the resource exploitation and expansion of the agricultural land base that is occurring in LAC economies (Aide et al. 2013; Borras et al. 2012; Boucher et al. 2011; Chomitz et al. 2007; DeFries et al. 2010; FAO 2006; Pacheco et al. 2011; Rudel 2007). The primary product activities responsible for extensive land conversion include plantation agriculture, ranching, forestry and mining activities, and often result in export-oriented extractive enclaves with little or no forward and backward linkages to the rest of the economy (Barbier 2011; Borras et al. 2012; Bridge 2008; Jiménez and Tromben 2006; Pacheco et al. 2011; van der Ploeg 2011). In addition, developing countries have been actively promoting these commercial activities as a means to expand the primary products sector, especially in the land and resource abundant regions of Latin America (Borras et al. 2012; Deininger and Byerlee 2012; Pacheco et al. 2011; Rudel 2007). The result is that many LAC economies still depend on the exploitation of natural resources and are unable to diversify from primary production as the dominant economic sector (Astorga 2010; Jiménez and Tromben 2006).
Table 1 indicates the link between low levels of GDP per capita, poverty, the concentration of rural populations on less favored agricultural land and resource dependency for LAC economies. All data in the table are for 2011, or the latest year, with the exception of less favored agricultural land, which is a 2010 estimate. On average across 35 LAC countries, real GDP per capita is $6,007, 25.3 % of the rural population is located on less favored agricultural land, the poverty rate is 41.2 %, the share of primary products in total exports is 63.5 %, and 20.8 % of the workforce is in industry. However, the table also confirms that the lower income economies, with real GDP per capita below $6,000, have more of their rural populations concentrated on less favored agricultural land, display higher poverty rates, and are more resource dependent, in terms of a higher share of primary product exports but a lower share of industry employment. The lowest income countries, with less than $3,000 real GDP per capita, have the highest concentration of rural populations on less favored agricultural land and the greatest poverty rates.
Table 1 Population on marginal land, GDP per capita, poverty and resource dependency
Figure 2 confirms the negative correlation between levels of GDP per capita (in 2011 or latest year) and the share of rural populations on less favored agricultural land across Latin America and the Caribbean (in 2010). Those LAC economies that are relatively poorer tend to have more of the rural population concentrated on marginal agricultural land, whereas the rich LAC countries tend to have lower concentrations of their rural population on less favored agricultural land.
GDP per capita and population on marginal land, LAC economies. Notes GDP per capita (constant 2000 US$), 2011 or latest year, is from World Bank (2013). Less favored agricultural land consists of irrigated land on terrain greater than 8 % median slope; rainfed land with a length of growing period (LGP) of more than 120 days but either on terrain greater than 8 % median slope or with poor soil quality; semi-arid land (land with LGP 60–119 days); and arid land (land with LGP <60 days). Estimates are for 2010 and based on the GAEZ dataset. Number of observations = 35 countries. Average (median) real GDP per capita is $6,007 ($4,845). Average (median) share of rural population on less favored agricultural land is 25.3 % (25.1 %). Pairwise correlation coefficient r = −0.41
In sum, the pattern of land use and expansion prevalent throughout much of the LAC region is symptomatic of a dualistic rural economy. That is, the rural economy of many LAC countries contains both a traditional sector that converts and exploits available land to produce a non-traded agricultural output, and a fully developed, commercially oriented sector that converts and exploits available land and natural resources for a variety of traded outputs. The latter includes plantation agriculture, ranching, forestry and mining activities. In addition, the traditional agricultural sector is dominated by farm holdings that occupy marginal or ecologically fragile land with poor land quality and productivity potential. Although these two types of economic activities differ significantly and may also be geographically separated, they are linked by labor use, as the rural populations on marginal land form a large pool of surplus unskilled labor that can be employed in commercial primary production activities. This linkage is important not only to the dynamics of land expansion and use within economies but also to the overall structure of economic development (Barbier 2013).
A model of a land surplus rural economy
Following the above discussion of land use change in Latin America and the Caribbean, it is assumed that the rural economy displays land surplus characteristics. In addition, it comprises two separate sectors that exhibit distinctly different patterns of labor, land and natural resource use. One sector consists of commercially oriented activities that convert and exploit available land and natural resources for a variety of traded primary product outputs. Land and other natural resources are sufficiently abundant for use in primary production, but can only be appropriated through employing an increasing amount of labor for this purpose. The other rural sector contains smallholders employing traditional methods to cultivate less favorable agricultural land. For these smallholders, land is also abundant but of extremely poor quality for agricultural production. There is perfect labor mobility throughout the dualistic rural economy.
In addition to the rural economy, there is a modern or leading sector. Following models of structural transformation in developing countries, "the modern sector basically comprises industry along with parts of agriculture and services" (Ocampo et al. 2009, p. 122). Firms in this sector employ capital and labor, innovate through learning-by-doing technological change and generate knowledge spillovers. There is perfect labor mobility between the rural economy and the modern sector.
Commercial primary production
In this rural sector, production of the primary product (plantation crops, timber, beef, mineral, etc.) depends directly on inputs of land and/or natural resources N1 and labor L1; any capital input is fixed and fully funded out of normal profits. Primary production Q1 is determined by a function with the normal concave properties and is homogeneous of degree one
$$ Q_{1} = F\left( {N_{1} ,L_{1} } \right),F_{i} > 0,F_{ii} < 0,\quad i = N,L. $$
The commercial activity can obtain more land or natural resources (hereafter referred to as "resource") for primary production, but only by employing and allocating more labor for this purpose. It is assumed that increasing N1 requires a rising input of L1
$$ L_{1} = z\left( {N_{1} } \right),z^{{\prime }} > 0,z^{{\prime \prime }} \ge 0, $$
where \( z^{\prime}\left( {N_{1} } \right) \) is the marginal labor requirement of obtaining and transforming a unit of the resource input, which is a convex function of the amount of N1 appropriated.
Letting p1 be the world price of traded primary products and w the wage rate, it follows that total profits are \( \pi = L_{1}\left[ p_{1}f\left( n_{1} \right) - w \right] = L_{1}\left[ p_{1}f\left( z^{-1}\left( L_{1} \right)/L_{1} \right) - w \right] \), where \( n_{1} = N_{1}/L_{1} \) and \( f\left( n_{1} \right) = F\left( N_{1}/L_{1},1 \right) \). Profit-maximizing, therefore, leads to
$$ f\left( {n_{1} } \right) + f_{N} (n_{1} )\left[ {\frac{1}{{z^{{\prime }} \left( {N_{1} } \right)}} - n_{1} } \right] = f\left( {n_{1} } \right) - f_{N} (n_{1} )n_{1} \left[ {1 - \varepsilon \left( {N_{1} } \right)} \right] = \frac{w}{{p_{1} }},\quad 0 < \varepsilon \left( {N_{1} } \right) < 1, $$
where \( \varepsilon \left( {N_{1} } \right) \equiv \frac{{\partial N_{1} }}{{\partial L_{1} }}\frac{{L_{1} }}{{N_{1} }} = \frac{1}{{z^{{\prime }} n_{1} }} \) is the elasticity of resource conversion, i.e., the percentage increase in resources appropriated for primary production in response to proportionately more labor devoted to this purpose. It is assumed that the normal case is \( 0 < \varepsilon \left( {N_{1} } \right) < 1 \).Footnote 2 Condition (3) indicates that labor will be used in commercial primary production activities until its value marginal productivity equals the wage rate. As labor is used for both appropriating resources and production, the wage rate will be higher than if resources were fixed in supply and labor were used only for production (\( \varepsilon \left( {N_{1} } \right) = 0 \)). Because in (3) the value marginal productivity of labor declines with respect to L1, the wage rate is a decreasing function of labor employed in the primary product sector.
Traditional agriculture on marginal land
Production of non-traded agricultural output on less favored, or marginal, land also involves two inputs, land \( \left( {N^{m} } \right) \) and labor \( (L^{m} ) \); any capital input is fixed and fully funded out of normal profits. Both land and labor are required for traditional agricultural production, \( Q^{m} \), which is determined by the following linearly homogeneous function
$$ Q^{m} = G\left( {N^{m} ,L^{m} } \right),G_{i} \ge 0,G_{ii} < 0,\quad i = N,L $$
Note that the marginal productivity of land is not necessarily positive, and for less favored agricultural land it is assumed to be zero. This Ricardian surplus land condition follows from the assumption that poor quality marginal land is unproductive in cultivation (Hansen 1979). That is, for traditional agriculture on less favored land \( G_{N} = 0 \), and because (3) does not apply to marginal land conversion, equilibrium is determined by
$$ g_{N} \left( n^{m} \right) = 0, \quad q^{m} = g\left( n^{m} \right) = \frac{w}{p^{m}}, \quad n^{m} = \frac{N^{m}}{L^{m}}, \quad q^{m} = \frac{Q^{m}}{L^{m}} = G\left( N^{m}/L^{m},1 \right). $$
The result is that there are no diminishing returns to labor in the use of less favored land for agricultural production. Real wages are invariant to rural employment and determined by the average product of labor. Moreover, the condition of zero marginal productivity fixes the land–labor ratio on less favored agricultural land, which can be designated as \( n^{m} \). Given the average product of labor relationship in (5), the fixed land–labor ratio will determine the nominal wage rate w for the predetermined output price pm. Thus, the best that rural households on less favored agricultural land can do is either to sell their labor to each other and obtain an equilibrium real wage \( w/p^{m} \), or alternatively, farm their own plots of land and earn the same real wage. Since there is little advantage in selling their labor, households will tend to use their labor to farm their own land. Hence, under this marginal land condition, small family farms consuming their own production will predominate. Unless the population increases, no more land will be brought into production and there will be a surplus of unfarmed less favored land.
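To make the wage-setting mechanism concrete, the sketch below assumes a hypothetical quadratic per-worker technology on marginal land, \( g(n) = An - \tfrac{B}{2}n^{2} \), so that the marginal product of land falls to zero at a finite land–labor ratio; the functional form and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the Ricardian surplus-land condition (5).
# Hypothetical per-worker technology on marginal land: g(n) = A*n - (B/2)*n**2,
# so the marginal product of land g'(n) = A - B*n reaches zero at n_m = A/B.
A, B = 2.0, 4.0      # illustrative technology parameters (assumptions, not from the paper)
p_m = 1.0            # price of the non-traded traditional output

n_m = A / B                          # land-labor ratio fixed by g'(n_m) = 0
g_nm = A * n_m - 0.5 * B * n_m**2    # average product of labor on marginal land
w = p_m * g_nm                       # economy-wide nominal wage, from condition (5)

print(f"fixed land-labor ratio n_m = {n_m:.2f}, nominal wage w = {w:.2f}")
```

Note that w does not depend on \( L^{m} \): however much residual labor settles on marginal land, it earns the same average-product wage, which is the surplus-labor property used in the rest of the model.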
Modern sector
The modern sector, which includes industry but also technically advanced agriculture and services, has labor-augmenting technology that benefits from learning-by-doing and knowledge spillovers. Production in the sector depends on both unskilled labor and capital, which could also comprise human capital (skills). For the representative firm, an increase in the firm's capital stock leads to a parallel increase in its stock of knowledge. Each firm's knowledge is a public good, however, that any other firm can access at zero cost.
For the representative ith firm, output Q2i is produced by hiring capital K2i and labor L2i, and A2i is the amount of labor-augmenting technology available to the firm. But, with the presence of learning-by-doing and knowledge spillovers, \( A_{2i} = K_{2} = \sum\limits_{i} {K_{2i} } \), and the representative firm's production function is
$$ Q_{2i} = H\left( {K_{2i} ,A_{2i} L_{2i} } \right) = H\left( {K_{2i} ,K_{2} L_{2i} } \right),H_{j} > 0,H_{jj} < 0,\quad j = K,L. $$
Production of the firm displays diminishing returns to its own stock of capital K2i, provided that K2 and L2i are constant. However, if each producer in the sector expands its own capital, then K2 will rise and produce a spillover benefit that increases the productivity of all firms, which is the increasing returns effect. Each firm's production is nonetheless homogeneous of degree one with respect to its own capital K2i and labor L2i, and if K2i and K2 expand together by the same amount while L2i is fixed, production also displays constant returns to scale.
For each firm, the total capital stock K2 of the modern sector is exogenously determined. In addition, assume that the output of each firm is a homogeneous product with a given price p2. If all firms make the same choices so that k2i = k2 and K2 = k2L2, then profit-maximizing by each firm yields
$$ p_{2} h_{K} \left( {k_{2} ,K_{2} } \right) = p_{2} \left[ {\tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right)} \right] = r,\quad q_{2} = \frac{{Q_{2} }}{{L_{2} }} = h\left( {k_{2} ,K_{2} } \right) $$
$$ h\left( {k_{2} ,K_{2} } \right) - k_{2} h_{K} \left( {k_{2} ,K_{2} } \right) = K_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right) = \frac{w}{{p_{2} }} $$
where use is made of the following expressions for the average product of capital \( \tilde{h}\left( {L_{2} } \right) \) and private marginal product of capital \( h_{K} \left( {k_{2} ,K_{2} } \right) \), respectively.
$$ \frac{{h\left( {k_{2} ,K_{2} } \right)}}{{k_{2} }} = \tilde{h}\left( {\frac{{K_{2} }}{{k_{2} }}} \right) = \tilde{h}\left( {L_{2} } \right),h_{K} \left( {k_{2} ,K_{2} } \right) = \tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right),\tilde{h}^{{\prime }} \left( {L_{2} } \right) > 0,\tilde{h}^{{\prime \prime }} \left( {L_{2} } \right) < 0. $$
Condition (7) indicates that the value marginal productivity of capital for a modern sector firm equals the interest rate, r. Condition (8) indicates that the value marginal productivity of labor employed by a modern sector firm equals the real wage rate.
Both the private marginal product of capital and average product of capital are invariant with respect to the capital–labor ratio because learning-by-doing and spillovers eliminate diminishing returns to capital. As (9) indicates, the private marginal product of capital is less than the average product of capital. The private marginal product of capital is increasing in L2, given \( \tilde{h}^{{\prime \prime }} \left( {L_{2} } \right) < 0 \). These results (7)–(9) for production and input use involving learning-by-doing and knowledge spillovers are standard for these types of relationships (Barro and Sala-I-Martin 2004).
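As a concrete check on these relationships, the symbolic sketch below adopts one illustrative functional form, the familiar Cobb–Douglas learning-by-doing specification \( H(K_{2i},A_{2i}L_{2i}) = K_{2i}^{\alpha}\left( A_{2i}L_{2i} \right)^{1-\alpha} \) with \( A_{2i} = K_{2} \); this choice of H is an assumption made here for illustration only, not a form used in the paper.

```python
# Symbolic check of (7)-(9) for an assumed Cobb-Douglas spillover technology
# H(K2i, A2i*L2i) = K2i**alpha * (K2*L2i)**(1-alpha); this form is illustrative only.
import sympy as sp

a, K2i, L2i, k2, L2 = sp.symbols('alpha K_2i L_2i k_2 L_2', positive=True)
K2 = k2 * L2                         # aggregate capital when all firms are identical
Q2i = K2i**a * (K2 * L2i)**(1 - a)   # firm output with the knowledge spillover A2i = K2

mpk = sp.diff(Q2i, K2i)              # private marginal product of capital
mpl = sp.diff(Q2i, L2i)              # private marginal product of labor

# Evaluate at the symmetric equilibrium K2i = k2*L2i and compare with (9) and (8):
mpk_eq = sp.simplify(mpk.subs(K2i, k2 * L2i))
mpl_eq = sp.simplify(mpl.subs(K2i, k2 * L2i))
h_tilde = L2**(1 - a)                # average product of capital h~(L2) implied by this form

print(sp.simplify(mpk_eq - (h_tilde - L2 * sp.diff(h_tilde, L2))))  # 0: h_K = h~ - L2*h~'
print(sp.simplify(mpl_eq - K2 * sp.diff(h_tilde, L2)))              # 0: w/p2 = K2*h~'
```

With this form \( \tilde{h}(L_{2}) = L_{2}^{1-\alpha} \), a shorthand reused in the numerical sketches below.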
Primary production trade, growth dynamics and labor market equilibrium
Because condition (5) indicates that the fixed land–labor ratio on less favored land nm determines the nominal wage rate w, the rural economy is recursive with respect to resource use, labor and output in the primary production sector. If the elasticity of resource conversion \( \varepsilon \left( {N_{1} } \right) \) is constant, p1 given and w known, then (3) yields the resource–labor ratio n1 for primary production. With n1 determined, the relationship \( \varepsilon = 1/\left( z^{\prime} n_{1} \right) \) can be solved for resource conversion and use N1. Employment L1 then follows from (2), and primary production Q1 from (1).
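The sketch below runs through this recursion numerically under hypothetical functional forms, \( f(n_{1}) = n_{1}^{\alpha} \) and \( z(N_{1}) = z_{0}N_{1}^{b} \) with \( b > 1 \), which imply a constant conversion elasticity \( \varepsilon = 1/b \); the forms and parameter values are assumptions for illustration, not the paper's.

```python
# Recursive solution of the commercial primary sector under assumed functional forms:
# f(n1) = n1**alpha and z(N1) = z0*N1**b with b > 1, so eps = 1/b is constant.
alpha, z0, b = 0.4, 2.0, 1.5   # illustrative parameters (assumptions, not from the paper)
p1, w = 2.0, 1.0               # world price of the primary product; wage fixed by (5)
eps = 1.0 / b                  # elasticity of resource conversion

# Condition (3): f(n1) - f_N(n1)*n1*(1 - eps) = w/p1
# => n1**alpha * (1 - alpha*(1 - eps)) = w/p1
n1 = (w / (p1 * (1.0 - alpha * (1.0 - eps))))**(1.0 / alpha)

# eps = 1/(z'(N1)*n1) with z'(N1) = z0*b*N1**(b-1)  =>  solve for resource use N1
N1 = (1.0 / (eps * n1 * z0 * b))**(1.0 / (b - 1.0))

L1 = z0 * N1**b            # labor absorbed by appropriation and production, from (2)
Q1 = L1 * n1**alpha        # primary output, from (1)

print(f"n1 = {n1:.3f}, N1 = {N1:.2f}, L1 = {L1:.2f}, Q1 = {Q1:.2f}")
```

With these forms the computed values satisfy \( n_{1} = N_{1}/L_{1} \) by construction, so the recursion is internally consistent; a higher p1 lowers w/p1 and hence n1, raising N1, L1 and Q1.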
Primary products are exported, and p1 is the given world price for these commodities. These products are exchanged for imports M, which are substitutes for consumption of domestic output from the modern sector. The balance of trade is
$$ pQ_{1} = M,\quad p = \frac{{p_{1} }}{{p_{2} }}, $$
where p is the terms of trade, expressed in terms of modern sector commodities as the numeraire. Note that, because p is given and Q1 known, imports to the small economy are recursively determined.
If there is no population growth, the representative household seeks to maximize its discounted flow of welfare over time as given by \( U = \int_{0}^{\infty } {\left[ {\frac{{\left( {c^{1 - \theta } + m^{1 - \theta } } \right) - 1}}{1 - \theta }} \right]e^{ - \rho t} {\text{d}}t} \) subject to the budget constraint \( \dot{a} = ra + w - c \), where m is per capita imports, a is the household's assets per person, r is the interest rate, w the wage rate, ρ is the rate of time preference, and 1/θ is the intertemporal elasticity of substitution.
However, as imports are determined by the primary products balance of trade condition (10), the household is free to choose only its per capita consumption. As shown in the "Appendix", the growth dynamics of the modern sector and thus the economy are governed by
$$ \dot{k}_{2} = p_{2} \tilde{h}\left( {L_{2} } \right)k_{2} - c,\quad k_{2} \left( 0 \right) = k_{20} $$
$$ c\left( t \right) = \varphi k_{2} \left( t \right),\quad \varphi = p_{2} \tilde{h}\left( {L_{2} } \right) - \gamma $$
$$ \frac{{\dot{q}_{2} }}{{q_{2} }} = \frac{{\dot{k}_{2} }}{{k_{2} }} = \frac{{\dot{c}}}{c} = \gamma ,\quad \gamma = \frac{1}{\theta }\left( {p_{2} \left[ {\tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right)} \right] - \rho } \right) $$
Equation (11) is the usual condition for capital accumulation in an economy. If output per capita, valued at the price p2, exceeds consumption, capital per person will increase. Condition (12) indicates that per capita consumption is proportional to capital per person. Consequently, as (13) depicts, capital and output per worker in the modern sector grow at the same (constant) rate as consumption per capita. The per capita growth rate, γ, is determined by the total number of workers employed in the sector, L2. An expansion (contraction) in the aggregate modern sector labor force, L2, therefore, increases (decreases) per capita growth in this sector.
With the nominal wage determined by the fixed land–labor ratio on less favored agricultural land, the value marginal productivity condition (8) for the modern sector must equal w. However, suppose that initially capital in the sector is some given level K20. Equilibrium employment must, therefore, be the unique solution to \( \tilde{h}^{\prime}\left( L_{2} \right) = w/\left( p_{2}K_{20} \right) \). It is possible that this level of employment is large enough so that growth of the modern sector is positive, i.e., \( \gamma > 0 \). But this requires a relatively large initial stock of aggregate capital for the modern sector, as the equilibrium employment condition implies that more L2 requires a higher K20. For most LAC economies, the initial stock of aggregate capital in the modern sector is likely to be relatively small rather than large. Thus, it follows that employment L2 will also be small, and if this is the case, it is more likely that (13) will yield \( \gamma \le 0 \). If it turns out that \( \gamma < 0 \), then the capital–labor ratio and aggregate capital will decline, employment will fall and the modern sector will contract.
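The dependence of modern-sector growth on the initial capital stock can be illustrated with the same assumed technology \( \tilde{h}(L_{2}) = L_{2}^{1-\alpha} \); all parameter values below are hypothetical.

```python
# Sketch: equilibrium modern-sector employment and per capita growth for a given K20,
# under the assumed technology h~(L2) = L2**(1-alpha). Illustrative parameters only.
alpha = 0.4                 # illustrative capital share
p2, w = 1.0, 1.0            # modern-sector price and the wage fixed by marginal land
theta, rho = 2.0, 0.03      # hypothetical preference parameters

def employment_and_growth(K20):
    # Condition (8): p2*K20*h~'(L2) = w, with h~'(L2) = (1 - alpha)*L2**(-alpha)
    L2 = (p2 * K20 * (1 - alpha) / w)**(1 / alpha)
    # Condition (13): gamma = (1/theta)*(p2*[h~(L2) - L2*h~'(L2)] - rho)
    gamma = (p2 * alpha * L2**(1 - alpha) - rho) / theta
    return L2, gamma

for K20 in (0.05, 0.2, 1.0):          # small versus large initial capital stocks
    L2, gamma = employment_and_growth(K20)
    print(f"K20 = {K20:4.2f}:  L2 = {L2:8.5f},  gamma = {gamma:+.3f}")
```

In this parameterization the two smaller capital stocks deliver \( \gamma < 0 \) and only the largest delivers \( \gamma > 0 \), mirroring the argument in the text.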
It is also possible that the modern sector neither expands nor contracts. For example, with w predetermined and for a given K20, the equilibrium L2 that satisfies (8) is just sufficient to ensure \( \gamma = 0 \) in (13). This outcome ensures constant employment and aggregate capital in the modern sector, and thus an equilibrium output level Q2. Such a steady-state result is depicted in Fig. 3.
Fig. 3 Labor market equilibrium and growth in the land surplus economy
The total labor force in the developing economy is \( L = L_{1} + L_{2} + L^{m} \). With employment in primary production and the modern sector known, the residual labor on marginal land Lm can be found. As nm is already known, the total marginal land used in traditional agriculture Nm is determined. Thus, the full labor market equilibrium corresponds to
$$ p_{1} \left[ {f\left( {n_{1} } \right) - f_{N} \left( {n_{1} } \right)n_{1} \left( {1 - \varepsilon } \right)} \right] = p_{2} K_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right) = p^{m} g\left( {n^{m} } \right) = w. $$
As described previously, the average productivity of labor on marginal land determines the equilibrium wage rate in the economy, and employment in both the primary production and modern sectors equates their respective marginal productivities with w. In addition, the amount of labor employed in the modern sector L2 must correspond to \( \gamma = 0 \). The solid lines in Fig. 3 depict the labor market equilibrium for the economy and the corresponding zero growth rate for the modern sector.
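Closing the system is then a matter of accounting: once L1 and L2 are determined, marginal land absorbs the residual labor at the fixed ratio \( n^{m} \). The snippet below uses purely illustrative numbers.

```python
# Residual absorption on marginal land implied by the labor market equilibrium (14).
L_total = 100.0       # total labor force L (illustrative)
L1, L2 = 15.5, 12.0   # primary-sector and modern-sector employment (illustrative)
n_m = 0.5             # fixed land-labor ratio on marginal land, from condition (5)

L_m = L_total - L1 - L2   # residual labor in traditional agriculture
N_m = n_m * L_m           # less favored land brought into cultivation
print(f"L_m = {L_m:.1f} workers on marginal land, cultivating N_m = {N_m:.2f} units of land")
```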
The equilibrium outcome indicated in Fig. 3 is not very optimistic. Although some labor may be employed in the modern sector, a constant capital stock eliminates any productivity gains from spillover and learning-by-doing in the sector. As a consequence, the modern sector competes with the commercial primary production sector for available labor, with marginal land absorbing the residual. Without the dynamic productivity effects of positive growth, the modern sector does not generate a self-reinforcing labor absorption process that leads workers to shift from the rural economy to this sector. The economy remains fundamentally dualistic; commercial primary production and a static modern sector are the two principal sectors, with less favored agricultural land absorbing the remaining rural households. This latter process is a key structural feature of the land surplus rural economy. The concentration of the rural populations on marginal land is essentially a barometer of economy-wide development. As long as there is abundant less favored land for cultivation, it absorbs rural migrants, population increases and displaced unskilled labor from elsewhere in the economy.
On the other hand, the rural populations on less favored agricultural land can also be thought of as a large pool of unskilled surplus labor that, under the right conditions, could potentially be absorbed by the commercial primary production and modern sectors. These conditions are explored in turn.
Primary product price boom
Rising commodity prices frequently lead to expansion of the commercial primary product sector of developing countries (Barbier 2005, 2012; Deininger and Byerlee 2012; van der Ploeg 2011). In such instances, commodity price booms can provide some employment opportunities for low-skilled labor, and thus alleviate the pressure on marginal land less suitable for agriculture by smallholders. An example from Colombia illustrates that, if such employment opportunities are sufficiently large and sustained, they can actually reduce long-term less favored land expansion. In Colombia, since 1970 high-input, intensified, highly mechanized cropping on the most suitable land, as well as expansion in cattle grazing, has drawn labor from more traditional agriculture, so that areas of marginal land are slowly being abandoned and re-vegetating (Etter et al. 2008).
The labor, resource and less favored land impacts of a commodity price boom can be illustrated with the land surplus model. If p1 rises, then real wages in the commercial primary products sector \( w/p_{1} \) fall. The result is increased demand for labor L1 in primary production. From (3), it follows that the resource–labor ratio for primary production n1 must decline. However, from (2), attracting additional labor to the sector will lead in turn to more resource conversion. In order for n1 to fall, the rise in L1 must exceed the increase in N1. With real wages unchanged in the modern sector, the increase in L1 can come only from reducing labor on less favored land. The fall in Lm must be accompanied by a proportionate decline in Nm in order to keep the fixed land–labor ratio on marginal land. Thus, the increase in employment, resource use and output in the primary production sector in response to the rise in p1 will reduce labor, cultivation and production on less favored land. In Fig. 3, this outcome is represented by the dotted lines that indicate the shifting out of the marginal productivity curve for labor use in primary production.
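Under the illustrative functional forms used in the earlier primary-sector sketch (again assumptions for exposition, not the paper's), these comparative statics can be read off in closed form:

$$ n_{1} = \left( \frac{w}{p_{1}\left[ 1-\alpha \left( 1-\varepsilon \right) \right]} \right)^{1/\alpha},\qquad N_{1}^{\,b-1} = \frac{1}{\varepsilon z_{0}b\,n_{1}},\qquad L_{1} = z_{0}N_{1}^{\,b} = \frac{N_{1}}{n_{1}}, $$

so a rise in p1 lowers n1, which raises N1 and raises \( L_{1} = N_{1}/n_{1} \) proportionately more than N1, while the residual Lm, and with it Nm, falls.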
Of course, if the price of primary products falls, the opposite occurs. The commercial-oriented primary sector contracts, L1 falls and the resulting surplus labor is absorbed on less favored land. The result of the rise in Lm is more land conversion until the land–labor ratio on marginal land returns to nm. Once again, less favored land expansion serves as an outlet for residual rural labor, in this case labor displaced from commercial primary production as the result of a commodity price "bust". Such short-term boom and bust patterns of commercial primary products expansion occur frequently for many LAC countries, including commodities such as cattle, cocoa, coffee, grains, oil palm, shrimp, sugar and other key primary products (Aldrich et al. 2006; Barbier 2011; Borras et al. 2012; Bridge 2008; Browder et al. 2008; Caviglia-Harris et al. 2013; DeFries et al. 2010; Mann et al. 2010; Pacheco et al. 2011; Rodrigues et al. 2009; Sills and Caviglia-Harris 2008).
However, even if the commodity price boom is sustained, it can have long-term consequences for the overall pattern of economic development. Suppose that commodity price rises continue to lower real wages in the primary product sector, so that the productivity curve in Fig. 3 shifts further to the left until all the surplus labor on less favored land Lm is absorbed as L1. Any further increases in the marginal productivity of labor in primary production will have an impact on the wage rate of the economy, as the labor market equilibrium is now
$$ p\left[ {f\left( {\frac{{N_{1} }}{{L - L_{2} }}} \right) - f_{N} \left( {\frac{{N_{1} }}{{L - L_{2} }}} \right)\frac{{N_{1} }}{{L - L_{2} }}\left[ {1 - \varepsilon } \right]} \right] = K_{2} \tilde{h}^{\prime } \left( {L_{2} } \right) = w. $$
Equilibrium condition (15) indicates that labor is allocated between primary production and manufacturing until its value marginal products in the two sectors are equalized. This equilibrium also determines the nominal wage.
As a consequence, as shown in Fig. 4, any further shifting out of the value marginal product curve for L1 due to rising commodity prices will cause the nominal wage rate to rise. As p2 in the modern sector remains unchanged, real wages will rise and the demand for L2 declines. The unemployed labor will shift to the primary production sector instead. Although nominal wages have also risen for primary producers, the increase in p1 must be sufficiently large to cause real wages in the sector to fall, in order for it to absorb the additional workers \( L - L_{2} \). Similarly, resource conversion and use N1 will increase for primary production, but less than the increase in L1, so that the resource–labor ratio n1 still declines.
Specialization in primary production
However, the shift in labor from the modern sector to primary production will also lead to dynamic changes to the economy. As indicated by the dotted lines in Fig. 4, if workers leave the modern sector, then from (13), the fall in L2 causes the per capita growth rate in the modern sector to become negative γ < 0. Capital per person in the economy will now be falling, which implies a declining capital stock K2. In this case, the marginal productivity of labor in manufactures will decline, causing more labor to shift to primary production. Growth will continue to fall, primary production expands and manufacturing disappears, until the economy becomes fully specialized in primary production. This outcome is similar to the Dutch disease "resource dependency" phenomenon first identified by Matsuyama (1992), and also observed for LAC economies at various times (Astorga 2010; Barbier 2004; López 2003; Maloney 2002). In a small open economy, productivity increases in a traded agricultural or primary producing sector will cause manufacturing employment and output to contract while the primary sector expands, until complete specialization occurs.
Targeted policies for the modern sector
The equilibrium outcome for the land surplus economy depicted in Fig. 3 indicates a static modern sector displaying zero per capita growth. Without the dynamic productivity effects of positive growth, the modern sector is unable to generate the self-reinforcing labor absorption process that causes workers to shift from the rural economy to this sector, and the overall economy remains dualistic.
However, as indicated in the "Appendix", the growth outcome for the modern sector as represented by condition (13) is the result of the decentralized decisions made by competitive firms and households. Because individual producers in the modern sector do not internalize the learning-by-doing and spillover effects of capital accumulation, they base their decisions on the private marginal product of capital (see 7). In contrast, as shown in the "Appendix", the optimal growth for the modern sector should internalize learning-by-doing and knowledge spillovers across the sector. If so, then optimal modern sector growth is not determined by (13), but in accordance with the average product of capital, i.e.,
$$ \frac{{\dot{q}_{2} }}{{q_{2} }} = \frac{{\dot{k}_{2} }}{{k_{2} }} = \frac{{\dot{c}}}{c} = \gamma^{*} ,\quad \gamma^{*} = \frac{1}{\theta }\left( {p_{2} \tilde{h}\left( {L_{2}^{*} } \right) - \rho } \right). $$
A comparison between (13) and (16) indicates that \( \gamma^{*} > \gamma \); evaluated at the same level of employment, \( \gamma^{*} - \gamma = \frac{p_{2}}{\theta}L_{2}\tilde{h}^{\prime}\left( L_{2} \right) > 0 \). The optimal growth of the modern sector exceeds growth based solely on the decentralized decisions of consumers and firms.
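A back-of-the-envelope comparison, once more under the assumed technology \( \tilde{h}(L_{2}) = L_{2}^{1-\alpha} \) and hypothetical parameter values:

```python
# Decentralized versus socially optimal modern-sector growth, conditions (13) and (16),
# under the assumed technology h~(L2) = L2**(1-alpha). Illustrative parameters only.
alpha, theta, rho, p2 = 0.4, 2.0, 0.03, 1.0
L2 = 0.5   # illustrative employment level

gamma = (p2 * alpha * L2**(1 - alpha) - rho) / theta   # condition (13): private return to capital
gamma_star = (p2 * L2**(1 - alpha) - rho) / theta      # condition (16): average product of capital
print(f"decentralized gamma = {gamma:+.3f}, optimal gamma* = {gamma_star:+.3f}")
# The wedge gamma* - gamma = (p2/theta)*L2*h~'(L2) is what a capital subsidy financed
# by a lump-sum wage tax would aim to close.
```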
As indicated in the "Appendix", a targeted policy intervention could ensure that the decentralized economy of the modern sector can still attain the higher socially optimal growth rate \( \gamma^{*} \). Specifically, a lump sum tax on the wages of consumers could be used to subsidize purchases of capital goods, through mechanisms such as an investment tax credit, and thus effectively ensure that individual producers are making decisions based on the average product of capital. Such a targeted policy has the possibility of ensuring that the modern sector escapes the "zero growth" trap depicted in Fig. 3.
For example, if the growth rate \( \gamma = 0 \) in Fig. 3 corresponds to the decentralized decision of producers based on the private marginal productivity of capital (7), then a subsidy for capital purchases will raise the growth rate to \( \gamma^{*} > 0 \). However, as (16) indicates, this must correspond to a higher rate of modern sector employment \( L_{2}^{*} \). Workers will have shifted from the rural economy to the modern sector. As depicted in Fig. 5, the outcome generates a self-reinforcing process of growth and labor absorption. Positive growth in the modern sector implies that its capital–labor ratio is rising, and any corresponding increase in capital will shift out the marginal productivity curve for labor. More workers will transfer from the rural economy to the modern sector, and the growth rate γ will increase further. This self-reinforcing process ensures that the land surplus rural economy will shrink, and the modern sector expands, until eventually a fully modern economy will emerge.
Fig. 5 Targeted policies in the modern sector
This outcome is in accord with the industrial and structural transformation policies that are frequently advocated to encourage modern sector growth in developing economies, and especially in Latin America and the Caribbean (Lin 2011; McMillan and Rodrik 2011; Ocampo et al. 2009; Rodrik 2007, 2010). For example, as argued by Rodrik (2010, p. 90), "all successful countries have followed what one might call 'productivist' policies. These are activist policies aimed at enhancing the profitability of modern industrial activities and accelerating the movement of resources towards modern industrial activities", which include explicit industrial policies such as "tax and credit incentives" for investment.Footnote 3 Similarly, Ocampo et al. (2009, p. 132) maintain that, where successful, the overall goal for industrial and credit policies in developing economies has been "to induce firms 'to learn' or acquire 'specific assets' …with the objective of building up technically advanced productive capacity."
Targeted policies for traditional agriculture on marginal land
However, even if targeted policies in the modern sector succeed in raising the productivity of labor in that sector, the rising productivity does not translate into higher real wages for labor. The reason has to do with the key structural feature of the land surplus rural economy; as long as significant numbers of the rural population remain farming marginal land, the Ricardian surplus land condition (5) ensures that the unchanging land–labor ratio for traditional agriculture on less favored land will determine the nominal wage rate for all sectors of the economy. Thus, as shown in Fig. 5, although workers will shift from the rural economy to the modern sector, they will not necessarily be better off. Eventually, when a fully modern economy emerges, all workers will be paid their marginal productivity. In the transition to that outcome, however, with significant numbers of rural households still located in marginal areas, there may be a need for policies targeted at these households to raise real wages and alleviate widespread rural poverty.
The introduction of new inputs, such as fertilizers or improved varieties, and other technical improvements on marginal land may be neutral, or biased in favor of either land or labor. But if any such technical progress fails to affect the zero marginal productivity condition indicated in (5), then the land–labor ratio for production on marginal land must remain the same. However, the average productivity of labor \( g\left( n^{m} \right) \) can rise as a result of technical improvements on marginal land, and if that is the case, real wages \( w/p_{2} \) will increase. Since p2 is fixed, this implies a rise in the nominal wage.
As shown in Fig. 5, an increase in the nominal wage for the entire economy has the effect of shifting up the straight line represented by \( p^{m} g\left( {n^{m} } \right) = w \). As condition (14) indicates, there will be a new labor market equilibrium. However, as depicted in Fig. 5, if there are targeted modern sector policies in place, the implications of this new equilibrium will be different for the primary production sector as opposed to the modern sector.
The rise in the nominal wage leads to an increase in real wages \( w/p_{1} \) in commercial primary production activities. Labor employment L1 declines and the resource–labor ratio increases. From (2), N1 must also decrease as employment in primary production falls. However, in order for n1 to rise, L1 must decline more than N1. Thus, the effect of technical progress on marginal land and the consequent rise in wages is a contraction in export-oriented primary production and employment.
Without modern sector expansion, employment in this sector would also contract. However, as outlined in the previous section and illustrated in Fig. 5, targeted policies for the modern sector will cause the marginal productivity of labor (the \( p_{2} K_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right) \) curve in Fig. 5) to shift out, and thus some increase in L2. This will lead to positive growth in the modern sector and a rising capital–labor ratio, which means that the resulting increase in capital will cause the marginal productivity curve for labor to shift out continuously. Although, as shown in Fig. 5, rising nominal and real wages may reduce some of the labor absorption caused by the expanding modern sector, as long as the productivity curve for L2 shifts out, there will be some labor absorption by the sector. Eventually the self-reinforcing process of increasing growth, capital investment and labor employment in the modern sector will induce more workers to transfer from the rural economy to the modern sector, and the growth rate γ will increase further.
The self-reinforcing process of dynamic growth in the modern sector ensures once again that the land surplus rural economy will shrink, and modern production activities expand, until eventually a fully modern economy will emerge.Footnote 4 However, by targeting investments and policies to improve the livelihoods and productivity of traditional agriculture on marginal land, this process leads to higher real wages and reductions in rural poverty in the interim period before the emergence of the fully modern economy.
Such an outcome supports recent efforts to target investments directly to improve the livelihoods of the rural populations in remote and fragile environments (World Bank 2008). For example, in Ecuador poverty maps have been developed to target public investments to geographically defined sub-groups of the population according to their relative poverty status, which could substantially improve the performance of the programs in terms of poverty alleviation (Elbers et al. 2007). A World Bank study that examined 122 targeted programs in 48 developing countries confirms their effectiveness in reducing poverty, if they are designed properly (Coady et al. 2004). The benefits are even larger when programs, such as PROGRESA in Mexico, were successful in employing second-round targeting to identify households in less favored locations and thus reducing leakages to non-poor households (Higgins et al. 2010, p. 20).
Appropriate targeting of research, extension and agricultural development has been shown to improve the livelihoods of the poor, increase employment opportunities and even reduce environmental degradation (Barbier 2010, 2012; Carr 2009a, b; Caviglia-Harris and Harris 2008; Coxhead et al. 2002). Empirical evidence of technical change, increased public investments and improved extension services in remote regions indicates that any resulting land improvements that do increase the value of homesteads can have a positive effect on both land rents and reducing agricultural expansion (Bellon et al. 2005; Coxhead et al. 2002; Sills and Caviglia-Harris 2008).
Improving market integration for the rural poor may also depend on targeted investments in a range of public services and infrastructure in remote and ecologically fragile regions, such as extension services, roads, communications, protection of property, marketing services and other strategies to improve smallholder accessibility to larger markets (World Bank 2008). Targeting agricultural research and extension services to poor farmers combined with investments in rural road infrastructure to improve market access appears to generate positive development and poverty alleviation benefits (Bellon et al. 2005; Pattanayak et al. 2003). For example, in Mexico, poverty mapping was found to enhance the targeting of maize crop breeding efforts to poor rural communities in less favorable and remote areas (Bellon et al. 2005).
The dual economy model with surplus rural land yields two predictions. First, the concentration of rural populations on less favored agricultural land is a barometer of economy-wide development. Economies with a larger share of their rural populations on marginal land are likely to be developing less rapidly and thus display lower rates of long-run economic growth compared to economies with smaller concentrations of rural households on this type of land. On the other hand, economies with a greater share of their workforce employed in industry and other modern sector activities are likely to develop more rapidly, and hence have higher rates of long-run economic growth. Both of these predictions are examined empirically for the economies of Latin America and Caribbean.
The estimation to test these predictions is based on the standard empirical neoclassical growth framework for conditional convergence. This approach relates the real per capita growth rate over a given period to an initial level of per capita real gross domestic product (GDP), plus a variety of control and environmental variables representing international openness, governance, and prevailing human, physical and natural capital endowments (Barro and Sala-I-Martin 2004). These factors have been found to influence long-term growth in Latin America since 1900 (Astorga 2010). The possible endogeneity of these explanatory variables was taken into account using lagged values as instruments. For each variable, the instrument consists of the average over 5 years preceding 1990. The exceptions include (log) per capita real GDP in 1990, the governance variables, which were averaged over 1996–2011, and the dummy variable for small island developing states. The governance variables are from the Worldwide Governance Indicators (Kaufmann et al. 2010), the UN classification of small island developing states was employed to create the dummy variable (see http://www.un.org/special-rep/ohrlls/sid/list.htm), and the remaining variables are from the World Development Indicators (World Bank 2013).
To test the predictions of the model of this paper, the growth analysis is further extended to include the share of the rural population concentrated on less favored agricultural land and the share of the workforce employed in industry as additional explanatory variables.Footnote 5 The latter variable is from World Bank (2013), averaged over the 5 years preceding 1990 to avoid endogeneity problems.
Less favored agricultural land consists of irrigated land on terrain greater than 8 % median slope; rainfed land with a length of growing period (LGP) of more than 120 days but either on terrain greater than 8 % median slope or with poor soil quality; semi-arid land (land with LGP 60–119 days); and arid land (land with LGP < 60 days). These various land areas were determined by employing in Arc GIS 10.1 the datasets from the FAO Global Agro-Ecological Zones (GAEZ) Data Portal version 3 (Available online: http://gaez.fao.org/) combined with national boundaries from the Gridded Population of the World, Version 3 (GPWv3) of the Center for International Earth Science Information Network (CIESIN) and Centro Internacional de Agricultura Tropical (CIAT). Agricultural land (% of land area) data were obtained from the World Development Indicators (World Bank 2013), and rural populations determined from the rural–urban extent dataset that was published as part of the CIESIN Global Rural Urban Mapping Project (GRUMPv1). Use of these spatial datasets enabled the measurement of the share of rural population on less favored agricultural land for the 35 LAC countries in 2000, the mid-point of the 1990–2011 estimation period.Footnote 6
Because a lagged value for years preceding 1990 could not be created for the share of rural population on marginal land, this variable is likely to be endogenous in the regression for average annual growth over 1990–2011. An OLS regression and the consequent Hausman specification test for simultaneity confirm the possibility of endogeneity at the 5 % significance level. Instrumental variables three-stage least squares (3SLS) estimation is used to correct for this problem. In this procedure, it is assumed that the structural system includes both the growth regression and a second equation for the share of rural population on less favored agricultural land. Based on the insights of the theoretical model, the explanatory variables in the latter equation are factors that explain the prominence of an agricultural-based rather than a modern economy: arable land per capita, gross fixed capital formation as a percentage of GDP, agricultural value added as a share of GDP and the small island developing states dummy. In the 3SLS procedure, the instruments of the first stage include all the exogenous variables of the two structural equations. Three additional exogenous instruments, also averaged over 5 years preceding 1990, were also included: primary school enrolment rate, secondary school enrolment rate and land area. The second and third stages involve the seemingly unrelated linear regression equations (SURE) procedure, employing two-step (or iterative) feasible generalized least squares (GLS) that accounts for contemporaneous correlation in the errors across equations.
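The structure of this estimation can be sketched schematically as below, using synthetic placeholder data; the IV3SLS formula interface of the linearmodels package is assumed here, and all variable names and generated values are illustrative rather than the paper's dataset.

```python
# Schematic 3SLS sketch of the two-equation system described above, on synthetic data.
# The IV3SLS interface of the `linearmodels` package is assumed; names are placeholders.
import numpy as np
import pandas as pd
from linearmodels.system import IV3SLS

rng = np.random.default_rng(0)
n = 35                                      # one observation per LAC economy
df = pd.DataFrame({
    "lngdp90": rng.normal(8.0, 0.8, n),     # log initial real GDP per capita, 1990
    "ind_emp": rng.uniform(10, 35, n),      # industry share of employment (lagged)
    "arable":  rng.uniform(0.05, 0.6, n),   # arable land per capita
    "gfcf":    rng.uniform(15, 30, n),      # gross fixed capital formation, % of GDP
    "agva":    rng.uniform(3, 25, n),       # agriculture value added, % of GDP
    "island":  rng.integers(0, 2, n),       # small island developing state dummy
})
df["rural_lfa"] = 30 - 0.5 * df["gfcf"] + 0.8 * df["agva"] + rng.normal(0, 3, n)
df["growth"] = (5 - 0.3 * df["lngdp90"] - 0.05 * df["rural_lfa"]
                + 0.04 * df["ind_emp"] + rng.normal(0, 0.5, n))

# Growth equation with the endogenous marginal-land share instrumented by the
# agrarian-structure variables; the second equation explains that share directly.
formulas = {
    "growth":    "growth ~ 1 + lngdp90 + ind_emp + island"
                 " + [rural_lfa ~ arable + gfcf + agva]",
    "rural_lfa": "rural_lfa ~ 1 + arable + gfcf + agva + island",
}
res = IV3SLS.from_formula(formulas, df).fit(cov_type="unadjusted")
print(res)
```

The actual estimation would replace the synthetic DataFrame with the country-level variables and lagged instruments described above.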
The analysis was conducted for long-term annual average growth over the 1990–2011 period for the 35 LAC economies listed in Table 1.Footnote 7 The final 3SLS regression results for long-run growth for LAC economies and the percentage of rural population on less favored agricultural land are depicted in Table 2. Average annual growth over 1990–2011 is significantly reduced across LAC countries as the share of rural population on marginal land increases. On the other hand, economies with a greater share of employment in industry display on average higher long-run growth. These results seem to confirm the two main predictions of the dual economy model.
Table 2 3SLS estimation of long-run growth, Latin America and Caribbean, 1990–2011
Both agricultural value added and investment share of GDP appear to have an indirect impact on long-run growth via the concentration of rural population on marginal land. Gross fixed capital formation as a percentage of GDP seems to reduce this concentration, which translates into a positive influence on long-run growth. However, a higher share of agriculture in GDP leads to more rural people located on less favored agricultural land, which reduces long-run growth. Small island developing states appear to have fewer people on marginal land, which is good for growth, but these economies also display lower long-run growth rates compared to other LAC countries. Political stability and absence of violence or terrorism is correlated with higher growth in Latin America and the Caribbean.
Finally, the empirical results also show that the concentration of rural populations on marginal land and the industrial share of total employment have an important influence on how fast each LAC economy "converges" to its long-run steady state. Table 2 indicates that the growth path of LAC economies displays conditional convergence. That is, growth over 1990–2011 is inversely related to the initial level of real GDP per capita in 1990. However, this relationship is clearly conditioned on how much of the rural population is concentrated on less favored agricultural land and on the share of industry in total employment.
The model of this paper is based on two important stylized facts concerning the rural sector of LAC economies. First, many economies have a "residual" pool of rural poor located on abundant but less favored agricultural land, and second, considerable land use conversion and resource exploitation are occurring through expansion of a commercial primary products sector. These Ricardian land surplus conditions can lead to a permanent "dualistic" outcome in the economy, where the modern sector competes with the commercial primary production sector for available labor, with marginal land absorbing the residual. In addition, the economy is vulnerable to primary product price booms and productivity increases, which will cause manufacturing employment and output to contract while the primary sector expands, until complete specialization occurs.
The empirical analysis of long-run growth over 1990–2011 for 35 LAC economies confirms two predictions of the model. As the share of rural population on less favored agricultural land increases, long-run growth diminishes. In contrast, a greater share of employment in industry is consistent with higher long-run growth.
As the paper has shown, some of the negative outcomes predicted by the dual economy model can be avoided by implementing targeted modern sector policies, as suggested by some (Lin 2011; McMillan and Rodrik 2011; Ocampo et al. 2009; Rodrik 2007, 2010). However, even if such policies succeed in raising the productivity of modern sector workers, the rising productivity does not translate into higher real wages for labor. Moreover, before the emergence of the fully modernized economy, there is likely to be an interim period during which poor rural households will remain on marginal land. As long as this residual pool of labor exists, workers shifting from the rural economy to the modern sector will not necessarily be better off. During this transition period, targeted policies are required to raise real wages and alleviate widespread rural poverty in marginal areas. Such policies include investments to improve the livelihoods of the rural poor in remote and fragile environments, appropriate research, extension and agricultural development for marginal land, and better market integration through extension services, roads, communications, protection of property, marketing services and other strategies to improve smallholder accessibility to larger markets.
Any policy strategy targeted at improving the livelihoods of the rural poor located in remote and fragile environments should be assessed against an alternative strategy, which is to encourage greater out-migration from these areas. As pointed out by Lall et al. (2006, p. 48), rural development is essentially an indirect way of deterring migration to cities, yet because of the costliness of rural investments, "policies in developing countries are increasingly more concerned with influencing the direction of rural to urban migration flows—e.g. to particular areas—with the implicit understanding that migration will occur anyway and thus should be accommodated at as low a cost as possible." Rarely, however, are the two types of policy strategies, investment in poor rural areas and targeted out-migration, directly compared. In addition, only recently have the linkages between rural out-migration, smallholder agriculture and land use change and degradation in remote and marginal areas been analyzed (Mendola 2008, 2012; Gray 2009; Greiner and Sakdapolrak 2013; VanWey et al. 2012). Researching such linkages will become increasingly important to understand the conditions under which policies to encourage greater rural out-migration should be preferred to a targeted strategy to overcome poverty in remote and fragile areas. It may be, as argued by the World Bank (2008, p. 49), that "until migration provides alternative opportunities, the challenge is to improve the stability and resilience of livelihoods in these regions". As this paper has pointed out, this may become a critical feature in the design of structural transformation policies to overcome widespread rural poverty in many LAC economies.
For an updated treatment of these conditions, see Barbier (in press).
The case \( \varepsilon \left( N_{1} \right) = 1 \) implies \( \partial L_{1}/\partial N_{1} = L_{1}/N_{1} = z^{\prime}\left( N_{1} \right) \), which is a violation of the convex properties of (2). It also corresponds to the case first suggested by Domar (1970), where natural resources and land are so abundant that they essentially comprise a limitless "frontier" and can be appropriated proportionately with increases in labor. In contrast, if \( \varepsilon \left( N_{1} \right) = 0 \), then resources are no longer abundantly available but fixed in supply.
As outlined by Rodrik (2007, pp. 117–118), government could implement a broad range of incentive programs, including subsidizing costs of "self discovery" of profitable new products, developing mechanisms for higher-risk finance, internalizing coordination externalities, public R&D, subsidizing general technical training, and taking advantage of nationals abroad.
Note that Fig. 5 depicts an interim period where residual labor still has to be absorbed on marginal land. Although modern sector employment has expanded, it cannot absorb all the labor released from primary production. Some of the resulting unemployed labor must, therefore, be absorbed through greater traditional agricultural cultivation of marginal land. As \( L^{m} \) increases, \( N^{m} \) must rise proportionately in order to keep the land–labor ratio fixed.
To test the possibility that booms and busts in primary products trade may impact, at least temporarily, an economy's overall development, the share of primary products in total merchandise exports was also added as an explanatory variable in different versions of the growth analysis. However, the estimated coefficient of this variable is insignificant, and its inclusion does not improve the robustness of the estimation.
I am grateful to Jacob P. Hochard for assistance in determining this variable. The spatial data sets allowed estimates of the percentage of the rural population on less favored agricultural land for 2000, 2005 and 2010. The 2010 estimate was employed in Table 1. The 2000 estimate was used in the long-run growth regression as it is the mid-point of the 1990–2011 time period of the regression.
Although other governance indicators from the Worldwide Governance Indicators, such as control of corruption, government effectiveness, regulatory quality, rule of law, and voice and accountability, were employed in the estimation, political stability and absence of violence was the most consistently significant variable in the long-run growth estimation. Unfortunately, collinearity problems prevented the inclusion of more than one governance indicator in the regression.
Aide TM, Clark ML, Grau HR, López-Carr D, Levy MA, Redo D, Bonilla-Moheno M, Riner G, Andrade-Núñez MJ, Muñiz M (2013) Deforestation and reforestation of Latin America and the Caribbean (2001–2010). Biotropica 45:262–271
Aldrich S, Walker R, Arima E, Caldas M (2006) Land-cover and land-use change in the Brazilian Amazon: smallholders, ranchers, and frontier stratification. Econ Geogr 82:265–288
Astorga P (2010) A century of growth in Latin America. J Dev Econ 92:232–243
Banerjee AV, Duflo E (2007) The economic lives of the poor. J Econ Perspect 21(1):141–168
Barbier EB (2004) Agricultural expansion, resource booms and growth in Latin America: implications for long-run economic development. World Dev 32:137–157
Barbier EB (2005) Natural resources and economic development. Cambridge University Press, Cambridge
Barbier EB (2010) Poverty, development and environment. Environ Dev Econ 15:635–660
Barbier EB (2011) Scarcity and frontiers: how economies have developed through natural resource exploitation. Cambridge University Press, Cambridge
Barbier EB (2012) Natural capital, ecological scarcity and rural poverty. In: Policy research working paper no. 6232. The World Bank, Washington, DC
Barbier EB (2013) Structural change, dualism and economic development: the role of the vulnerable poor on marginal lands. In: Policy research working paper no. 6456. The World Bank, Washington, DC
Barbier EB (in press) Land use and sustainable economic development: developing world. Chapter 16. In: Duke JM, Wu J (eds) The Oxford handbook of land economics. Oxford University Press, Oxford
Barro RJ, Sala-I-Martin X (2004) Economic growth, 2nd edn. MIT Press, Cambridge
Bellon MR, Hodson D, Bergvinson D, Beck D, Martinez-Romero E, Montoya Y (2005) Targeting agricultural research to benefit poor farmers: relating poverty mapping to maize environments in Mexico. Food Policy 30:476–492
Borras SM, Franco JC, Gómez S, Kay C, Spoor M (2012) Land grabbing in Latin America and the Caribbean. J Peasant Stud 39:845–872
Boucher D, Elias P, Lininger K, May-Tobin C, Roquemore S, Saxon E (2011) The root of the problem: What's driving tropical deforestation today?. Union of Concerned Scientists, Cambridge
Bridge G (2008) Global production networks and the extractive sector: governing resource-based development. J Econ Geogr 8:389–419
Browder J, Pedlowski M, Walker R, Wynne R, Summers P, Abad A, Becerra-Cordoba N, Mil-Homens J (2008) Revisiting theories of frontier expansion in the Brazilian Amazon: a survey of colonist farming population in Rondônia's post-frontier, 1992–2002. World Dev 36:1469–1492
Carr D (2009a) Population and deforestation: why rural migration matters. Prog Hum Geogr 33:355–378
Carr D (2009b) Population and deforestation: why rural migration matters. Prog Hum Geogr 33:355–378
Caviglia-Harris JL, Harris D (2008) Integrating survey and remote sensing data to analyze land use scale: insights from agricultural households in the Brazilian Amazon. Int Reg Sci Rev 31:115–137
Caviglia-Harris J, Sills EO, Mullan K (2013) Migration and mobility on the Amazon frontier. Popul Environ 34:338–369
Chomitz K, Buys P, De Luca G, Thomas T, Wertz-Kanounnikoff S (2007) At loggerheads? Agricultural expansion, poverty reduction, and environment in the tropical forests. The World Bank, Washington, DC
Chronic Poverty Research Centre (CPRC) (2004) Chronic poverty report 2004–5. CPRC, University of Manchester, Manchester
Coady D, Grosh M, Hoddinott J (2004) Targeting outcomes redux. World Bank Res Obs 19(1):61–85
Comprehensive Assessment of Water Management in Agriculture (2007) Water for food, water for life: a comprehensive assessment of water management in agriculture. Earthscan and International Water Management Institute, Colombo, Sri Lanka, London
Coxhead I, Shively GE, Shuai X (2002) Development policies, resource constraints, and agricultural expansion on the Philippine land frontier. Environ Dev Econ 7:341–364
DeFries R, Rudel T, Uriarte M, Hansen M (2010) Deforestation driven by urban population growth and agricultural trade in the twenty-first century. Nat Geosci 3:178–181
Deininger K, Byerlee D (2012) The rise of large farms in land abundant countries: do they have a future? World Dev 40:701–714
Domar E (1970) The causes of slavery or serfdom: a hypothesis. J Econ Hist 30(1):18–32
Elbers C, Fujii T, Lanjouw P, Özler B, Yin W (2007) Poverty alleviation through geographic targeting: how much does disaggregation help? J Dev Econ 83:198–213
Etter A, McAlpine C, Possingham H (2008) Historical patterns and drivers of landscape change in Colombia since 1500: a regionalized spatial approach. Ann Assoc Am Geogr 98:2–23
Food and Agricultural Organization (FAO) of the United Nations (2006) Global forest resources assessment 2005, main report. Progress towards sustainable forest management. FAO Forestry paper 147. FAO, Rome
Gibbs HK, Ruesch AS, Achard F, Clayton MK, Holmgren P, Ramankutty N, Foley JA (2010) Tropical forests were the primary sources of new agricultural lands in the 1980s and 1990s. Proc Natl Acad Sci 107:16732–16737
Gray CL (2009) Rural out-migration and smallholder agriculture in the southern Ecuadorian Andes. Popul Environ 30:193–217
Greiner C, Sakdapolrak P (2013) Rural-urban migration, agrarian change, and the environment in Kenya: a critical review of the literature. Popul Environ 34(4):524–533
Hansen B (1979) Colonial economic development with unlimited supply of land: a Ricardian case. Econ Dev Cult Chang 27(4):611–627
Higgins K, Bird K, Harris D (2010) Policy responses to the spatial dimensions of poverty. ODI working paper 328. Overseas Development Institute, London
International Fund for Agricultural Development (IFAD) (2010) Rural poverty report 2011. New realities, new challenges: new opportunities for tomorrow's generation. IFAD, Rome
Jalan J, Ravallion M (1997) Spatial poverty traps? Policy research working paper 1798. World Bank, Washington, DC
Jiménez JP, Tromben V (2006) Fiscal policy and the commodities boom: the impact of higher prices for non-renewables in Latin America and the Caribbean. CEPAL Rev 90:59–84
Kaufmann D, Kraay A, Mastruzzi M (2010) The worldwide governance indicators: methodology and analytical issues. Policy research working paper no. 5430. The World Bank, Washington, DC
Lall SV, Selod H, Shalizi Z (2006) Rural-urban migration in developing countries: a survey of theoretical predictions and empirical findings. World bank policy research working paper 3915, May 2006. The World Bank, Washington, DC
Lambin EF, Meyfroidt P (2011) Global land use change, economic globalization, and the looming land scarcity. Proc Natl Acad Sci 108:3465–3472
Lin JY (2011) New structural economics: a framework for rethinking development. World Bank Res Obs 26:193–221
López RE (2003) The policy roots of socioeconomic stagnation and environmental implosion: Latin America 1950–2000. World Dev 31(2):259–280
Maloney WF (2002) Missed opportunities: innovation and resource-based growth in Latin America. Economia 3:111–167
Mann ML, Kaufmann RK, Bauer D, Gopal S, Del Carmen Vera-Diaz M, Nepstad D, Merry F, Kallay J, Amacher GS (2010) The economics of cropland conversion in Amazonia: the importance of agricultural rent. Ecol Econ 69:1503–1509
Matsuyama K (1992) Agricultural productivity, comparative advantage, and economic growth. J Econ Theory 58:317–334
McMillan M, Rodrik D (2011) Globalization, structural change, and productivity growth. NBER working paper no. 17143. National Bureau of Economic Research, Boston, MA
Mendola M (2008) Migration and technological change in rural households: complements or substitutes? J Dev Econ 85:150–175
Mendola M (2012) Review Article: rural out-migration and economic development at origin: a review of the evidence. J Int Dev 24:102–122
Mueller B (1997) Property rights and the evolution of a frontier. Land Econ 73:42–57
Ocampo JA, Rada C, Taylor L (2009) Growth and policy in developing countries: a structuralist approach. Columbia University Press, New York
Pacheco P, Aguilar-Støen M, Börner J, Etter A, Putzel L, del Carmen Vera Diaz M (2011) Landscape transformation in tropical Latin America: assessing trends and policy implications for REDD+. Forests 2:1–29
Pattanayak SK, Mercer DE, Sills E, Yang J-C (2003) Taking stock of agroforestry adoption studies. Agrofor Syst 57:173–186
Pichón F (1997) Colonist land-allocation decisions, land use, and deforestation in the Ecuadorian frontier. Econ Dev Cult Chang 45:707–744
Rodrigues A, Ewers R, Parry L, Souza C, Verissimo A, Balmford A (2009) Boom-and-bust development patterns across the Amazonian deforestation frontier. Science 324:1435–1437
Rodrik D (2007) One economics, many recipes: globalization, institutions and economic growth. Princeton University Press, Princeton, NJ
Rodrik D (2010) Making room for China in the world economy. Am Econ Rev Papers Proc 100:89–93
Rudel T (2007) Changing agents of deforestation: from state-initiated to enterprise driven process, 1970–2000. Land Use Policy 24:35–41
Sills E, Caviglia-Harris JL (2008) Evolution of the Amazonian frontier: land values in Rondônia, Brazil. Land Use Policy 26:55–67
Solís D, Bravo-Ureta BE, Quiroga RE (2009) Technical efficiency among peasant farmers participating in natural resource management programmes in Central America. J Agric Econ 60:202–219
van der Ploeg R (2011) Natural resources: curse or blessing? J Econ Lit 49:366–420
VanWey LK, Guedes GR, D'Antona AO (2012) Out-migration and land-use change in agricultural frontiers: insights from Altamira settlement project. Popul Environ 34:44–68
World Bank (2003) World development report 2003. World Bank, Washington, DC
World Bank (2008) Word development report 2008: agricultural development. The World Bank, Washington, DC
World Bank (2013) Word development indicators. The World Bank, Washington, DC. http://databank.worldbank.org/data/views/variableselection/selectvariables.aspx?source=world-development-indicators
I am grateful for comments by David R. Heres, Jacob P. Hochard, Juan Rosellon and an anonymous referee.
Department of Economics and Finance, University of Wyoming, Laramie, WY 82071, USA
Edward B. Barbier & John S. Bugas
Correspondence to Edward B. Barbier.
An erratum to this article can be found at http://dx.doi.org/10.1007/s40503-014-0007-1.
Appendix: Growth dynamics in the modern sector
Decentralized solution
In the decentralized economy, decisions are made by competitive firms and households. It is assumed that the representative household can choose to work either in the modern sector or in commercial primary production. Households employed in traditional agriculture on less favored land consume all production within that sector and do not accumulate assets. Thus, the relevant population represented by the household comprises \( L_{1} + L_{2} \). With no population growth and free labor mobility, this population is determined by the labor market equilibrium, which is in turn based on the predetermined nominal wage \( w \) in the economy (from condition 5). Hence, if \( M \) is known, then per capita imports \( m \) are also determined exogenously to the household's welfare-maximizing decision.
The representative household seeks to
$$ \mathop {\text{Max}}\limits_{c\left( t \right)} {\mkern 1mu} U = \int\limits_{0}^{\infty } {\left[ {\frac{{\left( {c\left( t \right)^{1 - \theta } + m^{1 - \theta } } \right) - 1}}{1 - \theta }} \right]e^{ - \rho t} {\text{d}}t} $$
$$ {\text{s}} . {\text{t}} .\,\dot{a} = ra + w - c,\quad a\left( 0 \right) = a_{0} , $$
which yields the following optimization and transversality conditions, respectively
$$ \frac{{\dot{c}}}{c} = \frac{1}{\theta }\left( {r - \rho } \right) $$
$$ \mathop {\lim }\limits_{t \to \infty } \,a\left( t \right)e^{ - \int_{0}^{t} {r\left( v \right){\text{d}}v} } = 0. $$
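These two conditions follow from the standard current-value Hamiltonian of the household problem; the short derivation below is an added sketch, not part of the original appendix, and introduces the shadow value \( \lambda \) of assets as an auxiliary variable.
$$ \mathcal{H} = \frac{c^{1 - \theta } + m^{1 - \theta } - 1}{1 - \theta } + \lambda \left( {ra + w - c} \right),\qquad \frac{\partial \mathcal{H}}{\partial c} = c^{ - \theta } - \lambda = 0,\qquad \dot{\lambda } = \rho \lambda - \frac{\partial \mathcal{H}}{\partial a} = \left( {\rho - r} \right)\lambda . $$
Differentiating \( c^{ - \theta } = \lambda \) with respect to time gives \( - \theta \dot{c}/c = \dot{\lambda }/\lambda = \rho - r \), which is (19). The transversality condition (20) is the requirement that the discounted value of assets, \( \lambda \left( t \right)e^{ - \rho t} a\left( t \right) \), vanish as \( t \to \infty \); with \( r \) constant, \( \lambda \left( t \right)e^{ - \rho t} \) is proportional to \( e^{ - \int_{0}^{t} r\left( v \right){\text{d}}v} \), which gives the form stated in (20).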
Setting \( a = k_{2} \) in (18), and using the marginal productivity conditions (7) and (8) to substitute for \( r \) and \( w \), yields
$$ \dot{k}_{2} = p_{2} \left[ {\tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right)} \right]k_{2} + p_{2} K_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right) - c = p_{2} \tilde{h}\left( {L_{2} } \right)k_{2} - c, $$
which is condition (11) for the accumulation of capital per person in the economy.
Similarly, using (7) in (19)
$$ \frac{{\dot{c}}}{c} = \frac{1}{\theta }\left( {p_{2} \left[ {\tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right)} \right] - \rho } \right) \equiv \gamma . $$
Condition (22) defines the consumption growth path of the decentralized economy of the modern sector, which is constant if \( L_{2} \) is unchanging, and is positive if \( p_{2} \left[ {\tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right)} \right] > \rho \). If \( \gamma \) is constant, then the per capita consumption path is \( c\left( t \right) = c\left( 0 \right)e^{\gamma t} \). Substituting the latter expression into (17) gives
$$ {\mkern 1mu} U = \int\limits_{0}^{\infty } {\left[ {\frac{{c\left( 0 \right)^{1 - \theta } e^{{\left( {1 - \theta } \right)\gamma t}} + m^{1 - \theta } - 1}}{1 - \theta }} \right]e^{ - \rho t} {\text{d}}t} . $$
The integral (23) diverges unless \( \rho > \left( {1 - \theta } \right)\gamma \), which, along with (22), implies that \( \dot{c}/c = \gamma > 0 \) iff
$$ p_{2} \left[ {\tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right)} \right] > \rho > \frac{1 - \theta }{\theta }\left( {p_{2} \left[ {\tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right)} \right] - \rho } \right). $$
Substituting \( c\left( t \right) = c\left( 0 \right)e^{\gamma t} \) into (21) yields \( \dot{k}_{2} = p_{2} \tilde{h}\left( {L_{2} } \right)k_{2} - c\left( 0 \right)e^{\gamma t} \). Letting \( \beta = p_{2} \tilde{h}\left( {L_{2} } \right) \), the solution to this differential equation for capital per person is
$$ k_{2} \left( t \right) = b\,e^{\beta t} + \frac{c\left( 0 \right)}{\varphi }e^{\gamma t} ,\quad \varphi = \beta - \gamma = p_{2} \tilde{h}\left( {L_{2} } \right) - \gamma , $$
where b is an unknown constant. Condition (24) implies that φ > 0.
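As a quick check, added here and not part of the original text, differentiating (25) confirms that it solves the capital accumulation equation: \( \dot{k}_{2} = \beta b\,e^{\beta t} + \frac{\gamma \,c\left( 0 \right)}{\varphi }e^{\gamma t} = \beta k_{2} - \frac{\left( {\beta - \gamma } \right)c\left( 0 \right)}{\varphi }e^{\gamma t} = \beta k_{2} - c\left( 0 \right)e^{\gamma t} \), since \( \varphi = \beta - \gamma \).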
Using \( a = k_{2} \) and (25) in the transversality condition (20)
$$ \mathop {\lim }\limits_{t \to \infty } \,k_{2} \left( t \right)e^{ - rt} = \mathop {\lim }\limits_{t \to \infty } \left\{ {b\,e^{{\left( {\beta - r} \right)t}} + \frac{c\left( 0 \right)}{\varphi }e^{{\left( {\gamma - r} \right)t}} } \right\} = 0. $$
From (25) and (7), \( \beta - r = p_{2} L_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right) > 0 \), so \( e^{{\left( {\beta - r} \right)t}} \) in (26) grows without bound as \( t \to \infty \). From (22) and (7), \( \gamma - r = \frac{1 - \theta }{\theta }p_{2} \left[ {\tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{\prime } \left( {L_{2} } \right)} \right] - \frac{\rho }{\theta } \), which from the lower bound on \( \rho \) in condition (24) implies that \( \gamma - r < 0 \). As \( c(0) \) is finite and \( \varphi > 0 \), the second term inside the curly brackets in (26) converges toward zero. Hence, the transversality condition (26) can hold only if the constant \( b \) is zero. Equation (25), therefore, implies
$$ c\left( t \right) = \varphi k_{2} \left( t \right),\quad \varphi = p_{2} \tilde{h}\left( {L_{2} } \right) - \gamma , $$
which is the same as (12) in the text. Along the path of the decentralized modern sector, consumption per capita is proportional to capital per person. Given that \( q_{2} = \tilde{h}\left( {L_{2} } \right)k_{2} \), then it follows that
$$ \frac{{\dot{q}_{2} }}{{q_{2} }} = \frac{{\dot{k}_{2} }}{{k_{2} }} = \frac{{\dot{c}}}{c} = \gamma ,\quad \gamma = \frac{1}{\theta }\left( {p_{2} \left[ {\tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right)} \right] - \rho } \right), $$
which is (13) in the text. Per capita consumption, capital and output grow at the same rate in the decentralized economy of the modern sector.
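The equality of the three growth rates in (28) is immediate once \( L_{2} \) is constant; the short step is spelled out here for clarity. Since \( \varphi \) and \( \tilde{h}\left( {L_{2} } \right) \) are then constants,
$$ \frac{{\dot{c}}}{c} = \frac{{\varphi \dot{k}_{2} }}{{\varphi k_{2} }} = \frac{{\dot{k}_{2} }}{{k_{2} }},\qquad \frac{{\dot{q}_{2} }}{{q_{2} }} = \frac{{\tilde{h}\left( {L_{2} } \right)\dot{k}_{2} }}{{\tilde{h}\left( {L_{2} } \right)k_{2} }} = \frac{{\dot{k}_{2} }}{{k_{2} }}, $$
and the common ratio equals \( \gamma \) by (22).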
Optimal solution
Unlike an individual producer, a benevolent social planner takes into account that each firm's increase in its capital stock adds to the aggregate capital of the modern sector via knowledge spillovers. That is, the social planner maximizes (17) subject to (21) and \( k\left( 0 \right) = k_{0} \), which yields the following optimization and transversality conditions
$$ \frac{{\dot{c}}}{c} = \frac{1}{\theta }\left( {p_{2} \tilde{h}\left( {L_{2} } \right) - \rho } \right) \equiv \gamma^{*} ,\qquad \mathop {\lim }\limits_{t \to \infty } \,k\left( t \right)e^{ - \int_{0}^{t} {r\left( v \right){\text{d}}v} } = 0. $$
Condition (29) indicates that the optimal consumption growth path of the economy is constant if \( L_{2} \) is unchanging, and is positive if \( p_{2} \tilde{h}\left( {L_{2} } \right) > \rho \). If \( \gamma^{*} \) is constant, then the optimal per capita consumption path is \( c\left( t \right) = c\left( 0 \right)e^{{\gamma^{*} t}} \). This result implies that the integral (23) diverges unless \( \rho > \left( {1 - \theta } \right)\gamma^{*} \), which, along with (29), implies that \( \dot{c}/c = \gamma^{*} > 0 \) iff
$$ p_{2} \tilde{h}\left( {L_{2} } \right) > \rho > \frac{1 - \theta }{\theta }\left( {p_{2} \tilde{h}\left( {L_{2} } \right) - \rho } \right). $$
Following the same method as for the decentralized economy of the modern sector, the solution to (21) is
$$ k_{2} \left( t \right) = b\,e^{\beta t} + \frac{c\left( 0 \right)}{{\varphi^{*} }}e^{{\gamma^{*} t}} ,\quad \varphi^{*} = \beta - \gamma^{*} = p_{2} \tilde{h}\left( {L_{2} } \right) - \gamma^{*} . $$
Condition (30) implies that \( \varphi^{*} > 0 \) and the transversality condition in (29) ensures that \( b = 0 \). It, therefore, follows from (31) that
$$ c\left( t \right) = \varphi^{*} k_{2} \left( t \right),\,\,\,\,\varphi^{*} = p_{2} \tilde{h}\left( {L_{2} } \right) - \gamma^{*} . $$
Along the optimal path for the modern sector, consumption per capita is proportional to capital per person. As \( q_{2} = \tilde{h}\left( {L_{2} } \right)k_{2} \), then it follows that optimal modern sector growth is determined by
$$ \frac{{\dot{q}_{2} }}{{q_{2} }} = \frac{{\dot{k}_{2} }}{{k_{2} }} = \frac{{\dot{c}}}{c} = \gamma^{*} ,\quad \gamma^{*} = \frac{1}{\theta }\left( {p_{2} \tilde{h}\left( {L_{2}^{*} } \right) - \rho } \right), $$
which is (16) in the text. Optimal growth of per capita consumption, capital and output occurs at the same rate in the modern sector, and the magnitude of this growth rate is determined by total employment in the sector \( L_{2}^{*} \). As \( \gamma^{*} > \gamma \), the optimal growth of the modern sector exceeds growth in the sector based solely on the decentralized decisions of consumers and firms. Because the social planner takes into account learning-by-doing and knowledge spillovers across the sector, the optimal growth rate is determined in accordance with the average product of capital \( \tilde{h}\left( {L_{2}^{*} } \right) \) whereas the decentralized solution takes into account only the private marginal product of capital \( \tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{'} \left( {L_{2} } \right) \). Thus, the growth rate generated by decentralized decision making in the modern sector is too low.
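A small numerical illustration of this gap is sketched below. It is not from the paper: the functional form \( \tilde{h}\left( {L_{2} } \right) = AL_{2}^{\alpha } \) and every parameter value (A, alpha, p2, L2, theta, rho) are assumptions chosen only to satisfy condition (24), so the output illustrates the ranking \( \gamma^{*} > \gamma \) rather than any calibrated magnitude.

# Hypothetical illustration of the decentralized vs. optimal growth rates
# gamma and gamma* from the appendix. The form h(L2) = A * L2**alpha and all
# parameter values below are assumptions, not taken from the paper.
A, alpha = 0.05, 0.4          # assumed scale and curvature of h-tilde
p2, L2 = 1.0, 10.0            # assumed relative price and modern-sector employment
theta, rho = 2.0, 0.03        # assumed CRRA coefficient and discount rate

h = A * L2 ** alpha                       # average product of capital, h-tilde(L2)
h_prime = alpha * A * L2 ** (alpha - 1.0) # h-tilde'(L2)

r = p2 * (h - L2 * h_prime)               # private marginal product of capital, condition (7)
beta = p2 * h                             # social (average) return on capital
gamma = (r - rho) / theta                 # decentralized growth rate, condition (22)
gamma_star = (beta - rho) / theta         # optimal growth rate, condition (29)

# convergence condition (24): r > rho > ((1 - theta)/theta) * (r - rho)
assert r > rho > (1.0 - theta) / theta * (r - rho)

phi = beta - gamma                        # consumption-capital ratio in (27)
print(f"r = {r:.4f}, beta = {beta:.4f}")
print(f"gamma = {gamma:.4f}  <  gamma* = {gamma_star:.4f},  phi = {phi:.4f} > 0")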
The decentralized economy of the modern sector can still attain the optimal growth rate \( \gamma^{*} \) if capital goods purchased by individual producers are subsidized. For example, suppose that producers receive a subsidy on interest payments equivalent to \( s = p_{2} L_{2}^{*} \tilde{h}^{{\prime }} \left( {L_{2}^{*} } \right) \). From (7), the private marginal productivity of capital would be \( p_{2} \left[ {\tilde{h}\left( {L_{2} } \right) - L_{2} \tilde{h}^{{\prime }} \left( {L_{2} } \right)} \right] = r - s \), which would ensure that in (13) the decentralized and optimal growth rate would be the same \( \gamma = \gamma^{*} = \frac{1}{\theta }\left( {p_{2} \tilde{h}\left( {L_{2}^{*} } \right) - \rho } \right) \). If the subsidy is funded through a lump sum tax on the wages received by consumers \( s = \tau w \), then the budget constraint (18) of the representative consumer becomes \( \dot{a} = ra + \left( {1 - \tau } \right)w - c \). However, maximization of utility (17) with respect to this new constraint does not change the optimization condition (19). Thus, taxing consumer wages to pay for the subsidy for capital purchases by producers does not introduce any distortion in the model of the modern sector.
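The claim that this subsidy closes the growth gap can be verified in one line; the check below is added here and is not part of the original appendix. With \( s = p_{2} L_{2}^{*} \tilde{h}^{{\prime }} \left( {L_{2}^{*} } \right) \) and the private condition \( p_{2} \left[ {\tilde{h}\left( {L_{2}^{*} } \right) - L_{2}^{*} \tilde{h}^{{\prime }} \left( {L_{2}^{*} } \right)} \right] = r - s \), the market return becomes \( r = p_{2} \tilde{h}\left( {L_{2}^{*} } \right) \), so the household Euler equation (19) gives
$$ \frac{{\dot{c}}}{c} = \frac{1}{\theta }\left( {r - \rho } \right) = \frac{1}{\theta }\left( {p_{2} \tilde{h}\left( {L_{2}^{*} } \right) - \rho } \right) = \gamma^{*} . $$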
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Barbier, E.B., Bugas, J.S. Structural change, marginal land and economic development in Latin America and the Caribbean. Lat Am Econ Rev 23, 3 (2014) doi:10.1007/s40503-014-0003-5
Accepted: 31 August 2013
Structural change
Less favored land
WALLABY Pilot Survey: Public release of HI kinematic models for more than 100 galaxies from phase 1 of ASKAP pilot observations
N. Deg, K. Spekkens, T. Westmeier, T. N. Reynolds, P. Venkataraman, S. Goliath, A. X. Shen, R. Halloran, A. Bosma, B Catinella, W. J. G. de Blok, H. Dénes, E. M. DiTeodoro, A. Elagali, B.-Q. For, C Howlett, G. I. G. Józsa, P. Kamphuis, D. Kleiner, B Koribalski, K. Lee-Waddell, F. Lelli, X. Lin, C. Murugeshan, S. Oh, J. Rhee, T. C. Scott, L. Staveley-Smith, J. M. van der Hulst, L. Verdes-Montenegro, J. Wang, O. I. Wong
Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022
Published online by Cambridge University Press: 15 November 2022, e059
We present the Widefield ASKAP L-band Legacy All-sky Blind surveY (WALLABY) Pilot Phase I Hi kinematic models. This first data release consists of Hi observations of three fields in the direction of the Hydra and Norma clusters, and the NGC 4636 galaxy group. In this paper, we describe how we generate and publicly release flat-disk tilted-ring kinematic models for 109/592 unique Hi detections in these fields. The modelling method adopted here—which we call the WALLABY Kinematic Analysis Proto-Pipeline (WKAPP) and for which the corresponding scripts are also publicly available—consists of combining results from the homogeneous application of the FAT and 3DBarolo algorithms to the subset of 209 detections with sufficient resolution and $S/N$ in order to generate optimised model parameters and uncertainties. The 109 models presented here tend to be gas-rich detections resolved by at least 3–4 synthesised beams across their major axes, but there is no obvious environmental bias in the modelling. The data release described here is the first step towards the derivation of similar products for thousands of spatially resolved WALLABY detections via a dedicated kinematic pipeline. Such a large publicly available and homogeneously analysed dataset will be a powerful legacy product that will enable a wide range of scientific studies.
WALLABY pilot survey: Public release of H i data for almost 600 galaxies from phase 1 of ASKAP pilot observations
T. Westmeier, N. Deg, K. Spekkens, T. N. Reynolds, A. X. Shen, S. Gaudet, S. Goliath, M. T. Huynh, P. Venkataraman, X. Lin, T. O'Beirne, B. Catinella, L. Cortese, H. Dénes, A. Elagali, B.-Q. For, G. I. G. Józsa, C. Howlett, J. M. van der Hulst, R. J. Jurek, P. Kamphuis, V. A. Kilborn, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, C. Murugeshan, J. Rhee, P. Serra, L. Shao, L. Staveley-Smith, J. Wang, O. I. Wong, M. A. Zwaan, J. R. Allison, C. S. Anderson, Lewis Ball, D. C.-J. Bock, D. Brodrick, J. D. Bunton, F. R. Cooray, N. Gupta, D. B. Hayman, E. K. Mahony, V. A. Moss, A. Ng, S. E. Pearce, W. Raja, D. N. Roxby, M. A. Voronkov, K. A. Warhurst, H. M. Courtois, K. Said
We present WALLABY pilot data release 1, the first public release of H i pilot survey data from the Wide-field ASKAP L-band Legacy All-sky Blind Survey (WALLABY) on the Australian Square Kilometre Array Pathfinder. Phase 1 of the WALLABY pilot survey targeted three $60\,\mathrm{deg}^{2}$ regions on the sky in the direction of the Hydra and Norma galaxy clusters and the NGC 4636 galaxy group, covering the redshift range of $z \lesssim 0.08$ . The source catalogue, images and spectra of nearly 600 extragalactic H i detections and kinematic models for 109 spatially resolved galaxies are available. As the pilot survey targeted regions containing nearby group and cluster environments, the median redshift of the sample of $z \approx 0.014$ is relatively low compared to the full WALLABY survey. The median galaxy H i mass is $2.3 \times 10^{9}\,{\rm M}_{{\odot}}$ . The target noise level of $1.6\,\mathrm{mJy}$ per 30′′ beam and $18.5\,\mathrm{kHz}$ channel translates into a $5 \sigma$ H i mass sensitivity for point sources of about $5.2 \times 10^{8} \, (D_{\rm L} / \mathrm{100\,Mpc})^{2} \, {\rm M}_{{\odot}}$ across 50 spectral channels ( ${\approx} 200\,\mathrm{km \, s}^{-1}$ ) and a $5 \sigma$ H i column density sensitivity of about $8.6 \times 10^{19} \, (1 + z)^{4}\,\mathrm{cm}^{-2}$ across 5 channels ( ${\approx} 20\,\mathrm{km \, s}^{-1}$ ) for emission filling the 30′′ beam. As expected for a pilot survey, several technical issues and artefacts are still affecting the data quality. Most notably, there are systematic flux errors of up to several 10% caused by uncertainties about the exact size and shape of each of the primary beams as well as the presence of sidelobes due to the finite deconvolution threshold. In addition, artefacts such as residual continuum emission and bandpass ripples have affected some of the data. The pilot survey has been highly successful in uncovering such technical problems, most of which are expected to be addressed and rectified before the start of the full WALLABY survey.
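The quoted 5σ point-source H i mass sensitivity can be reproduced from the stated noise level using the standard optically thin relation \( M_{\rm HI} = 2.356 \times 10^{5} \, D_{\rm L}^{2} \int S\,{\rm d}v \) (with \( D_{\rm L} \) in Mpc and the integrated flux in Jy km s−1). The short script below is an added back-of-the-envelope check, not part of the abstract, and assumes independent Gaussian channel noise.

import math

# Back-of-the-envelope check of the 5-sigma HI mass sensitivity quoted above,
# assuming independent Gaussian channel noise; input numbers are from the abstract.
sigma_chan_jy = 1.6e-3        # noise per 30" beam and 18.5 kHz channel [Jy]
chan_width_hz = 18.5e3        # channel width [Hz]
n_chan = 50                   # ~200 km/s line width
d_mpc = 100.0                 # luminosity distance [Mpc]

c_kms = 2.998e5
nu_hi_hz = 1.4204e9           # rest frequency of the 21 cm line [Hz]
dv_kms = c_kms * chan_width_hz / nu_hi_hz        # ~3.9 km/s per channel at z ~ 0

# 5-sigma integrated-flux limit: channel noise adds in quadrature over n_chan channels
flux_limit_jykms = 5.0 * sigma_chan_jy * dv_kms * math.sqrt(n_chan)

# Standard optically thin HI mass relation (D in Mpc, flux in Jy km/s)
m_hi_limit = 2.356e5 * d_mpc ** 2 * flux_limit_jykms
print(f"5-sigma HI mass limit at {d_mpc:.0f} Mpc: {m_hi_limit:.2e} Msun")   # ~5.2e8 Msun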
Gray Matter Deficits of Cortical-striatal-limbic Circuit in Social Anxiety Disorder
X. Zhang, S. Wang, Q. Gong
Journal: European Psychiatry / Volume 65 / Issue S1 / June 2022
Published online by Cambridge University Press: 01 September 2022, pp. S399-S400
The extant findings have been of great heterogeneity due to partial volume effects in the investigation of cortical gray matter volume (GMV), high comorbidity with other psychiatric disorders, and concomitant therapy in the neuroimaging studies of social anxiety disorder (SAD).
To identify gray matter deficits in cortical and subcortical structures in non-comorbid, never-treated patients, so as to explore the "pure" SAD-specific pathophysiology and neurobiology.
Thirty-two non-comorbid free-of-treatment patients with SAD and 32 demography-matched healthy controls were recruited to undergo high-resolution 3.0-Tesla T1-weighted MRI. Cortical thickness (CT) and subcortical GMV were estimated using FreeSurfer; then the whole-brain vertex-wise analysis was performed to compare group differences in CT. Besides, differences in subcortical GMV of priori selected regions-of-interest: amygdala, hippocampus, putamen, and pallidum were compared by an analysis of covariance with age, gender, and total subcortical GMV as covariates.
The SAD patients demonstrated significantly decreased CT near-symmetrically in the bilateral prefrontal cortex (Monte Carlo simulations of P < 0.05). Besides, smaller GMV in the left hippocampus and pallidum were also observed in the SAD cohort (two-sample t-test of P < 0.05).
For the first time, the current study investigated the structural alterations of CT and subcortical GMV in non-comorbid never-treated patients with SAD. Our findings provide preliminary evidences that structural deficits in cortical-striatal-limbic circuit may contribute to the psychopathological basis of SAD, and offer more detailed structural substrates for the involvement of such aberrant circuit in the imbalance between defective bottom-up response and top-down control to external stimuli in SAD.
No significant relationships.
Prediction and copy number variation identification of ZNF146 gene related to growth traits in Chinese cattle
X. T. Ding, X. Liu, X. M. Li, Y. F. Wen, J. W. Xu, W. J. Liu, Z. M. Li, Z. J. Zhang, Y. N. Chai, H. L. Wang, B. W. Cheng, S. H. Liu, B. Hou, Y. J. Huang, J. G. Li, L. J. Li, G. J. Yang, Z. F. Qi, F. Y. Chen, Q. T. Shi, E. Y. Wang, C. Z. Lei, H. Chen, B. R. Ru, Y. Z. Huang
Journal: The Journal of Agricultural Science / Volume 160 / Issue 5 / October 2022
Growing demographic pressure brings a tremendous volume of beef demand, and the key to meeting it lies in the growth and development of Chinese cattle. In order to find molecular markers conducive to the growth and development of Chinese cattle, sequencing was used to determine the position of copy number variations (CNVs), bioinformatics analysis was used to predict the function of the ZNF146 gene, real-time fluorescent quantitative PCR (qPCR) was used for CNV genotyping and one-way analysis of variance was used for association analysis. Based on earlier sequencing results from the laboratory, a CNV was found in the ZNF146 gene at Chr 18: 47225201–47229600 (version 5.0.1). Bioinformatic prediction indicated that ZNF146 is expressed in liver, skeletal muscle and breast cells and is amplified or overexpressed in pancreatic cancer, where it promotes tumour development. It was therefore predicted that the ZNF146 gene affects the proliferation of muscle cells and, in turn, the growth and development of cattle. Furthermore, qPCR genotyping of the ZNF146 CNV identified three types (deletion, normal and duplication). The association analysis showed that ZNF146-CNV was significantly correlated with rump length of Qinchuan cattle, hucklebone width of Jiaxian red cattle and heart girth of Yunling cattle. These results indicate that ZNF146-CNV has a significant effect on growth traits and provides an important candidate molecular marker for the growth and development of Chinese cattle.
Acceleration of 60 MeV proton beams in the commissioning experiment of the SULF-10 PW laser
On the Cover of HPL
A. X. Li, C. Y. Qin, H. Zhang, S. Li, L. L. Fan, Q. S. Wang, T. J. Xu, N. W. Wang, L. H. Yu, Y. Xu, Y. Q. Liu, C. Wang, X. L. Wang, Z. X. Zhang, X. Y. Liu, P. L. Bai, Z. B. Gan, X. B. Zhang, X. B. Wang, C. Fan, Y. J. Sun, Y. H. Tang, B. Yao, X. Y. Liang, Y. X. Leng, B. F. Shen, L. L. Ji, R. X. Li, Z. Z. Xu
Journal: High Power Laser Science and Engineering / Volume 10 / 2022
Published online by Cambridge University Press: 03 August 2022, e26
We report the experimental results of the commissioning phase in the 10 PW laser beamline of the Shanghai Superintense Ultrafast Laser Facility (SULF). The peak power reaches 2.4 PW on target without the final amplification stage during the experiment. The laser energy of 72 ± 9 J is directed to a focal spot of approximately 6 μm diameter (full width at half maximum) in 30 fs pulse duration, yielding a focused peak intensity around 2.0 × 10²¹ W/cm². The first laser-proton acceleration experiment is performed using plain copper and plastic targets. High-energy proton beams with maximum cut-off energy up to 62.5 MeV are achieved using copper foils at the optimum target thickness of 4 μm via target normal sheath acceleration. For plastic targets tens of nanometers thick, the proton cut-off energy is approximately 20 MeV, showing ring-like or filamented density distributions. These experimental results reflect the capabilities of the SULF-10 PW beamline, in particular both ultrahigh intensity and relatively good beam contrast. Further optimization of these key parameters is underway, where peak laser intensities of 10²²–10²³ W/cm² are anticipated to support various experiments on extreme field physics.
Clinical analysis of relapsing polychondritis with airway involvement
S-Y Zhai, R-Y Guo, C Zhang, C-M Zhang, H-Y Yin, B-Q Wang, S-X Wen
Journal: The Journal of Laryngology & Otology / Volume 137 / Issue 1 / January 2023
Published online by Cambridge University Press: 02 February 2022, pp. 96-100
To identify the clinical characteristics, treatment, and prognosis of relapsing polychondritis patients with airway involvement.
Twenty-eight patients with relapsing polychondritis, hospitalised in the First Hospital of Shanxi Medical University between April 2011 and April 2021, were retrospectively analysed.
Fifty per cent of relapsing polychondritis patients with airway involvement had a lower risk of ear and ocular involvement. Relapsing polychondritis patients with airway involvement had a longer time-to-diagnosis (p < 0.001), a poorer outcome following glucocorticoid combined with immunosuppressant treatment (p = 0.004), and a higher recurrence rate than those without airway involvement (p = 0.004). The rates of positive findings on chest computed tomography and bronchoscopy in relapsing polychondritis patients with airway involvement were 88.9 per cent and 85.7 per cent, respectively. Laryngoscopy analysis showed that 66.7 per cent of relapsing polychondritis patients had varying degrees of mucosal lesions.
For relapsing polychondritis patients with airway involvement, drug treatment should be combined with local airway management.
The impact of COVID-19 on subthreshold depressive symptoms: a longitudinal study
Y. H. Liao, B. F. Fan, H. M. Zhang, L. Guo, Y. Lee, W. X. Wang, W. Y. Li, M. Q. Gong, L. M. W. Lui, L. J. Li, C. Y. Lu, R. S. McIntyre
Journal: Epidemiology and Psychiatric Sciences / Volume 30 / 2021
Published online by Cambridge University Press: 15 February 2021, e20
The coronavirus disease 2019 (COVID-19) pandemic represents an unprecedented threat to mental health. Herein, we assessed the impact of COVID-19 on subthreshold depressive symptoms and identified potential mitigating factors.
Participants were from Depression Cohort in China (ChiCTR registry number 1900022145). Adults (n = 1722) with subthreshold depressive symptoms were enrolled between March and October 2019 in a 6-month, community-based interventional study that aimed to prevent clinical depression using psychoeducation. A total of 1506 participants completed the study in Shenzhen, China: 726 participants, who completed the study between March 2019 and January 2020 (i.e. before COVID-19), comprised the 'wave 1' group; 780 participants, who were enrolled before COVID-19 and completed the 6-month endpoint assessment during COVID-19, comprised 'wave 2'. Symptoms of depression, anxiety and insomnia were assessed at baseline and endpoint (i.e. 6-month follow-up) using the Patient Health Questionnaire-9 (PHQ-9), Generalised Anxiety Disorder-7 (GAD-7) and Insomnia Severity Index (ISI), respectively. Measures of resilience and regular exercise were assessed at baseline. We compared the mental health outcomes between wave 1 and wave 2 groups. We additionally investigated how mental health outcomes changed across disparate stages of the COVID-19 pandemic in China, i.e. peak (7–13 February), post-peak (14–27 February), remission plateau (28 February−present).
COVID-19 increased the risk for three mental outcomes: (1) depression (odds ratio [OR] = 1.30, 95% confidence interval [CI]: 1.04–1.62); (2) anxiety (OR = 1.47, 95% CI: 1.16–1.88) and (3) insomnia (OR = 1.37, 95% CI: 1.07–1.77). The highest proportion of probable depression and anxiety was observed post-peak, with 52.9% and 41.4%, respectively. Greater baseline resilience scores had a protective effect on the three main outcomes (depression: OR = 0.26, 95% CI: 0.19–0.37; anxiety: OR = 1.22, 95% CI: 0.14–0.33 and insomnia: OR = 0.18, 95% CI: 0.11–0.28). Furthermore, regular physical activity mitigated the risk for depression (OR = 0.79, 95% CI: 0.79–0.99).
The COVID-19 pandemic exerted a highly significant and negative impact on symptoms of depression, anxiety and insomnia. Mental health outcomes fluctuated as a function of the duration of the pandemic and were alleviated to some extent with the observed decline in community-based transmission. Augmenting resiliency and regular exercise provide an opportunity to mitigate the risk for mental health symptoms during this severe public health crisis.
Effects of riboflavin supplementation on performance, nutrient digestion, rumen microbiota composition and activities of Holstein bulls
H. M. Wu, J. Zhang, C. Wang, Q. Liu, G. Guo, W. J. Huo, L. Chen, Y. L. Zhang, C. X. Pei, S. L. Zhang
Journal: British Journal of Nutrition / Volume 126 / Issue 9 / 14 November 2021
Published online by Cambridge University Press: 08 January 2021, pp. 1288-1295
Print publication: 14 November 2021
To investigate the influences of dietary riboflavin (RF) addition on nutrient digestion and rumen fermentation, eight rumen cannulated Holstein bulls were randomly allocated into four treatments in a repeated 4 × 4 Latin square design. Daily addition level of RF for each bull in control, low RF, medium RF and high RF was 0, 300, 600 and 900 mg, respectively. Increasing the addition level of RF, DM intake was not affected, average daily gain tended to be increased linearly and feed conversion ratio decreased linearly. Total tract digestibilities of DM, organic matter, crude protein (CP) and neutral-detergent fibre (NDF) increased linearly. Rumen pH decreased quadratically, and total volatile fatty acids (VFA) increased quadratically. Acetate molar percentage and acetate:propionate ratio increased linearly, but propionate molar percentage and ammonia-N content decreased linearly. Rumen effective degradability of DM increased linearly, NDF increased quadratically but CP was unaltered. Activity of cellulase and populations of total bacteria, protozoa, fungi, dominant cellulolytic bacteria, Prevotella ruminicola and Ruminobacter amylophilus increased linearly. Linear increase was observed for urinary total purine derivatives excretion. The data suggested that dietary RF addition was essential for rumen microbial growth, and no further increase in performance and rumen total VFA concentration was observed when increasing RF level from 600 to 900 mg/d in dairy bulls.
Evaluation of the frequency of mutation genes in multidrug-resistant tuberculosis (MDR-TB) strains in Beijing, China
Y. Liu, Y. Sun, X. Zhang, Z. Zhang, Q. Xing, W. Ren, C. Yao, J. Yu, B. Ding, S. Wang, C. Li
Journal: Epidemiology & Infection / Volume 149 / 2021
Published online by Cambridge University Press: 05 January 2021, e21
The aim of this study was to explore the frequency and distribution of gene mutations that are related to isoniazid (INH) and rifampin (RIF)-resistance in the strains of the multidrug-resistant tuberculosis (MDR-TB) Mycobacterium tuberculosis (M.tb) in Beijing, China. In this retrospective study, the genotypes of 173 MDR-TB strains were analysed by spoligotyping. The katG, inhA genes and the promoter region of inhA, in which genetic mutations confer INH resistance; and the rpoB gene, in which genetic mutations confer RIF resistance, were sequenced. The percentage of resistance-associated nucleotide alterations among the strains of different genotypes was also analysed. In total, 90.8% (157/173) of the MDR strains belonged to the Beijing genotype. Population characteristics were not significantly different among the strains of different genotypes. In total, 50.3% (87/173) strains had mutations at codon S315T of katG; 16.8% (29/173) of strains had mutations in the inhA promoter region; of them, 5.5% (15/173) had point mutations at −15 base (C→T) of the inhA promoter region. In total, 86.7% (150/173) strains had mutations at rpoB gene; of them, 40% (69/173) strains had mutations at codon S531L of rpoB. The frequency of mutations was not significantly higher in Beijing genotypic MDR strains than in non-Beijing genotypes. Beijing genotypic MDR-TB strains were spreading in Beijing and present a major challenge to TB control in this region. A high prevalence of katG Ser315Thr, inhA promoter region (−15C→T) and rpoB (S531L) mutations was observed. Molecular diagnostics based on gene mutations was a useful method for rapid detection of MDR-TB in Beijing, China.
Neutron Star Extreme Matter Observatory: A kilohertz-band gravitational-wave detector in the global network
Gravitational Wave Astronomy
K. Ackley, V. B. Adya, P. Agrawal, P. Altin, G. Ashton, M. Bailes, E. Baltinas, A. Barbuio, D. Beniwal, C. Blair, D. Blair, G. N. Bolingbroke, V. Bossilkov, S. Shachar Boublil, D. D. Brown, B. J. Burridge, J. Calderon Bustillo, J. Cameron, H. Tuong Cao, J. B. Carlin, S. Chang, P. Charlton, C. Chatterjee, D. Chattopadhyay, X. Chen, J. Chi, J. Chow, Q. Chu, A. Ciobanu, T. Clarke, P. Clearwater, J. Cooke, D. Coward, H. Crisp, R. J. Dattatri, A. T. Deller, D. A. Dobie, L. Dunn, P. J. Easter, J. Eichholz, R. Evans, C. Flynn, G. Foran, P. Forsyth, Y. Gai, S. Galaudage, D. K. Galloway, B. Gendre, B. Goncharov, S. Goode, D. Gozzard, B. Grace, A. W. Graham, A. Heger, F. Hernandez Vivanco, R. Hirai, N. A. Holland, Z. J. Holmes, E. Howard, E. Howell, G. Howitt, M. T. Hübner, J. Hurley, C. Ingram, V. Jaberian Hamedan, K. Jenner, L. Ju, D. P. Kapasi, T. Kaur, N. Kijbunchoo, M. Kovalam, R. Kumar Choudhary, P. D. Lasky, M. Y. M. Lau, J. Leung, J. Liu, K. Loh, A. Mailvagan, I. Mandel, J. J. McCann, D. E. McClelland, K. McKenzie, D. McManus, T. McRae, A. Melatos, P. Meyers, H. Middleton, M. T. Miles, M. Millhouse, Y. Lun Mong, B. Mueller, J. Munch, J. Musiov, S. Muusse, R. S. Nathan, Y. Naveh, C. Neijssel, B. Neil, S. W. S. Ng, V. Oloworaran, D. J. Ottaway, M. Page, J. Pan, M. Pathak, E. Payne, J. Powell, J. Pritchard, E. Puckridge, A. Raidani, V. Rallabhandi, D. Reardon, J. A. Riley, L. Roberts, I. M. Romero-Shaw, T. J. Roocke, G. Rowell, N. Sahu, N. Sarin, L. Sarre, H. Sattari, M. Schiworski, S. M. Scott, R. Sengar, D. Shaddock, R. Shannon, J. SHI, P. Sibley, B. J. J. Slagmolen, T. Slaven-Blair, R. J. E. Smith, J. Spollard, L. Steed, L. Strang, H. Sun, A. Sunderland, S. Suvorova, C. Talbot, E. Thrane, D. Töyrä, P. Trahanas, A. Vajpeyi, J. V. van Heijningen, A. F. Vargas, P. J. Veitch, A. Vigna-Gomez, A. Wade, K. Walker, Z. Wang, R. L. Ward, K. Ward, S. Webb, L. Wen, K. Wette, R. Wilcox, J. Winterflood, C. Wolf, B. Wu, M. Jet Yap, Z. You, H. Yu, J. Zhang, J. Zhang, C. Zhao, X. Zhu
Gravitational waves from coalescing neutron stars encode information about nuclear matter at extreme densities, inaccessible by laboratory experiments. The late inspiral is influenced by the presence of tides, which depend on the neutron star equation of state. Neutron star mergers are expected to often produce rapidly rotating remnant neutron stars that emit gravitational waves. These will provide clues to the extremely hot post-merger environment. This signature of nuclear matter in gravitational waves contains most information in the 2–4 kHz frequency band, which is outside of the most sensitive band of current detectors. We present the design concept and science case for a Neutron Star Extreme Matter Observatory (NEMO): a gravitational-wave interferometer optimised to study nuclear physics with merging neutron stars. The concept uses high-circulating laser power, quantum squeezing, and a detector topology specifically designed to achieve the high-frequency sensitivity necessary to probe nuclear matter using gravitational waves. Above 1 kHz, the proposed strain sensitivity is comparable to full third-generation detectors at a fraction of the cost. Such sensitivity changes expected event rates for detection of post-merger remnants from approximately one per few decades with two A+ detectors to a few per year and potentially allow for the first gravitational-wave observations of supernovae, isolated neutron stars, and other exotica.
Severe fever with thrombocytopenia syndrome virus: a systematic review and meta-analysis of transmission mode
X. Y. Huang, Z. Q. He, B. H. Wang, K. Hu, Y. Li, W. S. Guo
Published online by Cambridge University Press: 30 September 2020, e239
Severe fever with thrombocytopenia syndrome (SFTS) is a disease with a high case-fatality rate that is caused by infection with the SFTS virus (SFTSV). Five electronic databases were systematically searched to identify relevant articles published from 1 January 2011 to 1 December 2019. The pooled rates with 95% confidence interval (CI) were calculated by a fixed-effect or random-effect model analysis. The results showed that 92 articles were included in this meta-analysis. For the confirmed SFTS cases, the case-fatality rate was 0.15 (95% CI 0.11, 0.18). Two hundred and ninety-six of 1384 SFTS patients indicated that they had been bitten by ticks and the biting rate was 0.21 (95% CI 0.16, 0.26). The overall pooled seroprevalence of SFTSV antibodies among the healthy population was 0.04 (95% CI 0.03, 0.05). For the overall seroprevalence of SFTSV in animals, the seroprevalence of SFTSV was 0.25 (95% CI 0.20, 0.29). The infection rate of SFTSV in ticks was 0.08 (95% CI 0.05, 0.11). In conclusion, ticks can serve as transmitting vectors of SFTSVs and reservoir hosts. Animals can be infected by tick bites, and as a reservoir host, SFTSV circulates continuously between animals and ticks in nature. Humans are infected by tick bites and direct contact with patient secretions.
Capsular polysaccharide and lipopolysaccharide O type analysis of Klebsiella pneumoniae isolates by genotype in China
Z. Y. Zhang, R. Qin, Y. H. Lu, J. Shen, S. Y. Zhang, C. Y. Wang, Y. Q. Yang, F. P. Hu, P. He
Published online by Cambridge University Press: 12 August 2020, e191
Klebsiella pneumoniae is a common pathogen associated with nosocomial infections and is characterised serologically by capsular polysaccharide (K) and lipopolysaccharide O antigens. We surveyed a total of 348 non-duplicate K. pneumoniae clinical isolates collected over a 1-year period in a tertiary care hospital, and determined their O and K serotypes by sequencing of the wbbY and wzi gene loci, respectively. Isolates were also screened for antimicrobial resistance and hypervirulent phenotypes; 94 (27.0%) were identified as carbapenem-resistant (CRKP) and 110 (31.6%) as hypervirulent (hvKP). Isolates fell into 58 K and six O types, with 92.0% and 94.2% typeability, respectively. The predominant K types were K14K64 (16.38%), K1 (14.66%), K2 (8.05%) and K57 (5.46%), while O1 (46%), O2a (27.9%) and O3 (11.8%) were the most common. CRKP and hvKP strains had different serotype distributions with O2a:K14K64 (41.0%) being the most frequent among CRKP, and O1:K1 (26.4%) and O1:K2 (17.3%) among hvKP strains. Serotyping by gene sequencing proved to be a useful tool to inform the clinical epidemiology of K. pneumoniae infections and provides valuable data relevant to vaccine design.
Effects of copper sulphate and coated copper sulphate addition on lactation performance, nutrient digestibility, ruminal fermentation and blood metabolites in dairy cows
C. Wang, L. Han, G. W. Zhang, H. S. Du, Z. Z. Wu, Q. Liu, G. Guo, W. J. Huo, J. Zhang, Y. L. Zhang, C. X. Pei, S. L. Zhang
Journal: British Journal of Nutrition / Volume 125 / Issue 3 / 14 February 2021
Print publication: 14 February 2021
Coated copper sulphate (CCS) could be used as a Cu supplement in cows. To investigate the influences of copper sulphate (CS) and CCS on milk performance, nutrient digestion and rumen fermentation, fifty Holstein dairy cows were arranged in a randomised block design to five groups: control, CS addition (7·5 mg Cu/kg DM from CS) or CCS addition (5, 7·5 and 10 mg Cu/kg DM from CCS, respectively). When comparing Cu source at equal inclusion rates (7·5 mg/kg DM), cows receiving CCS addition had higher yields of fat-corrected milk, milk fat and protein; digestibility of DM, organic matter (OM) and neutral-detergent fibre (NDF); ruminal total volatile fatty acid (VFA) concentration; activities of carboxymethyl cellulase, cellobiase, pectinase and α-amylase; populations of Ruminococcus albus, Ruminococcus flavefaciens and Fibrobacter succinogenes; and liver Cu content than cows receiving CS addition. Increasing CCS addition, DM intake was unchanged, yields of milk, milk fat and protein; feed efficiency; digestibility of DM, OM, NDF and acid-detergent fibre; ruminal total VFA concentration; acetate:propionate ratio; activity of cellulolytic enzyme; populations of total bacteria, protozoa and dominant cellulolytic bacteria; and concentrations of Cu in serum and liver increased linearly, but ruminal propionate percentage, ammonia-N concentration, α-amylase activity and populations of Prevotella ruminicola and Ruminobacter amylophilus decreased linearly. The results indicated that supplement of CS could be substituted with CCS and addition of CCS improved milk performance and nutrient digestion in dairy cows.
Effects of guanidinoacetic acid supplementation on growth performance, nutrient digestion, rumen fermentation and blood metabolites in Angus bulls
S. Y. Li, C. Wang, Z. Z. Wu, Q. Liu, G. Guo, W. J. Huo, J. Zhang, L. Chen, Y. L. Zhang, C. X. Pei, S. L. Zhang
Journal: animal / Volume 14 / Issue 12 / December 2020
Published online by Cambridge University Press: 25 June 2020, pp. 2535-2542
Guanidinoacetic acid (GAA) can improve the growth performance of bulls. This study investigated the influences of GAA addition on growth, nutrient digestion, ruminal fermentation and serum metabolites in bulls. Forty-eight Angus bulls were randomly allocated to experimental treatments, that is, control, low-GAA (LGAA), medium-GAA (MGAA) and high-GAA (HGAA), with GAA supplementation at 0, 0.3, 0.6 and 0.9 g/kg DM, respectively. Bulls were fed a basal diet containing 500 g/kg DM concentrate and 500 g/kg DM roughage. The experimental period was 104 days, with 14 days for adaptation and 90 days for data collection. Bulls in the MGAA and HGAA groups had higher DM intake and average daily gain than bulls in the LGAA and control groups. The feed conversion ratio was lowest in MGAA and highest in the control. Bulls receiving 0.9 g/kg DM GAA addition had higher digestibility of DM, organic matter, NDF and ADF than bulls in other groups. The digestibility of CP was higher for HGAA than for LGAA and control. The ruminal pH was lower for MGAA, and the total volatile fatty acid concentration was greater for MGAA and HGAA than for the control. The acetate proportion and acetate-to-propionate ratio were lower for MGAA than for LGAA and control. The propionate proportion was higher for MGAA than for control. Bulls receiving GAA addition showed decreased ruminal ammonia N. Bulls in MGAA and HGAA had higher cellobiase, pectinase and protease activities and Butyrivibrio fibrisolvens, Prevotella ruminicola and Ruminobacter amylophilus populations than bulls in LGAA and control. However, the total protozoan population was lower for MGAA and HGAA than for LGAA and control. The total bacterial and Ruminococcus flavefaciens populations increased with GAA addition. The blood level of creatine was higher for HGAA, and the activity of l-arginine glycine amidine transferase was lower for MGAA and HGAA, than for control. The blood activity of guanidine acetate N-methyltransferase and the level of folate decreased in the GAA addition groups. The results indicated that dietary addition of 0.6 or 0.9 g/kg DM GAA improved growth performance, nutrient digestion and ruminal fermentation in bulls.
Selective amplification of the chirped attosecond pulses produced from relativistic electron mirrors
F. Tan, S. Y. Wang, B. Zhang, Z. M. Zhang, B. Zhu, Y. C. Wu, M. H. Yu, Y. Yang, G. Li, T. K. Zhang, Y. H. Yan, F. Lu, W. Fan, W. M. Zhou, Y. Q. Gu
Journal: Laser and Particle Beams / Volume 38 / Issue 2 / June 2020
Print publication: June 2020
In this paper, the generation of relativistic electron mirrors (REM) and the reflection of an ultra-short laser off the mirrors are discussed, applying two-dimension particle-in-cell simulations. REMs with ultra-high acceleration and expanding velocity can be produced from a solid nanofoil illuminated normally by an ultra-intense femtosecond laser pulse with a sharp rising edge. Chirped attosecond pulse can be produced through the reflection of a counter-propagating probe laser off the accelerating REM. In the electron moving frame, the plasma frequency of the REM keeps decreasing due to its rapid expansion. The laser frequency, on the contrary, keeps increasing due to the acceleration of REM and the relativistic Doppler shift from the lab frame to the electron moving frame. Within an ultra-short time interval, the two frequencies will be equal in the electron moving frame, which leads to the resonance between laser and REM. The reflected radiation near this interval and corresponding spectra will be amplified due to the resonance. Through adjusting the arriving time of the probe laser, a certain part of the reflected field could be selectively amplified or depressed, leading to the selective adjustment of the corresponding spectra.
Effects of dietary incorporation of linseed oil with soybean isoflavone on fatty acid profiles and lipid metabolism-related gene expression in breast muscle of chickens
Z. Y. Gou, X. Y. Cui, L. Li, Q. L. Fan, X. J. Lin, Y. B. Wang, Z. Y. Jiang, S. Q. Jiang
Journal: animal / Volume 14 / Issue 11 / November 2020
Published online by Cambridge University Press: 19 May 2020, pp. 2414-2422
Print publication: November 2020
The meat quality of chicken is an important factor affecting the consumer's health. It was hypothesized that n-3 polyunsaturated fatty acid (n-3 PUFA) could be effectively deposited in chicken, by incorporating antioxidation of soybean isoflavone (SI), which led to improved quality of chicken meat for good health of human beings. Effects of partial or complete dietary substitution of lard (LA) with linseed oil (LO), with or without SI on growth performance, biochemical indicators, meat quality, fatty acid profiles, lipid-related health indicators and gene expression of breast muscle were examined in chickens. A total of 900 males were fed a corn–soybean meal diet supplemented with 4% LA, 2% LA + 2% LO and 4% LO and the latter two including 30 mg SI/kg (2% LA + 2% LO + SI and 4% LO + SI) from 29 to 66 days of age; each of the five dietary treatments included six replicates of 30 birds. Compared with the 4% LA diet, dietary 4% LO significantly increased the feed efficiency and had no negative effect on objective indices related to meat quality; LO significantly decreased plasma triglycerides and total cholesterol (TCH); abdominal fat percentage was significantly decreased in birds fed the 4% LO and 4% LO + SI diets. Chickens with LO diets resulted in higher contents of α-linolenic acid (C18:3n-3), EPA (C20:5n-3) and total n-3 PUFA, together with a lower content of palmitic acid (C16:0), lignoceric acid (C24:0), saturated fatty acids and n-6:n-3 ratio in breast muscle compared to 4% LA diet (P < 0.05); they also significantly decreased atherogenic index, thrombogenic index and increased the hypocholesterolemic to hypercholesterolemic ratio. Adding SI to the LO diets enhanced the contents of EPA and DHA (C22:6n-3), plasma total superoxide dismutase, reduced glutathione (GSH)/oxidized glutathione and muscle GSH content, while decreased plasma total triglyceride and TCH and malondialdehyde content in plasma and breast muscle compared to its absence (P < 0.05). Expression in breast muscle of fatty acid desaturase 1 (FADS1), FADS2, elongase 2 (ELOVL2) and ELOVL5 genes were significantly higher with the LO diets including SI than with the 4% LA diet. Significant interactions existed between LO level and inclusion of SI on EPA and TCH contents. These findings indicate that diet supplemented with LO combined with SI is an effective alternative when optimizing the nutritional value of chicken meat for human consumers.
Effects of sodium selenite and coated sodium selenite on lactation performance, total tract nutrient digestion and rumen fermentation in Holstein dairy cows
Z. D. Zhang, C. Wang, H. S. Du, Q. Liu, G. Guo, W. J. Huo, J. Zhang, Y. L. Zhang, C. X. Pei, S. L. Zhang
Journal: animal / Volume 14 / Issue 10 / October 2020
Published online by Cambridge University Press: 28 April 2020, pp. 2091-2099
Print publication: October 2020
Se can enhance lactation performance by improving nutrient utilization and antioxidant status. However, sodium selenite (SS) can be reduced to non-absorbable elemental Se in the rumen, thereby reducing the intestinal availability of Se. The study investigated the impacts of SS and coated SS (CSS) supplementation on lactation performance, nutrient digestibility, ruminal fermentation and microbiota in dairy cows. Sixty multiparous Holstein dairy cows were blocked by parity, daily milk yield and days in milk and randomly assigned to five treatments: control, SS addition (0.3 mg Se/kg DM as SS addition) or CSS addition (0.1, 0.2 and 0.3 mg Se/kg DM as CSS addition for low CSS (LCSS), medium CSS (MCSS) and high CSS (HCSS), respectively). Experiment period was 110 days with 20 days of adaptation and 90 days of sample collection. Dry matter intake was higher for MCSS and HCSS compared with control. Yields of milk, milk fat and milk protein and feed efficiency were higher for MCSS and HCSS than for control, SS and LCSS. Digestibility of DM and organic matter was highest for CSS addition, followed by SS addition and then control. Digestibility of CP was higher for MCSS and HCSS than for control, SS and LCSS. Higher digestibility of ether extract, NDF and ADF was observed for SS or CSS addition. Ruminal pH decreased with dietary Se addition. Acetate to propionate ratio and ammonia N were lower, and total volatile fatty acids (VFAs) concentration was greater for SS, MCSS and HCSS than control. Ruminal H ion concentration was highest for MCSS and HCSS and lowest for control. Activities of cellobiase, carboxymethyl-cellulase, xylanase and protease and copies of total bacteria, fungi, Ruminococcus flavefaciens, Fibrobacter succinogenes and Ruminococcus amylophilus increased with SS or CSS addition. Activity of α-amylase, copies of protozoa, Ruminococcus albus and Butyrivibrio fibrisolvens and serum glucose, total protein, albumin and glutathione peroxidase were higher for SS, MCSS and HCSS than for control and LCSS. Dietary SS or CSS supplementation elevated blood Se concentration and total antioxidant capacity activity. The data implied that milk yield was elevated due to the increase in total tract nutrient digestibility, total VFA concentration and microorganism population with 0.2 or 0.3 mg Se/kg DM from CSS supplementation in dairy cows. Compared with SS, HCSS addition was more efficient in promoting lactation performance of dairy cows.
Hemodynamic brain response to visual sexual stimuli is different between homosexual and heterosexual men
S.-H. Hu, Q.-D. Wang, Y. Xu, M.-M. Zhang
Journal: European Psychiatry / Volume 26 / Issue S2 / March 2011
Published online by Cambridge University Press: 16 April 2020, p. 930
Many studies showed the differences in subjective response to sexual stimuli between heterosexual and homosexual men. However, the underlying neurobiological factors of sexual orientation are largely unknown. We addressed the question what is the major attribution of the expected differences in brain activation, i.e. neural circuits or different cognitive process. Twenty-eight healthy male volunteers, 14 heterosexuals and 14 homosexuals, were scanned by functional Magnetic Resonance Imaging while subjects were viewing different types of stimuli, i.e. heterosexual couple stimuli (HCS), gay couple stimuli (GCS), lesbian couple stimuli (LCS) and neutral stimuli (NS). SPM02 was used for data analysis. Rating of sexual attractiveness was assessed. Subjective sexual arousal was induced by HCS and GCS in heterosexual and homosexual men, respectively. And sexual disgust was induced by GCS and LCS in heterosexual and homosexual men, respectively. As compared to viewing NS, viewing sexual stimuli induced significant different brain activations most of which had characteristic for cognitive process. These observations suggested that different cognitive pattern was major attribution of different subjective response to sexual stimuli between heterosexual and homosexual men.
A Weak Association of the CLDN5 Locus with Schizophrenia in Chinese Case-control Samples
N. Wu, X. Zhang, L. Ye, Q. Xu, S. Jin, Z. Wang, S. Liu, G. Ju, Y. Shen, J. Wei
Journal: European Psychiatry / Volume 24 / Issue S1 / January 2009
Published online by Cambridge University Press: 16 April 2020, p. 1
An increasing number of studies have described the relationship between velo-cardio-facial syndrome (VCFS) and schizophrenia. In a family-based study, we found that rs10314, a single nucleotide polymorphism (SNP) present in the 3'-flanking region of the CLDN5 gene, was associated with schizophrenia among a Chinese population. A high false-positive rate is a common problem in association studies of human diseases, so it is very important to replicate an initial finding with different samples and experimental designs.
A total of 749 patients with schizophrenia and 383 age and sex matched healthy control subjects in Chinese population were recruited. PCR-based RFLP protocol was applied to genotype rs10314 to see its disease association.
The χ2 goodness-of-fit test showed that the genotypic distributions of rs10314 were in Hardy-Weinberg equilibrium in both the patient group (χ2=1.12, P=0.289) and the control group (χ2=0.22, P=0.639). rs10314 was associated with schizophrenia with an odds ratio (OR) of 1.32 in the male subjects (χ2=5.45, P=0.02, 95% CI 1.05-1.67) but not in the female subjects (χ2=0.64, P=0.425, OR=1.14, 95% CI 0.83-1.57). The χ2 test showed a genotypic association only for combined samples (χ2=7.80, df=2, P=0.02). SNP rs10314 is a G to C base change. The frequency of genotypes containing the C allele was significantly higher in the patient group than in the control group.
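For readers unfamiliar with the two statistics reported here, the sketch below shows how a Hardy-Weinberg goodness-of-fit χ2 and an allelic odds ratio are computed. The genotype and allele counts in the example are hypothetical placeholders, since the abstract does not report the underlying tables.

# Sketch of the two tests described above, using made-up counts;
# the real genotype tables are not given in the abstract.
from scipy.stats import chi2

def hwe_chi2(n_gg, n_gc, n_cc):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium."""
    n = n_gg + n_gc + n_cc
    p = (2 * n_gg + n_gc) / (2 * n)                 # frequency of the G allele
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2]
    observed = [n_gg, n_gc, n_cc]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # 1 df: 3 genotype classes - 1 - 1 estimated allele frequency
    return stat, chi2.sf(stat, df=1)

def allelic_odds_ratio(case_c, case_g, ctrl_c, ctrl_g):
    """Odds ratio for carrying the C allele, cases vs. controls."""
    return (case_c * ctrl_g) / (case_g * ctrl_c)

# Hypothetical numbers only, for illustration
print(hwe_chi2(n_gg=400, n_gc=280, n_cc=69))
print(allelic_odds_ratio(case_c=350, case_g=1148, ctrl_c=160, ctrl_g=606))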
The present work shows that the CLDN5 gene polymorphism is more likely to be involved in schizophrenic men than women, suggesting that this gene may contribute to the gender differences in schizophrenia.
Plasma-based proteomics reveals lipid metabolic and immunoregulatory dysregulation in post-stroke depression
Y. Zhan, Y.-T. Yang, H.-M. You, D. Cao, C.-Y. Liu, C.-J. Zhou, Z.-Y. Wang, S.-J. Bai, J. Mu, B. Wu, Q.-L. Zhan, P. Xie
Journal: European Psychiatry / Volume 29 / Issue 5 / June 2014
Post-stroke depression (PSD) is the most common psychiatric complication facing stroke survivors and has been associated with increased distress, physical disability, poor rehabilitation, and suicidal ideation. However, the pathophysiological mechanisms underlying PSD remain unknown, and no objective laboratory-based test is available to aid PSD diagnosis or monitor progression.
Here, an isobaric tags for relative and absolute quantitation (iTRAQ)-based quantitative proteomic approach was performed to identify differentially expressed proteins in plasma samples obtained from PSD, stroke, and healthy control subjects.
The significantly differentiated proteins were primarily involved in lipid metabolism and immunoregulation. Six proteins associated with these processes – apolipoprotein A-IV (ApoA-IV), apolipoprotein C-II (ApoC-II), C-reactive protein (CRP), gelsolin, haptoglobin, and leucine-rich alpha-2-glycoprotein (LRG) – were selected for Western blotting validation. ApoA-IV expression was significantly upregulated in PSD as compared to stroke subjects. ApoC-II, LRG, and CRP expression were significantly downregulated in both PSD and HC subjects relative to stroke subjects. Gelsolin and haptoglobin expression were significantly dysregulated across all three groups with the following expression profiles: gelsolin, healthy control > PSD > stroke subjects; haptoglobin, stroke > PSD > healthy control.
Early perturbation of lipid metabolism and immunoregulation may be involved in the pathophysiology of PSD. The combination of increased gelsolin levels accompanied by decreased haptoglobin levels shows promise as a plasma-based diagnostic biomarker panel for detecting increased PSD risk in post-stroke patients. | CommonCrawl |
[[Kategória:Laborgyakorlat]] [[Kategória:Fizika laboratórium 4.]] [[Kategória:Szerkesztő:Bokor]] __TOC__ ==Introduction== Humans have the ability to observe their surroundings in three dimensions. A large part of this is due to the fact that we have two eyes, and hence stereoscopic vision. The detector in the human eye - the retina - is a two-dimensional surface that detects the intensity of the light that hits it. Similarly, in conventional photography, the object is imaged by an optical system onto a two-dimensional photosensitive surface, i.e. the photographic film or plate. Any point, or "pixel", of the photographic plate is sensitive only to the intensity of the light that hits it, not to the entire complex amplitude (magnitude and phase) of the light wave at the given point. Holography - invented by Dennis Gabor (1947), who received the Nobel Prize in Physics in 1971 - is different from conventional photography in that it enables us to record the phase of the light wave, despite the fact that we still use the same kind of intensity-sensitive photographic materials as in conventional photography. The "trick" by which holography achieves this is to encode phase information as intensity information, and thus to make it detectable for the photographic material. Encoding is done using interference: the intensity of interference fringes between two waves depends on the phase difference between the two waves. Thus, in order to encode phase information as intensity information, we need, in addition to the light wave scattered from the object, another wave too. To make these two light waves - the "object wave" and the "reference wave" - capable of interference we need a coherent light source (a laser). Also, the detector (the photographic material) has to have a high enough resolution to resolve and record the fine interference pattern created by the two waves. Once the interference pattern is recorded and the photographic plate is developed, the resulting hologram is illuminated with an appropriately chosen light beam, as described in detail below. This illuminating beam is diffracted on the fine interference pattern that was recorded on the hologram, and the diffracted wave carries the phase information as well as the amplitude information of the wave that was originally scattered from the object: we can thus observe a realistic three-dimensional image of the object. A hologram is not only a beautiful and spectacular three-dimensional image, but can also be used in many areas of optical metrology. == Theory == === Recording and reconstructing a transmission hologram === <wlatex> One possible holographic setup is shown in Fig. 1/a. This setup can be used to record a so-called off-axis transmission hologram. The source is a highly coherent laser diode that is capable of producing a high-contrast interference pattern. All other light sources must be eliminated during the recording. The laser diode does not have a beam-shaping lens in front of it, and thus emits a diverging wavefront with an ellipsoidal shape. The reference wave is the part of this diverging wave that directly hits the holographic plate, and the object wave is the part of the diverging wave that hits the object first and is then scattered by the object onto the holographic plate. The reference wave and the object wave hit the holographic plate simultaneously and create an interference pattern on the plate. {| style="float: center;" | [[Fájl:fizlab4-holo-1a_en.svg|bélyegkép|250px|Fig. 
1/a.: Recording (or exposure) of an off-axis transmission hologram]] | [[Kép:fizlab4-holo-1b_en.svg|bélyegkép|250px|Fig. 1/b.: Reconstructing the virtual image]] | [[Kép:fizlab4-holo-1c_en.svg|bélyegkép|250px|Fig. 1/c.: Reconstructing the real image]] |} The holographic plate is usually a glass plate with a thin, high-resolution optically sensitive layer. The spatial resolution of holographic plates is higher by 1-2 orders of magnitude than that of photographic films used in conventional cameras. Our aim is to make an interference pattern, i.e. a so-called "holographic grating", with high-contrast fringes. To achieve this, the intensity ratio of the object wave and the reference wave, their total intensity, and the exposure time must all be adjusted carefully. Since the exposure time can be as long as several minutes, we also have to make sure that the interference pattern does not move or vibrate relative to the holographic plate during the exposure. To avoid vibrations, the entire setup is placed on a special rigid, vibration-free optical table. Air-currents and strong background lights must also be eliminated. Note that, unlike in conventional photography or in human vision, in the setup of Fig. 1/a there is no imaging lens between the object and the photosensitive material. This also means that a given point on the object scatters light toward the entire holographic plate, i.e. there is no 1-to-1 correspondence (no "imaging") between object points and points on the photosensitive plate. This is in contrast with how conventional photography works. The setup of Fig. 1/a is called off-axis, because there is a large angle between the directions of propagation of the object wave and of the reference wave. The exposed holographic plate is then chemically developed. (Note that if the holographic plate uses photopolymers then no such chemical process is needed.) Under conventional illumination with a lamp or under sunlight, the exposed holographic plate with the recorded interference pattern on it does not seem to contain any information about the object in any recognizable form. In order to "decode" the information stored in the interference pattern, i.e. in order to reconstruct the image of the object from the hologram, we need to use the setup shown in Fig. 1/b. The object itself is no longer in the setup, and the hologram is illuminated with the reference beam alone. The reference beam is then diffracted on the holographic grating. (Depending on the process used the holographic grating consists either of series of dark and transparent lines ("amplitude hologram") or of a series of lines with alternating higher and lower indices of refraction ("phase hologram").) The diffracted wave is a diverging wavefront that is identical to the wavefront that was originally emitted by the object during recording. This is the so-called virtual image of the object. The virtual image appears at the location where the object was originally placed, and is of the same size and orientation as the object was during recording. In order to see the virtual image, the hologram must be viewed from the side opposite to where the reconstructing reference wave comes from. The virtual image contains the full 3D information about the object, so by moving your head sideways or up-and-down, you can see the appearance of the object from different viewpoints. This is in contrast with 3D cinema where only two distinct viewpoints (a stereo pair) is available from the scene. 
Another difference between holography and 3D cinema is that on a hologram you can choose different parts on the object located at different depths, and focus your eyes on those parts separately. Note, however, that both to record and to reconstruct a hologram, we need a monochromatic laser source (there is no such limitation in 3D cinema), and thus the holographic image is intrinsically monochromatic. This type of hologram is called a transmission hologram, because during reconstruction (Fig. 1/b) the laser source and our eye are at two opposite sides of the hologram, so light has to pass through the hologram in order to reach our eye. Besides the virtual image, there is another reconstructed wave (not shown in Fig. 1/b) that is converging and can thus be observed on a screen as the real image of the object. For an off-axis setup the reconstructing waves that create the virtual and the real image, respectively, propagate in two different directions in space. In order to view the real image in a convenient way it is best to use the setup shown in Fig. 1/c. Here a sharp laser beam illuminates a small region of the entire hologram, and the geometry of this sharp reconstructing beam is chosen such that it travels in the opposite direction from what the propagation direction of the reference beam was during recording. </wlatex> === Theoretical background === <wlatex> For the case of amplitude holograms, this is how we can demonstrate that during reconstruction it is indeed the original object wave that is diffracted on the holographic grating. Consider the amplitude of the light wave in the immediate vicinity of the holographic plate. Let the complex amplitude of the two interfering waves during recording be $\mathbf{r}(x,y)=R(x,y)e^{i\varphi_r(x,y)}$ for the reference wave and $\mathbf{t}(x,y)=T(x,y)e^{i\varphi_t(x,y)}$ for the object wave, where R and T are the amplitudes (as real numbers). The amplitude of the reference wave along the plane of the holographic plate, R(x,y), is only slowly changing, so R can be taken to be constant. The intensity distribution along the plate, i.e. the interference pattern that is recorded on the plate can be written as $$I_{\rm{exp}}=|\mathbf{r}+\mathbf{t}|^2 = R^2+T^2+\mathbf{rt^*+r^*t}\quad\rm{(1)}$$ where $*$ denotes the complex conjugate. For an ideal holographic plate with a linear response, the opacity of the final hologram is linearly proportional to this intensity distribution, so the transmittance $\tau$ of the plate can be written as $$\tau=1-\alpha I_{\rm{exp}}\quad\rm{(2)}$$ where $\alpha$ is the product of a material constant and the time of exposure. When the holographic plate is illuminated with the original reference wave during reconstruction, the complex amplitude just behind the plate is $$\mathbf{a} = \mathbf{r}\tau=\mathbf{r}(1-\alpha R^2-\alpha T^2)-\alpha\mathbf{r}^2\mathbf{t}^*-\alpha R^2\mathbf{t}\quad\rm{(3)}$$ The first term is the reference wave multiplied by a constant, the second term, proportional to $\mathbf{t}^*$, is a converging conjugate image (see $\mathbf{r}^2$), and the third term, proportional to $\mathbf{t}$, is a copy of the original object wave (note that all proportionality constants are real!). The third term gives a virtual image, because right behind the hologram this term creates a complex wave pattern that is identical to the wave that originally arrived at the same location from the object. Equation (3) is called the fundamental equation of holography.
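To make the role of the three terms more tangible, here is a minimal numerical sketch (an illustration added to this text, not part of the original lab procedure): it simulates equations (1)-(3) in one dimension for an off-axis plane reference wave, and the spatial-frequency spectrum of the reconstructed field shows the three terms sitting in three separate frequency bands. All parameter values are illustrative assumptions only.
<syntaxhighlight lang="python">
import numpy as np

# 1D toy model of Eqs. (1)-(3): off-axis plane reference wave + weak object wave
N = 2048
x = np.linspace(-1e-3, 1e-3, N)          # 2 mm wide section of the plate [m] (assumed)
f_r = 2.0e5                              # carrier frequency of the tilted reference wave [1/m] (assumed)
r = np.exp(1j * 2 * np.pi * f_r * x)     # reference wave, R = 1
t = 0.2 * np.exp(-(x / 2e-4)**2)         # slowly varying "object" wave, T << R (assumed)
alpha = 0.5

I_exp = np.abs(r + t)**2                 # Eq. (1): recorded interference pattern
tau = 1 - alpha * I_exp                  # Eq. (2): transmittance of the developed plate
a = r * tau                              # Eq. (3): field just behind the illuminated hologram

spectrum = np.abs(np.fft.fftshift(np.fft.fft(a)))
# 'spectrum' contains three separated peaks:
#   near f_r   -> first term (attenuated reference wave, zero order),
#   near 2*f_r -> second term, proportional to t* (conjugate/real image),
#   near 0     -> third term, proportional to t (reconstructed object wave, virtual image).
</syntaxhighlight>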
In case of off-axis holograms the three diffraction orders ($0$ and $\pm 1$) detailed above propagate in three different directions. (Note that if the response of the holographic plate is not linear then higher diffraction orders may also appear.) </wlatex> === Recording and reconstructing a reflection hologram === <wlatex> Display holograms that can be viewed in white light are different from the off-axis transmission type discussed above, in two respects: (1) they are recorded in an in-line setup, i.e. both the object wave and the reference wave are incident on the holographic plate almost perpendicularly; and (2) they are reflection holograms: during recording the two waves are incident on the plate from two opposite directions, and during reconstruction illumination comes from the same side of the plate as the viewer's eye is. Fig. 2/a shows the recording setup for a reflection hologram. Figs. 2/b and 2/c show the reconstruction setup for the virtual and the real images, respectively. {| style="float: center;" | [[Fájl:fizlab4-holo-2a_en.svg|bélyegkép|250px|Fig. 2/a.: Recording a reflection hologram]] | [[Kép:fizlab4-holo-2b_en.svg|bélyegkép|250px|Fig. 2/b.: Reconstructing the virtual image]] | [[Kép:fizlab4-holo-2c_en.svg|bélyegkép|250px|Fig. 2/c.: Reconstructing the real image]] |} The reason such holograms can be viewed in white light illumination is that they are recorded on a holographic plate on which the light sensitive layer has a thickness of at least $8-10\,\rm{\mu m}$, much larger than the wavelength of light. Thick diffraction gratings exhibit the so-called Bragg effect: they have a high diffraction efficiency only at or near the wavelength that was used during recording. Thus if they are illuminated with white light, they selectively diffract only in the color that was used during recording and absorb light at all other wavelengths. Bragg-gratings are sensitive to direction too: the reference wave must have the same direction during reconstruction as it had during recording. Sensitivity to direction also means that the same thick holographic plate can be used to record several distinct holograms, each with a reference wave coming from a different direction. Each hologram can then be reconstructed with its own reference wave. (The thicker the material, the more selective it is in direction. A "volume hologram" can store a large number of independent images, e.g. a lot of independent sheets of binary data. This is one of the basic principles behind holographic storage devices.) </wlatex> == Holographic interferometry == <wlatex> Since the complex amplitude of the reconstructed object wave is determined by the original object itself, e.g. through its shape or surface quality, the hologram stores a certain amount of information about those too. If two states of the same object are recorded on the same holographic plate with the same reference wave, the resulting plate is called a "double-exposure hologram": $$I_{12}=|\mathbf r+\mathbf t_1|^2+|\mathbf r+\mathbf t_2|^2=R^2+T^2+\mathbf r\mathbf t_1^*+\mathbf r^*\mathbf t_1+R^2+T^2+\mathbf r\mathbf t_2^*+\mathbf r^*\mathbf t_2=2R^2+2T^2+(\mathbf r\mathbf t_1^*+\mathbf r\mathbf t_2^*)+(\mathbf r^*\mathbf t_1+\mathbf r^*\mathbf t_2)$$ (Here we assumed that the object wave only changed in phase between the two exposures, but its real amplitude T remained essentially the same. The lower indices denote the two states.) 
During reconstruction we see the two states "simultaneously": $$\mathbf a_{12}=\mathbf r\tau=\mathbf r(1-\alpha I_{12})=\mathbf r(1-2\alpha R^2-2\alpha T^2)-\alpha \mathbf r^2(\mathbf t_1^*+\mathbf t_2^*)+\alpha R^2(\mathbf t_1+\mathbf t_2)$$ i.e. the wave field $\mathbf a_{12}$ contains both a term proportional to $\mathbf t_1$ and a term proportional to $\mathbf t_2$, in both the first and the minus first diffraction orders. If we view the virtual image, we only see the contribution of the last terms $\alpha R^2(\mathbf t_1+\mathbf t_2)$, since all the other diffraction orders propagate in different directions than this. The observed intensity in this diffraction order, apart from the proportionality factor $\alpha R^2$, is: $$I_{12,\text{virt}}=|\mathbf a_{12,\text{virt}}|^2=|\mathbf t_1+\mathbf t_2|^2=2T^2+(\mathbf t_1^* \mathbf t_2+\mathbf t_1 \mathbf t_2^*)=2T^2+(\mathbf t_1^* \mathbf t_2+c.c.)$$ where the interference terms in the brackets are complex conjugates of one another. Thus the two object waves that belong to the two states interfere with each other. Since $\mathbf t_1=Te^{i\varphi_1(x,y)}$ and $\mathbf t_2=Te^{i\varphi_2(x,y)}$, $$\mathbf t_1^*\mathbf t_2=T^2e^{i[\varphi_2(x,y)-\varphi_1(x,y)]},$$ and the term in the brackets above is its real part, i.e. $$2T^2\cos[\varphi_2(x,y)-\varphi_1(x,y)]$$ This shows that on the double-exposure holographic image of the object we can see interference fringes (so-called contour lines) whose shape depends on the <u>''phase change''</u> between the two states, and that describes the change (or the shape) of the object. [[Fájl:fizlab4-holo-3_en.svg|bélyegkép|254px|Fig. 3.: The sensitivity vector]] For example, if the object was a deformable metallic plate that was given a deformation of a few microns between the two exposures, a certain recording geometry will lead to contour lines of the displacement component perpendicular to the plate on the reconstructed image. Using Fig. 3 to write the phases $\varphi_1$ and $\varphi_2$ that determine the interference fringes, you can show that their difference can be expressed as $$\Delta\varphi=\varphi_2-\varphi_1=\vec s\cdot(\vec k'-\vec k)=\vec s\cdot\vec k_\text{sens}\quad\rm{(9)}$$ where $\vec k$ is the wave vector of the plane wave that illuminates the object, $\vec k'$ is the wave vector of the beam that travels from the object toward the observer ($|\vec k|=|\vec k'|=\frac{2\pi}{\lambda}$), $\vec s$ is the displacement vector, and $\vec k_\text{sens}$ is the so-called "sensitivity vector". The red arrows in the figure represent arbitrary rays from the expanded beam. Since in a general case the displacement vector is different on different parts of the surface, the phase difference will be space-variant too. We can see from the scalar product that it is only the component of $\vec s$ that lies along the direction of the sensitivity vector that "can be measured". Both the direction and the length of the sensitivity vector can be changed by controlling the direction of the illumination or the direction of the observation (viewing). This also means that e.g. if we move our viewpoint in front of a double-exposure hologram, the phase difference, and thus the interference fringes, will change too. 
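As a quick worked example of equation (9) (an illustrative special case, not part of the original derivation): if both the illumination and the viewing direction are assumed perpendicular to the surface, then $\vec k'=-\vec k$, the sensitivity vector has magnitude $|\vec k_\text{sens}|=\frac{4\pi}{\lambda}$, and a displacement component $s_\perp$ perpendicular to the surface gives $$\Delta\varphi=\frac{4\pi}{\lambda}s_\perp.$$ One full fringe ($\Delta\varphi=2\pi$) then corresponds to $s_\perp=\frac{\lambda}{2}$, i.e. about $0.32\,\mu\text{m}$ for $\lambda=635\,\text{nm}$; this quantity is what the measurement tasks below call the contour distance.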
We can observe the same kind of fringe pattern if we first make a single exposure hologram of the object, next we place the developed holographic plate back to its original position within a precision of a few tenths of a micron (!), and finally we deform the object while still illuminating it with the same laser beam that we used during recording. In this case the holographically recorded image of the original state interferes with the "live" image of the deformed state. In this kind of interferometry, called the "real-time holographic interferometry", we can change the deformation and observe the corresponding change in the fringe pattern in real time. </wlatex> == Holographic optical elements == <wlatex> If both the object wave and the reference wave are plane waves and they subtend a certain angle, the interference fringe pattern recorded on the hologram will be a simple grating that consists of straight equidistant lines. This is the simplest example of "holographic optical elements" (HOEs). Holography is a simple technique to create high efficiency dispersive elements for spectroscopic applications. The grating constant is determined by the wavelength and angles of incidence of the two plane waves, and can thus be controlled with high precision. Diffraction gratings for more complex tasks (e.g. gratings with space-variant spacing, or focusing gratings) are also easily made using holography: all we have to do is to replace one of the plane waves with a beam having an appropriately designed wavefront. Since the reconstructed image of a hologram shows the object "as if it were really there", by choosing the object to be an optical device such as a lens or a mirror, we can expect the hologram to work, with some limitations, like the optical device whose image it recorded (i.e. the hologram will focus or reflect light in the same way as the original object did). Such simple holographic lenses and mirrors are further examples of HOEs. As an example, let's see how, by recording the interference pattern of two simple spherical waves, we can create a "holographic lens". Let's suppose that both spherical waves originate from points that lie on the optical axis which is perpendicular to the plane of the hologram. (This is a so-called on-axes arrangement.) The distance between the hologram and one spherical wave source (let's call it the reference wave) is $f_1$, and the distance of the hologram from the other spherical wave source (let's call it the object wave) is $f_2$. Using the well-known parabolic/paraxial approximation of spherical waves, and assuming both spherical waves to have unit amplitudes, the complex amplitudes $\mathbf r$ and. $\mathbf t$ of the reference wave and the object wave, respectively, in a point (x,y) on the holographic plate can be written as $$\mathbf r=e^{i\frac{2\pi}{\lambda}\left(\frac{x^2+y^2}{2f_1}\right)},\,\mathbf t=e^{i\frac{2\pi}{\lambda}\left(\frac{x^2+y^2}{2f_2}\right)}\quad\rm{(10)}$$ The interference pattern recorded on the hologram becomes: $$I=2+e^{i\frac{2\pi}{\lambda}\left(\frac{x^2+y^2}2\left( \frac 1{f_2}-\frac 1{f_1}\right)\right)}+e^{-i\frac{2\pi}{\lambda}\left(\frac{x^2+y^2}2\left( \frac 1{f_2}-\frac 1{f_1}\right)\right)}\quad\rm{(11)}$$ and the transmittance $\tau$ of the hologram can be written again using equation (2), i.e. it will be a linear function of $I$. Now, instead of using the reference wave $\mathbf r$, let's reconstruct the hologram with a "perpendicularly incident plane wave" (i.e. 
with a wave whose complex amplitude in the plane of the hologram is a real constant $C$). This will replace the term $\mathbf r^*\tau$ with the term $C^*\tau$ in equation (3), i.e. the complex amplitude of the reconstructed wave just behind the illuminated hologram will be given by the transmittance function $\tau$ itself (ignoring a constant factor). This, together with equations (2) and (11) show that the three reconstructed diffraction orders will be: * a perpendicular plane wave with constant complex amplitude (zero-order), * a wave with a phase $\frac{2\pi}{\lambda}\left(\frac{x^2+y^2}2\left( \frac 1{f_1}-\frac 1{f_2}\right)\right)$ (+1st order), * a wave with a phase $-\frac{2\pi}{\lambda}\left(\frac{x^2+y^2}2\left( \frac 1{f_1}-\frac 1{f_2}\right)\right)$ (-1st order). We can see from the mathematical form of the phases of the $\pm1$-orders (reminder: formulas (10)) that these two orders are actually (paraxial) spherical waves that are focused at a distance of $f=\left(\frac 1{f_1}-\frac 1{f_2}\right)^{-1}$ and $f'=\left(\frac 1{f_2}-\frac 1{f_1}\right)^{-1}$ from the plane of the hologram, respectively. One of $f$ and $f'$ is of course positive and the other is negative, so one diffraction order is a converging spherical wave and the other a diverging spherical wave, both with a focal distance of $\left|\frac 1{f_1}-\frac 1{f_2}\right|^{-1}$. In summary: by holographically recording the interference of two on-axis spherical waves, we created a HOE that can act both as a "concave" and as a "convex" lens, depending on which diffraction order we use in a given application. The most important application of HOEs is when we want to replace a complicated optical setup that performs a complex task (e.g. multifocal lenses used for demultiplexing in optical telecommunications) with a single compact hologram. In such cases holography can lead to a significant reduction in size and cost. </wlatex> == Digital holography == <wlatex> Almost immediately after conventional laser holography was developed in the 1960's, scientists became fascinated by the possibility to treat the interference pattern between the reference wave and the object wave as an electronic or digital signal. This either means that we take the interference field created by two actually existing wavefronts and store it digitally, or that we calculate the holographic grating pattern digitally and then reconstruct it optically. The major obstacles that had hindered the development of digital holography for a long time were the following: * In order to record the fine structure of the object wave and the reference wave, one needs an image input device with a high spatial resolution (at least 100 lines/mm), a high signal-to-noise ratio, and high stability. * To treat the huge amount of data stored on a hologram requires large computational power. * In order to reconstruct the wavefronts optically, one needs a high resolution display. The subfield of digital holography that deals with digitally computed interference fringes which are then reconstructed optically, is nowadays called "computer holography". Its other subfield - the one that involves the digital storage of the interference field between physically existing wavefronts - underwent significant progress in the past few years, thanks in part to the spectacular advances in computational power, and in part to the appearance of high resolution CCD and CMOS cameras. 
At the same time, spatial light modulators (SLM's) enable us to display a digitally stored holographic fringe pattern in real time. Due to all these developments, digital holography has reached a level where we can begin to use it in optical metrology. Note that there is no fundamental difference between conventional optical holography and digital holography: both share the basic principle of coding phase information as intensity information. [[Fájl:fizlab4-holo-4_en.svg|bélyegkép|300px|Fig. 4.: Recording a digital hologram]] To record a digital hologram, one basically needs to construct the same setup, shown in Fig. 4, that was used in conventional holography. The setup is a Mach-Zehnder interferometer in which the reference wave is formed by passing part of the laser beam through beamsplitter BS1, and beam expander and collimator BE1. The part of the laser beam that is reflected in BS1 passes through beam expander and collimator BE2, and illuminates the object. The light that is scattered from the object (object wave) is brought together with the reference wave at beamsplitter BS2, and the two waves reach the CCD camera together. The most important difference between conventional and digital holography is the difference in resolution between digital cameras and holographic plates. While the grain size (the "pixel size") of a holographic plate is comparable to the wavelength of visible light, the pixel size of digital cameras is typically an order of magnitude larger, i.e. 4-10 µm. The sampling theorem is only satisfied if the grating constant of the holographic grating is larger than the size of two camera pixels. This means that both the viewing angle of the object as viewed from a point on the camera and the angle between the object wave and reference wave propagation directions must be smaller than a critical limit. In conventional holography, as Fig. 1/a shows, the object wave and the reference wave can make a large angle, but digital holography - due to its much poorer spatial resolution - only works in a quasi in-line geometry. A digital camera differs from a holographic plate also in its sensitivity and its dynamic range (signal levels, number of grey levels), so the circumstances of exposure will also be different in digital holography from what we saw in conventional holography. As is well-known, the minimum spacing of an interference fringe pattern created by two interfering plane waves is $d=\frac{\lambda}{2\sin\frac{\Theta}{2}}$, where $\Theta$ is the angle between the two propagation directions. Using this equation and the sampling theorem, we can specify the maximum angle that the object wave and the reference wave can make: $\Theta_{max}\approx\frac{\lambda}{2\Delta x}$, where $\Delta x$ is the pixel size of the camera. For visible light and today's digital cameras this angle is typically around $3^o$, hence the in-line geometry shown in Fig. 4. Figure 5 illustrates what digital holograms look like. Figs 5/a-c show computer simulated holograms, and Fig. 5/d shows the digital hologram of a real object, recorded in the setup of Fig. 4. 
{| style="float: center;" | [[Fájl:fizlab4-holo-5a.gif|bélyegkép|250px| Digital amplitude hologram of a point source]] | [[Kép:fizlab4-holo-5b.gif|bélyegkép|250px| Digital amplitude hologram of two point sources]] | [[Kép:fizlab4-holo-5c.gif|bélyegkép|250px| Digital amplitude hologram of one thousand point sources]] | [[Kép:fizlab4-holo-5d.gif|bélyegkép|250px| Digital amplitude hologram of a real object]] |} For the numerical reconstruction of digital holograms ("digital reconstruction") we simulate the optical reconstruction of analog amplitude holograms on the computer. If we illuminate a holographic plate (a transparency that introduces amplitude modulation) with a perpendicularly incident plane reference wave, in "digital holography language" this means that the digital hologram can directly be regarded as the amplitude of the wavefront, while the phase of the wavefront is constant. If the reference wave was a spherical wave, the digital hologram has to have the corresponding (space-variant) spherical wave phase, so the wave amplitude at a given pixel will be a complex number. Thus we have determined the wavefront immediately behind the virtual holographic plate. The next step is to simulate the "propagation" of the wave. Since the physically existing object was at a finite distance from the CCD camera, the propagation has to be calculated for this finite distance too. There was no lens in our optical setup, so we have to simulate free-space propagation, i.e. we have to calculate a diffraction integral numerically. From the relatively low resolution of the CCD camera and the small propagation angles of the waves we can immediately see that the parabolic/paraxial Fresnel approximation can be applied. This is a great advantage, because the calculation can be reduced to a Fourier transform. In our case the Fresnel approximation of diffraction can be written as $$A(u,v)=\frac{i}{\lambda D}e^{\frac{-i\pi}{\lambda D}(u^2+v^2)}\int_{\infty}^{\infty}\int_{\infty}^{\infty}R(x,y)h(x,y) e^{\frac{-i\pi}{\lambda D}(x^2+y^2)}e^{i2\pi(xu+yv)}\textup{d}x\textup{d}y,$$ where $A(u,v)$ is the complex amplitude distribution of the result (the reconstructed image) - note that this implies a phase information too! -, $h(x,y)$ is the digital hologram, $R(x,y)$ is the complex amplitude of the reference wave, $D$ is the distance of the reconstruction/object/image from the hologram (from the CCD camera), and $\lambda$ is the wavelength of light. Using the Fourier transform and switching to discrete numerical coordinates, the expression above can be rewritten as $$A(u',v')=\frac{i}{\lambda D}e^{\frac{-i\pi}{\lambda D}\left((u'\Delta x')^2+(v'\Delta y')^2\right)}\mathcal F^{-1} \left[R(x,y)h(x,y) e^{\frac{-i\pi}{\lambda D}\left((k\Delta x)^2+(l\Delta y)^2\right)}\right],$$ where Δx, Δy is the pixel size of the CCD, and k,l and u',v' are the pixel coordinates in the hologram plane and in the image plane, respectively. The appearance of the Fourier-transform is a great advantage, because the calculation of the entire integral can be significantly speeded up by using the fast-Fourier-transform-algorithm (FFT). (Note that in many cases the factors in front of the integral can be ignored.) We can see that, except for the reconstruction distance D, all the parameters of the numerical reconstruction are given. Distance D, however, can - and, in case of an object that has depth, should - be changed relatively freely, around the value of the actual distance between the object and the camera. 
Hence we can see a sharp image of the object in the intensity distribution formed from the A(u,v). This is similar to adjusting the focus in conventional photography in order to find a distance where all parts of the object look tolerably sharp. We note that the Fourier transform uniquely fixes the pixel size Δx′, Δy′ in the (u,v) image plane according to the formula $\Delta x'=\frac{\lambda D}{\Delta x N_x}$ where $N_x$ is the (linear) matrix size in the $x$ direction used in the fast-Fourier-transform-algorithm. This means that the pixel size on the image plane changes proportionally to the reconstruction distance D. This effect must be considered if one wants to interpret the sizes on the image correctly. The figure below shows the computer simulated reconstruction of a digital hologram that was recorded in an actual measurement setup. The object was a brass plate (membrane) with a size of 40 mm x 40 mm and a thickness of 0.2mm that was fixed around its perimeter. To improve its reflexivity the object was painted white. The speckled appearance of the object in the figure is not caused by the painting, but is an unavoidable consequence of a laser illuminating a matte surface. This is a source of image noise in any such measurement. The figure shows not only the sharp image of the object, but also a very bright spot at the center and a blurred image on the other side of it. These three images are none other than the three diffraction orders that we see in conventional holography too. The central bright spot is the zero-order, the minus first order is the projected real image (that is what we see as the sharp image of the object), and the plus first order corresponds to the virtual image. If the reconstruction is calculated in the opposite direction at a distance -D, what was the sharp image becomes blurred, and vice versa, i.e. the plus and minus first orders are conjugate images, just like in conventional holography. [[Fájl:fizlab4-holo-6.jpg|bélyegkép|250px|Reconstructed intensity distribution of a digital hologram in the virtual object/image plane.]] A digital hologram stores the entire information of the complex wave, and the different diffraction orders are "separated in space" (i.e. they appear at different locations on the reconstructed image), thus the area where the sharp image of the object is seen contains the entire complex amplitude information about the object wave. In principle, it is thus possible to realize the digital version of holographic interferometry. If we record a digital hologram of the original object, deform the object, and finally record another digital hologram of its deformed state, then all we need to perform holographic interferometry is digital data processing. In double-exposure analog holography it would be the sum, i.e. the interference, of the two waves (each corresponding to a different state of the object) that would generate the contour lines of the displacement field, so that is what we have to simulate now. We numerically calculate the reconstruction of both digital holograms in the appropriate distance and add them. Since the wave fields of the two object states are represented by complex matrices in the calculation, addition is done as a complex operation, point-by-point. The resultant complex amplitude distribution is then converted to an intensity distribution which will display the interference fringes. Alternatively, we can simply consider the phase of the resultant complex amplitude distribution, since we have direct access to it. 
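The reconstruction-and-addition procedure described above can be condensed into a short script. The following is only an illustrative sketch (it is not the code of the software used in the lab): it assumes a perpendicular plane reference wave ($R\equiv 1$), drops the constant factor in front of the Fourier transform, and uses placeholder parameter values.
<syntaxhighlight lang="python">
import numpy as np

def fresnel_reconstruct(hologram, wavelength, D, dx, dy):
    """Discrete Fresnel reconstruction of a digital hologram.

    hologram   : 2D array with the recorded intensity pattern
    wavelength : laser wavelength [m]
    D          : reconstruction distance [m]
    dx, dy     : camera pixel size [m]
    Assumes a perpendicular plane reference wave (R = 1) and omits the
    constant factor in front of the transform (it does not change the
    reconstructed intensity).
    """
    Ny, Nx = hologram.shape
    k = np.arange(Nx) - Nx / 2
    l = np.arange(Ny) - Ny / 2
    K, L = np.meshgrid(k, l)
    # quadratic ("chirp") phase factor of the Fresnel approximation in the hologram plane
    chirp = np.exp(-1j * np.pi / (wavelength * D) * ((K * dx)**2 + (L * dy)**2))
    # inverse FFT of the chirped hologram gives the complex field A(u', v') in the image plane
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(hologram * chirp)))

# Double-exposure digital holographic interferometry (placeholder values):
# h1, h2 would be the two recorded holograms of the undeformed and deformed states.
# A1 = fresnel_reconstruct(h1, 632.8e-9, 1.0, 6.7e-6, 6.7e-6)
# A2 = fresnel_reconstruct(h2, 632.8e-9, 1.0, 6.7e-6, 6.7e-6)
# fringes = np.abs(A1 + A2)**2   # intensity image showing the contour lines
</syntaxhighlight>
Varying the value of D around the true object-camera distance and looking for the sharpest intensity image corresponds to the focusing procedure described above.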
If, instead of addition, the two waves are subtracted, the bright zero-order spot at the center will disappear. === Speckle pattern interferometry, or TV holography === If a matte diffuser is placed in the reference arm at the same distance from the camera as the object is, the recorded digital hologram is practically impossible to reconstruct, because we don't actually know the phase distribution of the diffuse reference beam in the plane of the camera, i.e. we don't know the complex function R(x,y). If, however, we place an objective in front of the camera and adjust it to create a sharp image of the object, we don't need the reconstruction step any more. What we have recorded in this case is the interference between the object surface and the diffusor as a reference surface. Since each image in itself would have speckles, their interference has speckles too, hence the name "speckle pattern interferometry". Such an image can be observed on a screen in real time (hence the name "TV holography"). A single speckle pattern interferogram in itself does not show anything spectacular. However, if we record two such speckle patterns corresponding to two states of the same object - similarly to double-exposure holography -, these two images can be used to retrieve the information about the change in phase. To do this, all we have to do is to take the absolute value of the difference between the two speckle pattern interferograms. </wlatex> == Measurement tasks == [[Fájl:IMG_4762i.jpg|bélyegkép|400px|Elements used in the measurement]] === Making a reflection (or display) hologram === <wlatex> In the first part of the lab, we record a white-light hologram of a strongly reflecting, shiny object on a holographic plate with a size of appr. $5\,\mbox{cm x }7,5\,\rm{ cm}$. The light source is a red laser diode with a nominal power of $5\,\rm{ mW}$ and a wavelength of $\lambda=635\,\text{nm}$. The laser diode is connected to a $3\,\rm V$ battery and takes a current of appr. $55\,\rm{ mA}$. It is a "bare" laser diode (with no collimating lens placed in front of it), so it emits a diverging beam. The holographic plates are LITIHOLO RRT20 plates: they are glass plates coated with a photosensitive layer that contains photopolymer emulsion and is sensitive to the wavelength range ~500-660 nm. In order to expose an RRT20 plate properly at $635\,\rm{ nm}$, we need an (average) energy density of at least $\approx 20\,\frac{\text{mJ}}{\text{cm}^2}$. There is practically no upper limit to this energy density. The emulsion has an intensity threshold below which it gives no response to light at all, so we can use a weak scattered background illumination throughout the measurement. The photosensitive layer has a thickness of $50\,\mu \text m$, much larger than the illuminating wavelength, i.e. it can be used to record volume holograms (see the explanation on Bragg diffraction above). During exposure the intensity variations of the illumination are encoded in the instant film as refractive index modulations in real time. One of the main advantages of this type of holographic plates is that, unlike conventional holographic emulsions, they don't require any chemical process (developing, bleaching, fixing) after exposure. Other photopolymers may require exposure to UV or heat in order to fix the holographic grating in the material, but with the RRT20 plates even such processes are unnecessary: the holographic grating is fixed in its final form automatically during exposure. 
The holographic plates are kept in a lightproof box which should be opened only immediately before recording and only in a darkened room (with dim background light). Once the holographic plate that will be used for the recording is taken out of the box, the box must be closed again immediately. Build the setup of Fig. 2/a inside the wooden box on the optical table. Take a digital photo of the setup you have built. Some of the elements are on magnetic bases. These can be loosened or tightened by turning the knob on them. Use the test plate (and a piece of paper with the same size) to trace the size of the beam and to find the appropriate location for the holographic plate for the recording. Place the object on a rectangular block of the appropriate height, so that the expanded beam illuminates the entire object. Put the plate in the special plate holder and fix it in its place with the screws. Make sure that the beam illuminates most of the area of the holographic plate. Put the object as close to the plate as possible. Try to identify the side of the plate which has the light sensitive film on it, and place the plate so that that side of the plate faces the object and the other side faces the laser diode. Before doing any recording show the setup to the lab supervisor. To record the hologram, first turn off the neon light in the room, pull down the blinds on the windows, turn off the laser diode ("output off" on the power supply), then take a holographic plate out of the box and close the box again. Put the plate into the plate holder, wait appr. 30 seconds, then turn on the laser diode again. The minimum exposure time is appr. 5 minutes. You can visually follow the process of the exposure by observing how the brightness of the holographic plate increases in time, as the interference pattern is developing inside the photosensitive layer. If you are unsure about the proper exposure time, adding another 2 minutes won't hurt. Make sure to eliminate stray lights, movements and vibrations during recording. When the recording is over, remove the object from its place, and observe the reconstructed virtual image on the hologram, illuminated by the red laser diode. Next, take the hologram out of the plate holder and illuminate it with the high power color and white light LED's you find in the lab. Observe the reconstructed virtual image again. What is the color of the virtual image of the object when the hologram is reconstructed with the white light LED? Does this color change if the angle of illumination or the observation angle change? How does the virtual image look if you flip the hologram? Make a note of your observations and take digital photographs of the reconstructed images. Note: You can bring your own objects for the holographic recording. Among the best objects for this kind of holography are metallic objects (with colors like silver or gold) and white plastic objects. </wlatex> === Investigating a displacement field using real-time holographic interferometry in a reflection hologram setup === <wlatex> [[Fájl:fizlab4-holo-membran.jpg|bélyegkép|250px|The deformable membrane and the lever arm]] The setup is essentially the same as in the previous measurement, with two differences: the object is now replaced by the deformable membrane, and the illumination is perpendicular to the membrane surface. We will exploit this perpendicular geometry when applying formula (9). The center of the membrane can be pushed with a micrometer rod. 
The calibration markings on the micrometer rod correspond to $10$ microns, so one rotation corresponds to a displacement of $0,5\,\text{mm}$. This rod is rotated through a lever arm fixed to it. The other end of the arm can be rotated with another similar micrometer rod. Measure the arm length of the "outer" rod, i.e. the distance between its touching point and the axis of the "inner" rod, and find the displacement of the center of the membrane that corresponds to one full rotation of the outer rod. To make a real-time interferogram you first have to record a reflection hologram of the membrane, as described for the previous measurement above. Next, carefully rotate the outer micrometer rod through several full rotations (don't touch anything else!) and observe the membrane surface through the hologram. As the membrane is more and more deformed, a fringe pattern with a higher and higher fringe density will appear on the hologram. This fringe pattern is the real-time interferogram and it is created by the interference between the original state and the deformed state of the membrane. In two or three deformation states make a note of the number of full rotations of the outer rod, and count the corresponding number of fringes that appear on the surface of the membrane with a precision of $\frac 14$ fringe. Multiply this by the contour distance of the measurement (see above). Compare the nominal and measured values of the maximum displacement at the center of the membrane. (You can read off the former directly from the micrometer rod, and you can determine the latter from the interferogram). What does the shape of the interference fringes tell you about the displacement field? Once you finish the measurements gently touch the object or the holographic plate. What do you see? </wlatex> === Making a holographic optical element === <wlatex> Repeat the first measurement, using the convex mirror as the object. Observe how the holographic mirror works and make notes on what you observe. How does the mirror image appear in the HOE? How does the HOE work if you flip it and use its other side? What happens if both the illumination and the observation have slanted angles? Is it possible to observe a real, projected image with the HOE? For illumination use the red and white LED's found in the lab or the flashlight of your smartphone. If possible, record your observations on digital photographs. </wlatex> === Making a transmission hologram === <wlatex> Build the transmission hologram setup of Fig. 1/a and make a digital photograph of it. Make sure that the object is properly illuminated and that a sufficiently large portion of it is visible through the "window" that the holographic plate will occupy during recording. Make sure that the angle between the reference beam and the object beam is appr. 30-45 degrees and that their path difference does not exceed 10 cm. Put the holographic plate into the plate holder so that the photosensitive layer faces the two beams. Record the hologram in the same way as described for the first measurement above. Observe the final hologram in laser illumination, using the setups of Fig. 1/b and Fig. 1/c. How can you observe the three-dimensional nature of the reconstructed image in the two reconstruction setups? Could the hologram be reconstructed using a laser with a different wavelength? If possible, make digital photographs of the reconstruction. 
</wlatex> === Investigating a displacement field using digital holography === <wlatex> In this part of the lab we measure the maximum displacement perpendicular to the plane of a membrane at the center of the membrane. We use the setup shown in Fig. 4, but our actual collimated beams are not perfect plane waves. The light source is a He-Ne gas laser with a power of 35 mW and a wavelength of 632.8 nm. The images are recorded on a Baumer Optronics MX13 monochromatic CCD camera with a resolution of 1280x1024 pixels and a pixel size of 6.7 μm x 6.7 μm. The CCD camera has its own user software. The software displays the live image of the camera (blue film button on the right), and the button under the telescope icon can be used to manually control the parameters (shutter time, amplification) of the exposure. The optimum value for the amplification is appr. 100-120. The recorded image has a color depth of 8 bits, and its histogram (the number distribution of pixels as a function of grey levels) can be observed using a separate software. When using this software, first click on the "Hisztogram" button, use the mouse to drag the sampling window over the desired part of the image, and double-click to record the histogram. Use the "Timer" button to turn the live tracking of the histogram on and off. Based on the histogram you can decide whether the image is underexposed, overexposed or properly exposed. BS1 is a rotatable beamsplitter with which you can control the intensity ratio between the object arm and the reference arm. In the reference arm there is an additional rotatable beamsplitter which can be used to further attenuate the intensity of the reference wave. The digital holograms are reconstructed with a freeware called HoloVision 2.2 (https://sourceforge.net/projects/holovision/). Before doing the actual measurement make sure to check the setup and its parameters. Measure the distance of the camera from the object. In the setup the observation direction is perpendicular to the surface of the membrane, but the illumination is not. Determine the illumination angle from distance measurements, and, using equation (9), find the perpendicular displacement of the membrane for which the phase difference is 2π. (Use a rectangular coordinate system that fits the geometry of the membrane.) This will be the so-called contour distance of the measurement. Check the brightness of the CCD images for the reference beam alone (without the object beam), for the object beam alone (without the reference beam), and for the interference of the two beams. Adjust the exposure parameters and the rotatable beamsplitters if necessary. The object beam alone and the reference beam alone should not be too dark, but their interference pattern should not be too bright either. Observe the live image on the camera when beamsplitter BS2 is gently touched. How does the histogram of the image look when all the settings are optimal? Once the exposure parameters are set, record a holographic image, and reconstruct it using HoloVision (Image/Reconstruct command). Include the exposure parameters and the histogram of the digital hologram in your lab report. Check the sharpness of the reconstructed intensity image by looking at the shadow of the frame on the membrane. Observe how the sharpness of the reconstructed image changes if you modify the reconstruction distance by 5-10 centimeters in both directions. What reconstruction distance gives the sharpest image? 
Does this distance differ from the actual measured distance between the object and the CCD camera? If yes, why? What is the pixel size of the image at this distance? How well does the object size on the reconstructed image agree with the actual object size? Record a digital hologram of the membrane, and then introduce a deformation of less than 5 μm to the membrane. (Use the outer rod.) Record another digital hologram. Add the two holograms (Image/Calculations command), and reconstruct the sum. What do you see on the reconstructed intensity image? Include this image in your lab report. Next, reconstruct the difference between the two holograms. How is this reconstructed intensity image different from the previous one? What qualitative information does the fringe system tell you about the displacement field? Count the number of fringes on the surface of the membrane, from its perimeter to its center, with a precision of $\frac 14$ fringe. Multiply this by the contour distance of the measurement, and find the maximum displacement (deformation). Compare this with the nominal value read from the micrometer rod. Next, make a speckle pattern interferogram. Attach the photo objective to the camera and place the diffuser into the reference arm at the same distance from the camera as the object is from the camera. By looking at the shadow of the frame on the object, adjust the sharpness of the image at an aperture setting of f/2.8 (small aperture). Set the aperture to f/16 (large aperture). If the image is sharp enough, the laser speckles on the object won't move, but will only change in brightness, as the object undergoes deformation. Check this. Using the rotatable beam splitters adjust the beam intensities so that the image of the object and the image of the diffuser appear to have the same brightness. Record a speckle pattern in the original state of the object and then another one in the deformed state. Use HoloVision to create the difference of these two speckle patterns, and display its "modulus" (i.e. its absolute value). Interpret what you see on the screen. Try adding the two speckle patterns instead of subtracting them. Why don't you get the same kind of result as in digital holography? </wlatex> ==Additional information== For the lab report: you don't need to write a theoretical introduction. Summarize the experiences you had during the lab. Attach photographs of the setups that you actually used. If possible, attach photographs of the reconstructions too. Address all questions that were asked in the lab manual above. ''Safety rules: Do not look directly into the laser light, especially into the light of the He-Ne laser used in digital holography. Avoid looking at sharp laser dots on surfaces for long periods of time. Take off shiny objects (jewels, wristwatches). Do not bend down so that your eye level is at the height of the laser beam.''' <!--*[[Media:Holografia_2015.pdf|Holográfia pdf]]--> Links: [http://www.eskimo.com/~billb/amateur/holo1.html "Scratch holograms"] [http://bme.videotorium.hu/hu/recordings/details/11809,Hologramok_demo Video about some holograms made at our department]
Source: "https://fizipedia.bme.hu/index.php/Holography"
Application of dry powders of six plant species, as soil amendments, for controlling Fusarium solani and Meloidogyne incognita on pea in pots
Hassan Abd-El-Khair1 &
Wafaa M. A. El-Nagdi1
Application of organic amendments could improve soil properties as well as control soil-borne pathogens. Dry powders of six plant species, i.e. caraway seeds, fennel seeds, garlic cloves, onion bulbs, pomegranate peel and spearmint leaves, were applied separately as soil amendments for controlling Fusarium solani and Meloidogyne incognita on pea plants in pots. Untreated check pots served as controls for Fusarium solani and Meloidogyne incognita.
The dry powder of pomegranate peel (at a rate of 10 g/1 kg soil) highly reduced the Fusarium disease assessments (pre-emergence and post-emergence damping-off and root-rot disease incidence), followed by spearmint leaves, caraway seeds, fennel seeds, garlic cloves and onion bulbs, respectively. The tested dry plant powders showed nematicidal activity against the M. incognita criteria, i.e. second-stage juveniles (J2) in soil and roots as well as galls and egg-masses in pea roots. Spearmint leaves, onion bulbs and fennel seeds highly reduced the J2 in soil and roots as well as the galls and egg-masses. The pea plant growth parameters (shoot length, fresh and dry shoot weights and fresh root weight), yield parameters (fresh and dry pod weights and pod parameters), as well as the number of Rhizobium nodules, increased in pea plants as infestation with F. solani and M. incognita decreased.
The soil amendments with dry powders of the six plant species used in this study reduced F. solani and M. incognita and improved pea plant growth.
Pea (Pisum sativum L.) is one of the most important vegetable crops grown in many countries of the world, including Egypt; it is rich in starch, protein and vitamins and high in fiber (Pownall et al. 2010). Fusarium solani (Fusarium root rot disease) and Meloidogyne incognita (root-knot nematode disease) are among the various soilborne pathogens that attack pea root systems (Anwar and Mcknery 2010). Application of organic amendments can improve soil properties as well as control soilborne pathogens; for example, non-sterilized vegetable waste-compost completely inhibited the mycelial growth of Fusarium oxysporum f.sp. radicis-lycopersici in tomatoes, at the highest rates only (Kouki et al. 2012). Several medicinal plants have nematicidal effects, e.g. caraway seeds containing R-carvone and D-limonene, fennel seeds containing fenchone and camphene, garlic clove powder containing allicin, onion bulb powder containing flavonoids, phytosterols and saponins, pomegranate peel containing tannins, terpenoids, alkaloids, flavonoids and glycosides, and spearmint leaves containing L-carvone and limonene (Middleton et al. 2000; Youssef and El-Nagdi 2016).
The un-autoclaved water extracts of commercial composts also had inhibitory effects against F. solani, isolated from cucumber plants, in in vitro tests, and compost-amended soil reduced the percentages of disease incidence and improved the growth parameters of cucumber plants in pot experiments (Sabet et al. 2013). Pomegranate peel aqueous extract inhibited the linear growth of Fusarium oxysporum and F. solani in vitro, and pomegranate peel powder, when tested as a seed or soil treatment, decreased pre-emergence and post-emergence Fusarium damping-off in greenhouse experiments (Mohamad and Khalil 2015). Application of pomegranate peel aqueous extract reduced wilting incidence and improved growth variables of tomato plants in vivo (Rongai et al. 2016). Olive oil cakes or castor bean reduced Fusarium root rot disease incidence and increased the growth parameters of eggplants in a pot experiment (Abd-El-Khair et al. 2018). Compost tea, when combined with pomegranate peel powder, highly inhibited the growth of F. oxysporum in vitro; the combination also significantly reduced wilting disease severity and increased the survival of lupine plants in field applications (Abou El-Nour Mona et al. 2020).
Application of aqueous or ethanol stem extracts of Rhizophora mucronata showed a stronger nematicidal effect against Meloidogyne javanica juveniles than leaf extracts. Soil amended with dried powder of leaves or stems of R. mucronata controlled root-knot nematodes in mash bean or okra plants, and the treatments also significantly increased seed germination and both the length and weight of shoots and roots of the tested plants (Tariq et al. 2007). Municipal green wastes, olive pomace, spent mushroom substrates and sewage sludge, when applied as soil amendments, significantly reduced M. incognita parameters in tomato roots. Soil amended with olive pomace-based composts or composted mushroom substrate resulted in the highest nematode suppression and significantly increased the growth of tomato plants (D'addabbo et al. 2011). Dry leaves of fleabane (F) and sugar beet (S), sugar beet mud (M), as well as organic compost of sugar cane residues (OC), alone or in combination with Bionema (B), significantly reduced the M. incognita parameters. The combination of Bionema + Nile fertile reduced the numbers of J2 in soil as well as galls and egg-masses in roots, followed by B + M, B + OC, B + S and B + F, respectively, and significantly increased the growth parameters of sugar beet (El-Nagdi et al. 2011). Soil amended with chopped or ground dry leaves of neem and castor gave the maximum suppression of gall and egg numbers of M. javanica in the greenhouse, respectively (Lopes et al. 2011). Soil amended with fresh chopped leaves or dry leaf powder of Datura stramonium, Peganum harmala or Tagetes minuta, or with poultry and sheep manure, reduced the nematode population and improved plant growth parameters in garlic; P. harmala, as dry leaf powder, was the most effective (Saeed 2015). Olive oil cakes or castor bean reduced the nematode parameters of M. incognita and increased the growth parameters of eggplants in a pot experiment (Abd-El-Khair et al. 2018). Soil amended with ground seeds of fennel and caraway or powdered basil leaves significantly reduced the M. incognita parameters under greenhouse conditions. Basil waste reduced the numbers of J2 and egg-masses of the nematode more than fennel and caraway, respectively. The treatments highly improved growth and yield parameters of M. incognita-infected pea plants (El-Nagdi et al. 2019).
The present study aimed to evaluate the antifungal and nematicidal activity of powdered materials of six plant species, i.e. caraway seeds (Carum carvi), fennel seeds (Foeniculum vulgare), garlic cloves (Allium sativum), onion bulbs (Allium cepa), pomegranate peels (Punica granatum) and spearmint leaves (Mentha viridis), for controlling Fusarium solani and Meloidogyne incognita on pea plants in pots.
Dry plant materials
Six dry plant materials, i.e. caraway seeds (Carum carvi), fennel seeds (Foeniculum vulgare), garlic clove powder (Allium sativum), onion bulb powder (Allium cepa), pomegranate peels (Punica granatum) and spearmint leaves (Mentha viridis), were obtained from Fayoum Governorate, Egypt, during the 2018 season. The plant materials were dried at room temperature, and the fennel seeds, caraway seeds, pomegranate peels and spearmint leaves were ground in a blender. Commercial powders of garlic cloves (Allium sativum) and onion bulbs (Allium cepa) were used. All plant species were applied as powdered materials for testing their antifungal and nematicidal activity.
Fusarium root-rot pathogen
Fusarium solani was isolated from naturally infected pea plants, and the pathogenic fungus was identified in the Plant Pathology Department (PPD), National Research Centre (NRC), on the basis of pathological, morphological and cultural characters, according to the keys described by Ellis (1971) and Barnett and Hunter (1972).
Meloidogyne incognita inoculum
The root-knot nematode, M. incognita, was identified using the protocol described by Taylor and Sasser (1978), based on the perineal pattern morphology of adult females. Pure cultures of M. incognita were reared on eggplant by inoculating a single egg-mass onto the susceptible eggplant cv. Baladi in a screen house at 30 ± 5 °C. Newly hatched second-stage juveniles (J2) were used as inoculum.
Pot experiment
Seventy plastic pots (35 pots for the Fusarium experiment and 35 pots for the nematode experiment), each containing about 2 kg of solarized sandy-loam soil (1:1), were used. The dry powder treatments were as follows: caraway seeds, fennel seeds, garlic cloves, onion bulbs, pomegranate peels, spearmint leaves and an untreated control (without pathogen). The soil of each pot was mixed well with each tested dry powdered plant material, separately, at a rate of 10 g per kg of soil; the pots were then watered and left for one week. The pots were divided into two groups of 35 pots each, the first group for F. solani and the second for M. incognita, with five replicate pots per treatment. Pea seeds (cv. Concessa) were surface-sterilized in a 1% sodium hypochlorite solution for 3 min, followed by three successive rinses in sterilized distilled water; excess water was removed by air drying. In group A, the pots were inoculated with 7-day-old cultures of F. solani adjusted to 10⁸ propagules/g at a rate of 3% of soil weight (w/w), then watered and left for one week, after which five seeds were sown in each pot. In group B, five pea seeds were sown per pot and, after germination, two plants were retained in each pot; the pots were then inoculated with 1000 newly hatched J2 of M. incognita in four holes made around the plant roots. The pots were arranged in a complete randomized design on a bench of the experimental greenhouse of PPD, NRC, Egypt.
Effect on disease assessments of Fusarium solani
The effects of the dry powders of caraway seeds, fennel seeds, garlic cloves, onion bulbs, pomegranate peels and spearmint leaves on disease assessments caused by F. solani were estimated. The percentages of pre-emergence and post-emergence damping-off were calculated 15 and 45 days after sowing, respectively. The disease incidence (%) of root rot and the survival of healthy pea plants were recorded 60 days after sowing.
$$\text{Pre-emergence}\ (\%) = \frac{\text{Number of non-germinated seeds}}{\text{Total number of sown seeds}} \times 100$$
$$\text{Post-emergence}\ (\%) = \frac{\text{Number of dead seedlings}}{\text{Total number of sown seeds}} \times 100$$

$$\text{Survived plants}\ (\%) = \frac{\text{Number of survived plants}}{\text{Total number of sown seeds}} \times 100$$
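These assessments reduce to simple count ratios per treatment; a minimal Python sketch (with hypothetical counts, not the study data) is given below for illustration.

```python
def damping_off_assessments(sown, non_germinated, dead_seedlings, survived):
    """Pre-/post-emergence damping-off and survival percentages.

    All arguments are counts per treatment; every percentage is expressed
    relative to the total number of sown seeds, as in the formulas above.
    """
    pre_emergence = 100.0 * non_germinated / sown
    post_emergence = 100.0 * dead_seedlings / sown
    survival = 100.0 * survived / sown
    return pre_emergence, post_emergence, survival

# Hypothetical counts for one treatment (25 sown seeds across 5 pots)
pre, post, surv = damping_off_assessments(sown=25, non_germinated=2,
                                           dead_seedlings=3, survived=20)
print(f"Pre-emergence: {pre:.1f}%  Post-emergence: {post:.1f}%  Survival: {surv:.1f}%")
```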
Effect on M. incognita parameters
Three months after inoculation, the nematicidal effect of the dry powders of caraway seeds, fennel seeds, garlic cloves, onion bulbs, pomegranate peels and spearmint leaves against M. incognita was assessed. The nematode criteria recorded were the number of J2 in soil and the numbers of J2, galls and egg-masses in pea roots (five roots per treatment), together with the corresponding percentages of reduction, computed as sketched below.
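The percentages of reduction reported in the results are relative to the untreated (nematode-only) control; a minimal sketch with hypothetical counts:

```python
def percent_reduction(control, treated):
    """Reduction (%) of a nematode parameter relative to the untreated control."""
    return 100.0 * (control - treated) / control

# Hypothetical example: J2 counts in soil for control vs. one treatment
print(percent_reduction(control=1200, treated=150))  # -> 87.5
```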
Effect on plant growth and pod parameters
The effects of the dry powders of caraway seeds, fennel seeds, garlic cloves, onion bulbs, pomegranate peels and spearmint leaves on pea plant growth parameters, i.e. shoot length (cm), fresh and dry weights (g) of shoot and fresh weight (g) of roots, as well as pod parameters, i.e. fresh and dry weights (g) of pods, were recorded under artificial infestation with F. solani or M. incognita.
Results were analyzed by analysis of variance (ANOVA) using the CoStat statistical software package, version 3.03 (CoHort Software, Berkeley, CA, USA) (Costat software 1990). Differences between treatments were detected using Duncan's multiple range test at the 5% level of probability (Snedecor and Cochran 1999).
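A rough Python analogue of this workflow is sketched below on hypothetical data; Duncan's multiple range test is not available in scipy or statsmodels, so Tukey's HSD is used here purely as a stand-in post-hoc comparison.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical root-rot incidence (%) for three treatments, 5 replicate pots each
data = {
    "pomegranate": [7.0, 7.5, 6.8, 7.4, 7.3],
    "spearmint":   [9.5, 10.1, 9.7, 9.9, 9.8],
    "control":     [31.0, 32.5, 30.9, 31.8, 32.3],
}

# One-way ANOVA across treatments
f_stat, p_value = stats.f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparison at the 5% level (Tukey HSD as a stand-in for Duncan's test)
values = np.concatenate(list(data.values()))
groups = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```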
Effect on disease assessments
The antifungal activities of the dry powders of the six plant species, i.e. caraway seeds, fennel seeds, garlic cloves, onion bulbs, pomegranate peels and spearmint leaves, on Fusarium disease assessments in the pot experiment are listed in Table 1. Pomegranate peel gave the highest survival of pea plants (86.1%), followed by spearmint leaves (83.0%), caraway seeds (79.0%), fennel seeds (72.2%), garlic cloves (71.4%) and onion bulbs (64.2%), respectively (Table 1).
Table 1 Effect of dry powdered plant materials on disease assessments of Fusarium solani and on the number of Rhizobium nodules in the pea root system in pots, under greenhouse conditions
The incidence of pre-emergence damping-off ranged from 6.3 to 12.5% in treated pea plants, compared to 31.3% in plants treated with F. solani only. The dry powders of pomegranate peel and spearmint leaves reduced the disease incidence most, to 6.3%, followed by caraway and fennel seeds (9.4% each) and garlic cloves and onion bulbs (12.5% each). The incidence of post-emergence damping-off ranged from 6.7 to 17.9%, compared to 27.5% in plants treated with F. solani only. Pomegranate peel powder reduced the disease incidence most, to 6.7%, followed by spearmint leaves (7.2%), caraway seeds (10.7%), fennel seeds (13.9%), garlic cloves (14.3%) and onion bulbs (17.9%), respectively (Table 1).
The incidence of root rot in plants treated with the plant dry powders ranged from 7.2 to 17.9%, compared to 31.7% in pea plants treated with F. solani only. The antifungal activity of the tested dry powders followed the same trend as above: pomegranate peel reduced root rot incidence most, to 7.2%, followed by spearmint leaves (9.8%), caraway seeds (10.3%), fennel seeds (13.9%), garlic cloves (14.3%) and onion bulbs (17.9%), respectively. The plant dry powders increased the survival of pea plants to between 71.4 and 86.1%, compared with 40.8% for F. solani only.
The effects of the plant dry powders on pea growth parameters, i.e. shoot length, fresh and dry weights of shoot and fresh weight of roots, as well as yield parameters, i.e. fresh and dry weights of pea pods, under artificial infestation with F. solani in the pot experiment are shown in Table 2. Pomegranate peel gave the largest increase in root fresh weight (58%), followed by onion bulbs (46%), garlic cloves (42%), fennel seeds (39%), caraway seeds (31%) and spearmint leaves (31%), respectively.
Table 2 Effect of dry powder plant materials on growth parameters of pea plants, artificially infected by Fusarium solani in pots, under greenhouse conditions
The plant dry powders increased the shoot length of treated plants by 11 to 25% compared with the F. solani treatment alone. Pomegranate peel gave the highest increase (25%), followed by garlic cloves (21%), onion bulbs (19%), spearmint leaves (16%), fennel seeds (15%) and caraway seeds (11%), respectively. The increases in shoot fresh and dry weights of treated pea plants ranged from 4 to 34% and from 46 to 133%, respectively, compared with the pathogen alone. Pomegranate peel gave the largest increases in fresh and dry weight (34 and 133%), followed by onion bulbs (24 and 127%), garlic cloves (15 and 106%), spearmint leaves (11 and 100%), caraway seeds (9 and 73%) and fennel seeds (4 and 46%). The increase in fresh weight of treated pea roots ranged from 31 to 58% compared with the pathogen alone.
The plant dry powders increased the fresh pod weight of treated pea plants by 60 to 110% compared with the pathogen alone. Pomegranate peel gave the highest increase (110%), followed by onion bulbs (89%), garlic cloves (70%) and caraway, fennel and spearmint (60% each), respectively. The treatments also increased the dry weight of pea pods by 35 to 91% compared with the pathogen alone; pomegranate peel again gave the largest increase (91%), followed by onion bulbs (87%), garlic cloves (61%), spearmint leaves (52%), fennel seeds (39%) and caraway seeds (35%), respectively (Table 2).
Effect on nematode parameters
The nematicidal activities of the plant dry powders, i.e. caraway seeds, fennel seeds, garlic cloves, onion bulbs, pomegranate peels and spearmint leaves, against M. incognita parameters, i.e. J2 in soil and J2, galls and egg-masses in roots, in pots are shown in Table 3. The dry powders reduced the J2 numbers in soil by 63 to 89% compared with the untreated control (M. incognita alone). Spearmint leaves gave the highest reduction of J2 in soil (89%), followed by fennel seeds (88%), caraway seeds (86%), pomegranate peel (80%), onion bulbs (79%) and garlic cloves (63%), respectively. The treatments also reduced the J2 numbers in roots by 61 to 75% compared with the untreated control. Onion bulbs gave the highest reduction of root J2 (75%), followed by garlic cloves (72%), fennel seeds (69%), pomegranate peel (67%), caraway seeds (64%) and spearmint leaves (61%), respectively (Table 3).
Table 3 Effect of dry powdered plant materials on M. incognita parameters and the number of Rhizobium bacterial nodules in the pea root system in pots, under greenhouse conditions
The plant dry powders reduced the numbers of galls in roots by 63 to 79% compared with the untreated control. Fennel seeds gave the highest reduction of galls in pea roots (79%), followed by garlic cloves and pomegranate peel (74% each), caraway seeds (68%) and onion bulbs and spearmint leaves (63% each), respectively. The treatments also reduced the numbers of egg-masses in pea roots by 67 to 83% compared with the untreated control. Fennel seeds gave the highest reduction of egg-masses (83%), followed by garlic cloves, onion bulbs and pomegranate peel (75% each) and caraway seeds and spearmint leaves (67% each), respectively (Table 3).
The effects of the plant dry powders on pea plant growth and pod parameters under artificial infestation with M. incognita in the pot experiment are shown in Table 4. The increase in shoot length of treated pea plants ranged from 10 to 26% compared with the nematode alone. Pomegranate peel gave the highest increase in shoot length (26%), followed by onion bulbs (20%), garlic cloves (18%), fennel seeds (14%), spearmint leaves (14%) and caraway seeds (10%), respectively. The increases in shoot fresh and dry weights of treated pea plants ranged from 9 to 36% and from 17 to 59%, respectively. Pomegranate peel gave the largest increases in shoot fresh and dry weights (36 and 59%), followed by onion bulbs (24 and 55%), garlic cloves (17 and 35%), spearmint leaves (15 and 28%), fennel seeds (11 and 24%) and caraway seeds (9 and 17%), respectively. The increase in fresh weight of pea roots ranged from 14 to 46% compared with the pathogen alone. Fennel seeds gave the highest increase in root weight (46%), followed by garlic cloves (43%), pomegranate peel (39%), spearmint leaves (29%), onion bulbs (26%) and caraway seeds (14%), respectively. The plant dry powders also increased the fresh pod weight of treated pea plants by 71 to 141% compared with the untreated control. Pomegranate peel gave the highest increase in fresh pod weight (141%), followed by onion bulbs (129%), garlic cloves (109%), spearmint leaves (97%), fennel seeds (85%) and caraway seeds (71%), respectively. The dry weight of pea pods improved by 23 to 104%; pomegranate peel gave the highest increase (104%), followed by onion bulbs (92%), garlic cloves (81%), fennel seeds (58%), spearmint leaves (50%) and caraway seeds (23%), respectively (Table 4).
Table 4 Effect of dry powder plant materials on growth parameters of pea plants, artificially infected by M. incognita in pots, under greenhouse conditions
Effect on Rhizobium nodules
The pot experiment revealed that the number of Rhizobium nodules on treated pea roots increased by 8–33% compared with F. solani alone. Spearmint leaves gave the highest increase in Rhizobium nodules (33%), followed by caraway seeds, garlic cloves and onion bulbs (17% each) and fennel seeds and pomegranate peel (8% each), respectively (Table 1). Under artificial infestation with M. incognita, the number of Rhizobium nodules on pea roots increased by 20–80% compared with the untreated control. Spearmint leaves again gave the highest increase (80%), followed by caraway seeds, fennel seeds and garlic cloves (50% each), pomegranate peel (30%) and onion bulbs (20%), respectively (Table 3).
Application of organic soil amendments can improve the physical and chemical properties, structure, temperature and humidity of soil, as well as the nutrient supply for plant growth. Beneficial bacteria and fungi play an important role in suppressing economically important soil-borne pathogens (plant-parasitic nematodes or pathogenic fungi). Organic amendments can provide an environmentally friendly alternative to chemical pesticides, which are often expensive, of limited availability or environmentally hazardous (Renčo 2013). Organic amendments with a narrow C:N ratio can improve soil fertility, reduce nematode levels more efficiently and minimize the risk of increasing the levels of other soil-borne pathogens; neem seed powder, for example, has nematicidal activity in the field and greenhouse (Agbenin 2004). Application of leaves and stems of Avicennia marina or Rhizophora mucronata as organic amendments significantly controlled root rot fungi (Fusarium spp.) and the root-knot nematode Meloidogyne javanica in potato in pots (Tariq et al. 2008). Olive pomace composts significantly increased soil chemical parameters such as organic matter, with nitrogen contents at the highest rates (D'addabbo et al. 2011).
Our results revealed that the dry powders of the tested plant species differed in their antifungal and nematicidal activity against Fusarium disease assessments and M. incognita parameters in potted pea plants, the reduction varying with treatment. The dry powders of pomegranate peel and spearmint leaves gave the highest reductions in the incidence of pre- and post-emergence damping-off and of root rot, compared with the other dry powders, when pea plants were artificially infested with F. solani. These results agree with those of Borrego-Benjumea Ana et al. (2015), who found that soil amended with poultry manure or olive residue compost significantly reduced the inoculum viability of Fusarium oxysporum, F. proliferatum and F. solani associated with asparagus roots affected by crown and root rot; root disease severity decreased significantly, with F. proliferatum causing lower severity than F. oxysporum or F. solani, and soil amended with olive residue compost significantly increased asparagus plant fresh weight under Fusarium infection. Javaid and Rauf (2015) showed that dry leaves of Chenopodium album, incorporated at 3% (w/w), significantly reduced the incidence of basal rot of onion caused by F. oxysporum; the chloroform fraction of the methanolic leaf extract exhibited the best antifungal activity against fungal biomass, so either the chloroform fraction or soil amendment with C. album dry leaf biomass could be applied as an alternative to chemical fungicides for controlling basal rot in onions. Rongai et al. (2016) reported that pomegranate peel, an important source of bioactive compounds, reduced the Fusarium population in soil, managed Fusarium wilt and increased the number of healthy tomato plants, as did compost tea combined with pomegranate peel powder. It is clear that pomegranate peel may be a promising, environmentally safe alternative to fungicides for suppressing the most dangerous damping-off and wilt diseases (Abou El-Nour et al. 2020).
Under artificial infestation with M. incognita, the applied plant dry powders reduced the nematode parameters, although no significant differences were recorded among most treatments. The dry powders of spearmint leaves, onion bulbs and fennel seeds had the highest nematicidal activity against J2 in soil and roots and against galls and egg-masses. These results agree with those of Stirling and Eden (2008) and of Youssef and El-Nagdi (2016–2017) on spearmint. Stirling and Eden (2008) reported that soil amended with sugarcane residue plus ammonium nitrate, incorporated 4 months before planting capsicum, enhanced microbial activity and decreased M. incognita populations. Soil amendment with olive pomace or composted mushroom substrate significantly reduced gall formation by M. incognita in tomato roots, whereas composted municipal green wastes were more suppressive when combined with sewage sludge (D'addabbo et al. 2011). Bio-fumigation by mechanical incorporation of chopped brassicaceous plants into soil controls soil-borne nematodes; the bio-fumigant effect may be due to volatile, toxic thiocyanates originating from the hydrolysis of glucosinolates, the secondary metabolites present in Brassica tissues. Graminaceous plants such as sorghum and Sudan grass produce nematicidal cyanides via enzymatic hydrolysis of the precursor cyanogenic glycoside dhurrin, and the allelopathic plant marigold produces α-terthienyl, which has shown potential bio-fumigation effects against plant-parasitic nematodes (Dutta et al. 2019).
Our results revealed that the dry powders of the six plant species increased the growth and pod parameters of pea plants compared with the pathogen alone, the increase varying with treatment. Leaves and stems of Avicennia marina or Rhizophora mucronata, used as organic amendments, significantly increased potato plant growth (shoot length, shoot weight, root length and root weight) (Tariq et al. 2008). Olive-waste compost positively affected tomato growth when combined with sheep wool wastes; soil amendment with composted mushroom substrate significantly increased plant growth, whereas composted municipal green waste positively affected tomato growth in combination with sewage sludge (D'addabbo et al. 2011). Chopped green or dry leaves of certain medicinal plants (neem, Datura, camphor and oleander) and their aqueous extracts managed M. incognita root-knot nematode criteria and improved plant growth parameters in eggplant, with the reduction differing among treatments; most plant growth parameters were increased by some treatments (Youssef and Lashein 2013). Powdered dry leaves of spearmint and sage, alone or in combination, reduced M. incognita on cowpea and improved plant growth and yield criteria (El-Nagdi Wafaa et al. 2017).
Soil amendment with the dry powders of the six plant species used in this study, i.e. caraway seeds, fennel seeds, garlic cloves, onion bulbs, pomegranate peels and spearmint leaves, reduced F. solani and M. incognita criteria and consequently improved the growth and yield of pea plants. Pomegranate peel gave the highest survival of pea plants infected by F. solani, followed by spearmint leaves, caraway seeds, fennel seeds, garlic cloves and onion bulbs. Onion bulbs gave the highest percentage reduction of nematode J2, followed by garlic cloves, fennel seeds, pomegranate peel, caraway seeds and spearmint leaves. Leaves and seeds of these Egyptian plant species may therefore provide good control of the Fusarium pathogen and the root-knot nematode, as well as improved plant growth and yield, when applied in the field.
Soil amendment with dry powders of these plant species is an available option for controlling Fusarium solani and Meloidogyne incognita.
M. incognita : Meloidogyne incognita
J2 : Second-stage juveniles
F. solani : Fusarium solani
F. oxysporum : Fusarium oxysporum
R. mucronata : Rhizophora mucronata
F : Dry leaves of fleabane
M : Mud of sugar beet
OC : Organic compost of sugar cane residues
B : In combination with Bionema
PPD : Plant Pathology Department
NRC : National Research Centre
ANOVA : Analysis of variance
COSTAT : Computer statistical package
Abd-El-Khair H, El-Nagdi WMA, Hammam MMA (2018) Effect of olive and castor bean oil cakes singly or combined with Trichoderma spp on Fusarium solani and Meloidogyne incognita infecting Eggplant. Middle East J Appl Sci 8:465–473
Abou El-Nour Mona M, Sarhan EAD, Wadi Mona JM (2020) Suppressive effect of compost /pomegranate peel tea combination against Fusarium oxysporum f. sp. lupini and Rhizoctonia solani as an alternative synthetic fungicide. Egypt J Exp Biol (bot) 16:13–25
Agbenin ON (2004) Potentials of organic amendments in the control of plant parasitic nematodes. Plant Prot Sci 40:21–25
Anwar SA, Mcknery MV (2010) Incidence and reproduction of Meloidogyne incognita on vegetable crop genotypes. Pak J Zool 42:135–141
Barnett HL, Hunter BB (1972) Illustrated genera of imperfect fungi. Burgess Publ. Co., Minnesota, p 241
Borrego-Benjumea Ana I, Melero-Vara JM, Basallote-Ureba María J (2015) Organic amendments conditions on the control of Fusarium crown and root rot of asparagus caused by three Fusarium spp. Spanish. J Agric Res 13:e1009
Costat software (1990) Microcomputer program analysis, version 4.20. CoHort Software, Berkeley, CA, USA
D'addabbo T, Papajová I, Sasanelli N, Radicci V, Renčo M (2011) Suppression of root-knot nematodes in potting mixes amended with different composted biowastes. Helminthologia 48:278–287
Dutta TK, Khan MR, Phani V (2019) Plant-parasitic nematode management via biofumigation using brassica and non-brassica plants: current status and future prospects. Curr Plant Biol 17:17–32
Ellis MB (1971) Dematiaceous hyphomycetes. Commw. Mycol. Inst. Kew. Surrey, England
El-Nagdi Wafaa MA, Abd El Fatta AI (2011) Controlling root-knot nematode, Meloidogyne incognita infecting sugar beet using some plant residues, a biofertilizer, compost and biocides. J Plant Prot Res 51:107–113
El-Nagdi Wafaa MA, Youssef MMA, Dawood Mona G (2017) Nematicidal activity of certain medicinal plant residues in relation to controlling root knot nematode, Meloidogyne incognita on cowpea. Appl Sci Rept 20:35–38
El-Nagdi Wafaa MA, Youssef MMA, Abd El-Khair H, Abd-Elgawad MMM (2019) Effect of certain organic amendments and Trichoderma species on the root-knot nematode, Meloidogyne incognita, infecting pea (Pisum sativum L) plants. Egypt J Biol Pest Control 29:75
Javaid A, Rauf S (2015) Management of basal rot disease of onion with dry leaf biomass of Chenopodium album as soil amendment. Int J Agric Biol 17:142–148
Kouki S, Saidi N, BenRajeb A, Brahmi M, Bellila A, Fumio M, Hefiene M, Jedidi N, Downer J, Ouzari H (2012) Control of Fusarium wilt of tomato caused by Fusarium oxysporum f.sp. radicis-lycopersici using mixture of vegetable and Posidonia oceanica compost. Appl Environ Soil Sci. https://doi.org/10.1155/2012/2396
Lopes EA, Ferraz S, Ferreira PA, de Freitas LG, Dallemole-Giaretta R (2011) Soil amendment with chopped or ground dry leaves of six species of plants for the control of Meloidogyne javanica in tomato under greenhouse conditions. Ciência Rural 41:935–938
Middleton E Jr, Kandaswami C, Theoharides TC (2000) The effects of plant flavonoids on mammalian cells: implications for inflammation, heart disease and cancer. Pharmacol Rev 52:673–751
Mohamad Tahany GM, Khalil Amal A (2015) Effect of agriculture waste: pomegranate (Punica granatum L.) fruits peel on some important phytopathogenic fungi and control of tomato damping-off. J Appl Life Sci Int 3:103–113
Pownall TL, Udenigwe CC, Aluko RE (2010) Amino acids composition and antioxidant properties of pea seed (Pisum sativum) enzymatic protein hydrolysate fractions. J Agric Food Chem 58:4712–4718
Renčo M (2013) Organic amendments of soil as useful tools of plant parasitic nematodes control. Helminthologia 50:3–14
Rongai D, Pulcini P, Pesce B, Milano E (2016) Antifungal activity of pomegranate peel extract against Fusarium wilt of tomato. Eur J Plant Pathol 146:229–238
Sabet KK, Saber MM, El-Naggar MA, El-Mougy Nehal S, El-Deeb HM, El-Shahawy IE (2013) Using commercial compost as control measures against cucumber root-rot disease. J Mycol 6:1–13
Saeed MRM (2015) Efficacy of some organic amendments for the control of stem and bulb nematode, Ditylenchus dipsaci (Kühn) Filipjev on garlic (Allium sativum). Egypt J Agronematol 14:22–36
Snedecor GW, Cochran WG (1999) Statistical methods, 5th edn. Iowa State University Press, Ames, p 593
Stirling GR, Eden LM (2008) The impact of organic amendments, mulching and tillage on plant nutrition, Pythium root rot, root-knot nematode and other pests and diseases of Capsicum in a subtropical environment, and implications for the development of more sustainable vegetable farming systems. Australas Plant Pathol 37:123–131
Taylor AL, Sasser JN (1978) Biology, identification and control of root-knot nematodes (Meloidogyne species). IMP, North Carolina State University Graphics, Raleigh
Tariq M, Dawar S, Mehdi FS, Zaki MJ (2007) Use of Rhizophora mucronata in the control of Meloidogyne javanica root knot nematode on okra and mash bean. Pak J Bot 39:265–270
Tariq M, Dawar S, Mehdi FS, Zaki MJ (2008) The effect of mangroves amendments to soil on root rot and root knot of potato (Solanum tuberosum L.). Acta Agrobot 61:115–121
Youssef MMA, El-Nagdi Wafaa MA (2016–2017) Population density of root knot nematode, Meloidogyne incognita infecting eggplant influenced by intercropping with spear mint plants; a pilot study. Bull NRC 41(1):264–270
Youssef MMA, Lashein Asmahan MS (2013) Efficacy of different medicinal plants as green and dry leaves and extracts of leaves on root knot nematode, Meloidogyne incognita infecting eggplant. Eur J Agric Environ Med 2:10–14
There is no funding.
Plant Pathology Department, National Research Centre, Dokki, 12622, Cairo, Egypt
Hassan Abd-El-Khair & Wafaa M. A. El-Nagdi
HAE (the first author) suggested the research idea, designed the greenhouse experiment and shared in writing the manuscript. WMAE (the second author) carried out the greenhouse experiment, performed the statistical analysis of the data and shared in writing the manuscript. Both authors read and approved the final manuscript.
Correspondence to Wafaa M. A. El-Nagdi.
Abd-El-Khair, H., El-Nagdi, W.M.A. Application of dry powders of six plant species, as soil amendments, for controlling Fusarium solani and Meloidogyne incognita on pea in pots. Bull Natl Res Cent 45, 116 (2021). https://doi.org/10.1186/s42269-021-00571-5
Organic amendments
Pisum sativum | CommonCrawl |
Blood pressure components and incident cardiovascular disease and mortality events among Iranian adults with chronic kidney disease over a decade-long follow-up: a prospective cohort study
Ashkan Hashemi, Sormeh Nourbakhsh, Samaneh Asgari, Mohammadhassan Mirbolouk, Fereidoun Azizi & Farzad Hadaegh (ORCID: orcid.org/0000-0002-8935-2744)
To explore the association between systolic and diastolic blood pressure (SBP and DBP respectively) and pulse pressure (PP) with cardiovascular disease (CVD) and mortality events among Iranian patients with prevalent CKD.
Patients [n = 1448, mean age: 60.9 (9.9) years], defined as those with an estimated glomerular filtration rate < 60 ml/min/1.73 m2, were followed from 31 January 1999 to 20 March 2014. Multivariable Cox proportional hazards models were applied to examine the associations between different components of BP and outcomes.
During a median follow-up of 13.9 years, 305 all-cause mortality and 317 (100 fatal) CVD events (among those free from CVD, n = 1232) occurred. For CVD and CV-mortality, SBP and PP showed a linear relationship, while a U-shaped relationship for DBP was observed with all outcomes. Considering 120 ≤ SBP < 130 as reference, SBP ≥ 140 mmHg was associated with the highest hazard ratio (HR) for CVD [1.68 (1.2–2.34)], all-cause [1.72 (1.19–2.48)], and CV-mortality events [2.21 (1.16–4.22)]. Regarding DBP, compared with 80 ≤ DBP < 85 as reference, the level of ≥ 85 mmHg increased risk of CVD and all-cause mortality events; furthermore, DBP < 80 mmHg was associated with significant HR for CVD events [1.55 (1.08–2.24)], all-cause [1.68 (1.13–2.5)] and CV-mortality events [3.0 (1.17–7.7)]. Considering PP, the highest HR was seen in participants in the 4th quartile for all outcomes of interest; HRs for CVD events [1.92 (1.33–2.78)], all-cause [1.71 (1.11–2.63)] and CV-mortality events [2.22 (1.06–4.64)].
Among patients with CKD, the lowest risk of all-cause and CV-mortality as well as incident CVD was observed in those with SBP < 140, 80 ≤ DBP < 85 and PP < 64 mmHg.
Cardiovascular disease (CVD) is the major cause of morbidity and mortality among patients with chronic kidney disease (CKD) [1]. Poorly controlled hypertension is associated with increased risk of cardiovascular morbidity and mortality as well as a higher risk and accelerated rate of kidney function deterioration in patients with CKD [2]. Thus, optimal BP control is vital in CKD patient management. However, the BP threshold for initiation and the goal of treatment remain controversial due to the conflicting evidence available [3]. Due to the inconsistency in the evidence supporting the idea of "the lower the better" strategy, the Joint National Committee (JNC) raised the BP goal for CKD patients from below 130/80 mmHg in JNC 7 [4] to a more liberal target of less than 140/90 mmHg in JNC 8 [5]. On the other hand, the latest report of the American College of Cardiology/American Heart Association (ACC/AHA) guideline for Prevention, Detection, Evaluation and Management of High Blood Pressure in Adults again decreased the goal of BP-lowering therapy among hypertensive CKD patients to below 130/80 mmHg [6].
The exact relationship between the components of blood pressure [SBP, DBP and their difference, the pulse pressure (PP)] and CVD and all-cause mortality among the CKD population has not been consistent across studies. While some studies suggest a linear relationship [7] or advocate "the lower the better" strategy [8, 9], others report a J- or U-shaped association [10,11,12], depending on the specific BP components and type of outcomes studied. Among patients with incident CKD, Kovesdy et al. [13] indicated a linear association between SBP and CVD events and a U-shaped relationship for both SBP and DBP with all-cause mortality. Interestingly, while Palit et al. [14] identified a strong association between higher PP and CVD events, they could not establish such a relationship between either SBP or DBP and mortality among patients with advanced CKD.
Since the studies mentioned above have mainly been conducted on Western populations, their results may not be applicable to other ethnicities, such as Middle Eastern populations, which have a high incidence of CKD and its related risk factors such as hypertension and type 2 diabetes [15,16,17,18]. In the current study we examined the association between different components of blood pressure (SBP, DBP and PP) and CVD and mortality events in a long-term population-based study among adult Tehranians with prevalent CKD.
Patients and study design
"Tehran Lipid and Glucose Study" (TLGS) is a dynamic prospective longitudinal population-based study, being performed on a representative sample of Tehran, the capital city of Iran. The aim of the study is to determine the prevalence of non-communicable disease risk factors. TLGS enrollment was in two phases: First phase (1999–2001) and the second phase (2001–2005). Data collection is ongoing and scheduled to continue for at least 20 years, at 3-year intervals, details of the design and enrollment of the TLGS cohort have been reported previously [19].
From a total of 9731 participants aged ≥ 30 years (8064 individuals from phase I and 1667 new participants from phase 2), there were only 1761 participants with prevalent CKD (estimated glomerular filtration rate, eGFR < 60 ml/min/1.73 m2) in the cross-sectional phases of the TLGS. We excluded those with missing data on fasting plasma glucose (FPG), standard 2-h post-challenge plasma glucose (2 h-PCG), total cholesterol (TC), body mass index (BMI), smoking habits and eGFR at baseline (n = 125), and those with no follow-up (n = 188), leaving 1448 CKD patients, who were followed until 20 March 2014. Furthermore, when we focused on CVD and CV-mortality as outcomes, those with prevalent CVD (n = 216) were also excluded, leaving 1232 individuals.
Written informed consent was obtained from all participants and the medical ethics committee of the Research Institute for Endocrine Sciences approved the study proposal.
Clinical and laboratory measurements
Information was collected by a trained interviewer using a standardized questionnaire and included demographic characteristics, smoking status, medication regimen (antihypertensive, lipid-lowering and anti-diabetic agents) and past medical history of CVD. Details of anthropometric measurements are discussed elsewhere [19]. BMI was calculated as weight in kilograms divided by the square of height in meters. Using the MONICA protocol [20], trained personnel obtained two measurements of SBP and DBP on the right arm of participants after they had rested in a sitting position for 15 min, using a standardized mercury sphygmomanometer (calibrated by the Iranian Institute of Standards and Industrial Researches). The 1st and 5th Korotkoff sounds were taken as SBP and DBP, respectively; BP for each patient was measured twice, at least 30 s apart, and the average of the two measurements was used for analysis in this study [20, 21].
We measured FPG, standard 2 h-PCG, TC and serum creatinine (Cr) using blood samples drawn from subjects after 12–14 h of overnight fasting. All sampling was done between 7:00 and 9:00 AM and analyzed on the same day in the TLGS research laboratory, using commercial kits (Pars Azmoon Inc., Tehran, Iran) on a Selectra 2 auto-analyzer (Vital Scientific, Spankeren, The Netherlands); serum Cr level was assessed by the Jaffe kinetic colorimetric method. According to the manufacturer's recommendation, reference intervals were 53–97 µmol/l (0.6–1.1 mg/dl) in women and 80–115 µmol/l (0.9–1.3 mg/dl) in men; the sensitivity of the assay was 0.2 mg/dl. In both baseline and follow-up phases, intra- and inter-assay CVs were less than 3.1%. Using lyophilized serum controls in normal and abnormal ranges, assay performance was monitored after every 25 tests. All samples were assayed only when internal quality control met the standard criteria [19, 22].
Outcome measurements
Details of cardiovascular data collection can be found elsewhere [19]. To summarize, the study participants were annually followed. Those who were not available on the primary call were contacted again (up to 4 times a year) and if they did not respond, their data were considered as missing. A trained nurse asked the subjects regarding any medical incidents and later a trained physician collected complementary data on each of those incidents by gathering information from their medical files or during home visits. Hospital records or death certificates were used for mortality event records. An outcome committee, including a principal investigator, a cardiologist, an endocrinologist, an epidemiologist and the physician who collected outcome data, was formed to evaluate the results and other experts were invited as needed. Clinical conditions were assessed using the 10th revision of the International Classification of Diseases (ICD-10) and American Heart Association classification for cardiovascular events. Outcomes of interest were all-cause mortality and the first CVD events which included: Definite myocardial infarction (with positive ECG and cardiac biomarkers), probable myocardial infarction (positive ECG and cardiac signs/symptoms with negative or equivocal biomarkers), unstable angina (new cardiac symptoms or changing symptoms patterns and positive ECG findings with normal biomarkers), angiographic approved coronary heart disease and CVD related death.
The "Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI)" formula, was used for calculating eGFR (ml/min per 1.73 m2). CKD-EPI equation, as follows:
$$eGFR = 141 \times \min\left(\text{Serum creatinine}/\kappa,\ 1\right)^{\alpha} \times \max\left(\text{Serum creatinine}/\kappa,\ 1\right)^{-1.209} \times 0.993^{\text{Age}} \times 1.018\ [\text{if female}]$$
In this formula eGFR is expressed in ml/min per 1.73 m2; serum creatinine is expressed in mg/dl, κ is 0.7 for females and 0.9 for males, α is − 0.329 for females and − 0.411 for males; min indicates the minimum of serum creatinine/κ or 1, and max indicates the maximum of serum creatinine/κ or 1 [23]. Based on the Kidney Disease Outcome Quality Initiative guidelines, CKD is defined as either kidney damage or eGFR < 60 ml/min per 1.73 m2 for > 3 months [24].
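A minimal Python implementation of the CKD-EPI calculation exactly as written above (serum creatinine in mg/dl, no additional coefficients) might look like the following; it is an illustrative sketch, not the study's code.

```python
def ckd_epi_egfr(serum_cr_mg_dl, age_years, female):
    """eGFR (ml/min per 1.73 m2) from the CKD-EPI equation given above."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = serum_cr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age_years
    if female:
        egfr *= 1.018
    return egfr

# Hypothetical example: a 61-year-old woman with serum creatinine 1.2 mg/dl
print(f"eGFR = {ckd_epi_egfr(1.2, 61, female=True):.1f} ml/min/1.73 m2")  # ~49, below the 60 cut-off
```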
Regarding smoking status, participants were placed into three groups (never, former and current smokers) based on their responses to the questionnaire. Current smoker refers to an individual who uses any tobacco product (cigarettes, pipe or water pipe) on a daily or occasional basis. Type 2 diabetes (T2D) was defined according to the American Diabetes Association as FPG ≥ 126 mg/dl (7 mmol/l) or 2 h-PCPG ≥ 200 mg/dl (11.1 mmol/l) or use of any anti-diabetic medication [25]. Hypercholesterolemia was defined as serum total cholesterol ≥ 200 mg/dl (≥ 5.17 mmol/l) or receiving lipid-lowering agents. PP was calculated by subtracting the DBP from the SBP. A physician-diagnosed CVD prior to entering the study was considered prevalent CVD.
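These baseline definitions can be derived directly from the measured variables; the sketch below uses illustrative argument names, not the TLGS dataset fields.

```python
def derive_covariates(fpg_mg_dl, pcpg_2h_mg_dl, on_antidiabetic,
                      total_chol_mg_dl, on_lipid_lowering, sbp, dbp):
    """Derive T2D, hypercholesterolemia and pulse pressure as defined above."""
    t2d = fpg_mg_dl >= 126 or pcpg_2h_mg_dl >= 200 or on_antidiabetic
    hypercholesterolemia = total_chol_mg_dl >= 200 or on_lipid_lowering
    pulse_pressure = sbp - dbp
    return t2d, hypercholesterolemia, pulse_pressure

# Hypothetical participant
print(derive_covariates(118, 210, False, 235, False, sbp=142, dbp=78))
# -> (True, True, 64)
```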
Mean (SD) values for continuous variables and frequency (%) for categorical ones of baseline characteristics are presented. Comparisons of baseline characteristics between dead and alive participants were conducted using Student's t-test for continuous and the Chi square test for categorical variables.
Follow-up duration was defined as the time between study entry and the end points, i.e. CVD and mortality events. Subjects were censored at loss to follow-up, at leaving the residential area, at non-CVD death (for the CV-mortality endpoint) or at the end of follow-up (20 March 2014), whichever occurred first.
Multivariable Cox proportional hazards models (with age as the time scale) were used to evaluate the associations of blood pressure components with CVD events, CV-mortality and total mortality. In this analysis, SBP and DBP were examined separately as categorical variables (SBP: < 120, 120–130 (reference), 130–140 and ≥ 140 mmHg; DBP: < 80, 80–85 (reference), 85–90 and ≥ 90 mmHg). Quartiles of PP were also considered, with the first quartile as reference.
Adjustment for age was done using age as the time scale [26]. Associations between BP components and different outcomes were evaluated in two models: Model 1 included gender; Model 2 was further adjusted for potential confounders including BMI, T2D, hypercholesterolemia, eGFR, smoking status (never smoker as reference) and anti-hypertensive medication (only for the total population), and additionally for prevalent CVD for all-cause mortality. We found no significant p-values (minimum > 0.2) for interactions between the different blood pressure components (SBP, DBP and PP) and gender for either CVD or total mortality; hence, we adjusted for gender to retain full statistical power. Similarly, we found no interaction between prevalent CVD and blood pressure components for total mortality (all p-values > 0.4). The analysis was also stratified by consumption of anti-hypertensive medications at baseline for all outcomes except CV-mortality. The fractional polynomial (FP) method was used to check the dose-response associations of SBP, DBP and PP with CVD, all-cause and CV-mortality in a confounder-adjusted model with three knots (at the 25th, 50th and 75th percentiles) [27].
The Cox proportional hazards assumption was checked by the Schoenfeld residual test and no violation was found. All analyses were done using Stata version 12 (Stata Corp LP, Stata Statistical Software: Release 12, College Station, TX, USA) and two-tailed p-values < 0.05 were considered significant.
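The study's models were fitted in Stata with age as the time scale; the sketch below is only a rough Python analogue (using the lifelines package on simulated data, with follow-up time rather than age as the time scale) of a Cox model with categorical SBP and a Schoenfeld-residual check.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(0)
n = 500

# Simulated analysis data frame; column names are illustrative only
df = pd.DataFrame({
    "followup_years": rng.uniform(1, 14, n),
    "cvd_event": rng.integers(0, 2, n),
    "sbp_cat": rng.choice(["<120", "120-130", "130-140", ">=140"], n),
    "male": rng.integers(0, 2, n),
    "bmi": rng.normal(28, 4, n),
    "egfr": rng.normal(53, 6, n),
})

# Dummy-code SBP categories with 120 <= SBP < 130 mmHg as the reference group
X = pd.get_dummies(df, columns=["sbp_cat"], dtype=float).drop(columns=["sbp_cat_120-130"])

cph = CoxPHFitter()
cph.fit(X, duration_col="followup_years", event_col="cvd_event")
cph.print_summary()  # hazard ratios for each SBP category vs. the reference

# Schoenfeld-residual-based check of the proportional hazards assumption
print(proportional_hazard_test(cph, X, time_transform="rank"))
```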
The study population included 1448 patients with prevalent CKD. Mean (SD) age, BMI and eGFR in the total population were 60.9 (9.9) years, 28.2 (4.3) kg/m2 and 52.8 (6.3) ml/min/1.73 m2, respectively. The prevalence of T2D, hypercholesterolemia, current smoking and prevalent CVD was 26, 78.4, 8.8 and 14.9%, respectively, and 27.6% of the study population was on BP-lowering medication. During follow-up, 305 individuals died. Compared with survivors, the non-survivor group had higher mean age (68.13 vs. 58.96 years), SBP (144.12 vs. 131.64 mmHg) and FPG (128.83 vs. 107.5 mg/dl), as well as a higher prevalence of T2D (46.2 vs. 20.6%), current smoking (12.1 vs. 8%) and prevalent CVD (23.9 vs. 12.5%); however, this group had lower mean BMI (27.37 vs. 28.4 kg/m2) and total cholesterol (231.16 vs. 237.13 mg/dl) (Table 1).
Table 1 Baseline characteristics of the study population: Tehran Lipid and Glucose Study (1999–2014)
After a median follow-up of 13.9 years, among those free of CVD at baseline (n = 1232), 317 CVD events occurred (100 of them attributable to CV-mortality). Moreover, among the whole population, including those with prevalent CVD (n = 1448), 305 all-cause mortality events occurred. The multivariable-adjusted risk estimates of the different systolic and diastolic blood pressure categories and PP quartiles for CVD and all-cause mortality events among the total population, those receiving anti-hypertensive medication and untreated participants are shown in Tables 2 and 3. Regarding CVD events, compared with the reference group, participants with SBP ≥ 140 mmHg had the highest HR for CVD events in the multivariable-adjusted model, a pattern also seen in the untreated group; however, we found no such risk in the treated group [2.01 (0.89–4.57), p-value = 0.1]. Furthermore, in both the treated and untreated groups, SBP < 120 mmHg was not a significant predictor of CVD events [HR 1.39 (0.52–3.8) and 0.9 (0.59–1.36), respectively]. Focusing on DBP, in multivariable analysis among untreated participants, those with DBP ≥ 85 mmHg (whether DBP 85–90 or ≥ 90 mmHg) had a statistically significant risk. Moreover, pooling DBP 85–90 and ≥ 90 mmHg as a single group, DBP ≥ 85 mmHg showed a significant risk of CVD in the total, treated and untreated populations; the corresponding multivariable-adjusted HRs (CI) were 2.35 (1.08–2.26), 3.7 (1.75–7.7) and 1.95 (1.28–2.96), respectively.
Table 2 Multivariate adjusted risk estimation of different systolic and diastolic blood pressure as well as pulse pressure quartiles for Cardiovascular disease in the total population and those with and without anti-hypertensive medication: Tehran Lipid and Glucose study (1999–2014)
Table 3 Multivariate adjusted risk estimation of different systolic and diastolic blood pressure as well as pulse pressure quartiles for total mortality in total population and those with and without anti-hypertensive medication: Tehran Lipid and Glucose study (1999–2014)
In the total population, participants with DBP < 80 mmHg had a higher HR compared with the reference group. When the analysis was stratified by treatment group, a positive but statistically non-significant risk was also observed in both the treated and untreated groups [2.0 (0.89–4.54) and 1.43 (0.95–2.16), respectively]. Comparing different quartiles of PP for CVD events, the highest risk was seen in participants with PP ≥ 64 mmHg in the total population as well as in the untreated group (Table 2).
Studying all-cause mortality, in multivariable analysis the highest HR was noted for SBP ≥ 140 mmHg in the total population and among untreated participants. Comparing the four DBP groups, those with DBP ≥ 85 mmHg (whether DBP 85–90 or ≥ 90 mmHg) had an increased risk of all-cause mortality in the total population and the untreated group, and the HR was highest among participants with DBP ≥ 90 mmHg in the total population as well as in the treated and untreated groups. Moreover, pooling DBP 85–90 and ≥ 90 mmHg as a single group, DBP ≥ 85 mmHg showed a significant risk of total mortality in the total, treated and untreated populations; the corresponding multivariable-adjusted HRs (CI) were 2.38 (1.62–3.51), 3.7 (1.82–7.5) and 2.04 (1.27–3.3), respectively. Furthermore, participants with DBP < 80 mmHg had a higher HR than the reference group in the total population and the treated group. Focusing on PP, the highest statistically significant HR for all-cause mortality was seen in those with PP ≥ 65 mmHg in the total population and in those with PP ≥ 61 mmHg in the untreated group; in the treated group the results were not statistically significant (Table 3).
The multivariable-adjusted risk estimates of the different SBP and DBP categories and PP quartiles for CV-mortality in the total population are shown in Table 4. The HR for CV-mortality events in the total population was highest among participants with SBP ≥ 140 mmHg. Regarding DBP, the HR was increased not only in participants with DBP ≥ 85 mmHg (i.e. DBP 85–90 or ≥ 90 mmHg) but also, significantly, in those with DBP < 80 mmHg compared with the reference group. Pooling DBP 85–90 and ≥ 90 mmHg as a single group, DBP ≥ 85 mmHg showed a multivariable-adjusted HR of 6.3 (2.5–15.9) for CV-mortality in the total population. Participants with PP ≥ 64 mmHg had the highest HR compared with the reference quartile of PP. Due to the low incidence of CV-mortality, we could not analyze the treated and untreated groups separately.
Table 4 Multivariate adjusted risk estimation of different systolic and diastolic blood pressure and pulse pressure quartiles for CV-mortality in the total population and those with and without anti-hypertensive medication: Tehran Lipid and Glucose study (1999–2014)
Figures 1, 2, 3, 4 and 5 show the dose–response relationships between SBP, DBP and PP and the outcomes under investigation. A linear relationship was found between SBP and PP and CVD events in the total population (Fig. 1) as well as in the treated and untreated groups (Fig. 2). Considering total mortality in the total population (Fig. 3), neither SBP nor PP showed a linear relationship. However, when stratified by treatment (Fig. 4), the relationship for SBP was linear in both the treated and untreated groups, whereas for PP a linear relationship was found only in the treated group. For CV-mortality in the total population (Fig. 5), SBP and PP showed linear relationships. For DBP there was a U-shaped relationship with CVD events, all-cause and CV-mortality after multivariable adjustment, and the relationships between DBP and CVD events and total mortality also showed a U-shaped pattern when results were stratified by treatment group. Due to the low incidence of CV-mortality, we could not analyze the treated and untreated groups separately.
Dose–response relationship between risk of CVD events with systolic blood pressure, diastolic blood pressure and pulse pressure as continuous variables, in the total population. The relationship was linear for SBP (a), U-shaped for DBP (b) and linear for PP (c)
Dose–response relationship between risk of CVD events with systolic blood pressure, diastolic blood pressure as well as pulse pressure as continuous variables in the treated and untreated populations. In the treated population, the relationship was linear for SBP (a), U-shaped for DBP (b) and linear for PP (c). In the untreated population the relationship was linear for SBP (d), U-shaped for DBP (e) and linear for PP (f)
Dose–response relationship between risk of total mortality with systolic blood pressure, diastolic blood pressure and pulse pressure as continuous variables in the total population. The relationship was non-linear for SBP (a), U-shaped for DBP (b) and non-linear for PP (c)
Dose–response relationship between risk of total mortality with systolic blood pressure, diastolic blood pressure and pulse pressure as continuous variables, in the treated and untreated population. In the treated population, the relationship was linear for SBP (a), U-shaped for DBP (b) and linear for PP (c). In the untreated population, the relationship was linear for SBP (d), U-shaped for DBP (e) and non-linear for PP (f)
Dose–response relationship between risk of CV-mortality with systolic blood pressure, diastolic blood pressure and pulse pressure as continuous variables, in the total population. The relationship was linear for SBP (a), U-shaped for DBP (b) and linear for PP (c)
Over a decade-long follow-up of CKD patients in a population-based study, we explored the association between SBP, DBP and PP and CVD and mortality events. Our results revealed a generally linear association between SBP and CVD and mortality events. In multivariable analysis, compared with those with 120 ≤ SBP < 130 mmHg as the reference, those with SBP ≥ 140 mmHg showed over 60% increased risk for both CVD and all-cause mortality events, and an over twofold risk for CV-mortality. Considering DBP, a U-shaped relationship with CVD and mortality events was found. In multivariable analysis, with 80 ≤ DBP < 85 mmHg as the reference, patients with DBP < 80 or DBP ≥ 85 mmHg both showed significantly higher risks of events; more importantly, the excess risk exceeded 200% for CV-mortality in patients with DBP < 80 mmHg. The U-shaped association between DBP and events was also evident in the hypertensive-treated group, in which the excess risk of all-cause mortality at DBP < 80 mmHg exceeded 170%. Hence, based on the results of this observational study, SBP < 140 and 80 ≤ DBP < 85 mmHg were associated with the lowest risk of CVD and mortality events. Similar to SBP, a generally linear association was demonstrated between PP and CVD and mortality events; patients in the 4th quartile of PP had an over 70% increased risk of both CVD and all-cause mortality compared with the reference group.
The associations of different components of blood pressure with CVD and mortality events among CKD patients have been addressed in several studies; however, to the best of our knowledge, no study has examined the impact of all three main BP components (SBP, DBP, PP) on CVD and mortality events in a single study.
The increased risk of events we observed in the SBP ≥ 140 mmHg group is consistent with the SBP goal of JNC 8 [5]; meanwhile, although not statistically significant, for all-cause mortality the increased risk was evident in those with SBP above 130 mmHg, a result more in line with the new AHA recommendation of reducing SBP to below 130 mmHg in CKD patients [6]. The pattern we observed between SBP and outcomes echoes the results of the SPRINT randomized controlled trial [8, 9] and those of Bansal et al. [7]; "the lower the better" strategy was supported by the SPRINT results, which demonstrated lower rates of adverse events for SBP below 120 mmHg compared with SBP below 140 mmHg in both CKD and non-CKD patients [8, 9]. Bansal et al. [7], in an observational study of 1795 patients with advanced CKD (stages 4 and 5), linked higher rates of atherosclerotic cardiovascular disease (ASCVD) events to higher SBP in a linear pattern; the relationships of DBP and PP with ASCVD were also reported as linear in that American population-based study [7]. However, some observational studies among CKD patients have reported a U-shaped association between SBP and all-cause mortality events [10, 12]. Kovesdy et al. [10], among mostly elderly men with CKD (mean age around 74 years), found that SBP < 130 mmHg or ≥ 160 mmHg was associated with higher mortality, regardless of the accompanying DBP. Additionally, Weiss et al. [12] found different relationships between SBP and all-cause mortality in different age groups among older adults with CKD aged ≥ 65 years; they found a U-shaped pattern among participants aged 65–70, but for those ≥ 70 years higher mortality was linked with lower SBP values. Interestingly, in our study, only among hypertensive-treated patients with CKD was SBP below 120 mmHg associated with approximately 40 and 80% increases in risk for CVD and all-cause mortality events, respectively, neither of which was statistically significant, probably due to the limited number of events. The differences observed in the association between SBP and outcomes might be attributable to the younger age of our study population (mean age 60.9 years) compared with these two population-based studies from the US [10, 12].
The U-shaped pattern we found for the relationship between DBP and all-cause mortality in CKD patients supports the results of Kovesdy et al. [10, 13]. More importantly, we showed the same U-shaped pattern, with even higher HRs, in the hypertensive-treated group compared with the untreated group (HR 2.73 and 1.41, respectively), suggesting that DBP < 80 mmHg may even cause harm in CKD patients. In other words, our results suggest that in CKD patients, lowering SBP at the expense of lowering DBP to below 80 mmHg can potentially increase morbidity and mortality rates. The higher CVD and mortality event rates observed in patients with low DBP can be explained by several theories. First, as most coronary blood flow occurs during diastole, patients with low DBP may be more susceptible to CVD events [27]. Second, patients with underlying chronic diseases such as neoplasms, chronic infection, malnutrition and heart failure have lower DBP, indicating pre-existing poor health status; this residual confounding can lead to higher CVD and mortality events in the low-DBP group, a phenomenon called "reverse causality" [27, 28]. To address this concern, we omitted the mortality events during the first 3 years of follow-up; however, the U-shaped association between DBP and events remained essentially unchanged (data not shown). Third, some studies showed that an unintentional reduction in eGFR caused by tight blood pressure regimens is itself an independent risk factor for CVD [27, 29].
The complex interplay of the different BP components described above adds to the dilemma of BP control in CKD patients, as there are individuals in this population with high SBP but normal or even below-normal DBP. With antihypertensive therapy, these patients will be at risk of low DBP at some point during their course of treatment. This suggests that further investigations should look for an appropriate "combination range of SBP and DBP" for optimal BP control in CKD patients.
Considering PP, our results are similar to those of Palit [14] and Bansal [7], showing higher event rates with higher PP in a linear pattern. CKD patients are more prone to higher PP; the average PP in our study was 52.8 mmHg, a level lower than in the Palit [14] and Bansal [7] studies, both of which were conducted in advanced CKD patients. The extra damage to the vascular wall, in addition to the increased stress on the left ventricular wall, are two possible explanations for the higher morbidity and mortality observed in CKD patients with higher PP [30, 31]. Large-artery stiffness due to advanced atherosclerosis and accelerated medial calcification in CKD patients [14] makes SBP more resistant to BP-lowering therapy, often necessitating extra medication to achieve SBP goals. On the other hand, poor vascular compliance in CKD patients can increase susceptibility to diastolic hypotension; hence, intensive blood pressure control regimens can further exacerbate wide ranges of PP and its related risks in CKD patients [32].
One of the interesting findings in our study is the fact that the survivor group had higher baseline values of BMI and total cholesterol and a higher proportion of patients with hypercholesterolemia compared to those who died. Some, but not all, studies conducted among CKD patients interpreted similar findings by stating that higher BMI might be an index of better overall health status, less frailty and/or less muscle wasting, a phenomenon called "the obesity paradox" [33]. The disparity among evidence on this issue may be related to differences in study populations, length of follow-up, covariate adjustment, and/or investigated outcomes [34]. Furthermore, relationships between elevated BMI and ESRD or mortality may be weaker in cohorts of individuals with CKD, which may be related to the increased risk of muscle wasting (i.e. frailty) in this population [35] and the limitations of BMI in distinguishing body composition or fat distribution [36].
There are a number of limitations to our study. First, due to the observational nature of this study, we cannot establish a cause-and-effect relationship between the different BP components and outcomes, particularly given probable unmeasured confounders. Second, due to the limited number of events, we did not analyze the effect of the three different components of BP in the treated versus untreated subgroups separately for CV mortality. Third, we did not have data about urinary albumin excretion, hence albuminuria is not considered in the CKD definition. Fourth, the average eGFR in our CKD population is rather high (52.8 ml/min per 1.73 m2) and, as a result, our findings might not be generalizable to patients with more advanced renal failure. Fifth, using the MONICA protocol in the TLGS cohort, the BP measurements were performed only on the right arm, hence interarm blood pressure discrepancy (IAD) was not assessed in our study. Nevertheless, in the general population, IAD levels > 20 mmHg, usually associated with vascular disease and its related adverse outcomes, are quite infrequent, occurring in less than 4% of the population [37]. Lastly, the study was conducted only among a Tehranian population; therefore, the results might not generalize to other parts of the country.
This is the first cohort study of CKD patients in a Middle Eastern population, with more than a decade of follow-up, which examines the effect of all three BP components on CVD and mortality events. According to our findings, maintaining SBP at levels < 140 mmHg, DBP between 80 and 85 mmHg and PP < 64 mmHg was associated with the lowest risk for CV and all-cause mortality events.
CVD: cardiovascular disease
CV-mortality: cardiovascular mortality
CKD: chronic kidney disease
BP: blood pressure
JNC: Joint National Committee
ACC: American College of Cardiology
SBP: systolic blood pressure
DBP: diastolic blood pressure
PP: pulse pressure
TLGS: Tehran lipid and glucose study
FPG: fasting plasma glucose
2 h-PCG: 2-h post challenge plasma glucose
TC: total cholesterol
eGFR: estimated glomerular filtration rate
Cr: creatinine
ICD: International Classification of Diseases
CKD-EPI: chronic kidney disease epidemiology collaboration formula
T2D: type 2 diabetes
ESRD: end-stage renal disease
Go AS, Chertow GM, Fan D, McCulloch CE, Hsu C. Chronic kidney disease and the risks of death, cardiovascular events, and hospitalization. N Engl J Med. 2004;351(13):1296–305.
Wright J, Hutchison A. Cardiovascular disease in patients with chronic kidney disease. Vasc Health Risk Manag. 2009;5:713–22.
Norris KC, Nicholas SB. Strategies for controlling blood pressure and reducing cardiovascular disease risk in patients with chronic kidney disease. Ethn Dis. 2015;25(4):515–20.
Chobanian AV, Bakris GL, Black HR, Cushman WC, Green LA, Izzo JL, et al. The seventh report of the Joint National Committee on Prevention, detection, evaluation, and treatment of high blood pressure; the JNC 7 report. JAMA. 2003;289(19):2560.
James PA, Oparil S, Carter BL, Cushman WC, Dennison-Himmelfarb C, Handler J, Lackland DT, LeFevre ML, MacKenzie TD, Ogedegbe O, Smith SC. 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA. 2014;311(5):507–20.
Reboussin DM, Allen NB, Griswold ME, Guallar E, Hong Y, Lackland DT, Miller EP, Polonsky T, Thompson-Paul AM, Vupputuri S. Systematic review for the 2017 ACC/AHA/AAPA/ABC/ACPM/AGS/APhA/ASH/ASPC/NMA/PCNA guideline for the prevention, detection, evaluation, and management of high blood pressure in adults: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. J Am Coll Cardiol. 2018;71(19):2176–98.
Bansal N, McCulloch CE, Lin F, Robinson-Cohen C, Rahman M, Kusek JW, et al. Different components of blood pressure are associated with increased risk of atherosclerotic cardiovascular disease versus heart failure in advanced chronic kidney. Kidney Int. 2016;90:1348–56.
SPRINT Research Group. A randomized trial of intensive versus standard blood-pressure control. N Engl J Med. 2015;373(22):2103–16.
Cheung AK, Rahman M, Reboussin DM, Craven TE, Greene T, Kimmel PL, Cushman WC, Hawfield AT, Johnson KC, Lewis CE, Oparil S. Effects of intensive BP control in CKD. J Am Soc Nephrol. 2017. https://doi.org/10.1681/ASN.2017020148.
Kovesdy CP, Bleyer AJ, Molnar MZ, Ma JZ, Sim JJ, Cushman WC, et al. Blood pressure and mortality in U.S. veterans with chronic kidney disease. Ann Intern Med. 2013;159(4):233.
Kovesdy CP, Lu JL, Molnar MZ, Ma JZ, Canada RB, Streja E, et al. Observational modeling of strict vs conventional blood pressure control in patients with chronic kidney disease. JAMA Intern Med. 2014;174(9):1442–9.
Weiss JW, Peters D, Yang X, Petrik A, Smith DH, Johnson ES, et al. Systolic BP and mortality in older adults with CKD. Clin J Am Soc Nephrol. 2015;10(9):1553–9.
Kovesdy CP, Alrifai A, Gosmanova EO, Lu JL, Canada RB, Wall BM, et al. Age and outcomes associated with BP in patients with incident CKD. Clin J Am Soc Nephrol. 2016;11(5):821–31.
Palit S, Chonchol M, Cheung AK, Kaufman J, Smits G, Kendrick J. Association of BP with death, cardiovascular events, and progression to chronic dialysis in patients with advanced kidney disease. Clin J Am Soc Nephrol. 2015;10(6):934–40.
Tohidi M, Hasheminia M, Mohebi R, Khalili D, Hosseinpanah F, Yazdani B, et al. Incidence of chronic kidney disease and its risk factors, results of over 10 year follow up in an Iranian cohort. PLoS ONE. 2012;7(9):e45304.
Bozorgmanesh M, Hadaegh F, Mehrabi Y, Azizi F. A point-score system superior to blood pressure measures alone for predicting incident hypertension: Tehran Lipid and Glucose Study. J Hypertens. 2011;29(8):1486–93.
Derakhshan A, Sardarinia M, Khalili D, Momenan AA, Azizi F, Hadaegh F. Sex specific incidence rates of type 2 diabetes and its risk factors over 9 years of follow-up: Tehran lipid and glucose study. PLoS ONE. 2014;9(7):e102563.
Turk-Adawi K, Sarrafzadegan N, Fadhil I, Taubert K, Sadeghi M, Wenger NK, et al. Cardiovascular disease in the Eastern Mediterranean region: epidemiology and risk factor burden. Nat Rev Cardiol. 2017;15(2):106–19.
Azizi F, Ghanbarian A, Momenan AA, Hadaegh F, Mirmiran P, Hedayati M, Mehrabi Y, Zahedi-Asl S. Prevention of non-communicable disease in a population in nutrition transition: Tehran Lipid and Glucose Study phase II. Trials. 2009;10(1):5.
WHO MONICA Project. MONICA Manual, Part III, Section 1: Population survey data component.
Azizi F, Rahmani M, Emami H, Mirmiran PA, Hajipour R, Madjid M, Ghanbili J, Ghanbarian A, Mehrabi J, Saadat N, Salehi P. Cardiovascular risk factors in an Iranian urban population: Tehran lipid and glucose study (phase 1). Sozial-und Präventivmedizin. 2002;47(6):408–26.
Hosseinpanah F, Kasraei F, Nassiri AA, Azizi F. High prevalence of chronic kidney disease in Iran: a large population-based study. BMC Public Health. 2009;9(1):44.
Matsushita K, Selvin E, Bash LD, Astor BC, Coresh J. Risk implications of the new CKD Epidemiology Collaboration (CKD-EPI) equation compared with the MDRD study equation for estimated GFR: the atherosclerosis risk in communities (ARIC) study. Am J Kidney Dis. 2010;55(4):648–59.
Levey AS, Coresh J, Balk E, Kausz AT, Levin A, Steffes MW, et al. National Kidney Foundation practice guidelines for chronic kidney disease: evaluation, classification, and stratification. Ann Intern Med. 2003;139(2):137–47.
American Diabetes Association. Standards of medical care in diabetes—2015 abridged for primary care providers. Clin Diabetes. 2015;33(2):97–111.
Chalise P, Chicken E, McGee D. Time scales in epidemiological analysis: an empirical comparison. 2009:1–13.
Robles NR, Hernandez-Gallego R, Fici F, Grassi G. Does a blood pressure J curve exist for patients with chronic kidney disease? J Clin Hypertens. 2017;19(8):764–70.
Sattar N, Preiss D. Reverse causality in cardiovascular epidemiological research: more common than imagined? Circulation. 2017;135:2369–72. https://doi.org/10.1161/CIRCULATIONAHA.117.028307.
Peralta CA, Norris KC, Li S, Chang TI, Tamura MK, Jolly SE, et al. Blood pressure components and end-stage renal disease in persons with chronic kidney disease. Arch Intern Med. 2012;172(1):41.
Winston GJ, Palmas W, Lima J, Polak JF, Bertoni AG, Burke G, et al. Pulse pressure and subclinical cardiovascular disease in the multi-ethnic study of atherosclerosis. Am J Hypertens. 2013;26(5):636–42.
O'Rourke M, Frohlich ED. Pulse pressure: is it a clinically useful risk factor? Hypertension. 1999;34:372–4.
Peralta CA, Shlipak MG, Wassel-Fyr C, Bosworth H, Hoffman B, Martins S, et al. Association of antihypertensive therapy and diastolic hypotension in chronic kidney disease. Hypertension. 2007;50(3):474–80.
Ladhani M, Craig JC, Irving M, Clayton PA, Wong G. Obesity and the risk of cardiovascular and all-cause mortality in chronic kidney disease: a systematic review and meta-analysis. Nephrol Dial Transplant. 2016;32(3):439–49.
Banack HR, Stokes A. The 'obesity paradox' may not be a paradox at all. Int J Obes. 2017;41:1162–3.
Kramer H, Gutiérrez OM, Judd SE, Muntner P, Warnock DG, Tanner RM, Panwar B, Shoham DA, McClellan W. Waist circumference, body mass index, and ESRD in the REGARDS (reasons for geographic and racial differences in stroke) study. Am J Kidney Dis. 2016;67(1):62–9.
Hsu CY, McCulloch CE, Iribarren C, Darbinian J, Go AS. Body mass index and risk for end-stage renal disease. Ann Intern Med. 2006;144(1):21–8.
Clark CE, Taylor RS, Shore AC, Ukoumunne OC, Campbell JL. Association of a difference in systolic blood pressure between arms with vascular disease and mortality: a systematic review and meta-analysis. Lancet. 2012;379(9819):905–14.
Conceptualization: FH. Data curation: FH, FA. Formal analysis: SA. Funding acquisition: FA, FH. Investigation: SA, AH, SN, MHM, FH. Methodology: SA, AH, SN, FH. Project administration: FA. Supervision: FH. Writing—original draft: AH, SN, FH. Writing—review and editing: AH, SN, SA, MHM, FH. All authors read and approved the final manuscript.
We would like to express our appreciation to the TLGS participants and research team members. The authors wish to acknowledge Niloofar Shiva for critical editing of the English grammar and syntax of the article. We would also like to thank Marzieh Montazeri for her assistance in the preparation of the article.
All datasets generated and analyzed during the current study are available from the corresponding author upon reasonable request.
Protocol of this study was approved by the ethics committee of the Research Institute for Endocrine Sciences of Shahid Beheshti University of Medical Sciences, Tehran, Iran, and conducted in accordance with the Declaration of Helsinki. All participants signed informed consent forms.
This study was supported by Grant No. 121 from the National Research Council of the Islamic Republic of Iran.
Ashkan Hashemi and Sormeh Nourbakhsh are co-first authors and contributed equally to this study
Prevention of Metabolic Disorders Research Center, Research Institute for Endocrine Sciences, Shahid Beheshti University of Medical Sciences, No. 24, Parvaneh Street, Velenjak, P.O. Box: 19395-4763, Tehran, Iran
Ashkan Hashemi, Sormeh Nourbakhsh, Samaneh Asgari & Farzad Hadaegh
Johns Hopkins Ciccarone Center for the Prevention of Heart Disease, Johns Hopkins Hospital, Baltimore, USA
Mohammadhassan Mirbolouk
Endocrine Research Center, Research Institute for Endocrine Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
Fereidoun Azizi
Ashkan Hashemi
Sormeh Nourbakhsh
Samaneh Asgari
Farzad Hadaegh
Correspondence to Farzad Hadaegh.
Hashemi, A., Nourbakhsh, S., Asgari, S. et al. Blood pressure components and incident cardiovascular disease and mortality events among Iranian adults with chronic kidney disease during over a decade long follow-up: a prospective cohort study. J Transl Med 16, 230 (2018). https://doi.org/10.1186/s12967-018-1603-7
How do we know that the discovered M87 black hole isn't just a star surrounded by a dust disk?
To an untrained eye like mine, pictures of stars surrounded by dust discs look very similar to the picture of the M87 black hole.
Here are some pictures of these dust discs:
And here is a picture of the M87 Black Hole:
I find it quite difficult to differentiate between the two in a meaningful, non-hand-wavy way. What is the scientific explanation for the distinction? I'm not doubting the scientific explanation, I just don't understand it.
Notice how all the dust-disk images have artificial black regions in their centers -- this is where light from the central star has been blocked out, to make the disks more visible. There is no such central blocking in the M87 image. – Peter Erwin
Black holes are often studied (and discovered!) by observing their effects on objects around them. Stellar-mass black holes, for example, can be found by determining the orbit of any luminous companion. Supermassive black holes, by comparison, affect the motion of numerous stars and clouds of gas in their immediate vicinity. By fitting the motions of those stars, astronomers can determine that there must be an extremely massive object in that location, and typically a supermassive black hole is the only possibility.
In the case of M87, these measurements were first conducted in the late 1970s (Sargent et al., Young et al.). Both groups noted that the velocity dispersions near the nucleus required a central mass on the order of $\sim5\times10^9M_{\odot}$. Mass/luminosity ratio profiles were also calculated based on photometry, and both sets of observations noted a steep rise in $M/L$ near the center. Neither group was able to rule out other possible explanations, like a compact star cluster, but a supermassive black hole was - to quote Young et al. - "the most attractive of the models considered". Further observations over the last four decades have ruled out those other options.
The image produced by the Event Horizon Telescope is consistent with a supermassive black hole, as the EHT Collaboration wrote in the first of their papers on the observations:
It is also straightforward to reject some alternative astrophysical interpretations. For instance, the image is unlikely to be produced by a jet-feature as multi-epoch VLBI observations of the plasma jet in M87 (Walker et al. 2018) on scales outside the horizon do not show circular rings. The same is typically true for AGN jets in large VLBI surveys (Lister et al. 2018). Similarly, were the apparent ring a random alignment of emission blobs, they should also have moved away at relativistic speeds, i.e., at ~5 μas day−1 (Kim et al. 2018b), leading to measurable structural changes and sizes. GRMHD models of hollow jet cones could show under extreme conditions stable ring features (Pu et al. 2017), but this effect is included to a certain extent in our Simulation Library for models with Rhigh > 10. Finally, an Einstein ring formed by gravitational lensing of a bright region in the counter-jet would require a fine-tuned alignment and a size larger than that measured in 2012 and 2009.
There are other arguments you can test yourself. For example, the photon ring matches calculations from general relativity, assuming the now-accepted mass of the black hole.
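To make that last point concrete, here is a minimal back-of-the-envelope check in Python. It assumes the mass (roughly 6.5 billion solar masses) and distance (roughly 16.8 Mpc) quoted for M87 in the EHT papers, values introduced here rather than taken from the answer above, and compares the general-relativistic shadow diameter of a non-rotating black hole with the angular size of the observed ring. It is an order-of-magnitude sketch, not a substitute for the full modeling the collaboration did.

    # Rough sanity check: expected angular size of the shadow / photon ring for M87*,
    # assuming M ~ 6.5e9 solar masses and D ~ 16.8 Mpc (assumed round numbers).
    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    M_sun = 1.989e30     # solar mass, kg
    pc = 3.086e16        # parsec, m

    M = 6.5e9 * M_sun    # assumed black-hole mass
    D = 16.8e6 * pc      # assumed distance to M87

    r_g = G * M / c**2                                   # gravitational radius
    shadow = 2 * math.sqrt(27) * r_g                     # Schwarzschild shadow diameter
    theta_uas = shadow / D * (180 / math.pi) * 3600e6    # radians -> microarcseconds

    print(f"predicted shadow diameter: {theta_uas:.0f} microarcseconds")
    # ~40 microarcseconds, consistent with the ~42 microarcsecond ring that was observed

A star with a dust disk of the same total mass could not confine that mass inside such a tiny ring, which is essentially the dynamical argument made above.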
In short: Stellar and gas dynamics require the presence of a large mass in the center of M87, and the image rules out many non-compact objects.
Measurements of wave damping by a grease ice slick in Svalbard using off-the-shelf sensors and open-source electronics
Journal of Glaciology, Volume 63, Issue 238
April 2017 , pp. 372-381
JEAN RABAULT (a1), GRAIG SUTHERLAND (a1), OLAV GUNDERSEN (a1) and ATLE JENSEN (a1)
Department of Mathematics, University of Oslo, Oslo, Norway
Copyright: © The Author(s) 2017
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
DOI: https://doi.org/10.1017/jog.2017.1
Figures:
Fig. 1. Sensors F1, F2, F3 (by order of increasing distance from the camera) deployed near shore in a grease ice slick near Longyearbyen, Svalbard.
Fig. 2. Tracks of the wave sensors F1, F2 and F3 as recorded by their on-board GPS during the whole time of the measurements (UTC 14.10–16.30). Shore is indicated by the gray area. The positions of instruments F1 and F3 at the times of the beginning of each data sample used for computing wave attenuation by grease ice are marked with black stars. The final position of each instrument is indicated by a black dot.
Fig. 3. Sample of the raw data for linear acceleration logged by the sensor F3. The time indicated is UTC on 17 March 2016. Data labeled Z correspond to the IMU axes pointing upwards, while data labeled X and Y correspond to the two IMU axes in the horizontal plane.
Fig. 4. Evolution with time of the PSD for wave elevation recorded by the sensors F1, F2 and F3 computed using the Welch method on 20-min intervals, using three time windows and 50% overlap followed by a sliding average filter. From left to right, top to bottom, the UTC time obtained from the internal GPS for the beginning of the time series used is 14.15, 14.25, 15.05, 15.25.
Table 1. Comparison between the results obtained by Newyear and Martin (1997) in the laboratory and our field measurements in Svalbard
Fig. 5. Illustration of the effect of the choice of the deep or intermediate water depth dispersion relation on the wave attenuation obtained from Eqn (6). One layer label indicates that the formula from Weber (1987) which relies on the deep water dispersion relation is used, while Lamb label indicates that the intermediate water depth dispersion relation is used. Left: comparison of the attenuation curves for low frequencies. Right: quotient of both attenuation curves.
Fig. 6. Comparison of the observed damping rate from experimental data with the one-layer model. The thin curves show the frequency-dependent damping rate obtained by comparing the PSD for wave elevation obtained from sensors F1 and F3 for six equally spaced data samples with start times between 14.20 and 14.45 UTC. The gray area is the corresponding 3 σ confidence interval. The thick line is the prediction of the one-layer model, with an effective viscosity in the water layer ν w = 0.95 × 10−2 m2 s−1. The dashed line is the prediction of the more general Lamb viscous damping solution, computed for the same effective viscosity as the one-layer model using the intermediate water depth dispersion relation.
Fig. 7. Wave damping arising from the bottom and side walls in the experiment by Newyear and Martin (1997) (bottom exp. and sides exp. curves, respectively), along with the effect of the seabed boundary layer on the measurements we report in Svalbard (curve bottom Svlbd.). Decays are computed using Eqn (8) together with the intermediate depth dispersion relation Eqn (9) and the viscosity of water at 0°C. According to the information provided by Newyear and Martin (1997), we use a water depth H = 0.5 m and a wave tank width B = 1 m in the laboratory data case. The water depth used in the field data case is H = 0.8 m. The damping predicted by the one-layer model using the effective viscosity found by Newyear and Martin (1997) together with the dispersion relation for waves in intermediate water depth is presented for comparison (curve Damping laboratory).
Table 2. Summary of the power consumption of the electronics used, under active logging with 5 V power supply
Versatile instruments assembled from off-the-shelf sensors and open-source electronics are used to record wave propagation and damping measured by Inertial Motion Units (IMUs) in a grease ice slick near the shore in Adventfjorden, Svalbard. Viscous attenuation of waves due to the grease ice slick is clearly visible by comparing the IMU data recorded by the different instruments. The frequency dependent spatial damping of the waves is computed by comparing the power spectral density obtained from the different IMUs. We model wave attenuation using the one-layer model of Weber from 1987. The best-fit value for the effective viscosity is ν = (0.95 ± 0.05 × 10−2)m2 s−1, and the coefficient of determination is R 2 = 0.89. The mean absolute error and RMSE of the damping coefficient are 0.037 and 0.044m−1, respectively. These results provide continued support for improving instrument design for recording wave propagation in ice-covered regions, which is necessary to this area of research as many authors have underlined the need for more field data.
Understanding the interaction between surface waves and sea ice is an area of ongoing research. Specific applications of eventual successful quantitative results include the formulation of ocean models for climate, weather and sea state prediction (Christensen and Broström, 2008), the estimation of ice thickness (Wadhams and Doble, 2009) and the analysis of pollution dispersion in the Arctic environment (Pfirman and others, 1995; Rigor and Colony, 1997). Surface waves are also part of a feedback mechanism where reduced ice extent leads to increased fetch and, therefore, increased wave height, which in turn results in the break-up of more of the polar sea ice (Thomson and Rogers, 2014).
Several types of ice are found in polar environments, which affect wave propagation in different ways. Ice floe fields exhibit hydrodynamic interaction with the incoming wave field as well as interaction between adjacent floes (Meylan and Squire, 1996). Continuous ice sheets interact with waves by imposing specific pressure and velocity boundary conditions at the water surface, which are related to flexural effects in the ice and lead to wave damping and modifications in the dispersion relation (Liu and Mollo-Christensen, 1988; Squire and others, 1995; Squire, 2007; Sutherland and Rabault, 2016). Grease ice and pancake ice accumulate and form a viscous layer that strongly attenuates surface waves (Weber, 1987; Keller, 1998). These thinner ice types are in their greatest abundance at the ice margins where wave interaction is strongest and ice–wave interactions the least understood. For these reasons, we focus our work here on these ice types along the marginal seas.
To clarify, grease ice is composed of frazil ice crystals, typically disks 1–4 mm in diameter and 1–100 μm in thickness (Newyear and Martin, 1997). Grease ice formation has been reported in cold areas where supercooled water is kept from freezing by surface turbulence (Newyear and Martin, 1997; De la Rosa and Maus, 2012). Grease ice accumulates and forms slicks of typical thickness 10–20 cm (Smedsrud and Skogseth, 2006; Smedsrud, 2011) that effectively damp high-frequency waves, thereby appearing visually similar to an oil slick (Newyear and Martin, 1997).
The interactions between ice covers and incoming waves have been studied in previous works (Weber, 1987; Squire and others, 1995; Squire, 2007; Sutherland and Rabault, 2016), and several models have been presented. Mass-loading models only take into account the additional inertial effects originating from the presence of the ice and predict a reduction in the wavelength, but do not account for any damping (Peters, 1950). These models are insufficient when describing wave propagation in grease ice (Newyear and Martin, 1997). Similarly, the thin elastic plate model (Greenhill, 1886; Liu and Mollo-Christensen, 1988) describes the influence of flexural rigidity of the ice on wave propagation, which causes an increase of the wavelength but no damping. Such a model was developed to emulate a continuous unbroken ice sheet, but found to be unsuitable for grease ice (Newyear and Martin, 1997). Neither of these models predict wave attenuation, which needs to be described by separate mechanisms. Such mechanisms can be of two kinds: wave scattering, which is especially important in the case of the marginal ice zone (Kohout and Meylan, 2008); and the introduction of an effective viscosity in either the water or the ice layer (Weber, 1987; Keller, 1998; Carolis and Desiderio, 2002). The introduction of viscosity was initially proposed by Weber (1987) and refined by several authors later on (Keller, 1998; Carolis and Desiderio, 2002; Wang and Shen, 2010b). This last class of models is able to successfully reproduce laboratory observations of wave damping by grease ice, considering the effective viscosity as a fitting parameter. Therefore, it is this last class of models that we will examine in this paper.
More explicitly, the one-layer model of Weber (1987) assumes that the viscosity in the upper ice layer is high enough for the momentum equation to be reduced to 'a balance between pressure and friction (creeping motion)'. To explain, the effect of the creeping motion is to 'effectively halt the horizontal motion in the lower fluid at the interface' (Weber, 1987), and the viscous solution of Lamb (1932) is then recovered in the infinitely deep lower layer. An effective eddy viscosity much higher than the molecular viscosity of water is required in the water layer for the model to be consistent with observations and laboratory experiments. This viscous model of wave damping was later extended by Keller (1998), who describes the top layer by means of the incompressible Navier–Stokes equation, while the finite depth lower layer is described by the incompressible Euler equation. The solution in both layers is then obtained by linearizing about the state of rest. This two-layer model is much heavier mathematically than the one-layer model and features two parameters (ice thickness and effective viscosity of the ice layer) if taking the water depth and densities in each layer as given, instead of only one for the one-layer model (effective viscosity).
More recently, Carolis and Desiderio (2002) included an effective viscosity also in the lower layer, therefore introducing a third parameter in the model. The most recent refinement of this class of models was presented by Wang and Shen (2010b), who use a viscoelastic equation to describe the ice layer and model the water below as an inviscid layer. The formulation of Wang and Shen (2010b) went further to include both the flexural and the damping effects of the ice cover in the same model. However, these last models also suffer from a much increased mathematical complexity, which makes them difficult to use in comparisons with field data (Mosig and others, 2015).
Complementary to the numerous modeling efforts mentioned above, laboratory experiments have been performed in parallel to the development of theoretical models. The trends and theoretical curves obtained from the models are presented in the literature and compared with experimental results as a way to assess the quality of each model. However, quantitative metrics are currently not available in the literature to firmly establish the accuracy of any such model, making it difficult to objectively compare the quality of existing published predictions. Initial measurements in grease ice found good qualitative agreement (Newyear and Martin, 1997) with the one-layer model of Weber (1987). The two-layer model was later found to produce better qualitative agreement with experimental data (Newyear and Martin, 1999), but at the cost of more complex mathematics and the need for an additional parameter (ice thickness), as previously explained. The data from both experiments were used also by Carolis and Desiderio (2002) to validate their extended two-layer model with laboratory data. More recently, the effect of a mixed grease–pancake ice field was presented by Wang and Shen (2010a), which led the authors to the introduction of a viscoelastic description of the ice layer. The viscoelastic ice model was finally tested in laboratory experiments involving a variety of mixtures of grease and pancake ice (Zhao and Shen, 2015), and the authors concluded that it did a reasonable qualitative job fitting the observations, given that both equivalent viscosity and shear modulus are fitted using least-squares fits separately for each ice type.
While the development of more sophisticated models is impressive there are some drawbacks to this approach as the corresponding models grow in mathematical complexity and a greater number of fitting parameters is necessary. These advances are making it possible to obtain better qualitative agreement with laboratory and field results by enriching the underlying mathematics, but in the end the parameters used in all models are determined from empirical fit to experimental data (Wang and Shen, 2010a; Zhao and Shen, 2015). Such an approach only visually improves the quality of the model fits by adding more fitting coefficients, and pinpoints the need for the use of relevant quantitative metrics when comparing models. In addition, the wave modes for the two-layer models are obtained numerically from solving nonlinear dispersion relations, which are satisfied by infinitely many roots. Choosing the right wave mode requires some decision criterion, which proves challenging in the intermediate frequency range in particular when a viscoelastic model is used (Wang and Shen, 2010b; Mosig and others, 2015).
Moreover, articles about wave attenuation by grease ice report a wide range of empirical best-fit effective viscosity values. In their initial article, Newyear and Martin (1997) report frequency-dependent effective viscosity in the water layer using the model of Weber (1987) in the range ν w = 1.35 − 2.22 × 10−2 m2s−1, depending also on the grease ice thickness. In their second article, Newyear and Martin (1999) need to use an effective viscosity for the grease ice layer in the range ν i = 2.5 − 3.0 × 10−2 m2s−1 to describe experimental data with the two-layer model of Keller (1998). According to Carolis and Desiderio (2002), such variability arises from the fact that effective viscosities model both ice properties and other phenomena at the origin of wave damping, including the turbulence-driven dissipation. Carolis and Desiderio (2002) present a summary of eddy viscosities obtained from field measurements, that range from $\nu _{\rm f} = 1.6 - 2.0 \times 10^{ - 2} \ {\rm m}^2 {\kern 1pt} {\rm s}^{ - 1} $ in the Weddell sea, where a high level of turbulence is present, down to ν f = 0.24 × 10−2 m2s−1 in the central Arctic Ocean where much less turbulent conditions are observed. This high variability in eddy viscosity, and therefore in wave damping and effective viscosity, has also been observed for the same geographic region over just a few days (Doble and others, 2015). Attempts to provide theoretical justifications for the value of the effective viscosity needed to reproduce the damping observed in experiments have been presented (De Carolis and others, 2005), but cannot yet replace empirical fit to experimental data.
The diversity of models and parameter values discussed in the previous paragraphs makes it challenging to get a clear understanding of the situation. Two main questions arise in this context. First is the issue of how representative of the field conditions the different studies presented are. There is variability in the damping for waves in ice, and probably several different damping regimes are possible in the ocean. Therefore, characterizing the conditions in which each damping regime is observed is important. This will only be possible through the collection of much larger volumes of field data than have been discussed up to now (Mosig and others, 2015). While the number of articles that have been published could suggest that much data are available, articles discussing field data often rely on relatively small datasets and the data available on the whole are very scarce compared with the spatial extent of polar regions and diversity of regimes observed. Therefore, a first possible approach for clarifying the situation would be to collect much more field data. This in turn puts a sharp constraint on the acceptable cost of each measurement. We propose a possible solution for this issue, namely the use of instruments based on off-the-shelf sensors and open-source electronics as a way to reduce costs and increase measurement flexibility.
The second issue is to decide, confronted by the diversity of models presented in the literature, how the appropriate level of model complexity should be selected. Articles published so far have mostly introduced new and ever more complex models, fitted their parameters to a dataset and presented curves offering a visual impression of the quality of the fit. However, going further into determining which model to use should rely on some quantitative metrics that describe the statistical quality of each model. We address this issue by using several of the simplest model-fitting quality estimators, namely the coefficient of determination (Rao, 1973), usually noted R 2, and the mean absolute error (MAE) and RMSE. R 2 is a number between 0 and 1 that indicates the part of the variance in the data that are explained by the model, so that an R 2 value of 1 indicates that the model fully describes the variance in the data, while an R 2 value of 0 indicates that the model fails in explaining anything of the data presented. The MAE and RMSE are other well-established metrics that can be used to quantify the quality of a model (Willmott, 1982; Chai and Draxler, 2014), and that directly measure the discrepancy between model predictions and experimental data.
The organization of the paper is as follows. We first describe the architecture of the instruments used for the measurements. Next we present the data obtained and the methodology used to compare it with the one-layer model used by Weber (1987) and Newyear and Martin (1997). Finally, we present our results and discuss the agreement between our data and the one-layer model, as well as with the laboratory experiment of Newyear and Martin (1997), regarding the value of the effective viscosity observed.
The harsh Arctic environment sets demanding requirements for scientific observations. While several commercial solutions are available (companies selling instruments operating in the Arctic include, e.g. Sea Bird Scientific Co., Campbell Scientific Co., Aanderaa Data Instruments A.S.), these usually come at high cost and have reduced flexibility. By contrast, off-the-shelf sensors and open-source electronics are now sufficiently evolved, well documented and easy to use that they become a credible alternative to traditional solutions. The specificity of open-source software and electronics is that their source code, internal designs and interfaces are made available through a license that provides the right to study, modify and distribute the product (Joshua, 2014). This has many valuable implications for the scientific community. In particular, sharing all the details of the design of an instrument can make it easier to reproduce experiments, by drastically reducing the cost and time necessary to build an exact copy of the instrument initially used. In addition, this makes it easier to build upon a common platform, therefore encouraging modularity and reuse of previous designs rather than fragmented in-house development, which very likely is redundant between research groups and private suppliers, leading to unnecessary costs.
In the present case, we can choose a sensor with the characteristics required for performing measurements in the Arctic (including thermal calibration between −40 and +85°C), while reducing sharply the cost of each logger. Our approach consists in selecting the most cost-effective off-the-shelf sensor able to perform the measurements we need, and to build the whole logger around it using only open-source electronics and software. In our experience, common electronics components which rely on integrated circuits, flash memory and other semiconductors work in a wide range of temperature and, unlike batteries and sensors, do not present major problems in cold environments. Other groups in oceanography have started using open-source solutions, and the use of open-source electronics has been increasingly reported in the literature in the last few years (Baker, 2014; Gandra and others, 2015; Cauzzi and others, 2016). However, the solutions presented so far lack the generality, robustness and flexibility that will make them easily adaptable to a wide variety of projects. There is, therefore, a need for more groups to start sharing their designs together with detailed documentation, which would help to gather momentum around the use of open-source solutions in ocean research and create an open-source ecosystem similar to what has been achieved with, for example, scientific libraries around the Python language. In summary, transition toward at least partially open-source instrumentation can lead to a drastic reduction in price and development time for all groups working with field measurements, while at the same time making reproducibility of experiments by peers easier.
The data obtained in the present study were collected using an off-the-shelf sensor and an open-source logging system, which was created with generality and ease of modification in mind. The logger is based on a microcontroller rather than a traditional computer. This allows reduced price, power consumption and complexity. In addition an open-source GPS chip and SD card reader are integrated in the logger for absolute time reference, position information and data storage capability. Any type of sensor can then be added to build an instrument in a modular fashion.
The sensor model used to measure waves is the rugged, thermally calibrated VN100 Inertial Motion Unit (IMU) manufactured by Vectornav Co. This IMU was already assessed and used to produce valuable data in the field (Sutherland and Rabault, 2016). The VN100 is configured to output magnetic vector, linear accelerations, angular rates, temperature, pressure and checksum in ASCII format at 10 Hz.
The IMUs and additional electronics were enclosed in hard plastic cases, to which a float was attached for ensuring that the whole system follows the waves well. More details about the technical solutions used for building the wave sensors are given in the Appendix.
Three instruments were deployed at sea near Longyearbyen, Svalbard on 17 March 2016. A small boat was used to access the edge of a grease ice slick near shore to deploy the instruments. The water in the measurement area is shallow, down to ~80 cm at the point closest to the shore. Wind waves generated locally came approximately perpendicular to the shore, and approximately parallel to a seawall limiting the extent of the grease ice slick to the East. The influence of the seawall on the waves recorded by the sensors was negligible. To allow easy recovery, the instruments were tied together with a rope, so that the maximum distance allowed between two instruments tied together was ~15 m. The total 30 m rope length used was long enough to not influence the dynamics of the floats and the maximum distance effectively measured between two instruments tied together is ~8 m, which indicates that there was no tension on the rope when measurements were performed. A picture of the ice slick and floats is presented in Figure 1. As visible in Figure 1, the instrument F1 entered the grease ice slick, while instruments F2 and F3 remained close to each other, at the limit of the slick during most of the measurement. The position of the instruments, obtained from the recorded GPS data, is presented in Figure 2. In all the following, the instruments are referred to as F1, F2 and F3. After ~1 h of drifting, F1 got grounded on shore and the wave data obtained by F1 are therefore not reliable after 15.05 UTC.
A sample of the raw data corresponding to the linear accelerations recorded by the instrument F3 is presented in Figure 3. The factory-rated accuracy of the linear acceleration expected from the VN100 is 5 × 10−3 g, with g = 9.81 m s−2 the acceleration of gravity, which is ~2.5 orders of magnitude better than the typical vertical accelerations corresponding to the waves encountered during our measurements in grease ice. Therefore, as visible in Figure 3, the IMU is able to smoothly resolve the wave signal. The measured accelerations are primarily in the vertical direction (Z-axes of the IMU), while only small residual horizontal accelerations are observed (X- and Y-axes of the IMU). This was also observed visually during the deployment of the sensors, with the instruments appearing to be effectively stuck in the horizontal direction relatively to incoming waves due to the viscous grease ice layer.
The 10 Hz sampling frequency is well above the highest wave frequency observed, which is ~1.2 Hz. Therefore, the signal is effectively oversampled with respect to the Nyquist criterion in terms of the water wave frequencies observed. While we record the wave signal at 10 Hz, the IMU works internally at 800 Hz, so that the signal obtained is the result of low-pass filtering done by the IMU processor, which therefore eliminates otherwise possible aliasing of wind-induced high-frequency accelerations.
When computing the wave elevation power spectral density (PSD), the true vertical direction aligned with gravity is first determined by averaging the acceleration over the whole time series for each sensor. The maximum deviation compared with the Z-direction of the IMUs fixed inside the instrument cases is <5°. The vertical wave acceleration is then obtained for each instrument by projecting the linear acceleration recorded by the IMU on the true vertical acceleration. We tested the effect of the angular deviation by adding to the vertical data an artificial random deviation of similar magnitude as the one experimentally measured, and processing the altered signal in the same way as presented in this section. We could not observe any significant influence on the results, as reported in the next section. The PSD of the wave vertical acceleration is computed with the Welch method (Earle, 1996) on 20-min intervals, using three time windows with 50% overlap, and low-pass filtered using a sliding average filter of width eight points. Error bars for the Welch spectra are computed using the Chi-squared (χ 2) error estimate. The PSD for the wave elevation is finally computed from the PSD of the wave vertical acceleration using the formula from Tucker and Pitt (2001):
(1) $${\rm PSD}[\eta ] = \omega ^{ - 4} {\rm PSD}[\eta _{{\rm tt}} ],$$
where ω = 2πf is the angular frequency, f the frequency, η the wave elevation and η tt the vertical acceleration recorded by the IMU.
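As a concrete illustration of this processing chain, the following Python sketch computes the wave elevation PSD from a vertical acceleration record using the Welch method and Eqn (1). The sampling rate, number of windows and overlap follow the description above; the eight-point sliding-average smoothing is omitted and all variable names are illustrative, so this is a minimal sketch of the procedure rather than the exact script used by the authors.

    import numpy as np
    from scipy.signal import welch

    def elevation_psd(acc_z, fs=10.0, n_windows=3):
        """PSD of wave elevation from a vertical-acceleration record, via Eqn (1)."""
        # n_windows segments with 50% overlap covering the whole record
        nperseg = 2 * len(acc_z) // (n_windows + 1)
        f, psd_acc = welch(acc_z, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
        f, psd_acc = f[1:], psd_acc[1:]      # drop f = 0 before dividing by omega^4
        omega = 2.0 * np.pi * f
        return f, psd_acc / omega**4         # PSD[eta] = omega^-4 PSD[eta_tt]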
Since the PSD is quadratic in the wave amplitude, we can write the attenuation coefficient for the wave amplitude between F i and F j at each frequency, a i/j , as:
(2) $$a_{i/j} (\,f) = \sqrt {\displaystyle{{{\rm PSD}_i (\,f)} \over {{\rm PSD}_j (\,f)}}}, $$
where PSD i is the PSD corresponding to the floating instrument F i . The direction of the incoming waves, which is approximately perpendicular to the coast line, is at an angle 10° west of north. The projection of the distance d i/j between instruments F i and F j on the wave propagation direction is therefore computed as:
(3) $$d_{i/j} = {\bi D}_{10} \cdot {\bi r}_{i/j}, $$
with D 10 the wave propagation direction, and r i/j the position vector from F i to F j , computed from the GPS data recorded by the instruments.
The wave damping coefficient α(f) is defined for an incoming monochromatic wave of frequency f as:
(4) $$A_f (x) = A_f (0)e^{ - \alpha (\,f)x}, $$
with A f (x) the monochromatic wave amplitude at a distance x along the wave direction of propagation, and x = 0 is an arbitrary reference position. The damping coefficient α is therefore computed from the attenuation coefficient and the projected distance between F i and F j as:
(5) $$\alpha (\,f) = - \displaystyle{{\log (a_{i/j} (\,f))} \over {d_{i/j}}}. $$
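A minimal sketch of Eqns (2)-(5) is given below: the frequency-dependent damping coefficient is obtained from the elevation PSDs of two instruments and from their separation projected on the wave propagation direction (10 degrees west of north, as stated above). The position vector is assumed to be given in local east/north metres; the names and the sign convention for the heading are illustrative choices, not part of the original processing code.

    import numpy as np

    def damping_coefficient(psd_far, psd_near, r_near_to_far, heading_deg=-10.0):
        """alpha(f) from Eqn (5) between a 'near' and a 'far' instrument."""
        # unit vector of the wave propagation direction, (east, north) components
        heading = np.deg2rad(heading_deg)              # 10 degrees west of north
        d_unit = np.array([np.sin(heading), np.cos(heading)])
        d = float(np.dot(d_unit, r_near_to_far))       # projected distance, Eqn (3)
        a = np.sqrt(psd_far / psd_near)                # amplitude attenuation, Eqn (2)
        return -np.log(a) / d                          # Eqn (5)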
As emphasized in the introduction, several theoretical models describing wave attenuation by an ice cover have been proposed in the literature. While ice conditions can be controlled in laboratory experiments and grease ice thickness and effective viscosity can be measured and used as parameters in a two-layer model, it was not possible to measure those parameters during our field measurements. We could arbitrarily pick up a value for the grease ice layer thickness corresponding to a typical grease ice slick, and use this with one of the more sophisticated two-layer models. However, there would then be no advantage over the simpler one-layer models as the point of two-layer models is to account explicitly for the properties of the ice layer that are otherwise hidden in the value of the effective viscosity. This conflicts with the use of default values for the ice layer properties. Indeed, if default values for the ice layer properties are used based on a coarse assessment of the field conditions, all the fitting of the two-layer models is done based on the value of the effective viscosity of the ice or water layer, similarly to a one-layer model. Therefore, we compare our results with the one-layer model of Weber (1987), which has proven to yield satisfactory agreement with long waves at sea (Weber, 1987) and short waves in the laboratory (Newyear and Martin, 1997). The wave solution described by the one-layer model is the deep water limit of the more general viscous wave damping solution described by Lamb (1932). The generic equation describing the damping rate for any water depth, α g, is (Lamb, 1932):
(6) $$\alpha _{\rm g} (\,f) = \displaystyle{{\nu k} \over {2c_{\rm g} \delta}}, $$
with ν the viscosity that in the original derivation is the viscosity of the fluid, but that can be replaced by an effective eddy viscosity in the water layer ν w to model field data, k the wavenumber, c g = ∂ω/∂k the group velocity, and $\delta = \sqrt {2\nu /\omega} $ the thickness of the Stokes layer.
The one-layer model is obtained by substituting the deep water linear dispersion relation, ω 2 = gk with g = 9.81 m s−2 the acceleration of gravity, in Eqn (6). The expression for the damping rate using an effective viscosity in the water layer is then (Weber, 1987):
(7) $$\alpha (\,f) = \displaystyle{{\nu _{\rm w}^{1/2} \omega ^{7/2}} \over {\sqrt 2 g^2}}. $$
Dissipation at the sea bottom is neglected in this model, which is justified as long as viscous dissipation due to the grease ice layer is the dominant dissipation mechanism. This hypothesis can be checked a posteriori. A formula describing the damping effect of the bottom and side wall boundary layers in a wave tank is provided by Sutherland and others (2017) as:
(8) $$\alpha _{{\rm bs}} = \nu \gamma k\displaystyle{{[(1/\sinh (2kH)) + (1/kB)]} \over {c_{\rm g}}}, $$
where γ = 1/δ, H is the water depth, B is the width of the wave tank and ν is the viscosity of water. In our case with sea water at 0° Celsius ν = 1.83 × 10−6 m2s−1. In the open sea, if one assumes that the seabed is smooth enough that the boundary layer there is similar to what is expected in a wave tank, only the first term in Eqn (8) will contribute since no side walls are present. The effect of the boundary layers on wave damping on both the experiment of Newyear and Martin (1997) and our field measurements will be analyzed in the Discussion section.
Figure 4 shows samples of the observed PSDs. The waves recorded are locally generated wind waves, with a peak frequency ~0.8 Hz during the whole record. The damping effect of the grease ice slick is clearly visible. While instruments F2 and F3 remain close to each other at the limit of the grease ice slick and present little damping in most of the spectra, the instrument F1 that moved into the grease ice shows clear reduction in the wave PSD, especially at high frequencies as is expected from theory (Weber, 1987; Keller, 1998). However, F1 drifts to shallow areas as time goes on and was grounded when recovered. This explains the excessive attenuation observed for times 15.05 and 15.25 UTC, and the corresponding PSD obtained from F1 should not be trusted.
During the time interval between 14.15 and 15.00 UTC, F1 is about 7 m further in the grease ice slick compared with F3, so that reliable wave damping coefficients can be computed. The location of the instruments during this time window is presented in Figure 2. During this time interval we can compute the damping based on the PSD for F1 and F3.
The value of the effective viscosity in the water layer yielding best agreement between Eqn (7) and laboratory experiments of Newyear and Martin (1997) is of the order of (1.35 ± 0.08 × 10−2)m2s−1 at 1.173 Hz for a 11.3 cm grease ice layer thickness (Table 1 of Newyear and Martin (1997)). Using the same model Eqn (7), we compute the best-fit effective viscosity using nonlinear least-squares fit on the attenuation rate obtained through Eqn (5) for the collection of signal sample start times 14.20, 14.25, 14.30, 14.35, 14.40 and 14.45 UTC. We find a value ν w = (0.95 ± 0.05 × 10−2)m2s−1 using a 5-σ confidence interval for the viscosity spread, which corresponds to a relative difference to the value found in previous cold laboratory experiments of $(30 \pm 10\% )$ . Propagating the angular deviation of the true vertical signal leads to relative variations in ν w of <1%. We also compute the coefficient of determination of our model on the field data and obtain a value of R 2 = 0.89. Finally, we compute the value of the MAE and RMSE relative to the prediction of the attenuation parameter, which are 0.037 and 0.044 m−1, respectively. Results are summarized in Table 1.
R 2, MAE and RMSE are computed based on the attenuation coefficient. We include only the measurement by Newyear and Martin (1997) at the frequency closest to the frequency range observed in the field.
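The fit itself can be reproduced with a few lines of Python: the one-layer law, Eqn (7), is fitted to the observed damping curves by nonlinear least squares. The arrays f_obs and alpha_obs stand for the frequencies and damping coefficients obtained from Eqn (5) for the data samples; they are placeholders here, so the snippet is a sketch of the procedure under these assumptions rather than the exact analysis script.

    import numpy as np
    from scipy.optimize import curve_fit

    g = 9.81  # m s^-2

    def one_layer_damping(f, nu_w):
        """Eqn (7): deep-water one-layer damping rate for effective viscosity nu_w."""
        omega = 2.0 * np.pi * f
        return np.sqrt(nu_w) * omega**3.5 / (np.sqrt(2.0) * g**2)

    def fit_effective_viscosity(f_obs, alpha_obs):
        """Least-squares fit of nu_w to the observed damping coefficients."""
        popt, pcov = curve_fit(one_layer_damping, f_obs, alpha_obs, p0=[1.0e-2])
        return popt[0], np.sqrt(pcov[0, 0])   # best-fit nu_w and its 1-sigma spread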
Obtaining a closed form wave damping formula from Eqn (6) requires the use of the deep water dispersion relation, as the more general intermediate water depth dispersion relation is not invertible in terms of exact known functions. However, the deep water approximation, which is already not strictly enforced in the laboratory study by Newyear and Martin (1997), is not strictly justified at the lower frequencies we report here. Therefore, we also compare the damping obtained in the field with Eqn (6), using the same effective viscosity in the water layer ν w = 0.95 × 10−2 m2s−1 and the intermediate water depth dispersion relation:
(9) $$\omega ^2 = gk\tanh (kH),$$
where H = 0.8 m is the water depth. Newyear and Martin (1997, 1999) showed that the effect of the grease ice layer on the real part of the dispersion relation, i.e. changes in the wavelength introduced by the grease ice layer, can be neglected up to ~1.2 Hz, which is over the higher frequency limit for which we are able to compute damping. Therefore, we can use Eqn (9) to compute wavelength and group velocity even in the presence of a grease ice layer.
In the limit of low-frequency waves, the difference between the deep and intermediate water depth dispersion relations becomes important. As shown in Figure 5(a), both dispersion relations lead together with Eqn (6) to a zero attenuation in the low-frequency limit. However, the predicted quotient of the two attenuation curves goes to infinity in the low-frequency limit as shown in Figure 5(b), indicating that the velocity at which convergence to zero happens is very different between the two curves. As can be seen in Figure 5(b), the damping predicted by both dispersion relations is similar down to a frequency of ~0.4 Hz, which is slightly above the minimum frequency measured in the field, before diverging sharply. However, the damping in the frequency domain below 0.4 Hz for which both dispersion relations yield significantly different predictions is much smaller than for higher frequencies, and therefore the absolute difference between both predictions has a small impact on the least-squares fit used for obtaining the effective viscosity.
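The comparison behind Figure 5 can be reproduced with the short Python sketch below: the intermediate water depth dispersion relation, Eqn (9), is solved by Newton iteration, and the resulting wavenumber and group velocity are used in the general damping formula, Eqn (6), which is then compared with the deep water form, Eqn (7), for the same effective viscosity. The depth and viscosity values follow the text; the frequency grid and function names are arbitrary choices for illustration.

    import numpy as np

    g, H, nu_w = 9.81, 0.8, 0.95e-2   # gravity, water depth (m), effective viscosity

    def wavenumber(f, depth=H, n_iter=50):
        """Solve omega^2 = g k tanh(k H), Eqn (9), by Newton iteration."""
        omega = 2.0 * np.pi * np.asarray(f, dtype=float)
        k = omega**2 / g                              # deep-water starting guess
        for _ in range(n_iter):
            res = g * k * np.tanh(k * depth) - omega**2
            dres = g * np.tanh(k * depth) + g * k * depth / np.cosh(k * depth)**2
            k = k - res / dres
        return k

    def lamb_damping(f, nu=nu_w, depth=H):
        """Eqn (6) with finite-depth wavenumber and group velocity."""
        omega = 2.0 * np.pi * np.asarray(f, dtype=float)
        k = wavenumber(f, depth)
        cg = 0.5 * (omega / k) * (1.0 + 2.0 * k * depth / np.sinh(2.0 * k * depth))
        delta = np.sqrt(2.0 * nu / omega)
        return nu * k / (2.0 * cg * delta)

    def deep_water_damping(f, nu=nu_w):
        """Eqn (7), the one-layer (deep water) limit."""
        omega = 2.0 * np.pi * np.asarray(f, dtype=float)
        return np.sqrt(nu) * omega**3.5 / (np.sqrt(2.0) * g**2)

    f = np.linspace(0.2, 1.0, 50)
    ratio = lamb_damping(f) / deep_water_damping(f)   # departs from 1 below ~0.4 Hz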
Results of the damping values obtained from the field measurements, compared with the one-layer model using the fitted effective viscosity value, are presented in Figure 6. The thin lines indicate damping obtained from Eqn (5) for the six signal samples equally spaced in time, with start times between 14.20 and 14.45 UTC, that were used to fit the effective viscosity. The gray area indicates the 3 σ confidence interval based on all the damping curves. The thick black line indicates the one-layer model prediction from Eqn (7) obtained with the effective viscosity value previously reported, and the dashed line the intermediate water depth prediction from Eqn (6) using the same viscosity. Observations of wave attenuation in the field are therefore consistent with both the one-layer model of Weber (1987) and previous experiments in the laboratory.
We find good agreement, both qualitatively and quantitatively, between the one-layer model and the field data as shown by the width of the confidence intervals obtained, the value of the R 2 coefficient and the visual inspection of Figure 6. By contrast, we obtain a bigger discrepancy between the effective viscosity computed from our measurements and obtained by Newyear and Martin (1997). Several explanations can be attempted to explain the corresponding $(30 \pm 10\% )$ discrepancy. Firstly, the effective viscosity reported by Newyear and Martin (1997) increases slightly with frequency, which they attribute to a possible non-Newtonian behavior of the ice layer, so that reproducing their experiment for a lower frequency range corresponding to the field data obtained may yield better agreement. Secondly, as was emphasized in the introduction, the literature does report intrinsic variability in the effective viscosity due to, among other things, the thickness of the grease ice layer or the level of turbulence in the water. Those variables are difficult to measure in the field, and would be challenging to obtain from remote sensing and include in a wave model, therefore not testable at this time. As a consequence it is satisfactory to find that the one-layer model, which models all those effects in one single parameter (the effective viscosity), manages to produce similar results between field measurement and laboratory experiments, at least for the present small-scale study.
Another cause of discrepancy between our field measurements and laboratory results could be friction on the wave tank side walls. As explained previously, it is well known that the boundary layers on a wave tank's side and bottom walls can affect wave damping, and must be accounted for to accurately compare laboratory results with theory (Sutherland and others, 2017). We can therefore use Eqn (8), together with the intermediate water depth dispersion relation (9), to estimate the effect of the seabed boundary layer on the measurements we performed in Svalbard, and of both the wave tank bottom and side walls in the case of the experiment by Newyear and Martin (1997). We use the viscosity of water at 0°C when computing the effect of the boundary layers. Results are presented in Figure 7. The effect of the seabed (or bottom wall) on wave decay has a peak at an intermediate frequency that depends on the water depth, and diminishes for higher frequency waves as the wave motion becomes concentrated near the surface. By contrast, the effect of the side walls increases at higher frequencies.
A comparison between the results presented in Figures 6 and 7 shows that the effect of the seabed on wave damping is about three orders of magnitude smaller than the effect of the grease ice at 0.6 Hz. Therefore, even allowing for the possibility that seabed roughness increases the damping in the field data, the effect of the seabed boundary layer on wave decay can be neglected. In the case of the experiments by Newyear and Martin (1997), Figure 7 shows that while the effect of the side walls increases at higher frequencies, it remains about two orders of magnitude smaller than the decay coefficients reported in their Table 1. However, the presence of ice at the water surface could add another source of friction on the wave tank walls, which is not accounted for here. Assessing the magnitude of this effect would require a direct measurement of the friction of grease ice on the material used for the walls of the wave tank used by Newyear and Martin (1997).
In the present paper, wave sensors are deployed in a grease ice slick near the shore in Svalbard and successfully measure wave spectra and wave attenuation by the grease ice. We present a comparison with the one-layer model of Weber (1987) in the frequency range 0.4–1 Hz. The value of the effective viscosity in the water layer giving the best fit to the experimental data is ν w = (0.95 ± 0.05) × 10−2 m2 s−1, which is (30 ± 10)% less than the value found at slightly higher frequency in previous cold laboratory experiments. We discuss the possible origin of this discrepancy, and cannot find any significant effect of the boundary layers developing on the wave tank walls and bottom. Therefore, we expect that the discrepancy arises from variability in the grease ice properties and the slightly lower frequency observed in the field compared with the laboratory.
In addition to the value of the effective viscosity, we obtain a coefficient of determination R² = 0.89, and values for the MAE and RMSE relative to the predicted attenuation parameter of 0.037 and 0.044 m−1, respectively. This demonstrates that, in realistic conditions corresponding to field data where only partial information is available, so that sophisticated two-layer models cannot be applied, the simpler one-layer model can still provide valuable information about wave damping. Using quantitative metrics such as the coefficient of determination, the MAE and the RMSE could be a method to objectively compare different models for wave attenuation by grease ice.
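A minimal sketch of these goodness-of-fit metrics, assuming arrays of observed and model-predicted attenuation values:

```python
import numpy as np

def fit_metrics(alpha_obs, alpha_pred):
    """Coefficient of determination (R^2), MAE and RMSE between observed and
    predicted attenuation coefficients."""
    alpha_obs = np.asarray(alpha_obs, dtype=float)
    alpha_pred = np.asarray(alpha_pred, dtype=float)
    err = alpha_obs - alpha_pred
    r2 = 1.0 - np.sum(err ** 2) / np.sum((alpha_obs - alpha_obs.mean()) ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    return r2, mae, rmse
```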
The need for more field measurements for wave attenuation by ice has been underlined previously in the literature (Mosig and others, 2015). In this context, the instruments used for performing this work may also be of interest for other groups. Indeed, we built instruments that are versatile and based on off-the-shelf sensors and open-source electronics, which could make experiments both cheaper and easier to replicate by peers. Code and hardware details are shared as open-source material on the Github of the corresponding author (more details in the Appendix). It is hoped that such an open-source policy can become the norm for field measurements.
The help of Aleksey Marchenko during the field work is gratefully acknowledged. The work was funded through the project 'Experiments on Waves in oil and ice' (Petromaks 2, Grant 233901). The authors want to thank two anonymous reviewers whose feedback greatly improved the initial manuscript. More information about the loggers can be found in the Appendix or on the Github of the first author (https://github.com/jerabaul29/LoggerWavesInIce).
Baker, E (2014) Open source data logger for low-cost environmental monitoring. Biodivers. Data J., 2, e1059 (doi: 10.3897/BDJ.2.e1059)
Carolis, GD and Desiderio, D (2002) Dispersion and attenuation of gravity waves in ice: a two-layer viscous fluid model with experimental data validation. Phys. Lett. A, 305(6), 399–412 (doi: http://dx.doi.org/10.1016/S0375-9601(02)01503-7)
Cauzzi, C and 5 others (2016) An open-source earthquake early warning display. Seismol. Res. Lett., 87 (doi: 10.1785/0220150284)
Chai, T and Draxler, RR (2014) Root mean square error (RMSE) or mean absolute error (MAE)? Arguments against avoiding RMSE in the literature. Geosci. Model Dev., 7(3), 1247–1250 (doi: 10.5194/gmd-7-1247-2014)
Chang, M and Bonnet, P (2010) Monitoring in a high-arctic environment: some lessons from MANA. IEEE Pervasive Comput., 9(4), 16–23 (doi: http://doi.ieeecomputersociety.org/10.1109/MPRV.2010.53)
Christensen, K and Broström, G (2008) Waves in sea ice. Technical Report, Norwegian Meteorological Institute, Oslo
De Carolis, G, Olla, P and Pignagnoli, L (2005) Effective viscosity of grease ice in linearized gravity waves. J. Fluid Mech., 535, 369–381 (doi: 10.1017/S002211200500474X)
De la Rosa, S and Maus, S (2012) Laboratory study of frazil ice accumulation under wave conditions. Cryosphere, 6(1), 173–191 (doi: 10.5194/tc-6-173-2012)
Doble, MJ, De Carolis, G, Meylan, MH, Bidlot, JR and Wadhams, P (2015) Relating wave attenuation to pancake ice thickness, using field measurements and model results. Geophys. Res. Lett., 42(11), 4473–4481 (doi: 10.1002/2015GL063628), 2015GL063628
Earle, MD (1996) Nondirectional and directional wave data analysis procedures. DBC Tech. Doc 96-01, National Data Buoy Centre, National Oceanic and Atmospheric Administration, U.S. Department of Commerce, Washington, DC.
Gandra, M, Seabra, R and Lima, FP (2015) A low-cost, versatile data logging system for ecological applications. Limnol. Oceanogr.: Methods, 13(3), 115–126 (doi: 10.1002/lom3.10012), e10012
Greenhill, AG (1886) Wave motion in hydrodynamics. Am. J. Math., 9(1), 62–96
Pearce, JM (2014) Open-Source Lab. Elsevier, Amsterdam
Keller, JB (1998) Gravity waves on ice-covered water. J. Geophys. Res., 103, 7663–7669 (doi: 10.1029/97JC02966)
Kohout, AL and Meylan, MH (2008) An elastic plate model for wave attenuation and ice floe breaking in the marginal ice zone. J. Geophys. Res.: Oceans 113(C9), n/a–n/a (doi: 10.1029/2007JC004434), c09016
Lamb, H (1932) Hydrodynamics. Cambridge Mathematical Library, Cambridge University Press, Cambridge
Liu, AK and Mollo-Christensen, E (1988) Wave propagation in a solid ice pack. J. Phys. Oceanogr., 18(11), 1702–1712 (doi: 10.1175/1520-0485)
Meylan, MH and Squire, VA (1996) Response of a circular ice floe to ocean waves. J. Geophys. Res.: Oceans, 101(C4), 8869–8884 (doi: 10.1029/95JC03706)
Mosig, JEM, Montiel, F and Squire, VA (2015) Comparison of viscoelastic-type models for ocean wave attenuation in icecovered seas. J. Geophys. Res.: Oceans, 120(9), 6072–6090 (doi: 10.1002/2015JC010881)
Newyear, K and Martin, S (1997) A comparison of theory and laboratory measurements of wave propagation and attenuation in grease ice. J. Geophys. Res.: Oceans, 102(C11), 25091–25099 (doi: 10.1029/97JC02091)
Newyear, K and Martin, S (1999) Comparison of laboratory data with a viscous two-layer model of wave propagation in grease ice. J. Geophys. Res.: Oceans, 104(C4), 7837–7840 (doi: 10.1029/1999JC900002)
Peters, AS (1950) The effect of a floating mat on water waves. Commun. Pure Appl. Math., 3(4), 319–354 (doi: 10.1002/cpa.3160030402)
Pfirman, S, Eicken, H, Bauch, D and Weeks, W (1995) The potential transport of pollutants by arctic sea ice. Sci. Total Environ., 159(23), 129–146 (doi: http://dx.doi.org/10.1016/0048-9697(95)04174-Y)
Rao, CR (1973) Linear statistical inference and its applications. Wiley, New York (doi: 10.1002/9780470316436)
Rigor, I and Colony, R (1997) Sea-ice production and transport of pollutants in the Laptev Sea, 1979–1993. Sci. Total Environ., 202(13), 89–110 (doi: http://dx.doi.org/10.1016/S0048-9697(97)00107-1), Environmental Radioactivity in the Arctic
Smedsrud, LH (2011) Grease-ice thickness parameterization. Ann. Glaciol., 52, 77–82 (doi: 10.3189/172756411795931840)
Smedsrud, LH and Skogseth, R (2006) Field measurements of arctic grease ice properties and processes. Cold Regions Sci. Technol., 44(3), 171–183 (doi: http://dx.doi.org/10.1016/j.coldregions.2005.11.002)
Squire, V (2007) Of ocean waves and sea-ice revisited. Cold Regions Sci. Technol., 49(2), 110–133 (doi: http://dx.doi.org/10.1016/j.coldregions.2007.04.007)
Squire, V, Dugan, JP, Wadhams, P, Rottier, PJ and Liu, AK (1995) Of ocean waves and sea-ice. Annu. Rev. Fluid Mech., 27, 115–168 (doi: 10.1146/annurev.fl.27.010195.000555)
Sutherland, G and Rabault, J (2016) Observations of wave dispersion and attenuation in landfast ice. J. Geophys. Res.: Oceans, 121(3), 1984–1997 (doi: 10.1002/2015JC011446)
Sutherland, G, Halsne, T, Rabault, J and Jensen, A (2017) The attenuation of monochromatic surface waves due to the presence of an inextensible cover. Wave Motion, 68, 88–96 (doi: http://dx.doi.org/10.1016/j.wavemoti.2016.09.004)
Thomson, J and Rogers, WE (2014) Swell and sea in the emerging arctic ocean. Geophys. Res. Lett., 41(9), 3136–3140 (doi: 10.1002/2014GL059983)
Tucker, M and Pitt, E (2001) Waves in ocean engineering. Elsevier ocean engineering book series, Elsevier, University of Michigan
Wadhams, P and Doble, MJ (2009) Sea ice thickness measurement using episodic infragravity waves from distant storms. Cold Regions Sci. Technol., 56(23), 98–101 (doi: http://dx.doi.org/10.1016/j.coldregions.2008.12.002)
Wang, R and Shen, HH (2010a) Experimental study on surface wave propagating through a greasepancake ice mixture. Cold Regions Sci. Technol., 61(23), 90–96 (doi: http://dx.doi.org/10.1016/j.coldregions.2010.01.011)
Wang, R and Shen, HH (2010b) Gravity waves propagating into an ice-covered ocean: a viscoelastic model. J. Geophys. Res.: Oceans, 115(C6), n/a–n/a (doi: 10.1029/2009JC005591), c06024
Weber, JE (1987) Wave attenuation and wave drift in the marginal ice zone. J. Phys. Oceanogr., 17(12), 2351–2361 (doi: 10.1175/1520-0485(1987)017<2351:WAAWDI>2.0.CO;2)
Willmott, CJ (1982) Some comments on the evaluation of model performance. Bull. Am. Meteorol. Soc., 63(11), 1309–1313 (doi: 10.1175/1520-0477(1982)063<1309:SCOTEO>2.0.CO;2)
Zhao, X and Shen, HH (2015) Wave propagation in frazil/pancake, pancake, and fragmented ice covers. Cold Regions Sci. Technol., 113, 71–80 (doi: http://dx.doi.org/10.1016/j.coldregions.2015.02.007)
TECHNICAL DETAILS ABOUT THE INSTRUMENTS USED
The general architecture of the instruments is the following. An Arduino Mega microcontroller board is used together with a GPS chip, an active GPS antenna, and an SD card reader to build a modular instrument. The GPS chip communicates with the microcontroller through one of its four physical serial interfaces, while the SD card reader is wired on the SPI microcontroller bus. Sensors can then be added by simply plugging them into one of the three remaining physical serial interfaces of the microcontroller, or by using the SPI or I2C bus. In addition, the GPS, SD card reader and sensors receive power through a MOSFET transistor that can be switched on and off from the microcontroller, which allows the whole instrument to be put in a low-power consumption mode if requested.
Since the VN100 sensor used in our study relies on a RS232 3 V level for serial communications while the microcontroller board uses TTL 5 V logic, a MAX232 logic converter chip is used for level conversion between the VN100 and the serial port of the microcontroller board used for logging. Data are logged directly in ASCII format at 10 Hz. Binary format can be used to compress the data, but this was not necessary in our case and therefore ASCII was chosen for ease of programming and debugging. The current consumption of the whole system, when logging both the IMU at 10 Hz and GPS data at 1 Hz, is 180 mA at 5 V (see Table 2). This is ~20 times less than the overall power consumption that was needed for powering the MOXA computer-based instrument used in Sutherland and Rabault (2016).
Battery autonomy is the limiting factor for long-duration logging in cold regions, an even more critical issue than in temperate environments since the capacity of traditionally used lead-acid batteries drops drastically in the cold (Chang and Bonnet, 2010). We solve this issue by using industry-grade rechargeable prismatic lithium iron (LiFe) batteries that feature low self-discharge and excellent performance in the cold. Two 40 Ah, 3.2 V cells are assembled in series to provide a voltage of 6.4 V, which is reduced to the 5 V needed by our electronics using a low-dropout voltage regulator. In addition, a protection circuit module is inserted between the battery and the voltage regulator to prevent overcharging or overdischarging. This solution was tested in the laboratory with the complete instrument, including the VN100 IMU, at a temperature of −18°C, and was able to work continuously for over 8 days. When only a few hours of logging are needed, we use more affordable lithium-ion batteries.
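As a back-of-envelope check of this autonomy figure (a sketch only, assuming the low-dropout regulator draws roughly the same current from the battery as the 5 V load, and ignoring cold-temperature derating and regulator overhead):

```python
# Two 40 Ah cells in series: the voltage doubles but the capacity stays 40 Ah.
capacity_ah = 40.0
load_current_a = 0.180   # measured draw at 5 V with the IMU at 10 Hz and GPS at 1 Hz

runtime_h = capacity_ah / load_current_a
print(f"ideal runtime: {runtime_h:.0f} h (~{runtime_h / 24:.1f} days)")
# ~222 h (~9.3 days), consistent with the >8 days observed at -18 degrees C
# once derating in the cold is taken into account.
```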
To make the instrument resilient to failures caused by external events, such as short power interruptions due to shocks or other real-world issues, a watchdog timer is used to reboot the microcontroller in case of a malfunction. In addition, the microcontroller logs the data to a new file every 15 min, and the name of the previous file is stored in the nonvolatile EEPROM memory of the microcontroller so that previous data are not erased even in the event of a reboot by the watchdog. This ensures that at most 15 min of data will be lost in case of a watchdog reset or power loss.
The microcontroller board, IMU, GPS chip and antenna, SD card reader, and battery with protection circuit module are enclosed in a robust Pelican case. The Pelican cases are chosen so that the whole system is buoyant, independently of any additional float.
Using microcontrollers presents several advantages over more powerful computer systems, such as the MOXA computer used in Sutherland and Rabault (2016). Price, power consumption, complexity, weight and size can be reduced, and therefore more sensors can be deployed for the same budget, with longer autonomy (over 1 week of continuous measurements for an instrument weighing <5 kg) and without operator intervention. Microcontrollers are not able to run sophisticated embedded processing, but they are able to perform logging and to interact with various sensors. More details about the code and the electrical components used are available on the corresponding author's GitHub repository (https://github.com/jerabaul29/LoggerWavesInIce).
Corporate Finance & Accounting Financial Analysis
Geometric Mean Definition
What Is the Geometric Mean?
The geometric mean is the average of a set of products, the calculation of which is commonly used to determine the performance results of an investment or portfolio. It is technically defined as the nth root of the product of n numbers. The geometric mean must be used when working with percentages, which are derived from values, while the standard arithmetic mean works with the values themselves.
The geometric mean is an important tool for calculating portfolio performance for many reasons, but one of the most significant is that it takes into account the effects of compounding.
The geometric mean is the average rate of return of a set of values calculated using the products of the terms.
Geometric mean is most appropriate for series that exhibit serial correlation—this is especially true for investment portfolios.
Most returns in finance are correlated, including yields on bonds, stock returns, and market risk premiums.
For volatile numbers, the geometric average provides a far more accurate measurement of the true return by taking into account year-over-year compounding that smooths the average.
The Formula for Geometric Mean
$$\mu_{\text{geometric}} = \left[(1+R_1)(1+R_2)\ldots(1+R_n)\right]^{1/n} - 1$$
where $R_1, \ldots, R_n$ are the returns of an asset (or other observations for averaging).
Understanding the Geometric Mean
The geometric mean, sometimes referred to as the compounded annual growth rate or time-weighted rate of return, is the average rate of return of a set of values calculated using the products of the terms. What does that mean? The geometric mean takes several values, multiplies them together, and raises the product to the power 1/n.
For example, the geometric mean calculation can be easily understood with simple numbers, such as 2 and 8. If you multiply 2 and 8, then take the square root (the ½ power since there are only 2 numbers), the answer is 4. However, when there are many numbers, it is more difficult to calculate unless a calculator or computer program is used.
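A short Python illustration of both calculations (the returns series is made up for the example):

```python
from statistics import geometric_mean

print(geometric_mean([2, 8]))            # 4.0, the square root of 2 * 8

# Geometric mean of a series of periodic returns, expressed as decimals.
returns = [0.10, -0.05, 0.20]
growth_factors = [1 + r for r in returns]
average_return = geometric_mean(growth_factors) - 1
print(f"average compounded return per period: {average_return:.4%}")
```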
The longer the time horizon, the more critical compounding becomes, and the more appropriate the use of geometric mean.
The main benefit of using the geometric mean is that the actual amounts invested do not need to be known; the calculation focuses entirely on the return figures themselves and presents an "apples-to-apples" comparison when looking at two investment options over more than one time period. The geometric mean is always less than or equal to the arithmetic mean, which is a simple average.
How to Calculate the Geometric Mean
To calculate compounding interest using the geometric mean of an investment's return, an investor needs to first calculate the interest in year one, which is $10,000 multiplied by 10%, or $1,000. In year two, the new principal amount is $11,000, and 10% of $11,000 is $1,100. The new principal amount is now $11,000 plus $1,100, or $12,100.
In year three, the new principal amount is $12,100, and 10% of $12,100 is $1,210. At the end of 25 years, the $10,000 turns into $108,347.06, which is $98,347.06 more than the original investment. The shortcut is to multiply the current principal by one plus the interest rate, and then raise the factor to the number of years compounded. The calculation is $10,000 × (1 + 0.1)^25 = $108,347.06.
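The year-by-year compounding and the closed-form shortcut described above can be checked with a few lines of Python:

```python
principal = 10_000.0
rate = 0.10
years = 25

# Year-by-year compounding.
value = principal
for _ in range(years):
    value += value * rate

# Closed-form shortcut: principal * (1 + rate) ** years.
shortcut = principal * (1 + rate) ** years

print(f"{value:,.2f}")     # 108,347.06
print(f"{shortcut:,.2f}")  # 108,347.06
```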
Example of Geometric Mean
If you have $10,000 and get paid 10% interest on that $10,000 every year for 25 years, the amount of interest is $1,000 every year for 25 years, or $25,000. However, this does not take compounding into consideration. That is, the calculation assumes you only get paid interest on the original $10,000, not the $1,000 added to it every year. If the investor gets paid interest on the interest, it is referred to as compound interest, which is calculated using the geometric mean.
Using the geometric mean allows analysts to calculate the return on an investment that gets paid interest on interest. This is one reason portfolio managers advise clients to reinvest dividends and earnings.
The geometric mean is also used for present value and future value cash flow formulas. The geometric mean return is specifically used for investments that offer a compounding return. Going back to the example above, instead of only making $25,000 on a simple interest investment, the investor makes $108,347.06 on a compounding interest investment.
Simple interest or return is represented by the arithmetic mean, while compounding interest or return is represented by the geometric mean.
Leishmania braziliensis
Plasmodium falciparum
Pediculus
Mice, Nude
Mice, SCID
Onychomycosis
Leishmaniasis, Visceral
Helicobacter Infections
Leishmaniasis, Cutaneous
Malaria, Falciparum
Pituitary ACTH Hypersecretion
Gastrinoma
Neoplasm Recurrence, Local
Trypanosomiasis, African
Tinea Pedis
Parasitemia
Syphilis, Latent
Urinary Incontinence, Stress
Tuberculosis, Pulmonary
Neoplasms, Experimental
Duodenal Neoplasms
Schistosomiasis mansoni
Leishmaniasis, Mucocutaneous
Zollinger-Ellison Syndrome
Testicular Neoplasms
Foot Dermatoses
Scalp Dermatoses
Vaginosis, Bacterial
Neoplasms, Germ Cell and Embryonal
Candidiasis, Vulvovaginal
Pelvic Neoplasms
Tuberculosis, Multidrug-Resistant
Lice Infestations
Mastitis, Bovine
Skin Diseases, Infectious
Hyperparathyroidism, Primary
ACTH-Secreting Pituitary Adenoma
Antiprotozoal Agents
Artemisinins
Trypanocidal Agents
Fluorenes
Drug Combinations
Antimony Sodium Gluconate
Nitroimidazoles
Primaquine
Anti-Ulcer Agents
Antitubercular Agents
Mefloquine
Sesquiterpenes
Ethanolamines
Meglumine
Oxamniquine
Penicillins
Amoxicillin-Potassium Clavulanate Combination
2-Pyridinylmethylsulfinylbenzimidazoles
Melarsoprol
Schistosomicides
Antinematodal Agents
Antifungal Agents
Quinolines
Sulfalene
Organometallic Compounds
Nifurtimox
Amodiaquine
Acetamides
Lansoprazole
Oxazolidinones
Reagins
Phosphorylcholine
Penicillin G Procaine
Probenecid
Clavulanic Acids
Aminoquinolines
Antitrichomonal Agents
Pyrantel Pamoate
Enbucrilate
Furazolidone
Anti-Infective Agents, Urinary
Drug Therapy, Combination
Combined Modality Therapy
Parasite Egg Count
Clinical Trials as Topic
Drug Administration Schedule
Curing Lights, Dental
Directly Observed Therapy
Genetic Therapy
Spiritual Therapies
Treatment Failure
Remission Induction
Light-Curing of Dental Adhesives
Antineoplastic Combined Chemotherapy Protocols
Photochemotherapy
Parathyroidectomy
Suburethral Slings
Injections, Intralesional
Neoplasm Staging
Radioimmunotherapy
Technology, Dental
Hematopoietic Stem Cell Transplantation
Catheter Ablation
Islets of Langerhans Transplantation
Medicine, African Traditional
Microbial Sensitivity Tests
Transplantation, Homologous
Patient Compliance
Diseases; Chemicals and Drugs; Analytical, Diagnostic and Therapeutic Techniques and Equipment; Phenomena and Processes; Health Care
Treatment Outcome; Antiprotozoal Agents; Drug Therapy, Combination; Anti-Bacterial Agents; Antimalarials; Artemisinins; Trypanocidal Agents; Fluorenes; Praziquantel; Anthelmintics; Amoxicillin; Combined Modality Therapy; Drug Combinations; Antimony Sodium Gluconate; Time Factors; Nitroimidazoles; Primaquine; Recurrence; Onychomycosis; Metronidazole; Follow-Up Studies; Paromomycin; Parasite Egg Count; Leishmaniasis, Visceral; Azithromycin
Malawi: Established in 2002, the Beit CURE International Hospital in Blantyre has 66 beds and has expertise in total hip and knee replacement surgery. (wikipedia.org)
Direct-acting antiviral treatment is highly effective at curing hepatitis C in people who inject drugs and in people receiving opioid substitution therapy (OST), a systematic review and meta-analysis of 38 studies published in The Lancet Gastroenterology and Hepatology shows. (infohep.org)
Is hepatitis B curable nowadays? (medhelp.org)
The Water Cure is a fever dream, a blazing vision of suffering, sisterhood and transformation. (penguin.co.uk)
Sophie Mackintosh is the author of The Water Cure, which was longlisted for the Man Booker Prize 2018. (penguin.co.uk)
Water cure may refer to: Water cure (therapy), a course of medical treatment by hydrotherapy. (wikipedia.org)
Water cure (torture), a form of torture in which a person is forced to drink large quantities of water. (wikipedia.org)
The Water Cure, a 1916 film starring Oliver Hardy. (wikipedia.org)
Now, researchers say a child may have been cured of HIV infection Experts are cautious about making too much of a single case. (voanews.com)
Kenya: The first CURE hospital opened in 1998 in Kijabe. (wikipedia.org)
Research on electro-cures is often tainted by conflicts of interest. (scientificamerican.com)
After four years of hosting 'Scrapping for a Cure' in the North Alabama Area, we have moved it to the Greater Memphis Area in hopes to grow and reach more people. (google.com)
The proportion of people with a disease that are cured by a given treatment, called the cure fraction or cure rate, is determined by comparing disease-free survival of treated people against a matched control group that never had the disease. (wikipedia.org)
When all of the non-cured people have died or re-developed the disease, only the permanently cured members of the population will remain, and the DFS curve will be perfectly flat. (wikipedia.org)
The Berkson and Gage equation is S ( t ) = p + [ ( 1 − p ) × S ∗ ( t ) ] {\displaystyle S(t)=p+[(1-p)\times S^{*}(t)]} where S ( t ) {\displaystyle S(t)} is the proportion of people surviving at any given point in time, p {\displaystyle p} is the proportion that are permanently cured, and S ∗ ( t ) {\displaystyle S^{*}(t)} is an exponential curve that represents the survival of the non-cured people. (wikipedia.org)
The analysis allows the statistician to determine the proportion of people that are permanently cured by a given treatment, and also how long after treatment it is necessary to wait before declaring an asymptomatic individual to be cured. (wikipedia.org)
Remission is the state of absence of disease activity in patients known to have a chronic illness that cannot be cured. (wikipedia.org)
Fatal Cure tells the story of two young doctors Angela and David Wilson, with their 9-year-old daughter who suffers from a chronic disease, cystic fibrosis, who are lured to a small town in Vermont to start a career. (wikipedia.org)
Misión de Carlos Cure: fortalecer relación con Hugo Chávez" [Mission of Carlos Cure: strengthen relations with Hugo Chávez]. (wikipedia.org)
Conversely, a person that has successfully managed a disease, such as diabetes mellitus, so that it produces no undesirable symptoms for the moment, but without actually permanently ending it, is not cured. (wikipedia.org)
In this model, the survival at any given time is equal to those that are cured plus those that are not cured, but who have not yet died or, in the case of diseases that feature asymptomatic remissions, have not yet re-developed signs and symptoms of the disease. (wikipedia.org)
The Cure are an English rock band formed in Crawley, West Sussex in 1976. (wikipedia.org)
The Wiltshire cure is a traditional English technique for curing bacon and ham. (wikipedia.org)
Uganda: Specializing in treating neurosurgical needs, the CURE Children's Hospital of Uganda opened in 2000 and has been recognized as a global leader in treatment of hydrocephalus. (wikipedia.org)
More precisely, the CURE project paved the way for a multi-disciplinary study with three levels of analysis: estimation of the mortality ratio of cohorts of workers compared to the general population, estimation of the correlations between uranium exposure and the risk of (cancerous and non-cancerous) diseases, and estimation of the link between uranium exposure and relevant biomarkers for the study of the biological and health effects of this radionuclide. (irsn.fr)
Some diseases may be discovered to be technically incurable, but also to require treatment so infrequently as to be not materially different from a cure. (wikipedia.org)
Other diseases may prove to have a multiple plateaus, so that what was once hailed as a "cure" results unexpectedly in very late relapses. (wikipedia.org)
Cure Violence's founder and executive director, Gary Slutkin, is an epidemiologist and a physician who for ten years battled infectious diseases in Africa. (wikipedia.org)
officially
The Cure Bowl, officially the AutoNation Cure Bowl for sponsorship purposes, is an annual American college football bowl game played in December of each year starting in 2015. (wikipedia.org)
Although the band never officially released anything as Easy Cure, bootlegs of their early demos have been in circulation for a number of years, and in 2004 the Deluxe Edition of The Cure's 1979 album Three Imaginary Boys was released with a rarities bonus disc featuring a number of Easy Cure demo and live recordings from 1977 and 1978. (wikipedia.org)
Then after extravagant hopes and promises of cure, there have followed failures, which have thrown the employment of this agent into disrepute, to be again after time revived and brought into popular favor. (scientificamerican.com)
Technology is once again being touted as a cure-all, this time for what ails the American health-care industry. (technologyreview.com)
Once upon a time, damaged women came here to be cured. (penguin.co.uk)
Another way of determining the cure fraction and/or "cure time" is by measuring when the hazard rate in a diseased group of individuals returns to the hazard rate measured in the general population. (wikipedia.org)
The earliest point in time that the curve goes flat is the point at which all remaining disease-free survivors are declared to be permanently cured. (wikipedia.org)
Cure, who had no acting experience at the time, auditioned for a part in the Disney's Miracle, a film focusing on the Team USA's Miracle on Ice at the 1980 Olympics. (wikipedia.org)
The orthopedic training program has been certified by the College of Surgeons of East, Central and Southern Africa, where surgeons will spend five years training at the hospital and then work at another CURE hospital for an additional amount of time. (wikipedia.org)
Consequently, patients, parents and psychologists developed the notion of psychological cure, or the moment at which the patient decides that the treatment was sufficiently likely to be a cure as to be called a cure. (wikipedia.org)
Cure (キュア, Kyua) is a 1997 Japanese psychological thriller film with elements of horror and film noir written and directed by Kiyoshi Kurosawa, starring Koji Yakusho, Masato Hagiwara, Tsuyoshi Ujiki and Anna Nakagawa. (wikipedia.org)
For example, a patient may declare himself to be "cured", and to determine to live his life as if the cure were definitely confirmed, immediately after treatment. (wikipedia.org)
CURE clubfoot, a non-surgical treatment for the correction of clubfoot in young children, is hosted in this hospital. (wikipedia.org)
Zambia: The Beit CURE International Hospital of Zambia was established in 2004, in Lusaka when CURE signed an agreement with the Zambian Ministry of Health to operate a pediatric teaching hospital, specializing in treatment and care of children living with disabilities. (wikipedia.org)
A cure is a completely effective treatment for a disease. (wikipedia.org)
It is possible to use cure rate models to compare the efficacy of different treatments. (wikipedia.org)
The CURE (Concerted Uranium Research in Europe) project was coordinated by IRSN in 2013 and 2014. (irsn.fr)
The Cure Bowl is so named to promote awareness and research of breast cancer, with proceeds going to the Breast Cancer Research Foundation. (wikipedia.org)
Jones's concerns could apply to our era, when electro-cures for mental illness have once again been 'brought into popular favor. (scientificamerican.com)
Cure Violence now refers to the larger organization and overall health approach, while local program partner sites often operate under other names. (wikipedia.org)
Smoking is not part of the process, although bacon is often smoked after being cured. (wikipedia.org)
Cure made his film debut in Walt Disney Pictures' Miracle in 2004. (wikipedia.org)
On 5 May Easy Cure made the first of many regular live appearances at the Crawley pub then known as The Rocket. (wikipedia.org)
Would a healthy man who understood how diet could be used to prevent, reverse and cure disease be someone who was likely to have a heart attack? (healthimpactnews.com)
Inherent in the idea of a cure is the permanent end to the specific instance of the disease. (wikipedia.org)
Cure Violence approaches violence in an entirely new way: as a contagious disease that can be stopped using the same health strategies employed to fight epidemics. (wikipedia.org)
We wanted to help raise money for the non-profit organization, Cystic Fibrosis Foundation, in hopes that one day soon a cure will be found. (google.com)
CURE International is a Christian nonprofit organization based in New Cumberland, Pennsylvania. (wikipedia.org)
The Cure Violence method was developed using World Health Organization derived strategies and has won multiple awards. (wikipedia.org)
Mental health professionals now use the term talking cure more widely to mean any of a variety of talking therapies. (wikipedia.org)
CCM magazine reported that all original members of the band had been in mainstream bands prior to the formation of Idle Cure. (wikipedia.org)
The founding members of the Cure were school friends at Notre Dame Middle School in Crawley, West Sussex, whose first public performance was at an end-of-year show in April 1973 as members of a one-off school band called the Obelisk. (wikipedia.org)
In January 1977, following Martin Creasy's departure, and increasingly influenced by the emergence of punk rock, Malice's remaining members became known as Easy Cure after a song written by drummer Laurence Tolhurst. (wikipedia.org)
What if that same substance could be given to autistic children and 85% of them would experience improvement in their autism and many would be completely cured? (healthimpactnews.com)
This is a blog about health and some of my experiences and things I have heard about that can in some cases help cure and improve general health and well-being. (naturalhealthremedies.org)
La Cure is a small village located thirty miles north of Geneva, Switzerland. (wikipedia.org)
Is The U.S. Medical Mafia Murdering Alternative Health Doctors Who Have Real Cures Not Approved by the FDA? (healthimpactnews.com)
When his tenure as CEO and President of Kirschner Medical was over, Dr. Harrison created the Crippled Children's United Rehabilitation Effort (CCURE or C²URE, later CURE), hoping to meet that need. (wikipedia.org)
In April 2016, the Satmed eHealth platform was deployed to the Niamey CURE hospital to provide communication between staff and national and international doctors to receive medical counselling, remote diagnosis of patients by experts across the World, online training for doctors and nurses to improve their knowledge, and easy access to the internet, via satellite. (wikipedia.org)
United Arab Emirates: The CURE Oasis Hospital, located in Al Ain, was established in 1960 to bring American medical care to the UAE. (wikipedia.org)
Fatal Cure is a medical thriller written by Robin Cook. (wikipedia.org)
Ten years later, in a 2014 interview with the Minnesota magazine, Let's Play Hockey, Cure noted "The story of 'Miracle' is truly a love story about 20 young boys coming together and taking on the world. (wikipedia.org)
Cure is a Japanese rock music and fashion magazine published monthly. (wikipedia.org)
The Official Cure Magazine Shop in Los Angeles, XENON has been providing U.S. fans to interact with bands through Livestream Q&A session events since August 2014. (wikipedia.org)
In 2010, as a seventeen-year-old, Cure was described as "the next big thing in women's cycling. (wikipedia.org)
The AIC-CURE International Children's Hospital is a 30-bed hospital that serves approximately 8,000 children per year, also operating mobile clinics to remote regions. (wikipedia.org)
That year, Easy Cure won a talent competition with German label Hansa Records, and received a recording contract. (wikipedia.org)
The inaugural AutoNation Cure Bowl took place on December 19, 2015, and was nationally televised on the CBS Sports Network. (wikipedia.org)
When a person has the common cold, and then recovers from it, the person is said to be cured, even though the person might someday catch another cold. (wikipedia.org)
Originally it was a dry cure method that involved applying salt to the meat for 10-14 days. (wikipedia.org)
Listen to the left as Rick tells his amazing story and watch the videos below for the full story of Rick Simpson and how he RE-discovered this cure for cancer. (cureyourowncancer.org)
Cure Violence follows a three-pronged health approach to violence prevention: detection/interruption of planned violent activity, behavior change of high-risk individuals, and changing community norms. (wikipedia.org)
He will be known as the man who rediscovered the cure for cancer by everyone. (cureyourowncancer.org)
All About Jazz states Cure All's "simplicity is refreshing rather than predictable" and that Walter's sidemen for the album are known for their appreciation for the spirit of New Orleans. (wikipedia.org)
During March 1977 Easy Cure hired and fired a vocalist known only as Gary X, who by April had been replaced by Peter O'Toole (not the actor). (wikipedia.org)
The Talking Cure and chimney sweeping were terms Bertha Pappenheim, known in case studies by the alias Anna O., used for the verbal therapy given to her by Josef Breuer. (wikipedia.org)
The Wiltshire cure has been a wet cure, soaking the meat in brine for 4-5 days, since the First World War. (wikipedia.org)
So, now that they figured out the cure for HCV, do you think we can hope in the near future. (medhelp.org)
Afghanistan: CURE accepted the invitation from Afghan Ministry of Public Health to take over a hospital located in Kabul in January, 2005. (wikipedia.org)
Cure Violence, founded by University of Illinois at Chicago School of Public Health Epidemiologist Gary Slutkin, M.D. and ranked one of the top twenty NGOs by the Global Journal in 2015, is a public health anti-violence program. (wikipedia.org)
In December, 2015, Cure Violence has 23 cities implementing the Cure Violence health approach in over 50 sites in the U.S. International program partner sites are operating in Trinidad, Honduras, Mexico, South Africa, Canada and Colombia. (wikipedia.org)
The Robert Wood Johnson Foundation awarded CeaseFire a grant for the period 2007 to 2012 and continues to be a major funder of the Cure Violence health approach overall. (wikipedia.org)
Idle Cure was an arena rock band from Long Beach, California. (wikipedia.org)
In April 2014, we held our first 'Scrapping for a Cure' scrapbooking event in Memphis, TN at a local church. (google.com)
Cure took advantage of an athlete "adoption" programme that helps elite athlete orphans living far away from home that placed her with a local, Adelaide area family. (wikipedia.org)
In a 2004 interview before filming began, Cure recalled the audition process, "I'm out in L.A., auditioning and pretending to be an actor, hoping somebody buys it. (wikipedia.org)
The simplest cure rate model was published by Berkson and Gage in 1952. (wikipedia.org)
Several cure rate models exist, such as the expectation-maximization algorithm and Markov chain Monte Carlo model. (wikipedia.org)
CeaseFire Illinois now operates the Chicago program sites using the Cure Violence model. (wikipedia.org)
The Cure Violence model trains and deploys outreach workers and violence interrupters to mitigate conflict on the street before it turns violent. (wikipedia.org)
This logically rigorous approach essentially equates indefinite remission with cure. (wikipedia.org)
The firm's main marketing strategy is claiming that its CES device, which costs $699 and can be used at home, is cheaper and safer than other electro-cures. (scientificamerican.com)
The CURE project sought to develop a new study based on modern biological approaches, joint analysis of the main cohorts of workers monitored for uranium exposure, and the latest internal dose calculation models. (irsn.fr)
Sigmund Freud later adopted the term talking cure to describe the fundamental work of psychoanalysis. (wikipedia.org)
Hello, can you tell me the best place where we can cure HBV? (medhelp.org)
On 24 April 2014, three CURE physicians were killed by an Afghan security guard, among them being one American, Dr. Jerry Umanos, a pediatrician. (wikipedia.org)
Cure, the youngest actor cast as a player in the film, portrayed Mike Ramsey, the youngest player in the 1980 US ice hockey team. (wikipedia.org)
A thousand copies of No Cure were sold each issue through record shops in Reading (Quicksliver), Windsor (Revolution) and London including Rough Trade and mail order. (wikipedia.org)
For shops under this agreement, pork products sold in the UK that are labelled with "Wiltshire Cure" should only have been sourced from the UK. (wikipedia.org)
In 2015, Hilton Memphis became our Gold Sponsor and our home for our Scrapping for a cure 2-day retreat. (google.com)
In the Fall of 2006, CURE partnered with Smile Train to develop a cleft lip and cleft palate surgical training program. (wikipedia.org)
The CURE study assessed the possibility of pooling data from cohorts of workers exposed to uranium who were monitored across different countries, in order to increase the statistical power available for risk analysis. (irsn.fr)
It is available in the CURE project's final report , published in March 2015. (irsn.fr)
Some consider that after a century of employment the talking cure has finally led to the writing cure. (wikipedia.org)
By far the safest way to cure heartburn is by simply changing your diet. (naturalhealthremedies.org)
The AutoNation Cure Bowl, which features a match-up of teams from the American Athletic Conference and the Sun Belt Conference, is played at Camping World Stadium in downtown Orlando, Florida. (wikipedia.org)
The inaugural Cure Bowl was played in 2015 and the game featured opponents from the Mountain West Conference San Jose State University and the Sun Belt Conference Georgia State University due to The American not having enough teams to fill the tie-in. (wikipedia.org)
Open Geospatial Data, Software and Standards
Development and testing of geo-processing models for the automatic generation of remediation plan and navigation data to use in industrial disaster remediation
G. Lucas1,2,
Cs. Lénárt1 &
J. Solymosi2
Open Geospatial Data, Software and Standards volume 1, Article number: 5 (2016)
This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consists of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster.
Five algorithms were developed, the respective scripts were written in Python, and then tested. The first model aims at drawing a parcel clean-up plan. It tests four different parcel orientations (0°, 90°, 45° and 135°) and keeps the plan with the fewest clean-up parcels. The second model uses the orientation of each contamination polygon feature to orientate the features of the clean-up plan accordingly. The third model tests whether it is worth rotating the parcel features by 90° for some contamination features. The fourth model shifts (drifts) the clean-up parcels of a work plan following a grid pattern, again with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel.
The best optimization results were achieved with the second model; the drift and 90° rotation models do not offer significant advantages. By comparing the results between the different orientations, we demonstrated that the number of clean-up parcels generated varies by 4 to 38 % from plan to plan.
Such a significant variation in the resulting number of features shows that identifying the optimal orientation can save work, time and money in remediation.
On October 4th, 2010 Hungary faced the worst environmental disaster in its history when the embankment of a toxic waste reservoir failed and released a mixture of 600,000 to 700,000 m3 [1, 2] of red mud and water. Lower parts of the settlements of Kolontár, Devecser, and Somlóvásárhely were flooded. Ten people died and another 120 people were injured [2]. The red mud flooded 4 km2 of the surrounding area [3].
The idea motivating this research work came after considering the clean-up work done on the impacted area of Kolontár (to the north of Lake Balaton). Whereas digital maps showing the contours of the contaminated areas and the pollution thickness were available (Footnote 1) [4], the excavation work was performed in a traditional way, without the support of positioning and navigation technologies. So the accurate and detailed information produced in the early stage of the remediation process was not efficiently exploited.
In a broader context, our research work aims at developing methodologies and tools to ensure continuity in the exploitation of geographic information throughout a precise remediation process. The GI gathered during the disaster assessment phase should be adapted and used in the planning phase; this would provide plans and navigation data for the clean-up phase. Additionally, the integration of technologies (remote sensing for detection, GIS for planning, positioning and navigation for clean-up) should also be researched.
Our bibliographic research demonstrated that the use of geoinformation technologies in remediation is mainly confined to the initial stage of remediation, for the detection and mapping of the pollution [5]. Aerial survey [4] and soil sampling are used for data acquisition. GIS, geo-statistical analysis and 3D modelling [6–12] are then employed for visualizing the pollution extents, estimating the volume to process (project dimensioning and costs) and planning/monitoring the remediation work at the general organisational site level. In contrast, examples of the use of geographic information technologies during the clean-up stage are quite few. In the case of in-situ remediation, injection and recovery wells can be precisely positioned with GPS based on planning optimized with geostatistical calculations [13]. In the case of ex-situ remediation (Footnote 2), the literature does not mention the use of navigation and positioning technologies for the excavation work done by heavy machinery. As positioning technologies are routinely employed in civil engineering and agriculture [14, 15–18] for the guidance of heavy equipment for precise and efficient work, it seems the shortcomings in the case of ex-situ remediation lie in the capacity to generate adequate remediation plans and in the lack of adapted GIS tools, models, methods and practice [19]. In response, this work develops models in order to be able to produce a plan containing "clean-up parcels" and derived navigation data.
"Clean-up parcel" is a central concept and the geographic feature of interest in this work. Clean-up parcel in the real world corresponds with the surface covered by a dozer shovel until it gets filled to capacity (in other words the dozer's maximum work footprint). In the GIS model a clean-up parcel consist of a rectangular feature in a polygon feature class. Its width is equal with the dozer's blade width. Its length (length Max) is derived from the bulldozer characteristics and the thickness of pollution to collect (1).
$$ \mathrm{Volume\_blade}_{\mathrm{Max}} = \mathrm{width}_{\mathrm{dozer}} \times \mathrm{length}_{\mathrm{Max}} \times \mathrm{thickness} $$
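Rearranging Eqn (1) gives the maximum parcel length directly. The blade volume, blade width and pollution thickness below are hypothetical figures chosen only for illustration:

```python
def parcel_length_max(volume_blade_max, width_dozer, thickness):
    """Maximum clean-up parcel length from Eqn (1): how far the dozer can push
    before its blade is filled to capacity for a given pollution thickness."""
    return volume_blade_max / (width_dozer * thickness)

# Hypothetical example: a 3 m wide blade holding 4.5 m^3, over a 0.05 m thick layer.
print(parcel_length_max(4.5, 3.0, 0.05))   # 30.0 m
```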
The area of interest (contaminated area) is presented in Fig. 1. It is a polygon shapefile which was created from classified hyperspectral imagery [4]. The area covers 4 km2 and extends 16 km in longitude and 5 km in latitude. Because the catastrophe was a flood, the polygon features of the contaminated area have an orientation that generally follows the direction of the flood.
Overview of the source dataset "contaminated_area"
This paper focuses exclusively on describing the conception of the algorithms, their architecture and how the geo-processing is done, rather than providing line-by-line calculation details and scripts. The latter can be downloaded using the following link: https://zenodo.org/record/48883.
Readers should note that this exploratory work is relevant for ex-situ remediation (remediation where excavation is done) on extended areas where an industrial disaster took place (red mud, nuclear, chemical, etc.). In such cases heavy machinery is used and it makes sense to plan its moves precisely in order to save effort, time and money, in a similar way to precision agriculture or civil engineering.
The research first develops the models through the design of algorithms and their transcription into Python scripts. Second, the models are tested with a test dataset derived from the red mud disaster impact assessment. The first test controls whether the geo-processing runs without errors. The second test controls whether the model is effective at optimizing the clean-up parcel plan (i.e. reducing the number of parcels). Finally, the models are assessed with regard to processing time. Based on the results of the tests, diverse improvements are made, the models are tested again, and final versions of the models are proposed.
Clean-up parcels model development (with four orientations)
Description of the objectives
This model generates a polygon feature class, containing rectangular features with a unique shape that represents the clean-up parcels. The parcel's width is inherited from the bulldozer's blade width. The parcel's length is derived from the blade capacity. Dividing the contaminated area into clean-up parcels should be done automatically. The parcels should properly cover the whole contaminated area. The pattern designed should be optimal, meaning that technically it ensures the proper removal of pollution and economically it ensures the highest efficiency.
Considering those requirements, it appears that a rectangular grid pattern model is optimal.
Algorithm's raw architecture
The feature class will be similar to a grid with rectangular polygons. The algorithm could be divided into two parts:
the first part computes the locations, organised in a grid pattern with the appropriate orientation.
the second part calculates the parcel corners' coordinates and draws rectangular polygons.
Iterations (loops implemented with repeat/while statements) follow range calculations derived from the geographic extent of "Contaminated_area". This calculation can be separated into a function.
As the process will be automated, it could be useful to test different plans with different orientations of the parcels. An algorithm with four different orientations (0°, 90°, 45° and 135°) was drafted. The best result was selected by counting the number of features in each feature class created and selecting the one with the fewest parcels.
Data requirement (input)
A polygon feature class where features' geometry represents the polluted areas.
Width (in meter), length (in meter), orientation (in degree).
Algorithm architecture
Procedure createRectangleAtPoint(x, y, length, width, orientation, layer)
This procedure draws one rectangle according to the coordinates of a corner starting point, the orientation, the width and length of the rectangle. The vertices of the rectangle are attributed in the clockwise direction (Fig. 2).
Details of the coordinate calculations with vertices of the parcel and drawing method in createRectangleAtPoint procedure with the 4 orientations cases
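The corner construction performed by createRectangleAtPoint can be illustrated with plain geometry. This is a sketch rather than the published script (which builds the corners as an arcpy Polygon and is available from the link given above), and the angle convention (degrees counter-clockwise from the x-axis) may differ from the one used in the script:

```python
import math

def rectangle_corners(x, y, length, width, orientation_deg):
    """Return the four corner coordinates of a parcel, in clockwise order,
    starting from the corner (x, y), with the long side along orientation_deg."""
    a = math.radians(orientation_deg)
    ux, uy = math.cos(a), math.sin(a)     # unit vector along the parcel length
    vx, vy = math.sin(a), -math.cos(a)    # unit vector along the width (clockwise turn)
    p1 = (x, y)
    p2 = (x + length * ux, y + length * uy)
    p3 = (x + length * ux + width * vx, y + length * uy + width * vy)
    p4 = (x + width * vx, y + width * vy)
    return [p1, p2, p3, p4]

print(rectangle_corners(0.0, 0.0, 30.0, 3.0, 45.0))
```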
Function extent(fc)
This function extracts the geographical extent (xmax, xmin, ymax, ymin) from a reference layer; i.e. the contaminated area layer. It is used later in the calculation of the maximal limit for the iteration in the loops building the grids. This function already existed and we have simply re-used it [20].
Procedure Make_Grid(length, width, layer_name, grid_orientation)
This procedure primarily draws a grid pattern taking into consideration the orientation, the width and the length provided as parameters. For each point of the grid the procedure calls the createRectangleAtPoint procedure, which draws a rectangle. With the 0° and 90° orientations the procedure loops top-down over the lines and left-right within each line. With the 135° and 45° orientations the procedure follows two stages, which are presented in Figs. 3 and 4. In stage 1, loop 1 creates features diagonally, starting from the top left corner (a) and moving towards the bottom right corner with an incrementation defined by (d). Loop 1 ends when the x coordinate of the pointer reaches the xmax value (e). Loop 2 jumps one line down, below the start of the previous line, using incrementation (f) on the backed-up coordinates of the previous line start (b). Loop 2 ends when the pointer reaches the ymin value (g).
Conceptual representation of clean-up parcel model in the 45° orientation case, lower part processing
Conceptual representation of clean-up parcel model in the 45° case with upper part processing
Then, in a second stage, the model progresses diagonally down with loop 1 and incrementation (d), but the second loop's implementation positions the next line on top of the previous one (incrementation (f)) so that the grid can cover the second half of the area (above stage one). As many features are created outside the area of interest, a clean-up is necessary at the end: the features that intersect the "polluted_area" layer are selected, copied into a new layer, and all temporary layers are deleted. Loop 2 ends when the pointer reaches coordinate values greater than both xmax and ymax (g).
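The looping logic of Make_Grid for the axis-aligned (0°) case can be sketched as follows; the diagonal walk used for the 45° and 135° orientations (Figs. 3 and 4) is omitted for brevity, and the extent and parcel dimensions are illustrative:

```python
def grid_origins(xmin, ymin, xmax, ymax, length, width):
    """Yield the starting corner of every candidate parcel for the 0 degree case:
    rows from top to bottom, parcels left to right within each row."""
    y = ymax
    while y > ymin:
        x = xmin
        while x < xmax:
            yield x, y
            x += length     # next parcel within the row
        y -= width          # next row, one parcel width further down

origins = list(grid_origins(0, 0, 100, 20, length=30, width=3))
print(len(origins), origins[:3])
```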
Script body
The algorithm uses the Procedure_Make_Grid and Procedure_CreateRectangleAtPoint in order to create four feature classes with 0°, 90°, 45° and 135° orientations. Finally a "get count" method is used to retrieve the number of features from each feature class. The feature class containing the smallest number of features is selected and saved; the other feature classes are deleted from the map document.
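Selecting the plan with the fewest parcels can be sketched with ArcPy's GetCount, as below. This is only a sketch: it assumes an ArcPy environment and that the four candidate feature classes already exist.

```python
import arcpy

def keep_best_plan(candidate_fcs):
    """Count the parcels in each candidate plan (e.g. the 0, 90, 45 and 135 degree
    feature classes), keep the one with the fewest features and delete the others."""
    counts = {fc: int(arcpy.GetCount_management(fc).getOutput(0)) for fc in candidate_fcs}
    best = min(counts, key=counts.get)
    for fc in candidate_fcs:
        if fc != best:
            arcpy.Delete_management(fc)
    return best, counts[best]
```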
Clean up parcel model refinement (with individual and exact orientation)
In contrast to the previous version (described in part 2), the model is not intended to process groups of features from "Contaminated_area" but to process features individually. The model should be designed to calculate the orientation of each feature from "Contaminated_area" and, knowing that orientation, prepare a clean-up plan with the same orientation.
Raw architecture of the algorithm
The algorithm should create a new attribute (poly_angle) in "Contaminated_area" where the feature orientation will be stored. The feature orientation should be calculated; then stored in "poly_angle".
Procedure createRectangleAtPoint should be modified in order to calculate coordinates based on positive (0 to 90°) or negative (0 to −90°) input orientations instead of the four orientations (0°, 90°, 45° and 135°).
Procedure Make_Grid should also be modified to integrate similar input orientations.
Width (in meter), length (in meter)
Architecture of the algorithm
This procedure draws one rectangle according to the coordinates of a corner starting point, the orientation (two cases, <0 or >0), and the width and length of the rectangle. The vertices of the rectangle are assigned in clockwise order (Fig. 5).
Details of the coordinate calculations with vertices of the parcel and drawing method in createRectangleAtPoint procedure with the exact orientation cases
Procedure Make_Grid(input_feature_class, length, width)
This procedure creates two feature classes: "Plan", which stores the clean-up parcels, and "Work_layer", which is a storage feature class. A "poly_angle" attribute is created in the "Contaminated_area" table and populated with the arcpy.CalculatePolygonMainAngle_cartography instruction. A search cursor function is then used for each polygon (a sketch of this loop follows the list below):
1/go through each polygon geometry and extract the polygon extent (xmin, xmax, ymin and ymax) (similar to the extent function used previously but implemented at feature level)
2/use a modified version of the former Procedure Make_Grid in order to draw a clean-up plan that covers that polygon in the "Work_layer" feature class
3/un-select the previously selected features
4/select the polygon where the cursor is pointing
5/select the parcels that overlay with this polygon
6/append those parcels to the "Plan" feature class
7/delete the content of "Work_layer".
Delete Work_layer.
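The sketch below illustrates this per-feature loop. Tool names follow the description above; the layer and field names, the `make_grid_for_extent` helper and the OBJECTID-based selection (shapefiles use FID instead) are illustrative assumptions, not the authors' implementation.

```python
import arcpy

arcpy.AddField_management("Contaminated_area", "poly_angle", "DOUBLE")
arcpy.CalculatePolygonMainAngle_cartography("Contaminated_area", "poly_angle")
arcpy.MakeFeatureLayer_management("Contaminated_area", "area_lyr")
arcpy.MakeFeatureLayer_management("Work_layer", "work_lyr")

with arcpy.da.SearchCursor("Contaminated_area",
                           ["OID@", "SHAPE@", "poly_angle"]) as polygons:
    for oid, shape, angle in polygons:
        ext = shape.extent                                     # 1/ per-feature extent
        make_grid_for_extent(ext, 30, 3, angle, "Work_layer")  # 2/ hypothetical helper
        arcpy.SelectLayerByAttribute_management(               # 3-4/ select active polygon
            "area_lyr", "NEW_SELECTION", "OBJECTID = {0}".format(oid))
        arcpy.SelectLayerByLocation_management(                # 5/ parcels overlaying it
            "work_lyr", "INTERSECT", "area_lyr")
        arcpy.Append_management("work_lyr", "Plan", "NO_TEST") # 6/ keep them in "Plan"
        arcpy.DeleteFeatures_management("Work_layer")          # 7/ empty the work layer
```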
90° parcel rotation alternative test development
This test development starts from the idea that in some cases it could be advantageous to orient the parcels differently, not in the direction of the length but in the direction of the width (Fig. 6), that is, with a 90° rotation. The corresponding plan can be generated without modifying the second model, simply by calling Procedure Make_Grid("Contaminated_area",3,30) instead of Procedure Make_Grid("Contaminated_area",30,3), for example. For simplification, the test case containing parcels rotated by 90° is called "anti".
Normal clean-up plan vs. "anti" clean-up plan
The difficulty with this test is not to generate an "anti" plan, but to count how many parcels are generated per polygon feature in an "anti" scenario compared to the normal scenario. The algorithm was modified in this respect.
The same data are used as for model 2.
An additional attribute called "feat_num", which stores the feature number, should be created in the "Contaminated_area" feature class.
During the search cursor iteration (see the sketch after this list):
1/a selection is done on the parcels of "Work_layer" that intersect the active polygon from "Contaminated area"
2/selection is switched
3/selected features deleted
4/a GetCount_management command is called to retrieve the number of parcels in the selection
5/an arcpy.da.UpdateCursor command is used to update the "feat_num" for the activated FID.
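A possible sketch of these five steps is given below; the layer names and the `active_fid` variable from the enclosing cursor are assumptions made for illustration.

```python
import arcpy

# 1/ parcels of "Work_layer" that intersect the active polygon (selected in "area_lyr")
arcpy.SelectLayerByLocation_management("work_lyr", "INTERSECT", "area_lyr")
# 2/ switch the selection and 3/ delete the non-intersecting parcels
arcpy.SelectLayerByAttribute_management("work_lyr", "SWITCH_SELECTION")
arcpy.DeleteFeatures_management("work_lyr")
# 4/ count the parcels that remain for this polygon
count = int(arcpy.GetCount_management("work_lyr").getOutput(0))
# 5/ store the count in "feat_num" for the active FID
where = "OBJECTID = {0}".format(active_fid)   # 'active_fid' comes from the outer cursor
with arcpy.da.UpdateCursor("Contaminated_area", ["feat_num"], where) as rows:
    for row in rows:
        row[0] = count
        rows.updateRow(row)
```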
Same data as for model 2 with the minor modifications cited above.
Offset effect testing: model development
The model should move the features of the clean-up parcel feature class all together, following a grid pattern (that is, both in the vertical and horizontal directions). The grid is oriented in the same way as the clean-up parcel feature class and the sampling distance of the grid is equal to one-fifth of the parcel width. Each time the feature class is shifted, the model counts how many features are located in the area of interest. The "get count" result with the smallest number of features indicates the best offset to apply.
the original area of interest is required to perform a selection based on intersection.
a new clean-up parcel feature class is necessary. It is similar to the one generated in model 1 with optimal orientation but the reference area of interest differs (extended).
The extended reference area of interest is the original area of interest extended with a buffer zone of the parcel width. If this precaution is not implemented, the clean-up parcel extent is too limited and for example an empty area appears on the left when x receives a positive drift.
1/The model should generate a new area of interest with a buffer of "length" size around the original area of interest.
2/Clean-up parcel feature class should be recreated based on the new target area (this is done in order not to have an empty area when the features will be shifted (maximal shift will be equal to parcel length)).
3/Calculate the shift values based on parcel width, length, orientation and store them in a three dimensional list.
4/All the features of this new clean-up feature class are shifted applying the offset values stored in the matrix (x,y). The grid x and y range are fixed at one-fifth of the parcel width.
5/Each time the feature class is shifted, a selection of the features intersecting with the original target area is done and the result of "getcount" is stored in a two dimensional list.
6/Unselect all features
7/Inverted shift is applied to set the feature back in place.
8/Next shift is applied, etc.
9/When all the shifting x,y values have been passed, a search in the list of values returns the smallest getcount.
10/From the minimal getcount, retrieve the optimal x,y shift values.
11/Apply a final shift with the optimal x,y shift values.
Function_calculate_drift_matrix(length, width, orientation)
This function returns a three-dimensional matrix containing the shift coordinates corresponding to each point of the grid. The grid is oriented according to the parameter "orientation". The step of the grid is width/5 both for "rows" and "columns". For example, if parcels are 30 m × 3 m at 90°, there are 50 columns and 5 rows in the grid and the step is 3/5 m. This function has four parts for the four different orientations.
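For the example above, the shift grid could be built as in the sketch below; it is an illustration of a single orientation case, whereas the published function handles four cases.

```python
import math

def calculate_drift_matrix(length, width, orientation):
    """Grid of (dx, dy) offsets with a step of width/5 (one orientation case).

    Rows span one parcel width and columns one parcel length, so every
    distinct relative position of the grid is tried exactly once.
    """
    step = width / 5.0
    n_cols = int(round(length / step))   # e.g. 50 columns for 30 m x 3 m parcels
    n_rows = 5
    a = math.radians(orientation)
    shifts = []
    for i in range(n_rows):
        row = []
        for j in range(n_cols):
            dx, dy = j * step, i * step
            # rotate the offset into the parcel orientation
            row.append((dx * math.cos(a) - dy * math.sin(a),
                        dx * math.sin(a) + dy * math.cos(a)))
        shifts.append(row)
    return shifts
```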
Function_shift_features(in_features, x_shift = None, y_shift = None)
This function uses the arcpy.da module's UpdateCursor. By modifying the SHAPE@XY token, it modifies the centroid of the feature and shifts the rest of the feature to match. This function was available online and usable without changes, so it was simply copied [21].
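The referenced snippet is essentially a SHAPE@XY update; a sketch along those lines is shown below (not a verbatim copy of the online post).

```python
import arcpy

def shift_features(in_features, x_shift=None, y_shift=None):
    """Shift every feature of 'in_features' by (x_shift, y_shift) map units."""
    with arcpy.da.UpdateCursor(in_features, ["SHAPE@XY"]) as cursor:
        for row in cursor:
            x, y = row[0]
            row[0] = (x + (x_shift or 0), y + (y_shift or 0))
            cursor.updateRow(row)   # moves the centroid; the geometry follows
```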
Main procedure
The procedure deals with the creation of the extended area of interest, calls the two functions described above and handles the searches in the "getcount" list.
Navigation lines model development
The model should create a polyline feature class with navigation lines. The navigation lines should:
be located in the middle of parcels,
follow their length
Input: "clean-up parcel" shape file
Output: "Navigation_lines" shape file
Algorithm's structure
Function_ExtractVerticesCoordinateFromFeature (input_feature_class)
This function extracts the vertices' coordinates from the geometries of a polygon feature class and returns a two-dimensional list storing the coordinates. The SearchCursor method is employed on each row of the feature class, and the result is appended to the list. Most of the script derives from the example "Reading polyline or polygon geometries" in the ESRI resources help [22].
Function_CalculateMiddlePoints(list_corners)
This function receives the coordinates of the four corners of a rectangle and returns the values of the coordinates of the two points located in the middle of the shortest sides.
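A minimal sketch, assuming the four corners arrive in drawing order, could be:

```python
import math

def calculate_middle_points(list_corners):
    """Return the midpoints of the two shortest sides of a rectangle.

    'list_corners' holds the four corner coordinates in drawing order:
    [(x0, y0), (x1, y1), (x2, y2), (x3, y3)].
    """
    def midpoint(p, q):
        return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

    def dist(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])

    # opposite sides of the rectangle: (0-1, 2-3) and (1-2, 3-0)
    if dist(list_corners[0], list_corners[1]) < dist(list_corners[1], list_corners[2]):
        return (midpoint(list_corners[0], list_corners[1]),
                midpoint(list_corners[2], list_corners[3]))
    return (midpoint(list_corners[1], list_corners[2]),
            midpoint(list_corners[3], list_corners[0]))
```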
Procedure_WriteaLine(point_1, point_2, layer)
This procedure writes a polyline feature between two given points (coming from Function_CalculateMiddlePoints) in the given layer. A cursor method is applied to enter the new geometry.
Procedure_DrawNavigationLines(Output_Navigation_Lines, Source_feature_class)
This procedure makes use of the functions and procedures above to draw a new polyline feature class with the navigation lines. Createfeatureclass_management method is used to create the output feature class.
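Combining the helpers above, a compact sketch of the navigation-line procedure could look as follows; writing to the in_memory workspace and reading only the first four vertices of each parcel are illustrative simplifications.

```python
import arcpy

def draw_navigation_lines(output_navigation_lines, source_feature_class):
    """Create a polyline feature class with one centre line per parcel."""
    sr = arcpy.Describe(source_feature_class).spatialReference
    arcpy.CreateFeatureclass_management("in_memory", output_navigation_lines,
                                        "POLYLINE", spatial_reference=sr)
    out_fc = "in_memory/" + output_navigation_lines
    with arcpy.da.SearchCursor(source_feature_class, ["SHAPE@"]) as parcels, \
         arcpy.da.InsertCursor(out_fc, ["SHAPE@"]) as lines:
        for (shape,) in parcels:
            corners = [(p.X, p.Y) for p in shape.getPart(0)][:4]  # rectangle corners
            p1, p2 = calculate_middle_points(corners)
            lines.insertRow([arcpy.Polyline(arcpy.Array(
                [arcpy.Point(*p1), arcpy.Point(*p2)]))])
```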
The algorithms were successfully converted into scripts in Python language and models tested first with a subset of the pollution thickness layer derived from the processing of hyperspectral aerial survey data of Kolontar red mud disaster [4].
Clean-up parcel model with four orientations
During its development the script was tested on a small feature extracted from the "Contaminated_area" shapefile.
Figure 7 shows an example of result with the four intermediary feature classes generated by the clean-up parcels model with 0°, 90°, 45° and 135° orientation, 3 m width and 30 m length on a sample of the contaminated area.
Intermediary results of clean-up parcel model with 0°, 90°, 45°and 135°orientation plan overlay
After correcting mistakes in the script, the geo-processing model was applied to the whole "contaminated_area" shapefile. This resulted in very long geo-processing (more than 3 days to generate the 0° clean-up parcel layer and only part of the 45° one).
Processing was deliberately stopped before it completed. The long calculation was caused:
1/by the extent and geometry of the target area (containing a lot of empty space where it was useless to have the geo-processing run),
2/by the huge number of parcels to generate (around 60,000); a direct effect of the extent of contaminated_area,
3/by the procedure_Make_grid which is not efficient with geo-processing (a lot of unnecessary geo-processing is done during iteration outside of the area of interest).
To cope with these various problems the second test was run on the same data but split into 8 zones (11 shapefiles as zone 7 was split in four).
The number of features generated per zone with the four orientations is summarized in Table 1. The smallest values are highlighted with a green background and the highest with a red background. Combining the optimum orientation for each zone, the minimal number of features in the clean-up plan reaches 57,896.
Table 1 Number of features with the different orientations within the 8 zones
First, we can observe a significant difference in the number of features obtained after geo-processing with different orientations. The orientation of the parcel pattern therefore appears to be an important parameter to consider when optimizing the planning.
Table 2 provides statistics per zone. First we calculated a classic measure of dispersion, σ/x̄ (standard deviation divided by mean). As the number of entities varies significantly per sample (zone), dividing the deviation by the mean was necessary to obtain values of the same order. This relative deviation varies from 2 to 20 %. The second value provided in the table is more relevant in our opinion because it better expresses the difference between the extremes and better reflects the efficiency of the algorithm: the difference between the maximum and minimum feature numbers divided by the maximum feature number, expressed in percent. This value can be interpreted as the ability of the algorithm to "reduce" the number of parcels by x %. The feature number reduction ranges from 5 to 38 %.
Table 2 Statistics with the 8 zones
As a second conclusion the variations in the results can be very high (up to 38 %). This is definitely significant information for the planning strategy. Last, such a difference should be investigated and explained.
Figure 8 shows the geometry and size of the 8 zones so that the statistical results from Table 2 can be cross-referenced with spatial information. The following observation can be formulated: the smallest zones show larger variances (relative to the mean) than the biggest zones.
Hypothesis 1: the reduced number of features is the cause of the bigger variance. Orientation matching is more efficient with a smaller number of features because most of them are oriented in the same way. On the contrary, when there are more features, their orientations vary more and the efficiency of the model decreases.
Hypothesis 2: the cause of the important variance is a scale effect, because the model efficiency is subject to a border effect. In smaller zones the features tend to be smaller; the boundary/area ratio is higher than for massive areas, and orientation becomes much more important.
Overview of the 8 zones
After additional tests we could conclude that both hypotheses seem valid. When comparing the results of zone 4, which has two features oriented in the same direction, with zone 7 a, b, c, d, which has small and long features, the feature number decreases by 38 % for zone 7 whereas it only decreases by 17 % for zone 4.
In practice, when preparing "Contaminated_area" and in order to optimize the geo-processing, the user should pay attention to three things:
1/to prepare zones as small as possible in order to reduce empty areas (time consideration).
2/to the extent possible have features with the same orientation inside one zone. If necessary, a zone should be split into several parts in order to ensure the features' general orientation is as similar as possible (example is 7 a, b, c, d).
3/to split features if their geometry is complex. The result should be the creation of sub-features with simpler, oriented geometries.
In order to validate the assumptions mentioned above, the method was implemented on zone 1 (Fig. 9), where the algorithm showed the lowest efficiency; the zone was divided following the above recommendations. The new results are summarized in Table 3.
splitting of zone 1 into several parts
Table 3 Counting of the number of features with the different orientation and the different sub-zones
An additional reduction of 3.7 % could be reached by applying an appropriate cut with zone 1 compared to the previous result. This result prefigures the improvements that can possibly be reached with a modified algorithm (see model 2) and proper preparation of the "Contaminated_area" layer.
Modification required and implemented in model 2
Regarding the reduction of geo-processing time, a test will be added inside the scripts implementing the iteration. Before calling the createRectangleAtPoint procedure, an "IF" condition will be applied to check whether the corner point (x,y) of the rectangle to be drawn falls inside the area of interest (extended with a buffer zone of the parcel length). If (x,y) falls outside, no action is taken; if it falls inside, the rectangle is written.
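The proposed guard is a simple bounding-box test, for example (a sketch reusing the `extent` and `create_rectangle_at_point` helpers above; "buffered_area_of_interest" is an assumed layer name):

```python
xmin, xmax, ymin, ymax = extent("buffered_area_of_interest")
if xmin <= x <= xmax and ymin <= y <= ymax:
    create_rectangle_at_point(x, y, length, width, orientation, layer)
# otherwise the corner point falls outside the buffered area and is skipped
```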
The orientation clearly appeared as a key parameter to control in order to optimize the remediation plan design. In our approach (which was exploratory) we decided to limit the number of orientations to four (0°, 45°, 90° and 135°). In order to increase the efficiency of the model, the optimal orientation could be identified with 1° accuracy. This means the algorithm should be improved to take the following actions:
1/isolate each individual polygon
2/calculate polygon's orientation (with 1° accuracy)
3/apply a modified version of the clean-up parcel algorithm in order to design a clean-up plan with x° orientation for the feature considered.
With such an implementation, the optimized clean-up parcels are designed directly and it is no longer necessary to run the same script (clean-up parcel) four times with the four different orientations. This solves two issues: it reduces the processing time and improves algorithm efficiency by reducing the number of parcels. This was applied in model 2 and the results are detailed in the paragraph below.
Dividing the contaminated area into subparts with homogeneous orientation (and smaller geographic extents) is the task of the user; it is not automatable.
Results of model with exact orientation
The run of model 2 led to a final number of 55,066 parcels, compared with the 57,896 parcels obtained from the combination of the optimal orientations per zone with model 1. This means a decrease in parcel number of 5 %. The model designed the clean-up plan (containing the 55,000+ parcels) in 3 h 10 min. Figure 10 shows an extract of the clean-up plan.
Extract of the clean-up plan
Results of 90° rotation testing
The "anti" clean-up plan results with a total of 65,165 parcels. This number is much higher than the one for normal plan. The reason is that in most case the normal orientation is optimal. Out of the 193 polygon features, "anti" orientation is advantageous with 39 features (so 20 % of the feature number). With a combination of the "anti" solution for the 39 cited features and the normal solution for the rest, the total parcel number reach 54,744. So in comparison with the normal solution, the parcel number could only be decreased by 0.5 %. We can conclude that model 3 showed very limited efficiency in our case study with the reduction of feature number. We decided not to go further with the development.
Results of shift testing
Model 4 testing showed very limited results in reducing the feature number: only a 1 % difference in the number of features could be obtained. After further consideration, it seems that, due to the irregular shape of the area of interest and the scale ratio, a shift is of little use because, on average, as parcels disappear along one border others appear along the opposite border. If the AOI were regular (a rectangle, for example) and the scale ratio much smaller (AOI area compared with parcel area), this tool could achieve significant results. In our case (a large-scale industrial disaster with relatively large, irregular areas) the tool shows limited efficiency; consequently we decided not to go further with the development.
Draw navigation line model
Figures 11 and 12 below show the result of the navigation lines model. The 55,000+ navigation lines were drawn in 0 h 30 min.
Navigation lines feature class generated by the navigation line model overlaying the clean-up parcels feature class
Zoom in the navigation lines
The algorithm orients all the line geometries in the same direction by default. Additional algorithm development could be foreseen if it turns out that navigation requires a pre-planned navigation direction and computation of the optimal order of visits. As the operation in the field changes constantly, this kind of development is not considered a priority at the moment.
In many places the navigation lines overlap. This is a consequence of the overlap between the clean-up parcels (visible in Fig. 9). The draw navigation line model should be improved in order to remove those overlaps.
Improvements and future development
The algorithm efficiency can be improved by strictly constraining iterations to the inside of the contaminated area boundaries, in order to avoid parcel creation (and wasted time) outside the pollution feature areas.
Contaminated area datasets with varying shapes should be tested to assess the efficiency of the geo-processing models we designed with other types of pollution coverage (and shapes). There could be situations where the less efficient models here (drift, parcel rotation) become advantageous.
As mentioned in paragraph 7.5, in many places where contamination features are close to each other the clean-up parcels overlap, which in the end results in overlapping navigation lines. The next development should consider whether it is worth correcting the clean-up parcel layer or the navigation line layer by cutting the overlapping parts.
Inspired by the literature on "coverage path planning" [14, 15–17], it would be interesting to consider whether capacitated vehicle routing problem models could bring added value to the remediation work, in the case of soil excavation by heavy equipment.
First, we demonstrated that among four different plans with four different orientations, one plan comprised fewer features and constituted an optimized version compared with the others. This showed how important it is to consider the source feature orientation when orienting the features of the clean-up plan. Additionally, we demonstrated that improper orientation can lead to an important increase in the number of clean-up parcels, particularly if a feature has a complex shape or if the source feature/parcel feature scale ratio is low. We demonstrated that the best modelling approach consists of processing each contaminated feature separately, computing its orientation and applying the same orientation to the clean-up plan. The different tests also highlighted the importance of dataset preparation, by cutting complex features into sub-features with a unique orientation. Last but not least, we demonstrated that automatic planning can be achieved for both the clean-up parcels and the navigation lines on a 4 km² impacted area, with 55,000+ features of each generated in an approximate total time of 3 h 40 min.
from aerial survey and remote sensing processing methods
ex-situ remediation is opposed to on-site remediation and requires the excavation of soil and its transportation out of the site.
Anton AD, Klebercz O, Magyar A, Burke IT, Jarvis AP, Gruiza K, Mayes WM. Geochemical recovery of the Torna–Marcal river system after the Ajka red mud spill, Hungary. Environ Sci Processes Impacts. 2014;2014(16):2677–85. doi:10.1039/C4EM00452C. http://pubs.rsc.org/en/content/articlepdf/2014/em/c4em00452c. Accessed 2 May 2016.
Schweitzer F. Channel regulation of Torna stream to improve environmental conditions in the vicinity of red sludge reservoirs at Ajka, Hungary. Hungarian Geogr Bull. 2010;59(4):347–59. http://epa.oszk.hu/02500/02541/00008/pdf/EPA02541_hungeobull_2010_4_347-359.pdf. Accessed 2 May 2016.
Berke J, Bíró T, Burai P, Kováts LD, Kozma-Bognár V, Nagy T, Tomor T, Németh T. Application of remote sensing in the red mud environmental disaster in Hungary. Carpathian J Earth Environ Sci. 2013;8(2):49–54.
Burai P, Smailbegovic A, Cs L, Berke J, Tomor T, Bíró T. Preliminary analysis of red mud spill based on aerial imagery. Acta Geographica Debrecina Landscape Environ Ser. 2011;5(I):47–57.
Driscoll A. GIS Applications in Site Remediation. 2004. http://www.edc.uri.edu/nrs/classes/nrs409/509_2004/Driscoll.pdf. Accessed 2 May 2016.
Franco C, Delgado J, Soares A. Impact Analysis and Sampling Design in The Pollution monitoring Process of The Aznalcollar Accident using Geostatistical Methods. In: The International Archives of the Photogrammetry, vol. XXXV, Part B7. Istanbul: Remote Sensing and Spatial Information Sciences; 2004. p. 373–8.
Guyard C. Dépollution des sols et nappes : un marché sous pression. L'eau, l'industrie, les nuisances. 2013;359:23–40.
Hellawell EE, Kemp AC, Nancarrow DJ. A GIS raster technique to optimize contaminated soil removal. Eng Geol. 2001;60(1–4):107–16.
Lindsay J, Simon T, Graettinger G. Application of USEPA FIELDS GIS technology to support Remediation of Petroleum Contaminated Soils on the Pribilof Islands, Alaska. 2nd Biennial Coastal GeoTools Conference, Charleston, SC, January 8–11. 2001. http://webapp1.dlib.indiana.edu/virtual_disk_library/index.cgi/5293706/FID2516/pdf_files/ps_abs/lindsayj.pdf. Accessed 1 Jul 2015.
Mathieu JB, Garcia V, Garcia M, Rabaute A. SoilRemediation: a plugin and workflow in Gocad for managing environmental data and modeling contaminated sites. Gocad Meeting, Nancy June 2–5, 2009.
Webster I, Ciccolini L. Solving contaminated site problems cost effectively: plan, use geographical systems (GIS) and execute. http://www.projectnavigator.com/downloads/webster_ciccolini_solving_contaminated_sites_04.09.02.pdf. Accessed 1 Jul 2015.
Dukukovíc J, Stanojevic M, Vranes S. GIS Based Decision Support Tool for Remediation Technology Selection. Proceedings of the 5th IASME/WSEAS Int. Conference on Heat Transfer, Thermal Engineering and Environment, Athens, Greece, August 25–27, 2007.
Interstate Technology & Regulatory Council (ITRC). Remediation Process Optimization: Identifying Opportunities for Enhanced and More Efficient Site Remediation. 2004.
Conesa-Muñoz J, Pajares G, Ribeiroa A. Mix-opt: A new route operator for optimal coverage path planning for a fleet in an agricultural environment. Expert Syst Appl. 2016;54(2016):364–78.
Fergusson D, Likhachev M, Stentz A. A Guide to Heuristic-based Path Planning. Proceedings of the International Workshop on Planning under Uncertainty for Autonomous Systems, International Conference on Automated Planning and Scheduling (ICAPS), June, 2005. http://www.cs.cmu.edu/~maxim/files/hsplanguide_icaps05ws.pdf. Accessed 2 May 2016.
Driscoll TM. "Complete coverage path planning in an agricultural environment". Graduate Theses and Dissertations. Paper 12095. 2011. http://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=3053&context=etd. Accessed 2 May 2016.
Hameed IA, la Cour-Harbob A, Osena OL. Side-to-side 3D coverage path planning approach for agricultural robots to minimize skip/overlap areas between swaths. Robot Auton Syst. 2016;76:36–45.
Hameed IA. Intelligent coverage path planning for agricultural robots and autonomous machines on three-dimensional terrain. J Intell Robot Syst. 2013;74(3):965–83.
Global remediation technologies. MAPPING & MODELING http://grtusa.com/services/mapping-modeling/. Accessed 1 Jul 2015.
ArcGIS Help 10.1/Extent (Arcpy). http://gis.stackexchange.com/questions/72895/how-to-obtain-an-extent-of-a-whole-shapefile. Accessed 1 Jul 2015.
ArcPy Café/Shifting features. https://arcpy.wordpress.com/2012/11/15/shifting-features/. Accessed 2 May 2016.
ArcGIS Help 10.1/Reading geometries. http://resources.arcgis.com/en/help/main/10.1/index.html#/Reading_geometries/002z0000001t000000/. Accessed 1 Jul 2015.
Research Institute of Remote Sensing and Rural Development, Károly Róbert College, Gyöngyös, Hungary
G. Lucas
& Cs. Lénárt
Doctoral School of Military Engineering, National University of Public Service, Budapest, Hungary
& J. Solymosi
Correspondence to G. Lucas.
GL carried out the main research based on his thesis work realised at the Doctoral School of Military Engineering at the National University of Public Service in Budapest and the Research Institute of Remote Sensing and Rural Development in Gyöngyös. GL conceived the study, developed the algorithm, tested the algorithms and drafted the manuscript. JS and CsL participated in review of the manuscript and made proposals for its further development. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Lucas, G., Lénárt, C. & Solymosi, J. Development and testing of geo-processing models for the automatic generation of remediation plan and navigation data to use in industrial disaster remediation. Open geospatial data, softw. stand. 1, 5 (2016) doi:10.1186/s40965-016-0006-z
DOI: https://doi.org/10.1186/s40965-016-0006-z
Industrial disaster
Automatic planning
Geo-processing models
ArcPy
What does "formal" mean?
I know the definition of formal power series, power series and polynomials. But what does the adjective "formal" mean? In google English dictionary, does it mean "9. Of or relating to linguistic or logical form as opposed to function or meaning" or maybe another one in the link?
Or does "formal" have some mathematical meaning which is other than usual dictionary meaning?
soft-question terminology
Gobi
I see formal used in at least two senses in mathematics.
Rigorous, i.e. "here is a formal proof" as opposed to "here is an informal demonstration."
"Formal manipulation," that is, manipulating expressions according to certain rules without caring about convergence, etc.
Confusingly they can mean opposite things in certain contexts, although "formal manipulations" can be made rigorous in many cases.
Qiaochu Yuan
Isn't there a notion of "formal" in algebraic geometry? – Damien Jul 27 '11 at 1:44
You mean en.wikipedia.org/wiki/Formal_scheme ? Well, "formal" here seems to mean something like "including infinitesimal information." Morally the etymology comes from making formal manipulations with infinitesimals rigorous. – Qiaochu Yuan Jul 27 '11 at 1:54
I think the etymology of the word shows how the senses are related. 'Formal' comes from 'form'; the association with rigour is via Hilbert's formalist school. By 'rigour' what is really meant is formal manipulations of logical propositions in accordance to the rules of inference, rather than following 'intuition'. – Zhen Lin Jul 27 '11 at 2:00
It should be noted that the first sense includes a really vast spectrum of degrees of formality. The way I see it, it includes usual textbook proofs (e.g. Folland's proof of Radon-Nikodým theorem is 'formal'), and 'logically' formal proofs, as in en.wikipedia.org/wiki/Formal_proof , which also serves as input for automated proof checking. – Bruno Stonek Jul 27 '11 at 3:17
I've always thought of the latter meaning as the meaning "of or related to form" — a formal power series is something that has the form of a power series, formal manipulations are those that work on the form directly (without caring about what the expression may "mean" in the analysis sense), etc. – ShreevatsaR Jul 27 '11 at 4:42
As an example, formal power series is analyzed without regard to convergence. Really, what is of interest is the sequence of coefficients.
ncmathsadist
And don't forget the notion of formal space arising in rational homotopy theory.
Cheerful Parsnip
When I was learning about logic as an undergraduate, I recall being told that the word "formal", with respect to "formal languages" meant that the "form" of expressions written in that language had primacy.
In other words, rules for manipulating expressions in a formal language could be given in terms of the form of the expression only, without needing to know to what values the variables in the expression were bound.
So a formal language permits us to use relatively simple pattern-matching algorithms to decide which transformations of an expression are valid at any given time.
In this context, formality is linked to the simplicity of the rules that define the set of valid transformations of an expression.
William Payne
The word "formal" in "formal power series" is indicating that you are considering all objects that are algebraically "like a power series". This is opposed to its use in analysis where you spend a lot of time figuring out for which $x$ the series converges.
Basic analysis goes like this:
"$\displaystyle\sum_{n=1}^{\infty} x^n$ is a series which converges for $|x|<1$ and therefore the function $f(x) = \displaystyle\sum_{n=1}^{\infty} x^n$ has the domain $|x| < 1$".
You then proceed to use the function and talk about derivatives and integrals on the restricted domain. If the series has very few points of convergence such as $\displaystyle\sum_{n=1}^{\infty} n!x^n$ which converges only for $x=0$, then casting it as the function $g(x) = \displaystyle\sum_{n=1}^{\infty} n!x^n$ can only have domain $x=0$ and its value is $g(0)=0$. Pretty boring function when it comes to derivatives and integrals!
When you study formal power series, you ignore the consideration of convergence and use the series as it is presented as an algebraic entity, so even though $g$ only converges at $x=0$, you ignore that and focus on other properties of the series.
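To make the contrast concrete (my example, not part of the original answer): in the ring of formal power series, identities are checked coefficient by coefficient, with no convergence argument at all. For instance,

$$(1-x)\sum_{n=0}^{\infty}x^n=\sum_{n=0}^{\infty}x^n-\sum_{n=0}^{\infty}x^{n+1}=1,$$

so $\displaystyle\sum_{n=0}^{\infty}x^n$ is the formal inverse of $1-x$; likewise the formal derivative of $g(x)=\displaystyle\sum_{n=0}^{\infty}n!\,x^n$ is $g'(x)=\displaystyle\sum_{n=1}^{\infty}n\cdot n!\,x^{n-1}$, even though neither series converges for any $x\neq 0$.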
Another common use of the word "formal" is with a "formal system" which is basically a big rulebook for an artificial language comprised of an alphabet (a list of symbols), a grammar (a way of arranging those symbols), and axioms (initial lists of symbols to start from). The word "formal" here is needed because it is very prim and proper and only allows manipulations according to the grammar and axioms; you can't combine symbols in any way like you can in English (for example this ee cummings poem is an "acceptable" combination of the symbols of English, but is also seemingly "wrong" according to our standard grammar).
tomcuchta
The first series certainly converges for $|x| < 1$; why wouldn't you consider it a power series with a finite radius of convergence? (I would have chosen an example with zero radius of convergence, such as $\sum n! x^n$.) – Qiaochu Yuan Jul 27 '11 at 1:39
Yes, @Qiaochu's series is an excellent example; even with the zero radius of convergence, it can be manipulated formally to produce... interesting and useful identities. – J. M. is a poor mathematician Jul 27 '11 at 2:23
Thank you for the suggestion! – tomcuchta Jul 27 '11 at 4:39
"not considered as a power series in analysis since it does not converge for any $x\in \mathbf R$" - actually, it does... if $x=0$ that is. – J. M. is a poor mathematician Jul 27 '11 at 4:45
Formal proof systems
One context in which the word "formal" comes in, is that of formal proof systems.
A formal proof system is a way to write theorems and their proofs in the computer, such that once this has been done the proofs can be automatically verified by a computer.
In such systems, theorems are just sequences of characters (strings), and starting from your axiom strings, you use a few well defined rules to transform those strings mechanically, and obtain new true strings (thus making a proof).
The huge advantage of such systems is that since each proof step is so simple and mechanical, computers can verify proofs, which can be an extremely difficult and error prone task for humans to do!
There are also cases where the proof itself requires tedious verification of thousands of cases, and would be too time consuming for any human. One notable example of this is the four color theorem.
The downside of such systems is that they are much harder to write your proofs in, because in order to communicate with the computer you have to write everything in a very precise way.
I do believe, however, that if such systems are done well enough, with good tooling and standard libraries, writing a proof should be no harder than writing a computer program, and the benefits would largely outweigh the greater difficulty of writing the proof.
For a concrete, well-presented example, have a look at Metamath's awesome proof that 2 + 2 = 4: http://us.metamath.org/mpeuni/mmset.html#trivia
Metamath is an older proof system, and there are likely better choices today as I have mentioned at: What is the current state of formalized mathematics? but their web presentation is very nice!
Such proofs require of course defining everything in terms of things that the proof system understands. In the case of Metamath, Zermelo–Fraenkel-like set theory is used to my understanding.
A TL;DR version of the classic set theory approach would be:
we can use sets, forall, exists and modus ponens
the naturals can be defined in terms of sets like this: https://en.wikipedia.org/wiki/Set-theoretic_definition_of_natural_numbers
the rationals can be defined easily as ordered pairs of integers. Ordered pairs can be defined in terms of sets easily with Kuratowski's definition, see also: Please Explain Kuratowski Definition of Ordered Pairs
the reals can be defined in terms of sets with Dedekind cuts, see also: True Definition of the Real Numbers
functions are just a set of pairs: Is $f(x) = (x + 1)/(x +2)$ a function?
once we have reals and functions, note how the epsilon-delta definition of limits (written out just below) only uses concepts that we have previously defined: functions, reals, forall and exists! Once you see this, it is easy to believe that, at least, we can formalize real analysis in this simple system
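For reference, that definition reads (standard statement, added here for completeness):

$$\lim_{x\to a}f(x)=L\iff\forall\varepsilon>0\;\exists\delta>0\;\forall x\;\big(0<|x-a|<\delta\implies|f(x)-L|<\varepsilon\big),$$

which uses only real numbers, a function, the quantifiers and an implication, all of which were defined above.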
The fact that mathematics can be fully formalized is surprising, and was arguably only fully realized at the end of the 19th and beginning of the 20th century, in particular through the seminal Principia Mathematica, and materialized with the invention of computers.
This is in my opinion the property of mathematics that best defines it, and that which clearly separates its preciseness from other arts such as poetry.
Once we have maths formally modelled, one of the coolest results is Gödel's incompleteness theorems, which state that for any reasonable proof system, there are necessarily theorems that can be proven neither true nor false starting from the given set of axioms: those theorems are independent of those axioms. Therefore, there are three possible outcomes for any hypothesis: true, false or independent!
Some famous theorems have even been proven to be independent of some famous axioms. One of the most notable is that the Continuum Hypothesis is independent from ZFC! Such independence proofs rely on modelling the proof system inside another proof system, and forcing is one of the main techniques used for this.
Ciro Santilli 新疆改造中心法轮功六四事件
nature geoscience
The diverse meteorology of Jezero crater over the first 250 sols of Perseverance on Mars
J. A. Rodriguez-Manfredi ORCID: orcid.org/0000-0003-0461-98151,
M. de la Torre Juarez ORCID: orcid.org/0000-0003-1393-52972,
A. Sanchez-Lavega ORCID: orcid.org/0000-0001-7234-76343,
R. Hueso ORCID: orcid.org/0000-0003-0169-123X3,
G. Martinez ORCID: orcid.org/0000-0001-5885-236X4,
M. T. Lemmon ORCID: orcid.org/0000-0002-4504-51365,
C. E. Newman ORCID: orcid.org/0000-0001-9990-88176,
A. Munguira ORCID: orcid.org/0000-0002-1677-63273,
M. Hieta7,
L. K. Tamppari2,
J. Polkko7,
D. Toledo8,
E. Sebastian1,
M. D. Smith9,
I. Jaakonaho ORCID: orcid.org/0000-0001-7343-55567,
M. Genzer7,
A. De Vicente-Retortillo1,
D. Viudez-Moreiras ORCID: orcid.org/0000-0001-8442-37881,
M. Ramos ORCID: orcid.org/0000-0003-3648-681810,
A. Saiz-Lopez ORCID: orcid.org/0000-0002-0060-158111,
A. Lepinette ORCID: orcid.org/0000-0002-5213-35211,
M. Wolff5,
R. J. Sullivan ORCID: orcid.org/0000-0003-4191-598X12,
J. Gomez-Elvira1,
V. Apestigue ORCID: orcid.org/0000-0002-4349-80198,
P. G. Conrad13,
T. Del Rio-Gaztelurrutia ORCID: orcid.org/0000-0001-8552-226X3,
N. Murdoch ORCID: orcid.org/0000-0002-9701-407514,
I. Arruego ORCID: orcid.org/0000-0001-9705-97438,
D. Banfield ORCID: orcid.org/0000-0003-2664-016415,
J. Boland2,
A. J. Brown ORCID: orcid.org/0000-0002-9352-698916,
J. Ceballos ORCID: orcid.org/0000-0002-6727-106217,
M. Dominguez-Pumar ORCID: orcid.org/0000-0001-5439-795318,
S. Espejo ORCID: orcid.org/0000-0003-2609-266317,
A. G. Fairén1,12,
R. Ferrandiz1,
E. Fischer ORCID: orcid.org/0000-0002-2098-529519,
M. Garcia-Villadangos1,
S. Gimenez1,
F. Gomez-Gomez1,
S. D. Guzewich ORCID: orcid.org/0000-0003-1149-73859,
A.-M. Harri7,
J. J. Jimenez8,
V. Jimenez18,
T. Makinen ORCID: orcid.org/0000-0001-9489-81547,
M. Marin1,
C. Martin ORCID: orcid.org/0000-0002-8898-40611,
J. Martin-Soler1,
A. Molina ORCID: orcid.org/0000-0002-5038-20221,
L. Mora-Sotomayor ORCID: orcid.org/0000-0002-8209-11901,
S. Navarro ORCID: orcid.org/0000-0001-8606-77991,
V. Peinado1,
I. Perez-Grande ORCID: orcid.org/0000-0002-7145-283520,
J. Pla-Garcia1,
M. Postigo1,
O. Prieto-Ballesteros1,
S. C. R. Rafkin21,
M. I. Richardson6,
J. Romeral1,
C. Romero ORCID: orcid.org/0000-0001-5442-25811,
H. Savijärvi7,
J. T. Schofield2,
J. Torres8,
R. Urqui1,
S. Zurita1 &
the MEDA team
Nature Geoscience volume 16, pages 19–28 (2023)
Atmospheric dynamics
Inner planets
NASA's Perseverance rover's Mars Environmental Dynamics Analyzer is collecting data at Jezero crater, characterizing the physical processes in the lowest layer of the Martian atmosphere. Here we present measurements from the instrument's first 250 sols of operation, revealing a spatially and temporally variable meteorology at Jezero. We find that temperature measurements at four heights capture the response of the atmospheric surface layer to multiple phenomena. We observe the transition from a stable night-time thermal inversion to a daytime, highly turbulent convective regime, with large vertical thermal gradients. Measurement of multiple daily optical depths suggests aerosol concentrations are higher in the morning than in the afternoon. Measured wind patterns are driven mainly by local topography, with a small contribution from regional winds. Daily and seasonal variability of relative humidity shows a complex hydrologic cycle. These observations suggest that changes in some local surface properties, such as surface albedo and thermal inertia, play an influential role. On a larger scale, surface pressure measurements show typical signatures of gravity waves and baroclinic eddies in a part of the seasonal cycle previously characterized as low wave activity. These observations, both combined and simultaneous, unveil the diversity of processes driving change on today's Martian surface at Jezero crater.
The Perseverance rover landed on 18 February 2021 at 18.44° N 77.45° E, near the northwest rim of Jezero crater, on the inner northwest slopes of Isidis Planitia1. On board the rover is the most complete environmental station sent to date to another planet: the Mars Environmental Dynamics Analyzer (MEDA) instrument2. It includes new capabilities, compared with previous missions3,4,5,6,7,8, that enable better characterization of the diversity of physical processes driving near-surface environmental changes in Jezero. MEDA acquires data autonomously, on a regular and configurable basis, in sessions that typically cover more than 50% of a sol. Sampling sessions, typically one hour long, alternate every sol between even and odd hours, allowing for a complete characterization of daily and seasonal cycles every other sol (Extended Data Fig. 1 shows the temporal coverage of the measurements made). MEDA also provides context for the investigations that other rover instruments and systems are conducting and supports the planning of Ingenuity flights, as well as landing of a possible future mission to return samples collected by Perseverance.
In this Article, we present results for the first 250 sols of the mission (solar longitudes Ls = 6°–121°), northern hemisphere spring to early summer.
The active atmospheric surface layer
The atmospheric surface layer (ASL) is the lower part of the atmosphere in direct interaction with the surface, having a depth that varies on Mars from a few metres during daytime to tens of metres at night9,10. In the ASL, energy and mass exchanges between surface and atmosphere occur, and its hydrological cycle provides constraints on the photochemistry of surface and near-surface air. Most of the atmospheric dynamics in this layer are driven by radiative processes10. Albedo, net radiative flux and thermal inertia (TI) are key elements of that forcing and result in the radiative surface energy budget (SEB). MEDA's thermal infrared sensor (TIRS) and radiation and dust sensor (RDS) enable quantification of all SEB terms (see Methods for a detailed description of each term) on the surface of Mars (Fig. 1a), an important step in improving the predictive capabilities of numerical models.
Fig. 1: SEB components measured in situ in Jezero.
a, Diurnal variation of the SEB on sol 30 obtained from MEDA (symbols) and simulated with single-column models (solid lines; for further details, see Methods and Extended Data Fig. 2). b, Ground temperature measured by MEDA (black symbols) and obtained by solving the heat conduction equation (solid red line) using MEDA's net heat flux, G. The best fit is obtained for TI = 230 J m–2 K–1 s–1/2. c, Footprint (green shade) of the ground temperature sensor on sol 30. For horizontal terrains, this footprint covers an area of a few square metres. d, Diurnal evolution of broadband albedo on sols 125 (blue) and 209 (red). The albedo shows a minimum close to noon, with increasing values as the solar zenith angle increases. This non-Lambertian behaviour is similar in other sols, regardless of the type of terrain and geometry of the rover. e,f, As in c, for sols 125 (e) and 209 (f), with MEDA-derived TI values of 605 and 290 J m–2 K–1 s–1/2, respectively. Shown data result from the mean of 300 samples at the beginning and half of each hour, plus or minus the standard deviation. lmst, Local Mean Solar Time; SWd, downwelling solar flux; SWu, solar flux reflected by the surface; LWd, downwelling long-wave atmospheric flux; LWu, upwelling long-wave flux emitted by the surface; Tf, turbulent heat flux. Credit: panels c,e,f from NASA/JPL-Caltech.
For every sol on which Perseverance is parked, TI is obtained by minimizing the difference between measured and numerically simulated values of the diurnal amplitude of ground temperature (Fig. 1b). The MEDA-derived TI values range from 180 to 605 J m–2 K–1 s–½, as in Gale crater (ref. 11). The surface albedo (Fig. 1d) is obtained around the clock, using downwelling (0.2−1.2 μm) and reflected (0.3−3.0 μm) solar flux measurements (Fig. 1c,e,f). A radiative transfer model, COMIMART12, is used to convert both fluxes to 0.2−5.0 μm. The SEB is then measured and used as an upper boundary condition to solve the heat conduction equation for homogeneous terrains in models13,14. Figure 1a,b shows the diurnal cycle of retrieved fluxes and surface temperature, respectively.
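As a schematic of the two retrievals described here (not the MEDA processing pipeline), the albedo is a flux ratio and the TI retrieval is a one-parameter search against a forward thermal model; `simulate_diurnal_ground_temperature` below is a hypothetical stand-in for the single-column model.

```python
import numpy as np

def broadband_albedo(sw_up, sw_down, band_correction=1.0):
    """Instantaneous albedo as the ratio of reflected to downwelling solar flux.

    'band_correction' stands for the radiative-transfer factor that converts
    the measured bands to the full 0.2-5.0 micron range (left at 1.0 here).
    """
    return band_correction * np.asarray(sw_up) / np.asarray(sw_down)

def fit_thermal_inertia(t_ground_measured, ti_candidates):
    """Grid search: keep the TI whose modelled diurnal amplitude fits best."""
    amp_obs = np.max(t_ground_measured) - np.min(t_ground_measured)
    errors = []
    for ti in ti_candidates:
        t_model = simulate_diurnal_ground_temperature(ti)  # hypothetical forward model
        errors.append(abs((np.max(t_model) - np.min(t_model)) - amp_obs))
    return ti_candidates[int(np.argmin(errors))]
```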
Figure 1d shows the diurnal evolution of measured albedo on the sols with the highest (sol 125) and lowest (sol 209) values of the studied period. On both sols, the minimum value is reached near noon and increases as solar zenith angle (SZA) approaches 90°. The relative maximum at ~8:00 and ~17:00 occurs when SZA = ~55° and the specular reflection is within TIRS field of view (FoV). This behaviour points to non-Lambertian albedo at the surface, not observable from surface satellites in nadir pointing.
Importantly, modelling of the thermal and radiative environment shows that the effect of the radioisotope thermoelectric generator (RTG) on TIRS FoV ground heating is negligible. This fact has been verified by careful analysis of the ground temperature in consideration of the winds, which indicate that the effects are <0.5 K.
Near-surface thermal profile
Another new feature enabled by MEDA is the simultaneous tracking of temperatures at four heights—surface, 0.85 m, 1.45 m and about 40 m—around the clock (Fig. 2a). The variation of these temperatures along a sol reflects the four main regimes of the ASL: (1) daytime convection, (2) an evening transition where the convective boundary layer collapses, (3) a night-time steady regime and (4) a morning transition where the inversion fades and a convective boundary layer grows. Figure 2b–d shows an example of the daily evolution of the thermal gradient. Daytime convection peaks at noon with the maximum of the derivative of the temperature (T, in Kelvin) with the height (z, in meters) (dT/dz)max ≈ –35 K m–1 while night-time stable stratification peaks at 20:00 with (dT/dz)max ≈ +8 K m–1 in the first metre from the surface, reaching values well above the adiabatic gradient g/Cp = 0.0045 K m–1. Figure 2e shows the seasonal evolution of mean temperatures, where the daily average thermal gradient is dominated by the daytime convective period. In most sols, night-time thermal stability weakens as the night progresses, and unstable conditions often develop from 2:00 onward.
Fig. 2: Daily cycle of temperatures under typical values of TI.
a, Temperature at the surface (Tsurf; red), at z = 0.85 m (blue), z = 1.45 m (green) and z ≈ 40 m (green yellow). b, Thermal gradients from the surface to the near surface at 0.85 m (blue) and 1.45 m (green). c, Thermal gradient from the surface to 40 m (green yellow) and from 1.45 m to 40 m (grey). The adiabatic thermal gradient, dT/dz = –0.0047 K m–1, is shown in b and c with a horizontal light-blue line (a horizontal line that is partially covered by the curve (Tsurf − Tz=40 m)/Δz, which can be seen quite well in the center of the figure (LTST from 10 to 15 h)). Shown data in a–c result from the mean on 12 min intervals (740 samples), plus or minus the standard deviation. d, Seasonal evolution of mean daily air temperatures. TIRS and atmospheric temperature sensor temperatures (solid lines) at the surface (red), z = 0.85 m (blue), z = 1.45 m (green) and z = 40 m (green yellow) are compared with predictions from the Mars Climate Database21 (dashed lines). Yellow line shows the maximum irradiance measured by RDS. e, Statistical analysis of power spectra of temperature fluctuations at 0.85 m (blue), 1.45 m (green) and 40 m (yellow) between 10:00 and 15:00 h. Lines show the fit to the data in each frequency range. Figures reflect the estimated exponential indices. TI = 330 J m–2 K–1 s–1/2. Dots are averages over a 12 min window. ltst, local true solar time.
A similar observation was reported for InSight15 and attributed to the radiative influence of the hardware. In addition, Curiosity's Rover Environmental Monitoring Station instrument, measuring in the deeper Gale crater, has seen this inversion broken only during the global dust storm16. With MEDA, we observe that the night-time inversion depends on the local terrain properties. Measurements over high-TI terrain result in the break-up of the night-time thermal inversion due to warm night-time surfaces (Extended Data Fig. 3). However, air temperatures at different levels are not sensitive to the TI of the specific terrain, and the influence of the terrain on air temperatures decreases progressively from 0.85 to 40.00 m (Fig. 2d and Extended Data Fig. 3). Winds driven by horizontal gradients of surface thermal properties17 may be the origin of the discrepancies we observe between measurements and the predictions from one-dimensional radiative equilibrium models.
Temperature fluctuations are common throughout the sol; these rise after sunrise, peaking near noon (amplitudes ΔTmax ≈ 10 K), and are convective in nature. These subside before sunset and increase again during the break-up of the night-time inversion, suggesting strong nocturnal fluctuations (Extended Data Fig. 4). The oscillations are created by turbulent processes in the atmosphere whose characteristics can be investigated by analysing the spectral power density of temperatures, pressures and winds18,19,20. Figure 2e shows the power spectral densities of temperatures during the convective period, averaged over 250 sols. The results show typical slopes of turbulence in daytime hours, with changes at other times of the sol; these will be analysed in later works. MEDA enables the identification of different dynamical regimes, forced, inertial and dissipative, more clearly than with previous instruments19,20 and at different altitudes.
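A spectral analysis of this kind can be reproduced schematically with Welch's method; the sketch below uses placeholder data and illustrative frequency bounds rather than the authors' processing choices.

```python
import numpy as np
from scipy import signal

# placeholder series: air temperature sampled at ~1 Hz between 10:00 and 15:00
fs = 1.0
temperature = np.random.randn(18000)      # replace with a MEDA ATS time series

freq, psd = signal.welch(temperature - temperature.mean(), fs=fs, nperseg=4096)

# spectral index over an illustrative frequency band, as in Fig. 2e
band = (freq > 1e-2) & (freq < 1e-1)
slope = np.polyfit(np.log10(freq[band]), np.log10(psd[band]), 1)[0]
print("spectral index:", slope)
```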
Pressure fluctuations
Turbulence is also present in pressure and horizontal wind measurements. Figure 3a shows examples of the daily pressure cycle on different sols; this cycle has the contribution of different dynamic phenomena, as discussed in the following. Analysis of the rapid pressure fluctuations shows a power spectrum similar to that of temperature (Extended Data Fig. 5), different from that expected from Kolmogorov turbulence but similar to that measured by InSight15. During the stable night-time period, the pressure fluctuations are at the detector noise level.
Fig. 3: Fluctuations in the daily pressure cycle and wind patterns.
a, Daily pressure cycles for sols 101–105, showing deviations from the mean values (in 10 s intervals) caused by different dynamic phenomena. b, Mean values of wind intensity, averaged in intervals of 5 min. Error bars show the standard deviations. c, Wind azimuths in different time slots (dotted lines, ~120° azimuth, show the regional slope direction in the Isidis basin). We observe that upward winds predominate during the day and downward winds during the night, a trend attributable to local flows acting in Jezero and the interaction with other scales (top). Histogram of wind speed at those hours, that is, the frequency distribution of the occurrence of each speed, f(v) (bottom). d, Mean directions from which the winds blow, averaged in intervals of 5 min. Error bars show the standard deviations.
Convection generates transient events detectable through several MEDA sensors, especially in temperature and pressure data (Extended Data Fig. 6). Some events are dust devils (DDs), also detected as slight drops (∼0.4–26.0%) in radiation sensor readings or imaged with Perseverance cameras22. Jezero exhibits the highest abundance of DDs so far detected by a mission on the surface of Mars23,24. The pressure drops detected in this period range from ∼0.3 to 6.5 Pa in intensity and last from 1 to 200 s. DDs where simultaneous MEDA wind data are available have estimated diameters from 5 to 135 m (ref. 23), with vortices having rotational speeds of ~4–24 m s–1. A small number of these produced measurable albedo changes on the surface as they removed dust, observed through variations in the upward/downward radiation ratio measured with TIRS and RDS sensors (Extended Data Fig. 7)23.
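Schematically, such convective-vortex candidates can be flagged as short negative excursions below a slowly varying pressure background; the sketch below is illustrative only and does not reproduce the detection criteria of ref. 23.

```python
import numpy as np
import pandas as pd

def detect_pressure_drops(pressure_pa, fs_hz=1.0, window_s=120, threshold_pa=0.3):
    """Return indices of candidate convective-vortex pressure drops.

    The slowly varying background is a centred rolling median; samples more
    than 'threshold_pa' below it are flagged. The 0.3 Pa default matches the
    smallest drops reported here, but the criteria of ref. 23 may differ.
    """
    p = pd.Series(pressure_pa, dtype=float)
    window = max(3, int(window_s * fs_hz))
    background = p.rolling(window, center=True, min_periods=1).median()
    drop = (background - p).to_numpy()
    return np.flatnonzero(drop > threshold_pa)
```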
Wind speeds show a daily cycle, with maximum values of 7 m s–1 in the afternoon and near null between 4:00 and 6:00 (Fig. 3b). Strong gusts are detected with maximum speeds of 25 m s–1 at midday. Turbulence creates fluctuating winds of about 2–4 m s–1 at night, where they are probably responses to horizontal shear flows25, and 5–7 m s–1 during the convective hours (Fig. 3c). Pre-landing atmospheric modelling predicted that daytime upflows and night-time downflows on the Isidis basin slopes would dominate the overall wind pattern, with Jezero local topography causing a relatively small but measurable effect13,14. Wind data support the dominance of daytime upslope currents from roughly southeast, with a reversal of winds at night14 (Fig. 3c,d). These diurnal wind patterns drive aeolian erosion at Jezero23 and show a variety of behaviours resulting from a complex interaction among regional circulation, slope winds and interaction with the general circulation and the large-scale Hadley cell flow.
Atmospheric dust properties
SkyCam is tracking the regular morning–evening opacity cycle in the 600–800 nm wavelength range as a function of time (Fig. 4a)2. During the clear season covered by this study, persistently higher opacity is observed in the morning (optical depth (OD) ∼0.5) than in the afternoon (OD ∼0.4) (refs. 26, 27). Analysis of RDS data at different wavelengths and observation geometries allows determination of the optical properties of the dust, and derivation of the OD variation at high temporal resolution, similar to the more sparsely sampled Viking record28 (Fig. 4b). The dust optical properties and OD are estimated by comparing the temporal variation of the measured sky spectral intensity with radiative transfer simulations (Fig. 4c shows an example). Most of the particle-size information is obtained when the Sun trajectory is near one of the RDS lateral sensor's FoV. From these observations, we found particle sizes ranging from ∼1.2 to 1.4 μm, consistent with previous studies29,30,31, with corrections suggested by refs. 32,33,34. To estimate the non-sphericity of those particles, the T-matrix approach was used to compute the phase function, single-scattering albedo and extinction cross section.
Fig. 4: Daily monitoring of atmospheric opacity and characterization of dust particles and clouds.
a, OD derived from SkyCam images to follow the day–night cycle in the visual range (550 nm). Values are observed to be higher in the morning than in the afternoon, consistent with measurements made occasionally with MastCam-Z22. b, RDS observations also allow the derivation of the OD at a temporal resolution of 1 s, enabling the study of short-duration events, such as DDs. An example of a temporal increase in opacity due to a nearby DD occurred on sol 21 around 15:11. c, An example of dust particle radius and OD estimation using RDS observations at different wavelengths and radiative transfer simulations. The best fit is obtained for an effective radius reff = 1.4 μm. d, Variation of the colour index (CI), defined as the ratio between RDS zenith observations at 450 and 950 nm (ref. 41), as a function of the SZA, measured on sol 296. The SZA of maximum CI indicates that this cloud layer is above 45 km. e, Aerosol OD (AOD) at 9 µm retrieved from TIRS observations for sols 30 and 200. TIRS observations enable AOD to be retrieved at all local times. The large diurnal variation for sol 200 is probably caused by water-ice clouds.
These first sols of the mission fell within the aphelion cloud belt season and near the peak latitude for water-ice clouds35. It is therefore likely that some of the afternoon opacity in Fig. 4a, including the increase around sol 70 and most of the morning–afternoon difference, is due to water-ice hazes36,37. Clouds were observed around sols 70 and 180, but discrete clouds were not typically present during daytime22. During this season, a low dust OD (0.3–0.6), with low variability, is typical of other sites38,39. We have also found cloud signatures during daytime and twilight. In the latter cases (Fig. 4d)40, we could constrain the cloud altitudes using radiative transfer simulations41. In most cases, we found cloud altitudes around or above 40 km and particle sizes larger than 1 μm (indicative of water-ice particles).
The daily OD evolution was also retrieved from TIRS infrared measurements, strengthening the case for night-time clouds. Figure 4e shows the diurnal variation of thermal infrared aerosol OD, contributed by both dust and water-ice clouds, as a function of local time for two representative sols. Because TIRS observes thermal infrared radiation, this retrieval is possible for all local times, including during the night, a capability not available on previous rovers except by Mini-TES on board Mars Exploration Rover, which could make only limited and occasional night-time observations42. The OD observed by TIRS during sol 30 (Ls = 20°) shows a moderate variation with greater opacity at night than during the day. By sol 200, the aphelion season cloud belt43 was near its peak annual amplitude, and data reveal a notable diurnal variation in OD with maximum clouds shortly after dawn. This ability to track OD throughout each sol is a powerful tool that leads to new insights about how dust and water-ice clouds interact with the surface and the rest of the atmosphere.
A complex humidity cycle
Measuring the relative humidity (RH) in the ASL on diurnal and seasonal timescales is a key element in understanding hydrological processes in the Martian atmosphere44. MEDA's humidity sensor (HS) often finds a nocturnal hydrological cycle more complex than anticipated in numerical predictions13. Figure 5a shows the daily and seasonal behaviour of RH, while Fig. 5b shows the daily maxima recorded in the period studied. Likewise, Fig. 5c shows the seasonal variations of night-time water-vapour volume mixing ratio (VMR). Within the diurnal cycle, the maximum, typically in the range 15–30% in RH (referring to HS temperature), occurs in the early morning, with maximum VMR being reached around midnight. Nocturnal water-vapour amounts at Jezero during the seasons are lower than those at Gale and model predictions13.
Fig. 5: Daily and seasonal cycles of RH and VMR observed at Jezero.
a, Daily and seasonal evolution of RH for sols 80–250, Ls 44–122°. The uncertainty in the estimation of this magnitude, which is temperature dependent, is <3.5% in the temperature range recorded by HS. b, Seasonal evolution of the maximum RH values observed, reaching an absolute maximum of 29% RH, referring to the temperature recorded by HS itself, for sols 80–250, Ls 44–122°. c, Seasonal behaviour of the night-time VMR derived from RH for sols 64–250, averaging seconds 2–5 from the beginning of each acquisition session. The uncertainty in the estimation of this magnitude, for the temperatures and the RH range shown in a, is <20%. Maintenance regeneration heating events of the sensor heads are marked as blue bars (these data should not be taken into consideration).
MEDA measured a seasonal minimum in night-time VMR near Ls = 70°, with higher abundance and greater variability at the end of the season (Fig. 5c). In addition, a large increase in VMR was observed on the evening of sol 104 (see Extended Data Fig. 8 for details of the evolution of VMR and RH on those sols), accompanied by cooling atmospheric temperatures. The increase in RH slowly returned to typical values, while the temperature continued dropping. This behaviour may be due to a single dry air mass advected over the rover bringing cold, dry air, which then remains in the area. An alternative explanation is that it is related to the local surface and possible exchange processes. In the early morning of sol 105, frost conditions were possible as seen by comparing the calculated frost point (at 1.45 m) with the TIRS ground temperature. A resulting hypothesis is that, if subsurface exchange is occurring, the actual frost point at the atmosphere–surface interface may be lower owing to less vapour present in the atmosphere.
Non-local dynamical phenomena
The daily pressure cycles at Jezero showed a rich variability, reflecting the action of different dynamical mechanisms in the atmosphere under a variety of spatial and temporal scales (Fig. 3a). On the seasonal scale, as the northern polar cap sublimated, the daily mean pressure increased from 735 Pa on sols 15–20 (Ls = 13°–16°) to a maximum of 761 Pa on sols 99–110 (Ls = 52°–57°). It then gradually decreased to 650 Pa in sol 250 (Ls = 125°) as the southern polar cap grew (Fig. 6a).
Fig. 6: Atmospheric pressure variability as a consequence of various dynamic mechanisms at various scales.
a, Seasonal evolution of daily mean pressure values and their daily ranges. b, Detrended standard deviation of pressure, as a function of sol and LTST, after subtracting least squares fit of the observed pressure. Removing the tidal components shows that the dominant contribution to pressure variability occurs during the hours with strong convection and after a calm period. c, The first three components of the Fourier analysis of the pressure cycle: diurnal (24 h period; black), semidiurnal (12 h period; blue) and terdiurnal (6 h period; red) components. d, Equivalent Fourier analysis of the temperature data at z = 1.45 m: diurnal (blue) and semidiurnal (red) components.
Extended Data Fig. 9a shows the deviations of the daily pressure averages of Fig. 6a from a polynomial fit of degree 5, observing oscillations with periods in the range of 3–5 sols and amplitudes varying between 1 and 3 Pa, a signature of what could be high-frequency travelling waves arising from baroclinic instabilities also reported elsewhere13. In addition, Extended Data Fig. 9b shows the regular oscillations of the residuals resulting from a fit to the approximately 1 h measurement series that, on average, have a peak-to-peak amplitude of 0.2–0.4 Pa and periods between 12 and 20 min. The properties of these oscillations are similar to those reported in Gale, observed by the Rover Environmental Monitoring Station instrument on the Curiosity rover45 and in Elysium Planitia by the InSight mission15, which have been interpreted as produced by the passage of gravity waves. Figure 6b shows the seasonal and diurnal pressure variability where the daily patterns of pressure changes are observed.
Thermal tides cause the large modulation of the daily pressure (P) cycle46. A Fourier analysis of that cycle shows that up to six components are present, with maximum amplitudes ranging from 0.2 to 10.0 Pa. Figure 6c shows the wide variability of the normalized amplitude of diurnal and semidiurnal tides. The maximum relative change observed in the studied period is (δP/〈P〉)max ≈ 0.013. Smaller changes also occurred in tidal components 3 and 4. The semidiurnal component showed a very strong drop between sols 20 and 50; although still under investigation, it may suggest a relation to the dust loading present on those sols or the development of disturbances at the polar cap edge46. Tides are also detected in the Fourier analysis of the temperature data with half amplitudes of 26 K (diurnal), 2–6 K (semidiurnal) and 2–4 K (terdiurnal), as shown in Fig. 6d. The tidal variability is related mainly to changes in atmospheric opacity produced by clouds and dust loading at different altitudes22. The period studied is the non-dusty season on Mars, which makes the variability relatively low.
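One simple way to carry out such a tidal decomposition is to least-squares fit sol-period harmonics to each sol of pressure data. The sketch below is a minimal illustration of that idea; the function and variable names, and the synthetic example values, are assumptions for illustration and are not the MEDA processing pipeline.

```python
import numpy as np

def tidal_components(t_hours, pressure, n_harmonics=3):
    """Least-squares fit of sol-period harmonics to one sol of pressure data.
    Returns the mean pressure and (amplitude, phase) of the diurnal,
    semidiurnal and terdiurnal components."""
    t = np.asarray(t_hours) / 24.0                       # fraction of a sol
    p = np.asarray(pressure)
    cols = [np.ones_like(t)]                             # design-matrix columns
    for n in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * n * t), np.sin(2 * np.pi * n * t)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), p, rcond=None)
    comps = []
    for n in range(1, n_harmonics + 1):
        a, b = coef[2 * n - 1], coef[2 * n]
        comps.append((np.hypot(a, b), np.arctan2(b, a)))  # amplitude, phase
    return coef[0], comps

# Synthetic example only (not MEDA data): a 6 Pa diurnal plus 3 Pa semidiurnal tide
t = np.linspace(0.0, 24.0, 1441)
p = 740.0 + 6.0 * np.cos(2 * np.pi * t / 24 - 1.0) + 3.0 * np.cos(4 * np.pi * t / 24)
mean_p, comps = tidal_components(t, p)
print(mean_p, [round(amp, 2) for amp, _ in comps])        # ~740, [6.0, 3.0, 0.0]
```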
A rich and dynamic near-surface environment
The Mars 2020 mission includes a payload to monitor an environment that exhibits a rich diversity of behaviours. Many of the measurements made by MEDA so far have been obtained on Mars for the first time, revealing interesting surprises in Jezero's atmosphere.
The SEB is measured for the first time in situ. The design of future engineering systems, the understanding and modelling of photochemical reactions at the surface and the interpretation of satellite measurements benefit from these results. An example is the characterization of the non-Lambertian reflection of the surface that must be considered in the interpretation of orbital observations of variations in albedo when trying to understand changes in the physical properties of the surface.
Globally, the measured daily temperature cycle agrees with model predictions (although with some deviations in the vertical temperature gradient), with the expected magnitude of thermal oscillations and with the seasonal evolution. In addition, the observed vortex convective activity matches the predictions of large eddy simulations using the MarsWRF model23. However, when analysing vertical temperature profiles, we find a diversity of nocturnal responses that raise intriguing questions about what is happening at the different locations traversed by the rover.
Several independent radiation sensors and methods have measured the occurrence and even development of night-time clouds long before the peak of the cloud season. The ability to track OD throughout each sol is a powerful tool that has shown the prevalence of clouds near dawn and will lead to new insights about how dust and water-ice clouds interact with the surface and the rest of the atmosphere.
The observed nocturnal hydrologic cycle is more complex than anticipated by the models and was also observed at Gale. This unpredicted behaviour may be due to a variety of causes yet to be explored in detail.
Thermal tides show smaller pressure amplitudes at Jezero when compared with Curiosity's observations47 or Viking48. While the general behaviour at Jezero was predicted by the models13,14, there are differences in its amplitude and timing probably due to the interaction of local topography with the air mass exchanges between the interior and exterior of the basin. Another interesting result is the existence of multisol waves at a time of the season when baroclinic waves have been observed to have very limited activity46.
Overall, MEDA observations show a dynamic environment rich in atmospheric phenomena that is different from other locations on Mars studied by previous missions. The characterization of Jezero's atmosphere plays an important role in the development of the Mars Sample Return mission and in the exposure of the samples being collected by Perseverance.
MEDA operational strategy
MEDA can operate continuously and independently of the rover's battery charge cycles, 24 hours a day, following a measurement programme sent from Earth (Extended Data Fig. 1). The observation sessions are subject to power availability and, occasionally, to incompatibilities with other activities to be performed by the rover.
Because of these restrictions, MEDA's usual measurement sequence consists of acquiring all the quantities that the instrument records during alternating hours. On the following sol, the measurement hours are swapped, so that complete daily coverage of all quantities is achieved every two sols.
TIRS measurements and analysis
The TIRS is an infrared radiometer with five channels that measure downward radiation (IR1), air temperature (IR2), reflected short-wave radiation (IR3), upward long-wave radiation (IR4) and ground temperature (IR5)2. We use the ratio between TIRS IR3 and the 'total light' Top7 detector of the RDS (description follows) measurements as a proxy for surface albedo. Since TIRS and RDS measure in different bands, a correction is made using the COMIMART radiative transfer model12.
TI can be straightforwardly derived across Perseverance's traverse by using MEDA values of the net heat flux into the ground as the upper boundary condition to solve the heat conduction equation for homogeneous terrains. This quantity governs the thermal amplitude in the shallow subsurface from diurnal to seasonal timescales, and therefore accurate estimations of TI can be useful to constrain the thermal environment of the samples collected by the Mars 2020 mission. For this estimation, numerical models need to simulate the SEB (see paragraphs that follow in this section). MEDA measures the SEB, allowing for a more in situ-based estimation of the TI.
Concerning TIRS infrared fluxes, the two upward-viewing sensors of TIRS (IR1 and IR2) enable the total column optical depth of aerosol above the rover to be retrieved. The observed signal in TIRS IR1 is sensitive to a combination of atmospheric temperatures and total aerosol optical depth (dust plus water-ice cloud). The atmospheric temperature profile is taken from concurrent observations by the interferometric thermal infrared spectrometer (EMIRS) instrument on board the Emirates Mars Mission49 with temperatures near the surface modified to match the observed TIRS IR2 signal. A radiative transfer model including aerosol scattering is then used to find the aerosol optical depth that would produce the observed TIRS IR1 signal. The estimated uncertainty in these retrievals is ±0.03.
RDS measurements and analysis
The RDS comprises two sets of eight photodiodes (RDS-DP) and a camera (SkyCam)2,50. One set of photodiodes is pointed upward, with each one covering a different wavelength range between 190 and 1,200 nm. The other set is pointed sideways, 20° above the horizon, and they are spaced 45° apart in azimuth to sample all directions at a single wavelength.
HS measurements and analysis
The HS directly provides the local RH and the local sensor temperature. Combined with the pressure data provided by the MEDA pressure sensor (PS), the water-vapour VMR can be calculated, too. The HS has two measurement modes: continuous measurement and high-resolution interval mode (HRIM). In HRIM, the HS is powered on for only 10 s and then powered off to avoid self-heating. HRIM provides the measurements with the best accuracy, but continuous measurements are beneficial for monitoring changes in RH during short periods.
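As a concrete illustration of that conversion, the sketch below computes VMR as RH times the saturation vapour pressure over ice divided by the total pressure. The Magnus-type saturation formula and all numbers are assumptions chosen for illustration only; they are not the calibration used by the MEDA team.

```python
import numpy as np

def e_sat_ice(T_kelvin):
    """Saturation vapour pressure over ice [Pa], Magnus-type approximation
    (an assumed standard formula, not MEDA's own calibration)."""
    Tc = np.asarray(T_kelvin) - 273.15
    return 610.7 * np.exp(21.875 * Tc / (265.5 + Tc))

def vmr_from_rh(rh_percent, T_kelvin, p_total_pa):
    """Water-vapour volume mixing ratio from RH (referred to ice) and pressure."""
    e_h2o = (rh_percent / 100.0) * e_sat_ice(T_kelvin)
    return e_h2o / p_total_pa

# Round-number example (hypothetical values, not actual MEDA readings):
print(vmr_from_rh(rh_percent=20.0, T_kelvin=200.0, p_total_pa=740.0) * 1e6, "ppmv")
```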
PS measurements and analysis
The PS is a set of two capacitive transducers that provide the hydrostatic pressure as a function of the local temperature; to this end, the sensor also records its own temperature2.
They are the Barocaps RSP2M and NGM (due to the internal operation of the sensor, only one or the other can work, not both simultaneously), which can be operated at 0.5 or 1 Hz; the RSP2M has a worse resolution and worse stability than the NGM, but the RSP2M's warm-up time is shorter.
Atmospheric temperature sensor measurements and analysis
The atmospheric temperature sensors (ATSs) are thin thermocouples: three are distributed azimuthally around the remote-sensing mast (RSM) at a height of 1.45 m and two are mounted on the front sides of the rover at a height of 0.85 m. They provide local temperature measurements at a configurable rate. The arrangement of three sensors around the RSM ensures that at least one of them is located downwind most of the time, producing a clean measurement of air temperature. The two sensors at 0.85 m are more shielded from the environment.
A systematic comparison of ATS data and wind measurements, including the rover orientation on each individual sol, guided us to use the following rules to select the appropriate ATS to characterize the unperturbed atmosphere. During daylight hours, the sensor at z = 1.45 m measuring the lowest temperature is generally the one located downwind. In certain wind conditions, two sensors can be located downwind, and they show very similar temperatures (within 0.1 K) and equivalent oscillations. At z = 0.85 m during daylight hours, we select the sensor with the lowest temperature. To take into account changes in wind direction that modify the selected ATS, we consider slow transitions from one sensor to another at each level when the sensor measuring the lowest temperature changes. During night-time, the smaller values of the winds and the sheltered location of the two ATSs at z = 0.85 m typically result in one sensor generally much warmer than the other. Comparison with environment winds indicates that the lowest temperature sensor is always the most exposed to environment winds with the lowest thermal perturbations from the rover. In the RSM, all three ATSs can experience radiative cooling effects from the rover deck. Thus, when two ATSs have highly correlated values and thermal oscillations, we use the average of them instead of the ATS with the lowest temperature. Thermal perturbations from the radioisotope thermal generator are easily observed at night at z = 1.45 m and identified from the wind data and rover orientation and do not have any noticeable effect on the results here presented. The radioisotope thermal generator, located at the back of the rover, does not generally cause detections in the sheltered detectors in the front of the rover at z = 0.85 m.
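A schematic sketch of the daytime part of this selection rule (take the coldest sensor at a given level, or the average of two sensors when they agree to within 0.1 K) is shown below. It is a simplification for illustration only: the slow transitions between sensors and the night-time handling described above are not reproduced, and the array names are hypothetical.

```python
import numpy as np

def select_ambient_temperature(ats_temps, agree_tol=0.1):
    """Daytime rule of thumb: the ATS reading the lowest temperature is taken
    as the most exposed to the ambient (downwind) flow; if the two lowest
    readings agree within `agree_tol` kelvin, their average is used instead.

    ats_temps: array of shape (n_samples, n_sensors) for one mast level.
    Returns an array of shape (n_samples,) with the selected temperature."""
    T = np.sort(np.asarray(ats_temps, dtype=float), axis=1)
    lowest, second = T[:, 0], T[:, 1]
    return np.where(second - lowest <= agree_tol, 0.5 * (lowest + second), lowest)

# Illustrative: three RSM sensors at z = 1.45 m, two time samples.
print(select_ambient_temperature([[245.20, 245.25, 247.80],
                                  [243.90, 246.10, 246.00]]))
```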
Wind measurements and analysis
The wind sensor consists of two horizontal booms placed on the RSM 1.5 m above the rover base and rotated in azimuth 120° with respect to each other.
This placement allows at least one of them to be outside the thermal disturbance and the wake of the rover for any wind direction. The level of confidence in the measurements, and the analysis of the perturbing effect that the rover has on them, therefore depend on the wind direction and speed.
Thus, the data retrieval procedure implemented on the ground consists of the weighted combination of the local wind-speed retrievals from each individual sensor, thereby obtaining the free-flow wind estimate. The weighting of each contribution is established on the basis of the results of the computational fluid dynamic models developed to evaluate how the free wind flow is affected by the rover hardware, thus allowing interpretation of the local wind measurements at each boom location. More details of this process are provided in ref. 2.
SkyCam image analysis
The SkyCam imager is integrated inside the RDS sensor (described in the preceding) and permanently pointed at the Martian sky. The orientation of the camera is not motorized, and its optics are fixed.
SkyCam optical depths were measured via extinction determined through direct solar imaging2. The image field of view includes an annular ND-5 coating: when the Sun is within that coated region twice each sol, it appears comparable in brightness to the sky outside the annulus. Flux from the Sun was integrated after removal of background signal (mostly dark current). From the flux and atmospheric path at the time of each image, optical depth is calculated as τ = −ln(F/F0)/η, where τ is normal optical depth, F is observed solar flux, F0 is a calibration parameter representing solar flux in the absence of an atmosphere and η is air mass, which is the ratio of atmospheric column mass on the observed ray to normal column mass. Because SkyCam cannot be calibrated by observing a wide range of air masses or changing the camera versus atmosphere geometry40, it was calibrated by comparison with MastCam-Z-derived optical depths22.
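The following minimal sketch applies that extinction relation; the function name and the numbers are illustrative assumptions and do not represent a real SkyCam calibration.

```python
import numpy as np

def skycam_optical_depth(flux, flux_top, airmass):
    """Normal optical depth from Beer-Lambert extinction:
    F = F0 * exp(-tau * eta)  =>  tau = ln(F0 / F) / eta,
    where flux_top (F0) is the no-atmosphere calibration flux and
    airmass (eta) is the slant-to-vertical column-mass ratio."""
    return np.log(np.asarray(flux_top) / np.asarray(flux)) / np.asarray(airmass)

# Illustrative numbers only (not a real calibration): tau ~ 0.39
print(skycam_optical_depth(flux=1200.0, flux_top=2000.0, airmass=1.3))
```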
The current estimates of SkyCam opacity uncertainties are 0.07, while those for AM–PM differences are 0.08, and neither varies much among points. The variation among AM–PM differences is not substantial for sols 50–250. However, the population average is 0.10 ± 0.01 for sols 50–250 and –0.01 ± 0.04 before sol 50.
Derivation and importance of SEB
MEDA enables in situ quantification of the SEB on Mars. To this end, conservation of energy at the surface–atmosphere interface of Mars requires that
$$G = ({\mathrm{SW}}_{\rm{d}}-{\mathrm{SW}}_{\rm{u}}) + ({\mathrm{LW}}_{\rm{d}}-{\mathrm{LW}}_{\rm{u}}) + T_{\mathrm{f}}-L_{\mathrm{f}} \qquad (1)$$
where G represents the net heat flux into the ground, SWd is the downwelling solar flux, SWu is the solar flux reflected by the surface, LWd is the downwelling long-wave atmospheric flux, LWu is the upwelling long-wave flux emitted by the surface, Tf is the turbulent heat flux and Lf is the latent heat flux. By convention, radiative fluxes directed towards the surface (warming) and nonradiative fluxes (Tf, Lf, and G) directed away from the surface (cooling) are taken as positive in equation (1). Moreover, the radiative fluxes are plugged into equation (1) as positive values, whereas nonradiative fluxes can be plugged in as positive or negative depending on whether they are directed away from or towards the surface.
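Numerically, equation (1) is a simple flux balance. The helper below evaluates it; the signs of the nonradiative terms must follow the convention just described, and the example values are round numbers chosen for illustration, not MEDA measurements.

```python
def net_ground_heat_flux(sw_down, sw_up, lw_down, lw_up, turb_flux, latent_flux=0.0):
    """Net heat flux into the ground from equation (1):
    G = (SWd - SWu) + (LWd - LWu) + Tf - Lf, all fluxes in W m-2.
    Radiative fluxes are entered as positive values; Tf and Lf carry the
    sign convention described in the text (Lf defaults to zero, as it is
    neglected at Jezero)."""
    return (sw_down - sw_up) + (lw_down - lw_up) + turb_flux - latent_flux

# Illustrative midday-like values (hypothetical, not actual data):
print(net_ground_heat_flux(sw_down=500.0, sw_up=100.0, lw_down=60.0,
                           lw_up=350.0, turb_flux=-20.0))
```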
RDS measures SWd between 0.2 and 1.2 µm, while TIRS measures SWu between 0.3 and 3 µm and LWd and LWu between 6.5 and 30 µm (refs. 2,52,53). As required in quantifications of the SEB, measured radiative fluxes must be extended to the entire short-wave (0.2–5 µm) and long-wave (5–80 µm) ranges. To extend SWd and SWu, we use the radiative transfer model COMIMART12 along with measured values of aerosol optical depth from the MastCam-Z instrument22. We note that the incoming radiation between 0.2 and 1.2 µm accounts for around 78% of the entire short-wave flux and that the uncertainty in this extension is small because COMIMART includes wavelength-dependent dust radiative properties, accounting for the variations (smaller than 3% for the majority of conditions) in the conversion factor as a function of dust opacity and solar zenith angle. Similarly, we assume a surface emissivity, ϵ, of 0.99 and use the Stefan–Boltzmann law to extend LWu. This value of ϵ minimizes the difference between the ground temperature measured by TIRS (8–14 µm) and the ground temperature derived from LWu. To extend LWd, we use the University of Helsinki/Finnish Meteorological Institute adsorptive subsurface–atmosphere single column model (SCM) along with measured values of aerosol optical depth51. Figure 1a shows the diurnal evolution of each of the SEB terms on a particular sol, both obtained from MEDA (symbols) and simulated by SCM (solid lines). Extended Data Fig. 2 shows the evolution of these magnitudes recorded by MEDA over the first 250 sols. The excellent agreement demonstrates that MEDA's measurements are robust.
The turbulent heat flux is defined as \(T_{\mathrm{f}} = \rho _{\mathrm{a}}c_{\mathrm{p}}\overline {w^\prime T^\prime }\), where ρa is the air density, cp = 736 J kg–1 K–1 is the specific heat of CO2 gas at constant pressure and \(\overline {w^\prime T^\prime }\) is the covariance between turbulent departures of the vertical wind speed, w', and temperature, T'. These departures are typically calculated over periods of a few minutes16. As MEDA measurements of w are not yet available, we use the drag transfer method54 to indirectly calculate the turbulent heat flux as:
$$T_{\mathrm{f}} = k^2U_{\mathrm{a}}\rho _{\mathrm{a}}c_{\mathrm{p}}f(R_{\mathrm{B}})\frac{{(T_{\mathrm{g}} - T_{\mathrm{a}})}}{{{\mathrm{ln}}^2(z_{\mathrm{a}}/z_0)}} \qquad (2)$$
where k = 0.4 is the von Karman constant, za = 1.45 m is the height at which the air temperature and horizontal wind speed (Ua) are measured, z0 is the surface roughness (set to 1 cm (ref. 55)) and f(RB) is a function of the bulk Richardson number that accounts for the thermal stability in the near surface of Mars56.
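A minimal sketch of equation (2) follows; the constants mirror those quoted in the text, while the air density, temperatures, wind speed and the value of the stability function f(RB) in the example are hypothetical placeholders (the stability scheme itself is not reproduced here).

```python
import numpy as np

def turbulent_heat_flux(u_a, rho_a, T_g, T_a, f_rb,
                        k=0.4, c_p=736.0, z_a=1.45, z_0=0.01):
    """Drag-transfer estimate of the turbulent (sensible) heat flux, equation (2):
    Tf = k^2 * Ua * rho_a * c_p * f(R_B) * (Tg - Ta) / ln^2(z_a / z_0).
    f_rb is the bulk-Richardson stability function, supplied by the caller."""
    return (k**2 * u_a * rho_a * c_p * f_rb * (T_g - T_a)
            / np.log(z_a / z_0)**2)

# Illustrative daytime numbers (rho_a ~ 0.017 kg m-3; not real MEDA data):
print(turbulent_heat_flux(u_a=5.0, rho_a=0.017, T_g=270.0, T_a=250.0, f_rb=1.5))
```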
We note that Lf has been neglected in the SEB because formation or sublimation of surface ice has not been detected at Jezero to date, with maximum near-surface relative humidity values below 25% for the first 250 sols.
Derivation of thermal inertia using MEDA measurements
We use MEDA measurements of the SEB as the upper boundary condition to solve the heat conduction equation in the soil for homogeneous terrains:
$$\frac{{\partial T\left( {z,t} \right)}}{{\partial t}} = \left( {\frac{{{\mathrm{TI}}}}{{\rho c_{\mathrm{p}}}}} \right)^2\frac{{\partial ^2T\left( {z,t} \right)}}{{\partial z^2}} \qquad (3)$$
$$- \frac{{{\mathrm{TI}}^2}}{{\rho c_{\mathrm{p}}}}\frac{{\partial T\left( {z = 0,t} \right)}}{{\partial z}} = G = \left( {{\mathrm{SW}}_{\mathrm{d}}-{\mathrm{SW}}_{\mathrm{u}}} \right) + \left( {{\mathrm{LW}}_{\rm{d}}-{\mathrm{LW}}_{\rm{u}}} \right) + T_{\mathrm{f}}-L_{\mathrm{f}} \qquad (4)$$
$$T\left( {z = z_{\mathrm{d}},t} \right) = T_{\mathrm{d}}, \qquad (5)$$
where TI is the thermal inertia, ρ is the soil density, c is the soil specific heat and zd is the depth at which the subsurface temperature is constant and equal to Td. Here we assume that ρc = 1.2 × 10^6 J m–3 K–1 and zd = 3 × L, where \(L = \left( {\frac{{{\mathrm{TI}}}}{{\rho c}}} \right)\sqrt {\frac{2}{\omega }}\) is the diurnal e-folding depth and ω = 7.0774 × 10^–5 s–1 is the angular speed of Mars's rotation.
Under these assumptions, Td and TI are the only unknowns, which can be solved by best fitting the solution to equations (3)–(5) to measured values of the daily minimum ground temperature and diurnal amplitude in ground temperature, respectively. As analysed in ref. 54, the solution to equations (3)–(5) depends primarily on TI, with considerably smaller variations as a function of Td, zd and ρc.
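To make this fitting procedure concrete, the sketch below solves equations (3)–(5) with a simple explicit finite-difference scheme, using G(t) as the upper boundary condition, and then selects the TI whose simulated diurnal ground-temperature amplitude best matches an observed one. The grid, time step, spin-up length, toy forcing and all numerical values are illustrative assumptions, not the scheme used by the authors.

```python
import numpy as np

SOL = 88775.0                                   # seconds in a sol
OMEGA = 2 * np.pi / SOL

def simulate_surface_temperature(G_of_t, TI, rho_c=1.2e6, T_d=210.0,
                                 n_sols=5, dt=20.0, n_layers=60):
    """Explicit finite-difference solution of equations (3)-(5): flux boundary
    condition G(t) at the surface, fixed temperature T_d at depth z_d = 3L.
    Returns the surface temperature over the last simulated sol (schematic)."""
    kappa = (TI / rho_c) ** 2                   # thermal diffusivity [m2 s-1]
    k_soil = TI ** 2 / rho_c                    # conductivity [W m-1 K-1]
    L = (TI / rho_c) * np.sqrt(2.0 / OMEGA)     # diurnal e-folding depth [m]
    dz = 3.0 * L / n_layers
    T = np.full(n_layers + 1, T_d)              # initial isothermal profile
    surface = []
    n_steps = int(n_sols * SOL / dt)
    for step in range(n_steps):
        G = G_of_t((step * dt) % SOL)
        T[0] = T[1] + G * dz / k_soil           # -k dT/dz = G at the surface
        T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[-1] = T_d                             # lower boundary, equation (5)
        if step >= (n_sols - 1) * SOL / dt:     # keep only the last sol
            surface.append(T[0])
    return np.array(surface)

def fit_thermal_inertia(G_of_t, observed_amplitude, ti_grid=None):
    """Pick the TI whose simulated diurnal ground-temperature amplitude is
    closest to the observed one (coarse grid search, for illustration)."""
    ti_grid = np.arange(100.0, 700.0, 25.0) if ti_grid is None else ti_grid
    amps = [np.ptp(simulate_surface_temperature(G_of_t, ti)) for ti in ti_grid]
    return ti_grid[np.argmin(np.abs(np.asarray(amps) - observed_amplitude))]

# Toy sinusoidal forcing and amplitude, for illustration only (not MEDA data):
G_toy = lambda t: 120.0 * np.sin(OMEGA * t)
print(fit_thermal_inertia(G_toy, observed_amplitude=80.0))
```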
Derivation of albedo using MEDA measurements
We calculate the broadband albedo in the 0.3–3.0 µm range as
$$\alpha = {\mathrm{SW}}_{\rm{u}}^{0.3 - 3\upmu {\mathrm{m}}}/{\mathrm{SW}}_{\rm{d}}^{0.3 - 3\upmu {\mathrm{m}}}$$
Here, \({\mathrm{SW}}_{\rm{u}}^{0.3 - 3\upmu {\mathrm{m}}}\) is the reflected solar flux measured directly by TIRS, while \({\mathrm{SW}}_{\rm{d}}^{0.3 - 3\upmu {\mathrm{m}}}\) is the downwelling solar flux calculated with COMIMART by extending RDS Top7 measurements from 0.19–1.20 to 0.3–3.0 µm. On the basis of uncertainties in measured solar fluxes, the relative error in albedo is <10% in the vicinity of noon and <20% towards sunset and sunrise.
Description of COMIMART and UH/FMI SCM
COMIMART includes wavelength-dependent radiative properties of the Martian atmospheric constituents. Dust radiative properties are calculated from the refractive indices derived from satellite observations32,57. We have assumed a dust particle effective radius of 1.5 µm and an effective variance of 0.3. The surface albedo is also assumed to depend on wavelength, with low values in the ultraviolet, increasing towards the near-infrared spectral region58. However, this value barely impacts simulations of downwelling solar fluxes. Solar radiative fluxes are calculated using the delta-Eddington approximation59. The model has been validated for different solar elevations, dust abundances and scattering regimes12, concluding that it is accurate for the majority of conditions that can be found at Jezero crater.
Unlike COMIMART, the University of Helsinki/Finnish Meteorological Institute single-column model (UH/FMI SCM) allows the simulation of long-wave fluxes. SCM handles solar (short-wave) radiation with a fast broadband delta-two-stream approach for CO2 and dust, and thermal (long-wave) radiation with a fast broadband emissivity approach for CO2, H2O and dust. These two schemes have been validated through comparisons with a multiple-scattering resolving model60. As in COMIMART, dust optical parameters were obtained using refractive indices in refs. 32, 57. This constrained the dust SW single-scattering albedo to 0.90 and the asymmetry parameter to 0.70, which are the values used in our reference model.
When initialized with aerosol opacities retrieved from MastCam-Z at 880 nm, COMIMART and SCM simulate nearly identical solar fluxes, with relative departures <5% at noon.
Retrieval of dust scattering properties
Dust optical properties are estimated by simulating the RDS measurements2 throughout the sol, using radiative transfer simulations. The irradiance (E) measured by a given RDS channel is computed as:
$$E = C \times [T\left( {\mu _0,\varphi _0} \right) \times e^{\frac{{ - \tau }}{{\mu _0}}} + \mathop {\smallint }\limits_0^{2\pi } \mathop {\smallint }\limits_{ - 1}^0 I\left( {\tau ,\mu _0,\varphi _0,\mu ,\varphi } \right) \times T\left( {\mu ,\varphi } \right) \times R \times \mu \times {\mathrm{d}}\mu {\mathrm{d}}\varphi ]$$
where I is the scattered intensity simulated with a radiative transfer model, μ0 and φ0 are the cosine of the solar zenith angle and the solar azimuth angle, μ and φ are the cosine of the zenith angle and the azimuth angle of the observation, T and R are the sensor angular response and responsivity, C is a constant and τ is the total vertical opacity. Extended Data Fig. 10a shows RDS Top5 (750 nm) signals simulated for different dust opacities and an effective radius of reff = 1.2 μm.
Here we see that, by varying the dust opacity, the simulated signals change in value and shape. Similar results are found when we vary the effective radius of the particles, keeping constant the dust number density. By changing reff, we vary the dust opacity through the particle cross section, the particle phase function and the single-scattering albedo. Estimation of the free parameters (for example dust number density or dust particle radius) is performed by finding the optimal value of the parameter that minimizes the differences between observations and simulations, as shown in Extended Data Fig. 10b, using the Levenberg–Marquardt procedure.
The dust particle radius is estimated using (1) the observations made by the RDS lateral photodiodes2, whose FoV is narrow (±5°), when the Sun is near the FoV of one of those detectors (this occurs when the Sun is at low elevation); at that time, RDS lateral detectors provide observations at a wide range of scattering angles (sensitive to the particle phase function); (2) the observations made by the top photodiodes at different wavelengths (for example, 450, 650, 750, 950 nm). In this case, the opacity at each channel wavelength depends on the particle radius, so by fitting the observations made at different wavelengths simultaneously, we can derive the size of the dust particles. To better constrain the particle radius, the observations from RDS and SkyCam can be combined: the opacity derived from SkyCam can be used when fitting the observations made by RDS, in which case the particle radius is the only free parameter in the retrieval.
Once the particle radius is estimated, we can provide the dust opacity at different time intervals, such as the daily or the hourly average, by assuming the particle radius does not change much throughout the sol.
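The sketch below illustrates the fitting machinery only: a forward model is evaluated across solar zenith angles and its free parameters are adjusted by Levenberg–Marquardt least squares. The forward model here is a crude direct-beam-plus-diffuse placeholder, not the radiative transfer simulation used in the paper, and all names and numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, sza_deg):
    """Placeholder for the radiative-transfer simulation of an RDS channel:
    a direct-beam term plus a crude single-scattering diffuse term.
    Used only to make the fitting procedure below runnable."""
    tau, gain = params
    mu0 = np.cos(np.radians(sza_deg))
    direct = np.exp(-tau / mu0)
    diffuse = 0.25 * (tau / mu0) * np.exp(-0.5 * tau / mu0)
    return gain * (direct + diffuse)

def fit_rds_signal(sza_deg, observed, p0=(0.5, 1.0)):
    """Estimate dust opacity (and a gain factor) by minimising the
    observation-minus-simulation residuals with Levenberg-Marquardt."""
    residuals = lambda p: forward_model(p, sza_deg) - observed
    return least_squares(residuals, x0=p0, method="lm")

# Synthetic "observations" generated with tau = 0.45, gain = 0.9 (illustrative):
sza = np.linspace(30.0, 80.0, 25)
obs = forward_model((0.45, 0.9), sza)
print(fit_rds_signal(sza, obs).x)     # recovers approximately (0.45, 0.9)
```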
All datasets of Mars 2020 are available via the Planetary Data System (PDS). Data are delivered to the PDS according to the Mars 2020 Data Management Plan available in the Mars 2020 PDS archive (https://doi.org/10.17189/1522849). Data from the MEDA instrument referenced in this paper are available from the PDS Atmospheres node. The direct link is https://pds-atmospheres.nmsu.edu/data_and_services/atmospheres_data/PERSEVERANCE/meda.html.
Farley, K. A. et al. Mars 2020 mission overview. Space Sci. Rev. 216, 142 (2020).
Rodriguez-Manfredi, J. A. et al. The Mars Environmental Dynamics Analyzer, MEDA. A suite of environmental sensors for the Mars 2020 mission. Space Sci. Rev. 217, 48 (2021).
Hess, S. L., Henry, R. M., Leovy, C. B., Ryan, J. A. & Tillman, J. E. Meteorological results from the surface of Mars: Viking 1 and 2. J. Geophys. Res. 82, 4559–4574 (1977).
Schofield, J. T. et al. The Mars Pathfinder atmospheric structure investigation/meteorology (ASI/MET) experiment. Science 278, 1752–1758 (1997).
Taylor, P. A. et al. Temperature, pressure and wind instrumentation on the Phoenix meteorological package. J. Geophys. Res. 113, E00A10 (2008).
Smith, M. D. et al. First atmospheric science results from the Mars Exploration Rovers Mini-TES. Science 306, 1750–1753 (2004).
Gómez-Elvira, J. et al. REMS: the environmental sensor suite for the Mars Science Laboratory rover. Space Sci. Rev. 170, 583–640 (2012).
Banfield, D. et al. InSight Auxiliary Payload Sensor Suite (APSS). Space Sci. Rev. 215, 4 (2019).
Petrosyan, A. et al. The Martian atmospheric boundary layer. Rev. Geophys. 49, 2010RG000351 (2011).
Read, P. L., Lewis, S. R. & Mulholland, D. P. The physics of Martian weather and climate: a review. Rep. Prog. Phys. 78, 125901 (2015).
Martínez, G. M. et al. The surface energy budget at Gale crater during the first 2500 sols of the Mars Science Laboratory mission. J. Geophys. Res. Planets 126, e2020JE006804 (2021).
Vicente-Retortillo, Á., Valero, F., Vázquez, L. & Martínez, G. M. A model to calculate solar radiation fluxes on the Martian surface. J. Space Weather Space Clim. 5, A33 (2015).
Pla-García, J. et al. Meteorological predictions for Mars 2020 Perseverance rover landing site at Jezero Crater. Space Sci. Rev. 216, 148 (2020).
Newman, C. et al. Multi-model meteorological and aeolian predictions for Mars 2020 and the Jezero Crater region. Space Sci. Rev. 217, 20 (2021).
Banfield, D. et al. The atmosphere of Mars as observed by InSight. Nat. Geo. 13, 190–198 (2020).
Guzewich, S. D. et al. Mars Science Laboratory observations of the 2018/Mars year 34 global dust storm. Geophys. Res. Lett. https://doi.org/10.1029/2018GL080839 (2019).
Siili, T. Modeling of albedo and thermal inertia induced mesoscale circulations in the midlatitude summertime Martian atmosphere. J. Geophys. Res. Planets 101, 14957–14968 (1996).
Tillman, J. E., Landberg, L. & Larsen, S. E. The boundary layer of Mars: fluxes, stability, turbulent spectra, and growth of the mixed layer. J. Atmos. Sci. 51, 1709–1727 (1994).
Larsen, S. E., Jørgensen, H. E., Landberg, L. & Tillman, J. E. Aspects of the atmospheric surface layers on Mars and Earth. Boundary Layer Meteorol. 105, 451–470 (2002).
Davy, R. et al. Initial analysis of air temperature and related data from the Phoenix MET station and their use in estimating turbulent heat fluxes. J. Geophys. Res. 115, E00E13 (2010).
Millour, E. et al. The Mars climate database (MCD version 5.2). EPSC Abstr. 10, abstr. 438 (2015).
Bell, J. F. III et al. Geological, multispectral, and meteorological imaging results from the Mars 2020 Perseverance rover in Jezero crater. Sci. Adv. 8, eabo4856 (2022).
Newman, C. E. et al. The dynamic atmospheric and aeolian environment of Jezero Crater, Mars. Sci. Adv. 8, eabn3782 (2022).
Ellehoj, M. D. et al. Convective vortices and dust devils at the Phoenix Mars mission landing site. J. Geophys. Res. 115, E00E16 (2010).
Read, P. L. et al. in The Atmosphere and Climate of Mars (eds Haberle, R. M. et al.) Ch. 7 (Cambridge Univ. Press, 2017); https://doi.org/10.1017/9781139060172.007
Kahre, M. et al. in The Atmosphere and Climate of Mars (eds Haberle, R. M. et al.) Ch. 10 (Cambridge Univ. Press, 2017); https://doi.org/10.1017/9781139060172.010
Tamppari, L. K., Zurek, R. W. & Paige, D. A. Viking-era diurnal water-ice clouds. J. Geophys. Res. 108, 5073 (2003).
Colburn, D. S., Pollack, J. B. & Haberle, R. M. Diurnal variations in optical depth at Mars. Icarus 79, 159–189 (1989).
Stamnes, K., Tsay, S. C., Wiscombe, W. & Jayaweera, K. Numerically stable algorithm for discrete-ordinate-method radiative transfer in multiple scattering and emitting layered media. Appl. Opt. 27, 2502–2509 (1988).
Tomasko, M. G., Doose, L. R., Lemmon, M., Smith, P. H. & Wegryn, E. Properties of dust in the Martian atmosphere from the Imager on Mars Pathfinder. J. Geophys. Res. 104, 8987–9007 (1999).
Lemmon, M. T. et al. Atmospheric imaging results from the Mars Exploration rovers: Spirit and Opportunity. Science 306, 1753–1756 (2004).
Wolff, M. J. et al. Wavelength dependence of dust aerosol single scattering albedo as observed by the compact reconnaissance imaging spectrometer. J. Geophys. Res. 114, E00D04 (2009).
Lemmon, M. T. et al. Large dust aerosol sizes seen during the 2018 Martian global dust event by the Curiosity rover. Geophys. Res. Lett. 46, 9448–9456 (2019).
Chen-Chen, H., Pérez-Hoyos, S. & Sánchez-Lavega, A. Dust particle size and optical depth on Mars retrieved by the MSL navigation cameras. Icarus 319, 43–57 (2019).
Wolff, M. J. et al. Mapping water ice clouds on Mars with MRO/MARCI. Icarus 332, 24–49 (2019).
Tamppari, L. K., Zurek, R. W. & Paige, D. A. Viking era water-ice clouds. J. Geophys. Res. https://doi.org/10.1029/1999JE001133 (2000).
Hale, A. S., Tamppari, L. K., Bass, D. S. & Smith, M. D. Martian water ice clouds: a view from Mars Global Surveyor Thermal Emission Spectrometer. J. Geophys. Res. 116, E04004 (2011).
Lemmon, M. T. et al. Dust aerosol, clouds, and the atmospheric optical depth record over 5 Mars years of the Mars Exploration Rover mission. Icarus 251, 96–111 (2015).
Montabone, L. et al. Martian year 34 column dust climatology from Mars climate sounder observations: reconstructed maps and model simulations. J. Geophys. Res. Planets https://doi.org/10.1029/2019JE006111 (2020).
Toledo, D. et al. Measurement of dust optical depth using the solar irradiance sensor (SIS) onboard the ExoMars 2016 EDM. Planet. Space Sci. 138, 33–43 (2017).
Toledo, D., Rannou, P., Pommereau, J. P., Sarkissian, A. & Foujols, T. Measurement of aerosol optical depth and sub-visual cloud detection using the optical depth sensor (ODS). Atmos. Meas. Tech. 9, 455–467 (2016).
Smith, M. D. et al. One Martian year of atmospheric observations using MER Mini-TES. J. Geophys. Res. 111, E12S13 (2006).
Clancy, R. T. et al. in The Atmosphere and Climate of Mars (eds Haberle, R. M. et al.) Ch 5 (Cambridge Univ. Press, 2017); https://doi.org/10.1017/9781139060172.005
Tamppari, L. K. & Lemmon, M. T. Near-surface atmospheric water vapor enhancement at the Mars Phoenix lander site. Icarus https://doi.org/10.1016/j.icarus.2020.113624 (2020).
Guzewich, S. D. et al. Gravity wave observations by the Mars Science Laboratory REMS pressure sensor and comparison with mesoscale atmospheric modeling with MarsWRF. J. Geophys. Res. Planets https://doi.org/10.1029/2021je006907 (2021).
Barnes, J. R. et al. in The Atmosphere and Climate of Mars (eds Haberle, R. M. et al.) Ch. 9 (Cambridge Univ. Press, 2017); https://doi.org/10.1017/9781139060172.009
Haberle, R. M. et al. Preliminary interpretation of the REMS pressure data from the first 100 sols of the MSL mission. J. Geophys. Res. Planets 119, 440–453 (2014).
Zurek, R. W. Inferences of dust opacities for the 1977 Martian great dust storms from Viking Lander 1 pressure data. Icarus 45, 202–215 (1981).
Edwards, C. S. et al. The Emirates Mars Mission (EMM) Emirates Mars infrared spectrometer (EMIRS) instrument. Space Sci. Rev. 217, 77 (2021).
Apestigue, V. et al. Radiation and dust sensor for Mars Environmental Dynamics Analyzer onboard M2020 rover. Sensors 22, 2907 (2022).
Savijärvi, H. I. et al. Humidity observations and column simulations for a warm period at the Mars Phoenix lander site: constraining the adsorptive properties of regolith. Icarus 343, 113688 (2020).
Sebastian, E. et al. Radiometric and angular calibration tests for the MEDATIRS radiometer onboard NASA's Mars 2020 mission. Measurement 164, 107968 (2020).
Sebastián, E. et al. Thermal calibration of the MEDA-TIRS radiometer onboard NASA's Perseverance rover. Acta Astronaut. 182, 144–159 (2021).
Martínez, G. M. et al. Surface energy budget and thermal inertia at Gale Crater: calculations from ground‐based measurements. J. Geophys. Res. Planets 119, 1822–1838 (2014).
Hébrard, E. et al. An aerodynamic roughness length map derived from extended Martian rock abundance data. J. Geophys. Res. Planets https://doi.org/10.1029/2011je003942 (2012).
Savijärvi, H. & Kauhanen, J. Surface and boundary-layer modelling for the Mars Exploration Rover sites. Q. J. R. Meteorol. Soc. A 134, 635–641 (2008).
Wolff, M. J., Clancy, R. T., Goguen, J. D., Malin, M. C. & Cantor, B. A. Ultraviolet dust aerosol properties as observed by MARCI. Icarus 208, 143–155 (2010).
Mustard, J. F. & Bell, J. F. III New composite reflectance spectra of Mars from 0.4 to 3.14 μm. Geophys. Res. Lett. 21, 353–356 (1994).
Joseph, J. H., Wiscombe, W. J. & Weinman, J. A. The delta-Eddington approximation for radiative flux transfer. J. Atmos. Sci. 33, 2452–2459 (1976).
Savijärvi, H., Crisp, D. & Harri, A.-M. Effects of CO2 and dust on present‐day solar radiation and climate on Mars. Q. J. R. Meteorol. Soc. 131, 2907–2922 (2005).
This work has been funded by the Spanish Ministry of Economy and Competitiveness, through the projects no. ESP2014-54256-C4-1-R (also -2-R, -3-R and -4-R); Ministry of Science, Innovation and Universities, projects no. ESP2016-79612-C3-1-R (also -2-R and -3-R); Ministry of Science and Innovation/State Agency of Research (10.13039/501100011033), projects no. ESP2016-80320-C2-1-R, RTI2018-098728-B-C31 (also -C32 and -C33), RTI2018-099825-B-C31, PID2019-109467GB-I00 and PRE2020-092562; Instituto Nacional de Técnica Aeroespacial; Ministry of Science and Innovation's Centre for the Development of Industrial Technology; Spanish State Research Agency (AEI) Project MDM-2017-0737 Unidad de Excelencia "María de Maeztu"—Centro de Astrobiología; Grupos Gobierno Vasco IT1366-19; and European Research Council Consolidator Grant no 818602. The US co-authors performed their work under sponsorship from NASA's Mars 2020 project, from the Game Changing Development programme within the Space Technology Mission Directorate and from the Human Exploration and Operations Directorate. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). G.M. acknowledges JPL funding from USRA Contract Number 1638782. A.G.F. is supported by the European Research Council, Consolidator Grant no. 818602.
A full list of members and their affiliations appears at the end of the paper.
A full list of members and their affiliations appears in the Supplementary Information.
Centro de Astrobiología (INTA-CSIC), Madrid, Spain
J. A. Rodriguez-Manfredi, E. Sebastian, A. De Vicente-Retortillo, D. Viudez-Moreiras, A. Lepinette, J. Gomez-Elvira, A. G. Fairén, R. Ferrandiz, M. Garcia-Villadangos, S. Gimenez, F. Gomez-Gomez, M. Marin, C. Martin, J. Martin-Soler, A. Molina, L. Mora-Sotomayor, S. Navarro, V. Peinado, J. Pla-Garcia, M. Postigo, O. Prieto-Ballesteros, J. Romeral, C. Romero, R. Urqui & S. Zurita
Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA
M. de la Torre Juarez, L. K. Tamppari, J. Boland & J. T. Schofield
Dept. Física Aplicada, Universidad del País Vasco (UPV/EHU), Bilbao, Spain
A. Sanchez-Lavega, R. Hueso, A. Munguira & T. Del Rio-Gaztelurrutia
Lunar and Planetary Institute, Houston, TX, USA
G. Martinez
Space Science Institute, Boulder, CO, USA
M. T. Lemmon & M. Wolff
Aeolis Corporation, Sierra Madre, CA, USA
C. E. Newman & M. I. Richardson
Finnish Meteorological Institute, Helsinki, Finland
M. Hieta, J. Polkko, I. Jaakonaho, M. Genzer, A.-M. Harri, T. Makinen & H. Savijärvi
Instituto Nacional de Técnica Aeroespacial (INTA), Madrid, Spain
D. Toledo, V. Apestigue, I. Arruego, J. J. Jimenez & J. Torres
NASA Goddard Space Flight Center, Greenbelt, MD, USA
M. D. Smith & S. D. Guzewich
Dept. Física y Matemáticas, Universidad de Alcalá, Alcalá de Henares, Spain
Institute of Physical Chemistry Rocasolano, CSIC, Madrid, Spain
A. Saiz-Lopez
Cornell University, Ithaca, NY, USA
R. J. Sullivan & A. G. Fairén
Carnegie Institution, Washington, DC, USA
P. G. Conrad
Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO), Université de Toulouse, Toulouse, France
N. Murdoch
NASA Ames Research Center, Mountain View, CA, USA
D. Banfield
Plancius Research, Severna Park, MD, USA
A. J. Brown
Instituto de Microelectrónica de Sevilla (US-CSIC), Seville, Spain
J. Ceballos & S. Espejo
Dept. de Ingeniería Electrónica, Universidad Politécnica de Cataluña, Barcelona, Spain
M. Dominguez-Pumar & V. Jimenez
Dept. of Climate and Space Sciences and Engineering, University of Michigan, Ann Arbor, MI, USA
E. Fischer
Dept. de Mecánica de Fluidos y Propulsión Aeroespacial, Universidad Politécnica de Madrid, Madrid, Spain
I. Perez-Grande
Southwest Research Institute, Boulder, CO, USA
S. C. R. Rafkin
M. de la Torre Juarez, A. Sanchez-Lavega, R. Hueso, G. Martinez, M. T. Lemmon, C. E. Newman, A. Munguira, M. Hieta, L. K. Tamppari, J. Polkko, D. Toledo, E. Sebastian, M. D. Smith, I. Jaakonaho, M. Genzer, A. De Vicente-Retortillo, D. Viudez-Moreiras, M. Ramos, A. Saiz-Lopez, A. Lepinette, R. J. Sullivan, J. Gomez-Elvira, V. Apestigue, P. G. Conrad, T. Del Rio-Gaztelurrutia, I. Arruego, D. Banfield, J. Boland, J. Ceballos, M. Dominguez-Pumar, S. Espejo, A. G. Fairén, R. Ferrandiz, E. Fischer, M. Garcia-Villadangos, S. Gimenez, F. Gomez-Gomez, S. D. Guzewich, A.-M. Harri, J. J. Jimenez, V. Jimenez, T. Makinen, M. Marin, C. Martin, J. Martin-Soler, A. Molina, L. Mora-Sotomayor, S. Navarro, V. Peinado, I. Perez-Grande, J. Pla-Garcia, M. Postigo, O. Prieto-Ballesteros, S. C. R. Rafkin, M. I. Richardson, J. Romeral, C. Romero, H. Savijärvi, J. T. Schofield, J. Torres, R. Urqui & S. Zurita
J.A.R.-M. is the principal investigator of the MEDA instrument; together with M.d.l.T.J. and A. Sanchez-Lavega, the three are the corresponding authors of this manuscript. These corresponding authors have led the review and editing of the document. In addition, all authors have contributed to the conceptualization of the instrument and of this work. J.A.R.-M., M.d.l.T.J., A. Sanchez-Lavega, M.G., I.A., A-M.H., L.M.-S., V.P. and R.U. have contributed to obtaining and managing the resources that have made this work possible. J.A.R.-M., M.d.l.T.J., A. Sanchez-Lavega, R.H., G.M., M.T.L., C.E.N., A. Munguira, M.H., L.K.T., J.P., D.T., E.S., M.D.S., I.J., M.G., A.D.V.-R. and D.V.-M. have participated in the writing of this manuscript, as well as M.R., A. Saiz-Lopez, A.L., M.W., R.J.S., J.G.-E., V.A., P.G.C., T.D.R.-G., N.M., A.J.B., D.B., A.G.F., E.F. and S.N., who contributed to the revision of the document in its different stages. J.A.R.-M., M.d.l.T.J., A. Sanchez-Lavega, R.H., G.M., M.T.L., C.E.N., L.K.T., D.T., E.S., J.G-E., V.A. and J.T.S. defined the corresponding methodology. J.A.R.-M., M.d.l.T.J., A. Sanchez-Lavega, R.H., G.M., M.T.L., C.E.N., A. Munguira, M.H., L.K.T., J.P., D.T., E.S., M.D.S., I.J., A.D.V.-R., D.V.-M., A.L., M.W., J.G.-E., V.A., N.M., M.D.-P., A.G.F., E.F., S.D.G., A.-M.H., M.M., S.N., J.P.-G, S.C.R.R., M.I.R. and H.S. are involved on a daily basis with analysis of the recorded data. J.A.R.-M., M.d.l.T.J., R.H., G.M., M.T.L., C.E.N., A. Munguira, L.K.T., D.T., M.D.S., A.D.V.-R., D.V.-M., A.L., M.W., R.J.S., V.A., P.G.C., N.M., D.B., E.F., S.G., F.G.-G., M.M., C.M., A. Molina, L.M.-S., S.N., V.P., J.P.-G., C.R., J.T., R.U. and S.Z. participated in the operation of the instrument. J.A.R.-M., M.d.l.T.J., G.M., M.T.L., M.H., L.K.T., D.T., E.S., M.D.S., A.L., J.G.-E., V.A., J.B., J.C., M.D.-P., S.E., E.F., J.J.J., M.M., J.M.-S., L.M.-S., S.N., V.P., J.R. and J.T. took part in the curation and validation of data.
Correspondence to J. A. Rodriguez-Manfredi, M. de la Torre Juarez or A. Sanchez-Lavega.
Nature Geoscience thanks Jim Murphy and Peter Read for their contribution to the peer review of this work. Primary Handling Editor: Tamara Goldin, in collaboration with the Nature Geoscience team.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended Data Fig. 1 MEDA's temporal coverage during the first sols on Mars.
Temporal coverage during the first few sols of Perseverance on the surface of Mars showing, as an example, one of the magnitudes recorded by the instrument. Blank periods correspond to situations of mission necessity (start-up, software updates, etc.), or to specific incidents affecting the instrument. The color code corresponds to the temperature recorded by the air temperature sensor (ATS) at z = 1.45 m, averaged every 5 minutes.
Extended Data Fig. 2 Seasonal evolution of the SEB terms over the first 250 sols on Mars, as recorded by MEDA.
Seasonal evolution for the first 250 sols of the M2020 mission of (a) The atmospheric opacity at 880 nm retrieved from MastCam-Z between 06:00 and 18:00 LMST. (b) Daily maximum, mean and minimum values of the net heat flux into the soil (G). (c) Daily maximum downwelling solar flux in the 0.2–5 μm range (SWd) at the surface (red) and at the top of the atmosphere (gray). The seasonal evolution of SWd at the surface is governed by that at the top of the atmosphere during the aphelion season, when the aerosol opacity is low and relatively stable. (d) Daily maximum solar flux in the 0.2–5 μm range reflected by the surface (SWu). (e) Daily maximum upwelling longwave flux in the 6–80 μm range emitted by the surface (LWu). (f) Daily maximum downwelling atmospheric longwave flux in the 6–80 μm range (LWd). All these values were calculated as described in the Methods section.
Extended Data Fig. 3 Daily cycle temperatures under typical values of thermal inertia and atmosphere's thermal gradients.
Daily cycle of temperatures at Jezero under low and high values of TI of the local terrain, and thermal gradients of the atmosphere. (a-c) Low thermal inertia with TI = 260 J·m-2·K-1·s-1/2, as in sols 20-22. (d-f) High thermal inertia terrain with TI = 545 J·m-2·K-1·s-1/2, as in sols 140-150. The color legend is as in Fig. 2. Low surface thermal inertia (left panels) results in more stable conditions with respect to vertical motions during nighttime than high values of the surface thermal inertia. This is due to the strong locality of surface temperatures, which change strongly in different terrains, compared to a more regular behavior of air temperatures (a and d). The data shown in these figures are the result of averaging the values over 12-minute time intervals, with the standard deviation as the error bar.
Extended Data Fig. 4 Analysis of thermal oscillations along the sols.
Analysis of thermal oscillations along the sol, for the periods (a) sols 20-22, (b) sols 75-76, (c) sols 140-150, corresponding to different TI values. Blue dots correspond to ATS measurements at z = 0.84 m, green dots to ATS measurements at z = 1.45 m, and yellow dots to TIRS measurements at z ~40 m. The large amplitudes around 20:00 h correspond to thermal contamination by the RTG at that time, as confirmed by simultaneous wind and rover yaw. The oscillations were calculated by detrending the temperature data in each half-hour period using a 2nd order polynomial.
Extended Data Fig. 5 Analysis of the rapid pressure fluctuations.
Power spectrum of pressure fluctuations from sols 231 to 241. The straight line determines the power law fit in the frequency range 0.001 to 0.1 Hz.
Extended Data Fig. 6 Pressure drops as observed by MEDA.
Pressure drops on MEDA data. (a) Daily distribution of pressure drops identified using the algorithm described in ref. 14. (b) Selection of events with at least a pressure drop of 0.5 Pa and a pressure curve compatible with a vortex after visualization of each individual event. (c) Selection of events in (b) that also have a simultaneous drop in the light measured by the RDS Top7 photodiode with a drop of at least 0.5%. Histograms have been corrected for sampling effects. The error bars represent the standard deviation obtained in Monte Carlo simulations that reproduce the number of events detected per hour, using a fixed probability equal to the total number of events divided by the total number of hours of observations. The central value represents the mean number of events detected per hour. (d-h) Examples of a variety of pressure drops including: (d) weak vortices typical of the early morning; (e) night-time pressure drops coincident with increases in temperature and driven by thermal pulses from the RTG when the wind flows from the back of the rover to the Remote Sensing Mast (wind data not shown); (f) the most dusty event captured in the first 250 sols, also one of the most intense and longest; (g) long and noisy pressure drops typically found at noon and suggestive of the passage of the boundaries of convective cells; (h) long vortex just after sunset. Further information on events like (f) and (g) is given in ref. 23.
Extended Data Fig. 7 Example of a Dust Devil record observed by MEDA, on sol 166.
Dust Devil record observed on sol 166; the pink line reflects the sudden pressure drop caused by the passage of the convective vortex, while the blue line shows the jump caused in the radiation ratio measured by TIRS and RDS, versus the same magnitude on preceding sols (cyan line). The data shown in the plot are means over 10-s intervals, ± the standard deviation.
Extended Data Fig. 8 Evolution of VMR and RH on sols of maximum variability.
The MEDA instrument has recorded a more complex hydrological cycle than expected. (a) Detail of the daily evolution of RH on characteristic sols (sols 104 to 110), where the complex and temporally variable structure of the nocturnal hydrological cycle can be observed, sometimes not foreseen by the simulation models. (b) Daily evolution of the VMR for the sols shown in (a). A large increase in the VMR was observed on the evening of sol 104.
Extended Data Fig. 9 Analysis of pressure deviations recorded by MEDA.
Deviations from the recorded pressure values suggest the possibility of high-frequency traveling waves. (a) Oscillations obtained in the differences between the daily mean values in Fig. 6a and the polynomial fit of degree 5 of those values (coefficient of determination, R-squared = 0.985), where 2 intervals are observed: before and after sol ~80. (b) Oscillations obtained in the infra-daily differences. Due to their properties, these oscillations are compatible with the propagation of gravity waves.
Extended Data Fig. 10 Estimation of the optical properties of atmospheric dust from RDS data.
Simulated signals for the RDS Top5 channel for different opacities and effective radius. (a) RDS Top5 channel signals simulated for different dust opacities and a constant dust effective radius of reff = 1.2 μm. (b) Best fit between the simulations and the observations obtained.
MEDA team consortium members.
Rodriguez-Manfredi, J.A., de la Torre Juarez, M., Sanchez-Lavega, A. et al. The diverse meteorology of Jezero crater over the first 250 sols of Perseverance on Mars. Nat. Geosci. 16, 19–28 (2023). https://doi.org/10.1038/s41561-022-01084-0
Issue Date: January 2023
Nature Geoscience Research Briefing 09 Jan 2023
Nature Geoscience (Nat. Geosci.) ISSN 1752-0908 (online) ISSN 1752-0894 (print)
Incircles and Excircles in a Triangle
The points of tangency of the incircle of $\Delta ABC$ with the sides $a,$ $b,$ $c$ define cevians that meet at the Gergonne point of the triangle; throughout, $p = (a + b + c)/2$ denotes the semiperimeter. This follows immediately from Ceva's theorem and the fact that two tangents to a circle from a point outside the circle are equal.
The length of those tangents from the vertices of the triangle to its incircle can be easily determined. Denote them $x,$ $y,$ $z,$ as in the diagram. We have three equations:
(1) $\begin{align} x + y &= c \\ y + z &= a \\ z + x &= b, \end{align}$
(2) $x + y + z = p.$
Subtracting from (2) equations (1) one at a time, we get
(3) $\begin{align} x &= p - a\\ y &= p - b \\ z &= p - c. \end{align}$
These are the lengths that appear in Heron's formula, e.g., $p - a = (b + c - a)/2.$
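As a quick numeric sanity check (not part of the original argument), solving the system (1)–(2) for an arbitrary triangle reproduces the lengths in (3); the side lengths below are made up.

```python
import numpy as np

# Tangent lengths x, y, z satisfy (1): x + y = c, y + z = a, z + x = b.
a, b, c = 7.0, 8.0, 9.0                      # arbitrary example triangle
A = np.array([[1, 1, 0],                     # x + y = c
              [0, 1, 1],                     # y + z = a
              [1, 0, 1]], dtype=float)       # z + x = b
x, y, z = np.linalg.solve(A, np.array([c, a, b]))

p = (a + b + c) / 2
print((x, y, z))                  # (5.0, 4.0, 3.0)
print((p - a, p - b, p - c))      # the same values, as in (3)
```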
Similarly we can find the lengths of the tangents to the excircles.
Obviously, $2p = (b + u) + (c + v).$ But, since $(b + u) = (c + v),$ we get
(4) $\begin{align} u &= p - b \\ v &= p - c. \end{align}$
From the just derived formulas it follows that the points of tangency of the incircle and an excircle with a side of a triangle are symmetric with respect to the midpoint of the side. Such points are called isotomic. The cevians joining the two points to the opposite vertex are also said to be isotomic. Both triples of cevians meet in a point. For the incircle, the point is Gergonne's; for the points of excircle tangency, the point is Nagel's. We have just proved that, in any triangle, the Gergonne and Nagel points are isotomic conjugates of each other. (This fact has an interesting geometric illustration.)
In general, two points in a triangle are isotomic conjugate if the cevians through them are pairwise isotomic. The centroid is one point that is its own isotomic conjugate.
cot(A/2) = (p - a)/r
This obvious formula sometimes goes under the name of The Law of Cotangents:
$\displaystyle\frac{\cot (A/2)}{p-a}=\frac{\cot (B/2)}{p-b}=\frac{\cot (C/2)}{p-c}=\frac{1}{r}.$
S = r_a(p - a)
$\begin{align} 2S &= (b + u)r_{a} + (c + v)r_{a} - ar_{a} - (u + v)r_{a} \\ &= (b + c - a)r_{a} \\ &= (2p - 2a)r_{a}\\ &= 2(p - a)r_{a}. \end{align}$
Nagel Point of the Medial Triangle
Nagel Point
Homothety between In- and Excircles
Property of Points Where In- and Excircles Touch a Triangle
Feuerbach's Theorem: A Proof
Feuerbach's Theorem: What Is It?
Edon Kelmendi
Adding Sets to MSO
The theory of monadic second order logic of $(\omega,<)$ was proved to be decidable by Julius Büchi in 1962. In this logic we are allowed to quantify over positions (members of $\omega$), sets of positions, use the total order $<$, test for membership $x\in X$, and use Boolean connectives $\wedge, \neg$. Its theory is the collection of the true sentences.
Sets $X$ that are definable in this logic are simple: if you see such a set $X$ as an infinite word $\mathbf{w}$ over the binary alphabet $\{0,1\}$ ($\mathbf{w}_{n}=1$ if and only if $n\in X$) it has the form $$ uv^{\omega} $$ for some $u,v\in\{0,1\}^{*}$, i.e. it is ultimately periodic.
It is natural to ask whether we can add a more complicated set $S\subset \omega$ and have the MSO theory of $(\omega,<,S)$ remain decidable. In this extended logic, $S$ is part of the signature, so we are allowed to test $x\in S$. In a sense, this problem has been solved. See this paper by Rabinovich and the references therein. There are two characterisations, one by Semenov, and one by Rabinovich.
Here I will report on an interesting, and related result by Semenov that says:
Theorem 1. (Semenov 1983) There are two sets $S_1,S_2\subset\omega$ such that the MSO theory of
$(\omega,<,S_1)$, and $(\omega,<,S_2)$ is decidable, but
that of $(\omega,<,S_1,S_2)$ is undecidable.
So the objective is to find two sets, such that each one alone is simple enough to yield a decidable theory, but together they encode some complicated information giving an undecidable theory. First, let us see some examples.
What is an example of a simple set $S\subset \omega$ whose MSO theory is undecidable? Of course we can choose some $S$ which is itself undecidable, but this is not useful. Denote by $p_{n}$ the $n$th prime number, and define: $$ S=\{p_{n}^{k}\ :\ n\text{th Turing machine halts in fewer than $k$ steps}\}. $$ Clearly $S$ is decidable, but it yields an undecidable MSO theory, because the sentence "the $n$th Turing machine halts" is equivalent to "there are infinitely many numbers in $S$ that are divisible by $p_{n}$" — in MSO we can count modulo $p_{n}$.
What about sets that have decidable MSO theories? Again, we can take ultimately periodic sets, but as discussed before these are all MSO-definable, hence not useful. A non-trivial example is the set of factorials $\{n!\ :\ n\in\mathbb{N}\}$.
Sketch: why the factorials yield a decidable theory.
It follows from Büchi's theorem that decidability of the theory of a set $S$ is equivalent to the decidability of the following problem: a non-deterministic Büchi automaton is given as input and we have to decide whether it accepts the characteristic word of $S$.
Let $\mathbf w\in\{0,1\}^{\omega}$ be the characteristic word of the set $\{n!\ :\ n\in\mathbb{N}\}$; it has $1$s exactly at the factorial positions, and we write $d_{1},d_2,\ldots$ for the numbers of $0$s between consecutive $1$s. Since any Büchi automaton $\mathcal{A}$ has finite memory, it cannot count the number of $0$s precisely. This essentially means that there exists some $d\in\mathbb{N}$ — that depends only on the number of states of $\mathcal{A}$ — such that for any $e_1,e_2>d$, with $e_1\equiv e_2\mod d!$, we can replace a block of $e_1$ many zeros with a block with $e_2$ many zeros without the automaton $\mathcal{A}$ noticing. More precisely, construct the word $\mathbf w'$ from $\mathbf w$ by replacing all ${\color{red}d_i}>d$ with $d\le {\color{green}e_i}\le d!$ such that ${\color{red}d_i}\equiv {\color{green}e_i}\mod d!$. The automaton $\mathcal{A}$ either accepts both $\mathbf w$ and $\mathbf w'$ or rejects both of them. The final observation is that the word $\mathbf w'$ is ultimately periodic whence the decidability easily follows.
For the full proof of this and a few other examples consult this paper by Elgot and Rabin. Alas these positive examples do not seem to be beneficial for proving Theorem 1. For this, the correct notion appears to be that of an almost periodic word.
Almost periodic words
We say that a finite word $u$ occurs in an infinite one $\mathbf{w}$, if $\mathbf w_{n}\mathbf w_{n+1}\cdots \mathbf w_{n+k}=u$ for some $n,k$. Informally, an infinite word is almost periodic if the distance between the occurrences is bounded.
Definition 2. (almost periodic) $\mathbf w \in \Sigma^{\omega}$ is almost periodic if for every $u\in\Sigma^{*}$ there exists a bound $b\in\mathbb{N}$ such that one of the following holds:
$u$ does not occur in $\mathbf w_{b}\mathbf w_{b+1}\mathbf w_{b+2}\cdots$
$u$ occurs in every subword of $\mathbf w$ of length at least $b$
It is effectively almost periodic if there is a procedure computing such $b$.
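For concreteness, here is a small sketch (mine, not from the post) that tests the bound $b$ of Definition 2 for a single word $u$ on a finite prefix; a finite prefix can of course only refute almost periodicity, never certify it.

```python
def bound_ok(prefix: str, u: str, b: int) -> bool:
    """Check Definition 2 on a finite prefix: either u does not occur in
    prefix[b:], or u occurs in every window of prefix of length b."""
    if u not in prefix[b:]:
        return True
    return all(u in prefix[i:i + b] for i in range(len(prefix) - b + 1))

# The constant word "aaa..." is almost periodic: any u either never occurs
# or occurs in every window of length len(u).
print(bound_ok("a" * 200, "aa", 2))   # True
print(bound_ok("a" * 200, "b", 1))    # True (no occurrence at all)
```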
There are many examples of almost periodic words that show up in various fields, such as the Thue-Morse sequence, the word whose $n$th letter is $\mathrm{sgn}\left(\sin(n)\right)$, etc. The reason they are interesting relative to Theorem 1 is that their MSO theory is decidable.
Theorem 2. (Semenov 1983) For effectively almost periodic $S$, the theory of $(\omega,<,S)$ is decidable.
Define the binary alphabet $$ \mathbb{B}=\left\{\begin{pmatrix}0\\ 0\end{pmatrix},\begin{pmatrix}0\\ 1\end{pmatrix},\begin{pmatrix}1\\ 0\end{pmatrix}, \begin{pmatrix}1\\ 1\end{pmatrix}\right\}. $$ The objective, for the proof of Theorem 1, now is to build a word $\mathbf w\in\mathbb B^{\omega}$ that contains some complicated information about Turing machines, similar to the negative example; yet $\mathbf w$ has to be such that the projections $\pi_{1}(\mathbf w)$, $\pi_{2}(\mathbf w)$ are themselves almost periodic.1
If I just write down the construction, it will leave too many questions unanswered; therefore, in order to motivate the definition, I will first give a couple of failed attempts at reaching the goal.
Let $\mathbf z$ be an infinite word over the binary alphabet, for example the word with the Turing machines from the example above. One brutish way of encoding it is as follows.
Partition the set $\mathbb B^{4}$ into $A_{0}, A_{1}$ such that $\pi_{i}(A_0)=\pi_{i}(A_1)$, $i\in\{1,2\}$. One such partition is to see $\mathbb B^{4}$ as the set of permutations of $\mathbb B$, and set $A_{0}$ to be the even permutations, and $A_{1}$ to be the odd ones. Belonging to the set $A_0$ or $A_1$ contains one bit of information, which is lost upon projection. We can certainly encode $\mathbf z$ by picking a word from the set: $$ A_{\mathbf z_0}A_{\mathbf z_1}A_{\mathbf z_2}A_{\mathbf z_3}\cdots. $$ The problem is that there are infinite words in the set above, that have a projection that is not almost periodic (we could, for example, always pick a particular word from $A_0$). One way of proving Theorem 1 would be to show that not all words in the set above are of that nature. But this is not the way in which we will proceed, because I don't know how.
To stop the phenomenon that made it easy to pick words with projections that are not almost periodic, we need to mix the words from the set somehow.
Definition 3. (function $c$, complete concatenation) Let $X$ be a finite set of finite words. A complete concatenation of $X$ is a word $u$ that is made from concatenating elements of $X$ such that for all $x,y\in X$, $xy$ is a subword of $u$. Define $c$ to be a function that inputs a finite set of finite words and outputs one such complete concatenation, of minimal length.
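To make the definition concrete, here is a hedged sketch that produces a complete concatenation simply by concatenating every ordered pair $xy$; it satisfies the defining property but makes no attempt at the minimal length that $c$ is required to return.

```python
from itertools import product

def complete_concatenation(words):
    """Concatenate elements of `words` so that xy is a subword for every
    ordered pair (x, y); valid, but generally far from minimal."""
    return "".join(x + y for x, y in product(words, repeat=2))

def is_complete(u, words):
    """Check the defining property of a complete concatenation."""
    return all(x + y in u for x, y in product(words, repeat=2))

X = ["01", "10", "0110"]
u = complete_concatenation(X)
print(is_complete(u, X), len(u))   # True, but longer than necessary
```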
Consider the following definition: \begin{align} &W(0)=A_{0}\qquad W(1)=A_{1},\\
&\ \ \ \ \ \ W(ua)=c\left(W(u)\right)W(a). \end{align}
Let $z_1,z_2,\ldots $ be a sequence of finite words with lengths $|z_{n}|=n$, such that $$ k\text{th letter of }z_{n}\text{ is }1\qquad \Leftrightarrow\qquad k\text{th Turing machine halts in fewer than }n\text{ steps}. $$
Words from the set $$ W=W(z_{1})W(z_{2})W(z_{3})\cdots, $$ contain the information about machines in the sequence $z_{1},z_{2},\ldots$, and moreover by induction we can prove the following lemma.
Lemma 4. For all $u,u'$ of the same length and $i\in\{1,2\}$ we have $$ \pi_i\left(W(u)\right)=\pi_i\left(W(u')\right). $$ From where it follows that both projections of words in $W$ are almost periodic.
Lemma 5. For all $\mathbf w\in W$, $i\in\{1,2\}$, $\pi_{i}(\mathbf w)$ is almost periodic.
Proof. Use the definition of complete concatenation and the fact that the length of members of $W(0), W(1)$ is bounded.
Words from $W$ have projections with the desirable property, thanks to Lemma 5, and they contain the information for Turing machines; so where is the problem? The problem is that even though words from $W$ have the information, it cannot be accessed by MSO. In particular, there is no formula that is equivalent to "there exists some $n$, such that the $k$th letter of $z_{n}$ is $1$". This is because the length of the elements in $W(ua)$ is not a very simple (in terms of MSO) function of the length of elements in $W(u)$. This situation would be remedied if we could somehow construct sets such that $W(ua)$ is made exclusively from concatenating elements of $W(u)$.
This brings us to the third and final attempt.
The trick is as follows. Define an auxiliary set:
\begin{align} &\qquad\qquad X(0)=A_{0}\qquad X(1)=A_{1}\\
&X(uab)=c\left(\ X(u0)\cup X(u1)\ \right)\ X(ua)\ X(ub). \end{align} and $$ W(u)=X(u0)\cup X(u1). $$
The words in $X(u)$ encode the information that is in $u$ by ordering the last two pieces; for example, $X(u00)$ ends with $X(u0)\,X(u0)$, while $X(u01)$ ends with $X(u0)\,X(u1)$.
Indeed, we can prove by induction that for $u,u'$, such that $u\ne u'$, $X(u)$ and $X(u')$ are disjoint. Let us gather all the good properties of $W$ defined above.
$W(u)$ and $W(u')$ are disjoint for $u\ne u'$.
This follows from the discussion above, that the same statement is true for $X$, which can be proved by induction on the length of $u,u'$. If $u$ and $u'$ are of different lengths then the statement is clear.
$W(ua)$ is built by concatenating elements of $W(u)$.
$W(ua)$ is the union of $X(ua0)$ and $X(ua1)$, each of which is built by concatenating elements of the sets $X(u0)$ and $X(u1)$. The union of $X(u0)$ and $X(u1)$ is the set $W(u)$.
For $u,u'$ of the same length and $i\in\{1,2\}$, $\pi_{i}\left(W(u)\right)=\pi_{i}\left(W(u')\right)$.
The proof is the same as in Attempt 2, by induction on the length of $u,u'$.
Let $z_{1}, z_2, z_3,\ldots$ be the sequence of finite words from Attempt 2, that is, words whose length is $|z_{n}|=n$ and such that $$ k\text{th letter of }z_{n}\text{ is }1\qquad \Leftrightarrow\qquad k\text{th Turing machine halts in fewer than }n\text{ steps}. $$ Define the set of infinite words: $$ W=W(z_1)W(z_2)W(z_{3})\cdots . $$
Using properties 2 and 3, we can prove that the projections of words in $W$ are all almost periodic.
Lemma 6. For all $\mathbf w\in W$, and $i\in\{1,2\}$, $\pi_{i}(\mathbf w)$ is almost periodic.
The information that is held in $z_1,z_2,\ldots$ is being deleted by projections. But how do we pull it out of elements of $W$ using MSO? In particular we want a formula that is equivalent to "there exists $n$, the $j$th letter of $z_{n}$ is $1$", which in turn is equivalent to "$j$th Turing machine halts."
Define the following language (of finitely many words) $$ L = \bigcup_{u\in\{0,1\}^{j-1}} W(u1), $$ and let $b$ be the length of its words. From Property 1, $L$ is disjoint from $W(u0)$ for any $u$ of length $j-1$. Further, Property 2 implies that for all $v$, $|v|>j$, that have a $1$ in the $j$th position, words in $W(v)$ are made by concatenating elements of $L$.
Let $a$ be the length of words in $$ W(z_{1})W(z_{2})\cdots W(z_{j-1}). $$ We can now easily prove the following lemma.
Lemma 7. Let $\mathbf w\in W$. The following statements are equivalent:
There exists some $n$ such that $$ \mathbf w_{a+bn}\mathbf w_{a+bn+1}\cdots \mathbf w_{a+b(n+1)-1}\in L. $$
There exists some $n$, such that the $j$th letter of $z_{n}$ is $1$
The $j$th Turing machine halts.
Combining Lemmas 6 and 7 with Theorem 2 concludes the proof of Theorem 1.
Is there a simpler construction? Perhaps along the lines of Attempt 1?
[1] Seen this way, $\pi_{i}(\mathbf w)$ is the characteristic word of $S_i$, $i\in\{1,2\}$.
4.1 E: Exercises
[ "stage:draft", "article:topic", "calcplot:yes", "jupyter:python", "license:ccbyncsa", "showtoc:yes" ]
https://math.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fmath.libretexts.org%2FCourses%2FMount_Royal_University%2FMATH_2200%253A_Calculus_for_Scientists_II%2F4%253A_Sequences_and_Series%2F4.1%253A_Sequences%2F4.1_E%253A_Exercises
Mount Royal University
MATH 2200: Calculus for Scientists II
This page is a draft and is under active development.
Exercise \(\PageIndex{10}\)
Contributors and Attributions
Find the first six terms of each of the following sequences, starting with \(\displaystyle n=1\).
1) \(\displaystyle a_n=1+(−1)^n\) for \(\displaystyle n≥1\)
Solution: \(\displaystyle a_n=0\) if \(\displaystyle n\) is odd and \(\displaystyle a_n=2\) if \(\displaystyle n\) is even
2) \(\displaystyle a_n=n^2−1\) for \(\displaystyle n≥1\)
3) \(\displaystyle a_1=1\) and \(\displaystyle a_n=a_{n−1}+n\) for \(\displaystyle n≥2\)
Solution: \(\displaystyle {a_n}={1,3,6,10,15,21,…}\)
4) \(\displaystyle a_1=1, a_2=1\) and \(\displaystyle a_{n+2}=a_n+a_{n+1}\) for \(\displaystyle n≥1\)
1) Find an explicit formula for \(\displaystyle a_n\) where \(\displaystyle a_1=1\) and \(\displaystyle a_n=a_{n−1}+n\) for \(\displaystyle n≥2\).
\(\displaystyle a_n=\frac{n(n+1)}{2}\)
2) Find a formula \(\displaystyle a_n\) for the \(\displaystyle nth\) term of the arithmetic sequence whose first term is \(\displaystyle a_1=1\) such that \(\displaystyle a_{n+1}−a_n=17\) for \(\displaystyle n≥1\).
3) Find a formula \(\displaystyle a_n\) for the \(\displaystyle nth\) term of the arithmetic sequence whose first term is \(\displaystyle a_1=−3\) such that \(\displaystyle a_{n+1}−a_n=4\) for \(\displaystyle n≥1\).
\(\displaystyle a_n=4n−7\)
4) Find a formula \(\displaystyle a_n\) for the \(\displaystyle nth\) term of the geometric sequence whose first term is \(\displaystyle a_1=1\) such that \(\displaystyle \frac{a_{n+1}}{a_n}=10\) for \(\displaystyle n≥1\).
5) Find a formula \(\displaystyle a_n\) for the \(\displaystyle nth\) term of the geometric sequence whose first term is \(\displaystyle a_1=3\) such that \(\displaystyle \frac{a_{n+1}}{a_n}=1/10\) for \(\displaystyle n≥1\).
Solution: \(\displaystyle a_n=3\cdot 10^{1−n}=30\cdot 10^{−n}\)
6) Find an explicit formula for the \(\displaystyle nth\) term of the sequence whose first several terms are \(\displaystyle {0,3,8,15,24,35,48,63,80,99,…}.\) (Hint: First add one to each term.)
7) Find an explicit formula for the \(\displaystyle nth\) term of the sequence satisfying \(\displaystyle a_1=0\) and \(\displaystyle a_n=2a_{n−1}+1\) for \(\displaystyle n≥2\).
Solution: \(\displaystyle a_n=2^n−1\)
Find a formula for the general term \(\displaystyle a_n\) of each of the following sequences.
8) \(\displaystyle {1,0,−1,0,1,0,−1,0,…}\) (Hint: Find where \(\displaystyle \sin x\) takes these values)
9) \(\displaystyle {1,−1/3,1/5,−1/7,…}\)
Solution: \(\displaystyle a_n=\frac{(−1)^{n−1}}{2n−1}\)
Find a function \(\displaystyle f(n)\) that identifies the \(\displaystyle nth\) term \(\displaystyle a_n\) of the following recursively defined sequences, as \(\displaystyle a_n=f(n)\).
1) \(\displaystyle a_1=1\) and \(\displaystyle a_{n+1}=−a_n\) for \(\displaystyle n≥1\)
2) \(\displaystyle a_1=2\) and \(\displaystyle a_{n+1}=2a_n\) for \(\displaystyle n≥1\)
Solution: \(\displaystyle f(n)=2^n\)
3) \(\displaystyle a_1=1\) and \(\displaystyle a_{n+1}=(n+1)a_n\) for \(\displaystyle n≥1\)
4) \(\displaystyle a_1=2\) and \(\displaystyle a_{n+1}=(n+1)a_n/2\) for \(\displaystyle n≥1\)
Solution: \(\displaystyle a_1=1\) and \(\displaystyle a_{n+1}=a_n/2^n\) for \(\displaystyle n≥1\)
Plot the first \(\displaystyle N\) terms of each sequence. State whether the graphical evidence suggests that the sequence converges or diverges.
1) [T] \(\displaystyle a_1=1, a_2=2\), and for \(\displaystyle n≥2, a_n=\frac{1}{2}(a_{n−1}+a_{n−2})\); \(\displaystyle N=30\)
Solution: Terms oscillate above and below \(\displaystyle 5/3\) and appear to converge to \(\displaystyle 5/3\).
2) [T] \(\displaystyle a_1=1, a_2=2, a_3=3\) and for \(\displaystyle n≥4, a_n=\frac{1}{3}(a_{n−1}+a_{n−2}+a_{n−3}), N=30\)
3) [T] \(\displaystyle a_1=1, a_2=2\), and for \(\displaystyle n≥3, a_n=\sqrt{a_{n−1}a_{n−2}}; N=30\)
Solution: Terms oscillate above and below \(\displaystyle y≈1.57..\) and appear to converge to a limit.
4) [T] \(\displaystyle a_1=1, a_2=2, a_3=3\), and for \(\displaystyle n≥4, a_n=\sqrt{a_{n−1}a_{n−2}a_{n−3}}; N=30\)
Suppose that \(\displaystyle \lim_{n→∞}a_n=1, \lim_{n→∞}b_n=−1\), and \(\displaystyle 0<−b_n<a_n\) for all \(\displaystyle n\). Evaluate each of the following limits, or state that the limit does not exist, or state that there is not enough information to determine whether the limit exists.
1) \(\displaystyle \lim_{n→∞}3a_n−4b_n\)
Solution: \(\displaystyle 7\)
2) \(\displaystyle \lim_{n→∞}\frac{1}{2}b_n−\frac{1}{2}a_n\)
3) \(\displaystyle \lim_{n→∞}\frac{a_n+b_n}{a_n−b_n}\)
4) \(\displaystyle \lim_{n→∞}\frac{a_n−b_n}{a_n+b_n}\)
Find the limit of each of the following sequences, using L'Hôpital's rule when appropriate.
1) \(\displaystyle \frac{n^2}{2^n}\)
2) \(\displaystyle \frac{(n−1)^2}{(n+1)^2}\)
3) \(\displaystyle \frac{\sqrt{n}}{\sqrt{n+1}}\)
4) \(\displaystyle n^{1/n}\) (Hint: \(\displaystyle n^{1/n}=e^{\frac{1}{n}\ln n}\))
For each of the following sequences, whose \(\displaystyle nth\) terms are indicated, state whether the sequence is bounded and whether it is eventually monotone, increasing, or decreasing.
1) \(\displaystyle n/2^n, n≥2\)
Solution: bounded, decreasing for \(\displaystyle n≥1\)
2) \(\displaystyle \ln(1+\frac{1}{n})\)
3) \(\displaystyle \sin n\)
Solution: bounded, not monotone
4) \(\displaystyle \cos(n^2)\)
5) \(\displaystyle n^{1/n}, n≥3\)
Solution: bounded, decreasing
6) \(\displaystyle n^{−1/n}, n≥3\)
7) \(\displaystyle \tan(n)\)
Solution: not monotone, not bounded
Determine whether the sequence defined as follows has a limit. If it does, find the limit.
1) \(\displaystyle a_1=\sqrt{2}, a_2=\sqrt{2\sqrt{2}}. a_3=\sqrt{2\sqrt{2\sqrt{2}}}\) etc.
2) \(\displaystyle a_1=3, a_n=\sqrt{2a_{n−1}}, n=2,3,….\)
Solution: \(\displaystyle a_n\) is decreasing and bounded below by \(\displaystyle 2\). The limit \(\displaystyle a\) must satisfy \(\displaystyle a=\sqrt{2a}\) so \(\displaystyle a=2\), independent of the initial value.
Use the Squeeze Theorem to find the limit of each of the following sequences.
1) \(\displaystyle n\sin(1/n)\)
2) \(\displaystyle \frac{\cos(1/n)−1}{1/n}\)
3) \(\displaystyle a_n=\frac{n!}{n^n}\)
4) \(\displaystyle a_n=\sin n\sin(1/n)\)
Solution: \(\displaystyle 0\): \(\displaystyle |\sin x|≤|x|\) and \(\displaystyle |\sin x|≤1\), so \(\displaystyle −\frac{1}{n}≤a_n≤\frac{1}{n}\).
For the following sequences, plot the first \(\displaystyle 25\) terms of the sequence and state whether the graphical evidence suggests that the sequence converges or diverges.
1) [T] \(\displaystyle a_n=\sin n\)
2) [T] \(\displaystyle a_n=\cos n\)
Solution: Graph oscillates and suggests no limit.
Determine the limit of the sequence or show that the sequence diverges. If it converges, find its limit.
1) \(\displaystyle a_n=tan^{−1}(n^2)\)
2) \(\displaystyle a_n=(2n)^{1/n}−n^{1/n}\)
Solution: \(\displaystyle n^{1/n}→1\) and \(\displaystyle 2^{1/n}→1,\) so \(\displaystyle a_n→0\)
3) \(\displaystyle a_n=\frac{\ln(n^2)}{\ln(2n)}\)
4) \(\displaystyle a_n=(1−\frac{2}{n})^n\)
Solution: Since \(\displaystyle (1+1/n)^n→e\), one has \(\displaystyle (1−2/n)^n≈(1+1/k)^{−2k}→e^{−2}\) as \(\displaystyle k→∞.\)
5) \(\displaystyle a_n=\ln\left(\frac{n+2}{n^2−3}\right)\)
6) \(\displaystyle a_n=\frac{2^n+3^n}{4^n}\)
Solution: \(\displaystyle 2^n+3^n≤2⋅3^n\) and \(\displaystyle 3^n/4^n→0\) as \(\displaystyle n→∞\), so \(\displaystyle a_n→0\) as \(\displaystyle n→∞.\)
7) \(\displaystyle a_n=\frac{(1000)^n}{n!}\)
8) \(\displaystyle a_n=\frac{(n!)^2}{(2n)!}\)
Solution: \(\displaystyle \frac{a_{n+1}}{a_n}=n!/(n+1)(n+2)⋯(2n) =\frac{1⋅2⋅3⋯n}{(n+1)(n+2)⋯(2n)}<1/2^n\). In particular, \(\displaystyle a_{n+1}/a_n≤1/2\), so \(\displaystyle a_n→0\) as \(\displaystyle n→∞\)
Newton's method seeks to approximate a solution \(\displaystyle f(x)=0\) that starts with an initial approximation \(\displaystyle x_0\) and successively defines a sequence \(\displaystyle x_{n+1}=x_n−\frac{f(x_n)}{f′(x_n)}\). For the given choice of \(\displaystyle f\) and \(\displaystyle x_0\), write out the formula for \(\displaystyle x_{n+1}\). If the sequence appears to converge, give an exact formula for the solution \(\displaystyle x\), then identify the limit \(\displaystyle x\) accurate to four decimal places and the smallest \(\displaystyle n\) such that \(\displaystyle x_n\) agrees with \(\displaystyle x\) up to four decimal places.
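A small illustrative implementation of the iteration just described; the function and starting value are taken from the first exercise below, and the tolerance is an arbitrary choice.

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until successive iterates agree."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# Exercise 1: f(x) = x^2 - 2 with x0 = 1; the iterates converge to sqrt(2).
root, steps = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(round(root, 4), steps)   # 1.4142 after a handful of steps
```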
1) [T] \(\displaystyle f(x)=x^2−2, x_0=1\)
2) [T] \(\displaystyle f(x)=(x−1)^2−2, x_0=2\)
Solution: \(\displaystyle x_{n+1}=x_n−((x_n−1)^2−2)/2(x_n−1); x=1+\sqrt{2}, x≈2.4142, n=5\)
3) [T] \(\displaystyle f(x)=e^x−2, x_0=1\)
4) [T] \(\displaystyle f(x)=lnx−1, x_0=2\)
Solution: \(\displaystyle x_{n+1}=x_n−x_n(ln(x_n)−1); x=e, x≈2.7183, n=5\)
1) [T] Suppose you start with one liter of vinegar and repeatedly remove \(\displaystyle 0.1L\), replace with water, mix, and repeat.
a. Find a formula for the concentration after \(\displaystyle n\) steps.
b. After how many steps does the mixture contain less than \(\displaystyle 10%\) vinegar?
2) [T] A lake initially contains \(\displaystyle 2000\) fish. Suppose that in the absence of predators or other causes of removal, the fish population increases by \(\displaystyle 6%\) each month. However, factoring in all causes, \(\displaystyle 150\) fish are lost each month.
a. Explain why the fish population after \(\displaystyle n\) months is modeled by \(\displaystyle P_n=1.06P_{n−1}−150\) with \(\displaystyle P_0=2000\).
b. How many fish will be in the pond after one year?
Solution: a. Without losses, the population would obey \(\displaystyle P_n=1.06P_{n−1}\). The subtraction of \(\displaystyle 150\) accounts for fish losses. b. After \(\displaystyle 12\) months, we have \(\displaystyle P_{12}≈1494.\)
3) [T] A bank account earns \(\displaystyle 5%\) interest compounded monthly. Suppose that \(\displaystyle $1000\) is initially deposited into the account, but that \(\displaystyle $10\) is withdrawn each month.
a. Show that the amount in the account after \(\displaystyle n\) months is \(\displaystyle A_n=(1+.05/12)A_{n−1}−10; A_0=1000.\)
b. How much money will be in the account after \(\displaystyle 1\) year?
c. Is the amount increasing or decreasing?
d. Suppose that instead of \(\displaystyle $10\), a fixed amount \(\displaystyle d\) dollars is withdrawn each month. Find a value of \(\displaystyle d\) such that the amount in the account after each month remains \(\displaystyle $1000\).
e. What happens if \(\displaystyle d\) is greater than this amount?
4) [T] A student takes out a college loan of \(\displaystyle $10,000\) at an annual percentage rate of \(\displaystyle 6%,\) compounded monthly.
a. If the student makes payments of \(\displaystyle $100\) per month, how much does the student owe after \(\displaystyle 12\) months?
b. After how many months will the loan be paid off?
Solution: a. The student owes \(\displaystyle $9383\) after \(\displaystyle 12\) months. b. The loan will be paid in full after \(\displaystyle 139\) months or eleven and a half years.
5) [T] Consider a series combining geometric growth and arithmetic decrease. Let \(\displaystyle a_1=1\). Fix \(\displaystyle a>1\) and \(\displaystyle 0<b<a\). Set \(\displaystyle a_{n+1}=a\cdot a_n−b.\) Find a formula for \(\displaystyle a_{n+1}\) in terms of \(\displaystyle a_n, a\), and \(\displaystyle b\) and a relationship between \(\displaystyle a\) and \(\displaystyle b\) such that \(\displaystyle a_n\) converges.
6) [T] The binary representation \(\displaystyle x=0.b_1b_2b_3...\) of a number \(\displaystyle x\) between \(\displaystyle 0\) and \(\displaystyle 1\) can be defined as follows. Let \(\displaystyle b_1=0\) if \(\displaystyle x<1/2\) and \(\displaystyle b_1=1\) if \(\displaystyle 1/2≤x<1.\) Let \(\displaystyle x_1=2x−b_1\). Let \(\displaystyle b_2=0\) if \(\displaystyle x_1<1/2\) and \(\displaystyle b_2=1\) if \(\displaystyle 1/2≤x_1<1\). Let \(\displaystyle x_2=2x_1−b_2\) and in general, \(\displaystyle x_n=2x_{n−1}−b_n\) and \(\displaystyle b_{n+1}=0\) if \(\displaystyle x_n<1/2\) and \(\displaystyle b_{n+1}=1\) if \(\displaystyle 1/2≤x_n<1\). Find the binary expansion of \(\displaystyle 1/3\).
Solution: \(\displaystyle b_1=0, x_1=2/3, b_2=1, x_2=4/3−1=1/3,\) so the pattern repeats, and \(\displaystyle 1/3=0.010101….\)
7) [T] To find an approximation for \(\displaystyle π\), set \(\displaystyle a_0=\sqrt{2+1}, a_1=\sqrt{2+a_0}\), and, in general, \(\displaystyle a_{n+1}=\sqrt{2+a_n}\). Finally, set \(\displaystyle p_n=3\cdot 2^n\sqrt{2−a_n}\). Find the first ten terms of \(\displaystyle p_n\) and compare the values to \(\displaystyle π\).
For the following two exercises, assume that you have access to a computer program or Internet source that can generate a list of zeros and ones of any desired length. Pseudorandom number generators (PRNGs) play an important role in simulating random noise in physical systems by creating sequences of zeros and ones that appear like the result of flipping a coin repeatedly. One of the simplest types of PRNGs recursively defines a random-looking sequence of \(\displaystyle N\) integers \(\displaystyle a_1,a_2,…,a_N\) by fixing two special integers \(\displaystyle K\) and \(\displaystyle M\) and letting \(\displaystyle a_{n+1}\) be the remainder after dividing \(\displaystyle K\cdot a_n\) by \(\displaystyle M\), then creates a bit sequence of zeros and ones whose \(\displaystyle nth\) term \(\displaystyle b_n\) is equal to one if \(\displaystyle a_n\) is odd and equal to zero if \(\displaystyle a_n\) is even. If the bits \(\displaystyle b_n\) are pseudorandom, then the behavior of their average \(\displaystyle (b_1+b_2+⋯+b_N)/N\) should be similar to the behavior of averages of truly randomly generated bits.
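A minimal sketch of this kind of generator, using for illustration the classic parameters quoted in the next exercise; the helper name is mine, not from the text.

```python
def prng_bits(seed, n, K=16807, M=2147483647):
    """a_{i+1} = (K * a_i) mod M; the i-th bit is 1 exactly when a_i is odd."""
    a, bits = seed, []
    for _ in range(n):
        a = (K * a) % M
        bits.append(a % 2)
    return bits

bits = prng_bits(seed=1, n=1000)
print(sum(bits) / len(bits))   # should hover near 0.5 if the bits look random
```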
1) [T] Starting with \(\displaystyle K=16,807\) and \(\displaystyle M=2,147,483,647\), using ten different starting values of \(\displaystyle a_1\), compute sequences of bits \(\displaystyle b_n\) up to \(\displaystyle n=1000,\) and compare their averages to ten such sequences generated by a random bit generator.
Solution: For the starting values \(\displaystyle a_1=1, a_2=2,…, a_1=10,\) the corresponding bit averages calculated by the method indicated are \(\displaystyle 0.5220, 0.5000, 0.4960, 0.4870, 0.4860, 0.4680, 0.5130, 0.5210, 0.5040,\) and \(\displaystyle 0.4840\). Here is an example of ten corresponding averages of strings of \(\displaystyle 1000\) bits generated by a random number generator: \(\displaystyle 0.4880, 0.4870, 0.5150, 0.5490, 0.5130, 0.5180, 0.4860, 0.5030, 0.5050, 0.4980.\) There is no real pattern in either type of average. The random-number-generated averages range between \(\displaystyle 0.4860\) and \(\displaystyle 0.5490\), a range of \(\displaystyle 0.0630\), whereas the calculated PRNG bit averages range between \(\displaystyle 0.4680\) and \(\displaystyle 0.5220\), a range of \(\displaystyle 0.0540.\)
2) [T] Find the first \(\displaystyle 1000\) digits of \(\displaystyle π\) using either a computer program or Internet resource. Create a bit sequence \(\displaystyle b_n\) by letting \(\displaystyle b_n=1\) if the \(\displaystyle nth\) digit of \(\displaystyle π\) is odd and \(\displaystyle b_n=0\) if the \(\displaystyle nth\) digit of \(\displaystyle π\) is even. Compute the average value of \(\displaystyle b_n\) and the average value of \(\displaystyle d_n=|b_{n+1}−b_n|, n=1,...,999.\) Does the sequence \(\displaystyle b_n\) appear random? Do the differences between successive elements of \(\displaystyle b_n\) appear random?
Gilbert Strang (MIT) and Edwin "Jed" Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
4.1 E: Exercises is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by LibreTexts.
Ladder and Cube
A 1 metre cube has one face on the ground and one face against a wall. A 4 metre ladder leans against the wall and just touches the cube. How high is the top of the ladder above the ground?
Doesn't Add Up
In this problem we are faced with an apparently easy area problem, but it has gone horribly wrong! What happened?
From All Corners
Straight lines are drawn from each corner of a square to the mid points of the opposite sides. Express the area of the octagon that is formed at the centre as a fraction of the area of the square.
Mediant Madness
Kyle got 75% on section A of his maths exam, but he only got 35% on section B.
He says this means he should score 55% overall, but his teacher said he only scored 50% overall! How can this be?
Once you have had a chance to think about it, click below to see what he scored on each section.
In section A, there were 12 questions and Kyle got 9 correct.
In section B, there were 20 questions and Kyle got 7 correct.
Overall, there were 32 questions and Kyle got 16 correct.
As section A had fewer questions than section B, the score in section B is given more weighting.
The difference in overall scores is because Kyle worked out the average of his two scores, but his teacher worked out the mediant.
Given two fractions $\frac{a}{c}$ and $\frac{b}{d}$, the mediant is defined as $\frac{a+b}{c+d}$.
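For concreteness, here is a tiny sketch comparing the mediant with the ordinary average, using Kyle's marks above.

```python
from fractions import Fraction

def mediant(a, c, b, d):
    """Mediant of a/c and b/d: (a + b)/(c + d)."""
    return Fraction(a + b, c + d)

section_a = Fraction(9, 12)   # Kyle's section A score
section_b = Fraction(7, 20)   # Kyle's section B score

print(float((section_a + section_b) / 2))   # 0.55 -- Kyle's average
print(float(mediant(9, 12, 7, 20)))         # 0.5  -- the teacher's mediant
```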
Do you agree that Kyle only deserved 50%?
Here is an interactive diagram for you to explore the properties of the mediant of two fractions.
Can you find some values of $a, b, c$ and $d$ so that Kyle and his teacher both get the same value for the overall score (that is, the average and the mediant are the same)?
The pass mark for an exam is 50%. If I scored 25% on the first $n$ questions, under what circumstances can I still pass the exam?
Is it true that the mediant always lies between the two fractions? How do you know?
Small nucleolar RNA U91 is a new internal control for accurate microRNAs quantification in pancreatic cancer
Alexey Popov, Arpad Szabo & Václav Mandys
RT-qPCR quantification of miRNA expression may play an essential role in pancreatic ductal adenocarcinoma (PDAC) diagnostics. RT-qPCR-based experiments require endogenous controls for result normalization and reliability. However, expression instability of reference genes in tumors may introduce bias when determining miRNA levels.
We investigated expression of 6 miRNAs, isolated from FFPE samples of pancreatic adenocarcinomas. Four internal controls were utilized for RT-qPCR result normalization: artificial miR-39 from C. elegans, U6 snRNA, miR-16 and snoRNA U91.
We found that miR-21, miR-155 or miR-217 expression values in tumors may differ up to several times, depending on the selected internal control. Moreover, different internal controls can produce controversial results for miR-96, miR-148a or miR-196a quantification. Also, expression of our endogenous controls varied significantly in tumors: U6 demonstrated variation from −1.03 to 8.12-fold, miR-16 from −2.94 up to 7.38-fold and U91 from −3.05 to 4.36-fold, respectively. On the other hand, the most stable gene, determined by the NormFinder algorithm, was U91. Each miRNA normalized relative to the spike or U91 demonstrated similar expression values. Thus, statistically significant and insignificant differences between tumors and normal tissues were the same for the spike and U91. Also, the differences between the spike and U91 were statistically insignificant for all of the miRs except miR-217. Among the three endogenous controls, U91 had the lowest average expression values and standard deviation in cancer tissues.
We recommend U91 as a new normalizer for miRNA quantification in PDACs.
Pancreatic ductal adenocarcinoma (PDAC) is the most common and the most aggressive primary pancreatic neoplasm. The majority of patients are diagnosed by the time the tumor had already invaded peripancreatic structures or has metastasized [1]. Therefore, there is a need for biomarkers enabling early detection of asymptomatic PDACs. miRNAs are stable in tissues and blood plasma [2]; consequently they are ideal molecules to be utilized as biomarkers. miRNAs are involved in oncogenesis, apoptosis and cell growth; thereby functioning as tumor suppressors or oncogenes [3–6]. A large number of miRNAs are proven to be overexpressed in pancreatic cancer [7–11]. On the other hand, the expression of miRNA-coding genes, which act as tumor suppressors, could be inhibited in cancer cells [12–16]. Alterations in the miRNAs expression profile of cancer in comparison with normal tissues could be used in pancreatic cancer diagnostics. The high sensitivity of reverse transcription quantitative PCR (RT-qPCR) has made it a popular method in the measurement of tumor miRNA expression. RT-qPCR-based experiments require endogenous controls for result normalization, reliability and reproducibility. The endogenous control helps to correct differences between sample quality and variations during RNA extraction or reverse transcription procedures. Housekeeping genes, ribosomal, small nuclear or nucleolar RNAs can play the role of such internal controls. However, according to experimental data, expression levels of these genes may differ in neoplastic and normal tissues [17–19]. These variations may introduce bias to experiment results.
In this study we compared the expression of selected miRNAs in samples from pancreatic cancer and normal pancreatic parenchyma and evaluated the influence of different internal controls on the expression data alterations.
Patients and tissue specimens
FFPE blocks of pancreatic ductal adenocarcinomas were retrieved from the archive of the Department of Pathology of the 3rd Faculty of Medicine of the Charles University and University Hospital Kralovske Vinohrady in Prague. The samples were collected from 24 patients, who had undergone pancreatoduodenectomy, distal pancreatectomy or total pancreatectomy between 2007 and 2012. Participants signed a written informed consent before the study. The study was performed according to the Declaration of Helsinki and approved by the Ethics Committee of the Third Faculty of Medicine (Charles University in Prague, Czech Republic). The resolution 1006/2012 was signed by Dr. Marek Vacha, Ph.D, Head of the Ethics Committee.
In the selected FFPE blocks the tumor occupied the majority of the slide. As negative control, FFPE blocks containing normal pancreatic tissue of the respective patients were selected.
Clinicopathological features
The age of patients with resected pancreatic adenocarcinoma ranged from 36 to 83 years, with a median of 65.5 years. In total, 11 patients were women and 13 patients were men. Genetic syndromes were described in none of the patients. Grossly, 18 tumors were located in the head of the pancreas, 1 in the body of the pancreas and 5 in the tail of the pancreas.
The tumors showed in all of the selected cases the features of conventional ductal pancreatic adenocarcinoma. According to the guidelines of the WHO Classification of Tumors of the Gastrointestinal Tract, 3rd and 4th edition, 1 tumor was well differentiated, 14 tumors were moderately differentiated and 9 tumors were poorly differentiated. In one patient, a synchronous mucinous cystic neoplasm (MCN) was identified in the cauda of the pancreas. In another patient the tumor originated from an MCN. In 3 patients the resected tumor was described as pT1, in 5 patients pT2, in 15 patients pT3 and in one patient pT4. Additionally, lymph node metastases were confirmed in the resected specimens of 18 patients.
RNA isolation and reverse transcription
One to three 6 μm thick unstained paraffin embedded tissue sections were procured for RNA extraction, using the miRNeasy FFPE kit (Qiagen), following the manufacturer's instructions. Two microliters of isolated RNA were used for RNA quantity and purity analysis. Optical density at 260 and 280 nm was measured with a multi-detection microplate reader Synergy HT (BioTek), including Take3 micro-volume plate. RNA integrity was assessed with denaturing agarose gel electrophoresis and GeneTools 3.08 software (SynGene).
A mix of 10 stem-loop primers was used for miRNA reverse transcription. Stem-loop primers were selected for the analysis, because their structure reduces annealing of the primer to pre- and pri-miRNAs, therefore increasing the specificity of the assay. Primers were designed with miRNA primer designer software, kindly provided by Dr. Fuliang Xie, East Carolina University. The stem-loop primer sequences for the internal controls, including the alien spike (miR-39 from C. elegans), and the examined pancreatic miRNAs are listed in Tables 1 and 2. The spike RNA was added to the reaction mix directly before the reverse transcription. Alien spike can't be used as a normalizer for differences between samples during the RNA isolation, because tissue sections may contain different amounts of tissue. Therefore, the addition of spike before RNA isolation may introduce bias, because a ratio between amount of the RNA and the alien spike concentration may vary from sample to sample.
Table 1 Stem-loop primers for the internal controls
Table 2 Stem-loop primers for the miRNAs
Reverse transcription was carried out, using RevertAid Reverse Transcriptase (Thermo Scientific), in a 50 μl reaction mixture, containing the following reagents: 1 μg of DNA-free RNA, reaction buffer [50 mM Tris–HCl (pH 8.3 at 25 °C), 50 mM KCl, 4 mM MgCl2 and 50 mM DTT]; 1 mM of dATP, dTTP, dCTP, dGTP; 20 IU rRNasin ribonuclease inhibitor; 100 IU of moloney murine leukemia virus reverse transcriptase (M-MuLV RT) and the primer mix, including 20 pmol of each stem-loop primer. Artificial spike RNA (miR-39 from C. elegans, 5 × 108 copies) was also added to the reaction as internal control. After initial denaturation (5 min at 70 °C, then cooling samples on ice), the reactions were incubated at 25 °C (10 min), and then at 42 °C for 1 h. To stop the reaction, the mixture was heated at 70 °C for 10 min.
Real-time qPCR
cDNA samples were amplified in duplicates, using the Applied Biosystems 7500 Fast real-time PCR system and Hot FirePol EvaGreen qPCR Mix Plus (Solis BioDyne). The reaction mix included 10 pmol of each primer (miRNA specific and the universal (Table 3)) and 2 μl of cDNA.
Table 3 Real-time qPCR primers
Amplification of the cDNAs was performed at the following thermal conditions: denaturation at 94 °C for 15 min, followed by 40 cycles consisting of denaturation at 94 °C for 15 s, annealing at 48 °C for 60 s and DNA synthesis at 72 °C for 40 s. Reaction product specificity was controlled with their respective melting curves.
The expression of miRNAs in neoplastic and normal tissues was compared utilizing a paired two-tailed Student's t test as well as a one-way ANOVA analysis. P-values below 0.05 were regarded as statistically significant. RT-qPCR data (threshold cycles) were linearized, and the NormFinder algorithm was used to calculate the most stable gene among the internal controls.
Evaluation of miRNA expression levels in PDAC samples
We investigated the expression of 6 miRNAs isolated from FFPE samples of pancreatic adenocarcinomas from 24 patients. The following microRNAs were selected: miR-21, which promotes cell proliferation and may accelerate tumorigenesis [8, 9, 20]; miR-155, which interacts with TP53 INP1 and transforming growth factor β (TGF-β) [11, 21, 22]; miR-96 and miR-217, which may act as tumor suppressors, inhibiting the KRAS-signaling pathway [13, 14]; and miR-148a and miR-196a, which are frequently included in experimental panels for pancreatic carcinoma diagnosis [23–29].
Four internal controls were utilized for qRT-PCR result normalization: an alien spike (artificial miR-39 from C. elegans) and three endogenous controls – U6 snRNA, miR-16 and snoRNA U91. miRNA expression values were normalized relative to each of these controls, and significant variations for the same miRNAs were found (Fig. 1, Table 4). In comparison with normal pancreatic tissue, miR-21 was significantly overexpressed, up to 14.56-fold (p < 0.01) in the case of the alien spike. However, for other internal controls, fold change values were shifted to 5.44 for U6 (p < 0.01), 7.03 for miR-16 (p < 0.01) and 17.71 for U91 (p < 0.01), respectively (Table 4, Fig. 1). The miR-155 also demonstrated increased expression levels with great variations between internal controls: 15.1-fold for the spike (p < 0.01); 5.05-fold for U6 (p < 0.01); 6.39-fold for miR-16 (p < 0.01) and 13.36-fold for U91 (p < 0.01). miRNA miR-96 in pancreatic carcinoma did not show significant differences in comparison with normal tissues, when normalizing to the alien spike (−1.04-fold, p > 0.05), as well as to U91 (−1.17-fold, p > 0.05). But, this miRNA was significantly down-regulated, when the expression was measured relative to U6 (−3.22-fold, p < 0.01) or miR-16 (−2.32-fold, p < 0.01). Also, no significant differences were found for miR-148a, normalized to spike (1.25 fold, p > 0.05) and U91 (1.06, p > 0.05). But, this miRNA was significantly inhibited for U6 (−1.33 fold, p < 0.01) and miR-16 (−2.04 fold, p < 0.01). Expression of miR-196a was slightly up-regulated relatively the alien spike (1.09-fold) and U91 (1.13-fold), without the results being statistically significant (p > 0.05). On the other hand, miR-196a was significantly down-regulated for U6 (−2.22-fold, p < 0.01), as well as not statistically significant for the miR-16 (−1.35, p > 0.05). The expression of miR-217 was significantly lower in all PDACs than in normal pancreatic tissues for all the examined internal controls (p < 0.01) (Table 4, Fig. 1).
Average expression of six miRNAs in pancreatic cancers. Four different internal controls and one combination of two of them (U6 + U91) were used for the results normalization. Data are presented as mean ± standard deviation (SD). Statistically significant differences (Student's t-test, p < 0.05) between tumors and normal tissues are labeled with asterisk. MicroRNA expression values depend on the selected internal control and may vary up to several times
Table 4 Average miRNAs fold change values in pancreatic cancers in comparison with normal tissues
The NormFinder algorithm was used to calculate the most stable pairing of internal controls. According to our results, the best combination for the evaluation of miRNA expression was U6 and U91 (stability value = 0.036; Table 5). Normalized to this most stable pair, miR-21 and miR-155 demonstrated significant upregulation (9.67-fold and 8.79-fold, p < 0.01). Activity of miR-96, miR-196a and miR-217 was significantly inhibited (−1.85-fold, −1.34-fold and −7.19-fold, p < 0.01). The miR-148a expression was also down-regulated, but the decrease was not statistically significant (−1.27-fold, p > 0.05).
Table 5 Stability evaluation of all internal controls using NormFinder algorithm
Determination of the best normalizers for the miRNAs expression measuring
To find the best normalizer, we compared miRNA expression levels normalized relative to the artificial spike and to the other endogenous controls, including the combination of U6 + U91, with one-way ANOVA analysis for each individual miRNA. The null hypothesis (H0) was that the average fold change values calculated for each individual miRNA are the same for all the internal controls. However, the differences were significant in the case of miR-21, miR-96, miR-148a, miR-155 and miR-196a (p < 0.01). For miR-217, the difference was not statistically significant (p > 0.05). Consequently, we compared miRNA expression normalized to the spike with its normalization to each of the other examined endogenous controls, using a paired two-tailed Student's t-test. Differences of miRNA expression normalized to U6 or to the combination of U6 and U91, in comparison with the alien spike, were statistically significant for all miRNAs (p < 0.01; Table 6). In the case of miR-16, the difference was not significant for miR-217 only (p > 0.05; Table 6). On the other hand, the difference between the spike and U91 was statistically insignificant for all miRNAs (p > 0.05; Table 6), except for miR-217 (p < 0.05; Table 6). Thus, one endogenous control was found which demonstrated a behavior very similar to the alien spike.
Table 6 The difference between normalizers
We investigated the causes of miRNA expression variations and their dependence on certain normalizers, and thus attempted to find the most suitable normalizer. The 2^−ΔΔCT method was used for miRNA expression quantification, where CT is the cycle threshold and ΔΔCT = ((CT_miRNA)_tumor − (CT_control)_tumor) − ((CT_miRNA)_normal − (CT_control)_normal). For accurate miRNA quantification, in theory, CT values for the internal control gene should be very close for tumors and normal tissues, ideally (CT_control)_tumor = (CT_control)_normal. However, this CT_control value may be shifted by up to several cycles in tumors (for example, up to ± n cycles), if the expression of the endogenous control differs in tumor and normal tissue. This difference may introduce a bias to the miRNA fold change calculations:
$$ \Delta\Delta\mathrm{CT} = \left({\left(\mathrm{CT}_{\mathrm{miRNA}}\right)}_{\mathrm{tumor}} - \left({\left(\mathrm{CT}_{\mathrm{control}}\right)}_{\mathrm{normal}} \pm n\right)\right) - \left({\left(\mathrm{CT}_{\mathrm{miRNA}}\right)}_{\mathrm{normal}} - {\left(\mathrm{CT}_{\mathrm{control}}\right)}_{\mathrm{normal}}\right). $$
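As a minimal illustration of how such a shift propagates into the fold-change estimate (the CT values below are invented, not measured data):

```python
# Hypothetical CT values illustrating the 2^-ddCT calculation and the bias
# introduced when the endogenous control itself shifts by n cycles in tumors.
ct_mirna_tumor, ct_mirna_normal = 24.0, 27.0
ct_ctrl_normal = 20.0

def fold_change(n_shift):
    """2^-ddCT with the tumor control CT shifted by n_shift cycles."""
    ct_ctrl_tumor = ct_ctrl_normal + n_shift
    ddct = (ct_mirna_tumor - ct_ctrl_tumor) - (ct_mirna_normal - ct_ctrl_normal)
    return 2 ** (-ddct)

print(fold_change(0))   # 8.0  -- control CT identical in tumor and normal tissue
print(fold_change(2))   # 32.0 -- a 2-cycle shift in the control alone quadruples the estimate
```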
While analyzing the amplification curves of the different internal controls, in almost all tumor samples a cycle threshold (CT) shift of n = 5 or even 6 cycles upwards in comparison with the normal tissue was apparent (Table 7). For example, CT values of the spike were very similar for PDAC and the normal tissues; they differed by less than n = 0.8 cycles (Table 7). Nevertheless, for other normalizers these values varied from n = −6.20 up to n = 5.8 cycles (Table 7). We measured expression levels of our endogenous controls, using the alien spike for normalization. As expected, U6 expression in tumors varied from −1.03 to 8.12-fold, miR-16 showed variations from −2.94 up to 7.38-fold in different tumors, and U91 from −3.05 to 4.36-fold, respectively. The difference in expression was statistically significant for all endogenous controls (p < 0.01; Table 8). Also, U6 was overexpressed in 22 of 24 tumors, miR-16 in 18 tumors and U91 in 14, correspondingly. miR-16 was downregulated in 6 tumors and U91 in 5 tumor samples (Table 9). Thus, all selected endogenous controls demonstrated expression instability in tumor samples.
Table 7 Cycle threshold values (CT) for endogenous controls are different in tumors and normal tissues
Table 8 Expression values of candidate endogenous control genes are highly variable in PDACs in comparison with normal tissues
Table 9 Expression of the endogenous controls is unstable in all tumor samples
To identify the most stable internal control, the NormFinder algorithm was used. RT-qPCR expression data for all internal controls were linearized and compared in two groups: tumors and normal pancreatic tissues (Table 5). The most stable gene was U91 (stability value = 0.056), but stability values for all internal controls were close (0.085; 0.056 and 0.078 for the spike, U6 and miR-16, respectively; Table 5). U6 had the same stability value as U91 (0.056), but it demonstrated higher levels of intergroup variation (Table 5). Surprisingly, the most unstable control was the artificial spike (0.085; Table 5). The NormFinder can calculate variations between two groups, including all normal or cancer samples, but it is unable to evaluate the differences between normal and cancer tissues among individual patients. This may be the reason for the "instability" of the alien spike. The most stable pair of internal controls was a combinations of two genes, U6 and U91 (stability value = 0.036; Table 5).
MicroRNA expression values depend on a selected internal control
Pancreatic ductal adenocarcinoma is one of the most frequently occurring solid cancers and it carries an extremely poor prognosis. As such, an extensive search for biomarkers of early disease is ongoing; miRNAs may have the ideal characteristics to fulfil this role. Due to their stability and resistance against RNase degradation, they are viable in a wide range of samples. Viable miRNAs for PDAC diagnosis may be isolated from frozen and paraffin embedded tissue samples, stool, blood plasma, or pancreatic juice [24, 28, 30, 31].
For our analysis we have selected miRNAs frequently described to be dysregulated in various types of PDAC samples. Studies mapping microRNA expression using microarrays have proven considerable heterogeneity in pancreatic carcinomas. Zhang et al. have demonstrated relative expression values of miRNAs spanning 6 logs (from 0.01–10,000) among individual cases [27]. Among tumor samples we determined up to 45-fold variability in both miR-21 and miR-155 levels. During our brief review of the literature we have noticed that the mean values of miRNA levels in tumors varied among studies. There are many factors, including differences in reagents/materials for miRNA quantification protocols and data-processing algorithms, which can contribute to the variation. One of these factors is the variety of controls used for normalization. Thus, the differences in the mean expression of miRNAs may be at least partially explained by the choice of controls for normalization.
For example, when normalizing with snoRNA U6, Bloomston et al. measured a median 2.2-fold increase in tumor miR-21 levels [24]. Zhang et al., using the same internal control, found that expression of miR-21 was up-regulated up to 6888-fold in several tumors [27]. Hong et al. reported about up to 550-fold increase in PDACs, normalizing relative to U6 [31]. When using both U6 and 5S as endogenous controls, du Rieu et al. detected a 20.1-fold tumor miR-21 up-regulation [8]. In our study, when normalizing with U6, a mean 5.5-fold increase in miR-21 in tumors was present. However, when normalizing to miR-16 a 7.03-fold increase was present (p < 0.01, Table 4). Wang et al. detected in plasma samples with miR-16 only a mean 2.42-fold up-regulation [30]. On the other hand, when normalizing to the artificial spike or with U91 we detected a mean 14.56-fold and 17.71-fold increase (p < 0.01, Table 4).
The data about miR-96 expression in PDAC are controversial. Several groups of authors reported about miR-96 expression fold increase in experiments with microarray [24, 32]. For example, Bloomston et al. measured an average 1.77-fold increase, when determining miR-96 levels in PDACs [24]. Kent et al. also demonstrated 2.7-fold upregulation of miR-96 in pancreatic cancer cell lines [32]. On the other hand, miR-96 has been shown to be frequently down-regulated in experiments, utilizing Northern blot or RT-qPCR [13, 15, 25, 31, 33]. Szafranszka et al. determined in their study a −4.35-fold decrease in tumor miR-96 expression, when normalizing to miR-24 [25]. Hong et al. as well as Feng et al. showed that miR-96, normalized to U6, was downregulated in PDAC samples up to −8-fold [31, 33]. With U6, miR-16 and combination of U6 + U91 respectively, we demonstrated a statistically significant mean −3.22-fold, −2.32-fold and −1.85-fold decrease in tumor tissue (p < 0.01, Table 4). However, expression analysis with the artificial spike and U91 alone yielded a statistically insignificant alteration in miR-96 expression in tumors in comparison with normal tissues (p > 0.05, Table 4).
miRNA miR-148a expression is described to be down-regulated in PDAC due to promoter hypermethylation, which represents an early event in pancreatic carcinogenesis [15]. Bloomston et al. as well as Jamieson et al. measured an average −5.5-fold and −7.14-fold decrease respectively, when determining miR-148 expression with a microarray [24, 29]. In experiments, utilizing RT-qPCR, Szafranszka et al. demonstrated −6.15-fold decrease of miR-148a levels, with miR-24 as normalizer [25]. However, Ma et al. and Zhang et al., normalizing to U6, determined a respective −2.86-fold and −2.5-fold downregulation in PDAC samples [34, 35]. Hanoun et al. also reported about miR-148a down-regulation, using U6 like the endogenous control [15]. In our study, tumor miR-148a levels were −2.04-fold and −1.33-fold decreased with U6 and miR-16 as a normalizers, respectively (p < 0.01 and p < 0.05, Table 4). On the other hand, analysis of miR-148a expression, normalized to the alien spike, U91 and a combination of U6 and U91 did not determine statically significant differences in expression between cancerous and non-cancerous tissues (p > 0.05, Table 4).
The miR-155 is an onco-miR, overexpressed in early pancreatic adenocarcinoma precursors and invasive PDAC [11]. The miR-155 expression in PDACs and pancreatic cancer cell lines, measured by microarray, ranged from 1.8 to 2.9-fold in different studies [24, 28, 29, 36, 37]. On the other hand, Habbe et al. found a mean 11.6-fold increase in intraductal papillary mucinous neoplasms, which was measured by RT-qPCR relative to U6 [11]. Zhang et al. also used U6 like a normalizer in their study. They reported about up to 52-fold increase in individual cases [27]. In our pancreatic carcinomas a mean 5.05-fold increase was present, when normalizing to U6 (p < 0.01, Table 4). However, Ma et al. measured only a 2.11-fold increase with the same endogenous control [34]. Wang et al. determined a 3.74-fold increase in serum miR-155 levels in cancer, when normalizing with miR-16 [30]. Our PDAC samples showed, on the other hand, a mean 6.39-fold increase with miR-16 as internal control (p < 0.01, Table 4). However, the expression levels were several times higher, measured relative to the alien spike or U91 - 15.1 and 13.36-fold respectively (p < 0.01, Table 4).
MiR-196a is an onco-miR reported to be frequently dysregulated in PDAC [27, 30]. Zhang et al., normalizing to U6, measured tumor miR-196a expression varying from 0.35- to 1557-fold [27]. In our tumors we determined a mean −2.2-fold decrease when normalizing to U6 (p < 0.01, Table 7). Wang et al. demonstrated a 16.05-fold increase in plasma samples with miR-16 as the endogenous control [30]. On the other hand, with miR-16, as well as with the alien spike or U91, we did not find significant differences in miR-196a expression between cancer and normal tissues (p > 0.05, Table 4).
MiR-217 inhibits tumor cell growth in vitro and functions as a potential tumor suppressor by influencing the Akt/KRAS signaling pathway; accordingly, miR-217 is frequently down-regulated in PDAC. MiR-217 was down-regulated 10-fold in the study by Szafranska et al., normalized to miR-24 [25]. However, Greither et al. determined only a mean −2-fold decrease with 18S as the internal control [22]. Ma et al. demonstrated a −3.91-fold decrease, using U6 for normalization [34]. On the other hand, Hong et al., who also used U6 as the internal control, found miR-217 down-regulated up to −62.5-fold in PDACs [31]. In our samples, miR-217 expression was significantly down-regulated across all internal controls, with a maximum −24.39-fold decrease with U6 and a minimum −7.19-fold decrease with the U6 + U91 combination (p < 0.01, Table 4).
Thus, for miRNAs with strongly increased or decreased expression, such as miR-21, miR-155 or miR-217, fold-change values may differ several-fold depending on the selected internal control. Moreover, different internal controls can produce conflicting results of miRNA quantification, as demonstrated for miR-96, miR-148a and miR-196a.
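To make explicit why the choice of internal control matters, assume relative quantification by the standard Livak method (the quantification model is not restated in this section, so this is an illustrative assumption rather than a statement of the authors' exact procedure). The fold change (FC) of a target miRNA is then
$\displaystyle \text{FC}=2^{-\Delta\Delta C_{t}},\qquad \Delta\Delta C_{t}=\left(C_{t}^{\text{miR,tumor}}-C_{t}^{\text{ref,tumor}}\right)-\left(C_{t}^{\text{miR,normal}}-C_{t}^{\text{ref,normal}}\right),$
so any tumor-versus-normal shift in the reference-gene Ct enters ΔΔCt one-for-one and rescales the reported fold change by a factor of two per cycle, which is exactly the behavior seen above when switching among U6, miR-16, U91 and the spike.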
Comparing internal controls: U91 is a new endogenous control for microRNA quantification in pancreatic cancer
RT-qPCR quantification of tumor miRNA expression may play an essential role in PDAC diagnostics and in predicting chemotherapy resistance and survival. RT-qPCR-based experiments require endogenous controls to ensure proper normalization, reliability and reproducibility of the results. U6 small nuclear RNA [8–11, 14, 15, 27, 30, 37, 38], the 18S [7] and 5S ribosomal RNAs [8, 15, 39, 40], the small nucleolar RNAs RNU48, RNU43 and RNU44 (commercially available Applied Biosystems assays) [41], and miR-16 [30, 42, 43] have often been used as endogenous controls for evaluating miRNA expression. However, there are data indicating that expression levels of these reference genes may differ significantly between neoplastic and normal tissues [17–19]. This expression instability may introduce bias when determining miRNA dysregulation in tumors. For example, U6 small nuclear RNA has been the most common internal control [8–11, 14, 15, 27, 30, 38] for quantifying miRNA expression in PDAC. However, there are data implying that U6 expression may be unstable in breast and cervical cancers [17, 19, 42]. Also, the amount of U6 may vary significantly in serum samples from patients with breast and colorectal cancers [18, 42]. According to our findings, U6 expression may differ by as much as 8-fold between PDAC and normal pancreatic tissue (Table 8). On the other hand, U6 was determined to be the second most stable gene by the NormFinder algorithm (Table 5). U6 also demonstrated greater expression stability in breast carcinoma tissue samples when compared with the snoRNAs RNU44, RNU48 and RNU43; furthermore, changes in the levels of these snoRNAs correlated with tumor morphology and patient prognosis [41]. However, U6, along with 5S and miR-16, showed considerable expression variability in tissue samples from patients with breast carcinoma [42].
The data on miR-16 expression in serum samples from breast cancer patients are also conflicting. In some studies this miRNA demonstrated significant expression variation [18, 42], whereas analysis with the geNorm algorithm identified miR-16, together with miR-425, as the most stable normalizer [43]. According to our measurements, expression of miR-16 varied significantly in pancreatic carcinomas (p < 0.01, Table 7). In addition, miR-16 was ranked by the NormFinder algorithm as the least stable of the analyzed endogenous controls (Table 5).
Another possibility for normalizing RT-qPCR results is the use of alien (spike-in) RNAs, such as miR-39 from C. elegans [18, 44], as internal controls. Such spike RNAs should be chosen so that identical sequences do not already exist in the human genome. Surprisingly, according to the NormFinder analysis, the artificial spike was the least stable control (stability value 0.085; Table 5). It must be taken into account that the NormFinder algorithm calculates variation between the two groups comprising all normal and all cancer samples, but it cannot evaluate differences between normal and cancer tissues from individual patients; this may explain the apparent "instability" of the alien spike.
In this study, we compared the expression of four internal controls to determine the most stable among them. At first glance, the best internal control is the artificial spike, because its amplification curves and threshold cycles were very similar in cancerous and normal tissues (Table 7). On the other hand, according to the NormFinder analysis, the best normalizer is the combination of U6 and U91: this combination has the best stability value, although the normalization results it produces differ significantly from those obtained with the artificial spike (p < 0.01, Table 6). The most stable single gene determined by NormFinder was U91 (Table 5). Each miRNA normalized relative to the spike or to U91 showed similar expression values; accordingly, the pattern of statistically significant and non-significant differences between tumors and normal tissues was the same for the spike and for U91 (Table 4). The differences between the spike and U91 were statistically insignificant for all miRNAs except miR-217 (Table 6). Among the three endogenous controls, U91 had the lowest average expression values and standard deviation in cancer tissues (Table 8).
Thus, we recommend U91 as a new normalizer of miRNA expression in pancreatic adenocarcinoma.
We found that the expression of traditional endogenous controls, such as U6 and miR-16, can be unstable in pancreatic tumors and may vary several-fold. This instability may introduce bias into miRNA quantification. In contrast, U91 was identified as a new, stable internal control for evaluating miRNA expression in pancreatic cancer.
MIQE guidelines
This study was carried out in compliance with the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines [45].
Availability of data
Data files, including raw Ct values and fold-change tables, are available on request from the corresponding author.
PDAC:
Pancreatic ductal adenocarcinoma
RT-qPCR:
Reverse transcription quantitative polymerase chain reaction
FFPE:
Formalin-fixed, paraffin-embedded (tissues)
Ct:
Cycle threshold
Hidalgo M. Pancreatic cancer. The New England journal of medicine. 2010;362(17):1605–17. doi:10.1056/NEJMra0901557.
Mitchell PS, Parkin RK, Kroh EM, Fritz BR, Wyman SK, Pogosova-Agadjanyan EL, et al. Circulating microRNAs as stable blood-based markers for cancer detection. P Natl Acad Sci USA. 2008;105(30):10513–8. doi:10.1073/pnas.0804549105.
Sassen S, Miska EA, Caldas C. MicroRNA - implications for cancer. Virchows Arch. 2008;452(1):1–10. doi:10.1007/s00428-007-0532-2.
Shenouda SK, Alahari SK. MicroRNA function in cancer: oncogene or a tumor suppressor? Cancer Metast Rev. 2009;28(3–4):369–78. doi:10.1007/s10555-009-9188-5.
Zhang BH, Pan XP, Cobb GP, Anderson TA. microRNAs as oncogenes and tumor suppressors. Dev Biol. 2007;302(1):1–12. doi:10.1016/j.ydbio.2006.08.028.
Lynam-Lennon N, Maher SG, Reynolds JV. The roles of microRNA in cancer and apoptosis. Biol Rev. 2009;84(1):55–71. doi:10.1111/j.1469-185X.2008.00061.x.
Lee EJ, Gusev Y, Jiang JM, Nuovo GJ, Lerner MR, Frankel WL, et al. Expression profiling identifies microRNA signature in pancreatic cancer. Int J Cancer. 2007;120(5):1046–54. doi:10.1002/Ijc.22394.
du Rieu MC, Torrisani J, Selves J, Al Saati T, Souque A, Dufresne M, et al. MicroRNA-21 Is Induced Early in Pancreatic Ductal Adenocarcinoma Precursor Lesions. Clin Chem. 2010;56(4):603–12. doi:10.1373/clinchem.2009.137364.
Moriyama T, Ohuchida K, Mizumoto K, Yu J, Sato N, Nabae T, et al. MicroRNA-21 modulates biological functions of pancreatic cancer cells including their proliferation, invasion, and chemoresistance. Mol Cancer Ther. 2009;8(5):1067–74. doi:10.1158/1535-7163.Mct-08-0592.
Ryu JK, Hong SM, Karikari CA, Hruban RH, Goggins MG, Maitra A. Aberrant MicroRNA-155 Expression Is an Early Event in the Multistep Progression of Pancreatic Adenocarcinoma. Pancreatology. 2010;10(1):66–73. doi:10.1159/000231984.
Habbe N, Koorstra JBM, Mendell JT, Offerhaus GJ, Ryu JK, Feldmann G, et al. MicroRNA miR-155 is a biomarker of early pancreatic neoplasia. Cancer Biol Ther. 2009;8(4):340–6. doi:10.4161/Cbt.8.4.7338.
Johnson SM, Grosshans H, Shingara J, Byrom M, Jarvis R, Cheng A, et al. RAS is regulated by the let-7 MicroRNA family. Cell. 2005;120(5):635–47. doi:10.1016/j.cell.2005.01.014.
Yu SN, Lu ZH, Liu CZ, Meng YX, Ma YH, Zhao WG, et al. miRNA-96 Suppresses KRAS and Functions as a Tumor Suppressor Gene in Pancreatic Cancer. Cancer Res. 2010;70(14):6015–25. doi:10.1158/0008-5472.Can-09-4531.
Zhao WG, Yu SN, Lu ZH, Ma YH, Gu YM, Chen J. The miR-217 microRNA functions as a potential tumor suppressor in pancreatic ductal adenocarcinoma by targeting KRAS. Carcinogenesis. 2010;31(10):1726–33. doi:10.1093/carcin/bgq160.
Hanoun N, Delpu Y, Suriawinata AA, Bournet B, Bureau C, Selves J, et al. The Silencing of MicroRNA 148a Production by DNA Hypermethylation Is an Early Event in Pancreatic Carcinogenesis. Clin Chem. 2010;56(7):1107–18. doi:10.1373/clinchem.2010.144709.
Mardin W, Mees ST. MicroRNAs: Novel Diagnostic and Therapeutic Tools for Pancreatic Ductal Adenocarcinoma? Ann Surg Oncol. 2009;16(11):3183–9. doi:10.1245/s10434-009-0623-1.
Peltier HJ, Latham GJ. Normalization of microRNA expression levels in quantitative RT-PCR assays: Identification of suitable reference RNA targets in normal and cancerous human solid tissues. RNA. 2008;14(5):844–52. doi:10.1261/rna.939908.
Xiang MQ, Zeng Y, Yang RR, Xu HF, Chen Z, Zhong J, et al. U6 is not a suitable endogenous control for the quantification of circulating microRNAs. Biochem Bioph Res Co. 2014;454(1):210–4. doi:10.1016/j.bbrc.2014.10.064.
Hansen CN, Ketabi Z, Rosenstierne MW, Palle C, Boesen HC, Norrild B. Expression of CPEB, GAPDH and U6snRNA in cervical and ovarian tissue during cancer development. Apmis. 2009;117(1):53–9. doi:10.1111/j.1600-0463.2008.00015.x.
Qi LQ, Bart J, Tan LP, Platteel I, van der Sluis T, Huitema S, et al. Expression of miR-21 and its targets (PTEN, PDCD4, TM1) in flat epithelial atypia of the breast in relation to ductal carcinoma in situ and invasive carcinoma. BMC Cancer. 2009;9. doi:10.1186/1471-2407-9-163.
Gironella M, Seux M, Xie MJ, Cano C, Tomasini R, Gommeaux J, et al. Tumor protein 53-induced nuclear protein 1 expression is repressed by miR-155, and its restoration inhibits pancreatic tumor development. Proc Natl Acad Sci U S A. 2007;104(41):16170–5. doi:10.1073/pnas.0703942104.
Greither T, Grochola LF, Udelnow A, Lautenschlager C, Wurl P, Taubert H. Elevated expression of microRNAs 155, 203, 210 and 222 in pancreatic tumors is associated with poorer survival. Int J Cancer. 2010;126(1):73–80. doi:10.1002/ijc.24687.
Szafranska AE, Davison TS, John J, Cannon T, Sipos B, Maghnouj A, et al. MicroRNA expression alterations are linked to tumorigenesis and non-neoplastic processes in pancreatic ductal adenocarcinoma. Oncogene. 2007;26(30):4442–52. doi:10.1038/sj.onc.1210228.
Bloomston M, Frankel WL, Petrocca F, Volinia S, Alder H, Hagan JP, et al. MicroRNA expression patterns to differentiate pancreatic adenocarcinoma from normal pancreas and chronic pancreatitis. JAMA. 2007;297(17):1901–8. doi:10.1001/jama.297.17.1901.
Szafranska AE, Doleshal M, Edmunds HS, Gordon S, Luttges J, Munding JB, et al. Analysis of microRNAs in pancreatic fine-needle aspirates can classify benign and malignant tissues. Clin Chem. 2008;54(10):1716–24. doi:10.1373/clinchem.2008.109603.
Huang F, Tang J, Zhuang X, Zhuang Y, Cheng W, Chen W, et al. MiR-196a promotes pancreatic cancer progression by targeting nuclear factor kappa-B-inhibitor alpha. PloS one. 2014;9(2):e87897. doi:10.1371/journal.pone.0087897.
Zhang Y, Li M, Wang H, Fisher WE, Lin PH, Yao Q, et al. Profiling of 95 microRNAs in pancreatic cancer cell lines and surgical specimens by real-time PCR analysis. World journal of surgery. 2009;33(4):698–709. doi:10.1007/s00268-008-9833-0.
Link A, Becker V, Goel A, Wex T, Malfertheiner P. Feasibility of Fecal MicroRNAs as Novel Biomarkers for Pancreatic Cancer. PloS one. 2012;7(8):e42933. doi:10.1371/journal.pone.0042933.
Jamieson NB, Morran DC, Morton JP, Ali A, Dickson EJ, Carter CR, et al. MicroRNA molecular profiles associated with diagnosis, clinicopathologic criteria, and overall survival in patients with resectable pancreatic ductal adenocarcinoma. Clinical cancer research : an official journal of the American Association for Cancer Research. 2012;18(2):534–45. doi:10.1158/1078-0432.CCR-11-0679.
Wang J, Chen J, Chang P, LeBlanc A, Li D, Abbruzzesse JL, et al. MicroRNAs in plasma of pancreatic ductal adenocarcinoma patients as novel blood-based biomarkers of disease. Cancer prevention research. 2009;2(9):807–13. doi:10.1158/1940-6207.CAPR-09-0094.
Hong TH, Park IY. MicroRNA expression profiling of diagnostic needle aspirates from surgical pancreatic cancer specimens. Annals of surgical treatment and research. 2014;87(6):290–7. doi:10.4174/astr.2014.87.6.290.
Kent OA, Mullendore M, Wentzel EA, Lopez-Romero P, Tan AC, Alvarez H, et al. A resource for analysis of microRNA expression and function in pancreatic ductal adenocarcinoma cells. Cancer Biol Ther. 2009;8(21):2013–24.
Feng J, Yu JB, Pan XL, Li ZL, Chen Z, Zhang WJ, et al. HERG1 functions as an oncogene in pancreatic cancer and is downregulated by miR-96. Oncotarget. 2014;5(14):5832–44.
Ma MZ, Kong X, Weng MZ, Cheng K, Gong W, Quan ZW, et al. Candidate microRNA biomarkers of pancreatic ductal adenocarcinoma: meta-analysis, experimental validation and clinical significance. J Exp Clin Canc Res. 2013;32:71. doi:10.1186/1756-9966-32-71.
Zhang R, Li M, Zang W, Chen X, Wang Y, Li P, et al. MiR-148a regulates the growth and apoptosis in pancreatic cancer by targeting CCKBR and Bcl-2. Tumour biology : the journal of the International Society for Oncodevelopmental Biology and Medicine. 2014;35(1):837–44. doi:10.1007/s13277-013-1115-2.
Schultz NA, Werner J, Willenbrock H, Roslind A, Giese N, Horn T, et al. MicroRNA expression profiles associated with pancreatic adenocarcinoma and ampullary adenocarcinoma. Modern Pathol. 2012;25(12):1609–22. doi:10.1038/modpathol.2012.122.
Wang J, Raimondo M, Guha S, Chen JY, Diao LX, Dong XQ, et al. Circulating microRNAs in Pancreatic Juice as Candidate Biomarkers of Pancreatic Cancer. J Cancer. 2014;5(8):696–705. doi:10.7150/Jca.10094.
Nakata K, Ohuchida K, Mizumoto K, Kayashima T, Ikenaga N, Sakai H, et al. MicroRNA-10b is overexpressed in pancreatic cancer, promotes its invasiveness, and correlates with a poor prognosis. Surgery. 2011;150(5):916–22. doi:10.1016/j.surg.2011.06.017.
Sun M, Estrov Z, Ji Y, Coombes KR, Harris DH, Kurzrock R. Curcumin (diferuloylmethane) alters the expression profiles of microRNAs in human pancreatic cancer cells. Mol Cancer Ther. 2008;7(3):464–73. doi:10.1158/1535-7163.MCT-07-2272.
Takamizawa J, Konishi H, Yanagisawa K, Tomida S, Osada H, Endoh H, et al. Reduced expression of the let-7 microRNAs in human lung cancers in association with shortened postoperative survival. Cancer Res. 2004;64(11):3753–6. doi:10.1158/0008-5472.CAN-04-0637.
Gee HE, Buffa FM, Camps C, Ramachandran A, Leek R, Taylor M, et al. The small-nucleolar RNAs commonly used for microRNA normalisation correlate with tumour pathology and prognosis. Brit J Cancer. 2011;104(7):1168–77. doi:10.1038/sj.bjc.6606076.
Appaiah HN, Goswami CP, Mina LA, Badve S, Sledge GW, Liu YL, et al. Persistent upregulation of U6:SNORD44 small RNA ratio in the serum of breast cancer patients. Breast Cancer Res. 2011;13(5):R86. doi:10.1186/Bcr2943.
McDermott AM, Kerin MJ, Miller N. Identification and Validation of miRNAs as Endogenous Controls for RQ-PCR in Blood Specimens for Breast Cancer Studies. PloS one. 2013;8(12):e83718. doi:10.1371/journal.pone.0083718.
Liu JQ, Gao J, Du YQ, Li ZS, Ren Y, Gu JJ, et al. Combination of plasma microRNAs with serum CA19-9 for early detection of pancreatic cancer. Int J Cancer. 2012;131(3):683–91. doi:10.1002/Ijc.26422.
Bustin SA, Benes V, Garson JA, Hellemans J, Huggett J, Kubista M, et al. The MIQE guidelines: minimum information for publication of quantitative real-time PCR experiments. Clin Chem. 2009;55(4):611–22. doi:10.1373/clinchem.2008.112797.
This work was technically supported by the project OPPK No. CZ.2.16/3.1.00/24024, awarded by European Fund for Regional Development (Prague & EU – We invest for your future) and by the project PRVOUK Oncology P27, awarded by Charles University in Prague.
Department of Pathology, Third Faculty of Medicine, Charles University in Prague and University Hospital Kralovske Vinohrady, Srobarova 50, 100 00, Prague 10, Czech Republic
Alexey Popov, Arpad Szabo & Václav Mandys
Correspondence to Alexey Popov.
AP carried out the molecular genetic studies, performed the statistical analysis and drafted the manuscript. AS provided technical and material support, helped in acquisition of data and drafted the manuscript. VM conceived of the study, and participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.
Popov, A., Szabo, A. & Mandys, V. Small nucleolar RNA U91 is a new internal control for accurate microRNAs quantification in pancreatic cancer. BMC Cancer 15, 774 (2015) doi:10.1186/s12885-015-1785-9
RT-qPCR (reverse transcription quantitative PCR)
Tag Archives: finding
Finding all audit enabled tables and columns for your Common Data Service environment
November 17, 2020 Microsoft Dynamics CRM
As a best practice in maintaining your Common Data Service environments, it is important to review your audit configurations at the table and column levels to ensure you are only auditing what is necessary. This will prevent excessive growth of your audit partitions. If you are wondering what I mean by tables and columns, see this post.
The problem is easily identifying each and every column within every table. Here is how you do it.
Using the webAPI, create the following query:
https://orgname.crm.dynamics.com/api/data/v9.1/EntityDefinitions?$select=LogicalName,IsAuditEnabled&$filter=IsAuditEnabled/Value%20eq%20true&$expand=Attributes($select=LogicalName,IsAuditEnabled;$filter=IsAuditEnabled/Value%20eq%20true)
If you run this in your browser, it will return a list of tables and columns that are enabled for auditing in JSON format. Now, you need an easy way to read and analyze the data.
Let's see how we can do this with Excel.
First, open a new blank workbook in Excel and select the Data tab
Click Get Data | From Online Services | From Dynamics 365 (online)
Copy and paste your webAPI query from above and click OK
Click Transform Data
Click the expand button next to Attributes
Uncheck IsAuditEnabled and click OK
Finally, click Close & Load
You can do the same thing in Power BI. Check out below:
Open a new Power BI report and select Get Data and click More
Then, select Online Services and Dynamics 365 (online)
Click Connect and paste your webAPI query into the window
Click the expansion next to Attributes
Uncheck IsAuditEnabled
Click Close & Apply
Once it refreshes, you will have a full list of tables and columns for your review.
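If you prefer scripting to Excel or Power BI, a minimal Python sketch along the same lines is shown below. Treat it as an illustration rather than part of the original walkthrough: it assumes you already have an OAuth access token for your environment (token acquisition is not shown), and orgname plus the token placeholder are hypothetical values you must replace.

import requests
import pandas as pd

# Same Web API query as above: tables with auditing enabled, expanded to their audited columns
url = ("https://orgname.crm.dynamics.com/api/data/v9.1/EntityDefinitions"
       "?$select=LogicalName,IsAuditEnabled"
       "&$filter=IsAuditEnabled/Value%20eq%20true"
       "&$expand=Attributes($select=LogicalName,IsAuditEnabled;"
       "$filter=IsAuditEnabled/Value%20eq%20true)")

headers = {
    "Authorization": "Bearer <access_token>",  # hypothetical placeholder
    "Accept": "application/json",
}

response = requests.get(url, headers=headers)
response.raise_for_status()

# Flatten the nested Attributes array into one row per table/column pair
rows = []
for table in response.json().get("value", []):
    for column in table.get("Attributes", []):
        rows.append({"table": table["LogicalName"], "column": column["LogicalName"]})

audit_df = pd.DataFrame(rows)
print(audit_df)

The resulting data frame mirrors the table you get from the Excel or Power BI steps above, so you can review or export it however you prefer.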
Aaron Richards
Dynamics 365 Customer Engagement in the Field
Finding a monotonic polynomial over an interval
January 2, 2020 BI News and Info
I want to find a polynomial of specified degree $d$ defining the function $f:[0,1]\to[0,1]$ satisfying $f(0)=0,f(1)=1$ and $f$ is monotonically increasing over the interval. This seems like an easy enough task, especially since there's the trivial solution $f(x)=x^d$, and although my code seems to work decently well for $d=2,3$, it takes around a minute on my machine to produce an answer for $d=4$ and it couldn't get anything in the time I set it running for $d=5$. I'm looking to test this for values of $d$ up to $10^3$ and find multiple instances for $f$, so this is certainly not going to work. What am I doing wrong and how can I make this efficient?
Here is my code:
deg = 4;
f[x_] := Sum[Subscript[c, k] x^k, {k, 1, deg}];
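(* f[0] == 0 holds automatically because the sum starts at k = 1; FindInstance below searches for coefficients c_k with f[1] == 1 and f' >= 0 on [0, 1] *)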
FindInstance[{
f[1] == 1,
ForAll[x, 0 <= x <= 1, f'[x] >= 0]
}, Table[Subscript[c, i], {i, 1, deg}], Reals]
Recent Questions – Mathematica Stack Exchange
A super-fast machine learning model for finding user search intent
November 30, 2019 Big Data
In April 2019, Benjamin Burkholder (who is awesome, by the way) published a Medium article showing off a script he wrote that uses SERP result features to infer a user's search intent. The script uses the SerpAPI.com API for its data and labels search queries in the following way:
Informational — The person is looking for more information on a topic. This is indicated by whether an answer box or PAA (people also ask) boxes are present.
Navigational — The person is searching for a specific website. This is indicated by whether a knowledge graph is present or if site links are present.
Transactional — The person is aiming to purchase something. This is indicated by whether shopping ads are present.
Commercial Investigation — The person is aiming to make a purchase soon but is still researching. This is indicated by whether paid ads are present, an answer box is present, PAAs are present, or if there are ads present at the bottom of the SERP.
This is one of the coolest ways to estimate search intent, because it uses Google's understanding of search intent (as expressed by the SERP features shown for that search).
The one problem with Burkholder's approach is its reliance on the Serp API. If you have a large set of search queries you want to find intent for, you need to pass each query phrase through the API, which then actually does the search and returns the SERP feature results, which Burkholder's script can then classify. So on a large set of search queries, this is time consuming and prohibitively expensive.
SerpAPI charges ~$0.01 per keyword, so analyzing 5,000 keywords will cost you $50. Running these results through Burkholder's labeler script also takes 3 to 5 hours to get through these 5,000 keywords.
So I got to thinking: What if I adapted Burkholder's approach so that, rather than use it to classify intent directly, I could use it to train a machine learning model that I would then use to classify intent? In other words, I'd incur one-time costs to produce my Burkholder-labeled training set, and, assuming it was accurate enough, I could then use that training set for all further classification, cost free.
With an accurate training set, anyone could label huge numbers of keywords super quickly, without spending a dime.
Finding a model
Hamlet Batista has written a few stellar posts about how to leverage Natural Language models like BERT for labeling intent.
In his posts, he uses an existing intent labeling model that returns categories from Kaggle's Question Answering Dataset. While these labels can be useful, they are not really "intent categories" in line with what we typically think of for intent taxonomy categories and instead have labels such as Description, Entity, Human, Numeric, and Location.
He achieved excellent results by training a BERT encoder, getting near 90% accuracy in predicting labels for new/unlabeled search keywords.
The big question for me was, could I leverage the same tech (Uber's Ludwig BERT encoder) to create an accurate model using the search intent labels I'd get from Burkholder's code?
It turns out the answer is yes!
Here's how the process works:
1. Gather your list of keywords. If you're planning on training your own model, I recommend doing so within a specific category/niche. Training on clothing-related keywords and then using that model to label finance related keywords will likely be significantly less accurate than training on clothing related keywords and then using that model to label other unlabeled clothing related keywords. That said, I did try using a model labeled on one category/niche to label another, and the results still seemed quite good to me.
2. Run Burkholder's script over your list of keywords from Step 1. This will require signing up for SerpAPI.com and buying credits. I recommend getting labels for at least 10,000 search queries with this script to use for training. The more training data, the more accurate your model will likely be.
3. Use the labeled data from the previous step as your training data for the BERT model. Batista's code to do this is very straightforward, and this article will guide you through the process (a minimal sketch of preparing that training file follows this list). I was able to get roughly 72% accuracy using about 10,000 labels of training data.
4. Use your model from Step 3 to label unlabeled search data, and then take a look at your results!
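As a rough illustration of the data handoff between Steps 2 and 3, the sketch below merges the SERP-based labels into a two-column training file (query text plus intent label), which is the general shape a text-classification setup such as Ludwig's BERT encoder consumes. The file and column names here are hypothetical and are not taken from Burkholder's or Batista's code.

import pandas as pd

# Hypothetical inputs: the raw keyword list and the labels produced in Step 2
keywords = pd.read_csv("keywords.csv")             # column: "query"
labels = pd.read_csv("serp_intent_labels.csv")     # columns: "query", "intent"

# Join, drop anything the labeler could not classify, and write the training set
training = keywords.merge(labels, on="query", how="inner").dropna(subset=["intent"])
training = training[["query", "intent"]].drop_duplicates()
training.to_csv("intent_training.csv", index=False)

print(training["intent"].value_counts())  # quick sanity check of class balance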
I ran through this process using a huge list (13,000 keywords) of clothing/fashion-related search terms from SEMrush as my training data. My resulting model gets just about 80% accuracy.
It seems likely that training the model with more data will continue to improve its accuracy up to a point. If any of you attempt it and improve on 80% accuracy, I would love to hear about it. I think with 20,000+ labeled searches, we could see up to maybe 85-90% accuracy.
This means when you ask this model to predict the intent of unlabeled search queries, 8 times out of 10 it will give you the same label as what would have been returned by Burkholder's Serp API rules-based classifier. It can also do this for free, in large volumes and incredibly fast.
So something that would have taken a few thousand dollars and days of scraping can now be done for free in just minutes.
In my case I used keywords from a related domain (makeup) instead of clothing keywords, and overall I think it did a pretty good job. Labeling 5,000 search queries took under two minutes with the BERT model. Here's what my results looked like:
For SEO tools to be useful, they need to be scalable. Keyword research, content strategy, PPC strategy, and SEO strategy usually rely on being able to do analysis across entire niches/themes/topics/websites.
In many industries, the keyword longtails can extend into the millions. So a faster, more affordable approach to Burkholder's solution can make a lot of difference.
I foresee AI and machine learning tools being used more and more in our industry, enabling SEOs, paid search specialists, and content marketers to gain superpowers that haven't been possible before these new AI breakthroughs.
Happy analyzing!
Kristin Tynski is a founder and the SVP of Creative at Fractl, a boutique growth agency based in Delray Beach, FL.
Big Data – VentureBeat
Compare two plots by finding the minimum distance among points
September 23, 2019 BI News and Info
I have a question about comparing the points within two plots.
I would like to compare two plots and find the minimum distance among their points, in order to find the nearest/common points (i.e., those with minimum or zero distance) and plot them overlapping.
What I did was extract the coordinates of their respective points, but I do not know how to compare them and/or the two plots. I used the following line of code, but the result is completely different from what I am looking for.
Outer[EuclideanDistance, seq1, seq2, 1] // Flatten
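(* Outer computes every pairwise EuclideanDistance between points of seq1 and seq2; Flatten then discards which pair each distance belongs to, so the nearest pairs can no longer be identified from the output *)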
The result should show the points that are (almost) common to the two plots.
Could you please help me?
Restaurant Interviewing: Strategies for Finding the Right People
August 10, 2019 NetSuite
Posted by Brady Thomason, NetSuite Solution Manager, Restaurant & Hospitality
To say that eating out has become an important part of American culture is putting it lightly. According to the National Restaurant Association (NRA), Americans spend just over half (51%) of their food dollars in restaurants.
That's more than double the share spent on eating out in 1955, and it's not a trend that's slowing down. The NRA further predicts that restaurants will employ 16.9 million Americans by 2029, up from 15.3 million today.
In other words, restaurants are hiring more people than ever. Add in the industry's eye-popping annual employee turnover rate, which the U.S. Bureau of Labor Statistics pegged at 73% in 2016, and that equals a lot of interviews.
It also means that hiring the right people — those who can handle stress, represent your restaurant, and, perhaps most importantly, stick around a while — can reduce what is becoming a huge interviewing burden.
In fact, having a toolbox of interviewing best practices can help restaurant managers better manage their time, wade through candidates more efficiently, and, ultimately, run a more successful business. Drive this point home with your hiring team: we're in a people business, and just happen to serve food.
Efficient Interviewing Strategies
Many restaurant positions are entry-level roles, which creates many challenges with finding the right people. Developing a clear interview strategy is imperative to quickly filter candidates and build your "A-team."
What's more, finding convenient times to interview candidates can be a challenge in a stressful, fast-moving restaurant environment. For these reasons, alternate interview approaches can prove effective for working through the growing number of candidates.
Some restaurants might consider using open interviews, in which a restaurant simply announces it will be interviewing during a given window of time, and anyone who wants to apply can just show up. Open interviews enable restaurant managers to interview a lot of candidates in a short time, without the need to coordinate appointments for days on end.
Another option is a group interview, in which a restaurant calls in several candidates and interviews them simultaneously, sometimes even having them perform some kind of collaborative task. This is a great way to see how people work under pressure and in team situations.
With both approaches, it's best to conduct interviews either when the restaurant is closed or during gaps between rushes.
Most often, however, restaurants interview candidates individually, in which case the setting becomes a more important consideration. No restaurant wants to take up a table during peak hours for interviews, so if it's necessary to conduct interviews while things are hopping, it's helpful to identify a quiet place—such as a banquet room, an office or some other private space—where interviews can be conducted in confidence.
Ask Important Questions
Most interviews eventually come down to questions and answers. And whether a restaurant is hiring servers, kitchen staff or even managers, having good interview templates with pre-established questions by role can prove effective in not only simplifying the interviewer's job, but also in ensuring they're asking appropriate questions from a legal standpoint (check state and local laws) and avoiding discrimination.
Questions restaurant managers should ask prospective employees:
-Why do you want to work in the food service industry? (If a candidate doesn't know how to answer this, it's a red flag.)
-What do you know about our restaurant, and what makes you a good fit? (This is a great way to see if a candidate has taken the time to prepare.)
-What does the word "hospitality" mean to you? (A simple question that's at the heart of good food service.)
-Can you provide an example of how you handle unhappy guests? (No restaurant gets it all right. How employees handle problematic scenarios could determine whether diners become — or remain — regulars.)
-How would you handle conflict with co-workers? (Restaurants are charged environments where paths cross constantly. There's a significant possibility of conflict on any given day.)
-What were the best/worst restaurant experiences you've had, and why? (Candidates will reveal numerous strengths and weaknesses in answering this.)
-What are (or will be) your favorite/least favorite parts about working in restaurants? (This can tell a restaurant much about whether a specific candidate will fit into its culture.)
-How do you deal with high-pressure situations? (Candidates who can't deal with pressure probably aren't the best fit for restaurant work.)
-What does it mean to you to be part of a team? (Running a successful restaurant is a team sport.)
-How do you handle occasions when life makes it difficult to show up to work on time? (Hint: Candidates who don't have an answer for this are much more likely to have punctuality issues.)
Homing in on the Gems
Once an interviewer's gut feeling says that a candidate has potential, additional questions can help to validate whether the person is a good hire.
For instance, the interviewer might want to ask potential servers how they would handle a guest with a coupon that's either expired or has unseen limitations (such as not being able to be used with another offer). For a kitchen position, it might make sense to ask how a candidate might deal with a server who's bringing a lot of guest issues to the kitchen's attention during a rush.
It's not the actual answers to such questions that matter as much as how candidates handle unexpected questions that put pressure on them to consider many factors — a parallel for what it's like working in a busy restaurant during the rush.
Similarly, interviewers also can find out more about how candidates perform under pressure by asking them to prepare a basic dish or role-play certain steps of the guest service experience.
In the end, the goal is for restaurants to hire the best employees who are most likely to stay the longest. If a restaurant's interviewing best practices aren't carefully considered to improve the chances of hiring the right people, it'll have devastating consequences on the entire business.
The NetSuite Blog
Finding Coefficients of the Product of Sums
May 3, 2019 BI News and Info
Is there any way to get Mathematica to find the coefficients of the product of sums? As an example (the problem I am trying to solve): coefficients for a Taylor expansion of $e^{z^2}$ centered around $z=1$. We can write this as:
$e^{z^2} = e^{(z-1)^2+2(z-1)+1} = \left(\sum_{n=0}^\infty \frac{(z-1)^{2n}}{n!}\right)\left(\sum_{n=0}^\infty \frac{2^n(z-1)^{n}}{n!}\right)e$. I would like to find the coefficients of the above expression written as just one sum over $n$.
Some analysis led me to find that we can write the above as $\displaystyle\sum_{n=0}^\infty \sum_{0 \leq 2m \leq n} \frac{e\,2^{n-2m}}{m!\,(n-2m)!}(z-1)^n$, which is of the form I wanted. Is there any way for Mathematica to have found this?
Finding the Right B2C E-Commerce Suite for Your Business
February 17, 2019 CRM News and Info
By John P. Mello Jr.
Feb 16, 2019 5:00 AM PT
This story was originally published on the E-Commerce Times on Nov. 27, 2018, and is brought to you today as part of our Best of ECT News series.
Digital transformation has become a prime focus for retailers these days. In order to grow, brick-and-mortar stores realize they must use their digital touchpoints to enhance their customers' in-store experiences.
Online retailers recognize they need to separate themselves from the pack through faster and more informative shopping experiences. And omnichannel sellers and brands are aware they need to provide their customers with a seamless, cross-channel experience.
To meet the exigencies of digital transformation, retailers have been turning to business-to-consumer (B2C) commerce suites to automate their merchandising, streamline operations, and boost the impact of their business teams on the experience of their customers.
However, choosing such a platform can be difficult.
"It's a very competitive space. Differentiation is challenging," said Thad Peterson, a senior analyst with the
Aite Group, a global research and advisory firm based in Boston.
"It's a maturing market, but some aspects of it are growing faster than the market as a whole," he told the E-Commerce Times. "The home mobile side is growing more quickly — and in the developing world, it's growing much, much more quickly."
Level of Involvement
How much the business should be involved in the technical implementation and operation of the platform is one of the first questions a commerce platform shopper should consider, Gartner analyst Mike Lowndes suggested in a research note on digital commerce platform architecture. Gartner, a research and advisory company, is based in Stamford, Connecticut.
"If the business will be less involved, then a more packaged or single-vendor solution may be appropriate," he wrote. "However, if IT organizations are to be involved in more than governance, leaders need to understand the high-level approaches available to make the best decision for the business."
When examining a new digital commerce venture or replacing a legacy platform, organizations often search first for a digital commerce platform vendor before considering the impact of the vendor's product architecture on their business and future needs, Lowndes wrote.
"Alternatively, this decision is placed in the hands of a development partner or system integrator on behalf of the business," he added, "sometimes with unforeseen consequences to flexibility, future-proofing and fit for purpose."
One Size Doesn't Fit All
When shopping for a B2C e-commerce suite, the size of the purchaser is an important consideration.
"If you're a sophisticated e-commerce provider with an IT group that does a lot of your Web development, then you don't need a turnkey solution. You just need good cloud-based functionality and a good, secure platform that's flexible so you can do what you need to do," Aite Group's Peterson explained.
"If you're a small organization, you may need Web services, Web design and a lot of other things," he continued. "If you're smaller, you're ceding more control to the provider. If you're larger, you're keeping more control to yourself."
A recent Forrester Wave report on B2C commerce suites notes, for example, that SAP Commerce Cloud is a suite suitable only for organizations with deep technical skills or a strong partnership with a system integrator.
"SAP commerce cloud is a best ft for companies looking for an industrial-strength full-function commerce platform in wide use across several industry verticals," the report points out.
IBM's Watson Commerce is another suite that requires technical chops to deploy, and IBM is also in the process of modernizing the solution's architecture.
"IBM is a good fit for large enterprises with the budget, resources, and willingness to bet on the company's ability to execute on its modern platform vision. Less mature organizations will likely find this suite too cumbersome," Forrester concludes.
Important Considerations
At the start of the shopping process for a B2C suite, an organization has to evaluate what it sells. Is my product complex or is it simple?
"There are solutions that are better for selling simple products than complex products," said Gartner Vice President Penny Gillespie.
Where a product is being sold is another important consideration.
"Some platforms do better selling locally and regionally than globally," Gillespie told the E-Commerce Times.
For example, Forrester's report notes that Digital River's commerce suite is a good fit for companies looking to expand globally and to outsource the transactional overhead of doing business internationally.
Forrester offers three general recommendations when shopping for a B2C suite:
Make sure the suite contains the core set of features that drive a customer's online experience — including search, personalization and promotions, and the analytics to tie those three elements together. The ability to target content and products with consumer incentives across the consumer's shopping journey is essential to giving the consumer a differentiated shopping experience.
Make sure the suite is agile enough to give business users the tools they need to rapidly change content. Business users need a 360-degree view of their customers, along with a promotions and campaign engine they can control, so they can attract customers and induce them to make purchases.
Make sure the suite incorporates operational efficiencies that reduce costs and provide an upgrade process that requires little regression testing and no recoding. A containerized approach to upgrades that manages versions and automates scaling is critical to simplifying the upgrade cycle, as is the use of an abstracted API layer to isolate the commerce runtime from store customizations.
To Transform or to Optimize?
When purchasing a B2C suite, a buyer should understand the difference between a solution that's going to optimize an organization's performance and one that can transform it. A transformational solution is one prepared to deal with the future of e-commerce.
Just a scant six years ago, optimization was the driving force behind digital commerce investments, but that isn't the case anymore, Gartner research shows.
"In 2012, customers were investing in digital commerce for cost savings. In 2017, it was about transformation and delivering great customer experiences," Gillespie said.
"When I think about delivering a great customer experience, I think about delivering a personalized customer experience," she continued. "And when I think about delivering a personalized customer experience, I think of content being relevant, my process being easy and seamless, and content that resonates with me."
One characteristic of transformation is putting commerce in context. For example, the app for a furniture store will be able to show a consumer how a piece of furniture will look in a home, or a clothing store app will display how an item will look on the consumer.
Another characteristic is shifting from being reactive to a consumer's wants to being proactive or anticipatory of them.
"Today, 99.99 percent of all transactions are initiated by customers," Gillespie explained. "In the future, we're going to see more and more transactions by merchants or suppliers based on what they know about the customer."
Draw a Road Map
Commerce platform shoppers should create a road map for digital commerce and manage technologies based on the digital commerce technology ecosystem, Gillespie recommends.
"This will lead to a complete digital commerce solution, maximizing the value of both the commerce platform and the corresponding digital commerce ecosystem applications," she said. "Organizations underestimate the requirements of a digital commerce solution. As a result, they deploy incomplete solutions that impede their journey to success."
It's important to scrutinize an IT vendor's digital commerce platforms to ensure they match the road map and requirements, Gillespie advised, and to identify requirements delivered natively. "This can reduce unplanned spending on additional technology and lower integration costs."
Develop a Short List
Commerce suites need to provide customers with more than just access to a company's goods, observed Hayward, California-based Charles King, a principal analyst with Pund-IT, a technology advisory firm.
"It also needs to highlight and reinforce a company's brand and go-to-market strategy," he told the E-Commerce Times.
"Customization, search, personalization and support for company promotions are all critical parts of that process," King added. "I'd also add that mobile transaction support and optimization is a critical issue for many, if not most, retailers — especially those in consumer markets."
After performing its initial analysis, suite shoppers will need to create short list of prospects. When making that list, "first and foremost, invest time and effort in determining what your own organization hopes to accomplish with online commerce, along with developing realistic budgets and timelines," King recommended.
"Then take a long and close look at primary vendors, along with whatever strategic partners — banks, hosted service providers, designers and such — are working behind the scenes," he continued. "That includes examining a vendor's existing sites, and arranging discussions with those customers."
If a suite shopper operates in a particular industry, platform providers that focus on that industry should be good candidates for a short list of finalists.
"You need to understand your vertical," said Aite Group's Peterson, "and identify players with expertise in that vertical, so you don't have to explain to them what you're doing or adapt their technology to what you're doing."
Keep Your Eyes on the Target
As a shopper works down the list of candidates for a suite deployment, sales pitches can start to fog the shopper's focus, but it's crucial to keep what needs to be accomplished front and center, King advised.
However, "companies also need to determine how flexible or willing to compromise they can be on specific points," he added.
"Realistically, it will be difficult to find a commerce vendor that's a perfect 100 percent fit for your situation," King continued, "but considering and remaining focused on your organization's core requirements will go a long way to determining which B2C partners are worthy of serious consideration."
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
Finding position of the maximum value of each subset
January 26, 2018 BI News and Info
I have the following set:
list = {{32/39, 1/5, 0, 0, 0}, {5/33, 3/5, 1/3, 0, 3/4}};
I need to find the position of the maximum value in each subset.
Position[list, Max[list]]
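(* Max[list] is the single global maximum (32/39 here), so Position reports only where that one value occurs *)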
and it gives the position {{1,1}}. But my result should be {{1,1}, {2,5}}.
Finding Meaning At Work: Why Am I Doing This Job?
January 14, 2017 SAP
Everyone wants to find meaning at work, but many don't, as recent research shows. But why does meaning matter, and what are its sources?
Nearly nine out of 10 employees in organizations worldwide don't perceive their daily work as meaningful, a recent study has shown. That's an alarming number, considering that the same research identifies meaning as a "root cause of innovation and corporate performance." But when do people feel that their work is meaningful, and how can organizations and leaders help to create meaning?
Those were some of the questions addressed by renowned experts at the Future of Leadership Conference 2016 at the end of November. The conference was hosted by the non-profit Future of Leadership Initiative (FLI), which is dedicated to investigating modern leadership culture.
Luxury or business factor?
The FLI researchers surveyed people in 140 countries and found some astonishing results.
Organizations whose employees see their work as meaningful are around 21% more profitable. Their employees are more engaged and more persistent.
For 58% of employees – especially from the younger generation – a meaningful job is even more important than a high salary, the study reveals. Stefan Ries, chief human resources officer and member of the SAP Executive Board, knows this from personal experience: "Young people entering the job market ask us about meaning straight out and their choice of employer hinges on the answer."
So it's all the more alarming that 87% of the employees in the FLI survey don't perceive their work as meaningful.
Sources of meaning at work
As the research verifies, people experience their work as meaningful when they feel they're making an impact.
Giving employees autonomy also creates a sense of meaning. Dr. N. S. Rajan, former chief human resources officer and member of the group executive council of Tata Sons and author of the book Happiness at Work, explains: "It is very important for someone to have a meaningful say in what he or she does. When you have the empowerment and the autonomy to do it the way you best can do it, it makes you feel that you have really contributed."
Ries agrees: "We have to say goodbye to traditional hierarchical leadership models. A manager needs to be more of a coach who occasionally makes you get out of your comfort zone. This is the only way to create innovation."
A common understanding of values and goals is also critical. The more your own values tally with the company's values, the more meaningful your job will seem. That's why it's crucial "to create a common understanding of the company's strategy and vision, and to demonstrate how are you going to live this vision so that employees can see how it connects with their everyday work," Ries continues.
But it's not only about what you do, but who you do it with. An environment that fosters relationship building and an atmosphere of appreciation and trust creates a sense of belonging, which, according to Dr. Rajan, is another key factor for a fulfilling job.
Corporate responsibility in the digital age
What are meaningful corporate goals in an age where digitization is turning the world of work upside down and the exploitation of nature and the environment is advancing at an alarming pace?
John Elkington is a world authority on corporate responsibility and, in the 1990s, coined the term "triple bottom line." Elkington, who is currently head of Project Breakthrough, a joint initiative with United Nations Global Compact, believes that the next 10 to 15 years are going to be among the most dangerous and high-risk that our species has gone through. At the same time, "if we can work out what we want to do, be very clear, engage the wider world, we can make it further and faster than we possibly imagine," he explains.
But how can we tap the opportunities? According to Elkington, there is no time left for incremental changes. Instead, he urges exponential change, a radical mindset shift, and new business models that combine sustainability with profitability. In his opinion, the United Nations' sustainability goals provide the framework for this.
This framework is also something that young people tend to subscribe to, he argues. "The global goals are like a purchase order from the future – as though the world of the 2030s is trying to reach back into today's world to say, these are some of things we need," he explains.
So it may be that this call from the future can also generate a sense of purpose. "Meaning is what I wish to be," says Dr. Rajan. "That direction gives me a sense of purpose. That is true for organizations also."
Profit or social engagement?
Both! In April 2016, SAP employee Irina Pashina took part in the social sabbatical program, where selected employees are given the opportunity to work in social enterprises and non-profit organizations in emerging markets and help solve specific business problems there.
Pashina worked at Arunodhaya, an Indian organization that strives to combat child labor and child poverty. She found three factors particularly motivating: to aim for a higher goal than meeting profit and quarterly targets, to sense the direct and tangible impact of her work, and to work independently and on her own initiative.
Pashina doesn't believe economic success and social engagement are mutually exclusive: "By helping SAP to be successful, I can also contribute in a small way to making the world a better place."
Today's employees need flexibility to thrive. Learn How to Design a Flexible, Connected Workspace.
Article published by Andrea Diederichs. It originally appeared on SAP News Center and has been republished with permission.